Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

July 22, 2019

CoderDojo

Volunteering as a mentor at CoderDojo to teach young people, including my own kids, how to write software.

Last week, I published an opinion piece on CNN featuring my thoughts on what is wrong with the web and how we might fix it.

In short, I really miss some things about the original web, and don't want my kids to grow up being exploited by mega-corporations.

I am hopeful that increased regulation and decentralized web applications may fix some of the web's current problems. While some problems are really difficult to fix, at the very least, my kids will have more options to choose from when it comes to their data privacy and overall experience on the web.

You can read the first few paragraphs below, and view the whole article on CNN.

I still remember the feeling in the year 2000 when a group of five friends and I shared a modem connection at the University of Antwerp. I used it to create an online message board so we could chat back and forth about mostly mundane things. The modem was slow by today's standards, but the newness of it all was an adrenaline rush. Little did I know that message board would change my life.

In time, I turned this internal message board into a public news and discussion site, where I shared my own experiences using experimental web technologies. Soon, I started hearing from people all over the world who wanted to offer suggestions on how to improve my website, but who also wanted to use my site's technology to build their own websites and experiment with emerging web technologies.

Before long, I was connected to a network of strangers who would help me build Drupal.

July 19, 2019

July 17, 2019

Being wary of all things tracking by Google & Facebook, whose products I love but whose data-capturing practices I hate, for the last 4 years or so I have always logged into these in "Private browsing" sessions in Firefox (because why trust the world's biggest advertising platform with your privacy, right?)

Now I just "discovered" that the Mozilla team has rendered that somewhat clumsy procedure (which required me to log in each time I restarted my computer or browser) redundant with their "Firefox Multi-Account Containers" add-on, allowing you to contain entire sessions to one (or more) tabs:


So now I have one browser window with a couple of tabs in the Google container, one tab in a Facebook container and all others in the “default” container where Google & Facebook can’t track me (fingerprinting aside, but there’s an option for that).

July 16, 2019

It's been quiet on my blog but for good reason: I got married!

We had an amazing two-day wedding in the heart of Tuscany. The wedding took place in a renovated Italian villa from the 11th century, surrounded by vineyards and olive groves. A magical place to celebrate with family and friends who flew in from all over the world.

Many people emailed and texted asking for some wedding photos. It will take our wedding photographer a few months to deliver the photos, but they shared some preview photos today.

The photos capture the love, energy and picturesque location of our wedding quite well!

Vanessa looking through the window
Dries hugging the boys prior to the wedding ceremony
Dries and Vanessa walking hand-in-hand down the aisle after the ceremony
Outdoor dinner tables with chandeliers hanging from the trees
Dinner table setting with lemons and rosemary
Vanessa and Dries kissing leaning on a vintage car
Vanessa and Dries walking through a vineyard holding hands

July 15, 2019

I migrated my blog from Octopress to Jekyll. The primary reason is that Octopress isn't maintained anymore. I'm sure its great theme will live on in a lot of projects.

I like static website generators: they let you create nice websites without running any code on the remote server. Anything that runs code can be cracked, so a static website limits the attack vectors. Of course, you still need to protect the upload of the website and the system(s) that host your site.

Octopress was/is based on Jekyll, so Jekyll seemed the logical choice for my next website generator. My blog posts are written in Markdown, which makes it easier to migrate to a new site generator.

There are a lot of Jekyll themes available. I'm not the greatest website designer, so after reviewing a few themes I went with the Minimal Mistakes theme. It has a nice layout and is very well documented.

The migration was straightforward… as simple as copying the blog posts' Markdown files to the new location. Well, kind of: there were a few pitfalls.

  • post layout

Minimal Mistakes doesn't have a post layout; it has a single layout that is recommended for posts. But all my post Markdown files had the layout: post directive set. I could have removed this directive from all my blog posts, but instead I created a soft link to get around the issue.
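The workaround can be a single symlink in the site's `_layouts` directory. This is a minimal sketch, assuming the theme's `single.html` layout file is available locally (how the layout files are shipped depends on how Minimal Mistakes is installed):

```shell
# From the site root: map the old "layout: post" front matter onto the
# theme's "single" layout by linking the layout file instead of editing
# every post (assumes single.html exists in _layouts/).
cd _layouts && ln -s single.html post.html
```

With the link in place, Jekyll resolves `layout: post` to the `single` layout without touching any front matter.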

  • images

Images, or rather the image location, are a bit of a problem in Markdown. I was using the custom Octopress img tag, which made it easy to get the correct image path: I didn't need to care about absolute and relative paths. In Jekyll and Octopress, the baseurl is set in the site configuration file _config.yml, and the custom img tag resolved paths by adding the baseurl to the image path automatically.

Without that tag, this can be resolved with a Liquid url filter:

{{ '/images/opnsense_out_of_inodes.jpg' | remove_first:'/' | absolute_url }}

So I created a few sed scripts to transform the Octopress img tags.
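As a sketch (the file name is hypothetical, and real Octopress `img` tags often carry extra class and alt-text arguments that need additional patterns), such a sed rewrite could look like:

```shell
# Rewrite "{% img /images/foo.png %}" into plain Markdown image syntax:
# {% img /images/foo.png %}  ->  ![](/images/foo.png)
sed -E 's/\{% img ([^ %]+) %\}/![](\1)/g' example-post.md
```

The captured path can then be wrapped in the Liquid filter chain shown above in a second pass.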

  • youtube

I have a few links to YouTube videos and was using a custom plugin for this. I replaced it with plain Markdown code.

[![jenkins build](thumbnail-url)](video-url)

With the custom tags removed and a few customizations, my new blog site was ready. Although I still spent a few hours on it…

Have fun


July 11, 2019

Last weekend, I sat down to learn a bit more about Angular, a TypeScript-based programming environment for rich client web apps. According to their website, "TypeScript is a typed superset of JavaScript that compiles to plain JavaScript", which makes the programming environment slightly easier to work with. Additionally, since TypeScript compiles to whatever subset of JavaScript you want to target, it compiles to something that should work on almost every browser (and if it doesn't, in most cases the fix is just to tweak the compatibility settings a bit).

Since I think learning about a new environment is best done by actually writing a project that uses it, and since I think it was something we could really use, I wrote a video player for the DebConf video team. It makes use of the metadata archive that Stefano Rivera has been working on the last few years (or so). It's not quite ready yet (notably, I need to add routing so you can deep-link to a particular video), but I think it's gotten to a state where it is useful for more general consumption.

We'll see where this gets us...

July 10, 2019

The post Trigger an on demand uptime & broken links check after a deploy with the Oh Dear! API appeared first on

You can use our API to trigger an on-demand run of both the uptime check and the broken links checker. If you add this to, say, your deploy script, you can have near-instant validation that your deploy succeeded and didn't break any links or pages.

Source: Trigger an on-demand uptime & broken links check after a deploy -- Oh Dear! blog
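A deploy-script fragment along these lines could do it. This is a hypothetical sketch: the endpoint path, the site ID and the token variable name are placeholders, not documented values, so check the Oh Dear! API documentation for the real route:

```shell
# Hypothetical deploy hook: ask Oh Dear! to re-run the checks right away.
# OHDEAR_API_TOKEN and SITE_ID are placeholders taken from your account.
curl --fail -X POST \
  -H "Authorization: Bearer ${OHDEAR_API_TOKEN}" \
  "https://ohdear.app/api/sites/${SITE_ID}/run-checks"
```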


July 09, 2019

The post There’s more than one way to write an IP address appeared first on

Most of us write our IP addresses the way we've been taught, a long time ago: four numbers separated by dots. But that gets boring after a while, doesn't it?

Luckily, there are a couple of other ways to write an IP address, so you can mess with coworkers and clients, or use them to bypass certain (input) filters.

Not all behaviour is equal

I first learned about the different ways of writing an IP address by this little trick.

On Linux:

$ ping 0
PING 0 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.037 ms

This translates the 0 to 127.0.0.1. However, on a Mac:

$ ping 0
PING 0 (0.0.0.0): 56 data bytes
ping: sendto: No route to host
ping: sendto: No route to host

Here, it translates 0 to 0.0.0.0, which gets null-routed.

Zeroes are optional

Just like in IPv6 addresses, some zeroes (0) are optional in the IP address.

$ ping 127.1
PING 127.1 (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.033 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms

Note though, a computer can't just "guess" where it needs to fill in the zeroes. Take this one for example:

$ ping 10.50.1
PING 10.50.1 (10.50.0.1): 56 data bytes
Request timeout for icmp_seq 0

It translates 10.50.1 to 10.50.0.1, adding the necessary zero before the last digit.

Overflowing the IP address

Here's another neat trick. You can overflow a digit.

For instance:

$ ping 10.0.513
PING 10.0.513 (10.0.2.1): 56 data bytes
64 bytes from icmp_seq=0 ttl=61 time=10.189 ms
64 bytes from icmp_seq=1 ttl=61 time=58.119 ms

We ping 10.0.513, which translates to 10.0.2.1. The last part, 513, can be interpreted as 2 × 256 + 1: the overflow carries into the octet to its left.
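The carry arithmetic is easy to check in any POSIX shell: 513 splits into a quotient that spills into the preceding octet and a remainder that stays in the last one.

```shell
# 513 overflows one octet: 513 = 2 * 256 + 1, so ".513" becomes ".2.1"
last=513
echo "$((last / 256)).$((last % 256))"   # prints 2.1
```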

Decimal IP notation

We can use a decimal representation of our IP address.

$ ping 167772673
PING 167772673 (10.0.2.1): 56 data bytes
64 bytes from icmp_seq=0 ttl=61 time=15.441 ms
64 bytes from icmp_seq=1 ttl=61 time=4.627 ms

This translates 167772673 to 10.0.2.1.
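The decimal form is simply the four octets packed into a single 32-bit number, which shell arithmetic can reproduce:

```shell
# 10.0.2.1 -> (10 << 24) + (0 << 16) + (2 << 8) + 1
echo $(( (10 << 24) | (0 << 16) | (2 << 8) | 1 ))   # prints 167772673
```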

Hex IP notation

Well, if decimal notation worked, HEX should work too -- right? Of course it does!

$ ping 0xA000201
PING 0xA000201 (10.0.2.1): 56 data bytes
64 bytes from icmp_seq=0 ttl=61 time=7.329 ms
64 bytes from icmp_seq=1 ttl=61 time=18.350 ms

The hex value A000201 translates to 10.0.2.1. By prefixing the value with 0x, we indicate that what follows should be interpreted as a hexadecimal value.
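Shell arithmetic understands the 0x prefix too, so converting back to the decimal form is a one-liner:

```shell
# 0xA000201 is the same 32-bit value as 167772673 (i.e. 10.0.2.1)
echo $(( 0xA000201 ))   # prints 167772673
```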

Octal IP notation

Take this one for example.

$ ping 10.0.2.010
PING 10.0.2.010 (10.0.2.8): 56 data bytes

Notice how that last .010 octet gets translated to .8?
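The leading zero is what triggers the octal interpretation, the same convention shell arithmetic uses:

```shell
# A leading 0 marks an octal constant: 010 (octal) == 8 (decimal)
echo $(( 010 ))    # prints 8
echo $(( 0377 ))   # prints 255
```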

Using sipcalc to find these values

There's a useful command line IP calculator called sipcalc you can use for the decimal & hex conversions.


July 04, 2019

This week, the second edition of "Pass-The-Salt" was organized in Lille, France. The conference followed the same format as last year and was organized at the same location (see my previous diary). I like this kind of event, where you can really meet people (the number of attendees was close to 200) without having to fix an appointment at a given time and place to be sure to catch your friends! The conference was a mix of talks and technical workshops spread across three days. The schedule was organized by "topic", and presentations were grouped together (Low-Level Hacking & Breaking, Free Software Projects under Stress, etc).

After a brief introduction by Christophe, the first set of talks started (the first day began at 2PM). Ange Albertini was the first speaker and presented the results of his current research on hash collisions. Yes, everybody knows that MD5 has been broken for a while, but what does that mean for us in our daily infosec life? What is the real impact? After a short reminder about hash functions and why they are so useful in many cases (to check passwords, to index files or to validate them), Ange described how it is possible to generate files with the same hash. The approach is called a pre-image attack or PIA:

  • Given any hash H, generate a file X so that hash(X) = H

The two types of collision are called IPC ("Identical Prefix Collision") and CPC ("Chosen Prefix Collision"). After a review of both techniques, Ange explained how to implement this in practice and gave some examples of "malicious" files, like a single file that is at once a PDF document, an executable, an image and a video, depending on the application used to open it. That's also the case for his slide deck (available here): rename the file with a '.html' extension and open it in your browser!

The next presentation covered the tool "Dexcalibur" by Georges-B. Michel. The idea behind the tool is to automate the reversing of Android applications. A lot of Android applications (.dex) are obfuscated and generate a new .dex while executed. The problem for the reverse-engineer is that you can only hook functions that you're aware of. The tool presented by Georges-B. is based on Frida (another cool tool) and others like Baksmali, LIEF and Capstone.

Aurélien Pocheville explained how he reversed an NFC reader (ChameleonMini), using Ghidra to achieve this. Not my domain of predilection, but it was interesting. I liked the funny anecdotes Aurélien provided, like when he bricked his friend's garage token while the original was… in the closed garage 🙂

After a short coffee break, Johannes vom Dorp presented his tool FACT, the "Firmware Analysis and Comparison Tool". Today, many devices are based on a firmware (think about all the IoT toys around you). A typical firmware analysis process has the following steps: Unpacking > Information gathering > Identifying weaknesses > Reverse Engineering. This can quickly become a boring task. FACT replaces steps 1-3 by automating them. The tool looks very interesting and is available here.

Thomas Barabosch presented "cwe_checker: Hunting Binary Code Vulnerabilities Across CPU Architectures". The idea is similar to the previous talk on firmware analysis: we need to automate as much as possible of our boring daily tasks, but that's not always easy. The case presented by Thomas is a perfect example: IoT devices run on multiple CPU architectures, which does not make finding vulnerabilities easy. Thomas introduced the tool cwe_checker, which is based on the Binary Analysis Platform (BAP).

The first day ended with a talk about the Proxmark3, the awesome NFC tool. Christian Herrmann (Iceman) started with a fact: many people have a Proxmark but can't use it. I fully agree; it's also my case. I bought one because I had a good opportunity, but I never took the time to play with it. He gave details about rdv40, the latest version of the tool. Christian also ran a workshop on the Proxmark, which was very popular.

Day two started with a presentation by François Lesueur: "Mini-Internet Using LXC". What is MI-LXC? It's a framework for constructing virtual infrastructures on top of LXC to practise security on realistic infrastructures. (A cyberrange, by contrast, is a cart with some hardware and a framework to manage VMs and scenarios.) MI-LXC has common services, routing, information systems… just like the regular internet. There were already other frameworks, but they did not meet all of François's expectations, so he started his own project. The components: containers based on LXC, the LXC Python binding and bash scripts. With an initial config of 20 containers and 8 internal bridges, you can run a small, fully working internet (using 4GB of storage and 800MB of memory). The project is available here.

The next talk was "Phishing Awareness, feedback on a bank's strategy" by Thibaud Binetruy, the first slide deck with a TLP indication. Thibaud first presented the tool they developed, "Swordphish" (link), and, in a second part, how they used it inside the Société Générale bank. They first used a commercial tool, but it was expensive and not practical for 150K users. As in many cases, the classic story is to decide to write your own tool. It does not have to be complex: just send emails and wait for feedback, and make it "simple" enough to be used by non-tech people. First create a template (a mail with a link, a redirection, etc), then create a campaign (ransomware, fake form, …) and follow it on a dashboard. You can tag people (tech, non-tech, …) to improve reporting and statistics. There is an export to Excel, because people like to use it to generate their own stats. They also developed a reporting button in Outlook to report phishing, because users don't always know who the security contact is; it also helps to get all the SMTP headers!

After the morning break, we switched to more offensive talks, especially against web technologies like tokens and APIs. Louis Nyffenegger presented "JWAT… Attacking JSON Web Tokens". Perfect timing: after learning how to abuse JWT, the next presentation was about protecting APIs. MAIF (a French mutual insurance company) has open-sourced Otoroshi, a reverse proxy & API management solution. Key features are exposure, quotas & throttling, resiliency, monitoring and, of course, security. Apps need to be sure that a request actually comes from Otoroshi: Otoroshi sends a header containing a signed random state with a short TTL, and the app sends back a header containing the original token, signed, with a short TTL. Otoroshi handles TLS end-to-end. JWT tokens are supported (see the previous talk).

After the lunch break (the idea of bringing food trucks to the venue was nice), the topics related to attack/defence started. The first presentation was "Time-efficient assessment of open-source projects for Red Teamers" by Thomas Chauchefoin & Julien Szlamowicz. Red team is a nice way to say "real-world pentest", so with a large scope (access, physical, etc). They explained, with very easy-to-read slides, how they compromised an application. Then, Orange Tsai (from DEVCORE) explained how he found vulnerabilities in Jenkins (which is a very juicy target given the number of companies using it).

Jean-Baptiste Kempf from the VLC project came to defend his program. VLC is the most used video player worldwide; Jean-Baptiste gave crazy numbers, like 1M downloads per day! When you are a key player, you are often the target of many attacks and campaigns. He explained step by step how VLC is developed and updated, and tried to kill some myths about it in a very free-spirited style.

The last set of talks was related to privacy. Often, privacy is like security: it is addressed at the end of a project, and this is bad! The first talk was "OSS in the quest for GDPR compliance": Aaron Macsween (from XWiki / Cryptpad) explained the impact of GDPR on such tools and how they address it. Then Nicolas Pamart explained how TLS 1.3 is used in the Stormshield firewall solutions; this new TLS version focuses on enhancing user privacy and security. Finally, Quinn Norton and Raphaël Vinot presented their tool Lookyloo, a web interface that scrapes a website and then displays a tree of the domains calling each other.

The day ended with about one hour of lightning talks (or rump sessions as called at Pass-The-Salt) and the social event in the centre of Lille.

The last day started for me with my workshop about OSSEC & Threat Hunting. The room was almost full and we had interesting exchanges during the morning. Sadly, I missed the talk about Sudo by Peter Czanik which looked very interesting (I just need to read the slides now).

In the afternoon, we had a bunch of presentations about how to secure the Internet. The first one was about a new project kicked off by CIRCL; when they release a new tool, you can expect something cool. Alexandre Dulaunoy presented a brand new tool: the D4 Project. The description on the website sums it all up: "A large-scale distributed sensor network to monitor DDoS and other malicious activities relying on an open and collaborative project." As explained by Alexandre, everybody runs "tools" like PassiveDNS, PassiveSSL or honeypots to collect data, but it's very difficult to correlate the collected data or to analyse it at a large scale. The idea behind D4 is to implement a simple protocol that can be used to send collected data to a central repository for further analysis.

Then Max Mehl (Free Software Foundation Europe) came to explain why free software is important in IT security but may also lead to some issues.

The next talk was presented by Kenan Ibrović: "Managing a growing fleet of WiFi routers combining OpenWRT, WireGuard, Salt and Zabbix". The title says it all: the idea was to demonstrate how you can manage a worldwide network with free tools.

Finally, two presentations closed the day. "Better curl!" by Yoann Lamouroux: he gave a lightning talk last year, but there were so many interesting things to say about curl, the command-line data transfer tool, that he came back with a regular talk presenting many (unknown) features of curl. And "PatrOwl", presented by Nicolas Mattiocco: PatrOwl is a tool to orchestrate sec-ops and automate calls to commercial or open-source tools that perform checks.

To conclude: three intense days with many talks (because many of them were 20-minute talks, the schedule could fit more), a relaxed atmosphere and a good environment. It seems to be a "go" for the 2020 edition!

Note: All the slides have been (or will soon be) uploaded here.

[The post Pass-The-Salt 2019 Wrap-Up has been first published on /dev/random]

June 28, 2019

This week, Acquia announced the opening of its new office in Pune, India, which extends our presence in the Asia Pacific region. In addition to Pune, we already have offices in Australia and Japan.

I've made several trips to India in recent years, and have experienced not only Drupal's fast growth, but also the contagious excitement and passion for Drupal from the people I've met there.

While I wasn't able to personally attend the opening of our new office, I'm looking forward to visiting the Pune office soon.

For now, here are a few pictures from our grand opening celebration:

Acquians at the opening of the Pune, India office
Acquians at the opening of the Pune, India office
Acquians at the opening of the Pune, India office

June 25, 2019

At the end of last week, Klaas (one of my best friends) and I drove from Belgium to Wales dead-set on scrambling up Tryfan's North Ridge and hiking the Ogwen Valley in Snowdonia.

Scrambling means hiking up steep, rocky terrain using your hands, without the need for ropes or any other kind of protection. It's something between hiking and rock climbing.

Tryfan's North Ridge silhouette next to Llyn Ogwen.

17 people have died on Tryfan in the past 30 years, and 516 parties had to be rescued. While the scrambling on Tryfan is rarely technically challenging, it can be dangerous and difficult at times (video of Klaas scrambling), especially when carrying heavy backpacks. Tryfan shouldn't be taken lightly.

It took us five hours to make it to the top — and it's taking me four days to recover so far. After we reached the top, we descended a few hundred meters and found a patch of grass where we could set up our tent.

Our campsite on a small ridge on the back of Tryfan. The views were spectacular.

Carrying those heavy backpacks paid off not only because we were able to bring our camping supplies but also because Klaas carried up a steak dinner with cocktails — a late birthday surprise for my 40th birthday. Yes, you read that correctly: a steak dinner with cocktails on top of a mountain! It was a real treat!

Grilling a steak on a small ridge on Tryfan
Making cocktails on a small ridge on Tryfan

During dinner, the weather started to turn; dark clouds came in and it started to rain. By night time the temperature had dropped to 2 degrees Celsius (35 degrees Fahrenheit). Fortunately, we were prepared and had hauled not only a tent and a steak dinner up the mountain, but also warm clothing.

The temperatures swung from 20ºC (68ºF) during the day to 2ºC (35ºF) at night. In the evenings, we were forced to put on warm clothes and layer up.

What didn't go so well was that my brand new sleeping pad had a leak and I didn't bring a repair kit. Although, sleeping on the ground wasn't so bad. The next morning, when we opened our tent, we were greeted not only by an amazing view, but also by friendly sheep.

A beautiful view from our tent
A sheep in front of our tent

The next two days, we hiked through the Ogwen Valley. Its wide glacial valley is surrounded by soaring mountains and is incredibly beautiful.

A stone wall with a step ladder
Dries standing on a cliff
Klaas resting on some rocks looking into the distance after scrambling up Tryfan
Refilling our water bottles on a creek

After three days of hiking we made it back to the base of Tryfan where it all started. We felt a big sense of accomplishment.

Selfie taken with Klaas' iPhone 8, me pointing to Tryfan's North Ridge where our hike began just three days earlier.

We hadn't taken a shower in four days, so we definitely started to become aware of each other's smell. As soon as we got to Klaas' Volkswagen California (campervan), we showered in the parking lot, behind the car. I ended up washing my armpits four times, once for each day I didn't shower.

Dries washing his hair

For more photos, check out my photo album.

June 20, 2019

Earlier this week at our Acquia Engage conference in London, Acquia announced a new product called "Content Cloud", a headless, SaaS-based content-as-a-service solution built on Drupal.

Years ago, we heard that organizations wanted to:

  • Create content that is easy to re-use across different channels, such as websites and mobile applications, email, digital screens, and more.

  • Use a content management system with a modern web service API that allows them to use their favorite front-end framework (e.g. React, Angular, Vue.js, etc) to build websites and digital experiences.

As a result, Acquia spent the last 5+ years helping to improve Drupal's web services capabilities and authoring experience.

But we also heard that organizations want to:

  • Use a single repository to manage all their organization's content.
  • Make it really easy to synchronize content between all their Drupal sites.
  • Manage all content editors from a central place to enable centralized content governance and workflows.
  • Automate the installation, maintenance, and upgrades of their Drupal-based content repository.

All of the above becomes even more important as organizations scale the number of content creators, websites and applications. Many large organizations have to build and maintain hundreds of sites and manage hundreds of content creators.

So this week, at our European customer conference, we lifted the curtain on Acquia Content Cloud, a new Acquia product built using Drupal. Acquia Content Cloud is a content-as-a-service solution that enables simplified, headless content creation and syndication across multi-channel digital experiences.

For now, we are launching an early access beta program. If you’re interested in being considered for the beta or want to learn more as Content Cloud moves toward general availability, you can sign up here.

In time, I plan to write more about Content Cloud, especially as we get closer to its initial release. Until then, you can watch the Acquia Content Cloud teaser video below:

June 18, 2019

Today, we released a new version of Acquia Lift, our web personalization tool.

In today's world, personalization has become central to the most successful customer experiences. Most organizations know that personalization is no longer optional, but have put it off because it can be too difficult. The new Acquia Lift solves that problem.

While before, Acquia Lift required a degree of fine-tuning from a developer, the new version simplifies how marketers create and launch website personalization. With the new version, anyone can point, click and personalize content without any code.

We started working on the new version of Acquia Lift in early 2018, well over a year ago. In the process we interviewed over 50 customers, redesigned the user interface and workflows, and added various new capabilities to make it easier for marketers to run website personalization campaigns. And today, at our European customer conference, Acquia Engage London, we released the new Acquia Lift to the public.

You can see all of the new features in action in this 5-minute Acquia Lift demo video:

The new Acquia Lift offers the best web personalization solution in Acquia's history, and definitely the best tool for Drupal.

June 14, 2019

Terrorism has always been a political invention of the powers that be. Thousands of people die every year in car accidents or from consuming products harmful to their health? That's normal, it keeps the economy going, it creates jobs. Let a suicidal madman slaughter the people around his home and we speak of a deranged individual. But let a Muslim pull out a knife in a metro station and, on a loop on every television channel, we invoke terrorism, which will have soldiers camping in our cities and allow the passing of liberticidal laws, the kind that should catch the attention of any alert democrat.

The very definition of terrorism implies that it is a construct of the mind, a carefully maintained terror rather than a rational observation. We are told about means, accomplices, financing. It is just a tactic to cloud our brains, to activate the conspiratorial mode that our neural system unfortunately cannot shake off without a violent conscious effort.

For if the authorities and the media invent terrorism, it is above all because we are hungry for battles, blood, fear and grand, complex theories. Reality is far more prosaic: a single man can do far more damage than most recent terrorist attacks. I would go even further! I claim that, with a few exceptions, acting as a group has actually hindered terrorists. Their attacks are levelled down; their collective stupidity blocks out any glimmer of individual lucidity. I am not speaking idly; I decided to prove it.

It is true that the signs cost me some time and a few trips back and forth to the hardware store. But I was not unhappy with the result. Thirty portable signs displaying the anti-terrorism security instructions: no weapons, no dangerous objects, no liquids. Below them, three recycling bins that also serve as a base.

Well hidden inside that base, a very simple electronic device with a smoke charge. Enough to frighten without hurting anyone.

All I then had to do was rent a work van with a vaguely official look and, dressed in an orange jumpsuit, place my signs in front of the entrances of the stadium and the capital's various concert halls.

Of course I would have been arrested if I had dropped off badly wrapped packages while wearing a djellaba. But what do you say to a clean-shaven guy in work clothes who is setting up information signs printed with the city's official logo? To those who asked questions, I said I had simply been ordered to put them up. To the one security guard who found it a bit odd, I produced a mission order signed by the alderman for public works. That reassured him. I should say I didn't even forge the signature: I copied the one published in the minutes of the municipal council on the city's website. My colour printer did the rest.

Generally speaking, nobody gets suspicious if you don't take anything. I was bringing equipment that costs money; I couldn't possibly have come up with all that on my own.

With my thirty signs in place, I moved on to the second phase of my plan, which required slightly tighter timing. I had carefully chosen my day because of an important international match at the stadium and several major concerts.

If only I had known… Even today, I regret not having done it a little earlier. Or later. Why at the same time?

Mais n’anticipons pas. M’étant changé, je me rendis à la gare ferroviaire. Mon billet de Thalys dument acheté en main, je gagnai ma place et, saluant ma voisine de travée, je mis mon lourd attaché-caisse dans le porte-bagage. Je glissai aussi mon journal dans le filet devant moi. Consultant ma montre, je remarquai à haute voix qu’il restait quelques minutes avant le départ. D’un air innocent, je demandai où se trouvait le wagon-bar. Je me levai et, après avoir traversé deux wagons, sortis du train.

Il n’y a rien de plus louche qu’un bagage abandonné. Mais si son propriétaire est propre sur lui, porte la cravate, ne montre aucun signe de nervosité et laisse sa veste sur le siège, le bagage n’est plus abandonné. Il est momentanément déposé. C’est aussi simple que cela !

Pour la beauté du geste, j’avais calculé l’emplacement idéal pour mettre mon détonateur et acheté ma place en conséquence. J’ai ajouté une petite touche de complexité : un micro-ordinateur avec un capteur GPS qui déclencherait mon bricolage au bon moment. Ce n’était pas strictement nécessaire, mais en quelques sortes une cerise technophile sur le gâteau.

Je ne voulais que faire peur, uniquement effrayer ! La coïncidence est malheureuse, mais pensez que j’avais été jusqu’à m’assurer que mon fumigène ne produirait pas la moindre déflagration susceptible d’être interprétée comme un coup de feu ! Je voulais éviter une panique !

Leaving the station, I jumped on the shuttle to the airport. I made it a point of honour to finish my work. I arrived just in time. Under the gaze of a security guard, I dropped the plastic bottles containing my device into the bin provided for that purpose. It was rush hour; the queue was enormous.

I have never understood why terrorists try either to get on board the plane or to set off their bomb in the vast check-in area. The security checkpoint is the smallest and most densely packed zone, because of the long queues. Tightening the checks only lengthens the queues and makes that zone even more suited to an attack. What absurdity!

Paradoxically, it is also the only zone where abandoning a fairly bulky object is not suspicious: it is even normal and encouraged! No containers over a tenth of a litre! Besides the plastic bottles, I put on a crestfallen face over my cheap thermos. The screener waved at me to comply. So it was on his orders that I dropped the detonator into the bin, right in the middle of the crisscrossing queues.

I learned the news while sipping a coffee near the boarding gates. A series of explosions at the stadium, just as the crowd was pressing in to enter. My blood ran cold! Not today! Not at the same time as me!

The social networks were buzzing with rumours. Some spoke of explosions outside concert halls. These reports were at first denied by the authorities and by journalists, which partly reassured me. Until a wave of tweets made me realize that, obsessed with the stadium explosions and several hundred wounded, the authorities simply were not aware. There were no ambulances left. Outside the halls, music lovers with torn-off limbs lay dying in their own blood. The telephone network was saturated; the same images played in a loop, reshared thousands of times by those who could catch a semblance of wifi.

Some witnesses spoke of a massive attack, of armed fighters shouting "Allahu Akbar!". Reports told of soldiers patrolling the streets and defending themselves valiantly. Bodies littered the cobblestones, even far from any explosion. If Twitter was to be believed, it was all-out war!

It seems that, in reality, only some forty people were hit by the gunfire of frightened soldiers. And that at one spot only, an armed individual, believing himself under attack, returned fire, killing one of the soldiers and triggering a response that left two dead and eight wounded. The rest of the dead and wounded away from the blast sites were apparently due to stampedes.

But the soldiers' presence also made up, in some places, for the lack of ambulances. They administered first aid and saved lives. Paradoxically, in some cases they treated people they had just shot.

Thank goodness the weapons of war they lugged around were not loaded. A single round from a gun of that calibre can pass through several people, through cars, through walls. The Geneva Convention strictly prohibits their use outside war zones. They serve only to put on a show and to provide a small income for the osteopaths who set up shop near the barracks. In the event of a terrorist attack, the soldiers must therefore shed their hideous burden to draw their handguns. Which remain no less deadly.

I was ashen as I read my Twitter feed. Around me, everyone was trying to call family and friends. I believe the derailment of the two TGVs went almost unnoticed in the midst of it all. Exactly as I had imagined it: a fairly intense blast in one carriage, at luggage height, timed to occur as the train crossed paths with another. Relative to each other, the two trains were moving at 600 km/h. It is not hard to imagine that if one of them lurches from an explosion and clips the other, catastrophe follows.

How to make sure the explosion happened at exactly the right moment?

I don't know how the real terrorists went about it, but I, for my part, had devised a little learning algorithm that recognized the noise and the change of cabin pressure when two trains cross. I tested it a dozen times and coupled it to a GPS to delimit a firing zone. A most amusing gadget. But it was wired to nothing more than a smoke device, of course.

It was when the explosion rang out in the airport that I was forced to rule out mere coincidence. The odds were becoming too enormous. I immediately thought of my blog post, scheduled to be published and shared on social networks at the time of the last explosion. In it I explained my approach, apologized for the inconvenience of the smoke devices, and justified myself by saying it was a very small price to pay to demonstrate that all the assaults on our most fundamental rights, all the surveillance in the world, could never stop terrorism. That the only way to prevent terrorism is to give people reasons to love their own lives. That, to prevent radicalization, prison sentences should be replaced by library sentences, without television, without Internet, without smartphones. Locked up between Victor Hugo and Amin Maalouf, extremism would quickly breathe its last.

Had my blog post been shared too early because of a programming mistake on my part? Or had it been hacked? Had my ideas been picked up by a real terrorist group intent on framing me?

None of it made any sense. People in the airport were screaming, throwing themselves flat on the ground. Those who fled collided with those who wanted to help or to see the source of the explosion. In the midst of this tumult, I had to admit to myself that I had often wondered what "my" attacks would look like. I had even done extensive research on explosives, hiding my activity behind the Tor network. That way I discovered a great many things and even ran a few tests, but it would never have occurred to me to kill.

Everything was planned as a simple intellectual game, a way of spectacularly exposing the inanity of our leaders.

It is true that real explosions would be even more striking. I told myself so several times. But from there to acting on it!

And yet, reading your investigation, a sense of unease washes over me. I have so often pictured the method for arming real explosives… Do you believe I could have carried it out unconsciously?

I do not want to kill. I did not want to kill. All these dead, these images of horror. The responsibility chokes me, terrifies me. To what infamy would my subconscious have driven me!

I would never have thought myself capable of killing so much as an animal. But if your investigation proves it, I must face the facts. I am the sole culprit, the only one responsible for all these deaths. There are no terrorists, no underground organization, no ideology.

When you think about it, it is particularly amusing. But I am not sure that suits you. Are you really prepared to reveal the truth to the public? To announce that the martial law you imposed was, in the end, about nothing more than a slightly absent-minded engineer who mixed up smoke charges and explosives? That the billions invested in counter-terrorism will never be able to stop a lone individual, and are at the root of these attacks?

Good luck explaining all that to the media and to our irresponsible politicians, Madam Examining Magistrate!

Completed on 14 April 2019, somewhere off Piraeus, in the Aegean Sea.

Photo by Warren Wong on Unsplash

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE licence.

June 11, 2019

Recently I was interviewed on RTL Z, the Dutch business news television network. In the interview, I talk about the growth and success of Drupal, and what is to come for the future of the web. Beware, the interview is in Dutch. If you speak Dutch and are subscribed to my blog (hi mom!), feel free to check it out!

June 10, 2019

Recently, GitHub announced an initiative called GitHub Sponsors where open source software users can pay contributors for their work directly within GitHub.

There has been quite a bit of debate about whether initiatives like this are good or bad for Open Source.

On the one hand, there is the concern that the commercialization of Open Source could corrupt Open Source communities, harm contributors' intrinsic motivation and quest for purpose (blog post), or could lead to unhealthy corporate control (blog post).

On the other hand, there is the recognition that commercial sponsorship is often a necessary condition for Open Source sustainability. Many communities have found that to support their growth, as a part of their natural evolution, they need to pay developers or embrace corporate sponsors.

Personally, I believe initiatives like GitHub Sponsors, and others like Open Collective, are a good thing.

They help not only with the long-term sustainability of Open Source communities, but also improve diversity in Open Source. Underrepresented groups, in particular, don't always have the privilege of free time to contribute to Open Source outside of work hours. Most software developers have to focus on making a living before they can focus on self-actualization. Without funding, Open Source communities risk losing or excluding valuable talent.

I published the following diary on “Interesting JavaScript Obfuscation Example“:

Last Friday, one of our readers (thanks Mickael!) reported a phishing campaign based on a simple HTML page to us. He asked us how to properly extract the malicious code within the page. I analyzed the file and it looked interesting for a diary, because a nice obfuscation technique was used in a JavaScript file, but also because the attacker tried to prevent automatic analysis by adding some boring code. In fact, the HTML page contains a malicious Word document encoded in Base64. HTML is wonderful because you can embed data into a page and the browser will automatically decode it. This is often used to deliver small pictures like logos… [Read more]
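The sample itself isn't reproduced in the diary excerpt, but the idea of a Base64-encoded payload embedded in an HTML page is easy to illustrate. A toy sketch (the page, the grep pattern and the payload below are all made up for illustration; real samples need more careful carving):

```shell
# Toy page with a Base64 blob stashed in a script variable (not the real sample).
cat > page.html <<'EOF'
<script>var doc = "SGVsbG8sIHdvcmxkIQ==";</script>
EOF

# Grab any long Base64-looking quoted string and decode it.
grep -o '"[A-Za-z0-9+/=]\{16,\}"' page.html | tr -d '"' | base64 -d
# prints: Hello, world!
```

In a real case the decoded bytes would be the embedded Word document, which you would write to a file and analyze further.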

[The post [SANS ISC] Interesting JavaScript Obfuscation Example has been first published on /dev/random]

June 07, 2019

The post Nginx: add_header not working on 404 appeared first on

I had the following configuration block on an Nginx proxy.

server {
  listen        80 default;
  add_header X-Robots-Tag  noindex;
}

The idea was to add the X-Robots-Tag header on all requests being served. However, it didn't work for 404 hits.

$ curl -I "http://domain.tld/does/not/exist"
HTTP/1.1 404 Not Found

... but no add_header was triggered.

Turns out, you need to specify that the header should be added always, not just on successful requests.

server {
  listen        80 default;

  add_header X-Robots-Tag  noindex always;
}

This fixes it, by adding always to the end of the add_header directive.
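A related add_header gotcha worth knowing about (documented nginx behaviour in general, not something from the case above): add_header directives are only inherited from the enclosing level if the current level defines none of its own. A sketch, with a hypothetical /assets/ location:

```nginx
server {
  add_header X-Robots-Tag  noindex always;

  location /assets/ {
    # Defining any add_header here discards all inherited ones, so
    # X-Robots-Tag would no longer be sent for /assets/ ...
    add_header Cache-Control "public, max-age=86400";
    # ... unless you repeat it explicitly:
    add_header X-Robots-Tag  noindex always;
  }
}
```

So if a header mysteriously disappears for some URLs, check whether a nested block adds headers of its own.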


June 04, 2019

June 01, 2019

A few months ago, I moved from Mechelen, Belgium to Cape Town, South Africa.

Muizenberg Beach

No, that's not the view outside my window. But Muizenberg beach is just a few minutes away by car. And who can resist such a beach? Well, if you ignore the seaweed, that is.

Getting settled after a cross-continental move takes some time, especially since the stuff I decided I wanted to take with me needed to be shipped by boat, and that took a while to arrive. Additionally, the reason for the move was to move in with someone, and two people living together for the first time requires some adjustment to their daily routines; that is no different for us.

After having been here for several months now, however, things are pulling together:

I had a job lined up before I moved to Cape Town; but there were a few administrative issues in getting started, which meant that I had to wait at home for a few months while my savings were being reduced to almost nothingness. That is now mostly resolved (except that there seem to be a few delays with payment, but nothing that can't be resolved).

My stuff arrived a few weeks ago. Unfortunately the shipment was not complete; the 26" 16:10 FullHD monitor that I've had for a decade or so now, somehow got lost. I contacted the shipping company and they'll be looking into things, but I'm not hopeful. Beyond that, there were only a few minor items of damage (one of the feet of the TV broke off, and a few glasses broke), but nothing of real consequence. At least I did decide to go for the insurance which should cover loss and breakage, so worst case I'll just have to buy a new monitor (and throw away a few pieces of debris).

While getting settled, it was harder for me to spend quality time on free software or Debian-related things, but I did manage to join Gerry and Mark for some FOSDEM-related video review a while ago (meaning all the 2019 videos that could be released have now been released), spent some time configuring the Debian SReview instance for the Marseille miniconf last weekend, and will do the same for the Hamburg one happening soon. Additionally, I followed up on some comments on the upstream nbd mailing list.

All in all, I guess it's safe to say that I'm slowly coming out of hibernation. Up next: once the payment issues have been fully resolved and I can spend money with a bit more impunity, join a local choir and/or orchestra and/or tennis club, and have some off time.

May 29, 2019

I published the following diary on “Behavioural Malware Analysis with Microsoft ASA“:

When you need to quickly analyze a piece of malware (or just a suspicious program), your goal is to determine as quickly as possible what the impact is. In many cases, we don't have time to dive very deep because operations must be restored ASAP. To achieve this, different actions can be performed against the sample: The first step is to perform a static analysis (like extracting strings, PE sections, YARA rules, etc.). Based on the first results, you can decide to go to the next step: the behavioural analysis. And finally, you decide to perform some live debugging or reverse engineering. Let's focus on the second step, the behavioural analysis… [Read more]

[The post [SANS ISC] Behavioural Malware Analysis with Microsoft ASA has been first published on /dev/random]

May 27, 2019

The post Some useful changes to the Oh Dear! monitoring service appeared first on

Things have been a bit more quiet on this blog, but it's only because my time & energy have been spent elsewhere. In the last couple of months, we launched some useful new additions to the Oh Dear! monitoring service.

On top of that, we shared tips on how to use our crawler to keep your cache warm, and we've been featured by Google as a prominent user of their new .app TLD. Cool!

In addition, we're always looking for feedback. If you notice something we're missing in terms of monitoring, reach out and we'll see what we can do!


May 25, 2019

37,000 American flags on Boston Common

More than 37,000 American flags are on the Boston Common — an annual tribute to fallen military service members. Seeing all these flags was moving, made me pause, and recognize that Memorial Day weekend is not just about time off from work, outdoor BBQs with friends, or other fun start-of-summer festivities.

Autoptimize just joined the “1+ million active installs”-club. Crazy!

I’m very happy; thanks everyone for using it, thanks for the support questions & all the great feedback therein, and especially thanks to the people who actively contributed, and especially-especially to Emilio López (Turl) for creating Autoptimize and handing it over to me back in 2013, and to Tomaš Trkulja, who cleaned up a lot of the messy code I added to it and introduced me to PHP CodeSniffer & Travis CI tests.

May 24, 2019

Autoptimize 2.5.1 (out earlier this week) does not have an option to enforce font-display on Google Fonts just yet, but this little code snippet does exactly that:

function add_display( $in ) {
  return $in . '&display=swap';
}
// Hook this into the filter Autoptimize applies when building the Google Fonts
// URL; check the Autoptimize source for the exact filter name for your version:
// add_filter( 'autoptimize_filter_...', 'add_display' );

Happy swapping (or fallback-ing or optional-ing) :-)

May 23, 2019

Last month, Special Counsel Robert Mueller's long-awaited report on Russian interference in the U.S. election was released on the website.

With the help of Acquia and Drupal, the report was successfully delivered without interruption, despite a 7,000% increase in traffic on its release date, according to the Ottawa Business Journal.

According to Federal Computer Week, by 5pm on the day of the report's release, there had already been 587 million site visits, with 247 million happening within the first hour.

During these types of high-pressure events when the world is watching, no news is good news. Keeping sites like this up and available to the public is an important part of democracy and the freedom of information. I'm proud of Acquia's and Drupal's ability to deliver when it matters most!

May 22, 2019

Drupal 8.7 was released with huge API-First improvements!

The REST API only got fairly small improvements in the 7th minor release of Drupal 8, because it reached a good level of maturity in 8.6 (where we added file uploads, exposed more of Drupal’s data model and improved DX), and because we were of course busy with JSON:API :)

Thanks to everyone who contributed!

  1. JSON:API #2843147

    Need I say more? :) After keeping you informed about progress in October, November, December and January, followed by one of the most frantic Drupal core contribution periods I’ve ever experienced, the JSON:API module was committed to Drupal 8.7 on March 21, 2019.

    Surely you’re now thinking: But when should I use Drupal’s JSON:API module instead of the REST module? Well, choose REST if you have non-entity data you want to expose. In all other cases, choose JSON:API.

    In short, after having seen people use the REST module for years, we believe JSON:API makes solving the most common needs an order of magnitude simpler — sometimes two. Many things that would otherwise require a lot of client-side code, a lot of requests, or even server-side code come for free when using JSON:API. That’s why we added it, of course :) See Dries’ overview if you haven’t already. It includes a great video that Gabe recorded that shows how it simplifies common needs. And of course, check out the spec!

  2. datetime & daterange fields now respect standards #2926508

    They were always intended to respect standards, but didn’t.

    For a field configured to store date + time:

      "value": "2017-03-01T20:02:00",
      "value": "2017-03-01T20:02:00+11:00",

    The site’s timezone is now present! This is now a valid RFC3339 datetime string.

    For a field configured to store date only:

      "value": "2017-03-01T20:02:00",
      "value": "2017-03-01",

    Time information used to be present despite this being a date-only field! RFC3339 only covers combined date and time representations. For date-only representations, we need to use ISO 8601. There isn’t a particular name for this date-only format, so we have to hardcode the format. See

    Note: backward compatibility is retained: a datetime string without timezone information is still accepted. This is discouraged though, and deprecated: it will be removed for Drupal 9.

    Previous Drupal 8 minor releases have fixed similar problems. But this one, for the first time ever, implements these improvements as a @DataType-level normalizer, rather than a @FieldType-level normalizer. Perhaps that sounds too far into the weeds, but … it means it fixed this problem in both rest.module and jsonapi.module! We want the entire ecosystem to follow this example.

  3. rest.module has proven to be maintainable!

    In the previous update, I proudly declared that Drupal 8 core’s rest.module was in a “maintainable” state. I’m glad to say that has proven to be true: six months later, the number of open issues still fits on “a single page” (fewer than 50). :) So don’t worry as a rest.module user: we still triage the issue queue and it’s still maintained. But new features will primarily appear for jsonapi.module.

Want more nuance and detail? See the REST: top priorities for Drupal 8.7.x issue on

Are you curious what we’re working on for Drupal 8.8? Want to follow along? Click the follow button at API-First: top priorities for Drupal 8.8.x — whenever things on the list are completed (or when the list gets longer), a comment gets posted. It’s the best way to follow along closely!1

Was this helpful? Let me know in the comments!

For reference, historical data:

  1. ~50 comments per six months — so very little noise. ↩︎

May 21, 2019

opnsense with no inodes

I use OPNsense as my firewall on a PC Engines ALIX board.

The primary reason is to have a firewall that is always up to date, unlike most commercial consumer-grade firewalls that are only supported for a few years. Having a firewall that runs open-source software - it’s based on FreeBSD - also makes it easier to review and to verify that there are no back doors.

When I tried to upgrade it to the latest release - 19.1.7 - the upgrade failed because the filesystem ran out of inodes. There is already a topic about this at the OPNsense forum and a fix available for the upcoming nano OPNsense images.

But this will only resolve the issue when a new image becomes available and would require a reinstallation of the firewall.

Unlike SGI’s XFS on GNU/Linux, which allocates inodes dynamically, a UFS filesystem on FreeBSD has a fixed number of inodes, set when the filesystem is created (ext2/3/4 has the same limitation).

To resolve the upgrade issue I created a backup of the existing filesystem, created a new filesystem with enough inodes, and restored the backup.

You’ll find my journey of fixing the out-of-inodes issue below; all commands are executed on a FreeBSD system. Hopefully this is useful for someone.

Fixing the out-of-inodes issue

I connected the OPNsense CF disk to a USB cardreader on a FreeBSD virtual system.

Find the disk

To find the CF disk I use gpart show, as this will also display the disk labels etc.

root@freebsd:~ # gpart show
=>      40  41942960  vtbd0  GPT  (20G)
        40      1024      1  freebsd-boot  (512K)
      1064       984         - free -  (492K)
      2048   4194304      2  freebsd-swap  (2.0G)
   4196352  37744640      3  freebsd-zfs  (18G)
  41940992      2008         - free -  (1.0M)

=>      0  7847280  da0  BSD  (3.7G)
        0  7847280    1  freebsd-ufs  (3.7G)

=>      0  7847280  ufsid/5a6b137a11c4f909  BSD  (3.7G)
        0  7847280                       1  freebsd-ufs  (3.7G)

=>      0  7847280  ufs/OPNsense_Nano  BSD  (3.7G)
        0  7847280                  1  freebsd-ufs  (3.7G)

root@freebsd:~ # 


Create a backup with the old-school dd, just in case.

root@freebsd:~ # dd if=/dev/da0 of=/home/staf/opnsense.dd bs=4M status=progress
  4013948928 bytes (4014 MB, 3828 MiB) transferred 415.226s, 9667 kB/s
957+1 records in
957+1 records out
4017807360 bytes transferred in 415.634273 secs (9666689 bytes/sec)
root@freebsd:~ # 


Verify that we can mount the filesystem.

root@freebsd:/ # mount /dev/da0a /mnt
root@freebsd:/ # cd /mnt
root@freebsd:/mnt # ls
.cshrc                          dev                             net        entropy                         proc
.profile                        etc                             rescue
.rnd                            home                            root
COPYRIGHT                       lib                             sbin
bin                             libexec                         sys
boot                            lost+found                      tmp
boot.config                     media                           usr
conf                            mnt                             var
root@freebsd:/mnt # cd ..
root@freebsd:/ #  df -i /mnt
Filesystem 1K-blocks    Used   Avail Capacity iused ifree %iused  Mounted on
/dev/da0a    3916903 1237690 2365861    34%   38203  6595   85%   /mnt
root@freebsd:/ # 


root@freebsd:/ # umount /mnt


We’ll create a backup with dump; we’ll use this backup to restore the data after we’ve created a new filesystem with enough inodes.

root@freebsd:/ # dump 0uaf /home/staf/opensense.dump /dev/da0a
  DUMP: Date of this level 0 dump: Sun May 19 10:43:56 2019
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/da0a to /home/staf/opensense.dump
  DUMP: mapping (Pass I) [regular files]
  DUMP: mapping (Pass II) [directories]
  DUMP: estimated 1264181 tape blocks.
  DUMP: dumping (Pass III) [directories]
  DUMP: dumping (Pass IV) [regular files]
  DUMP: 16.21% done, finished in 0:25 at Sun May 19 11:14:51 2019
  DUMP: 33.27% done, finished in 0:20 at Sun May 19 11:14:03 2019
  DUMP: 49.65% done, finished in 0:15 at Sun May 19 11:14:12 2019
  DUMP: 66.75% done, finished in 0:09 at Sun May 19 11:13:57 2019
  DUMP: 84.20% done, finished in 0:04 at Sun May 19 11:13:41 2019
  DUMP: 99.99% done, finished soon
  DUMP: DUMP: 1267205 tape blocks on 1 volume
  DUMP: finished in 1800 seconds, throughput 704 KBytes/sec
  DUMP: level 0 dump on Sun May 19 10:43:56 2019
  DUMP: Closing /home/staf/opensense.dump
root@freebsd:/ # 


According to the newfs manpage, we can specify the inode density with the -i option. A lower number will give us more inodes.

root@freebsd:/ # newfs -i 1 /dev/da0a
density increased from 1 to 4096
/dev/da0a: 3831.7MB (7847280 sectors) block size 32768, fragment size 4096
        using 8 cylinder groups of 479.00MB, 15328 blks, 122624 inodes.
super-block backups (for fsck_ffs -b #) at:
 192, 981184, 1962176, 2943168, 3924160, 4905152, 5886144, 6867136
root@freebsd:/ # 
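As a rough sanity check of what that density means (my own back-of-the-envelope arithmetic, not something from the manpage): UFS allocates roughly one inode per -i bytes of filesystem space, so with the effective density of 4096 on this 7847280-sector device:

```shell
# Approximate inode count: filesystem bytes divided by bytes-per-inode density.
FS_SECTORS=7847280   # 512-byte sectors, from the gpart/newfs output above
DENSITY=4096         # newfs rounded our "-i 1" up to 4096
echo $(( FS_SECTORS * 512 / DENSITY ))   # prints 980910
```

That is in the same ballpark as the 8 × 122624 = 980992 inodes newfs actually created, and as the roughly 980k ifree that df -i reports below.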

mount and verify

root@freebsd:/ # mount /dev/da0a /mnt
root@freebsd:/ # df -i
Filesystem         1K-blocks    Used    Avail Capacity iused    ifree %iused  Mounted on
zroot/ROOT/default  14562784 1819988 12742796    12%   29294 25485592    0%   /
devfs                      1       1        0   100%       0        0  100%   /dev
zroot/tmp           12742884      88 12742796     0%      11 25485592    0%   /tmp
zroot/usr/home      14522868 1780072 12742796    12%      17 25485592    0%   /usr/home
zroot/usr/ports     13473616  730820 12742796     5%  178143 25485592    1%   /usr/ports
zroot/usr/src       13442804  700008 12742796     5%   84122 25485592    0%   /usr/src
zroot/var/audit     12742884      88 12742796     0%       9 25485592    0%   /var/audit
zroot/var/crash     12742884      88 12742796     0%       8 25485592    0%   /var/crash
zroot/var/log       12742932     136 12742796     0%      21 25485592    0%   /var/log
zroot/var/mail      12742884      88 12742796     0%       8 25485592    0%   /var/mail
zroot/var/tmp       12742884      88 12742796     0%       8 25485592    0%   /var/tmp
zroot               12742884      88 12742796     0%       7 25485592    0%   /zroot
/dev/da0a            3677780       8  3383552     0%       2   980988    0%   /mnt
root@freebsd:/ # 


root@freebsd:/mnt # restore rf /home/staf/opensense.dump
root@freebsd:/mnt # 


root@freebsd:/mnt # df -ih /mnt
Filesystem    Size    Used   Avail Capacity iused ifree %iused  Mounted on
/dev/da0a     3.5G    1.2G    2.0G    39%     38k  943k    4%   /mnt
root@freebsd:/mnt # 


The installation had an OPNsense_Nano label, but underscores are no longer allowed in label names on FreeBSD 12. So I used OPNsense instead; we’ll update /etc/fstab with the new label name.

root@freebsd:~ # tunefs -L OPNsense /dev/da0a
root@freebsd:~ # gpart show
=>      40  41942960  vtbd0  GPT  (20G)
        40      1024      1  freebsd-boot  (512K)
      1064       984         - free -  (492K)
      2048   4194304      2  freebsd-swap  (2.0G)
   4196352  37744640      3  freebsd-zfs  (18G)
  41940992      2008         - free -  (1.0M)

=>      0  7847280  da0  BSD  (3.7G)
        0  7847280    1  freebsd-ufs  (3.7G)

=>      0  7847280  ufsid/5ce123f5836f7018  BSD  (3.7G)
        0  7847280                       1  freebsd-ufs  (3.7G)

=>      0  7847280  ufs/OPNsense  BSD  (3.7G)
        0  7847280             1  freebsd-ufs  (3.7G)

root@freebsd:~ # 

update /etc/fstab

root@freebsd:~ # mount /dev/da0a /mnt
root@freebsd:~ # cd /mnt
root@freebsd:/mnt # cd etc
root@freebsd:/mnt/etc # vi fstab 
# Device                Mountpoint      FStype  Options         Dump    Pass#
/dev/ufs/OPNsense       /               ufs     rw              1       1


root@freebsd:/mnt/etc # cd
root@freebsd:~ # umount /mnt
root@freebsd:~ # 


                  ______  _____  _____                         
                 /  __  |/ ___ |/ __  |                        
                 | |  | | |__/ | |  | |___  ___ _ __  ___  ___ 
                 | |  | |  ___/| |  | / __|/ _ \ '_ \/ __|/ _ \
                 | |__| | |    | |  | \__ \  __/ | | \__ \  __/
                 |_____/|_|    |_| /__|___/\___|_| |_|___/\___|

 +=========================================+     @@@@@@@@@@@@@@@@@@@@@@@@@@@@
 |                                         |   @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
 |  1. Boot Multi User [Enter]             |   @@@@@                    @@@@@
 |  2. Boot [S]ingle User                  |       @@@@@            @@@@@    
 |  3. [Esc]ape to loader prompt           |    @@@@@@@@@@@       @@@@@@@@@@@
 |  4. Reboot                              |         \\\\\         /////     
 |                                         |   ))))))))))))       (((((((((((
 |  Options:                               |         /////         \\\\\     
 |  5. [K]ernel: kernel (1 of 2)           |    @@@@@@@@@@@       @@@@@@@@@@@
 |  6. Configure Boot [O]ptions...         |       @@@@@            @@@@@    
 |                                         |   @@@@@                    @@@@@
 |                                         |   @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
 |                                         |   @@@@@@@@@@@@@@@@@@@@@@@@@@@@  
                                                 19.1  ``Inspiring Iguana''   

/boot/kernel/kernel text=0x1406faf data=0xf2af4+0x2abeec syms=[0x4+0xf9f50+0x4+0x1910f5]
/boot/entropy size=0x1000
/boot/kernel/carp.ko text=0x7f90 data=0x374+0x74 syms=[0x4+0xeb0+0x4+0xf40]
/boot/kernel/if_bridge.ko text=0x7b74 data=0x364+0x3c syms=[0x4+0x1020+0x4+0x125f]
loading required module 'bridgestp'
/boot/kernel/bridgestp.ko text=0x4878 data=0xe0+0x18 syms=[0x4+0x6c0+0x4+0x65c]
/boot/kernel/if_enc.ko text=0x1198 data=0x2b8+0x8 syms=[0x4+0x690+0x4+0x813]
/boot/kernel/if_gre.ko text=0x31d8 data=0x278+0x30 syms=[0x4+0xa30+0x4+0xab8]
Root file system: /dev/ufs/OPNsense
Sun May 19 10:28:00 UTC 2019

*** OPNsense.stafnet: OPNsense 19.1.6 (i386/OpenSSL) ***

 HTTPS: SHA256 E8 9F B2 8B BE F9 D7 2D 00 AD D3 D5 60 E3 77 53
               3D AC AB 81 38 E4 D2 75 9E 04 F9 33 FF 76 92 28
 SSH:   SHA256 FbjqnefrisCXn8odvUSsM8HtzNNs+9xR/mGFMqHXjfs (ECDSA)
 SSH:   SHA256 B6R7GRL/ucRL3JKbHL1OGsdpHRDyotYukc77jgmIJjQ (ED25519)
 SSH:   SHA256 8BOmgp8lSFF4okrOUmL4YK60hk7LTg2N08Hifgvlq04 (RSA)

FreeBSD/i386 (OPNsense.stafnet) (ttyu0)

root@OPNsense:~ # df -ih
Filesystem           Size    Used   Avail Capacity iused ifree %iused  Mounted on
/dev/ufs/OPNsense    3.5G    1.2G    2.0G    39%     38k  943k    4%   /
devfs                1.0K    1.0K      0B   100%       0     0  100%   /dev
tmpfs                307M     15M    292M     5%     203  2.1G    0%   /var
tmpfs                292M     88K    292M     0%      27  2.1G    0%   /tmp
devfs                1.0K    1.0K      0B   100%       0     0  100%   /var/unbound/dev
devfs                1.0K    1.0K      0B   100%       0     0  100%   /var/dhcpd/dev
root@OPNsense:~ # 
CTRL-A Z for help | 115200 8N1 | NOR | Minicom 2.7.1 | VT102 | Offline | ttyUSB0                                    


login: root
|      Hello, this is OPNsense 19.1          |         @@@@@@@@@@@@@@@
|                                            |        @@@@         @@@@
| Website:        |         @@@\\\   ///@@@
| Handbook:   |       ))))))))   ((((((((
| Forums:  |         @@@///   \\\@@@
| Lists:  |        @@@@         @@@@
| Code:  |         @@@@@@@@@@@@@@@

*** OPNsense.stafnet: OPNsense 19.1.6 (i386/OpenSSL) ***

 HTTPS: SHA256 E8 9F B2 8B BE F9 D7 2D 00 AD D3 D5 60 E3 77 53
               3D AC AB 81 38 E4 D2 75 9E 04 F9 33 FF 76 92 28
 SSH:   SHA256 FbjqnefrisCXn8odvUSsM8HtzNNs+9xR/mGFMqHXjfs (ECDSA)
 SSH:   SHA256 B6R7GRL/ucRL3JKbHL1OGsdpHRDyotYukc77jgmIJjQ (ED25519)
 SSH:   SHA256 8BOmgp8lSFF4okrOUmL4YK60hk7LTg2N08Hifgvlq04 (RSA)

  0) Logout                              7) Ping host
  1) Assign interfaces                   8) Shell
  2) Set interface IP address            9) pfTop
  3) Reset the root password            10) Firewall log
  4) Reset to factory defaults          11) Reload all services
  5) Power off system                   12) Update from console
  6) Reboot system                      13) Restore a backup

Enter an option: 12

Fetching change log information, please wait... done

This will automatically fetch all available updates, apply them,
and reboot if necessary.

This update requires a reboot.

Proceed with this action? [y/N]: y 

** Have fun! **


In his post Iconic consoles of the IBM System/360 mainframes, 55 years old, Ken Shirriff gives a beautiful overview of how IBM mainframes were operated.

I particularly liked this bit:

The second console function was “operator intervention”: program debugging tasks such as examining and modifying memory or registers and setting breakpoints. The Model 30 console controls below were used for operator intervention. To display memory contents, the operator selected an address with the four hexadecimal dials on the left and pushed the Display button, displaying data on the lights above the dials. To modify memory, the operator entered a byte using the two hex dials on the far right and pushed the Store button. (Although the Model 30 had a 32-bit architecture, it operated on one byte at a time, trading off speed for lower cost.) The Address Compare knob in the upper right set a breakpoint.

IBM System/360 Model 30 console, lower part

Debugging a program was built right into the hardware, to be performed at the console of the machine. Considering the fact that these machines were usually placed in rooms optimized for the machine rather than the human, that must have been a difficult job. Think about that the next time you’re poking at a Kubernetes cluster using your laptop, in the comfort of your home.

Also recommended is the book Core Memory: A Visual Survey of Vintage Computers. It really shows the intricate beauty of some of the earlier computers. It also shows how incredibly cumbersome these machines must have been to handle.

Core Memory: A Visual Survey of Vintage Computers

Even in IT operations, it's getting rarer and rarer to see actual hardware, and that's probably a good thing. It never hurts to look at history to get a taste of how far we've come. Life in operations has never been more comfortable: let's enjoy it by celebrating the past!

Comments | More on | @rubenv on Twitter

May 16, 2019

I published the following diary on “The Risk of Authenticated Vulnerability Scans“:

NTLM relay attacks have been a well-known way to attack Microsoft Windows environments for a while, and they usually remain successful. The magic with NTLM relay attacks? You don't need to lose time cracking the hashes, just relay them to the victim machine. To achieve this, we need a "responder" that will capture the authentication session on a system and relay it to the victim. A lab is easy to set up: install the Responder framework. The framework contains a tool called which helps to relay the captured NTLM authentication to a specific target and, if the attack is successful, execute some code! (There are plenty of blog posts that explain in detail how to (ab)use this attack scenario)… [Read more]

[The post [SANS ISC] The Risk of Authenticated Vulnerability Scans has been first published on /dev/random]

May 13, 2019

I published the following diary on “From Phishing To Ransomware?“:

On Friday, one of our readers reported a phishing attempt to us (thanks to him!). Usually, those emails are simply part of classic phishing waves and try to steal credentials from victims but, this time, it was not a simple phishing. Here is a copy of the email, which was nicely redacted… [Read more]

[The post [SANS ISC] From Phishing To Ransomware? has been first published on /dev/random]

May 12, 2019


In my previous two posts (1, 2), we created Debian- and Arch-based Docker images from scratch for the i386 architecture.

In this blog post - the last one in this series - we'll do the same for yum-based distributions like CentOS and Fedora.

Building your own Docker base images isn't difficult and lets you trust your distribution's GPG signing keys instead of the Docker Hub, as explained in the first blog post. The mkimage scripts in the contrib directory of the Moby project git repository are a good place to start if you want to build your own Docker images.


Fedora is one of the GNU/Linux distributions that still support 32-bit systems. CentOS has Special Interest Groups to support alternative architectures. The Alternative Architecture SIG creates installation images for POWER, i386, armhfp (ARM v7, 32-bit) and aarch64 (ARM v8, 64-bit).


In this blog post, we will create CentOS-based Docker images. The procedure to create Fedora images is the same.

Clone moby

[staf@centos386 github]$ git clone
Cloning into 'moby'...
remote: Enumerating objects: 7, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 269517 (delta 0), reused 1 (delta 0), pack-reused 269510
Receiving objects: 100% (269517/269517), 139.16 MiB | 3.07 MiB/s, done.
Resolving deltas: 100% (182765/182765), done.
[staf@centos386 github]$ 

Go to the contrib directory

[staf@centos386 github]$ cd moby/contrib/
[staf@centos386 contrib]$

When you run the script without arguments, you get the usage message.

[staf@centos386 contrib]$ ./ [OPTIONS] <name>
  -p "<packages>"  The list of packages to install in the container.
                   The default is blank. Can use multiple times.
  -g "<groups>"    The groups of packages to install in the container.
                   The default is "Core". Can use multiple times.
  -y <yumconf>     The path to the yum config to install packages from. The
                   default is /etc/yum.conf for Centos/RHEL and /etc/dnf/dnf.conf for Fedora
  -t <tag>         Specify Tag information.
                   default is reffered at /etc/{redhat,system}-release
[staf@centos386 contrib]$

Build the image

The script uses /etc/yum.conf (or /etc/dnf/dnf.conf on Fedora) to build the image; <name> is the name the created image gets.

[staf@centos386 contrib]$ sudo ./ centos
[sudo] password for staf: 
+ mkdir -m 755 /tmp/
+ mknod -m 600 /tmp/ c 5 1
+ mknod -m 600 /tmp/ p
+ mknod -m 666 /tmp/ c 1 7
+ mknod -m 666 /tmp/ c 1 3
+ mknod -m 666 /tmp/ c 5 2
+ mknod -m 666 /tmp/ c 1 8
+ mknod -m 666 /tmp/ c 5 0
+ mknod -m 666 /tmp/ c 4 0
+ mknod -m 666 /tmp/ c 1 9
+ mknod -m 666 /tmp/ c 1 5
+ '[' -d /etc/yum/vars ']'
+ mkdir -p -m 755 /tmp/
+ cp -a /etc/yum/vars /tmp/
+ [[ -n Core ]]
+ yum -c /etc/yum.conf --installroot=/tmp/ --releasever=/ --setopt=tsflags=nodocs --setopt=group_package_types=mandatory -y groupinstall Core
Loaded plugins: fastestmirror, langpacks
There is no installed groups file.
Maybe run: yum groups mark convert (see man yum)
+ tar --numeric-owner -c -C /tmp/ .
+ docker import - centos:7.6.1810
+ docker run -i -t --rm centos:7.6.1810 /bin/bash -c 'echo success'
+ rm -rf /tmp/
[staf@centos386 contrib]$


A new image is created with the name centos.

[staf@centos386 contrib]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centos              7.6.1810            7cdb02046bff        3 minutes ago       281 MB
[staf@centos386 contrib]$ 

You might want to rename the image to include your name or project name. You can do this by retagging the image and removing the old image name.

[staf@centos386 contrib]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centos              7.6.1810            7cdb02046bff        20 seconds ago      281 MB
[staf@centos386 contrib]$ docker rmi centos
Error response from daemon: No such image: centos:latest
[staf@centos386 contrib]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centos              7.6.1810            7cdb02046bff        3 minutes ago       281 MB
[staf@centos386 contrib]$ docker tag 7cdb02046bff stafwag/centos_386:7.6.1810 
[staf@centos386 contrib]$ docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
centos               7.6.1810            7cdb02046bff        7 minutes ago       281 MB
stafwag/centos_386   7.6.1810            7cdb02046bff        7 minutes ago       281 MB
[staf@centos386 contrib]$ docker rmi centos:7.6.1810
Untagged: centos:7.6.1810
[staf@centos386 contrib]$ docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
stafwag/centos_386   7.6.1810            7cdb02046bff        8 minutes ago       281 MB
[staf@centos386 contrib]$ 


[staf@centos386 contrib]$ docker run -it --rm stafwag/centos_386:7.6.1810 /bin/sh
sh-4.2# yum update -y
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base:
 * extras:
 * updates:
base                                                                                                                   | 3.6 kB  00:00:00     
extras                                                                                                                 | 2.9 kB  00:00:00     
updates                                                                                                                | 2.9 kB  00:00:00     
(1/4): updates/7/i386/primary_db                                                                                       | 2.5 MB  00:00:00     
(2/4): extras/7/i386/primary_db                                                                                        | 157 kB  00:00:01     
(3/4): base/7/i386/group_gz                                                                                            | 166 kB  00:00:01     
(4/4): base/7/i386/primary_db                                                                                          | 4.6 MB  00:00:02     
No packages marked for update

** Have fun! **

May 10, 2019

I published the following diary on “DSSuite – A Docker Container with Didier’s Tools“:

If you follow us and read our daily diaries, you probably already know some famous tools developed by Didier (like, and many more). Didier is using them all the time to analyze malicious documents. His tools are also used by many security analysts and researchers. The complete toolbox is available on his page. You can clone the repository or download the complete package available as a zip archive. However, it's not convenient to reinstall them every time you switch computers if, like me, you're always on the road between different customers… [Read more]

[The post [SANS ISC] DSSuite – A Docker Container with Didier’s Tools has been first published on /dev/random]

May 08, 2019

Acquia joins forces with Mautic

I'm happy to announce today that Acquia acquired Mautic, an open source marketing automation and campaign management platform.

A couple of decades ago, I was convinced that every organization required a website — a thought that sounds rather obvious now. Today, I am convinced that every organization will need a Digital Experience Platform (DXP).

Having a website is no longer enough: customers expect to interact with brands through their websites, email, chat and more. They also expect these interactions to be relevant and personalized.

If you don't know Mautic, think of it as an alternative to Adobe's Marketo or Salesforce's Marketing Cloud. Just like these solutions, Mautic provides marketing automation and campaign management capabilities. It's differentiated in that it is easier to use, supports one-to-one customer experiences across many channels, integrates more easily with other tools, and is less expensive.

The flowchart style visual campaign builder you saw in the beginning of the Mautic demo video above is one of my favorite features. I love how it allows marketers to combine content, user profiles, events and a decision engine to deliver the best-next action to customers.

Mautic is a relatively young company, but has quickly grown into the largest open source player in the marketing automation space, with more than 200,000 installations. Its ease of use, flexibility and feature completeness have won over many marketers in a very short time: the company's top line grew almost 400 percent year-over-year, its number of customers tripled, and Mautic won multiple awards for product innovation and customer service.

The acquisition of Mautic accelerates Acquia's product strategy to deliver the only Open Digital Experience Platform:

The pieces that make up a Digital Experience Platform, and how Mautic fits into Acquia's Open Digital Experience Platform. Acquia is strong in content management, personalization, user profile management and commerce (yellow blocks). Mautic adds or improves Acquia's multi-channel delivery, campaign management and journey orchestration capabilities (purple blocks).

There are many reasons why we like Mautic, but here are my top 3:

Reason 1: Disrupting the market with "open"

Open Source will disrupt every component of the modern technology stack. It's not a matter of if, it's when.

Just as Drupal disrupted web content management with Open Source, we believe Mautic disrupts marketing automation.

With Mautic, Acquia is now the only open and open source alternative to the expensive, closed, and stagnant marketing clouds.

I'm both proud and excited that Acquia is doubling down on Open Source. Given our extensive open source experience, we believe we can help grow Mautic even faster.

Reason 2: Innovating through integrations

To build an optimal customer experience, marketers need to integrate with different data sources, customer technologies, and bespoke in-house platforms. Instead of buying a suite from a single vendor, most marketers want an open platform that allows for open innovation and unlimited integrations.

Only an open architecture can connect any technology in the marketing stack, and only an open source innovation model can evolve fast enough to offer integrations with thousands of marketing technologies (to date, there are 7,000 vendors in the martech landscape).

Because developers are largely responsible for creating and customizing marketing platforms, marketing technology should meet the needs of both business users and technology architects. Unlike other companies in the space, Mautic is loved by both marketers and developers. With Mautic, Acquia continues to focus on both personas.

Reason 3: The same technology stack and business model

Like Drupal, Mautic is built in PHP and Symfony, and like Drupal, Mautic uses the GNU GPL license. Having the same technology stack has many benefits.

Digital agencies or in-house teams need to deliver integrated marketing solutions. Because both Drupal and Mautic use the same technology stack, a single team of developers can work on both.

The similarities also make it possible for both open source communities to collaborate — while it is not something you can force to happen, it will be interesting to see how that dynamic naturally plays out over time.

Last but not least, our business models are also very aligned. Both Acquia and Mautic were "born in the cloud" and make money by offering subscription- and cloud-based delivery options. This means you pay for only what you need and that you can focus on using the products rather than running and maintaining them.

Mautic offers several commercial solutions:

  • Mautic Cloud, a fully managed SaaS version of Mautic with premium features not available in Open Source.
  • For larger organizations, Mautic has a proprietary product called Maestro. Large organizations operate in many regions or territories, and have teams dedicated to each territory. With Maestro, each territory can get its own Mautic instance, but they can still share campaign best-practices, and repeat successful campaigns across territories. It's a unique capability, which is very aligned with the Acquia Cloud Site Factory.

Try Mautic

If you want to try Mautic, you can either install the community version yourself or check out the demo or sandbox environment of Mautic Open Marketing Cloud.


We're very excited to join forces with Mautic. It is such a strategic step for Acquia. Together we'll provide our customers with more freedom, faster innovation, and more flexibility. Open digital experiences are the way of the future.

I've got a lot more to share about the Mautic acquisition, how we plan to integrate Mautic in Acquia's solutions, how we could build bridges between the Drupal and Mautic community, how it impacts the marketplace, and more.

In time, I'll write more about these topics on this blog. In the meantime, you can listen to this podcast with DB Hurley, Mautic's founder and CTO, and me.

May 07, 2019

A couple of weeks ago our first-born daughter came into my life. All the clichés about what this miracle does to a man are very much true. Not only is this (quite literally) the start of a new life, it also gives you pause to reflect on your own life.

Around the same time I’ve finished working on the project that has occupied most of my time over the past years: helping a software-as-a-service company completely modernize and rearchitect their software stack, to help it grow further in the coming decade.

Going forward, I’ll be applying the valuable lessons learned while doing this, combined with all my previous experiences, as a consultant. More specifically I’ll be focusing on DevOps and related concerns. More information on that can be found on this page.

I also have a new business venture in the works, but that’s the subject of a future post.


May 05, 2019

In my previous post, we started with creating Debian based docker images from scratch for the i386 architecture.

In this blog post, we’ll create Arch GNU/Linux based images.

Arch GNU/Linux

Arch Linux stopped supporting i386 systems. If you want to run Arch Linux on an i386 system, there is the community-maintained Archlinux32 project and the free-software distribution Parabola GNU/Linux-libre.

For the ARM architecture, there is the Arch Linux ARM project, which I used.

I used the mkimage script from the Moby/Docker project in the past, but it failed when I tried it this time…

I created a small patch to fix it and opened a pull request. Until the issue is resolved, you can use the version in my cloned git repository.

Build the docker image

Install the required packages

Make sure that your system is up-to-date.

[staf@archlinux32 contrib]$ sudo pacman -Syu

Install the required packages.

[staf@archlinux32 contrib]$ sudo pacman -S arch-install-scripts expect wget


Create a directory that will hold the image data.

[staf@archlinux32 ~]$ mkdir -p dockerbuild/archlinux32
[staf@archlinux32 ~]$ cd dockerbuild/archlinux32
[staf@archlinux32 archlinux32]$ 


[staf@archlinux32 archlinux32]$ wget
--2019-05-05 07:46:32--
Loaded CA certificate '/etc/ssl/certs/ca-certificates.crt'
Resolving (
Connecting to (||:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3841 (3.8K) [text/plain]
Saving to: ''                 100%[====================================================>]   3.75K  --.-KB/s    in 0s      

2019-05-05 07:46:33 (34.5 MB/s) - '' saved [3841/3841]

[staf@archlinux32 archlinux32]$ 

Make it executable.

[staf@archlinux32 archlinux32]$ chmod +x
[staf@archlinux32 archlinux32]$ 

Set up your pacman.conf

Copy your pacman.conf to the directory that holds the script.

[staf@archlinux32 contrib]$ cp /etc/pacman.conf mkimage-arch-pacman.conf
[staf@archlinux32 contrib]$ 

Build your image

[staf@archlinux32 archlinux32]$ TMPDIR=`pwd` sudo ./
spawn pacstrap -C ./mkimage-arch-pacman.conf -c -d -G -i /var/tmp/rootfs-archlinux-wqxW0uxy8X base bash haveged pacman pacman-mirrorlist --ignore dhcpcd,diffutils,file,inetutils,iproute2,iputils,jfsutils,licenses,linux,linux-firmware,lvm2,man-db,man-pages,mdadm,nano,netctl,openresolv,pciutils,pcmciautils,psmisc,reiserfsprogs,s-nail,sysfsutils,systemd-sysvcompat,usbutils,vi,which,xfsprogs
==> Creating install root at /var/tmp/rootfs-archlinux-wqxW0uxy8X
==> Installing packages to /var/tmp/rootfs-archlinux-wqxW0uxy8X
:: Synchronizing package databases...
 core                                              198.0 KiB   676K/s 00:00 [##########################################] 100%
 extra                                               2.4 MiB  1525K/s 00:02 [##########################################] 100%
 community                                           6.3 MiB   396K/s 00:16 [##########################################] 100%
:: dhcpcd is in IgnorePkg/IgnoreGroup. Install anyway? [Y/n] n
:: diffutils is in IgnorePkg/IgnoreGroup. Install anyway? [Y/n] n
:: file is in IgnorePkg/IgnoreGroup. Install anyway? [Y/n] n
==> WARNING: /var/tmp/rootfs-archlinux-wqxW0uxy8X is not a mountpoint. This may have undesirable side effects.
Generating locales...
  en_US.UTF-8... done
Generation complete.
tar: ./etc/pacman.d/gnupg/S.gpg-agent.ssh: socket ignored
tar: ./etc/pacman.d/gnupg/S.gpg-agent.extra: socket ignored
tar: ./etc/pacman.d/gnupg/S.gpg-agent: socket ignored
tar: ./etc/pacman.d/gnupg/S.gpg-agent.browser: socket ignored
[staf@archlinux32 archlinux32]$ 


A new image is created with the name archlinux.

[staf@archlinux32 archlinux32]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
archlinux           latest              18e74c4d823c        About a minute ago   472MB
[staf@archlinux32 archlinux32]$ 

You might want to rename it. You can do this by retagging the image and removing the old image name.

[staf@archlinux32 archlinux32]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
archlinux           latest              18e74c4d823c        About a minute ago   472MB
[staf@archlinux32 archlinux32]$ docker tag stafwag/archlinux:386 18e74c4d823c
Error response from daemon: No such image: stafwag/archlinux:386
[staf@archlinux32 archlinux32]$ docker tag 18e74c4d823c stafwag/archlinux:386             
[staf@archlinux32 archlinux32]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
archlinux           latest              18e74c4d823c        3 minutes ago       472MB
stafwag/archlinux   386                 18e74c4d823c        3 minutes ago       472MB
[staf@archlinux32 archlinux32]$ docker rmi archlinux
Untagged: archlinux:latest
[staf@archlinux32 archlinux32]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
stafwag/archlinux   386                 18e74c4d823c        3 minutes ago       472MB
[staf@archlinux32 archlinux32]$ 


[staf@archlinux32 archlinux32]$ docker run --rm -it stafwag/archlinux:386 /bin/sh
sh-5.0# pacman -Syu
:: Synchronizing package databases...
 core is up to date
 extra is up to date
 community is up to date
:: Starting full system upgrade...
 there is nothing to do

** Have fun! **

May 04, 2019

The post Enable the RPC JSON API with password authentication in Bitcoin Core appeared first on

The bitcoin daemon has a very useful & easy-to-use HTTP API built-in, that allows you to talk to it like a simple webserver and get JSON responses back.

By default, it's enabled, but it only listens on localhost port 8332, and it's unauthenticated.

$ netstat -alpn | grep 8332
tcp     0   0*          LISTEN      31667/bitcoind
tcp6    0   0 ::1:8332           :::*         LISTEN      31667/bitcoind

While useful if you're on the same machine (you can query it locally without username/password), it won't help much if you're querying a remote node.

In order to allow bitcoind to bind on a public-facing IP and have username/password authentication, you can modify the bitcoin.conf.

$ cat .bitcoin/bitcoin.conf
# Expose the RPC/JSON API

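The actual option lines were stripped in syndication above. As a sketch of what goes into bitcoin.conf (the option names are real bitcoind settings, but every value below is a placeholder — pick your own credentials, and narrow the addresses down for anything serious):

```ini
# Expose the RPC/JSON API
server=1
# Bind on all interfaces; a specific public-facing IP is safer
# Source addresses allowed to connect; everything here, for illustration only
# Plaintext credentials (placeholders); prefer the hashed rpcauth= form
rpcpassword=change-me-please
```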
If you restart your daemon with this config, it would try to bind to the IP you configured and open the RPC JSON API endpoint on its default port, 8332. To authenticate, you'd give the user & password as shown in the config.

If you do not pass the rpcallowip parameter, the server won't bind on the requested IP, as confirmed in the manpage:

Bind to given address to listen for JSON-RPC connections. Do not expose
the RPC server to untrusted networks such as the public internet!
This option is ignored unless -rpcallowip is also passed. Port is
optional and overrides -rpcport. Use [host]:port notation for
IPv6. This option can be specified multiple times (default: and ::1 i.e., localhost)

Keep in mind that it's a lot safer to actually pass the allowed IPs and treat it as a whitelist, rather than as a workaround to listen on all IPs like I did above.

Here's an example of a curl call to query the daemon.

$ curl \
  --user bitcoin \
  --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "getnetworkinfo", "params": [] }' \ 
  -H 'content-type: text/plain;' \
Enter host password for user 'bitcoin':


You can now safely query your bitcoin daemon with authentication.


May 02, 2019

The post How to upgrade to the latest Bitcoin Core version appeared first on

This guide assumes you've followed my other guide, where you compile Bitcoin Core from source. These steps will allow you to upgrade a running Bitcoin Core node to the latest version.

Stop current Bitcoin Core node

First, stop the running version of the Bitcoin Core daemon.

$ su - bitcoin
$ bitcoin-cli stop
Bitcoin server stopping

Check to make sure your node is stopped.

$ bitcoin-cli status
error: Could not connect to the server

Once that's done, download & compile the new version.

Install the latest version of Bitcoin Core

To do so, you can follow all other steps to self-compile Bitcoin Core.

In those steps, there's a part where you do a git checkout to a specific version. Change that to refer to the latest release. In this example, we'll upgrade to 0.18.0.

$ git clone
$ cd bitcoin
$ git checkout v0.18.0

And compile Bitcoin Core using the same steps as before.

$ ./
$ ./configure
$ make -j $(nproc)

Once done, you have the latest version of Bitcoin Core.
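As an aside, the `-j $(nproc)` flag tells make to run one compile job per available CPU core; `nproc` is a small coreutils tool that simply prints that count:

```shell
# nproc prints the number of processing units available to the process
nproc

# the same value via command substitution, as used in the make invocation above
echo "building with $(nproc) parallel jobs"
```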

Start the Bitcoin Core daemon

Start it again as normal:

$ bitcoind --version
Bitcoin Core Daemon version v0.18.0
$ bitcoind -daemon
Bitcoin server starting

Check the logs to make sure everything is OK.

$ tail -f ~/.bitcoin/debug.log
Bitcoin Core version v0.18.0 (release build)
Assuming ancestors of block 0000000000000000000f1c54590ee18d15ec70e68c8cd4cfbadb1b4f11697eee have valid signatures.
Setting nMinimumChainWork=0000000000000000000000000000000000000000051dc8b82f450202ecb3d471
Using the 'sse4(1way),sse41(4way),avx2(8way)' SHA256 implementation
Using RdSeed as additional entropy source
Using RdRand as an additional entropy source
Default data directory /home/bitcoin/.bitcoin

And you're now running the latest version.


I published the following diary on “Another Day, Another Suspicious UDF File“:

In my last diary, I explained that I found a malicious UDF image used to deliver a piece of malware. After this, I created a YARA rule on VT to try to spot more UDF files in the wild. It seems like the tool ImgBurn is the attacker's best friend to generate such malicious images. To find more UDF images, I used the following very simple YARA rule… [Read more]

[The post [SANS] Another Day, Another Suspicious UDF File has been first published on /dev/random]

May 01, 2019

The post I forgot how to manage a server appeared first on

Something embarrassing happened to me the other day. I was playing around with a new server on Digital Ocean and it occurred to me: I had no idea how to manage it.

This is slightly awkward because I've been a sysadmin for over 10 years, the largest part of my professional career.

The spoils of configuration management

The thing is, I've been writing and using config management for the last 6-ish years. I've shared many blog posts regarding Puppet, some of its design patterns, and even pretended to hold all the knowledge by sharing my lessons learned after 3 years of using Puppet.

And now I've come to the point where I no longer know how to install, configure or run software without Puppet.

My config management does this for me. Whether it's Puppet, Ansible, Chef, ... all of the boring parts of being a sysadmin have been hidden behind management tools. Yet here I am, trying to quickly configure a personal server, without my company-managed config management to aid me.

Boy did I feel useless.

I had to Google the correct SSH config syntax to allow root logins, but only via public keys. I had to Google the iptables rule syntax and how to manage the rules with ufw. I forgot where to place the supervisor configs for running jobs, let alone how to write them.
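For the record, the SSH part boils down to two sshd_config directives (a sketch of what I was after — root logins allowed, but never with a password; `prohibit-password` was called `without-password` in older OpenSSH releases):

```
# /etc/ssh/sshd_config (fragment)
# Root may log in, but only with public keys
PermitRootLogin prohibit-password
# Optionally, refuse password authentication for everyone
PasswordAuthentication no
```

Reload sshd afterwards for the change to take effect.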

I know these configs in our tools. In our abstractions. In our automation. But I forgot what it's like in Linux itself.

A pitfall to remember

I previously blogged about 2 pitfalls I had already known about: trying to automate a service you don't fully understand or blindly trusting the automation of someone else, not understanding what it's doing under the hood.

I should add a third one to my pitfall list: I'm forgetting the basic and core tools used to manage a Linux server.

Is this a bad thing though? I'm not sure yet. Maybe the lower level knowledge isn't that valuable, as long as we have our automation on standby to take care of it? It frees us from having to think about too many things and allows us to focus on the more important aspects of being a sysadmin.

But it sure felt strange Googling things I had to Google almost a decade ago.


April 30, 2019

The Drupal Association announced today that Heather Rocker has been selected as its next Executive Director.

This is exciting news because it concludes a seven-month search since Megan Sanicki left.

We looked long and hard for someone who could help us grow the global Drupal community by building on its diversity, working with developers and agency partners, and expanding our work with new audiences such as content creators and marketers.

The Drupal Association (including me) believes that Heather can do all of that, and is the best person to help lead Drupal into its next phase of growth.

Heather earned her engineering degree from Georgia Tech. She has dedicated much of her career to working with women in technology, both as the CEO of Girls, Inc. of Greater Atlanta and the Executive Director of Women in Technology.

We were impressed not only with her valuable experience with volunteer organizations, but also her work in the private sector with large customers. Most recently, Heather was part of the management team at Systems Evolution, a team of 250 business consultants, where she specialized in sales operations and managed key client relationships.

She is also a robotics fanatic who organizes and judges competitions for children. So, maybe we’ll see some robots roaming around DrupalCon in the future!

As you can tell, Heather will bring a lot of great experience to the Drupal community and I look forward to partnering with her.

Last but not least, I want to thank Tim Lehnen for serving as our Interim Executive Director. He did a fantastic job leading the Drupal Association through this transition.

How simply having a Facebook account made me unreachable for 5 years for dozens of readers of my blog

Not being on Facebook, or leaving it, is often a matter of debate: "But how will people contact you? How will you stay in touch?" Everything seems to come down to an agonizing choice: protect your privacy, or be reachable by ordinary mortals.

I have just realized that this is a false debate. Just as Facebook offers the illusion of popularity and audience through likes, online availability is illusory. Worse! I discovered that being on Facebook had made me less reachable for a whole category of readers of my blog!

There is one very simple reason that makes me keep a Facebook Messenger account: it is where the few Belgian freedivers organize their outings. If I'm not there, I miss the outings; it's as simple as that. So I installed the Messenger Lite app, for the sole purpose of being able to go diving. While digging through the app's options, I discovered a well-hidden subsection called "Filtered requests".

There, I found dozens of messages that had been sent to me since 2013. More than 5 years of messages whose existence I was unaware of! Mostly reactions, not always positive, to my blog posts. Others were more factual. They came from conference organizers, and from people I had met who wanted to stay in touch. All of these messages, without exception, would have had their place in my mailbox.

I didn't know they existed. At no point did Facebook let me know these messages existed or give me a chance to reply, even though some of them date from a time when I was very active on that network, when the app was installed on my phone.

To all these people, Facebook gave the illusion that they had contacted me. That I was reachable. To everyone who ever posted a comment under one of my automatically published posts, Facebook gave the impression of being in contact with me.

To me, a public figure, Facebook gave the illusion that I could be reached, that those who didn't use email could contact me.

We have all been deceived.

It is time to lift the veil. Facebook does not offer a service; it offers the illusion of a service. An illusion which may be exactly what many people are looking for. The illusion of having friends, of having a social life, recognition, a certain success. But if you are not looking for an illusion, then it is time to flee Facebook. It is not easy, because the illusion is strong. Just as worshippers of the Bible claim it is the ultimate truth "because it is written in the Bible", Facebook users feel heard, feel listened to, because "Facebook tells me I have been seen".

That is, in fact, Facebook's one and only goal. To make us believe that we are connected, whether it is true or not.

For every piece of content posted, the Facebook algorithm will try to find the few users most likely to comment. And it will encourage the others to click like without even reading what is going on, just because the photo is pretty or because one user spontaneously likes another's posts. In the end, a conversation between 5 individuals punctuated by 30 likes will give the impression of nationwide resonance. Celebrities excepted: they will harvest tens of thousands of likes and messages simply because they are celebrities, whatever the platform.

Facebook gives us a small impression of celebrity and vainglory through a few likes; Facebook gives us the impression of belonging to a tribe, of having social relationships.

It is undeniable that Facebook also has positive effects, and enables exchanges that would not exist without it. But, to paraphrase Cal Newport in his book Digital Minimalism: isn't the price we pay too high for the benefits we get out of it?

I would add: do we really get benefits? Or just the illusion of them?

Photo by Aranka Sinnema on Unsplash

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE license.

April 28, 2019

Puppet CA/puppetmasterd cert renewal

While we're still converting our puppet-controlled infra to Ansible, we still have some nodes "controlled" by puppet, as converting some roles isn't something that can be done in just one or two days. Add to that other items in your backlog that all have priority #1, and time flies, until you realize this for your existing legacy puppet environment (assuming a false FQDN here, but you'll get the idea):

Warning: Certificate 'Puppet CA:' will expire on 2019-05-06T12:12:56UTC
Warning: Certificate '' will expire on 2019-05-06T12:12:56UTC

So, as long as your PKI setup for puppet is still valid, you can act in advance: re-sign/extend the CA and puppetmasterd certificates, distribute the newer CA cert to the agents, and go forward with other items in your backlog while still converting from puppet to Ansible (at least for us).


Before anything else (in case you don't back this up, but you should), let's take a backup on the Puppet CA (in our case, it's a Foreman-driven puppetmasterd, so the foreman host is where all this will happen, YMMV):

tar cvzf /root/puppet-ssl-backup.tar.gz /var/lib/puppet/ssl/

CA itself

We first need to regenerate the CSR for the CA cert and sign it again. Ideally, we first confirm that ca_key.pem and the existing ca_crt.pem "match" by comparing their moduli (they should be equal):

cd /var/lib/puppet/ssl/ca
( openssl rsa -noout -modulus -in ca_key.pem  2> /dev/null | openssl md5 ; openssl x509 -noout -modulus -in ca_crt.pem  2> /dev/null | openssl md5 ) 

(stdin)= cbc4d35f58b28ad7c4dca17bd4408403
(stdin)= cbc4d35f58b28ad7c4dca17bd4408403

As that's the case, we can now regenerate a CSR from that private key and the existing crt:

openssl x509 -x509toreq -in ca_crt.pem -signkey ca_key.pem -out ca_csr.pem
Getting request Private Key
Generating certificate request

Now that we have the CSR for CA, we need to sign it again, but we have to add extensions

cat > extension.cnf << EOF
[CA_extensions]
basicConstraints = critical,CA:TRUE
nsComment = "Puppet Ruby/OpenSSL Internal Certificate"
keyUsage = critical,keyCertSign,cRLSign
subjectKeyIdentifier = hash
EOF

And now archive old CA crt and sign (new) extended one

cp ca_crt.pem ca_crt.pem.old
openssl x509 -req -days 3650 -in ca_csr.pem -signkey ca_key.pem -out ca_crt.pem -extfile extension.cnf -extensions CA_extensions
Signature ok
subject=/CN=Puppet CA:
Getting Private key

openssl x509 -in ca_crt.pem -noout -text|grep -A 3 Validity
            Not Before: Apr 29 08:25:49 2019 GMT
            Not After : Apr 26 08:25:49 2029 GMT

Puppetmasterd server

We also have to regenerate the CSR from the existing cert (assuming the fqdn in our cert is also the currently set hostname):

cd /var/lib/puppet/ssl
openssl x509 -x509toreq -in certs/$(hostname).pem -signkey private_keys/$(hostname).pem -out certificate_requests/$(hostname)_csr.pem
Getting request Private Key
Generating certificate request

Now that we have CSR, we can sign with new CA

cp certs/$(hostname).pem certs/$(hostname).pem.old #Backing up
openssl x509 -req -days 3650 -in certificate_requests/$(hostname)_csr.pem -CA ca/ca_crt.pem \
  -CAkey ca/ca_key.pem -CAserial ca/serial -out certs/$(hostname).pem
Signature ok  

Validating that the puppetmasterd key and the new cert match (so the crt and private key belong together):

( openssl rsa -noout -modulus -in private_keys/$(hostname).pem  2> /dev/null | openssl md5 ; openssl x509 -noout -modulus -in certs/$(hostname).pem 2> /dev/null | openssl md5 )

(stdin)= 0ab385eb2c6e9e65a4ed929a2dd0dbe5
(stdin)= 0ab385eb2c6e9e65a4ed929a2dd0dbe5

It seems all good, so let's restart puppetmasterd/httpd (foreman launches puppetmasterd for us):

systemctl restart puppet

Puppet agents

From this point on, puppet agents will no longer complain about the puppetmasterd cert, but they will still complain that the CA itself is about to expire:

Warning: Certificate 'Puppet CA:' will expire on 2019-05-06T12:12:56GMT

But as we now have the new ca_crt.pem on the puppetmasterd/foreman side, we can just distribute it to the clients (through puppet, ansible or whatever) and everything will continue to work.

cd /var/lib/puppet/ssl/certs
mv ca.pem ca.pem.old

And now distribute the new ca_crt.pem as ca.pem here

puppet snippet for this (in our puppet::agent class)

 file { '/var/lib/puppet/ssl/certs/ca.pem':
   source  => 'puppet:///puppet/ca_crt.pem',
   owner   => 'puppet',
   group   => 'puppet',
   require => Package['puppet'],
 }
The next time you run "puppet agent -t" (or puppet contacts puppetmasterd on its own), it will apply the new CA cert, and from the following run on there are no more warnings or issues:

Info: Computing checksum on file /var/lib/puppet/ssl/certs/ca.pem
Info: /Stage[main]/Puppet::Agent/File[/var/lib/puppet/ssl/certs/ca.pem]: Filebucketed /var/lib/puppet/ssl/certs/ca.pem to puppet with sum c63b1cc5a39489f5da7d272f00ec09fa
Notice: /Stage[main]/Puppet::Agent/File[/var/lib/puppet/ssl/certs/ca.pem]/content: content changed '{md5}c63b1cc5a39489f5da7d272f00ec09fa' to '{md5}e3d2e55edbe1ad45570eef3c9ade051f'

Hope it helps

I've loved this song since I was 15 years old, so after 25 years it definitely deserves a place in my favorite music list. When I watched this recording, I stopped breathing for a while. Beautifully devastating. Don't mix this song with alcohol.

April 25, 2019

After reading several of his books, I had the opportunity to meet the engineer-philosopher Luc de Brabandere. Over an orange juice, between a conversation about cycling and the blockchain, we discussed his presence on the Écolo lists for the upcoming European elections and the pressure from those close to him to get on social networks.

"People ask me how I hope to win votes if I'm not on social networks," he confided to me.

A problem I know well, having myself created a Facebook account in 2012 for the sole purpose of running in the elections for the Pirate Party. If, a few years ago, not being on social networks was perceived as being out of step with the times, are things different today? Haven't Facebook accounts become the equivalent of posters plastered everywhere? In politics, there is a saying that posters don't win votes, but that not having posters can lose them.

Perhaps for a professional politician whose sole aim is to get elected at any cost, social networks are as indispensable as shaking hands in cafés and on markets. But Luc clearly does not fall into that category. His position as 4th substitute on the European list makes his election mathematically improbable.

He makes no secret of it, stating that he is a candidate above all to reconcile ecology and economics. But wouldn't social networks be precisely a way to spread his ideas?

I think it is the opposite.

Luc's ideas are amply accessible through his books, his writings, his lectures. Social networks grind down rigor and subtlety of analysis, turning them into an ideological substitute, ready-to-share. On social networks, the thinker becomes a salesman and an advertiser. The numbers go to your head and force you into a saraband of likes, into an illusory impression of false popularity, skillfully orchestrated.

Whether we like it or not, the tool transforms us. Social networks are designed to prevent us from thinking, from reflecting. They trade our soul and our critical mind for a few numbers showing growth in followers, clicks or likes. They keep calling us back, invade us, and mold us to a consumerist ideal. They serve as outlets for our anger and our ideas, the better to let them fall into oblivion once the flash in the pan of the buzz has burned out.

In short, social networks are the politician's dream tool. And the philosopher's sworn enemy.

An election campaign will turn anyone into a true politician, thirsty for power and popularity. And what are social networks if not a permanent campaign for each of us?

Not being on social networks is, for the voter that I am, a strong electoral statement. Taking the time to set down one's ideas in books, a blog, videos or any medium outside social networks should be the foundation of the craft of thinking about public affairs. I often cite the example of the Liège city councillor François Schreuer who, through his blog and his website, transcends the politico-political debate to try to grasp the very essence of the issues, bringing genuine civic transparency to the abstruse debates of a municipal council.

Then there remains the question of whether Luc's presence on the Écolo lists is not just a vote-catcher, whether his ideas will have any influence once the elections are over. There is probably some of that. Before voting for him, you have to ask yourself whether you want to vote for Philippe Lamberts, the head of the list.

I have many disagreements with Écolo at the local or regional level, but I note that it is the party that best represents me in Europe on the subjects dear to me: privacy, copyright, control of the Internet, ecological transition, basic income, questioning the place of work.

If I could vote for Julia Reda, of the German Pirate Party, the question would not arise, because I admire her work. But the European electoral system being what it is, I believe my vote will go to Luc de Brabandere.

Among other reasons, because he is not on social networks.

Photo by Elijah O’Donnell on Unsplash

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE license.

April 22, 2019

The post Retrieving the Genesis block in Bitcoin with bitcoin-cli appeared first on

If you run a Bitcoin full node, you have access to every transaction and block that was ever created on the network. This also allows you to look at the content of, say, the genesis block: the first block ever created, over 10 years ago.

Retrieving the genesis block

First, you can ask for the block hash by providing it the block height. As with everything in computer science, arrays and block counts start at 0.

You use command getblockhash to find the correct hash.

$ bitcoin-cli getblockhash 0
000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f

Now you have the block hash that matches with the first ever block.

You can now request the full content of that block using the getblock command.

$ bitcoin-cli getblock 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f
{
  "hash": "000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f",
  "confirmations": 572755,
  "strippedsize": 285,
  "size": 285,
  "weight": 1140,
  "height": 0,
  "version": 1,
  "versionHex": "00000001",
  "merkleroot": "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b",
  "tx": [
    "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b"
  ],
  "time": 1231006505,
  "mediantime": 1231006505,
  "nonce": 2083236893,
  "bits": "1d00ffff",
  "difficulty": 1,
  "chainwork": "0000000000000000000000000000000000000000000000000000000100010001",
  "nTx": 1,
  "nextblockhash": "00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048"
}

This is the only block that doesn't have a previousblockhash; all other blocks have one, as they form the chain itself. The first block simply can't have a previous one.
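As a sanity check, the block hash itself can be recomputed from the header fields in the output above: it is the double SHA-256 of the 80-byte serialized header (integers little-endian, hashes byte-reversed). A short Python sketch:

```python
import hashlib
import struct

def dsha256(b):
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# Header fields, copied from the getblock output above.
version   = 1
prev_hash = "00" * 32  # the genesis block has no previous block
merkle    = "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b"
timestamp = 1231006505
bits      = 0x1d00ffff
nonce     = 2083236893

# Serialize the 80-byte header: integers little-endian, hashes
# byte-reversed relative to their displayed (big-endian) hex form.
header = (struct.pack("<I", version)
          + bytes.fromhex(prev_hash)[::-1]
          + bytes.fromhex(merkle)[::-1]
          + struct.pack("<III", timestamp, bits, nonce))

block_hash = dsha256(header)[::-1].hex()
print(block_hash)
# → 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f
```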

Retrieving the first and only transaction from the genesis block

In this block, there is only one transaction included: the one with the hash 4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b. This is a coinbase transaction: the block reward for the miner who found this block (50 BTC).

  "tx": [

Let's have a look at what's in there, shall we?

$ bitcoin-cli getrawtransaction 4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b
The genesis block coinbase is not considered an ordinary transaction and cannot be retrieved

Ah, sucks! This is a special kind of transaction, but we'll see a way to find the details of it later on.

Getting more details from the genesis block

We retrieved the block details using the getblock command, but there's actually more detail in that block than initially shown. You can get more verbose output by adding a 2 at the end of the command, indicating you want a JSON object with transaction data.

$ bitcoin-cli getblock 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f 2
{
  "hash": "000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f",
  "confirmations": 572758,
  "strippedsize": 285,
  "size": 285,
  "weight": 1140,
  "height": 0,
  "version": 1,
  "versionHex": "00000001",
  "merkleroot": "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b",
  "tx": [
    {
      "txid": "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b",
      "hash": "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b",
      "version": 1,
      "size": 204,
      "vsize": 204,
      "weight": 816,
      "locktime": 0,
      "vin": [
        {
          "coinbase": "04ffff001d0104455468652054696d65732030332f4a616e2f32303039204368616e63656c6c6f72206f6e206272696e6b206f66207365636f6e64206261696c6f757420666f722062616e6b73",
          "sequence": 4294967295
        }
      ],
      "vout": [
        {
          "value": 50.00000000,
          "n": 0,
          "scriptPubKey": {
            "asm": "04678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5f OP_CHECKSIG",
            "hex": "4104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac",
            "reqSigs": 1,
            "type": "pubkey",
            "addresses": [
              "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"
            ]
          }
        }
      ],
      "hex": "01000000010000000000000000000000000000000000000000000000000000000000000000ffffffff4d04ffff001d0104455468652054696d65732030332f4a616e2f32303039204368616e63656c6c6f72206f6e206272696e6b206f66207365636f6e64206261696c6f757420666f722062616e6b73ffffffff0100f2052a01000000434104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac00000000"
    }
  ],
  "time": 1231006505,
  "mediantime": 1231006505,
  "nonce": 2083236893,
  "bits": "1d00ffff",
  "difficulty": 1,
  "chainwork": "0000000000000000000000000000000000000000000000000000000100010001",
  "nTx": 1,
  "nextblockhash": "00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048"
}

Aha, that's more info!

Now, you'll notice there is a section with details of the coinbase transaction. It shows the 50BTC block reward, and even though we can't retrieve it with getrawtransaction, the data is still present in the genesis block.

      "vout": [
          "value": 50.00000000,
          "n": 0,
          "scriptPubKey": {
            "asm": "04678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5f OP_CHECKSIG",
            "hex": "4104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac",
            "reqSigs": 1,
            "type": "pubkey",
            "addresses": [

Satoshi's Embedded Secret Message

I've always heard that Satoshi encoded a secret message in the genesis block. Let's find it?

In our extensive output, there's a hex line in the block.

"hex": "01000000010000000000000000000000000000000000000000000000000000000000000000ffffffff4d04ffff001d0104455468652054696d65732030332f4a616e2f32303039204368616e63656c6c6f72206f6e206272696e6b206f66207365636f6e64206261696c6f757420666f722062616e6b73ffffffff0100f2052a01000000434104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac00000000"

If we transform this hexadecimal format to a more readable ASCII form, we get this:

$ echo "01000000010000000000000000000000000000000000000000000000000000000000000000ffffffff
5504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac00000000" | xxd -r -p

����M��EThe Times 03/Jan/2009 Chancellor on brink of second bailout for banks�����*CAg���UH'g�q0�\֨(�9	�yb��a޶I�?L�8��U���\8M�

This confirms there is indeed a message in the form of "The Times 03/Jan/2009 Chancellor on brink of second bailout for banks", referring to a newspaper headline at the time of the genesis block.
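The same transformation can be done in a few lines of Python, replacing non-printable bytes with dots so the headline stands out (the hex string is the transaction hex shown above):

```python
import binascii

# Raw genesis coinbase transaction hex, from the getblock output above.
raw = "01000000010000000000000000000000000000000000000000000000000000000000000000ffffffff4d04ffff001d0104455468652054696d65732030332f4a616e2f32303039204368616e63656c6c6f72206f6e206272696e6b206f66207365636f6e64206261696c6f757420666f722062616e6b73ffffffff0100f2052a01000000434104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac00000000"

data = binascii.unhexlify(raw)
# Keep printable ASCII only, so the embedded headline stands out.
text = "".join(chr(b) if 32 <= b < 127 else "." for b in data)
print(text)
```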


The post Requesting certificates with Let’s Encrypt’s official certbot client appeared first on

There's plenty of guides on this already, but I recently used Let's Encrypt certbot client again manually (instead of through already automated systems) and figured I'd write up the commands for myself. Just in case.

$ git clone https://github.com/certbot/certbot /opt/letsencrypt
$ cd /opt/letsencrypt

Now that the client is available on the system, you can request new certificates. If the DNS is already pointing to this server, it's super easy with the webroot validation.

$ /opt/letsencrypt/letsencrypt-auto certonly --expand \
  --email you@domain.tld --agree-tos \
  --webroot -w /var/www/vhosts/yoursite.tld/htdocs/public/ \
  -d yoursite.tld \
  -d www.yoursite.tld

You can add multiple domains with the -d flag and point it to the right document root using the -w flag.

After that, you'll find your certificates in:

$ ls -alh /etc/letsencrypt/live/yoursite.tld/*
/etc/letsencrypt/live/yoursite.tld/cert.pem -> ../../archive/yoursite.tld/cert1.pem
/etc/letsencrypt/live/yoursite.tld/chain.pem -> ../../archive/yoursite.tld/chain1.pem
/etc/letsencrypt/live/yoursite.tld/fullchain.pem -> ../../archive/yoursite.tld/fullchain1.pem
/etc/letsencrypt/live/yoursite.tld/privkey.pem -> ../../archive/yoursite.tld/privkey1.pem

You can now use these certs in whichever webserver or application you like.
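For illustration, a webserver would reference them like this; a minimal nginx sketch using the yoursite.tld paths from above:

```nginx
server {
    listen 443 ssl;
    server_name yoursite.tld www.yoursite.tld;

    # The symlinks under live/ always point at the newest issued cert.
    ssl_certificate     /etc/letsencrypt/live/yoursite.tld/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yoursite.tld/privkey.pem;
}
```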


Autoptimize 2.5 has been released earlier today (April 22nd).

Main focus of this release is more love for image optimization, now on a separate tab and including lazyload and WebP support.

Lots of other bugfixes and smaller improvements too, of course, e.g. an option to disable the minification of excluded CSS/JS (which 2.4 did by default).

No Easter eggs in there though :-)

I was using docker on an Odroid U3, but my Odroid stopped working. I switched to another system that is i386 only.

You’ll find my journey to build docker images for i386 below.

Reasons to build your own docker images

If you want to use docker you can start with docker images on the docker registry. There are several reasons to build your own base images.

  • Security

The first reason is security: docker images are not signed by default.

Anyone can upload docker images to the public docker hub with bugs or malicious code.

There are “official” docker images available on the Docker hub. When you execute a docker search, the official docker images are flagged in the OFFICIAL column and are also signed by Docker. To only allow signed docker images you need to set the DOCKER_CONTENT_TRUST=1 environment variable (export DOCKER_CONTENT_TRUST=1). - This should be the default IMHO -

There is one distinction: the “official” docker images are signed by the “Repo admin” of the Docker hub, not by the official GNU/Linux distribution project. If you want to trust the official project instead of the Docker repo admin, you can resolve this by building your own images.

  • Support other architectures

Docker images are generally built for AMD64 architecture. If you want to use other architectures - ARM, Power, SPARC or even i386 - you’ll find some images on the Docker hub but these are usually not Official docker images.

  • Control

When you build your own images, you have more control over what does or does not go into the image.

Building your own docker base images

There are several ways to build your own docker images.

The Moby project is Docker’s development project - a bit like what Fedora is to RedHat. The Moby project has a few scripts that help you create docker base images, and it is also a good start if you want to review how to build your own images.

GNU/Linux distributions

I build the images on the same GNU/Linux distribution (e.g. the Debian images are built on a Debian system) to get the correct gpg keys.

Debian GNU/Linux & Co

Debian GNU/Linux makes it very easy to build your own Docker base images; only debootstrap is required. I’ll use the moby script to build the Debian base image and debootstrap to build an i386 docker Ubuntu 18.04 image.

Ubuntu doesn’t support i386 officially, but it includes the i386 userland, so it’s possible to build i386 Docker images.

Clone moby

staf@whale:~/github$ git clone https://github.com/moby/moby
Cloning into 'moby'...
remote: Enumerating objects: 265639, done.
remote: Total 265639 (delta 0), reused 0 (delta 0), pack-reused 265640
Receiving objects: 100% (265640/265640), 137.75 MiB | 3.05 MiB/s, done.
Resolving deltas: 100% (179885/179885), done.
Checking out files: 100% (5508/5508), done.

Make sure that debootstrap is installed

staf@whale:~/github/moby/contrib$ sudo apt install debootstrap
[sudo] password for staf: 
Reading package lists... Done
Building dependency tree       
Reading state information... Done
debootstrap is already the newest version (1.0.114).
debootstrap set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

The Moby way

Go to the contrib directory

staf@whale:~/github$ cd moby/contrib/
mkimage.sh --help gives you more details on how to use the script.

staf@whale:~/github/moby/contrib$ ./mkimage.sh --help
usage: mkimage.sh [-d dir] [-t tag] [--compression algo| --no-compression] script [script-args]
   ie: mkimage.sh -t someuser/debian debootstrap --variant=minbase jessie
       mkimage.sh -t someuser/ubuntu debootstrap --include=ubuntu-minimal --components=main,universe trusty
       mkimage.sh -t someuser/busybox busybox-static
       mkimage.sh -t someuser/centos:5 rinse --distribution centos-5
       mkimage.sh -t someuser/mageia:4 mageia-urpmi --version=4
       mkimage.sh -t someuser/mageia:4 mageia-urpmi --version=4 --mirror=http://somemirror/

build the image

staf@whale:~/github/moby/contrib$ sudo ./mkimage.sh -t stafwag/debian_i386:stretch debootstrap --variant=minbase stretch
[sudo] password for staf: 
+ mkdir -p /var/tmp/docker-mkimage.dY9y9apEoK/rootfs
+ debootstrap --variant=minbase stretch /var/tmp/docker-mkimage.dY9y9apEoK/rootfs
I: Target architecture can be executed
I: Retrieving InRelease 
I: Retrieving Release 
I: Retrieving Release.gpg 
I: Checking Release signature
I: Valid Release signature (key id 067E3C456BAE240ACEE88F6FEF0F382A1A7B6500)
I: Retrieving Packages 


Verify that the image is imported.

staf@whale:~/github/moby/contrib$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED              SIZE
stafwag/debian_i386   stretch             cb96d1663079        About a minute ago   97.6MB

Run a test docker instance

staf@whale:~/github/moby/contrib$ docker run -t -i --rm stafwag/debian_i386:stretch /bin/sh
# cat /etc/debian_version 

The debootstrap way

Make sure that debootstrap is installed

staf@ubuntu184:~/github/moby$ sudo apt install debootstrap
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Suggested packages:
The following NEW packages will be installed:
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 35,7 kB of archives.
After this operation, 270 kB of additional disk space will be used.
Get:1 bionic-updates/main amd64 debootstrap all 1.0.95ubuntu0.3 [35,7 kB]
Fetched 35,7 kB in 0s (85,9 kB/s)    
Selecting previously unselected package debootstrap.
(Reading database ... 163561 files and directories currently installed.)
Preparing to unpack .../debootstrap_1.0.95ubuntu0.3_all.deb ...
Unpacking debootstrap (1.0.95ubuntu0.3) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Setting up debootstrap (1.0.95ubuntu0.3) ...


Create a directory that will hold the chrooted operating system.

staf@ubuntu184:~$ mkdir -p dockerbuild/ubuntu


staf@ubuntu184:~/dockerbuild/ubuntu$ sudo debootstrap --verbose --include=iputils-ping --arch i386 bionic ./chroot-bionic
I: Retrieving InRelease 
I: Checking Release signature
I: Valid Release signature (key id 790BC7277767219C42C86F933B4FE6ACC0B21F32)
I: Validating Packages 
I: Resolving dependencies of required packages...
I: Resolving dependencies of base packages...
I: Checking component main on
I: Retrieving adduser 3.116ubuntu1
I: Validating adduser 3.116ubuntu1
I: Retrieving apt 1.6.1
I: Validating apt 1.6.1
I: Retrieving apt-utils 1.6.1
I: Validating apt-utils 1.6.1
I: Retrieving base-files 10.1ubuntu2
I: Configuring python3-yaml...
I: Configuring python3-dbus...
I: Configuring apt-utils...
I: Configuring
I: Configuring nplan...
I: Configuring networkd-dispatcher...
I: Configuring kbd...
I: Configuring console-setup-linux...
I: Configuring console-setup...
I: Configuring ubuntu-minimal...
I: Configuring libc-bin...
I: Configuring systemd...
I: Configuring ca-certificates...
I: Configuring initramfs-tools...
I: Base system installed successfully.


You can customize your installation before it goes into the image. One thing you should do is include the latest package updates in the image.

Update /etc/resolv.conf

staf@ubuntu184:~/dockerbuild/ubuntu$ sudo vi chroot-bionic/etc/resolv.conf

Update /etc/apt/sources.list

staf@ubuntu184:~/dockerbuild/ubuntu$ sudo vi chroot-bionic/etc/apt/sources.list

And include the updates

deb http://archive.ubuntu.com/ubuntu bionic main
deb http://security.ubuntu.com/ubuntu bionic-security main
deb http://archive.ubuntu.com/ubuntu bionic-updates main

Chroot into your installation and run apt-get update

staf@ubuntu184:~/dockerbuild/ubuntu$ sudo chroot $PWD/chroot-bionic
root@ubuntu184:/# apt update
Hit:1 bionic InRelease
Get:2 bionic-updates InRelease [88.7 kB]   
Get:3 bionic-security InRelease [88.7 kB]       
Get:4 bionic/main Translation-en [516 kB]                  
Get:5 bionic-updates/main i386 Packages [492 kB]           
Get:6 bionic-updates/main Translation-en [214 kB]          
Get:7 bionic-security/main i386 Packages [241 kB]     
Get:8 bionic-security/main Translation-en [115 kB]
Fetched 1755 kB in 1s (1589 kB/s)      
Reading package lists... Done
Building dependency tree... Done

and apt-get upgrade

root@ubuntu184:/# apt upgrade
Reading package lists... Done
Building dependency tree... Done
Calculating upgrade... Done
The following NEW packages will be installed:
The following packages will be upgraded:
  apt apt-utils base-files bsdutils busybox-initramfs console-setup console-setup-linux
  distro-info-data dpkg e2fsprogs fdisk file gcc-8-base gpgv initramfs-tools
  initramfs-tools-bin initramfs-tools-core keyboard-configuration kmod libapparmor1
  libapt-inst2.0 libapt-pkg5.0 libblkid1 libcom-err2 libcryptsetup12 libdns-export1100
  libext2fs2 libfdisk1 libgcc1 libgcrypt20 libglib2.0-0 libglib2.0-data libidn11
  libisc-export169 libkmod2 libmagic-mgc libmagic1 libmount1 libncurses5 libncursesw5
  libnss-systemd libpam-modules libpam-modules-bin libpam-runtime libpam-systemd
  libpam0g libprocps6 libpython3-stdlib libpython3.6-minimal libpython3.6-stdlib
  libseccomp2 libsmartcols1 libss2 libssl1.1 libstdc++6 libsystemd0 libtinfo5 libudev1
  libunistring2 libuuid1 libxml2 mount ncurses-base ncurses-bin netcat-openbsd networkd-dispatcher nplan openssl perl-base procps python3 python3-gi
  python3-minimal python3.6 python3.6-minimal systemd systemd-sysv tar tzdata
  ubuntu-keyring ubuntu-minimal udev util-linux
84 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 26.6 MB of archives.
After this operation, 450 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 bionic-security/main i386 i386 0.40.1~18.04.4 [64.6 kB]
Get:2 bionic-updates/main i386 base-files i386 10.1ubuntu2.4 [60.3 kB]
Get:3 bionic-security/main i386 libapparmor1 i386 2.12-4ubuntu5.1 [32.7 kB]
Get:4 bionic-security/main i386 libgcrypt20 i386 1.8.1-
running python rtupdate hooks for python3.6...
running python post-rtupdate hooks for python3.6...
Setting up initramfs-tools-core (0.130ubuntu3.7) ...
Setting up initramfs-tools (0.130ubuntu3.7) ...
update-initramfs: deferring update (trigger activated)
Setting up python3-gi (3.26.1-2ubuntu1) ...
Setting up file (1:5.32-2ubuntu0.2) ...
Setting up python3-netifaces (0.10.4-0.1build4) ...
Processing triggers for systemd (237-3ubuntu10.20) ...
Setting up networkd-dispatcher (1.7-0ubuntu3.3) ...
Installing new version of config file /etc/default/networkd-dispatcher ...
Setting up (0.40.1~18.04.4) ...
Setting up nplan (0.40.1~18.04.4) ...
Setting up ubuntu-minimal (1.417.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for initramfs-tools (0.130ubuntu3.7) ...


Go to your chroot installation.

staf@ubuntu184:~/dockerbuild/ubuntu$ cd chroot-bionic/

and import the image.

staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ sudo tar cpf - . | docker import - stafwag/ubuntu_i386:bionic


staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED              SIZE
stafwag/ubuntu_i386   bionic              83560ef3c8d4        About a minute ago   315MB
staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ docker run -it --rm stafwag/ubuntu_i386:bionic /bin/bash
root@665cec6ee24f:/# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.2 LTS
Release:        18.04
Codename:       bionic
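The whole procedure above can be strung together into one script. This is a sketch of the steps from this post, not a tested build script: it needs root plus the debootstrap and docker packages, so it guards for that and otherwise just tells you what is missing.

```shell
#!/bin/sh
set -e
# Sketch: build a 32-bit Ubuntu bionic Docker base image with
# debootstrap, following the manual steps above.

TARGET="$HOME/dockerbuild/ubuntu/chroot-bionic"
IMAGE="stafwag/ubuntu_i386:bionic"

build_image() {
  mkdir -p "$TARGET"
  debootstrap --verbose --include=iputils-ping --arch i386 bionic "$TARGET"

  # As in the steps above, you may want to add the bionic-security and
  # bionic-updates lines to "$TARGET/etc/apt/sources.list" first.
  chroot "$TARGET" apt-get update
  chroot "$TARGET" apt-get -y upgrade

  # Pack the tree and import it as a Docker image.
  (cd "$TARGET" && tar cpf - .) | docker import - "$IMAGE"
}

if [ "$(id -u)" -eq 0 ] && command -v debootstrap >/dev/null 2>&1; then
  build_image
else
  echo "skipping build: run as root with debootstrap installed"
fi
```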

** Have fun! **


April 21, 2019

Several years ago, I created a list of ESXi versions with matching VM BIOS identifiers. The list is now complete up to vSphere 6.7 Update 2.
Your Linux runs on a VMware VM, but on which ESXi version? You can see for yourself: run "dmidecode" and look at lines 10, 11 and 12.
ESX 2.5 - BIOS Release Date: 04/21/2004 - Address 0xE8480 - Size 97152 bytes
ESX 3.0 - BIOS Release Date: 04/17/2006 - Address 0xE7C70 - Size 99216 bytes
ESX 3.5 - BIOS Release Date: 01/30/2008 - Address 0xE7910 - Size 100080 bytes
ESX 4 - BIOS Release Date: 08/15/2008 - Address 0xEA6C0 - Size 88384 bytes
ESX 4U1 - BIOS Release Date: 09/22/2009 - Address 0xEA550 - Size 88752 bytes
ESX 4.1 - BIOS Release Date: 10/13/2009 - Address 0xEA2E0 - Size 89376 bytes
ESXi 5 - BIOS Release Date: 01/07/2011 - Address 0xE72C0 - Size 101696 bytes
ESXi 5.1 - BIOS Release Date: 06/22/2012 - Address: 0xEA0C0 - Size: 89920 bytes
ESXi 5.5 - BIOS Release Date: 07/30/2013 - Address: 0xEA050 - Size: 90032 bytes
ESXi 6 - BIOS Release Date: 09/30/2014 - Address: 0xE9A40 - Size: 91584 bytes
ESXi 6.5 - BIOS Release Date: 04/05/2016 - Address: 0xEA580 - Size: 88704 bytes 
ESXi 6.7 - BIOS Release Date: 07/03/2018 - Address: 0xEA520 - Size: 88800 bytes
ESXi 6.7 U2 - BIOS Release Date 12/12/2018 - Address: 0xEA490 - Size: 88944 bytes
NB These DMI properties are set at boot time. Even if your VM gets live-migrated to a host running a different vSphere version, your VM will keep the values it got from the host it booted on. What you see is the vSphere version of the host your VM booted on. It is the VM power-on that matters, so a guest OS reboot will not regenerate the DMI properties. A guest OS shut down on the other hand will also power off the VM, and the next power-on will regenerate the DMI properties.
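The table lookup can be scripted. A minimal sketch: map the BIOS release date reported by dmidecode to the matching ESXi version from the list above (dates not in the table fall through to "unknown").

```shell
#!/bin/sh
# Map a VM BIOS release date (as reported by dmidecode) to the ESXi
# version of the host the VM was powered on at, per the table above.
esxi_version() {
  case "$1" in
    04/21/2004) echo "ESX 2.5" ;;
    04/17/2006) echo "ESX 3.0" ;;
    01/30/2008) echo "ESX 3.5" ;;
    08/15/2008) echo "ESX 4" ;;
    09/22/2009) echo "ESX 4U1" ;;
    10/13/2009) echo "ESX 4.1" ;;
    01/07/2011) echo "ESXi 5" ;;
    06/22/2012) echo "ESXi 5.1" ;;
    07/30/2013) echo "ESXi 5.5" ;;
    09/30/2014) echo "ESXi 6" ;;
    04/05/2016) echo "ESXi 6.5" ;;
    07/03/2018) echo "ESXi 6.7" ;;
    12/12/2018) echo "ESXi 6.7 U2" ;;
    *)          echo "unknown" ;;
  esac
}

# On a live VM (needs root):
#   esxi_version "$(dmidecode -s bios-release-date)"
esxi_version "07/03/2018"   # prints "ESXi 6.7"
```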

April 20, 2019

We ditched the crowded streets of Seattle for a short vacation in Tuscany's beautiful countryside. After the cold winter months, Tuscany's rolling hills are coming back to life and showing their new colors.

Beautiful Tuscany

April 18, 2019

I published the following diary on “Malware Sample Delivered Through UDF Image“:

I found an interesting phishing email which was delivered with a malicious attachment: a UDF image (.img). UDF means “Universal Disk Format” and, as Wikipedia says, is an open vendor-neutral file system for computer data storage. It has supplanted the well-known ISO 9660 format (used for burning CDs & DVDs) that was also used in previous campaigns to deliver malicious files… [Read more]

[The post [SANS ISC] Malware Sample Delivered Through UDF Image has been first published on /dev/random]

April 16, 2019

In a recent VMware project, an existing environment of vSphere ESXi hosts had to be split off to a new instance of vCenter. These hosts were member of a distributed virtual switch, an object that saves its configuration in the vCenter database. This information would be lost after the move to the new vCenter, and the hosts would be left with "orphaned" distributed vswitch configurations.

Thanks to the export/import function available in vSphere 5.5 and 6.x, we can move the full distributed vswitch configuration to the new vCenter:

  • In the old vCenter, right-click the switch object, click "Export configuration" and choose the default "Distributed switch and all port groups"
  • Add the hosts to the new vCenter
  • In the new vCenter, right-click the datacenter object, click "Import distributed switch" in the "Distributed switch" sub-menu.
  • Select your saved configuration file, and tick the "Preserve original distributed switch and port group identifiers" box (which is not default!)
What used to be orphaned configurations on the hosts are now valid members of the distributed switch you just imported!

In vSphere 6, if the vi-admin account of the vSphere Management Assistant (vMA) gets locked because of too many failed logins, and you don't have the root password of the appliance, you can reset the account(s) using these steps:

  1. reboot the vMA
  2. from GRUB, "e"dit the entry
  3. "a"ppend init=/bin/bash
  4. "b"oot
  5. # pam_tally2 --user=vi-admin --reset
  6. # passwd vi-admin # Optional. Only if you want to change the password for vi-admin.
  7. # exit
  8. reset the vMA
  9. log in with vi-admin
These steps can be repeated for root or any other account that gets locked out.

If you do have root or vi-admin access, "sudo pam_tally2 --user=mylockeduser --reset" would do it, no reboot required.

Most VMware appliances (vCenter Appliance, VMware Support Appliance, vRealize Orchestrator) have the so-called VAMI: the VMware Appliance Management Interface, generally served via https on port 5480. The VAMI offers a variety of functions, including "check updates" and "install updates". Some appliances offer to check/install updates from a connected CD ISO, but the default is always to check online. How does that work?
VMware uses a dedicated website to serve the updates. Each appliance is configured with a repository URL containing a PRODUCT-ID and a VERSION-ID. The PRODUCT-ID is a hexadecimal code specific to the product: vRealize Orchestrator uses 00642c69-abe2-4b0c-a9e3-77a6e54bffd9, VMware Support Appliance uses 92f44311-2508-49c0-b41d-e5383282b153, and vCenter Server Appliance uses 647ee3fc-e6c6-4b06-9dc2-f295d12d135c. The VERSION-ID is the current appliance version with ".latest" appended.
The appliance checks for updates by retrieving /manifest/manifest-latest.xml under the repository URL. This XML contains the latest available version in fullVersion and version (fullVersion includes the build number), pre- and post-install scripts, the EULA, and a list of updated rpm packages. Each package entry has a location that can be appended to the repository URL and downloaded. The update procedure downloads the manifest and rpms, verifies checksums on the downloaded rpms, executes the preInstallScript, runs rpm -U on the downloaded rpm packages, executes the postInstallScript, displays the exit code and prompts for a reboot.
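As a sketch of that check, the snippet below composes the manifest path from the vRealize Orchestrator PRODUCT-ID quoted above and extracts fullVersion from a manifest. The VERSION-ID, the XML sample and the exact URL layout (which this post's link was stripped of) are assumptions; verify them against the repository URL configured on your own appliance.

```shell
#!/bin/sh
# Sketch: inspect a VAMI update manifest. PRODUCT_ID is the vRealize
# Orchestrator ID quoted above; VERSION_ID and the manifest sample are
# hypothetical, with element names as described in the text.
PRODUCT_ID="00642c69-abe2-4b0c-a9e3-77a6e54bffd9"
VERSION_ID="7.6.0.latest"
MANIFEST_PATH="/$PRODUCT_ID/$VERSION_ID/manifest/manifest-latest.xml"

# On a connected appliance you would fetch it from the configured
# repository, e.g.:
#   curl -s "$REPO_URL$MANIFEST_PATH"
# Here we parse a hypothetical sample instead:
manifest='<update><fullVersion>7.6.0.1234567</fullVersion><version>7.6.0</version></update>'
echo "$manifest" | sed -n 's|.*<fullVersion>\(.*\)</fullVersion>.*|\1|p'
```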
With this information, you can set up your own local repository (for cases where internet access is impossible from the virtual appliances), or you can even execute the procedure manually. Be aware that a manual update would be unsupported. Using a different repository is supported by a subset of VMware appliances (e.g. VCSA, VRO) but not all (VMware Support Appliance).

I had not yet updated my older post when vSphere 6.7 was released. The list is now complete up to vSphere 6.7. Your Linux runs on a VMware VM, but on which ESXi version? You can see for yourself: run "dmidecode" and look at lines 10, 11 and 12.
ESX 2.5 - BIOS Release Date: 04/21/2004 - Address 0xE8480 - Size 97152 bytes
ESX 3.0 - BIOS Release Date: 04/17/2006 - Address 0xE7C70 - Size 99216 bytes
ESX 3.5 - BIOS Release Date: 01/30/2008 - Address 0xE7910 - Size 100080 bytes
ESX 4 - BIOS Release Date: 08/15/2008 - Address 0xEA6C0 - Size 88384 bytes
ESX 4U1 - BIOS Release Date: 09/22/2009 - Address 0xEA550 - Size 88752 bytes
ESX 4.1 - BIOS Release Date: 10/13/2009 - Address 0xEA2E0 - Size 89376 bytes
ESXi 5 - BIOS Release Date: 01/07/2011 - Address 0xE72C0 - Size 101696 bytes
ESXi 5.1 - BIOS Release Date: 06/22/2012 - Address: 0xEA0C0 - Size: 89920 bytes
ESXi 5.5 - BIOS Release Date: 07/30/2013 - Address: 0xEA050 - Size: 90032 bytes
ESXi 6 - BIOS Release Date: 09/30/2014 - Address: 0xE9A40 - Size: 91584 bytes
ESXi 6.5 - BIOS Release Date: 04/05/2016 - Address: 0xEA580 - Size: 88704 bytes 
ESXi 6.7 - BIOS Release Date: 07/03/2018 - Address: 0xEA520 - Size: 88800 bytes
NB These DMI properties are set at boot time. Even if your VM gets live-migrated to a host running a different vSphere version, your VM will keep the values it got from the host it booted on. What you see is the vSphere version of the host your VM booted on. It is the VM power-on that matters, so a guest OS reboot will not regenerate the DMI properties. A guest OS shut down on the other hand will also power off the VM, and the next power-on will regenerate the DMI properties.