Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

June 18, 2018

I published the following diary on “Malicious JavaScript Targeting Mobile Browsers“:

A reader reported a suspicious piece of JavaScript code that was found on a website. In the meantime, the compromised website has been cleaned, but it was running WordPress (again, I would say![1]). The code was obfuscated, here is a copy… [Read more]

[The post [SANS ISC] Malicious JavaScript Targeting Mobile Browsers was first published on /dev/random]

June 15, 2018

And here we go with the wrap-up of the 3rd day of the SSTIC 2018 “Immodium” edition. Indeed, yesterday, a lot of people suffered from digestive problems (~40% of the 800 attendees were affected!). This will surely remain a key story of this edition. Anyway, it was a good edition!

The first timeslot is never an easy one on a Friday. It was assigned to Christophe Devigne: “A Practical Guide to Differential Power Analysis of USIM Cards“. USIM cards are the SIM cards that you use in your mobile phone. Guess what? They are vulnerable to some types of attacks that extract the authentication secret. What does this mean? A complete loss of confidentiality for the user’s communications. An interesting fact: Christophe and his team tested several USIM cards (9) – 5 of them from French operators – and one was vulnerable. Also, 75% of the French mobile operators still distribute cards with a trivial PIN code. The authentication algorithm used is called “MILENAGE“. Christophe described it and then explained how, thanks to an oscilloscope, he was able to extract keys.
The second talk targeted the Erlang language. Erlang was developed by Ericsson and, while not widely used, it powers many applications, mainly in the telecom sector to manage network devices. The talk, titled “Starve for Erlang cookie to gain remote code exec”, was presented by Guillaume Teissier.
Erlang has a feature that allows two processes to communicate. Guillaume explained how communications are established between the processes – via a specific TCP port – and how they authenticate with each other – via a cookie. This cookie is always a string of 20 uppercase characters. The talk focused on how to intercept communications between those processes and recover this cookie. Guillaume released a tool for this.
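For context (this is how standard Erlang distribution works, not something specific to the speaker's tool): a node proves knowledge of the cookie by answering a random challenge with an MD5 digest of the cookie concatenated with the challenge in decimal. A quick sketch of why a sniffed or recovered cookie is game over:

```python
import hashlib

def erlang_digest(cookie: str, challenge: int) -> bytes:
    """Response sent during the Erlang distribution handshake:
    MD5(cookie ++ challenge-as-decimal-string)."""
    return hashlib.md5((cookie + str(challenge)).encode()).digest()

# Anyone who knows the cookie can answer any future challenge
# and join the cluster, which means remote code execution.
response = erlang_digest("ABCDEFGHIJKLMNOPQRST", 42)
print(response.hex())
```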
The next talk was about HACL*, a crypto library written in formally verified code and used by Firefox. Benjamin Beurdouche and Jean Karim Zinzindohoue explained how they developed the library (using the F* language).
Then, Jason Donenfeld presented his project: WireGuard. This is a Layer-3 secure network tunnel for IPv4 & IPv6 (read: a VPN) designed for the Linux kernel (but available on other platforms – macOS, Android and other embedded OSes). It is UDP-based and provides authentication similar to SSH and its ~/.ssh/authorized_keys. It can easily replace a good old OpenVPN or IPsec solution. Compared to other solutions, the code base is very small and can be easily audited/reviewed. The setup is very easy:
# ip link add wg0 type wireguard
# ip address add 10.0.0.1/24 dev wg0
# ip route add default dev wg0
# ifconfig wg0 ...
# iptables -A INPUT -i wg0 ...
Jason explained in detail how the authentication mechanism has been implemented to ensure that, once a packet reaches a system, we are sure of its origin. It is so easy to set up that there is a quick tutorial on a friend’s wiki.
The next presentation was made by Yvan Genuer and focused on SAP (“Ca sent le SAPin!“). Everybody knows SAP, the worldwide leader in ERP solutions. A lot of security issues have already been found in its many tools and modules. But this time, the focus was on a module called SAP IGS or “Internet Graphic Services”. This module helps to render and process multiple files inside an SAP infrastructure. After some classic investigation (network traffic capture, searching the source code – yes, SAP code is stored in databases), they found an interesting call: “ADM:INSTALL”. It is used to install new shape files. They explained the two vulnerabilities they found: the service allows the creation of arbitrary files on the file system, and there is a DoS when you create a file with a filename longer than 256 characters.
The next talk was unusual but very interesting: Yves-Alexis Perez from the Debian Security Team came on stage to explain how his team works and how they handle security issues in the Debian Linux distribution. The core team consists of 10 people (5 being really active) plus other developers and maintainers. He reviewed the process that is followed when a vulnerability is reported (triage, pushing of patches, etc). He also reviewed some vulnerabilities from the past and how they were handled.
After a nice lunch break with friends and some local food, back in the auditorium for two talks: Ivan Kwiatkowski demonstrated the open-source hacking harness he wrote to help pentesters handle remote shells in a comfortable way. Ivan started with some bad stories that every pentester in the world has faced: you get a shell but no TTY, you lose it, you suffer from latency, etc. This tool helps to get rid of these problems and allows the pentester to work as in a normal shell without any footprint. Other features allow, for example, transferring files back to the attacker. It looks like a nice tool; have a look at it, definitely!
Then, Florian Maury presented “DNS Single Point of Failure Detection using Transitive Availability Dependency Analysis“. Everybody has a love/hate relationship with DNS. No DNS, no Internet. Florian went back over the core principles of DNS and also a weak point: the single point of failure that can make your services unreachable on the Internet. He wrote a tool that, based on DNS requests, shows you if a domain is vulnerable to one or more single points of failure. In the second part of the talk, Florian presented the results of research he performed on 4M domains (+ the Alexa top list). Guess what? A lot of domains suffer from at least one SPoF.
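Florian's tool performs a full transitive availability-dependency analysis; as a much rougher sketch of the idea (the function and heuristics below are my own illustration, not his implementation), one can flag domains whose name servers obviously share fate:

```python
def single_points_of_failure(ns_ips: list[str]) -> list[str]:
    """Flag trivial DNS SPoFs from a domain's name-server IPv4 addresses:
    a lone name server, or all name servers in the same /24 network."""
    issues = []
    if len(ns_ips) < 2:
        issues.append("only one name server")
    elif len({ip.rsplit(".", 1)[0] for ip in ns_ips}) == 1:
        issues.append("all name servers in the same /24")
    return issues

print(single_points_of_failure(["192.0.2.53"]))
print(single_points_of_failure(["192.0.2.53", "192.0.2.54"]))
print(single_points_of_failure(["192.0.2.53", "198.51.100.53"]))
```

A real analysis would also follow the dependencies transitively (the name servers of the name servers' domains, glue records, ASes, etc.), which is exactly what makes the talk's approach interesting.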

Finally, the closing keynote was presented by Patrick Pailloux, the technical director of the DGSE (“Direction Générale de la Sécurité Extérieure”). He is an excellent speaker and presented the “Cyber” goals of the French secret services, of course, only what he was authorized to disclose 😉 It was also a good opportunity to repeat that they are always looking for skilled security people.

[The post SSTIC 2018 Wrap-Up Day #3 was first published on /dev/random]

The Composer Initiative for Drupal

At DrupalCon Nashville, we launched a strategic initiative to improve support for Composer in Drupal 8. To learn more, you can watch the recording of my DrupalCon Nashville keynote or read the Composer Initiative issue.

While Composer isn't required when using Drupal core, many Drupal site builders use it as the preferred way of assembling websites (myself included). A growing number of contributed modules also require the use of Composer, which increases the need to make Composer easier to use with Drupal.

The first step of the Composer Initiative was to develop a plan to simplify Drupal's Composer experience. Since DrupalCon Nashville, Mixologic, Mile23, Bojanz, Webflo, and other Drupal community members have worked on this plan. I was excited to see that last week, they shared their proposal.

The first phase of the proposal is focused on a series of changes in the main Drupal core repository. The directory structure will remain the same, but it will include scripts, plugins, and embedded packages that enable the bundled Drupal product to be built from the core repository using Composer. This gives users who download Drupal a clear path to manage their Drupal codebase with Composer if they choose.

I'm excited about this first step because it will establish a default, official approach for using Composer with Drupal. That makes using Composer more straightforward, less confusing, and could theoretically lower the bar for evaluators and newcomers who are familiar with other PHP frameworks. Making things easier for site builders is a very important goal; web development has become a difficult task, and removing complexity from the process is crucial.

It's also worth noting that we are planning the Automatic Updates Initiative. We are exploring whether an automated update system can be built on top of the Composer Initiative's work and provide an abstraction layer for those who don't want to use Composer directly. I believe that could be truly game-changing for Drupal, as it would remove a great deal of complexity.

If you're interested in learning more about the Composer plan, or if you want to provide feedback on the proposal, I recommend you check out the Composer Initiative issue and comment 37 on that issue.

Implementing this plan will be a lot of work. How fast we execute these changes depends on how many people help. There are a number of different third-party Composer-related efforts, and my hope is to see many of them redirect their energy toward making Drupal's out-of-the-box Composer experience better. If you're interested in getting involved or sponsoring this work, let me know and I'd be happy to connect you with the right people!

June 14, 2018

The second day started with a topic of great interest to me: Docker containers, or “Audit de sécurité d’un environnement Docker” by Julien Raeis and Matthieu Buffet. Docker is everywhere today and, like many new technologies, is not always mature when deployed, sometimes in a corner by developers. They explained (for those living on the moon) what Docker is in 30 seconds. The idea of the talk was not to propose a tool (you can have a look here). Based on their research, most containers are deployed with the default configuration. Images are downloaded without security pre-checks. While Docker is very popular on Linux systems, it is also available for Windows. In this case, there are two working modes: via Windows Server Containers (based on objects of type “job”) or via Hyper-V containers. They reviewed different aspects of containers like privilege escalation, abuse of resources and capabilities. Some nice demonstrations were presented, like privilege escalation and access to a file on the host from the container. Keep in mind that Docker is not considered a security tool by its developers! An interesting talk, but with a lack of practical material that could help auditors.
The next talk was also oriented towards virtualization and, more precisely, how to protect VMs from a guest point of view. This was presented by Jean-Baptiste Galet. The scenario was: “if the hypervisor is already compromised by an attacker, how to protect the VMs running on top of it?” We can face the same kind of issue with a rogue admin. By design, an admin has full access to the virtual hosts. The goal is to meet the following requirements:
  • To use a trusted hypervisor
  • To verify the boot sequence integrity
  • To encrypt disks (and snapshots!)
  • To protect memory
  • To perform a safe migration between different hypervisors
  • To restrict access to console, ports, etc.

Some features have already been implemented by VMware in 2016, like an ESXi secure-boot procedure, VM encryption and vMotion data encryption. Jean-Baptiste explained in detail how to implement such controls. For example, to implement a safe boot, UEFI & a TPM chip can be used.

The next two slots were assigned to short presentations (15 mins) focused on specific tools. The first one was a tool that helps in the development of an ASN.1 encoder/decoder. ASN.1 means “Abstract Syntax Notation One” and is used in many domains, the most important one being mobile network operators.
The second one was ProbeManager, developed by Matthieu Treussart. Why this talk? Matthieu was looking for a tool to help in the day-to-day management of IDS (like Suricata) but did not find a solution that matched his requirements. So, he decided to write his own tool. ProbeManager was born! The tool is written in Python and has a (light) web interface to perform all the classic tasks to manage IDS sensors (creation, deployment, creation of rules, monitoring, etc). The tool is nice, but the web interface is very light and it lacks fine-tuning of IDS rules. Note that it is also compatible with Bro and OSSEC (soon). I liked the built-in integration with MISP!
After the morning coffee break, we had the chance to welcome Daniel Jeffrey on stage. Daniel works for the Internet Security Research Group of the Linux Foundation and is involved in the Let’s Encrypt project. In the first part, Daniel explained why HTTPS became mandatory to better protect Internet users’ privacy, but SSL is hard! It’s boring, time-consuming, confusing and costly. The goal of the Let’s Encrypt project is to be automated, free and open. Let’s Encrypt is maintained by a team of only 12 people(!). They went into production in only eight months. Then, Daniel explained how Let’s Encrypt is implemented. It was interesting to learn more about the types of challenges available to enrol/renew certificates: DNS-01 is easy when many frontends need simultaneous renewals; HTTP-01 is useful for a few servers that get certs and when DNS lag can be an issue.
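As a reminder of how HTTP-01 works (this follows RFC 8555, not anything specific to the talk): the CA fetches a key authorization, the challenge token joined to the account key's thumbprint, over plain HTTP at a well-known path. A minimal sketch, with made-up token and thumbprint values:

```python
def http01_resource(token: str, account_thumbprint: str) -> tuple[str, str]:
    """Path the CA requests and the body it expects for an HTTP-01 challenge."""
    path = f"/.well-known/acme-challenge/{token}"
    key_authorization = f"{token}.{account_thumbprint}"
    return path, key_authorization

# Both values below are made-up placeholders, not real ACME material.
path, body = http01_resource("evaGxfADs6pSRb2LAv9IZ", "9jg46WB3rR_AHD-EBXdN7")
print(path)
print(body)
```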
Then, two other tools were presented. “YaDiff” (available here) helps to propagate symbols between analysis sessions. The idea of the tool came as a response to a big issue with malware analysis: it is a repetitive job. The idea is that, once the analysis of a malware sample is completed, symbols are exported and can be reused in other analyses (in IDA). An interesting tool if you perform reverse engineering as a core activity. The second one was Sandbagility. After a short introduction to the different methods available to perform malware analysis (static, dynamic, in a sandbox), the authors explained their approach. The idea is to interact with a Windows sandbox without an agent installed on it and, instead, to interact with the hypervisor. The result of their research is a framework, written in Python, that implements a protocol called “Fast Debugging Protocol”. They performed some demos and showed how easy it is to extract information from the malware and also to interact with the sandbox. One of the demos was based on the Wannacry ransomware. Warning: this is not a new sandbox. The guest Windows system must still be fine-tuned to prevent easy VM detection! This is very interesting and deserves to be tested!
After lunch, the last regular presentations started with one about “Java Card”, presented by Guillaume Bouffard and Léo Gaspard. It was, in some way, an extension of the talk about an armoured USB device, of which the Java Card is one of the components.
As usual, the afternoon was completed with a wrap-up of the SSTIC challenge and rump sessions. The challenge was quite complex (as usual?) and included many problems based on crypto. The winner came on site and explained how he solved the challenge. As part of the competition, players must deliver a document containing all the details and findings of the game. A funny anecdote about the challenge: the server was compromised because an ~/.ssh/authorized_keys file was left writable.
Rump sessions are also a key event during the conference. The rules are simple: 5 minutes (4 today due to the number of proposals received); if people applaud, you stop, otherwise you can continue. Here is the list of topics that were presented:
  • A “Burger Quizz” alike session about the SSTIC
  • Pourquoi c’est flou^Wnet? (How the SSTIC crew provides live streaming and recorded videos)
  • Docker Explorer
  • Nordic made easy – Reverse engineering of a nRF5 firmware (from Nordic Semiconductor)
  • RTFM – Read the Fancy Manual
  • IoT security
  • Mirai, dis-moi qui est la poubelle?
  • From LFI to domain admin rights
  • Perfect (almost) SQL injection detection
  • Invite de commande pour la toile (dans un langage souverain): WinDev
  • How to miss your submission to a call-for-paper
  • Suricata & les moutons
  • Les redteams sont nos amies or what mistakes to avoid when you are in a red team (very funny!)
  • ipibackups
  • Representer l’arboresence matérielle
  • La télé numérique dans le monde
  • ARM_NOW
  • Signing certs with SSH keys
  • Smashing the func for SSTIC and profit
  • Wookey
  • Coffee Plz! (or how to get free coffee in your company)
  • Modmobjam
  • Bug bounty
  • (Un)protected users
  • L’anonymat du pauvre
  • Abuse of the YAML format

The day ended with the classic social event in the beautiful setting of “Le couvent des Jacobins“.


My feeling is that there were fewer entertaining talks today (based on my choices/feelings, of course), but the one about Let’s Encrypt was excellent. Stay tuned for the last day tomorrow!

[The post SSTIC 2018 Wrap-Up Day #2 was first published on /dev/random]

I published the following diary on “A Bunch of Compromized WordPress Sites“:

A few days ago, one of our readers reported an incident affecting his website based on WordPress. He performed quick checks by himself and found some pieces of evidence:

  • The main index.php file was modified and some very obfuscated PHP code was added on top of it.
  • A suspicious PHP file was dropped in every sub-directory of the website.
  • The wp-config.php was altered and database settings changed to point to a malicious MySQL server.

[Read more]

[The post [SANS ISC] A Bunch of Compromized WordPress Sites was first published on /dev/random]

June 13, 2018

Hello Readers,
I’m back in the beautiful city of Rennes, France to attend my second edition of the SSTIC. My first one was a very good experience (you can find my previous wrap-ups on this blog – day 1, day 2, day 3) and this one was even more interesting because the organizers invited me to participate in the review and selection of the presentations. The conference moved to a new location to be able to accept the 800 attendees, which was quite challenging!

As usual, the first day started with a keynote, which was assigned to Thomas Dullien aka Halvar Flake. The topic was “Closed, heterogeneous platforms and the (defensive) reverse engineers dilemma”. Thomas has been a reverse engineer for years and decided to look back at twenty years of reverse engineering. In 2010, this topic was already covered in a blog post and, perhaps, it was time to take another look. What progress has been made? Thomas reviewed today’s challenges, some interesting changes and the future (how computing is changing and the impact on reverse-engineering tasks). Thomas’s feeling is that we have many tools available today (Frida, Radare, Angr, BinNavi, …) which should be helpful, but it’s not the case. Getting live debugging and traces from devices like mobile phones is a pain (closed platforms) and there is a clear lack of reliable libraries to retrieve a sufficient amount of data. Also, “debuggability” is reduced due to more and more security controls in place. There is clearly a false sense of security: “It’s not because your device is not debuggable that it is safe!” said Thomas. Disabling the JTAG on a router PCB will not make it more secure. There is also a “left shift” in the development process to try to reduce the time to market (software is developed on hardware that is not completely ready). Another fact? The poor development practices of most reverse engineers. Take as an example a quick Python script written to fix a problem at a time ‘x’. Often, the same script is still used months or years later without proper development guidelines. Some tools are also developed to support a research project or a presentation but do not work properly in real-life cases. For Thomas, the future will bring more changes in technologies than the last 20 years: the cloud will bring not only “closed source” tools but also “closed binary” ones, and infrastructures will become heterogeneous. A very nice keynote, and Thomas did not hesitate to throw a stone into the water!
After a first coffee break, Alexandre Gazet and Fabien Perigaud presented research on HP iLO interfaces: “Subverting your server through its BMC: the HPE iLO4 case”. After a brief introduction to the product and what it does (basically: allow out-of-band control/monitoring of an HP server), a first demo was presented based on their previous research: dump the kernel memory of the server, implement a shellcode and become root on the Linux server. Win! Their research generated CVE-2017-12542 and a patch has been available for a while (it was a classic buffer overflow). But does it mean that iLO is a safe product now? They came back with new research to demonstrate that no, it’s not secure yet. Even if HP did a good job fixing the previous issue, some controls are still missing. Alexandre & Fabien explained how the firmware upgrade process fails to validate the signature and can be abused to perform malicious activities, again! The goal was to implant a backdoor in the Linux server running on the HP server controlled by the compromised iLO interface. They released a set of tools to check your iLO interfaces, but the recommendation remains the same: patch, and do not expose iLO interfaces in the wild.
The next talk was about “T-Brop” or “Taint-Based Return Oriented Programming”, presented by Colas Le Guernic & Francois Khourbiga. A very difficult topic for me. They reviewed what ROP (Return Oriented Programming) is and described the two existing techniques to detect possible ROP gadgets in a program (syntactic or symbolic) with the pros & cons of both. Then, they introduced their new approach, called T-Brop, which mixes the best of both solutions.
The next talk was about “Certificate Transparency“, presented by Christophe Brocas & Thomas Damonneville. HTTPS has really been pushed to the front of the stage for a while to improve web security, and one of the controls available to help track certificates and rogue websites is Certificate Transparency. It’s a Google initiative, specified in RFC 6962. They explained what’s behind this RFC: basically, all newly created SSL certificates must be added to an append-only log that can be accessed freely for tracking and monitoring purposes. Christophe & Thomas work for a French organization that is often targeted by phishing campaigns, and this technology helps them in their day-to-day operations to track malicious sites. More precisely, they track two types of certificates:
  • The ones that mimic the official ones (typo-squatting, new TLDs, …)
  • Domains used by their organization that could be used in the wrong way.

In the second scenario, they spotted a department which had a web application developed and hosted by a 3rd-party company using Let’s Encrypt. This was not compliant with their internal rules. Their tools have been released (here). Definitely a great talk, because this approach does not require a lot of investment (time, money) and can greatly improve your visibility into potential issues (e.g. detecting phishing attacks before they really start).
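Their released tools are the reference here; as a toy illustration of the first tracking scenario (spotting lookalike names in CT log entries), with heuristics entirely of my own choosing:

```python
def lookalike(candidate: str, legit: str) -> bool:
    """Toy typo-squatting check on certificate subject names:
    same label under another TLD, or a few classic homoglyph swaps."""
    strip_tld = lambda d: d.lower().rsplit(".", 1)[0]
    a, b = strip_tld(candidate), strip_tld(legit)
    if a == b:
        return candidate.lower() != legit.lower()  # same name, other TLD
    # Undo a few classic substitutions: 0->o, 1->l, rn->m
    unswapped = a.replace("0", "o").replace("1", "l").replace("rn", "m")
    return unswapped == b

print(lookalike("examp1e.com", "example.com"))
print(lookalike("example.org", "example.com"))
print(lookalike("other.com", "example.com"))
```

A real monitor would feed every certificate name seen in the CT logs through checks like this (plus punycode, keyword and edit-distance rules) and alert on matches.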

After lunch, a series of short talks was scheduled. First, Emmanuel Duponchelle and Pierre-Michel Ricordel presented “Risques associés aux signaux parasites compromettants : le cas des câbles DVI et HDMI“. Their research focused on the TEMPEST issue with video cables. They started with a live demo which showed how a computer’s video output can be captured remotely.
Then, they explained how video signals work and what the VGA, DVI & HDMI standards are (FYI, HDMI is like DVI but with a new type of connector). Solving TEMPEST issues is as easy as using properly shielded cables. They demonstrated different cables, good and bad. Keep in mind: low-cost cables are usually very bad (not a surprise). For the demo, they used software called TempestSDR. Also, for sensitive computers, use VGA cables instead of HDMI: they leak less data!
The next talk was close to the previous topic. This time, it focused on SmartTVs and, more precisely, the DVB-T protocol. José Lopes Esteves & Tristan Claverie presented their research, which is quite… scary! Basically, a SmartTV is a computer with many I/O interfaces and, as they are cheaper than normal computer monitors, they are often installed in meeting rooms, where sensitive information is exchanged. They explained that, besides the audio & video streams, subtitles, programme data and “apps” can also be delivered via a DVB-T signal. Such “apps” are linked to a TV channel (that must be selected/viewed). Those apps are web-based and, if the right info is provided, can be installed silently and automatically! So nice! The major issues are:
  • HTTP vs HTTPS (no comment!)
  • Legacy-mode fallback (not signed? no problem)
  • Unsafe API’s
  • Time-based trust

They explained how to protect against this, like asking the user to approve the installation of an app or access to certain resources, but this is not easy to implement in a “TV” used by non-technical people. Another great talk! Think about this the next time you see a TV connected in a meeting room.

The next talk was the demonstration of a complete pwnage of a SmartPlug (again, a “smart” device) that can be controlled via a WiFi connection: “Three vulns, one plug” by Gwenn Feunteun, Olivier Dubasque and Yves Duchesne. It started with a mention on the manufacturer's website. When you read something like “we are using top-crypto algorithms…“, this is a good sign of failure. Indeed. They bought an adapter and started to analyze its behaviour. The first issue was to understand how the device was able to “automatically” configure its WiFi interface via a mobile phone. By doing a simple MitM attack, they inspected the traffic between the smartphone and the SmartPlug. They discovered that the WiFi key was broadcast using a … Caesar cipher (with a shift of 120)! The second vulnerability was found in the WiFi chipset, which implements a backdoor via an open UDP port. They also discovered that WPS was available but not used. For fun, they decided to implement it using an Arduino 🙂 For the record, the same kind of WiFi chipset is also used in medical and industrial devices… Just one remark about the talk: it seems that the manufacturer of the SmartPlug was never contacted to report the vulnerabilities found… sad!
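The talk summary does not give the exact encoding, but a byte-wise Caesar cipher with a shift of 120 would look something like this sketch, and be just as trivially reversible for any eavesdropper:

```python
def caesar_bytes(data: bytes, shift: int = 120) -> bytes:
    """Byte-wise Caesar cipher: add a fixed shift to every byte, mod 256."""
    return bytes((b + shift) % 256 for b in data)

wifi_key = b"SuperSecretWifiKey"        # illustrative value
on_air = caesar_bytes(wifi_key, 120)    # what the plug would broadcast
recovered = caesar_bytes(on_air, -120)  # what any sniffer can undo
assert recovered == wifi_key
print(on_air.hex())
```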

Then, Erwan Béguin came to present the Escape Room they developed at his school. The Escape Room focuses on security and awareness and targets non-tech people. When I read the abstract, I had a strange feeling about the talk, but it was nice and explained how people reacted, with some findings about their behaviour when they work in groups. Example: in a group, if the “leader” gives his/her approval, people will follow and perform unsafe actions like inserting a malicious USB device into a laptop.
After the afternoon coffee break, Damien Cauquil presented a cool talk about hacking PCBs: “Du PCB à l’exploit: étude de cas d’une serrure connectée Bluetooth Low Energy“. When you are facing a piece of hardware, there are different approaches: you can open the box, locate the JTAG, brute-force the serial speed, get a shell, then root access. Done! Damien does not like this approach and prefers to work in a stricter way, which can be helpful in many cases. Sometimes, just by inspecting the PCB, you can deduce some features or missing controls. At the moment, there are two frameworks that address the security of IoT devices: the OWASP IoT project and the one from Rapid7. In the second phase, Damien applied his technique to a real device (a smart door lock). Congrats to him for finishing the presentation in a hurry due to video problems!
Then, the “Wookey” project was presented by a team from ANSSI. The idea behind this project is to build a safe USB storage device that protects against all types of attacks like data leaks, USBKill, etc. The idea is nice and they performed a huge amount of work, but it is very complex and not ready to be used by most people…
Finally, Emma Benoit presented the results of a pentest she performed with Guillaume Heilles and Philippe Teuwen on an embedded device: “Attacking serial flash chip: case study of a black box device“. The device had a flash chip on the PCB that should contain interesting data. There are two types of attacks: “in circuit” (probes are attached to the chip's pins) or “chip-off” (physical extraction). In this case, they decided to use the second method and explained, step by step, how they succeeded. The most challenging step was to find an adapter to connect the unsoldered chip to an analyzer. Often, you don’t have the right adapter and you must build your own. All the steps were described and, finally, the data was extracted from the flash. Bonus: there was a telnet interface available without any password 😉
That’s all for today! See you tomorrow for another wrap-up!

[The post SSTIC 2018 Wrap-Up Day #1 was first published on /dev/random]

June 11, 2018

Merkel and Macron should use all their economic power to invest in our own European military.

For example, whenever the ECB must pump money into the EU system, it could do so through increased spending on the European military.

This would be a great way to increase the EURO inflation to match the ‘below but near two percent annual inflation’ target.

However, the EU budget for the military should not go to NATO. Right now it should go to the EU’s own national armies. NATO is more or less the United States’ military influence in Europe. We saw at the last G7 that we can’t rely on the United States’ help.

Therefore, it should use exclusively European suppliers for military hardware. We don’t want to spend euros outside of our EU system. Let the money circulate within our EU economy. This implies no F-35 for Belgium; instead, for example, the Eurofighter Typhoon. The fact that Belgium can’t deliver the United States’ nuclear weapons without their F-35 means that the United States should take their nuclear bombs back. There is no democratic legitimacy to keep them in Belgium anyway.

It’s also time to create a pillar similar to the European Union: a military branch of the EU.

Belgium and the Netherlands are already sharing naval and air force resources. Let’s extend this principle to other EU countries.

June 08, 2018

This Thursday, June 28, 2018 at 7 PM, the 70th Mons session of the Belgian Jeudis du Libre will take place.

The topic of this session: Jenkins in 2018


  • Exceptionally, this is the 4th Thursday of the month!
  • An introductory “Docker/Jenkins” workshop will be organized from 2 PM! Details below.

Theme: sysadmin

Audience: sysadmins|developers|companies|students

Speakers: Damien Duportal and Olivier Vernin (CloudBees)

Location: Université de Mons, Faculté Polytechnique, Site Houdain, Rue de Houdain 9, auditorium 3 (see this map on the UMONS website, or the OSM map). Enter through the main door, at the back of the main courtyard. Follow the signs from there.

Participation is free and only requires your registration, preferably in advance, or at the entrance. Please indicate your intention by registering via the page. The session will be followed by a friendly drink around 9 PM. A giant screen will be set up so you can follow the second half of the football match (Belgium - England) live!

The Jeudis du Libre in Mons are also supported by our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, do not hesitate to check the agenda and subscribe to the mailing list to automatically receive the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons sessions take place every third Thursday of the month and are organized, in premises and in collaboration with, Mons higher-education institutions involved in IT curricula (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which promotes free software.

Description:

  • Part I: Introduction to Jenkins X: a continuous integration and continuous deployment solution for modern “cloud” applications on Kubernetes. Summary: Jenkins X is a project that rethinks how developers should interact with continuous integration and deployment in the cloud, focusing on development team productivity through automation, tooling and better DevOps practices.
  • Part II: The Jenkins Configuration-as-Code plugin. Summary: In 2018, we know how to define our jobs with JobDSL or Pipeline. But how do we define the configuration of Jenkins itself with the same model, with just a single YAML file?

Short bios:

  • Damien Duportal: Training Engineer @ CloudBees, I am an IT engineer tinkering from development to production. Trainer and mentor, I also love to share knowledge and teach. Open-source aficionado. Docker & Apple addict. Human being.
  • Olivier Vernin: Fascinated by new technologies, particularly computer science, I am continuously looking for ways to improve my skills. The Linux/Unix domain especially piqued my interest and encourages me to gain an in-depth understanding.

Introductory “Docker/Jenkins” workshop, from 2 p.m. to 5:30 p.m.:

  • Introduction to Docker
  • Introduction to Jenkins
  • Integrating Jenkins with Docker
  • Bring your own challenge

The workshop is limited to 25 people. Details and registration via the page.

When ordering a revision of a power electronics board from Aisler, I decided to get a metal solder paste stencil as well, to be able to solder cleanly using the reflow oven.

For a first board I just taped the board and stencil to the table and applied solder paste. This worked, but it is not very handy.

Then I came up with the idea of using a 3D printed PCB holder to ease the process.

The holder

The holder (just a rectangle with a hole) tightly fits the PCB. It is a bit larger than the stencil and 0.1 mm thinner than the PCB, to make sure the stencil makes tight contact with the PCB.

I first made some smaller test prints, but after 3 revisions the following OpenSCAD script gave a perfectly fitting PCB holder:

// PCB size
bx = 41;
by = 11.5;
bz = 1.6;

// stencil size (with some margin for tape)
sx = 100; // from 84.5
sy = 120; // from 104

// aisler compensation
board_adj_x = 0.3;
board_adj_y = 0.3;

// 3D printer compensation
printer_adj_x = 0.1;
printer_adj_y = 0.1;

x = bx + board_adj_x + printer_adj_x;
y = by + board_adj_y + printer_adj_y;
z = bz - 0.1; // have PCB be ever so slightly higher

difference() {
    cube([sx,sy,z], center=true);
    cube([x,y,z*2], center=true);
}


The PCB in the holder:

PCB in holder

The stencil taped to it:

Stencil taped

Paste on stencil:

Paste on stencil

Paste applied:

Paste applied

Stencil removed:

Stencil removed

Components placed:

Components placed

Reflowed in the oven:



Using the 3D printed jig worked well. The board under test:

Under test

June 06, 2018

I published the following diary on “Converting PCAP Web Traffic to Apache Log“:

PCAP data can be really useful when you must investigate an incident but when the amount of PCAP files to analyse is counted in gigabytes, it may quickly become tricky to handle. Often, the first protocol to be analysed is HTTP because it remains a classic infection or communication vector used by malware. What if you could analyze HTTP connections like an Apache access log? This kind of log can be easily indexed/processed by many tools… [Read more]
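The idea from the diary can be sketched roughly as follows; this is not the diary's actual tooling, just a minimal illustration of mapping parsed HTTP fields onto Apache's "combined" log format (the function and field names here are hypothetical):

```python
from datetime import datetime, timezone

def to_apache_combined(src_ip, method, uri, status, size, referer, ua, ts):
    """Format parsed HTTP request/response fields as an Apache 'combined' log line."""
    t = ts.strftime("%d/%b/%Y:%H:%M:%S %z")
    return (f'{src_ip} - - [{t}] "{method} {uri} HTTP/1.1" '
            f'{status} {size} "{referer}" "{ua}"')

# Fields as they might come out of a PCAP HTTP parser (made-up values)
line = to_apache_combined(
    "", "GET", "/index.html", 200, 1234,
    "-", "Mozilla/5.0",
    datetime(2018, 6, 6, 12, 0, 0, tzinfo=timezone.utc))
print(line)
```

Once HTTP conversations are in this shape, any log analysis tool that understands Apache access logs can index and process them.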

[The post [SANS ISC] Converting PCAP Web Traffic to Apache Log has been first published on /dev/random]

One of the most stressful experiences for students is the process of choosing the right university. Researching various colleges and universities can be overwhelming, especially when students don't have the luxury of visiting different campuses in person.

At Acquia Labs, we wanted to remove some of the complexity and stress from this process by making campus tours more accessible through virtual reality. During my presentation at Acquia Engage Europe yesterday, I shared how organizations can use virtual reality to build cross-channel experiences. People who attended Acquia Engage Europe asked if they could have a copy of my video, so I decided to share it on my blog.

The demo video below features a high school student, Jordan, who is interested in learning more about Massachusetts State University (a fictional university). From the comfort of his couch, Jordan is able to take a virtual tour directly from the university's website. After placing his phone in a VR headset, Jordan can move around the university campus, explore buildings, and view program resources, videos, and pictures within the context of his tour.

All of the content and media featured in the VR tour is stored in the Massachusetts State University's Drupal site. Site administrators can upload media and position hotspots directly from within Drupal backend. The React frontend pulls in information from Drupal using JSON API. In the video below, Chris Hamper (Acquia) further explains how the decoupled React VR application takes advantage of new functionality available in Drupal 8.
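As a rough sketch of what "pulling in information from Drupal using JSON API" looks like, the following parses a minimal JSON:API-style document of the kind the module returns; the resource type and field names are made up for illustration and are not the actual site's schema:

```python
import json

# A minimal JSON:API-style document, as Drupal's JSON API module would
# serve it; the resource type and attributes are hypothetical.
doc = json.loads("""{
  "data": [
    {
      "type": "node--tour_hotspot",
      "id": "c0ffee00-1111-2222-3333-444455556666",
      "attributes": {"title": "Student Center", "position": {"x": 1.5, "y": 0.2}}
    }
  ]
}""")

# A React (or any other) frontend walks the document the same way:
hotspots = [(item["attributes"]["title"], item["attributes"]["position"])
            for item in doc["data"]]
print(hotspots[0][0])
```

The frontend stays decoupled: it only depends on the JSON:API document shape, not on Drupal internals.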

It's exciting to see how Drupal's power and flexibility can be used beyond traditional web pages. If you are interested in working with Acquia on virtual reality applications, don't hesitate to contact the Acquia Labs team.

Special thanks to Chris Hamper for building the virtual reality application, and thank you to Ash Heath, Preston So and Drew Robertson for producing the demo videos.

June 05, 2018

I published the following diary on “Malicious Post-Exploitation Batch File“:

Here is another interesting file that I found while hunting. It is a malicious Windows batch file (.bat) which helps to exploit a freshly compromised system (or… to be used by a rogue user). I don’t have a lot of information about the file origin, I found it on VT (SHA256: 1a611b3765073802fb9ff9587ed29b5d2637cf58adb65a337a8044692e1184f2). The script is very simple and relies on standard windows system tools and external utilities downloaded when needed… [Read more]

[The post [SANS ISC] Malicious Post-Exploitation Batch File has been first published on /dev/random]

June 04, 2018

Microsoft acquires GitHub

Today, Microsoft announced it is buying GitHub in a deal that will be worth $7.5 billion. GitHub hosts 80 million source code repositories, and is used by almost 30 million software developers around the world. It is one of the most important tools used by software organizations today.

As the leading cloud infrastructure platforms — Amazon, Google, Microsoft, etc — mature, they will likely become functionally equivalent for the vast majority of use cases. In the future, it won't really matter whether you use Amazon, Google or Microsoft to deploy most applications. When that happens, platform differentiators will shift from functional capabilities, such as multi-region databases or serverless application support, to an increased emphasis on ease of use, the out-of-the-box experience, price, and performance.

Given multiple functionally equivalent cloud platforms at roughly the same price, the simplest one will win. Therefore, ease of use and out-of-the-box experience will become significant differentiators.

This is where Microsoft's GitHub acquisition comes in. Microsoft will most likely integrate its cloud services with GitHub; each code repository will get a button to easily test, deploy, and run the project in Microsoft's cloud. A deep and seamless integration between Microsoft Azure and GitHub could result in Microsoft's cloud being perceived as simpler to use. And when there are no other critical differentiators, ease of use drives adoption.

If you ask me, Microsoft's CEO, Satya Nadella, made a genius move by buying GitHub. It could take another ten years for the cloud wars to mature, and for us to realize just how valuable this acquisition was. In a decade, $7.5 billion could look like peanuts.

While I trust that Microsoft will be a good steward of GitHub, I personally would have preferred to see GitHub remain independent. I suspect that Amazon and Google will now accelerate the development of their own versions of GitHub. A single, independent GitHub would have maximized collaboration among software projects and developers, especially those that are Open Source. Having a variety of competing GitHubs will most likely introduce some friction.

Over the years, I had a few interactions with GitHub's co-founder, Chris Wanstrath. He must be happy with this acquisition as well; it provides stability and direction for GitHub, ends a 9-month CEO search, and is a great outcome for employees and investors. Chris, I want to say congratulations on building the world's biggest software collaboration platform, and thank you for giving millions of Open Source developers free tools along the way.

June 03, 2018

Isn't it about time that our centre for cybersecurity required government services such as the Belgian army to always communicate via e-mails that are at least signed (and hopefully also encrypted) with, for example, PGP? Yes, yes. We could even encrypt them. High tech in Belgium. Just imagine. Madness!

Imagine it. One could verify both the e-mail itself (the content, the message) and the sender, and the content could be encrypted both in transit and in storage. In the event of an "independent" investigation, we would have (mathematical) guarantees that everything is exactly as it was when it was sent.

All things that would have been very handy in the saga around the e-mails about whether or not our F-16 aircraft can keep flying longer.

The ICT services of the opposition parties could then receive a half-hour training on how, with PGP in hand, to cryptographically verify all of this.

P.S. I do realize that, in the world this is about, precisely the fact that certain things can no longer be traced afterwards is seen as a valuable feature.

We have the best cryptographers in the world right here in Leuven. But our Belgian army cannot implement this for their e-mails?

The title of this blog post comes from a recent Platformonomics article that analyzes how much Amazon, Google, Microsoft, IBM and Oracle are investing in their cloud infrastructure. It does that analysis based on these companies' publicly reported CAPEX numbers.

Capital expenditures, or CAPEX, is money used to purchase, upgrade, improve, or extend the life of long-term assets. Capital expenditures generally take two forms: maintenance expenditures (money spent on normal upkeep and maintenance) and expansion expenditures (money used to buy assets to grow the business, or to buy assets to actually sell). This could include buying a building, upgrading computers, acquiring a business, or, in the case of cloud infrastructure vendors, buying the hardware needed to grow their cloud infrastructure.

Building this analysis on CAPEX spending is far from perfect, as it includes investments that are not directly related to scaling cloud infrastructure. For example, Google is building subsea cables to improve their internet speed, and Amazon is investing a lot in its package and shipping operations, including the build-out of its own cargo airline. These investments don't advance their cloud services businesses. Despite these inaccuracies, CAPEX is still a useful indicator for measuring the growth of their cloud infrastructure businesses, simply because these investments dwarf others.

The Platformonomics analysis prompted me to do a bit of research on my own.

The evolution of Amazon, Alphabet, Google, IBM and Oracle's CAPEX between 2008 and 2018

The graph above shows the trailing twelve months (TTM) CAPEX spending for each of the five cloud vendors. CAPEX doesn't lie: cloud infrastructure services is clearly a three-player race. Only three cloud infrastructure companies are really growing: Amazon, Google (Alphabet) and Microsoft. Oracle and IBM are far behind, and their spending is not enough to keep pace with Amazon, Microsoft or Google.
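As a reminder, a trailing-twelve-months figure is simply the sum of the last four reported quarters; a minimal sketch with made-up numbers:

```python
def ttm(quarterly):
    """Trailing twelve months: the sum of the four most recent quarterly values."""
    if len(quarterly) < 4:
        raise ValueError("need at least four quarters")
    return sum(quarterly[-4:])

# Hypothetical quarterly CAPEX figures in $ billions (not real data),
# oldest first; only the last four contribute to the TTM number.
quarters = [2.1, 2.4, 2.9, 3.3, 3.8]
print(ttm(quarters))
```

Recomputing this every quarter is what produces the smooth TTM curves in the graphs.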

Amazon's growth in CAPEX is the most impressive. This becomes really clear when you look at the percentage growth:

The percentage growth of Amazon, Alphabet, Google, IBM and Oracle's CAPEX between 2008 and 2018

Amazon's CAPEX has exploded over the past 10 years. In relative terms, it has grown more than all other companies' CAPEX combined.

The scale is hard to grasp

To put the significance of these investments in cloud services in perspective, in the last 12 months, Amazon and Alphabet's CAPEX is almost 10x the size of Coca-Cola's, a company whose products are available in every grocery store, gas station, and vending machine in every town and country in the world. More than 3% of all beverages consumed around the world are Coca-Cola products. In contrast, the amount of money cloud infrastructure vendors are investing in CAPEX is hard to grasp.

The CAPEX of Amazon, Alphabet, Google vs Coca-Cola between 2008 and 2018
Disclaimers: As a public market investor, I'm long Amazon, Google and Microsoft. Also, Amazon is an investor in my company, Acquia.

June 02, 2018

This is an ode to the Drupal Association.


  1. Yesterday, I stumbled upon Customizing DrupalCI Testing for Projects, written by Ryan “Mixologic” Aslett. It contains detailed, empathic 1 explanations. He also landed d.o/node/2969363 to make Drupal core use this capability, and to set an example.
  2. I’ve been struggling in d.o/project/jsonapi/issues/2962461 to figure out why an ostensibly trivial patch would not just fail tests, but cause the testing infrastructure to fail in inexplicable ways after 110 minutes of execution time, despite JSON API test runs normally taking 5 minutes at most! My state of mind: (ノಠ益ಠ)ノ彡┻━┻
    Three days ago, Mixologic commented on the issue and did some DrupalCI infrastructure-level digging. I didn’t ask him. I didn’t ping him. He just showed up. He’s just monitoring the DrupalCI infrastructure!

In 2015 and 2016, I must have pinged Mixologic (and others, but usually him) dozens of times in the #drupal-infrastructure IRC channel about testbot/DrupalCI being broken yet again. Our testing infrastructure was frequently having troubles then; sometimes because Drupal was making changes, sometimes because DrupalCI was regressing, and surprisingly often because Amazon Web Services was failing.

Thanks to those two things in the past few days, I realized something: I can’t remember the last time I had to ping somebody about DrupalCI being broken! I don’t think I did it once in 2018. I’m not even sure I did in 2017! This shows what a massive improvement the Drupal Association contributed to the velocity of the Drupal project!

Of course, many others at the Drupal Association help make this happen, not just Ryan.

For example Neil “drumm” Drumm. He has >2800 commits on the customizations project! Lately, he’s done things like making newer & older releases visible on project release pages, exposing all historical issue credits, providing nicer URLs for issues and giving project maintainers better issue queue filtering. BTW, Neil is approaching his fifteenth Drupal anniversary!
Want to know about new features as they go live? Watch the change records; an RSS feed is available.

In a moment of frustration, I tweeted fairly harshly (uncalled for … sorry!) to @drupal_infra, and got a forgiving and funny tweet in response:

(In case it wasn’t obvious yet: Ryan is practically a saint!)

Thank you!

I know that the Drupal Association does much more than the above (an obvious example is organizing DrupalCons). But these are the ways in which they are most visible to me.

When things are running as smoothly as they are, it’s easy to forget that it takes hard work to get there and stay there. It’s easy to take this for granted. We shouldn’t. I shouldn’t. I did for a while, then realized … this blog post is the result!

A big thanks to everyone who works/worked at the Drupal Association! You’ve made a tangible difference in my professional life! Drupal would not be where it is today without you.

  1. Not once is there a “Just do [jargon] and it’ll magically work” in there, for example! There are screenshots showing how to navigate Jenkins’ (peculiar) UI to get at the data you need. ↩︎

May 30, 2018

Most VMware appliances (vCenter Appliance, VMware Support Appliance, vRealize Orchestrator) have the so-called VAMI: the VMware Appliance Management Interface, generally served via https on port 5480. VAMI offers a variety of functions, including "check updates" and "install updates". Some appliances offer to check/install updates from a connected CD iso, but the default is always to check online. How does that work?
VMware uses a dedicated website to serve the updates. Each appliance is configured with a repository URL containing a PRODUCT-ID and a VERSION-ID. The PRODUCT-ID is a hexadecimal code specific to the product: vRealize Orchestrator uses 00642c69-abe2-4b0c-a9e3-77a6e54bffd9, VMware Support Appliance uses 92f44311-2508-49c0-b41d-e5383282b153, and vCenter Server Appliance uses 647ee3fc-e6c6-4b06-9dc2-f295d12d135c. The VERSION-ID contains the current appliance version with ".latest" appended.
The appliance checks for updates by retrieving /manifest/manifest-latest.xml relative to the repository URL. This XML contains the latest available version in fullVersion and version (fullVersion includes the build number), pre- and post-install scripts, the EULA, and a list of updated rpm packages. Each rpm entry has a relative path that can be appended to the repository URL and downloaded. The update procedure downloads the manifest and rpms, verifies the checksums of the downloaded rpms, executes the preInstallScript, runs rpm -U on the downloaded rpm packages, executes the postInstallScript, displays the exit code, and prompts for a reboot.
With this information, you can set up your own local repository (for cases where internet access is impossible from the virtual appliances), or you can even execute the procedure manually. Be aware that a manual update would be unsupported. Using a different repository is supported by a subset of VMware appliances (e.g. VCSA, vRO) but not all (VMware Support Appliance).
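As a sketch of what the update check looks like, the following parses a mock manifest-latest.xml; the fullVersion/version elements follow the description above, but the exact VMware schema (element and attribute names such as relativePath) and the repository base URL are assumptions for illustration only:

```python
import xml.etree.ElementTree as ET

# Mock manifest modelled on the description above; the element and
# attribute names are an assumption, not a confirmed VMware schema.
MANIFEST = """<update>
  <fullVersion>7.4.0.21000 Build 8070245</fullVersion>
  <version>7.4.0.21000</version>
  <rpmList>
    <rpm relativePath="package-pool/vco-server-7.4.0.rpm"/>
    <rpm relativePath="package-pool/vco-configurator-7.4.0.rpm"/>
  </rpmList>
</update>"""

REPO_URL = ""  # hypothetical repository URL

root = ET.fromstring(MANIFEST)
latest = root.findtext("version")
# Each rpm's relative path is appended to the repository URL for download
rpm_urls = [REPO_URL + "/" + rpm.get("relativePath")
            for rpm in root.iter("rpm")]
print(latest)
print(rpm_urls[0])
```

A local mirror would simply serve this manifest plus the rpm files under the same relative paths.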

Firefox 60 was released a few weeks ago and now comes with support for the upcoming Web Authentication (WebAuthn) standard.

Other major web browsers weren't far behind. Yesterday, the release of Google Chrome 67 also included support for the Web Authentication standard.

I'm excited about it because it can make the web both easier and safer to use.

The Web Authentication standard will make the web easier, because it is a big step towards eliminating passwords on the web. Instead of having to manage passwords, we'll be able to use web-based fingerprints, facial authentication, voice recognition, a smartphone, or hardware security keys like the YubiKey.

It will also make the web safer, because it will help reduce or even prevent phishing, man-in-the-middle attacks, and credential theft. If you are interested in learning more about the security benefits of the Web Authentication standard, I recommend reading Adam Langley's excellent analysis.

When I have a bit more time for side projects, I'd like to buy a YubiKey 4C to see how it fits in my daily workflow, in addition to what it would look like to add Web Authentication support to Drupal and

May 29, 2018

I'm taking a brief look at the cheap quality PCB providers OshPark, Aisler and JLCPCB.

all 3

PCB quality

All 3 provide nice quality good looking PCBs. (Click on a picture to see the full scale photo)


all 3

As always in beautiful OshPark Purple. The only small downside compared to the others is the rough break-offs.


all 3

Looks great.


all 3

Looks great as well. No gold finish, but it still soldered great. A bit sad that they include a production code on the silk screen, which could be a problem for PCBs that remain visible.

Ease of Order


Just upload the .kicad_pcb file, very convenient. Shows a drawing of how your board will look.


Same, just upload the .kicad_pcb file and shows a drawing. Option to get a stencil.


Upload gerbers and shows a drawing. Option to get a stencil.


This is where it gets a bit more tricky to compare ;)


3 boards $1.55 shipped, as cheap as it gets for a 20 x 10 mm board.


3 boards, €5.70 shipped; still a good price.


This is of course a bigger board (the other two were the same size).

10 boards at $2 + $5.70 shipping gives $7.70.



  • ordered: April 30th 2018
  • shipped: May 8th 2018 (from USA)
  • arrived: May 15th 2018 (in Belgium)

Took 15 days from order to arrival.


  • ordered: April 30th 2018
  • shipped: May 9th 2018 (from Germany)
  • arrived: May 11th 2018 (in Belgium)

Took 11 days from order to arrival.


  • ordered: May 5th 2018
  • shipped: May 7th 2018 (from Singapore)
  • arrived: May 18th 2018 (in Belgium)

Took 13 days from order to arrival.


All three show impressively fast delivery and good quality boards. OshPark is still the king of cheap for tiny boards. JLCPCB gives you 10 boards, and could be cheaper for bigger boards. Aisler is the fastest, but only marginally.

Both Aisler and JLC have an option for a stencil which is interesting.

I'll be using all of them depending on the situation (need for stencil, quantity, board size, rush shipping, ...)

In the beginning of the year I started doing some iOS development for my POSSE plan. As I was new to iOS development, I decided to teach myself by watching short, instructional videos. Different people learn in different ways, but for me, videos tutorials were the most effective way to learn.

Given that recent experience, I'm very excited to share that all of the task tutorials in the Drupal 8 User Guide are now accompanied by video tutorials. These videos are embedded directly into every user guide page on You can see an example on the "Editing with the in-place editor" page.

These videos provide a great introduction to installing, administering, site building and maintaining the content of a Drupal-based website — all important skills for a new Drupalist to learn. Supplementing user guides with video tutorials is an important step towards improving our evaluator experience, as video can often convey a lot more than text.

Creating high-quality videos is hard and time-consuming work. Over the course of six months, the team at Drupalize.Me has generously contributed a total of 52 videos! I want to give a special shout-out to Joe Shindelar and the Drupalize.Me team for creating these videos and to Jennifer Hodgdon and Neil Drumm (Drupal Association) for helping to get each video posted on

What a fantastic gift to the community!

May 28, 2018

As I predicted, our government is being sued because it does too little to bring the children of Syria fighters to safety.

No matter how difficult this subject is, we must never condemn innocent children. These children did not choose what their parents are guilty of. Our country has the responsibility to take these children in, to care for them, and to offer them safety.

Even after the Second World War we did not make such a fuss about the children of collaborators. We cannot let this happen.

For me, this is unacceptable. Arbitrarily punishing innocent children ought itself to be punishable. It is a violation of human rights.

What is indecent?

It was such a beautiful day at the Belgian coast that we decided to go on a bike ride. We ended up doing a 44 km (27 miles) ride that took us from the North Sea beach, through the dunes into the beautiful countryside around Bruges.

Riante polder route

The photo shows the seemingly endless rows of poplar trees along a canal in Damme. The canal (left of the trees, not really visible in the photo) was constructed by Napoleon Bonaparte to enable the French army to move around much faster and to transport supplies more rapidly. At the time, canal boats were drawn by horses on roads alongside the canal. Today, many of these narrow roads have been turned into bike trails.

May 26, 2018

May 25, 2018

Since the release of Drupal 8.0.0 in November 2015, the Drupal 8 core committers have been discussing when and how we'll release Drupal 9. Nat Catchpole, one of Drupal 8's core committers, shared some excellent thoughts about what goes into making that decision.

The driving factor in that discussion is security support for Drupal 8’s third party dependencies (e.g. Symfony, Twig, Guzzle, jQuery, etc). Our top priority is to ensure that all Drupal users are using supported versions of these components so that all Drupal sites remain secure.

In his blog, Nat uses Symfony as an example. The Symfony project announced that it will stop supporting Symfony 3 in November 2021, which means that Symfony 3 won't receive security updates after that date. Consequently, by November 2021, we need to prepare all Drupal sites to use Symfony 4 or later.

Nothing has been decided yet, but the current thinking is that we have to move Drupal to Symfony 4 or later, release that as Drupal 9, and allow enough time for everyone to upgrade to Drupal 9 by November 2021. Keep in mind that this is just looking at Symfony, and none of the other components.

This proposal builds on top of work we've already done to make Drupal upgrades easy, so upgrades from Drupal 8 to Drupal 9 should be smooth and much simpler than previous upgrades.

If you're interested in the topic, check out Nat's post. He goes in more detail about potential release timelines, including how this impacts our thinking about Drupal 7, Drupal 8 and even Drupal 10. It's a complicated topic, but the goal of Nat's post is to raise awareness and to solicit input from the broader community before we decide our official timeline and release dates on

I published the following diary on “Antivirus Evasion? Easy as 1,2,3“:

For a while, ISC handlers have demonstrated several obfuscation techniques via our diaries. We always told you that attackers are trying to find new techniques to hide their content to not be flagged as malicious by antivirus products. Such of them are quite complex. And sometimes, we find documents that have a very low score on VT. Here is a sample that I found (SHA256: bac1a6c238c4d064f8be9835a05ad60765bcde18644c847b0c4284c404e38810)… [Read more]

[The post [SANS ISC] Antivirus Evasion? Easy as 1,2,3 has been first published on /dev/random]

Last weekend, over 29 million people watched the Royal Wedding of Prince Harry and Meghan Markle. While there is a tremendous amount of excitement surrounding the newlyweds, I was personally excited to learn that the royal family's website is built with Drupal! is the official website of the British royal family, and is visited by an average of 12 million people each year. Check it out at!

Royal uk

May 24, 2018

I published the following diary on “Blocked Does Not Mean Forget It“:

Today, organisations are facing regular waves of attacks which are targeted… or not. We deploy tons of security controls to block them as soon as possible before they successfully reach their targets. Due to the amount of daily generated information, most of the time, we don’t care for them once they have been blocked. A perfect example is blocked emails. But “blocked” does not mean that we can forget them, there is still valuable information in those data… [Read more]

[The post [SANS ISC] “Blocked” Does Not Mean “Forget It” has been first published on /dev/random]

We’re finalizing work on Autoptimize 2.4 with beta-3, which was just released. There are misc. changes under the hood, but the main functional change is the inclusion of image optimization!

This feature uses Shortpixel's smart image optimization proxy, where requests for images are routed through their URL and automagically optimized. As the image proxy is hosted on one of the best performing CDNs with truly global presence, obviously using HTTP/2 for parallelized requests; as the parameters are part of the URL rather than a query string; and as, behind the scenes, the same superb image optimization logic is used that can also be found in the existing Shortpixel plugin, you can expect great results from just ticking that “optimize image” checkbox on the “Extra” tab :-)

Image optimization will be completely free during the 2.4 Beta-period. After the official 2.4 release this will remain free up until a still to be defined threshold per domain, after which additional service can be purchased at Shortpixel’s.

If you’re already running AO 2.4 beta-2, you’ll be prompted to upgrade. And if you’d like to join the beta to test image optimization, you can download the zip-file here. The only thing I ask in return is feedback (bugs, praise, yay’s, meh’s, …)  ;-)


May 23, 2018

May 22, 2018

Adobe acquires Magento for $1.68 billion

Yesterday, Adobe announced that it agreed to buy Magento for $1.68 billion. When I woke up this morning, 14 different people had texted me asking for my thoughts on the acquisition.

Adobe acquiring Magento isn't a surprise. One of our industry's worst-kept secrets is that Adobe first tried to buy Hybris, but lost the deal to SAP; subsequently Adobe tried to buy DemandWare and lost out against Salesforce. It's evident that Adobe has been hungry to acquire a commerce platform for quite some time.

The product motivation behind the acquisition

Large platform companies like Salesforce, Oracle, SAP and Adobe are trying to own the digital customer experience market from top to bottom, which includes providing support for marketing, commerce, personalization, and data management, in addition to content and experience management and more.

Compared to the other platform companies, Adobe was missing commerce. With Magento under its belt, Adobe can better compete against Salesforce, Oracle and SAP.

While Salesforce, SAP and Oracle offer good commerce capability, they lack satisfactory content and experience management capabilities. I expect that Adobe closing the commerce gap will compel Salesforce, SAP and Oracle to act more aggressively on their own content and experience management gap.

While Magento has historically thrived in the SMB and mid-market, the company recently started to make inroads into the enterprise. Adobe will bring a lot of operational maturity; how to sell into the enterprise, how to provide enterprise grade support, etc. Magento stands to benefit from this expertise.

The potential financial outcome behind the acquisition

According to Adobe press statements, Magento has achieved "approximately $150 million in annual revenue". We also know that in early 2017, Magento raised $250 million in funding from Hillhouse Capital. Let's assume that $180 million of that is still in the bank. If we do a simple back-of-the-envelope calculation, we can subtract this $180 million from the $1.68 billion, and determine that Magento was valued at roughly $1.5 billion, or a 10x revenue multiple on Magento's trailing twelve months of revenue. That is an incredible multiple for Magento, which is primarily a licensing business today.

Compare that with Shopify, which is trading at a $15 billion valuation and has $760 million of trailing twelve month revenue. That valuation is good for a 20x multiple. Shopify deserves the higher multiple because it is the better business: all of its business is delivered in the cloud, and at 65% year-over-year revenue growth it is growing much faster than Magento.
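Under the assumptions above (the $180 million of remaining cash is an estimate, not a reported figure), the two multiples can be checked in a few lines:

```python
# Back-of-the-envelope valuation multiples from the figures quoted above
deal_price = 1.68e9       # Adobe's purchase price for Magento
cash_assumed = 0.18e9     # assumed cash still on Magento's balance sheet
magento_ttm_rev = 150e6   # "approximately $150 million in annual revenue"

magento_value = deal_price - cash_assumed
magento_multiple = magento_value / magento_ttm_rev

shopify_valuation = 15e9
shopify_ttm_rev = 760e6
shopify_multiple = shopify_valuation / shopify_ttm_rev

print(round(magento_multiple))   # ~10x revenue
print(round(shopify_multiple))   # ~20x revenue
```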

Regardless, one could argue that Adobe got a great deal, especially if it can accelerate Magento's transformation from a licensing business into a cloud business.

Most organizations prefer best-of-breed

While both the product and financial motivations behind this acquisition are seemingly compelling, I'm not convinced organizations want an integrated approach.

Instead of being confined to proprietary vendors' prescriptive suites and roadmaps, global brands are looking for an open platform that allows organizations to easily integrate with their preferred technology. Organizations want to build content-rich shopping journeys that integrate their experience management solution of choice with their commerce platform of choice.

We see this first hand at Acquia. These integrations can span various commerce platforms, including IBM WebSphere Commerce, Salesforce Commerce Cloud/Demandware, Oracle/ATG, SAP/hybris, Magento and even custom transaction platforms. Check out Quicken (Magento), Weber (Demandware), Motorola (Broadleaf Commerce), Tesla (custom to order a car, and Shopify to order accessories) as great examples of Drupal and Acquia working with various commerce platforms. And of course, we've quite a few projects with Drupal's native commerce solution, Drupal Commerce.

Owning Magento gives Adobe a disadvantage, because commerce vendors will be less likely to integrate with Adobe Experience Manager moving forward.

It's all about innovation through integration

Today, there is an incredible amount of innovation taking place in the marketing technology landscape (full-size image), and it is impossible for a single vendor to have the most competitive product suite across all of these categories. The only way to keep up with this unfettered innovation is through integrations.

Marketing Technology Landscape 2018
An image of the Marketing Technology Landscape 2018. For reference, here are the 2011, 2012, 2014, 2015, 2016 and 2017 versions of the landscape. It shows how fast the marketing technology industry is growing.

Most customers want an open platform that allows for open innovation and unlimited integrations. It's why Drupal and Acquia are winning, why the work on Drupal's web services is so important, and why Acquia remains committed to a best-of-breed strategy for commerce. It's also why Acquia has strong conviction around Acquia Journey as a marketing integration platform. It's all about innovation through integration, making those integrations easy, and removing friction from adopting preferred technologies.

If you acquire a commerce platform, acquire a headless one

If I were Adobe, I would have looked to acquire a headless commerce platform such as Elastic Path, Commerce Tools, Moltin, Reaction Commerce or even Salsify.

Today, there is a lot of functional overlap between Magento and Adobe Experience Manager — from content editing, content workflows, page building, user management, search engine optimization, theming, and much more. The competing functionality between the two solutions makes for a poor developer experience and for a poor merchant experience.

In a headless approach, the front end and the back end are decoupled, which means the experience or presentation layer is separated from the commerce business layer. There is a lot less overlap of functionality in this approach, and it provides a better experience for merchants and developers.

Alternatively, you could go for a deeply integrated approach like Drupal Commerce. It has zero overlap between its commerce, content management and experience building capabilities.

For Open Source, it could be good or bad

How Adobe will embrace Magento's Open Source community is possibly the most intriguing part of this acquisition — at least for me.

For a long time, Magento operated as Open Source in name, but wasn't very Open Source in practice. Over the last couple of years, the Magento team worked hard to rekindle its Open Source community. I know this because I attended and keynoted one of its conferences on this topic. I have also spent a fair amount of time with Magento's leadership team discussing this. Like other projects, Magento has been taking inspiration from Drupal.

For example, the introduction of Magento 2 allowed the company to move to GitHub for the first time, which gave the community a better way to collaborate on code and other important issues. The latest release of Magento cited 194 contributions from the community. While that is great progress, it is small compared to Drupal.

My hope is that these Open Source efforts continue now that Magento is part of Adobe. If they do, that would be a tremendous win for Open Source.

On the other hand, if Adobe makes Magento cloud-only, radically changes their pricing model, limits integrations with Adobe competitors, or doesn't value the Open Source ethos, it could easily alienate the Magento community. In that case, Adobe bought Magento for its install base and the Magento brand, and not because it believes in the Open Source model.

This acquisition also signals a big win for PHP. Adobe now owns a $1.68 billion PHP product, and this helps validate PHP as an enterprise-grade technology.

Unfortunately, Adobe has a history of being "Open Source"-second and not "Open Source"-first. It acquired Day Software in July 2010. This technology was largely made using open source frameworks — Apache Sling, Apache Jackrabbit and more — and was positioned as an open, best-of-breed solution for developers and agile marketers. Most of that has been masked and buried over the years and Adobe's track record with developers has been mixed, at best.

Will the same happen to Magento? Time will tell.

In March during TROOPERS’18, I discovered a very nice tiny device developed by Luca Bongiorni (see my wrap-up here): the WiFi HID Injector. To summarize what’s behind this name: it is a small evil USB device that offers a wireless access point for configuration and data exfiltration, an HID device that simulates a keyboard (like a Teensy or Rubber Ducky) and a serial port. This is perfect for the Evil Mouse Project!

Just after the conference, I bought my first device and started to play with it. The idea is to be able to control an air-gapped computer and exfiltrate data. The modus operandi is the following:

  1. Connect the evil USB device to the victim’s computer or ask them to do it (with some social engineering tricks)
  2. Once inserted, the USB device adds a new serial port and fires up the wireless network
  3. The attacker, located close enough to receive the wireless signal, takes control of the device and loads the payload
  4. The HID (keyboard) injects the payload by simulating keystrokes (like a Teensy) and executes it
  5. The payload sends data to the newly created serial port; the data is saved to a flat file on the USB device storage
  6. The attacker can download those files

By using the serial port, no suspicious network traffic is generated on the host, but the wireless channel is, of course, available if more speed is required to exfiltrate data. Note that everything is configurable and the WHID can also automatically connect to another wireless network.

During his presentation, Luca explained how he weaponized another USB device to hide the WHID. For fun, he chose a USB fridge because people like this kind of goodies. IMHO, a USB fridge will not fit on all types of desks, especially for C-levels… Why not use a device that everybody needs: a mouse?

Fortunately, most USB mice have enough space inside to hide extra cables and the WHID. To connect both USB devices to the same cable, I found a nano-USB hub with two ports:

Nano-USB Hub

This device does a perfect job on a circuit of only 12x12mm! I bought a second WHID device and found an old USB mouse ready to be weaponized and start a new life. The idea is the following: cut the original USB cable and solder it to the nano hub, then reconnect the mouse and the WHID to the available ports (to gain some space, the USB connector was unsoldered).

My soldering-fu is not good enough to assemble such small components by myself, so my friend @doegox did a wonderful job, as you can see from the pictures below. Thanks to him!

Inside the Evil Mouse

Inside the Evil Mouse

Once all the cables were reorganized properly inside the mouse, it looks completely harmless (ok, it is an old Microsoft mouse used for this proof-of-concept) and is 100% ready to use. Just plug it into the victim’s computer and have fun:

Final Mouse
The Evil Mouse

If you need to build one for a real pentest or red-team exercise, here is a list of the components:

  • A new mouse labelled with a nice brand – it will be even more trusted (9.69€) on Amazon
  • A WHID USB device (13.79€) on AliExpress
  • A nano-USB hub (8.38€) on Tindie

At this low price, you can leave the device on site; no need to recover it after the engagement. Offer it to the victim!

Conclusion: Never trust USB devices (and not only storage devices…)

[The post The Evil Mouse Project has been first published on /dev/random]

I published the following diary on “Malware Distributed via .slk Files“:

Attackers are always trying to find new ways to infect computers by luring not only potential victims but also security controls like anti-virus products. Do you know what SYLK files are? SYmbolic LinK files (they use the .slk extension) are Microsoft files used to exchange data between applications, specifically spreadsheets… [Read more]

[The post [SANS ISC] Malware Distributed via .slk Files has been first published on /dev/random]

May 21, 2018

Sometimes, a security incident starts with an email. A suspicious email can be provided to a security analyst for further investigation. Most of the time, the mail is provided in EML or “Electronic Mail Format“. EML files store the complete message in a single file: SMTP headers, mail body and all MIME content. Opening such a file in a mail client can be dangerous if dynamic content is loaded (remember the EFAIL vulnerability disclosed a few days ago?) and reading a big file in a text editor does not give you a quick overview of the mail content. To help with this task, I wrote a Python script that parses an EML file and generates a PNG image based on its content. In a few seconds, an analyst is able to “see” what’s in the mail and can decide if further investigation is useful. Here is an example of a generated image:

EML Rendering Sample
The script reads the SMTP headers and extracts the most important ones. It extracts the body and the “text/plain” and “text/html” MIME parts. Attached images are decoded and displayed. If other MIME parts are found, they are just listed below the email. The image is generated using wkhtmltoimage, which requires some dependencies. For easier deployment, I built a Docker container ready to use:
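The first stage of such a tool (parsing the EML and inventorying its headers and MIME parts) can be sketched with Python's standard email library; rendering is then a matter of feeding the HTML parts to wkhtmltoimage. This is a simplified sketch, not the actual EMLRender code:

```python
from email import policy
from email.parser import BytesParser

def summarize_eml(raw_bytes):
    """Extract the most important headers and list all MIME parts."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_bytes)
    return {
        "headers": {h: msg[h] for h in ("From", "To", "Subject", "Date") if msg[h]},
        "parts": [p.get_content_type() for p in msg.walk() if not p.is_multipart()],
    }

# A tiny hand-made sample message for illustration.
sample = b"""From: alice@example.org
To: bob@example.org
Subject: Invoice
Content-Type: text/plain

Please see the attached file.
"""
print(summarize_eml(sample))
```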

$ docker pull rootshell/emlrender
$ docker run -d -p 443:443 rootshell/emlrender

The container runs a web service via HTTPS (with a self-signed certificate at the moment). It provides a classic web interface and a REST API. Once deployed, the first step is to configure users to access the rendering engine. Initialize the users’ database with an ‘admin’ user and create additional users if required:

$ curl -k -X POST -d '{"password":"strongpw"}' https://<server>/init
[{"message": "Users database successfully initialized"}, {"password": "strongpw"}]
$ curl -k -u admin:secretpw -X POST -d '{"username":"john", "password":"strongpw"}' https://<server>/users/add
[{"message": "Account successfully created", "username": "john", "password": "strongpw"}]

Note: if you don’t provide a password, a random one will be generated for you.

The REST API provides the following commands, whose names are self-explanatory:

  • POST /init
  • GET /users/list
  • POST /users/add
  • POST /users/delete
  • POST /users/resetpw

To render EML files, use the REST API or a browser. Via the REST API:

$ curl -k -u john:strongpw -F file=@"spam.eml" -o spam.png
$ curl -k -u john:strongpw -F file=@"" -F password=infected \
  -o malicious.png

You can see that EML files can be submitted as-is or in a ZIP archive protected by a password (which is common when security analysts exchange samples).

Alternatively, point your browser to the web interface; a small help page is available there.

EMLRender is not a sandbox. If links are present in the HTML code, they will be visited. It is recommended to deploy the container on a “wild” Internet connection and not on your corporate network.

The code is available on my GitHub repository and a ready-to-use Docker image is available on Docker hub. Feel free to post improvement ideas!

[The post Rendering Suspicious EML Files has been first published on /dev/random]

May 20, 2018


Back in February 2013, then-coworker Romain S. showed me the new trend of programming editors that do continuous compilation while you type, giving immediate feedback on your code. In parallel, I was also working on 3D modeling for my 3D printer using the OpenSCAD program. OpenSCAD works by writing code in its custom language and then having it rendered.

An example:

openSCAD image

I had this idea of combining these two approaches to make an electronics footprint generator. And so the development of the original madparts program started.

To move quickly, I decided to write the program in Python, but I wanted a compact language for the footprints, so I decided to use CoffeeScript, a functional JavaScript-like language that compiles to JavaScript. I also used the OpenGL Shading Language for most of the drawing, because I found it interesting and hadn't touched it before.

Several changes were made along the way, and at the end of March 2013 a first release was made. It supported both Kicad and Eagle, on Linux, Mac and Windows.

After this, a command line version of the program was added, along with Debian packaging, and 1.1 was released in May 2013.

In August 2013, 1.2 was released, which added support for the then brand-new symbolic-expressions-based file format in Kicad.

In August 2015, the 2.0 release followed with mostly bugfixes and an update to the file format, but it also completely removed the file management system that used to exist, simplifying the program to just work on one file.

At this point the program pretty much did all I needed so further development stalled except for some minor bugfixes.

madparts image

Rust rewrite

In August 2016 I had been playing with the then fairly new Rust programming language, and decided that a rewrite in it, simplifying the program even further, would be fun to do.

The following were my initial goals:

  • ditch the editor, and support any external editor
  • use python instead of coffeescript
  • introduce KLC (Kicad Library Convention) checking
  • only support kicad export

Some brief searching showed that Qt support (which I used in the Python version) was limited at best, but GTK+ and cairo support seemed quite good with reasonable APIs (gtk-rs), and Python support seemed to be OK with PyO3.


file monitoring

By monitoring the file being edited by an external editor via inotify, the program can see when a change is saved and render the file. There is a practical caveat though: most editors actually write the content to a temporary file and move that file over the old file on save (to avoid data loss in case of a system crash or power failure). This means that in practice, instead of monitoring the file, you have to monitor the containing directory.

    let _file_watch = ino.add_watch(
        // watch the containing directory, not the file itself
        // (`footprint_dir` is an illustrative name for that path)
        footprint_dir,
        WatchMask::CREATE | WatchMask::MOVED_TO | WatchMask::CLOSE_WRITE,
    )?;
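The rename-on-save behaviour that makes directory watching necessary is easy to reproduce. This Python sketch mimics what editors do on save, and shows that the target file ends up with a new inode, which is why a watch on the original file goes silent:

```python
import os
import tempfile

def atomic_save(path, data):
    # Write to a temp file in the same directory, then rename it over the
    # target -- the save pattern most editors use to avoid data loss.
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    with os.fdopen(fd, "w") as f:
        f.write(data)
    os.replace(tmp, path)  # atomic rename; the target gets a new inode

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "footprint.py")
    with open(target, "w") as f:
        f.write("old")
    inode_before = os.stat(target).st_ino
    atomic_save(target, "new")
    inode_after = os.stat(target).st_ino
    with open(target) as f:
        content = f.read()
    print(inode_before != inode_after)  # True: the old inode is gone
```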

python interfacing

Using PyO3, running a Python interpreter inside of Rust is pretty straightforward. The biggest issue I ran into was dealing with the error situations:

  • the python file is invalid python (compile time error)
  • the python file fails to run (runtime error)

While it is possible to get errors out of Python into Rust, this is tedious and verbose. After some testing I came up with a simple solution: do it all in Python. Rust provides Python with the filename to process, and when processing fails, this is captured in Python as well and converted into a simple Error object. From the Rust perspective, the Python code always succeeds; it just has to check for this error object and display the contained message when it is there, instead of drawing the footprint.

def handle_load_python(filename):
    try:
        exec(open(filename).read(), globals(), globals())
        return flatten(footprint())
    except Exception:
        import sys, traceback
        exc_type, exc_value, exc_traceback = sys.exc_info()
        message = "".join(traceback.format_exception(exc_type, exc_value, exc_traceback))
        e = PythonError(message)
        return [e]


Using Cairo, rendering is pretty straightforward as well. A few traits on the Element type make it work:

trait BoundingBox {
    fn bounding_box(&self) -> Bound;
}

pub trait DrawElement {
    fn draw_element(&self, &cairo::Context, layer: Layer);
}

BoundingBox calculates the Bound of an element. By knowing all the bounds and combining them, the program can automatically scale the drawing canvas correctly.

DrawElement allows an element to draw itself on a certain Layer. This is called layer by layer for each element to get proper z-axis stacking of the drawings.
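The bound-combining idea is simple enough to sketch. Here it is in Python rather than the program's Rust, with bounds as (min_x, min_y, max_x, max_y) tuples; the function names are mine, not madparts' actual API:

```python
def combine_bounds(bounds):
    """Combine per-element bounds into one overall bound."""
    min_x = min(b[0] for b in bounds)
    min_y = min(b[1] for b in bounds)
    max_x = max(b[2] for b in bounds)
    max_y = max(b[3] for b in bounds)
    return (min_x, min_y, max_x, max_y)

def canvas_scale(bound, canvas_w, canvas_h, margin=0.9):
    """Pick a uniform scale so the combined bound fits on the canvas."""
    min_x, min_y, max_x, max_y = bound
    w, h = max_x - min_x, max_y - min_y
    return margin * min(canvas_w / w, canvas_h / h)

overall = combine_bounds([(-1, -2, 1, 2), (0, 0, 3, 1)])
print(overall, canvas_scale(overall, 400, 400))
```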

openSCAD image


KLC is supported by just executing the Python KLC tool. The result is displayed in the KLC tab.

openSCAD image


Exporting just saves the footprint as a .kicad_mod file for use in Kicad.


This is release 1.0 and the first public release. This means the program works, but it is still far from feature-complete. I'm adding more features as I need them for footprints. Documentation is not available yet, but I suggest looking at the examples in the footprint/ subdirectory, or looking in src/ directly to see what is supported by the python format.

For now the program has only been tested on Linux, but it should also run, with perhaps minimal changes, on OSX or Windows. I'm always happy to get a pull request for that on github.

Further plans

More features will be added as needed. Other things planned:

  • command line direct conversion option for scripting/building
  • possibility of having arguments for the footprint() function, to allow generating collections of footprints with one script

(this blog post was originally posted at

May 19, 2018

I published the following diary on “Malicious Powershell Targeting UK Bank Customers”:

I found a very interesting sample thanks to my hunting rules… It is a PowerShell script that was uploaded on VT for the first time on the 16th of May from UK. The current VT score is still 0/59. The upload location is interesting because the script targets major UK bank customers as we will see below… [Read more]


[The post [SANS ISC] Malicious Powershell Targeting UK Bank Customers has been first published on /dev/random]

May 17, 2018

As web applications have evolved from static pages to application-like experiences, end users' expectations of websites have become increasingly demanding. JavaScript, partnered with effective user-experience design, enables the seamless, instantaneous interactions that users now expect.

The Drupal project anticipated this trend years ago and we have been investing heavily in making Drupal API-first ever since. As a result, more organizations are building decoupled applications served by Drupal. This approach allows organizations to use modern JavaScript frameworks, while still benefiting from Drupal's powerful content management capabilities, such as content modeling, content editing, content workflows, access rights and more.

While organizations use JavaScript frameworks to create visitor-facing experiences with Drupal as a backend, Drupal's own administration interface has not yet embraced a modern JavaScript framework. There is high demand for Drupal to provide a cutting-edge experience for its own users: the site's content creators and administrators.

At DrupalCon Vienna, we decided to start working on an alternative Drupal administrative UI using React. Sally Young, one of the initiative coordinators, recently posted a fantastic update on our progress since DrupalCon Vienna.

Next steps for Drupal's API-first and JavaScript work

While we made great progress improving Drupal's web services support and improving our JavaScript support, I wanted to use this blog post to compile an overview of some of our most important next steps:

1. Stabilize the JSON API module

JSON API is a widely-used specification for building web service APIs in JSON. We are working towards adding JSON API to Drupal core as it makes it easier for JavaScript developers to access the content and configuration managed in Drupal. There is a central plan issue that lists all of the blockers for getting JSON API into core (comprehensive test coverage, specification compliance, and more). We're working hard to get all of them out of the way!
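Part of what makes the specification attractive is its predictable envelope: every response wraps resources in a top-level "data" member with "type", "id" and "attributes", so client code can consume it generically. The sketch below follows the JSON API (jsonapi.org) document structure; the field values are invented for illustration, not real Drupal output:

```python
import json

# A JSON API style response body (envelope per the jsonapi.org spec;
# the resource content here is made up for the example).
body = """
{
  "data": [
    {"type": "node--article", "id": "a1",
     "attributes": {"title": "Hello world"}},
    {"type": "node--article", "id": "a2",
     "attributes": {"title": "Second post"}}
  ]
}
"""

doc = json.loads(body)
titles = [item["attributes"]["title"] for item in doc["data"]]
print(titles)
```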

2. Improve our JavaScript testing infrastructure

Drupal's testing infrastructure is excellent for testing PHP code, but until now, it was not optimized for testing JavaScript code. As we expect the amount of JavaScript code in Drupal's administrative interface to dramatically increase in the years to come, we have been working on improving our JavaScript testing infrastructure using Headless Chrome and Nightwatch.js. Nightwatch.js has already been committed for inclusion in Drupal 8.6, however some additional work remains to create a robust JavaScript-to-Drupal bridge. Completing this work is essential to ensure we do not introduce regressions, as we proceed with the other items in our roadmap.

3. Create designs for a React-based administration UI

Having a JavaScript-based UI also allows us to rethink how we can improve Drupal's administration experience. For example, Drupal's current content modeling UI requires a lot of clicking, saving and reloading. By using React, we can reimagine our user experience to be more application-like, intuitive and faster to use. We still need a lot of help to design and test different parts of the Drupal administration UI.

4. Allow contributed modules to use React or Twig

We want to enable modules to provide either a React-powered administration UI or a traditional Twig-based administration UI. We are working on an architecture that can support both at the same time. This will allow us to introduce JavaScript-based UIs incrementally instead of enforcing a massive paradigm shift all at once. It will also provide some level of optionality for modules that want to opt-out from supporting the new administration UI.

5. Implement missing web service APIs

While we have been working for years to add web service APIs to many parts of Drupal, not all of Drupal has web services support yet. For our React-based administration UI prototype, we decided to implement a new permission screen. We learned that Drupal lacked the necessary web service APIs to retrieve a list of all available permissions in the system. This led us to create a support module that provides such an API. This support module is a temporary solution that helped us make progress on our prototype; the goal is to integrate these APIs into core itself. If you want to contribute to Drupal, creating web service APIs for various Drupal subsystems might be a great way to get involved.

6. Make the React UI extensible and configurable

One of the benefits of Drupal's current administration UI is that it can be configured (e.g. you can modify the content listing because it has been built using the Views module) and extended by contributed modules (e.g. the Address module adds a UI that is optimized for editing address information). We want to make sure that in the new React UI we keep enough flexibility for site builders to customize the administrative UI.

All decoupled builds benefit

All decoupled applications will benefit from the six steps above; they're important for building a fully-decoupled administration UI, and for building visitor-facing decoupled applications.

Each step is useful for decoupling visitor-facing front-ends, for decoupling the administration backend, or for both:
1. Stabilize the JSON API module
2. Improve our JavaScript testing infrastructure
3. Create designs for a React-based administration UI
4. Allow contributed modules to use React or Twig
5. Implement missing web service APIs
6. Make the React UI extensible and configurable


Over the past three years we've been making steady progress to move Drupal to a more API-first and JavaScript-centric world. It's important work given a variety of market trends in our industry. While we have made excellent progress, there are more challenges to be solved. We hope you like our next steps, and we welcome you to get involved with them. Thank you to everyone who has contributed so far!

Special thanks to Matt Grill and Lauri Eskola for co-authoring this blog post and to Wim Leers, Gabe Sullice, Angela Byron, and Preston So for their feedback during the writing process.

Two big “maintainability” milestones have been hit in the past few days:

1. rest.module now is in a maintainable state

For the first time ever, the issue tracker for the rest.module Drupal core component fits on a single page: 46 open issues (down from 48), of which several are close to RTBC, so that number will likely still go down. Breakdown:

  • 19 of those 48 are feature requests.
  • 6 are “plan” issues.
  • At least 10 issues are related to “REST views” — for which we are only fixing critical bugs.
  • And 12 are postponed — blocked on other subsystems usually.

(There is overlap among those breakdown bullets.)

Finally the REST module is getting to a place where it is maintainable and we’re not extinguishing whatever the current fire is! It’s been a long road, but we’re getting there!

2. Instilling API-First responsibility

Two days ago, #2910883: Move all entity type REST tests to the providing modules landed, which is a huge milestone in making Drupal 8 truly API-First: every module now owns its REST test coverage, which conveys the fact that every component/module is responsible for its own REST API-compatibility, rather than all responsibility lying in the rest.module component!

This is largely made possible by \Drupal\Tests\rest\Functional\EntityResource\EntityResourceTestBase, which is a base test class that each entity type should subclass to test how it behaves when exposed via REST. Every entity type in core has tests based on it, but contrib and custom entity types are encouraged to do the same!

API-First ecosystem

But the rest.module being stable is not the only thing that matters — it’s a key part, but not the only part of API-First Drupal. The remaining challenges lie elsewhere in Drupal: the issues tagged API-First Initiative are now mainly in the modules providing field types, entity types, Entity/Field API and Configuration API.

The good thing is that fixes to any of those always help the entire API-First ecosystem.

If you want to follow along a bit more closely, or want the news right as it happens, follow REST: top priorities for Drupal 8.6.x!

Me and Nokia, we go a long way back; my first mobile phone was a trusty Nokia 3210 and its unique gaming proposition, Snake.

I switched it for a Nokia 7110 because I was thrilled to be able to go on the Internet. The CSD data transport protocol, WAP and WML were very limited however, and the phone crashed regularly (apparently it was the first Nokia with the Series 40 OS, and it showed), so that didn’t last too long really.

A couple of years of infidelity later, while half the world went crazy about those iPhones and the other half was busy looking very business-like on their Blackberrys, I returned to Nokia for the Symbian-powered Nokia e61i, which I absolutely loved, not least because of the hardware keyboard (a feature I missed on smartphones for years after I bailed on my e61i).

And now I, the prodigal son, after a long string of good, bad and broken smartphones, have finally returned home, buying a 2018 Nokia 6.1 with 64GB internal memory (and 4GB of RAM) for under €300. Great build quality with an aluminum unibody shell, a 5.5-inch screen, fast charging, a fingerprint reader, a Zeiss lens … And it is running bloat-less stock Android 8.1, and given the Android One stamp it will receive continuous security updates and 2 years of OS upgrades. After 2 weeks of usage I can say this is the best smartphone I have owned to date! It sure feels good to have come back home ;-)

May 15, 2018

I have a rather sizeable DVD collection. The database that I created of them a few years back after I'd had a few episodes where I accidentally bought the same movie more than once claims there's over 300 movies in the cabinet. Additionally, I own a number of TV shows on DVD, which, if you count individual disks, will probably end up being about the same number.

A few years ago, I decided that I was tired of walking to the DVD cabinet, taking out a disc, and placing it in the reader. That instead, I wanted to digitize them and use kodi to be able to watch a movie whenever I felt like it. So I made some calculations, and came up with a system with enough storage (on ZFS, of course) to store all the DVDs without needing to re-encode them.

I got started on ripping most of the DVDs using dvdbackup, but it quickly became apparent that I'd made a miscalculation; where I thought that most of the DVDs would be 4.7G ones, it turns out that most commercial DVDs are actually of the 9G type. Come to think of it, that does make a lot of sense. Additionally, now that I had a home server with some significant redundant storage, I found that I had some additional uses for it. The storage that I had, vast though it may be, wouldn't suffice.

So, I gave this some more thought, but then life interfered and nothing happened for a few years.

Recently however, I've picked it up again, changing my workflow. I started using handbrake to re-encode the DVDs so they wouldn't take up quite so much space; having chosen VP9 as my preferred codec, I end up storing the DVDs as about 1 to 2 G per main feature, rather than the 8 to 9 that it used to be -- a significant gain. However, my first workflow wasn't very efficient; I would run the handbrake GUI from my laptop on ssh -X sessions to multiple machines, encoding the videos directly from DVD that way. That worked, but it meant I couldn't shut down my laptop to take it to work without interrupting work that was happening; also, it meant that if a DVD finished encoding in the middle of the night, I wouldn't be there to replace it, so the system would be sitting idle for several hours. Clearly some form of improvement was necessary if I was going to do this in any reasonable amount of time.

So after fooling around a bit, I came up with the following:

  • First, I use dvdbackup -M -r a to read the DVD without re-encoding anything. This can be done at the speed of the optical medium, and can therefore be done much more efficiently than to use handbrake directly from the DVD. The -M option tells dvdbackup to read everything from the DVD (to make a mirror of it, in effect). The -r a option tells dvdbackup to abort if it encounters a read error; I found that DVDs sometimes can be read successfully if I eject the drive and immediately reinsert it, or if I give the disk another clean, or even just try again in a different DVD reader. Sometimes the disk is just damaged, and then using dvdbackup's default mode of skipping the unreadable blocks makes sense, but not in a first attempt.
  • Then, I run a small little perl script that I wrote. It basically does two things:

    1. Run HandBrakeCLI -i <dvdbackup output> --previews 1 -t 0, parse its stderr output, and figure out what the first and the last titles on the DVD are.
    2. Run qsub -N <movie name> -v FILM=<dvdbackup output> -t <first title>-<last title> convert-film
  • The convert-film script is a bash script, which (in its first version) did this:

    mkdir -p "$OUTPUTDIR/$FILM/tmp"
    HandBrakeCLI -x "threads=1" --no-dvdnav -i "$INPUTDIR/$FILM" -e vp9 -E copy -T -t $SGE_TASK_ID --all-audio --all-subtitles -o "$OUTPUTDIR/$FILM/tmp/T${SGE_TASK_ID}.mkv"

    Essentially, that converts a single title to a VP9-encoded matroska file, with all the subtitles and audio streams intact, and forcing it to use only one thread -- having it use multiple threads is useful if you care about a single DVD converting as fast as possible, but I don't, and having four DVDs on a four-core system all convert at 100% CPU seems more efficient than having two convert at about 180% each. I did consider using HandBrakeCLI's options to only extract the "interesting" audio and subtitle tracks, but I prefer to not have dubbed audio (to have subtitled audio instead); since some of my DVDs are originally in non-English languages, doing so gets rather complex. The audio and subtitle tracks don't take up that much space, so I decided not to bother with that in the end.
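The title-discovery step of that little perl script can be sketched in a few lines. The exact stderr format is an assumption based on HandBrake's usual "+ title N:" scan lines; in the real workflow the text below would come from running `HandBrakeCLI -i <dvdbackup output> --previews 1 -t 0`:

```python
import re

def title_range(scan_stderr):
    """Parse a 'HandBrakeCLI -t 0' scan and return (first, last) title."""
    titles = [int(m.group(1))
              for m in re.finditer(r"^\+ title (\d+):", scan_stderr, re.M)]
    return (min(titles), max(titles)) if titles else None

# Fake scan output for illustration.
sample = "scan: DVD has 3 title(s)\n+ title 1:\n+ title 2:\n+ title 3:\n"
print(title_range(sample))  # (1, 3)
```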

The use of qsub, which submits the script into gridengine, allows me to hook up several encoder nodes (read: the server plus a few old laptops) to the same queue.
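For the curious, that dispatch step can also be sketched in plain shell. This is a sketch, not the actual perl script: it assumes HandBrakeCLI's usual "+ title N:" scan lines on stderr, and reuses the queue and script names from above:

```shell
#!/bin/sh
# Print the title numbers found in a HandBrakeCLI scan log (read from stdin).
titles_from_scan() {
  sed -n 's/^+ title \([0-9][0-9]*\):.*/\1/p'
}

# Scan a dvdbackup output directory and submit one gridengine task per title.
submit_film() {
  film=$1
  scan=$(HandBrakeCLI -i "$film" --previews 1 -t 0 2>&1 >/dev/null)
  first=$(printf '%s\n' "$scan" | titles_from_scan | head -n 1)
  last=$(printf '%s\n' "$scan" | titles_from_scan | tail -n 1)
  qsub -N "$film" -v FILM="$film" -t "$first-$last" convert-film
}
```

The regex is the fragile part: if a future HandBrake version changes its scan output, any script parsing stderr needs the same adjustment.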

That went pretty well, until I wanted to figure out how far along something was. HandBrakeCLI provides progress information on stderr, and I can just tail -f the stderr output logs, but that really only works well for one DVD at a time, not if you're trying to follow along with about a dozen of them.

So I made a database, and wrote another perl script. The latter parses the stderr output of HandBrakeCLI, fishes out the progress information, and puts the completion percentage as well as the ETA into a database. Then it became interesting:

CREATE FUNCTION transjob_notify() RETURNS trigger AS $$
BEGIN
  IF (TG_OP = 'INSERT') OR (TG_OP = 'UPDATE' AND (NEW.progress != OLD.progress) OR NEW.finished = TRUE) THEN
    PERFORM pg_notify('transjob', row_to_json(NEW)::varchar);
  END IF;
  RETURN NEW;
END $$ LANGUAGE plpgsql;
CREATE TRIGGER transjob_tcn_trigger AFTER INSERT OR UPDATE ON transjob -- table name assumed
  FOR EACH ROW EXECUTE PROCEDURE transjob_notify();

This uses PostgreSQL's asynchronous notification feature to send out a notification whenever an interesting change has happened to the table.

#!/usr/bin/perl -w

use strict;
use warnings;

use Mojolicious::Lite;
use Mojo::Pg;


helper dbh => sub { state $pg = Mojo::Pg->new->dsn("dbi:Pg:dbname=transcode"); };

websocket '/updates' => sub {
    my $c = shift;
    my $cb = $c->dbh->pubsub->listen(transjob => sub { $c->send(pop) });
    $c->on(finish => sub { shift->dbh->pubsub->unlisten(transjob => $cb) });
};

app->start;
This uses the Mojolicious framework and Mojo::Pg to send out the payload of the "transjob" notification (which we created with the FOR EACH ROW trigger inside PostgreSQL earlier, and which contains the JSON version of the table row) over a WebSocket. Then it's just a small matter of programming to write some javascript which dynamically updates the webpage whenever that happens, and Tadaa! I have an online overview of the videos that are transcoding, and how far along they are.

That only requires me to keep the queue non-empty, which I can easily do by running dvdbackup a few times in parallel every so often. That's a nice Saturday afternoon project...

May 14, 2018

I just published a new update of my imap2thehive tool. A quick reminder: this tool polls an IMAP mailbox and feeds an instance of TheHive with processed emails. This new version can extract interesting IOCs from the email body and attached HTML files. The following indicators are supported:

  • IP addresses
  • Domains
  • FQDNs
  • URLs
  • Email addresses
  • Filenames
  • Hashes (MD5, SHA1, SHA256)

To use it, add the following directive in the configuration file:

observables: true

Newly created cases will contain the IOCs found. They will be tagged with the same TLP level as the case.

The script is available here.

[The post Imap2TheHive: Support for Observables has been first published on /dev/random]

We spent the weekend exploring Tuscany by car. Each day we drove through vineyards, hilltop towns and medieval cities on narrow, winding roads that often turned into unpaved backroads. Wanting to get the most out of the trip, we stopped from time to time to take in the amazing views and eat pecorino cheese.

Rolling hills with vineyards

Terracotta roof tiles


A while back, Microsoft announced it would ship updates to both its RDP client & server components to resolve a critical security vulnerability. That rollout is now happening and many clients have received auto-updates for their client.

As a result, you might see this message/error when connecting to an unpatched Windows server:

It refers to CredSSP updates for CVE-2018-0886, which further explains the vulnerability and why it's been patched now.

But here's the catch: if your client is updated but your server isn't (yet), you can no longer RDP to that machine. Here are a couple of fixes:

  1. Find an old computer/RDP client to connect with
  2. Get console access to the server to run the updates & reboot the machine

If your client has been updated, there's no way to connect to an unpatched Windows server via Remote Desktop anymore.
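For completeness: Microsoft's guidance for CVE-2018-0886 also documents a temporary client-side override, the AllowEncryptionOracle policy. Setting it to 2 ("Vulnerable") lets an updated client connect to unpatched servers again, at the cost of re-exposing the client to the attack, so treat it as a stopgap and revert it once the server is patched. As a .reg fragment:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters]
"AllowEncryptionOracle"=dword:00000002
```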

The post Remote Desktop error: CredSSP encryption oracle remediation appeared first on

May 11, 2018

How the iMac was Apple's financial lifeline (© Asymco)

I love this graph. It shows that for some time, Apple's primary source of revenue was the sale of the Macintosh computer. The Macintosh provided Apple with a bridge between the desktop era and the mobile era, represented by the two clusters on the graph. That bridge was a financial lifeline. Without it, Apple might not have survived.

May 10, 2018

The Sport Loop is the most comfortable band for the Apple Watch

I've been using my new Apple Watch 3 for several months, and recently I've been in the market for a new band. Previously, I was using a standard synthetic rubber band. I'd come home from work, and the first thing I wanted to do was take my Apple Watch off. I didn't like the clammy feel of the band, and the fit was either too loose or too tight. This week, I decided to try the new Sport Loop.

I'm currently in Chicago visiting our Acquia office, and it's pretty warm out. The Sport Loop has proven to be a great alternative. It's made of woven nylon, it's breathable, and it has a little bit of stretch. It's not going to win fashion awards, but it is comfortable enough to wear all day, and I no longer feel the urge to take off my watch in the evening.

May 09, 2018

Becoming normal is a good thing. When we are normal, we can make good legislation.

Such legislation is important. For example, we have to think about what we will do with the children of Syria fighters.

Those children are innocent. As a country, we therefore bear a responsibility. Yet we are not taking that responsibility.

We also have to think about what we will do with political parties that question our fundamental rights.

Questioning the Belgian fundamental rights is protected by freedom of expression. Does that freedom of expression equal the freedom to found a party?

We could not answer these questions while we felt threatened as a society.

When we feel normal and safe, perhaps we can?

I therefore argue for becoming normal again.

I mean, look at the conversation we’re having right now. You’re certainly willing to risk offending me in the pursuit of truth. Why should you have the right to do that? It’s been rather uncomfortable.

– Jordan Peterson, 2018

In this blog post I share the Guidelines for New Software Projects that we use at Wikimedia Deutschland.

I wrote down these guidelines recently after a third party contracted by Wikimedia Deutschland delivered software that was problematic in several ways. The department contracting this third party was not the software development department, and since we had not written down guidelines anywhere, both this department and the third party lacked information they needed to do a good job. The guidelines are a short-and-to-the-point 2-page document that can be shared with third parties and serves as a starting point when making choices for internal projects.

If you do not have such guidelines at your organization, you can copy these, though you will of course need to update the “Widespread Knowledge at Our Department” section. If you have suggestions for improvements or interesting differences in your own guidelines, leave a comment!

Domain of Applicability

These guidelines are for non-trivial software that will need to be maintained or developed further. They do not apply to throw-away scripts (if indeed they will be thrown away) or trivial code such as a website with a few static web pages. If on the other hand the software needs to be maintained, then these guidelines are relevant to the degree that the software is complex and hard to replace.

Choice of Stack

The stack includes programming language, frameworks, libraries, databases and tools that are otherwise required to run or develop the software.

We want

  • It to be easy to hire people who can work with the stack
  • It to be easy for our software engineering department to work with the stack
  • The stack to continue to be supported and to develop along with the rest of the ecosystem


  • Solutions (ie frameworks, libraries) known by many developers are preferred over obscure ones
  • Custom/proprietary solutions, where mature and popular alternatives are available, are undesired
  • Solutions known by our developers are preferred
  • Solutions with many contributors are preferred over those with few
  • Solutions with active development are preferred over inactive ones
  • More recent versions of the solutions are preferred, especially over unsupported ones
  • Introducing new dependencies requires justification for the maintenance cost they add
  • Solutions that are well designed (see below) are preferred over those that are not

Widespread Knowledge at Our Department

  • Languages: PHP and JavaScript
  • Databases: MySQL and SQLite
  • Frameworks: Symfony Components and Silex (discontinued)
  • Testing tools: PHPUnit, QUnit, Selenium/Mocha
  • Virtualization tools: Docker, Vagrant

Architecture and Code Quality

We want

  • It to be easy (and thus cheap) to maintain and further develop the project


  • Decoupled architecture is preferred, including
    • Decoupling from the framework, especially if the framework has issues (see above) and thus forms a greater than usual liability.
    • Separation of persistence from domain and application logic
    • Separation of presentation from domain and application logic
  • The code should adhere to the SOLID principles, not be STUPID, and otherwise be clean at a low level.
  • The application logic should be tested with tests of the right type (i.e. unit, integration, edge-to-edge) that are themselves well written and not laden with unneeded coupling.
  • Setup of the project for development should be automated, preferably using virtualization tools such as Docker. After this automated setup developers should be able to easily run the tests and interact with the application.
  • If the domain is non-trivial, usage of Domain-driven design (including the strategic patterns) is preferred

How the Open Demographics Initiative recommends you ask for gender information (© Open Demographics Initiative)

Last week, Nikki Stevens presented "Other, Please Specify" for TEDx at Arizona State University. In her TED Talk, Nikki shares the story behind the Open Demographics Initiative, which is developing a recommended set of questions that anyone can use to ask online community members about their demographics.

Nikki demonstrates how a majority of demographic surveys require users to conform to restrictive identity fields, which can alienate minority or underrepresented groups. The Open Demographics Initiative wants to develop forms that are more inclusive, in addition to giving people more control over the data and information they choose to disclose.

Inspired by Nikki's presentation, I reached out to the engineering team at the Drupal Association to see if there are plans to implement the Open Demographics Initiative's recommendations. I was happy to learn that they are collaborating with the Open Demographics team to add the recommendations to the user registration process.

Adopting Open Demographics will also allow us to improve reporting on diversity and inclusion, which in turn will help us better support initiatives that advance diversity and inclusion. Plus, we can lead by example and inspire other organizations to do the same.

Thank you Nikki, for sharing the story behind the Open Demographics Initiative, and for helping to inspire change in the Drupal community and beyond.

I published the following diary on “Nice Phishing Sample Delivering Trickbot“:

Users have had to deal with phishing for a very long time. Today, most phishing attempts remain dumb messages, quickly written, with a simple attached file and a message like “Click on me, it’s urgent!”. Yesterday, I put my hands on a very nice sample that deserves to be dissected to demonstrate that phishing campaigns remain an excellent way to infect a computer! In the following scenario, nobody was hurt and the targeted user took the right action by reporting it to the security team… [Read more]


[The post [SANS ISC] Nice Phishing Sample Delivering Trickbot has been first published on /dev/random]

May 07, 2018

I published the following diary on “Adding Persistence Via Scheduled Tasks“:

Once a computer has been infected by malware, one of the next steps is to maintain persistence. Usually, endpoints (workstations) are primary infection vectors due to how people use them: they browse the Internet, they read emails, they open files. But workstations have a major limitation: they are rebooted often (by policy – people must turn off their computers when not at the office – or by maintenance tasks like patch installation)… [Read more]

[The post [SANS ISC] Adding Persistence Via Scheduled Tasks has been first published on /dev/random]

May 06, 2018

The Autoptimize Power-up has just been released to the plugin repository!

This plugin integrates with and extends Autoptimize. It hooks into a premium service that uses a “headless” browser to extract real critical CSS, and that way fully automates the extraction of high-quality critical CSS and the creation of rules for that critical CSS. It can work 100% automated, but also allows semi-automated jobs (where you enter a URL for which you want a specific rule, and the power-up does the rest) and manual rules (where you create the rule and add the critical CSS yourself).

May 04, 2018

Last night, I read a thoughtful blog post from Sebastian Greger, which examines the challenges of implementing privacy in a decentralized, social web. As part of my own POSSE plan, I had proposed implementing support for Webmention. This would allow me to track comments, likes, reposts, and other rich interactions across the web on my own site.

Sebastian correctly explains that when you pull in content from social media websites into your own site with the intention of owning the conversation, you are effectively taking ownership of other people's content. This could be in conflict with the GDPR regulation, for example, which sets tight rules on how personal data is processed, and requires us to think more about how personal data is syndicated on the web.

Data protection is important, but so is a decentralized, social web. These conversations, and the innovation that hopefully results from them, are important. If we fail to make the Open Web compliant with data regulations, we could empower walled gardens and stifle innovation toward a more decentralized web.

CoderDojo Mons: On Thursday 17 May 2018 at 19:00, the 69th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: CoderDojo, child's play

Theme: Education | Programming

Audience: everyone

Speakers: Christopher Richard and Quentin Carpentier (CoderDojo Mons)

Location of this session: HEPH Condorcet, Chemin du Champ de Mars, 15 – 7000 Mons – Auditoire Bloc E – located at the back of the parking lot (see this map on the Openstreetmap site; follow the site's internal road to reach building E). The building is different from the one used in previous sessions.

Attendance is free and only requires registration by name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also enjoy the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly cycle, feel free to check the agenda and subscribe to the mailing list to automatically receive the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month, and are organized in premises of, and in collaboration with, the universities and colleges in Mons that train computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: Learning programming logic is more and more important in the modern world. In reality, it is very easy to learn and accessible to children from the age of 5. CoderDojo is a worldwide network of free, community-driven programming clubs for young people, run by volunteers. The presentation aims to cover what CoderDojo is, from the launch of the adventure to an overview of coaching techniques. This initiation aims to convince young people to use new technologies with creativity and passion rather than consuming them with (or without) moderation!

Short bio: Christopher and Quentin are both consultants and developers, but first and foremost passionate about education. They co-founded CoderDojo Mons with two other former interns of the Microsoft Innovation Center. Their Dojo has been held one Sunday afternoon per month for almost two years now, gathering up to 30 ninjas per session. More info on the Facebook page. You can also follow them on LinkedIn: Christopher & Quentin.

May 03, 2018

Another post-it note so I don’t forget (the first one is about how to purge the nvidia drivers and use the open source drivers).

The nvidia drivers are often problematic, and as long as the open source drivers have decent performance, that’s OK. On laptops you can always switch to the Intel card with prime, using but in fact bypassing the nvidia drivers. This is also how the open source nouveau driver works.

However, for the last few Ubuntu releases the Nvidia drivers have seemed more buggy; on one of my systems I see this out-of-the-box behaviour:

  • With the open source nouveau drivers the performance on Xorg is terrible (choppy video, OK desktop). Xorg is the default graphical server on Ubuntu 18.04. On 17.10 the default was Wayland.
  • With the open source nouveau drivers the performance on Wayland is good (e.g. for 1080p HEVC video).
  • With the closed source nvidia driver set to use the intel driver the performance on Xorg and Wayland is terrible (choppy video, OK desktop).
  • With the closed source nvidia driver set to use the nvidia card, you’re stuck in a login manager loop, a black screen, or terrible performance on Xorg and Wayland (choppy video, OK desktop).

If you’re ok with using Wayland (I am on the machine in question, at work my machine has Intel video and I am using Xorg), stick with nouveau and Wayland. Your laptop will get less hot (fans!) and have better battery life. Virtual consoles (ctrl+alt+) will still be available.

If you need Xorg, using the closed source nvidia driver is the only way to get good performance, at least on my machine. If you want to use this driver, follow the instructions below. Be sure you know what you are doing: you won’t lose files, but you may make your system unbootable (recoverable with a USB disk) if this is not done correctly:

– Make sure nvidia is not blacklisted, remove bumblebee if installed. Have a look in /etc/modules and /etc/modprobe.d/*:

$ sudo apt-get remove --purge bumblebee
$ grep nvidia /etc/modules /etc/modprobe.d/*

“blacklist nvidia” is what’s mostly used; remove that.

– Add “nvidia-drm.modeset=1” to the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub:
$ sudo vi /etc/default/grub
(or use nano instead of vi)

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nvidia-drm.modeset=1"

– Activate the changes:
$ sudo update-grub

– Install Nvidia drivers:
$ sudo ubuntu-drivers autoinstall

– Check if prime has nvidia selected, select nvidia otherwise:
$ prime-select query
$ sudo prime-select nvidia

– Reboot:
$ sudo init 6

Be warned that modeset=1 disables the virtual consoles (ctrl+alt+<nr>).

That’s it.

You might have noticed that Wikipedia recently started enabling link previews; when you hover over a link, it displays a card with more information about the linked page.

Wikipedia link previews

My first reaction was: what took them so long? Link previews help to solve an important usability problem of having to open many articles, often in multiple browser tabs. However, after I started to read more about how Wikipedia implemented the link previews, I was reminded of how hard it is to do things at the scale Wikipedia requires.

Nirzar Pangarka, who works as a designer at the Wikimedia Foundation, shared that more than 10,000 links get hovered each second across Wikipedia. In another post, David Lyall, an engineering manager at the Wikimedia Foundation, shared that they are seeing up to half a million hits every minute on the API that serves the link preview cards.

I have a great appreciation for Wikipedia's seemingly straightforward link previews. Delivering a feature at this scale is an impressive achievement.