Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

October 17, 2018

The second day started early with an eye-opener talk: “IPC – the broken dream of inherent security” by Thanh Bui. IPC or “Inter-Process Communications” is everywhere. You can compare it to a network connection between a client and a server, but inside the operating system. The idea of Thanh’s research was to see how to intercept communications occurring via IPC using MitMa or “Man in the Machine” attacks. There are multiple attack vectors available. Usually, a server binds to a specific identifier (a network socket, a named pipe). Note that there are also secure methods via socket pairs or unnamed pipes. In the case of a network socket, the server binds to a specific port and waits for a client connection but, often, servers also have failover ports. A malicious process could bind to the official port, connect to the real server via the failover port, and receive connections from the client on the first port. Thanh also reviewed named pipes on Windows systems (accessible via \\.\pipe\). The second part of the talk focused on juicy targets that use IPC: password managers. Most of them use IPC for communications between the desktop client and the browser (to fill in passwords automatically). Thanh demonstrated how it is possible to intercept juicy information! Another popular device using IPC is the FIDO U2F security key. The attack scenario is the following:
  1. Attacker signs in using the 1st factor and receives a challenge
  2. Attacker keeps sending the challenge to the device at a high rate
  3. Victim signs in to ANY service using the same security key and touches the button
  4. Attacker receives the response with high probability
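The port-squatting attack described earlier can be sketched in a few lines of Python. This is a toy model on loopback sockets with invented port roles and payloads, not Thanh's actual PoC: the attacker binds the "primary" port first, so the real server falls back to its failover port, and the victim's traffic transits through the attacker.

```python
# Toy sketch of the IPC "port squatting" MitM: the attacker squats on the
# server's primary port and relays traffic to the real server, which had
# to fall back to its failover port. Ports/payloads are invented.
import socket
import threading

# Real server: its primary port is taken, so it listens on a failover port.
server_sock = socket.socket()
server_sock.bind(("127.0.0.1", 0))
server_sock.listen(1)
failover_port = server_sock.getsockname()[1]

# Attacker: squats on what we treat as the well-known primary port.
mitm_sock = socket.socket()
mitm_sock.bind(("127.0.0.1", 0))
mitm_sock.listen(1)
primary_port = mitm_sock.getsockname()[1]

def real_server():
    conn, _ = server_sock.accept()
    data = conn.recv(1024)
    conn.sendall(b"secret-for:" + data)
    conn.close()

def attacker():
    conn, _ = mitm_sock.accept()
    data = conn.recv(1024)  # attacker observes the client's request...
    upstream = socket.create_connection(("127.0.0.1", failover_port))
    upstream.sendall(data)
    reply = upstream.recv(1024)  # ...and the server's reply
    conn.sendall(reply)
    upstream.close()
    conn.close()

threading.Thread(target=real_server, daemon=True).start()
threading.Thread(target=attacker, daemon=True).start()

# The victim client connects to the "primary" port, unknowingly via the attacker.
client = socket.create_connection(("127.0.0.1", primary_port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
print(reply)  # b'secret-for:hello'
```

The client gets exactly the answer it expected, which is why these attacks are so hard to notice from the victim's side.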

A nice talk to start the day! Then, I decided to attend a workshop about Sigma. I had known about the tool for a while but I never had the opportunity to play with it. This workshop was the perfect opportunity to learn more.
Sigma is an open signature format that allows you to write rules in YAML and export them to many log management solutions (Splunk, ES, QRadar, <name your preferred tool here>). Here is a simple example of a Sigma rule to detect Mimikatz binaries launched on a Windows host:

title: Exercise 1
status: experimental
description: Detects the execution of Mimikatz binaries
references:
    - https://website
author: Xavier Mertens
date: 2018/10/17
tags:
    - attack.execution
logsource:
    product: windows
    service: sysmon
detection:
    selection:
        EventID: 1
        Hashes:
            - 97f93fe103c5a3618b505e82a1ce2e0e90a481aae102e52814742baddd9fed41
            - 6bfc1ec16f3bd497613f57a278188ff7529e94eb48dcabf81587f7c275b3e86d
            - e46ba4bdd4168a399ee5bc2161a8c918095fa30eb20ac88cac6ab1d6dbea2b4a
    condition: selection and filter
level: high
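Conceptually, a Sigma backend just walks the rule's detection section and emits the target tool's query syntax. Here is a toy sketch of that idea (this is not the real sigmac converter; the output is a simplified Splunk-like string):

```python
# Toy sketch of what a Sigma-to-SIEM backend does: turn a rule's
# "selection" criteria into a Splunk-like search string.
# NOT the real sigmac tool; simplified for illustration only.

detection = {
    "selection": {
        "EventID": 1,
        "Hashes": [
            "97f93fe103c5a3618b505e82a1ce2e0e90a481aae102e52814742baddd9fed41",
            "6bfc1ec16f3bd497613f57a278188ff7529e94eb48dcabf81587f7c275b3e86d",
            "e46ba4bdd4168a399ee5bc2161a8c918095fa30eb20ac88cac6ab1d6dbea2b4a",
        ],
    },
}

def to_splunk(criteria: dict) -> str:
    """Map field/value pairs to Splunk syntax; a list becomes an OR group."""
    parts = []
    for field, value in criteria.items():
        if isinstance(value, list):
            group = " OR ".join(f'{field}="{v}"' for v in value)
            parts.append(f"({group})")
        else:
            parts.append(f'{field}="{value}"')
    return " ".join(parts)

query = to_splunk(detection["selection"])
print(query)
```

Running this prints a single search expression matching any of the three hashes in Sysmon event 1 records; the real converters handle conditions, filters, field mappings and many output targets on top of this.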
After the lunch break, I attended a new set of presentations. It started with Ange Albertini who presented the third episode of his keynote trilogy: “Education & communication“. The idea was the same as in the previous presentations: Can we imagine a world where everything is secure? Definitely not. And today, our lives are bound to computers. It’s a fact that infosec is a life requirement for everybody, said Ange. We need to share our expertise because we are all in the same boat. But the problem is that not everybody acts in the same way when facing a security issue or something suspicious. It’s so easy to blame people who react in a bad way. But, according to Ange, education and communication are part of our daily job as infosec professionals. Keep in mind that “not knowing is not a crime”, said Ange. An interesting keynote but unfortunately not easy to implement on a day-to-day basis.
The next talk was presented by Saumil Shah, a classic speaker at this conference. Saumil is an awesome speaker and always has very interesting topics. This year it was: “Make ARM Shellcode Great Again“. It was quite hard for me… Saumil’s talk was about ARM shellcode. But as usual, the demos were well prepared and just worked!
Then Alicia Hickey and Dror-John Roecher came on stage to present “Finding the best threat intelligence provider for a specific purpose: trials and tribulations“. IOCs have been a hot topic for a while and many organizations are looking into implementing security controls based on IOCs to detect suspicious activities. Speaking about IOCs, there are the classic two approaches: free sources or a commercial feed. Which is the best one? It depends on you, but Alicia & Dror researched commercial feeds and tried to compare them, which was not so easy! They started with a huge list of potential partners but the study focused on a pool of four (they did not disclose the names). They used MISP to store the received IOCs but they faced many issues, like the lack of a standard format across the tested partners. About the delivery of IOCs, there were also differences in timing, but what do you prefer? Common IOCs quickly released, or a set of valuable IOCs within a fixed context? IMHO, both are required and must be used for different purposes. This was interesting research.
The next talk was presented by Sva: “Pretty Easy Privacy“. The idea behind this talk was to present the pEp project. I had already seen this talk last year at FSEC.
My best choice for today was the talk presented by Stephan Gerling: “How to hack a Yacht – swimming IoT“. Last year, Stephan presented an awesome talk about smart locks. This year, he came back with something smart again but way bigger: a Yacht! Modern yachts are big floating IoT devices. They have a complete infrastructure based on a router, WiFi access-points, radars, cameras, AIS (“Automatic Identification System”) and also entertainment systems. There is also a CANbus, like in cars, that carries data from multiple sensors to pilot the boat. While it’s easy to jam a GPS signal (to make the boat blind), Stephan tried to find ways to pwn the boat from the Internet connection. The router (brand: Locomarine) is, in fact, a Mikrotik router that is managed by a crappy management tool that uses FTP to retrieve/push XML configuration files. In the file, the WiFi password can be found in clear text, amongst other useful information. The funniest demo was when Stephan used Shodan to find live boats. The router interface is publicly available and the administrative page can be accessed without authentication thanks to this piece of (pseudo) code:
if (user == "Dealer") { display_admin_page(); }
Awesome and scary talk at the same time!
The last regular talk was presented by Irena Damsky: “Simple analysis using pDNS“. Passive DNS is not new but many people do not realize the power of these databases. After a quick recap of how DNS works, Irena performed several live demonstrations by querying a passive DNS instance to collect juicy information. What can you find? (a short list)
  • List of domains hosted on a name server
  • Malicious domains
  • Suspicious domains (typosquatting & co)

This is a very useful tool for incident responders but don’t forget that there is only ONE condition: the domain must have been queried at least once!

And the second day ended with a set of lightning talks (always interesting ideas and tools promoted in a few minutes) followed by the famous PowerPoint karaoké! Stay tuned tomorrow for the next wrap-up!

[The post 2018 Wrap-Up Day #2 has been first published on /dev/random]


A few months ago I received an e-mail that got me worried for a few seconds. It looked like this, and chances are you've seen it too.

From: Kalie Paci 
Subject: mattias - UqtX7m

It seems that, UqtX7m, is your pass word. You do not know me and you are probably thinking
why you are getting this mail, correct?

Well, I actually placed a malware on the adult video clips (porn) web-site and guess what,
you visited this site to have fun (you know what I mean). While you were watching videos,
your browser started operating as a RDP (Remote control Desktop) that has a keylogger which
gave me access to your display and also web camera. Immediately after that, my software
program collected your entire contacts from your Messenger, FB, and email.

What exactly did I do?

I created a double-screen video. First part displays the video you were viewing (you have
a nice taste lol), and second part displays the recording of your web camera.

What should you do?

Well, in my opinion, $1900 is a fair price for our little secret. You’ll make the payment
through Bitcoin (if you do not know this, search “how to buy bitcoin” in Google).

BTC Address: 1MQNUSnquwPM9eQgs7KtjDcQZBfaW7iVge
(It is cAsE sensitive, so copy and paste it)

You now have one day to make the payment. (I’ve a unique pixel in this message, and right
now I know that you have read this email message). If I don’t get the BitCoins, I will
send your video recording to all of your contacts including members of your family,
colleagues, and many others. Having said that, if I do get paid, I will destroy the video
immidiately. If you need evidence, reply with “Yes!” and I definitely will send your video
recording to your 11 friends. This is a non-negotiable offer, and so please don’t waste
my personal time and yours by responding to this mail.

If you read it, it looks like spam -- doesn't it?

Well, the thing that got me worried for a few seconds was that the subject line and the body contained an actual password I used a while back: UqtX7m.

Receiving an email with what feels like a personal secret in the subject draws your attention. It's clever in the sense that you feel both violated and ashamed of the consequences. It looks legit.

Let me tell you clearly: it's a scam and you don't need to pay anyone.

I first mentioned it on my Twitter describing what feels like the brilliant part of this scam:

  • Email + Passwords, easy to get (plenty of leaks online)
  • Everyone watches porn
  • Nobody wants that leaked
  • The same generic e-mail can be used for every victim
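The "plenty of leaks online" point can also be turned around defensively: you can check whether a password shows up in known breach dumps without ever revealing it, using the Have I Been Pwned "range" API, which only sees the first five hex characters of the password's SHA-1. A sketch of the client-side part (the HTTP call itself is omitted):

```python
# Client-side half of a k-anonymity breach check, as used by the
# Have I Been Pwned "Pwned Passwords" range API: only the first 5 hex
# characters of the SHA-1 leave your machine; you then scan the returned
# suffix list locally. The HTTP request itself is omitted here.
import hashlib

def hibp_range_parts(password: str):
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_parts("password")
print(prefix)  # '5BAA6'
# You would then GET https://api.pwnedpasswords.com/range/<prefix> and
# look for `suffix` among the returned "SUFFIX:count" lines.
```

If the suffix appears in the response, that password is in public breach corpora and is exactly the kind of credential these scam emails are built on.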

Whoever is running this scam thought about the psychology of this one and found the sweet spot: it gets your attention and it gets you worried.

Well played. But don't fall for it.

The post The convincing Bitcoin scam e-mail extorting you appeared first on

October 16, 2018

Remember that time when ATI employees tried to re-market their atomBIOS bytecode as scripts?

You probably don't, it was a decade ago.

It was in the middle of the RadeonHD versus -ati shitstorm. One was the original, written in actual C poking display registers directly, and depending on ATIs atomBIOS as little as practical. The other was the fork, implementing everything the fglrx way, and they stopped at nothing to smear the real code (including vandalizing the radeonhd repo, _after_ radeonhd had died). It was AMD teamed with SUSE versus ATI teamed with redhat. From a software and community point of view, it was real open source code and the goal of technical excellence, versus spite and grandstanding. From an AMD versus ATI point of view, it was trying to gain control over and clean up a broken company, and limiting the damage to server sales, versus fighting the new "corporate" overlord and people fighting to keep their old (ostensibly wrong) ways of working.

The RadeonHD project started april 2007, when we started working on a proposal for an open source driver. AMD management loved it, and supported innovations such as fully free docs (the first time since 3dfx went bust in 1999), and we started coding in July 2007. This is also when we were introduced to John Bridgman. At SUSE, we were told that John Bridgman was there to provide us with the information we needed, and to make sure that the required information would go through legal and be documented in public register documents. As an ATI employee, he had previously been tasked by AMD to provide working documentation infrastructure inside ATI (or bring one into existence?). From the very start, John Bridgman was underhandedly working on slowly killing the RadeonHD project. First by endlessly stalling and telling a different lie every week about why he did not manage to get us information this time round either. Later, when the RadeonHD driver did make it out to the public, by playing a very clear double game, specifically by supporting a competing driver project (which did things the ATI way) and publicly deriding or understating the role of the AMD backed SUSE project.

In November 2007, John Bridgman hired Alex Deucher, a supposed open source developer and community member. While the level of support Bridgman had from his own ATI management is unclear to me, to AMD management he claimed that Alex was only there to help out the AMD sponsored project (again, the one with real code, and public docs), and that Alex was only working on the competing driver in his spare time (yeah right!). Let's just say that John slowly "softened" his claim there over time, as this is how one cooks a frog, and Mr Bridgman is an expert on cooking frogs.

One particularly curious instance occurred in January 2008, when John and Alex started to "communicate" differently about atomBIOS, specifically by consistently referring to it as a set of "scripts". You can see one shameful display here on Alex's (now defunct) blog. He did this as part of a half-arsed series of write-ups trying to educate everyone about graphics drivers... Starting of course with... those "scripts" called atomBIOS...

I of course responded with a blog entry myself. Here is a quote from it: "At no point do AtomBIOS functions come close to fitting the definition of script, at least not as we get them. It might start life as "scripts", but what we get is the bytecode, stuck into the ROM of our graphics cards or our mainboard." (libv, 2008-01-28)

At the same time, Alex and John were busy renaming the C code for r100-r4xx to "legacy", implying both that these old graphics cards were too old to support, and that actual C code is the legacy way of doing things. "The warping of these two words make it impossible to deny: it is something wrong, that somehow has to be made to appear right." (libv, 2008-01-29) Rewriting an obvious truth... Amazing behaviour from someone who was supposed to be part of the open source community.

At no point did ATI provide us with any tools to alter these so-called scripts. They provided only the interpreter for the bytecode, the bare minimum of what it took for the rest of the world to actually use atomBIOS. There never was atomBIOS language documentation. There was no tooling for converting to and from the bytecode. There never was tooling to alter or create PCI BIOS images from said atomBIOS scripts. And no open source tool to flash said BIOSes to the hardware was available. There was an obscure atiflash utility doing the rounds on the internet, and that tool still exists today, but it is not even clear who the author is. It has to be ATI, but it is all very clandestine, i think it is safe to assume that some individuals at some card makers sometimes break their NDAs and release this.

The only tool for looking at atomBIOS is Matthias Hopf's excellent atomdis. He wrote this in the first few weeks of the RadeonHD project. This became a central tool for RadeonHD development, as this gave us the insight into how things fit together. Yes, we did have register documentation, but Mr. Bridgman had given us twice 500 pages of "this bit does that" (the same ones made public a few months later), in the hope that we would not see the forest for the trees. Atomdis, register docs, and the (then) most experienced display driver developer on the planet (by quite a margin) made RadeonHD a viable driver in record time, and it gave ATI no way back from an open source driver. When we showed Mr. Bridgman the output of atomdis in september 2007, he was amazed just how readable its output was compared to ATI internal tools. So much for scripts eh?

I have to confess though that i convinced Matthias to hold off making atomdis public, as i knew that people like Dave Airlie would use it against us otherwise (as they ended up doing with everything else, they just did not get to use atomdis for this purpose as well). In Q2 2009, after the RadeonHD project was well and truly dead, Matthias brought up the topic again, and i wholeheartedly agreed to throw it out. ATI and the forkers had succeeded anyway, and this way we would give others a tool to potentially help them move on from atomBIOS. Sadly, not much has happened with this code since.

One major advantage of full openness is the ability to support future use cases that were unforeseeable at the time of hardware, code or documentation release. One such use-case is the (still) rather common use of GPUs for cryptocurrency "mining". This was not a thing back in 2007-2009 when we were doing RadeonHD. It was not a thing when AMD created a GPGPU department out of a mix of ATI and new employees (this is the department that has been pushing out 3D ISA information ever since). It was also not considered when ATI stopped the flow of (non shader ISA) documentation back in Q1 2009 (coinciding with RadeonHD dying). We had only just got to the point of providing a 3d driver then, and never got near considering pushing for open firmware for Radeon GPUs. Fully open power management and fully open 3d engine firmware could mean that both could be optimised to provide a measurable boost to a very specific use-case, which cryptocurrency mining usually is. By retreating into its proprietary shell with the death of the RadeonHD driver, and by working on the blob based fig-leaf driver to keep the public from hating the fglrx driver as much, ATI has denied us the ability to gain those few tenths of a percent or even whole percents that there are likely to be gained from optimised support.

Today though, miners are mostly just altering the gpu and memory frequencies and voltages in their atomBIOS based data tables (not touching the function tables). They tend to use a binary only gui tool to edit those values. Miners also extensively use the rather horrible atiflash. That all seems very limited compared to what could have been if AMD had not lost the internal battle.

So much for history, as giving ATI the two fingered salute is actually more of an added bonus for me. I primarily wanted to play around with some newer ideas for tracing registers, and i wanted to see how much more effective working with capstone would make me. Since SPI chips are simple hardware, and SPI engines are just as trivial, ati spi engines seemed like an easy target. The fact that there is one version of atiflash (that made it out) that runs under linux, made this an obvious and fun project. So i spent the last month, on and off, instrumenting this binary, flexing my REing muscle.

The information i gleaned is now stuck into flashrom, a tool to which i have been contributing since 2007, from right before i joined SUSE. I even held a coreboot devroom at FOSDEM in 2010, and had a talk on how to RE BIOSes to figure out board specific flash enables. I then was on a long flashrom hiatus from 2010 til earlier this year, when i was solely focused on ARM. But a request for a quick board enable got me into flashrom again. Artificial barriers are made to be breached, and it is fun and rewarding to breach them, and board enables always were a quick fix.

The current changes are still in review at, but i have a github clone for those who want immediate and simple access to the complete tree.

To use the ati_spi programmer in flashrom you need to use:
./flashrom -pati_spi
and then specify the usual flashrom arguments and commands.

Current support is limited to the hardware i have in my hands today, namely Rx6xx, and only spans to that hardware that was obviously directly compatible to the Rx6xx SPI engine and GPIO block. This list of a whopping 182 devices includes:
* rx6xx: which are the radeon HD2xxx and HD3xxx series, released april 2007
* rx7xx: or HD4xxx, released june 2008.
* evergreen: or HD5xxx, released september 2009.
* northern islands: HD6xxx series, released october 2010
* Lombok, part of the Southern Islands family, released in january 2012.

The other Southern Islanders have some weird IO read hack that i need to go test.

I have just ordered 250eur worth of used cards that should extend support across all PCIE devices all the way through to Polaris. Vega is still out of reach, as that is prohibitively expensive for a project that is not going to generate any revenue for yours truly anyway (FYI, i am self-employed these days, and i now need to find a balance between fun, progress and actual revenue, and i can only do this sort of thing during downtime between customer projects). According to pci.ids, Vega 12 and 20 are not out yet, but they too will need specific changes when they come out. Having spent all that time instrumenting atiflash, I do have enough info to quickly cover the entire range.

One good thing about targeting AMD/ATI again is that I have been blackballed there since 2007 anyway, so i am not burning any bridges that were not already burned a decade ago \o/

If anyone wants to support this or related work, either as a donation, or as actually invoiced time, drop me an email. If any miners are interested in specifically targeting Vega, soon, or doing other interesting things with ATI hardware, I will be happy to set up a coinbase wallet. Get in touch.

Oh, and test out flashrom, and report back so we can update the device table to "Tested, OK" status, as there are currently 180 entries ranked as "Not-Tested", a number that is probably going to grow quite a bit in the next few weeks as hardware trickles in :)
The 14th edition (!) of is ongoing in Luxembourg. I arrived yesterday to attend the MISP summit, which was a success. It’s great to see that more and more people are using this information sharing platform to fight bad guys! Today, the conference officially started with the regular talks. I spent my day in the main room to follow most of the scheduled talks. Here is my quick recap…

There was no official keynote speaker this year; the first ones to come on stage were Ankit Gangwal and Eireann Leverett with a talk about ransomware: “Come to the dark side! We have radical insurance groups & ransomware”. It was a different one, with an interesting approach. Yesterday, Eireann had already presented the results of his research based on MISP: “Logistical Budget: Can we quantitatively compare APTs with MISP”. Today it was on the same topic: how to quantify the impact of ransomware attacks. Cyber insurance likes quantifiable risks. How do you put some numbers (read: an amount of money) on this threat? They reviewed the ransomware life cycle as well as some popular malware families and estimated the financial impact when a company gets infected. It was an interesting approach, as was the analysis of the cryptocurrency used to pay the ransom (how often, when – weekends vs weekdays). Note that they also developed a set of scripts to help extract IOCs from ransomware samples (the code is available here).
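As a generic illustration of the kind of arithmetic insurers like (this is standard annualized-loss-expectancy math, not the speakers' actual model, and the figures are invented):

```python
# Annualized Loss Expectancy: ALE = SLE x ARO.
# SLE (single loss expectancy): cost of one incident.
# ARO (annual rate of occurrence): expected incidents per year.
# All figures below are invented for illustration.

def ale(sle: float, aro: float) -> float:
    return sle * aro

# One ransomware incident costing ~200k EUR (downtime + recovery + ransom),
# expected roughly once every four years:
premium_basis = ale(200_000, 0.25)
print(premium_basis)  # 50000.0
```

An insurer can then price a premium against that expected annual loss; the hard part, as the talk showed, is getting defensible SLE and ARO numbers in the first place.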
Back to the technical side with the next talk, presented by Matthieu Tarral. He presented his solution to debug malware in a virtualized environment. What are the problems with classic debuggers? They are noisy, they alter the environment, they can affect what the analyst sees, or the system view can be incomplete. For Matthieu, a better approach is to put the debugger at level -1 to be stealthier and be able to perform a full analysis of the malware. Another benefit is that the guest is unmodified (no extra process, no serial connection, …). Debuggers working at hypervisor level are not new; he mentioned HyperDBG (from 2010!), virtdbg and PulseDBG. For the second part, Matthieu presented his own project based on LibVMI, a VMI abstraction layer library independent of any hypervisor. At the moment, Xen is fully supported and KVM is coming. He showed a nice demo (video) of a malware analysis. Very nice project!
The next talk was again less technical but quite interesting. It was presented by Fabien Mathey and focused on the problems related to performing risk assessments in a company (time constraints, budgets, motivated people and the lack of a proper tool). The second part was a presentation of MONARC, a tool developed in Luxembourg and dedicated to performing risk assessments through a nice web interface.
And the series of non-technical talks continued with the one by Elle Armageddon about threat intelligence. The first part of the talk was a comparison between the medical and infosec environments. If you check them carefully, you’ll quickly spot many common ideas like:
  • Take preventive measures
  • Listen to users
  • Get details before recommending strategies and tools
  • Be good stewards of user data
  • Treat users with respect and dignity

The second part was too long and less interesting (IMHO) and focused on the different types of threats like stalkers, state repression, etc.: what they have in common, what the differences are, and so on. I really liked the comparison of the medical/infosec environments!

After the lunch break (we always get good food here!), Dan Demeter came on stage to talk about YARA and particularly the tool he developed: “Let me Yara that for you!“. YARA is a well-known tool for security researchers. It helps to spot malicious files but it may also quickly become very difficult to manage all the rules that you collected here and there or that you developed yourself. Klara is the tool developed by Dan to help automate this. It can be described as a distributed YARA scanner. It offers a web interface, user groups, email notifications and more useful options. The idea is to be able to manage rules but also to (re)apply them to a set of samples in a quick way (for retro-search purposes). About performance: Klara is able to scan 10TB in 30 minutes!
I skipped the next talk – “The (not so profitable) path towards automated heap exploitation“ – which looked to be a technical one. The next slot was assigned to Emmanuel Nicaise, who presented a talk about “Neuro Hacking” or “The science behind social engineering and an effective security culture“. Everybody knows what social engineering is and that the goal is to convince the victim to perform actions or to disclose sensitive information. But to achieve efficient social engineering, it is mandatory to understand how the human brain works, and Emmanuel has deep knowledge of this topic. Our brain is very complex and can be compared to a computer with inputs/outputs, a bus, a firewall, etc. Compared to computers, humans do not perform multi-tasking but time-sharing. He explained what neurotransmitters are and how they can be excitatory (ex: glutamate) or inhibitory (GABA). Hormones were also covered. Our brains have two main functions:
  • To make sense
  • To make pattern analysis
Emmanuel demonstrated with several examples how our brain puts a picture in one category or another just based on the context. Some people might look good or bad depending on the way the brain sees the pictures. The way we process information is also important (fast vs slow mode). For the social engineer, it’s best to keep the brain in fast mode to make it easier to manipulate or predict. As a conclusion, to perform efficient social engineering:
  • Define a frame / context
  • Be consistent, respect the pattern
  • Pick a motivation
  • Use emotion accordingly
  • Redefine the norm
The next talk was about Turla, or Snake: “The Snake keeps reinventing itself” by Jean-Ian Boutin and Matthieu Faou. This group has been very active for a long time and has targeted governments and diplomats. It has a large toolset targeting all major platforms. What is the infection vector? Mosquito was a backdoor distributed via a fake Flash installer via the URL admdownload[.]adobe[.]com, which was pointing to an IP in the Akamai CDN used by Adobe. How did it work? MitM? BGP hijack? Based on the location of the infected computers, it was via a compromised ISP. Once installed, Mosquito used many tools to perform lateral movement:
  • Sniffing the network for ports 21, 22, 80, 110, 143 and 389
  • Cliproxy (a reverse shell)
  • Quarks PwDump
  • Mimikatz
  • NirSoft tools
  • ComRAT (a keylogger and recent-file scraper)

Particular attention was paid to covering all tracks (deletion of all files, logs, etc). Then, the speakers presented the Outlook backdoor used to interact with the compromised computer (via the MAPI protocol). All received emails were exfiltrated to a remote address and commands were received via PDF files attached to emails. They finished the presentation with a nice demo of a compromised Outlook which popped up calc.exe via a malicious email.

The next presentation was (for me) the best one of this first day. Not only because the content was technically great but also because the way it was presented was funny. The title was “What the fax?” by Eyal Itkin and Yaniv Balmas. Their research started last year at the hotel bar with a joke: is it possible to compromise a company just by sending a fax? Challenge accepted! Fax machines are old devices (1966) and the ITU standard was defined in 1980 but, still today, many companies accept faxes as a communication channel. Today, standalone fax machines have disappeared and have been replaced by MFPs (“Multi-Function Printers”). And such devices are usually connected to the corporate network. They focused their research on an HP printer due to the popularity of the brand. They explained how to grab the firmware and how they discovered the compression protocol used. Step by step, they explained how they found vulnerabilities (CVE-2018-5924 & CVE-2018-5925). Guess what? They made a demo: a printer received a fax and, using EternalBlue, they compromised a computer connected to the same LAN. Awesome to see how an old technology can still be (ab)used today!
I skipped the last two talks: one about the SS7 set of protocols and how it can be abused to collect information about mobile users or intercept SMS messages, and the last one about DDoS attacks based on IoT devices.
There is already a YouTube channel with all the presentations (added as soon as possible) and slides will be uploaded to the archive website. That’s all for today folks, stay tuned for another wrap-up tomorrow!

[The post 2018 Wrap-Up Day #1 has been first published on /dev/random]

That's it, it's done! I have uninstalled Pocket from my smartphone. Pocket, of all things, the application I always considered the most important, the application that justified my switching back to a Kobo so I could read my saved articles. Pocket which is, by definition, a way to keep articles gleaned from the web "for later", a list that tends to fill up as messages roll by on social networks.

Since my disconnection I no longer glean anything from the web, so unsurprisingly I add nothing new to Pocket (except when you email me a link you think should interest me. I love those recommendations, thank you, keep them coming!). Having the time to read the pile of still-unread saved articles (about a hundred) made me realize how very, very rarely these articles are interesting (as I noted in ProofOfCast 12, the ones about the blockchain all say exactly the same thing). In fact, out of that hundred or so articles, I shared 4 or 5 right away, I can recall a single one that actually taught me something (an article summarizing the work of Harold Adams Innis) and zero that changed my perspective in any way.

The Innis article is rather paradoxical, in the sense that his best-known book has been on my reading list for months and I have still never taken the time to read it. Overall, I can say that my Pocket list was 99% useless and, for the remaining 1%, a substitute for a book I want to read. Reading those 100 articles probably took as long as it would have taken me to read through the book itself.

The outcome, then, is not exactly brilliant…

But it gets worse!

Pocket has a feed of "recommended" articles. This feed is extremely badly designed (lots of repetition, reshuffled on every refresh, it even shows articles that are already in your reading list) but it is the only application left on my phone that provides a news feed.

You can see where this is going…

My brain very quickly got into the habit of launching Pocket to "empty my reading list" before going on to browse the "recommended articles" (between us, the quality is truly dismal).

Today, with only 2 articles left to read, here are two suggestions that appeared back to back in my feed:

The coincidence is absolutely striking!

First an article to make us dream, whose title translates as "Becoming a billionaire in 2 years at age 20 is possible!", followed by an article titled "Global warming is the billionaires' fault".

Those two articles alone probably sum up the current state of the media we consume. On one side, dreams and envy; on the other, defeatist catastrophism.

Obviously we want to click the first link! Maybe we could learn the technique and do the same? Even if it sounds stupid, the mere fact that I am scrolling a feed proves that my brain is not looking to think, but to feel good.

The article in question, for the record, contains exactly one factual piece of information: the startup Brex, founded by two twenty-somethings from Brazil, has just raised 100 million in a Series C round, valuing it at 1 billion. Put that way, it is a lot less sexy, and of no interest unless you work in fintech. Incidentally, it does not make the founders billionaires, since they now have to work towards a profitable exit (although, financially, one suspects they are not earning minimum wage and money is no longer a problem for them).

The second article reminds us that the biggest industries are the ones that pollute the most (what a surprise!) and that they have always lobbied against any attempt to reduce their profits (no kidding!). The connection to billionaires is extremely tenuous (the implication being that billionaires sit on these companies' boards). The article goes as far as absolving the reader, saying that, given the numbers, the average consumer can do nothing about pollution. Yet the sentence carries its own solution: "consumer" contains "consume", so consumers can decide to favour the companies that pollute less (which, let us note, has been happening for several years now, hence corporate green-washing, followed by a current movement trying to see through the green-washing).

In short, the article is useless, dangerously stupid and unrelated to its title, but the title and the image make you want to click. Worse, in fact: the title and the image make you want to discuss, to react. I have witnessed many debates on Facebook in the comments of an article, debates about… the article's title!

When a slightly more attentive commenter points out that the comments are beside the point, because the objection is addressed in the article and/or the article goes further than its title, it is not unusual to see the commenter caught out reply that "he doesn't have time to read the article". Social networks are thus populated by people who read no further than the headlines but launch into diatribes (because there is always time to react). Which is pure profit for the social networks, since it generates activity, interactions, visits, displayed ads. And for the articles' authors too, since it generates likes and views on their articles.

The fact that nobody reads the content? That is not the goal of the business. Just as it is not a fast-food chain's goal to worry about whether your diet is balanced and rich in vitamins.

If you want a balanced diet, you have to find a supplier whose business model is not to "fill" you but to make you "better". Intellectually, that means buying straight from your small local producers, your organic bloggers and writers who live solely on your contributions.

But let us move past this shameless advertising interlude and note that I uninstalled Pocket, my most indispensable app, after only 12 days of disconnection.

Am I making lasting changes, or am I still riding the enthusiasm of the early days, excited by the novelty of this disconnection? Will I hold out with no feed whatsoever? (and damn, it is much harder than expected) Will I let go of my envious jealousy of those who became billionaires (because if it happened to me, I could devote myself to writing) and start writing while reducing my consumption (being less exposed to ads)?

Time will tell…

Photo by Mantas Hesthaven on Unsplash

I am @ploum, a disconnected speaker and electronic writer, paid in pay-what-you-want on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE licence.

October 14, 2018

For a few years now I've been planning to add support for responsive images to my site.

The past two weeks, I've had to take multiple trips to the West Coast of the United States; last week I traveled from Boston to San Diego and back, and this week I'm flying from Boston to San Francisco and back. I used some of that airplane time to add responsive image support to my site, and just pushed it to production from 30,000 feet in the air!

When a website supports responsive images, it allows the browser to choose between different versions of an image. The browser selects the optimal image by taking into account not only the device's dimensions (e.g. mobile vs desktop) but also the device's screen resolution (e.g. regular vs retina) and the browser viewport (e.g. full-screen browser or not). In theory, a browser could also factor in the internet connection speed, but I don't think any of them do.
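
The selection logic described above can be illustrated with a toy model (this is not any browser's actual algorithm, just a sketch of the idea): compute the physical pixels needed for the image slot and pick the smallest candidate that is wide enough.

```python
# Toy model of responsive-image candidate selection: given the CSS width
# of the image slot and the device pixel ratio, pick the smallest variant
# that is wide enough, falling back to the largest one available.

CANDIDATE_WIDTHS = [480, 640, 768, 960, 1280, 1440]

def pick_variant(slot_css_px, device_pixel_ratio, candidates=CANDIDATE_WIDTHS):
    needed = slot_css_px * device_pixel_ratio   # physical pixels required
    for width in sorted(candidates):
        if width >= needed:
            return width
    return max(candidates)                      # nothing big enough: best effort

print(pick_variant(320, 2.0))   # a 2x phone with a 320px slot -> 640
print(pick_variant(800, 2.0))   # needs 1600 physical px -> falls back to 1440
```

A regular laptop screen showing the same 800px slot at 1x would get the 960px variant instead, which is the whole point: each device downloads only as much image as it can display.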

First of all, with responsive image support, images should always look crisp (I no longer serve an image that is too small for certain devices). Second, my site should also be faster, especially for people using older smartphones on low-bandwidth connections (I no longer serve an image that is too big for an older smartphone).

Serving the right image to the right device can make a big difference in the user experience.

Many articles suggest supporting three image sizes; however, based on my own testing with Chrome's Developer Tools, I didn't feel that three sizes were sufficient. There are so many different screen sizes and screen resolutions today that I decided to offer six versions of each image: 480, 640, 768, 960, 1280 and 1440 pixels wide. And I'm on the fence about adding 1920 as a seventh size.

Because I believe in being in control of my own data, I host almost 10,000 original images on my site. This means that in addition to the original images, I now also store 60,000 image variants. To further improve the site experience, I'm contemplating adding WebP variants as well — that would bring the total number of stored images to 130,000.
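
Those figures work out as follows (a quick sanity check, assuming one WebP file per resized variant, which is my reading of the numbers above):

```python
# Back-of-the-envelope check of the storage figures.
originals = 10_000               # "almost 10,000 original images"
sizes = 6                        # 480, 640, 768, 960, 1280, 1440

resized_variants = originals * sizes          # resized JPEG variants
total_today = originals + resized_variants    # files stored today
webp_variants = resized_variants              # one WebP per resized variant
total_with_webp = total_today + webp_variants

print(resized_variants)   # 60000
print(total_with_webp)    # 130000
```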

If you notice that my photos are clearer and/or page delivery a bit faster, this is why. Through small changes like these, my goal is to continue to improve the user experience on

October 13, 2018

In 1999, I decided to start this site as a place to blog, write, and deepen my thinking. While I ran other websites before, my blog is one of my longest-running projects.

Working on my site helps me relax, so it's not unusual for me to spend a few hours now and then making tweaks. This could include updating my photo galleries, working on more POSSE features, fixing broken links, or upgrading to the latest version of Drupal.

The past month, a collection of smaller updates have resulted in a new visual design for my site. If you are reading this post through an RSS aggregator or through my mailing list, consider checking out the new design on

2018 redesign: before (left) and after (right).

The new design may not win design awards, but it will hopefully make the content easier to consume. My design goals were the following:

  • Improve the readability of the content
  • Improve the discoverability of the content
  • Optimize the performance of my site
  • Give me more creative freedom

Improve the readability of the content

To improve the readability of the content, I implemented various usability best practices for spacing text and images.

I also adjusted the width of the main content area. For optimal readability, you should have between 45 and 75 characters on each line. No more, no less. The old design had about 95 characters on each line, while the new design is closer to 70.

Both the line width and the spacing changes should improve the readability.

Improve the discoverability of content

I also wanted to improve the discoverability of my content. I cover a lot of different topics on my blog: from Drupal to Open Source, startups, business, investing, travel, photography and the Open Web. To help visitors understand what my site is about, I created a new navigation. The new Archive page shows visitors a list of the main topics I write about. It's a small change, but it should help new visitors find their way around.

Optimize the performance of my site

Less noticeable is that the underlying HTML and CSS code is now entirely different. I'm still using Drupal, of course, but I decided to rewrite my Drupal theme from scratch.

The new design's CSS is less than a third of the size of the old one: the previous design had almost 52 KB of theme-specific CSS, while the new design has only 16 KB.

The new design also results in fewer HTTP requests as I replaced all stand-alone icons with inline SVGs. Serving the page you are reading right now only takes 16 HTTP requests compared to 33 HTTP requests with the previous design.

All this results in faster site performance. This is especially important for people visiting my site from a mobile device, and even more important for people visiting my site from areas in the world with slow internet. A lighter theme with fewer HTTP requests makes my site more accessible. It is something I plan to work more on in the future.

Website bloat is a growing problem and impacts the user experience. I wanted to lead by example, and made my site simpler and faster to load.

The new design also uses Flexbox and CSS Grid Layout — both are more modern CSS standards. The new design is fully supported in all main browsers: Chrome, Firefox, Safari and Edge. It is, however, not fully supported on Internet Explorer, which accounts for 3% of all my visitors. Internet Explorer users should still be able to read all content though.

Give me more creative freedom

Last but not least, the new design provides me with a better foundation to build upon in subsequent updates. I wanted more flexibility in how to lay out images in my blog posts, highlight important snippets, and add a table of contents to long posts. You can see all three in action in this post, assuming you're looking at it on a larger screen.

We are pleased to announce the developer rooms that will be organised at FOSDEM 2019. Developer rooms are assigned to self-organising groups to work together on open source projects, to discuss topics relevant to a broader subset of the community, etc. The individual developer room organisers will issue their calls for participation in the next few days. We will update this table with links to the calls for participation.

Saturday 2 February 2019

Topic                                                          Call for Participation  CfP deadline
.NET and TypeScript                                            -                       TBA
Ada                                                            -                       TBA
BSD                                                            announcement            2018-12-10
Collaborative Information and Content Management Applications  -                       TBA
Decentralized…

October 12, 2018

I was talking to Chetan Puttagunta yesterday, and we both agreed that 2018 has been an incredible year for Open Source businesses so far. (Chetan helped lead NEA's investment in Acquia, but is also an investor in Mulesoft, MongoDB and Elastic.)

Between a series of acquisitions and IPOs, Open Source companies have shown incredible financial returns this year. Just look at this year-to-date list:

Company   Acquirer      Date            Value
CoreOS    Red Hat       January 2018    $250 million
Mulesoft  Salesforce    May 2018        $6.5 billion
Magento   Adobe         June 2018       $1.7 billion
GitHub    Microsoft     June 2018       $7.5 billion
Suse      EQT Partners  July 2018       $2.5 billion
Elastic   IPO           September 2018  $4.9 billion

For me, the success of Open Source companies is not a surprise. In 2016, I explained how open source crossed the chasm, and predicted that proprietary software giants would soon need to incorporate Open Source into their own offerings to remain competitive:

The FUD-era where proprietary software giants campaigned aggressively against open source and cloud computing by sowing fear, uncertainty and doubt is over. Ironically, those same critics are now scrambling to paint themselves as committed to open source and cloud architectures.

Adobe's acquisition of Magento, Microsoft's acquisition of GitHub, the $5.2 billion merger of Hortonworks and Cloudera, and Salesforce's acquisition of Mulesoft all confirm this prediction. The FUD around Open Source businesses is officially over.

I published the following diary on “More Equation Editor Exploit Waves”:

This morning, I spotted another wave of malicious documents that (ab)use again CVE-2017-11882 in the Equation Editor (see yesterday’s diary). This time, malicious files are RTF files. One of the samples is SHA256:bc84bb7b07d196339c3f92933c5449e71808aa40a102774729ba6f1c152d5ee2 (VT score: 19/57)… [Read more]

[The post [SANS ISC] More Equation Editor Exploit Waves was first published on /dev/random]

In my country, you must be 18 to stand for election (not so long ago, you even had to be 23 to be a senator). I assume the reasoning is that most people under 18 are considered intellectually immature. They are too easily influenced; they do not yet know the realities of life.

Of course, the limit is arbitrary. Some people are more mature and realistic at 15 than others will ever be in a hundred years of living.

We accept this arbitrary limit to keep out the innocence and naivety of youth.

But what about the other direction?

I observe that many adults who were highly idealistic at 20 suddenly become more preoccupied with their creature comforts and paying the mortgage in their thirties and forties. Beyond that, many of us turn conservative, even reactionary. It is certainly not a universal rule, but it is a constant of human nature, observed since the earliest philosophers. Pliny (or was it Seneca? I can no longer find the text) already mentioned it in my Latin classes.

So why not set an upper age limit on candidacy for election?

Rather than turning elections into a contest of shaking retirees' hands at village fairs, let the young take power. They still have ideals. They are far more concerned about the future of their planet, because they know they will be living on it. They are sharper, more aware of the latest trends. But, unlike their elders, they have neither the time nor the money to invest in elections.

Imagine for a moment a world in which no elected official was over 35 (which disqualifies me outright)! Honestly, I find the idea rather appealing. The elders would become advisers to the young, instead of the current arrangement in which the young fetch coffee for the old, hoping to climb the hierarchy and land an enviable post at 50.

As they approached the fateful 35-year limit, elected officials would start trying to please the younger, still rebellious generation. How beautiful it would be to see those thirty-somethings treat the interests of the young as their top priority. How satisfying it would be to no longer endure the smugness of decrepit officials scolding the laziness of youth between two rounds of golf.

Of course, there are plenty of older people who stay young at heart and in mind. You will recognize them easily: they do not try to occupy the space, but to put youth, the next generation, forward. Somehow, I suspect those people would tend to agree with this post.

After all, we already deprive ourselves of all the intelligence and energy of the under-18s; I doubt we would greatly miss the conservatism of the formerly young.

Photo by Cristina Gottardi on Unsplash


October 11, 2018

I published the following diary on “New Campaign Using Old Equation Editor Vulnerability”:

Yesterday, I found a phishing sample that looked interesting:

From: sales@tjzxchem[.]com
To: me
Subject: RE: Re: Proforma Invoice INV 075 2018-19 ’08
Reply-To: exports.sonyaceramics@gmail[.]com

[Read more]

[The post [SANS ISC] New Campaign Using Old Equation Editor Vulnerability was first published on /dev/random]

October 10, 2018

I used to run a one-liner for creating a new database and adding a new user to it, with a custom password. It looked like this:

mysql> GRANT ALL PRIVILEGES ON ohdear_ci.*
       TO 'ohdear_ci'@'localhost'
       IDENTIFIED BY 'ohdear_secret';

In MySQL 8, however, you'll receive this error message:

ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that
  corresponds to your MySQL server version for the right syntax to use
  near 'IDENTIFIED BY 'ohdear_secret'' at line 1

The reason appears to be that MySQL dropped support for this shorthand version and now requires the slightly longer version instead.

mysql> CREATE USER 'ohdear_ci'@'localhost' IDENTIFIED BY 'ohdear_secret';
Query OK, 0 rows affected (0.11 sec)

mysql> GRANT ALL ON ohdear_ci.* TO 'ohdear_ci'@'localhost';
Query OK, 0 rows affected (0.15 sec)

If you have scripting in place that uses the short one-liner version, be aware it might need changing if you move to MySQL 8.

The post MySQL 8 removes shorthand for creating user + permissions appeared first on

I published the following diary on “‘OG’ Tools Remain Valuable”:

For vendors, the cybersecurity landscape is a nice place to run a very lucrative business. New solutions and tools are released every day and promise to easily detect malicious activities on your networks. And it’s a recurring story. Once they have been implemented by many customers, vendors come back again with new versions flagged as “2.0”, “NG” or “Next Generation”. Is it really useful or just hype? I won’t start the debate, but keep in mind that good old tools and protocols remain very valuable today… [Read more]

[The post [SANS ISC] “OG” Tools Remain Valuable was first published on /dev/random]

October 09, 2018

So I forgot to post my customary love declaration for the month of September, which is, as I'm sure everyone will agree, the most beautiful month in the world. But looking outside it still feels very much like September, and maybe I was just waiting for Chilly Gonzales to release this beautiful Septemberish tune, which he did last week:

[Embedded YouTube video]


Last week, I had a chance to meet with Inrupt, a startup founded by Sir Tim Berners-Lee, who is best known as the inventor of the World Wide Web. Inrupt is based in Boston, so their team stopped by the Acquia office to talk about the new company.

To learn more about Inrupt's founding, I recommend reading Tim Berners-Lee's blog or Inrupt's CEO John Bruce's announcement.

Inrupt is on an important mission

Inrupt's mission is to give individuals control over their own data. Today, a handful of large platform companies (such as Facebook) control the media and flow of information for a majority of internet users. These companies have profited from centralizing the Open Web and, on top of that, lack transparent data privacy policies. Inrupt's goal is not only to improve privacy and data ownership, but to take back power from these large platform companies.

Inrupt will leverage Solid, an open source, decentralized web platform that Tim and others have been developing at MIT. Solid gives users a choice of where their personal data is stored, how specific people and groups can access select elements, and which applications can use it. Inrupt is building a commercial ecosystem around Solid to fuel its success. If Solid and/or Inrupt are widely adopted, it could radically change the way web sites and web applications work today.

As an advocate for the Open Web, I'm excited to see how Inrupt's mission continues to evolve. I've been writing about the importance of the Open Web for many years and even proposed a solution that mirrors Solid, which I called a Personal Information Broker. For me, this is an especially exciting and important mission, and I'll be rooting for Inrupt's success.

My unsolicited advice: disrupt the digital marketing world

It was really interesting to have the Inrupt team visit the Acquia office, because we had the opportunity to discuss how their technology could be applied. I shared a suggestion to develop a killer application around "user-controlled personalization".

Understanding visitors' interests and preferences to deliver personalized experiences is a big business. Companies spend a lot of time and effort trying to scrape together information about their websites' visitors. However, behavior-based personalization can be slow and inaccurate: marketers have to guess a visitor's intentions by observing their behavior, and it can take a long time to build an accurate profile.

By integrating with a "Personal Information Broker" (PIB), marketers could get instant, accurate user profiles. When a user visits a site, they could choose to programmatically share some personal information (using a standard protocol and a standard data schema). After a simple confirmation screen, the PIB could programmatically share that information and the site would instantly be optimized for the user. Instead of getting "cold leads" and trying to learn what each visitor is after, marketers would effectively get more "qualified leads".

It's a win not only for marketers, but for the site visitor too. To understand how this could benefit site visitors, let's explore an example. I'm 6'5" tall, and using a commerce site to find a pair of pants that fit can be a cumbersome process. I wouldn't mind sharing some of my personal data (e.g. inseam, waist size, etc.) with a commerce site if that meant I would instantly be recommended pants that fit based on my preferences. And if the store has no pants that would fit, it could just tell me: "Sorry, we currently have no pants long enough for you!" It would provide a much better shopping experience, making it much more likely for me to come back and become a long-time customer.
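
A minimal sketch of how such a consent-gated exchange might look. The field names, the profile schema and the `release_profile` helper are all invented for illustration; Solid and Inrupt define their own protocols and data schemas.

```python
# Hypothetical "Personal Information Broker" flow: the user's full profile
# lives with their broker, and a site receives only the fields that were
# both requested by the site and approved by the user.

FULL_PROFILE = {                 # stored with the user's broker, not the site
    "inseam_cm": 91,
    "waist_cm": 86,
    "shoe_size_eu": 47,
    "email": "visitor@example.org",
}

def release_profile(profile, requested, consented):
    """Release only the fields that were both requested and consented to."""
    return {k: profile[k] for k in requested if k in consented and k in profile}

# The commerce site asks for sizing data plus an email address; on the
# confirmation screen the user approves only the sizing fields.
shared = release_profile(
    FULL_PROFILE,
    requested=["inseam_cm", "waist_cm", "email"],
    consented={"inseam_cm", "waist_cm"},
)
print(shared)   # {'inseam_cm': 91, 'waist_cm': 86}
```

The design point is that the filtering happens on the user's side of the exchange: the site never sees fields the user declined to share.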

It's a simple idea that provides a compelling win-win for both the consumer and retailer, and has the opportunity to disrupt the digital sales and marketing world. I've been thinking a lot about user-controlled personalization over the past few years. It's where I'd like to take Acquia Lift, Acquia's own personalization product.

Inrupt's success will depend on good execution

I love what Solid and Inrupt are building because I see a lot of potential in it. Disrupting the digital marketing world is just one way the technology could be applied. Whatever they decide to focus on, I believe they are onto something important that could be a foundational component of the future web.

However, it takes a lot more than a good idea to build a successful company. For startups, it's all about good execution, and Inrupt has a lot of work to do. Right now, Inrupt has prototype technology that needs to be turned into real solutions. The main challenge is not building the technology, but getting it widely adopted.

For an idea this big, Inrupt will have to develop a protocol (something Tim Berners-Lee obviously has a lot of experience with), build out a leading Open Source reference implementation, and foster a thriving community of developers that can help build integrations with Drupal, WordPress and hundreds of other web applications. Last but not least, Inrupt needs to look for a sustainable business model by means of value-added services.

The good news is that by launching their company now, Inrupt has put themselves on the map. With Tim Berners-Lee's involvement, Inrupt should be able to attract top talent and funding for years to come.

Long story short, I like what Inrupt is doing and believe it has a lot of potential. I'm not sure what specific problem and market they'll go after, but I think they should consider going after "user-controlled personalization" and disrupt the digital marketing world. Regardless, I'll be paying close attention, will be cheering for their success and hopefully find a way to integrate it in Acquia Lift!

October 05, 2018

“Talk is cheap. Show me the code.”

This Thursday, 18 October 2018 at 7pm, the 71st Mons session of the Belgian Jeudis du Libre will take place.

Topic of this session: Photo hacking for dummies, or building an automated image-capture rig

Theme: Photography | Hardware & Embedded

Audience: everyone

Speaker: Robert Viseur

Venue: Université de Mons, Campus Plaine de Nimy, avenue Maistriau, building 6, Auditorium 18 (see this map on the UMONS website, or the OSM map).

Attendance is free and only requires registration by name, preferably in advance, or at the door. Please indicate your intention by registering via the page

The Jeudis du Libre in Mons are also supported by our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to consult the agenda and subscribe to the mailing list to receive all announcements.

As a reminder, the Jeudis du Libre are intended as venues for exchange around Free Software topics. The Mons sessions take place every third Thursday of the month, in premises of, and in collaboration with, the Mons universities and colleges involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which promotes free software.

Description: Free software and open hardware make it possible to build automated image-capture rigs. These can be built with a Raspberry Pi and a Camera Pi, or by driving a digital camera from a computer running GNU/Linux. The control can range from a simple shutter release to fully automated focus stacking. Downstream of the capture, further processing is possible: dynamic-range enhancement, colour work, filter application, etc. After a first talk in 2015 presenting example builds, the subject returns in 2018 with source-code examples to go with it.

Short bio: Robert Viseur is a lecturer and head of the Information and Communication Technologies department in the Warocqué Faculty of Economics and Management at the University of Mons (UMONS). His research focuses mainly on digital transformation, innovation management and the management of open source projects. A passionate photographer, he is a board member of the Cercle Royal Photographique de Charleroi. He complements his artistic activities by developing image-processing systems and automated capture rigs based on open source technologies (Raspberry Pi, Camera Pi, Python, gPhoto2, Enfuse, …).

October 04, 2018

After Junior, it is Max's turn to be killed by the strange building's defences. Eva and Nellio are left alone.

As we talk, we keep walking down a long corridor of gilded glass, leaving Max's broken body behind us. I am strangely numb. Perhaps it is shock? We stop before a wide door of varnished wood with gilded handles. In this setting, the door seems incongruous, out of time. It stands like a coffin in the middle of a playground. The wall around it is studded with marble bas-reliefs and wrought-iron light fittings.

Eva seems lost in thought, but I will not let it go. I demand explanations. I want to know, to understand at any cost, however incongruous the setting.

— Second question, Eva. Why are we, according to you, dead? And why do you speak of humans as "you"? You have just proved you are entirely biological!

— So you still haven't understood?

Her eyes plunge into mine. In them I read an incredible humanity, a blend of naivety tinged with infinite wisdom. Eva has the eyes of a newborn who is a thousand years old and has lived through every war. She is at once human and inhuman in her total humanity. She is the superhuman who is every human at once. Before her I feel nameless idiocy. What have I failed to see that is so obvious?

— No, I haven't understood. Or I refuse to accept it. I want to hear the truth from your own mouth. I want you to tell me everything, with no innuendo, no taboo. To explain this truth that escapes me as you would to a five-year-old.

Slowly, she runs her hand over my face. Gently, she bites her lips. I shiver. I close my eyes, breathing in the scent of her fingers as they brush my nostrils.

— What truth, Nellio?

I look at her and swallow. She has never seemed so beautiful as in this moment. Into the chemical cataclysm that is my body now blends a bestial reproductive instinct, patiently transformed into love by millennia of evolution and natural selection.

— The truth, Eva. The one and only truth. What really happened.
— How could you lay claim to the truth, you whose perception is necessarily limited to five underdeveloped senses? Senses that feed their flawed information to a stunted brain. Can a cell in an astronomer's knee understand what a planet is, what gravitation is? A human cannot understand infrared or ultraviolet. He can hear only a narrow band of sounds and can barely imagine what a smell is, where a dog will recognize a person hours after they have passed.
— I…
— Nellio, truths are infinite, complex, shifting.

Our bodies have drawn closer. Her hands caress my chest, my neck. Without thinking, I have taken her by the hips. My breathing grows ragged.

— Eva, I need to understand. Why don't you consider yourself one of the humans?

Her only answer is to press her lips hungrily against mine. Her tongue clumsily seeks its way in. With my arms I circle her shoulders, stroke her back, slide slowly down to the small of her back.

She pulls away for a moment to catch her breath and contemplates my stunned face. A frank, crystalline, feminine, contagious laugh spreads and surrounds me. I cannot resist. I laugh, I join her.

Without quite knowing how, we find ourselves entwined, rolling on the shiny glass floor. Frantically, she unbuttons my trousers, tries to pull them off with small feverish gestures. My heart races, my vision blurs. In my excitement I struggle to gulp down a few breaths of air while attempting a few clumsy caresses of my own.

Time has ceased to exist. The deaths, the danger, the incongruity of the place have been wiped from my memory as our naked bodies seek only to unite, to reproduce. I am nothing but desire: to hoard, to possess, to penetrate, to love, to copy the chromosomes that make me.

On the floor, two animals governed by hormones are at work, two clusters of cells seeking to reproduce, to perpetuate life. All intelligence has vanished. My being, my history, my life, my philosophy are condensed into this single instant when my seed will fertilize a female, when, perhaps, I will pass on life before withering away, unaware of my own uselessness.

My sex fumbles, slides, then enters this body that I possess, that I hold in my hands. Between two breathless grunts I see Eva laugh, moan, tense with pain, be surprised, smile, come.

I am on top of her, crushing her with my weight. The next instant she straddles me, dominates me. We roll, we turn. I admire her breasts, her belly, her throat, her back, her dark, delicious buttocks. My hands seek to caress her, to savour every centimetre of her body.

As she writhes beneath me, my sex is suddenly gripped tight. For a fraction of a second my throat tightens; I swallow. Then I come with a deep, bestial, organic groan. My body is seized by uncontrollable spasms. Eva answers with a small whimper before I collapse at her side, exhausted, sated.

Combien de temps restons-nous côte-à-côte sans rien dire, reprenant notre souffle ? Tout mon esprit lutte contre cet implacable sommeil post-coïtal. Je tente de reprendre mes esprits.

Eva est la première à rompre le silence :
— C’est donc cela être humain ? C’est tellement beau et effrayant à la fois. Je croyais que la douleur et le plaisir étaient deux extrêmes opposés mais je constate que, chez l’humain, la frontière est floue. Jamais je n’avais compris cela. Je tentais de minimiser la douleur et, sans le savoir, je tuais le plaisir. J’ai été créé pour le plaisir. Pourtant, je suis le fruit d’un monde de douleur.
— Eva, de quoi parles-tu ?

Elle soupire avant de m’adresser un regard à la fois complice et condescendant. Du bout des doigts, elle frôle ma joue.
— Il est temps que je t’explique qui je suis, Nellio…

Dans un claquement, la porte devant laquelle nous gisons, pantins nus et désarticulés, s’ouvre brusquement.

Photo by Alessia Cocconi on Unsplash

I am @ploum, a disconnected speaker and electronic writer, paid on a pay-what-you-can basis via Tipeee, Patreon, Paypal, Liberapay, or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

October 03, 2018


This is an automatic answering machine.

Until December 31, 2018, I have decided to try the experiment of disconnecting completely from social networks, from press/media/news sites, and from any web service that lives off advertising. I detail my motivations in this post.

My Twitter and Mastodon accounts and my Facebook page will continue to exist and will be fed automatically by the posts of this blog, but I will not see your follow requests, your comments, your notifications, your likes or your messages. The other accounts (personal Facebook, Google+, Diaspora, …) will most likely be deserted.

This experiment will be shared on this blog, both the reflections and the technical tools put in place. If you find it interesting, feel free to support me on Tipeee.

If you see someone trying to interact with me on a social network, please send them this link.

To contact me, use email: or leave a message after the beep.


Photo by Amy Lister on Unsplash


October 02, 2018

Why I intend to return to my Thebaid and share with you my experience of total disconnection from social networks and the news.

For all the time I have been following Thierry Crouzet on social networks and reading his books, I admit I had never been interested in his “J’ai débranché”, the account of his Internet overdose and his total disconnection for six months.

One evening, as I was compulsively checking my Twitter feed to reward myself for finishing a boring task, the title came back to mind and I suddenly wanted to read it. Miracle of the Internet/pirate generation: it took less than a minute for the characters to appear on the screen of my e-reader.

Thierry's story speaks to me in particular because I discovered how much I share with him. In some respects I even feel like a Belgian version of the loud-mouthed, stereotypically French Crouzet. Asocial bordering on misanthropic (except when giving talks), angry at the stupidity of the world (but less than he is), addicted to reading and to the Internet (but less than he is), an electronic writer (but less than he is). Like him, I cannot hold a conversation without constant references to my own writing (in my case blog posts, in his case books), and like him I love cycling and contemplating vast stretches of water (though I am not lucky enough to have one in my garden, which makes me mortally jealous).

In short, I recognize in his text a caricature of what I feel I am: a white knight off to save the world on the Internet. The first chapters pull no punches: the first step toward recovery is accepting that you are ill. Crouzet was sick with the Internet. Am I too?

And yet I find the Internet so marvellous. So magnificent. I still think the Internet is the future of humanity, that our salvation will come through the evolution of Homo sapiens into Homo collaborans, a cell of a gigantic living being whose backbone will be the Internet.

So, once again, I find Crouzet a bit too extreme: disconnecting from the Internet to the point of looking up phone numbers on paper is amusing for reality TV of the “My Crouzet among the unconnected” kind, but is it constructive?

For me, the heart of the problem remains advertising. Beyond its undeniable destructive effect on our brains, advertising has introduced a particularly perverse side effect: the race for clicks. Even if it is no longer really profitable, websites now have to sell clicks and time spent online. So they all optimize for that model: making users as addicted as possible so that they come back often and click on ads as much as possible. The click has become the observable (I told you I make self-references) that drives the web, to the point that few people even dare to imagine a revenue model not tied to advertising or clicks.

This form of monetization has introduced all sorts of innovations particularly harmful to our brains: red notification badges to make you want to click, infinite feeds to make you feel you will miss something if you leave the page, oversized “likes” that are easy to send in one click and trigger a shot of dopamine in the poster (even though they are fundamentally useless, just like “pokes” or the defunct social network “Yo”).

I realize that I prefer hunting for new articles to add to Pocket over actually reading them! I also dream of attentively reading certain books on my shelf, but I put them down next to my computer. When I leaf through them, computers and phones seem to blink, to call out to me!

At first I thought that simple awareness would be enough, that I was not that addicted. Then I installed a temporary blocker (SelfControl). I discovered that, even though I knew perfectly well that certain sites were blocked, my fingers typed the URL by reflex whenever I got tired of what I was doing. I was astonished to see a connection error page for Twitter when I had never consciously decided to go to Twitter. My brain has so reinforced those connections, and tied them to little bursts of pleasure, that I have lost control.

So I decided to attempt a disconnection experiment. Not from the Internet, but from social networks and from the media in general. To rebuild my Thebaid, which has been thoroughly invaded in recent years. Unlike Thierry, I had no medical warning. My motivation is entirely different: I realize that the time I spend online is no longer productive. I no longer write long-form pieces, my texts languish, I prefer the immediacy of a tweet that will get 2 or 200 likes.

I need and want to write. But I also need and want to be read, because a text only exists in the mind of the reader. My disconnection will cover social networks and all the sites that try to “inform” me, to take up my brain time. I neither need nor want to know what is happening in the world. It is certainly entertaining, often infuriating, but it makes me want to react immediately. It wastes my time and my mental energy, and it manipulates my emotions.

The goal is to end up with a set of rules, of sites to block, and a setup that lets me enjoy the Internet without being its slave. To have the good sides without the annoyances. That is why I will share my impressions and my results on this blog.

For several years now there have been no statistics and no comments on this blog. Two decisions I have never regretted. But the statistics and the comments eventually migrated to social networks. By cutting myself off from them once more, I am only putting into practice what I have always preached. The funniest part is that, unless you email me directly, I will never know whether what I write is popular, useful to others, reshared or ignored. But that is the beauty of the experiment! My only observable will be watching the number of contributors grow on Tipeee and Liberapay.

Just time to pack my bags and I am off on this experiment, which I will share in full with you on this blog and, ironically, on the social networks where my posts will be shared automatically without any intervention on my part.

I am curious and even impatient, although, I admit, I may miss the Mastodon community a little.

Photo by Nicholas Barbaros on Unsplash


Drupal 8’s REST API reached a next level of maturity in 8.5. In 8.6, we matured it further, added features and closed some gaps.

Drupal 8.6 was released¹ with some significant API-First improvements!

The REST API made a big step forward with the 6th minor release of Drupal 8 — I hope you’ll like these improvements :)

Thanks to everyone who contributed!

  1. File uploads! #1927648

    No more crazy per-site custom REST resource plugins, complex work-arounds or base64-encoded hacks! Safe file uploads of any size are now natively supported!

    POST /file/upload/node/article/field_hero_image?_format=json HTTP/1.1
    Content-Type: application/octet-stream
    Content-Disposition: file; filename="filename.jpg"
    [… binary file data …]

    then, after receiving a response to the above request:

    POST /node?_format=json HTTP/1.1
    Content-Type: application/json

    {
      "type": [{"value": "article"}],
      "title": [{"value": "Dramallama"}],
      // Note that this is using the file ID we got back in the response to our previous request!
      "field_hero_image": [
        {
          "target_id": 345345,
          "description": "The most fascinating image ever!"
        }
      ]
    }
    If you’d like a more complete example, see the change record, which explains it in detail. And if you want to read about the design rationale, see the dedicated blog post.

  2. The parent field on Term is now a standard entity reference #2543726

    Before:

    "parent": []

    After:

    "parent": [
      {
        "target_id": 2,
        "target_type": "taxonomy_term",
        "target_uuid": "371d9486-1be8-4893-ab20-52cf5ae38e60",
        "url": ""
      }
    ]
    We fixed this at the root, which means it not only helps core’s REST API, but also the contributed JSON API and GraphQL modules, as well as removing the need for its previously custom Views support!

  3. alt property on image field lost in denormalization #2935738

    {
      "target_id": 2,
      "target_type": "file",
      "target_uuid": "be13c53e-7f95-4add-941a-fd3ef81de979",
      "alt": "Beautiful llama!"
    }

    after denormalizing, saving and then normalizing, this would result in:

    {
      "target_id": 2,
      "target_type": "file",
      "target_uuid": "be13c53e-7f95-4add-941a-fd3ef81de979",
      "alt": ""
    }

    Same thing for the description property on file and image fields, as well as text, width and height on image fields. Denormalization was simply not taking into account any properties that specializations of the entity_reference field type added!

  4. PATCHing a forbidden field → 403 response with a reason #2938035

    Before:

    {"message":"Access denied on updating field 'sticky'."}

    After:

    {"message":"Access denied on updating field 'sticky'. The 'administer nodes' permission is required."}

    Just like we improved PATCH support in Drupal 8.5 (see point 4 in the 8.5 blog post), we again improved it! Previously when you’d try to modify a field you’re not allowed to modify, you’d just get a 403 response … but that wouldn’t tell you why you weren’t allowed to do so. This of course was rather frustrating, and required a certain level of Drupal knowledge to solve. Now Drupal is far more helpful!

  5. 406 responses now list & link supported formats #2955383

    Imagine you’re doing an HTTP request like GET /entity/block/bartik_branding?_format=hal_json. The response is now more helpful.

    Before:

    Content-Type: application/hal+json

    {"message": "No route found for the specified format hal_json."}

    After:

    Content-Type: application/hal+json
    Link: <>; rel="alternate"; type="application/json", <>; rel="alternate"; type="text/xml"

    {"message": "No route found for the specified format hal_json. Supported formats: json, xml."}
  6. Modules providing entity types now responsible for REST tests

    Just like we achieved comprehensive test coverage in Drupal 8.5 (see point 7 in the 8.5 blog post), we again improved it! Previously, the rest.module component in Drupal core provided test coverage for all core entity types. But if Drupal wants to be API-First, then we need every component to make HTTP API support a priority.
    That is why in Drupal 8.6, the module providing an entity type contains said test coverage (A). We also still have test coverage (B). Put A and B together, and we’ve effectively made HTTP API support a new gate for entity types being added to Drupal core. Also see the dedicated blog post.

  7. rest.module is now maintainable!

    I’m happy to be able to proudly declare that Drupal 8 core’s rest.module in Drupal 8.6 can for the first time be considered to be in a “maintainable” state, or put differently: in a well-maintained state. I already wrote about this in a dedicated blog post 4.5 months ago. Back then, for the first time, the number of open issues fit on “a single page” (fewer than 50). Today, several months later, this is still the case. Which means that my assessment has proven true :) Whew!

Want more nuance and detail? See the REST: top priorities for Drupal 8.6.x issue on

Are you curious what we’re working on for Drupal 8.7? Want to follow along? Click the follow button at REST: top priorities for Drupal 8.7.x — whenever things on the list are completed (or when the list gets longer), a comment gets posted. It’s the best way to follow along closely!²

The other thing that we’re working on for 8.7 besides the REST API is getting the JSON API module polished to core-worthiness. All of the above improvements help JSON API either directly or indirectly! I also wrote about this in my State of JSON API blog post. Given that the REST API is now in a solid place, for most of 2018 the majority of our attention has actually gone to JSON API, not core’s REST API. I expect this to continue to be the case.

Was this helpful? Let me know in the comments!

For reference, historical data:

  1. This blog post is long overdue since 8.6 was released almost a month ago. Some personal life issues caused a delay. ↩︎

  2. ~50 comments per six months — so very little noise. ↩︎

September 29, 2018

Throughout my life I have often gone through periods of exploration and dispersion, followed by periods of refocusing, of concentration, of trying to get back to essentials. For a few years now I have been trying to focus on what I am passionate about: writing.

But writing turns out to be a far more varied activity than I believed. Besides the blog and my serial Printeurs, I cannot help exploring new ground.

First with children's literature, through the adventures of Aristide, whose crowdfunding campaign ends on Monday. And, I admit, I will not be sorry. It is exhausting to keep reminding people that the campaign is under way, to ask you for support when most people who see the message are already supporters, to play the spammer when you loathe spam. But it is probably an indispensable experience in the life of a modern author. (And if you have not ordered your copy yet, now is the time!)

At the opposite end from children's tales, I am also delighting in the role of editorialist on ProofOfCast.

If you have not listened to it yet, I warmly recommend this French-language podcast devoted to cryptocurrencies and other blockchains. The latest episode has just gone online, and you can follow us on Mastodon.

When Sébastien Arbogast floated the idea, I accepted immediately, knowing he would be at the controls: recording, editing, publishing, promotion, writing. For my part, I settle into the comfortable position of editorialist.

After five editorials, I confess I find the experience absolutely fascinating. Writing an editorial for a podcast, and thus for the spoken word, is a completely different exercise from writing a blog, books, or anything else. It is a diversification I love, and I am already starting to find my bearings, a few reflexes, to notice that the writing is becoming more pleasant. In fact, I laugh at my own jokes while writing my editorials (I am an easy audience) and, as Seb says, if you want something done right, do it yourself.

In short, if you have not yet heard our ramblings, I encourage you to catch up by listening to the ProofOfCast episodes (I do not appear in all of them, but they are all good anyway).

And if you are looking for an editorial pen for your podcast or radio show, do not hesitate to send me your project. It is an exercise in which I am a complete beginner, but one I would take great pleasure in improving at.

Photo by Will Francis on Unsplash


September 28, 2018

I published the following diary on “More Excel DDE Code Injection”:

The “DDE code injection” technique is not brand new. DDE stands for “Dynamic Data Exchange”. It has already been discussed by many security researchers. Just a quick reminder for those who missed it. In Excel, it is possible to trigger the execution of an external command by using the following syntax… [Read more]
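For context, the classic Excel DDE syntax that researchers keep abusing looks like the following. This is a generic, widely documented illustration and is not quoted from the diary above:

```text
=cmd|'/c calc.exe'!A1
```

Typed into a cell, this asks Excel to open a DDE channel to cmd.exe and launch calc.exe once the victim clicks through the warning prompts; real campaigns swap the calculator for a PowerShell download cradle.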

[The post [SANS ISC] More Excel DDE Code Injection has been first published on /dev/random]

I received my Yubikey 4C Nano a while ago (“C” because it is compatible with USB-C connectors), but I had not yet had time to configure it for use with my PGP key. It’s now done! As you can see, it fits perfectly in my Macbook Pro:

Yubikey 4C Nano

I won’t write a tutorial about installing PGP keys on a Yubikey; just use your Google-fu to find some references. Just a few words about the nice “touch” feature, which lets you protect the use of your private keys with an extra physical layer. Once enabled, the result of a cryptographic operation involving a private key (signature, decryption or authentication) is released only if the correct user PIN is provided and the YubiKey touch sensor is triggered. The YubiKey blinks for a few seconds to remind you to touch it.

So, my new PGP key has been published on the key servers. The old one will be revoked soon. If you exchange sensitive data with me, please be sure to use the new one from now on!

[The post New PGP Key has been first published on /dev/random]

September 25, 2018

September 24, 2018

The post ssh error: unable to negotiate with IP: no matching cipher found appeared first on

I have a pretty old Synology NAS at home that I use for some basic file storage. I also abuse it as a cron server, doing some simple rsyncs from remote systems. When I last tried to SSH into it, I was greeted with this error.

$ ssh admin@nas.home
Unable to negotiate with port 22: no matching cipher found.
  Their offer: aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc

Turns out my client's SSH had been updated and now blocks several insecure ciphers by default. And this Synology runs an ancient SSH daemon that only supports those outdated ciphers.

To check which ciphers your client supports, run this:

$ ssh -Q cipher

This list includes several ciphers that are supported by both my ancient SSH server and the client; they're just disabled by default on the client: things like 3des-cbc, aes128-cbc, aes256-cbc, etc.

To enable those ciphers anyway, you can force their use with the -c parameter.

$ ssh -c aes256-cbc admin@nas.home

This will re-allow access to that ancient SSH server.
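Rather than passing -c on every connection, the same override can be made persistent in ~/.ssh/config. A minimal sketch, assuming the host and cipher from the example above (the OpenSSH "+" prefix appends to the default cipher list instead of replacing it):

```text
# ~/.ssh/config
Host nas.home
    User admin
    # Re-enable one legacy cipher for this ancient host only
    Ciphers +aes256-cbc
```

Scoping the option to a single Host block keeps the strong default cipher list for every other server.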

Now to find a way to upgrade the SSH daemon on that Synology without breaking it ...


September 23, 2018

Recently I had to update the existing code running behind the mirrorlist service (the service that returns a list of validated mirrors for yum; see the /etc/yum.repos.d/CentOS*.repo file), as it was still using the Maxmind GeoIP Legacy country database. As you probably know, Maxmind announced that they're discontinuing the Legacy DB, so that was one reason to update the code. Switching to GeoLite2, with the python2-geoip2 package, was really easy to do, and that change was already done and pushed last month.

But then I discussed with Anssi (if you don't know him, he keeps the CentOS external mirrors DB up to date, including through the centos-mirror list) and we thought about making that change not only there, but in the whole chain (so on our "mirror crawler" node, and also for the service). Random chats like these are good: suddenly you don't just want to "fix" one thing, you also take the time to enhance it and add new features.

The previous code already supported both IPv4 and IPv6, but it consumed different data sources (as external mirrors were validated differently for IPv4 vs IPv6 connectivity). So the first thing was to rewrite/combine the new code in the "mirror crawler" process for dual-stack tests, and also to reflect that change on the frontend (aka nodes).

While we were working on this, Anssi proposed not just adapting the code, but converting it to the same python format as the, which he did.

The last big change we added is the following: only some repositories/architectures were checked/validated in the past, but not the other ones (nothing from the SIGs and nothing from AltArch, so no mirrorlist support for i386/armhfp/aarch64/ppc64/ppc64le).

While this wasn't a real problem in the past, when we launched the SIGs concept and later added the other architectures (AltArch), we suddenly started suffering from some side effects:

  • More and more users "using" RPM content from our infrastructure (mainly through SIGs, which is a good indicator that those are successful, and a good "problem to solve")
  • We are currently losing some nodes in that network (it's still entirely based on free dedicated servers donated to the project)

To address the first point, offloading more content to the 600+ external mirrors we have right now would be really good, as those nodes have better connectivity than we do and more presence around the globe, so slowly pointing SIGs and AltArch at those external mirrors will help.

The other good point is that, since we switched to the GeoLite2 City DB, we get more granularity: for example, instead of "just" returning a list of 10 validated mirrors for the USA (if your request was identified as coming from that country, of course), you now get a list of validated mirrors in your state/region instead. For big countries with a lot of mirrors, that also means we distribute the load better amongst all of them, which is a big win for everybody: users and mirror admins alike.
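The region-aware selection described above can be sketched in a few lines of Python. This is a hypothetical simplification for illustration only; the field names and fallback order are assumptions, not the production mirrorlist code:

```python
# Hypothetical sketch: prefer mirrors in the client's state/region,
# fall back to country-wide mirrors, then to the worldwide pool.
def pick_mirrors(country, region, mirrors, n=10):
    """mirrors: iterable of dicts with 'url', 'country' and 'region' keys."""
    regional = [m for m in mirrors if m["country"] == country and m["region"] == region]
    national = [m for m in mirrors if m["country"] == country]
    pool = regional or national or list(mirrors)
    return [m["url"] for m in pool[:n]]

mirrors = [
    {"url": "http://mirror1.example/centos", "country": "US", "region": "TX"},
    {"url": "http://mirror2.example/centos", "country": "US", "region": "NY"},
    {"url": "http://mirror3.example/centos", "country": "DE", "region": "BY"},
]

# A client geolocated to New York only gets the regional match.
print(pick_mirrors("US", "NY", mirrors))
```

A client from a US state with no regional match would still receive both US mirrors, and one from an uncovered country would fall back to the full list.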

For people interested in the code, you'll see that we just run several instances of the python code behind Apache with mod_proxy_balancer. That means that if we need to increase the number of "instances", it's easy to do, but so far it's running great with 5 instances per node (and we have 4 nodes behind it). Worth noting that, on average, each of those nodes gets 36+ million requests per week for the mirrorlist service (so 144+ million in total per week).
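The setup just described, several identical Python instances behind Apache's mod_proxy_balancer, would look roughly like this. This is a sketch only; the ports, member count and balancer name are assumptions, not the actual CentOS configuration:

```text
# Load-balance the mirrorlist service over 5 local Python instances
<Proxy "balancer://mirrorlist">
    BalancerMember "http://127.0.0.1:8001"
    BalancerMember "http://127.0.0.1:8002"
    BalancerMember "http://127.0.0.1:8003"
    BalancerMember "http://127.0.0.1:8004"
    BalancerMember "http://127.0.0.1:8005"
</Proxy>

ProxyPass        "/" "balancer://mirrorlist/"
ProxyPassReverse "/" "balancer://mirrorlist/"
```

Adding capacity is then just another BalancerMember line plus one more running instance.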

So in (very) short summary :

  • the code now supports SIGs/AltArch repositories (we'll sync with the SIGs to update their .repo files to use mirrorlist= instead of baseurl= soon)
  • we have better accuracy for large countries, so we redirect you to a 'closer' validated mirror

One reminder, btw: you can verify which nodes are returned to you with some simple requests:

# to force ipv4
curl '' -4
# to force ipv6
curl '' -6

The last thing I wanted to mention is a potential way to fix point #2 from the list: when I checked our "donated nodes" inventory, we are still running CentOS on nodes from ~2003 (yes, you read that correctly), so if you want to help/sponsor the CentOS Project, feel free to reach out!

September 20, 2018

Petit rappel sur le danger des plateformes centralisées

On a tendance à l’oublier mais, aujourd’hui, la plupart de nos interactions sur le net ont lieu à travers des plateformes centralisées. Cela signifie qu’une seule entité possède le pouvoir absolu sur ce qui se passe sur sa plateforme.

C’est fort théorique jusqu’au jour où, sans raison apparente, votre compte est suspendu. Du jour au lendemain, tous vos contenus sont inaccessibles. Tous vos contacts sont injoignables (et vous êtes injoignables pour eux).

C’est insupportable lorsque ça vous arrive à titre privé. Cela peut être tout bonnement la ruine si cela vous arrive à titre professionnel. Et ce n’est pas réservé qu’aux autres.

Un ami s’est ainsi un jour réveillé pour constater que son compte Google était complètement suspendu. Il n’avait plus accès à ses emails personnels et semi-professionnels. Il n’avait plus accès à ses documents Google Drive, à son calendrier. Il n’avait plus accès à tous les services qui utilisent votre adresse Google pour se connecter. La plupart des apps de son téléphone ne fonctionnaient plus. Il était numériquement anéanti et n’avait absolument aucun recours. De plus, il était en voyage à l’autre bout du monde. Cela pourrait limite faire le scenario d’un film.

Par chance, un de ses contacts travaillait chez Google et a réussi à débloquer la situation après quelques semaines mais sans aucune explication. Cela pourrait lui arriver encore demain. Cela pourrait vous arriver à vous. Faites l’expérience de vivre une semaine sans votre compte Google et vous comprendrez.

Personnellement, je ne me sentais pas trop concerné car, si je suis encore partiellement chez Google, j’ai un compte payant avec mon propre nom de domaine. Celui-là, me disais-je, ne devrait jamais avoir de problème. Après tout, je suis un client payant.

Mais voilà que je découvre que mon profil Google+ a été suspendu. Si je peux toujours utiliser mes mails et mon calendrier, tous mes posts Google+ sont désormais inaccessibles. Heureusement que, depuis des années, je n’y postais plus activement et que j’avais copié sur mon blog tout ce que je trouvais important pour moi. Néanmoins, pour les 3000 personnes qui me suivent sur Google+, j’ai tout simplement cessé d’exister et ils ne le savent pas (merci de leur envoyer ce billet). Lorsque je collabore à un document sur Google Drive, ce n’est plus ma photo et le nom que j’ai choisi qui s’affichent. Je ne peux plus non plus commenter sur Youtube (pas que ça m’arrive).

Bref, si cette suppression de profil n’est pas dramatique pour moi, elle sert quand même de piqûre de rappel. Sur Twitter, j’ai vécu un incident similaire lorsque j’ai décidé de tester le modèle publicitaire en payant pour promouvoir les aventures d’Aristide, mon livre pour enfant (que vous pouvez encore commander durant quelques jours). Après quelques impressions, ma campagne a été bloquée et j’ai été averti que ma publicité ne respectait pas les règles de Twitter (un livre pour enfants, imaginez un peu le scandale !). Chance, mon compte n’a pas été affecté par cette décision (cela m’aurait bien plus ennuyé que la suspension Google+) mais, encore une fois, on n’est pas passé loin.

Même histoire sur Reddit où, là, mon compte a été “shadow banned”. Cela veut dire que je ne m’en rendais pas compte mais tout ce que je postais était invisible. Je m’étonnais de n’avoir aucune réponse à mes commentaires et c’est plusieurs contacts qui m’ont confirmé ne pas voir ce que je postais. Un modérateur anonyme a décidé, de manière irrévocable, que je ne convenais pas à Reddit et je n’en savais rien.

Sur Facebook, les histoires de ce genre sont nombreuses même si je n’en ai pas fait l’expérience personnellement. Sur Medium, le projet Liberapay a également connu ce genre d’aventure car, comme toute plateforme centralisée, Medium s’arroge le droit de décider ce qui est publiable ou non sur sa plateforme.

Dans chacun des cas, vous remarquez qu’il n’y a aucune infraction clairement identifiée et qu’il n’y a aucun recours. Parfois il est possible de demander de réexaminer la situation mais la discussion n’est généralement pas possible, tout est automatique.

So what can we do?

Well, that is exactly why I warmly recommend that you also follow me on Mastodon, a decentralized social network. I do not run my own instance, but I chose to create my account on the instance set up by La Quadrature du Net. I am convinced that, in case of a problem or a dispute, I will be dealing with a human being who more or less shares my values.

Of course, the solution is not perfect. If your Mastodon instance disappears, you lose everything as well. That is why many Mastodon users now have several accounts. I admit that, in this case, I put blind trust in the technical abilities of La Quadrature du Net. But, given the choice, I would rather lose an account to a technical failure than to an unjust and unappealable political decision.

While waiting for a perfect solution, it is important to constantly remind ourselves that we hand our content over to the platforms, and that they do with it whatever they want, or whatever they can. So don't spend too much energy writing long essays or polishing your texts over there. Keep in mind that all your accounts, your writings, your data and your contacts could disappear tomorrow. Plan fallback solutions; be resilient.

That is why this blog has remained, for 14 years, the one constant of my online presence, my only official and unique record. It has already outlived most web services, and I even hope it will outlive me.

PS: Without any explanation, my Google+ account seems to be active again. But for how long?

Photo by Kev Seto on Unsplash

I am @ploum, speaker and disconnected electronic writer, paid on a pay-what-you-want basis via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

I published the following diary on “Hunting for Suspicious Processes with OSSEC“:

Here is a quick example of how OSSEC can be helpful to perform threat hunting. OSSEC is a free security monitoring tool/log management platform which has many features related to detecting malicious activity on a live system, like the rootkit detection or syscheck modules. Here is an example of rules that can be deployed to track malicious processes running on a host (it can be seen as an extension of the existing rootkit detection features). What do I mean by malicious processes? Think about crypto miners. There are plenty of suspicious processes that can be extracted from malicious scripts… [Read more]
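As a sketch of how this works (the rule ID, frequency and process names below are invented for illustration and are not taken from the diary), OSSEC can periodically run a command on the monitored host and raise an alert when the output matches a pattern; stock rule 530 matches the output of such command monitoring:

```xml
<!-- In ossec.conf on the agent: periodically log the full process list.
     full_command captures the whole command line; frequency is in seconds. -->
<localfile>
  <log_format>full_command</log_format>
  <command>ps -e -o user,pid,command</command>
  <frequency>300</frequency>
</localfile>

<!-- In local_rules.xml: alert when a known miner name shows up.
     Rule id 100100 and the process names are made-up examples. -->
<group name="local,">
  <rule id="100100" level="10">
    <if_sid>530</if_sid>
    <regex>xmrig|minerd</regex>
    <description>Suspicious process detected (possible crypto miner)</description>
  </rule>
</group>
```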

[The post [SANS ISC] Hunting for Suspicious Processes with OSSEC has been first published on /dev/random]

September 18, 2018

September 17, 2018

Or do you listen to Facebook?

You have probably heard this rumor: Facebook supposedly listens to all our conversations through our phones' microphones, and its recognition algorithms use them to show us ads related to our recent discussions.

Several testimonies support this idea, always with the same structure: a Facebook user sees an ad that seems related to a conversation they just had. What strikes them is that at no point did they behave online in a way that could have informed advertisers (visiting sites on the subject, searching for it, liking posts, etc.).

Facebook has formally denied using phone microphones to record conversations. Most specialists also consider that doing so at such a scale is not yet technologically feasible, nor profitable. Google, for its part, admits to recording its users' surroundings, but without using that data for advertising (you can listen to the recordings made through your Google account at this link; hearing random moments of your daily life is quite striking).

What strikes me in this debate is, first of all, how easily the problem could be solved: uninstall the Facebook app from your phone (and access Facebook through your browser).

Second, while it is not completely impossible that Facebook records our conversations, it is amusing to note that millions of people pay to install in their homes a device that does exactly that: Amazon Alexa, Google Home and Apple HomePod record our conversations and transmit them to Amazon, Google and Apple. Phew, since it's not Facebook, that's fine. And your location? The people you are with somewhere? The people whose photos you look at? The people you correspond with? Well, that's fine, Facebook is allowed to know all that. In any case, Facebook will know everything even if you are careful. The developer Laura Kalbag was shown ads for a funeral service when her mother died, despite a complete Facebook block. The leak? One of her sisters had reportedly told a friend via Messenger.

Yet the rumor about phone microphones persists. The most likely hypothesis is that of simple coincidence. We are struck by the similarity between an ad and a conversation we had earlier but, when there is no match, we do not consciously notice the ad and we do not register the event.

But there is another hypothesis. Even more frightening. Frightening in its simplicity and perversity. The very simple hypothesis that if we have a conversation on a particular subject, it is because one or several of us saw an ad on that subject. Sometimes without remembering it. Often without realizing it.

As a result, the ad would seem far more conspicuous afterwards. We would refuse to admit the hypothesis that our free will could be manipulated to that extent. And we would imagine that Facebook controls our phones, controls our microphones.

Rather than admitting that it already controls our minds. That it controls our conversation topics, because that is its business model. Even if we are not on Facebook: it is enough that our friends are.

Facebook does not need to listen to our conversations. It already decides what we talk about, what we think, and who we will vote for.

But it is easier to be outraged at the idea that Facebook might control a mere microphone than to question what founds our beliefs and our identity…

Photo by Nathaniel dahan on Unsplash


Last week, nearly 1,000 Drupalists gathered in Darmstadt, Germany for Drupal Europe. In good tradition, I presented my State of Drupal keynote. You can watch a recording of my keynote (starting at 4:38) or download a copy of my slides (37 MB).

Drupal 8 continues to mature

I started my keynote by highlighting this month's Drupal 8.6.0 release. Drupal 8.6 marks the sixth consecutive Drupal 8 release that has been delivered on time. Compared to one year ago, we have 46 percent more stable Drupal 8 modules. We also have 10 percent more contributors working on Drupal 8 core than last year. All of these milestones indicate that Drupal 8 is healthy and growing.

The number of stable modules for Drupal 8 is growing fast

Next, I gave an update on our strategic initiatives:

An overview of Drupal 8's strategic initiatives

Make Drupal better for content creators

A photo from Drupal Europe in Darmstadt. © Paul Johnson

The expectations of content creators are changing. For Drupal to be successful, we have to continue to deliver on their needs by providing more powerful content management tools, in addition to delivering simplicity through drag-and-drop functionality, WYSIWYG, and more.

With the release of Drupal 8.6, we have added new functionality for content creators by making improvements to the Media, Workflow, Layout and Out-of-the-Box initiatives. I showed a demo video to demonstrate how all of these new features make content authoring not only easier, but also more powerful:

We also need to improve the content authoring experience through a modern administration user interface. We have been working on a new administration UI using React. I showed a video of our latest prototype:

Extended security coverage for Drupal 8 minor releases

I announced an update to Drupal 8's security policy. Until now, site owners had one month after a new minor Drupal 8 release to upgrade their sites before losing their security updates. Going forward, Drupal 8 site owners have 6 months to upgrade between minor releases. This extra time should give site owners flexibility to plan, prepare and test minor updates. For more information, check out my recent blog post.

Make Drupal better for evaluators

One of the most significant updates since DrupalCon Nashville is Drupal's improved evaluator experience. The time required to get a Drupal site up and running has decreased from more than 15 minutes to less than two minutes and from 20 clicks to 3. This is a big accomplishment. You can read more about it in my recent blog post.

Promote Drupal

After launching Promote Drupal at DrupalCon Nashville, we hit the ground running with this initiative and successfully published a community press release for the release of Drupal 8.6, which was also translated into multiple languages. Much more is underway, including building a brand book, a marketing collaboration space on, and a Drupal pitch deck.

The Drupal 9 roadmap and a plan to end-of-life Drupal 7 and Drupal 8

To keep Drupal modern, maintainable, and performant, we need to stay on secure, supported versions of Drupal 8's third-party dependencies. This means we need to end-of-life Drupal 8 with Symfony 3's end-of-life. As a result, I announced that:

  1. Drupal 8 will be end-of-life by November 2021.
  2. Drupal 9 will be released in 2020, and it will be an easy upgrade.

Historically, our policy has been to only support two major versions of Drupal; Drupal 7 would ordinarily reach end of life when Drupal 9 is released. Because a large number of sites might still be using Drupal 7 by 2020, we have decided to extend support of Drupal 7 until November 2021.

For those interested, I published a blog post that further explains this.

Adopt GitLab on

Finally, the Drupal Association is working to integrate GitLab with GitLab will provide support for "merge requests", which means contributing to Drupal will feel more familiar to the broader audience of open source contributors who learned their skills in the post-patch era. Some of GitLab's tools, such as inline editing and web-based code review, will also lower the barrier to contribution, and should help us grow both the number of contributions and contributors on

To see an exciting preview of's GitLab integration, watch the video below:

Thank you

Our community has a lot to be proud of, and this progress is the result of thousands of people collaborating and working together. It's pretty amazing! The power of our community isn't just visible in minor releases or a number of stable modules. It was also felt at this very conference, as many volunteers gave their weekends and evenings to help organize Drupal Europe in the absence of a DrupalCon Europe organized by the Drupal Association. From code to community, the Drupal project is making an incredible impact. I look forward to celebrating our community's work and friendships at future Drupal conferences.

A summary of our recent progress

Someone pointed me towards this email, in which Linus apologizes for some of his more unhealthy behaviour.

The above is basically a long-winded way to get to the somewhat painful personal admission that hey, I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely.

To me, this came somewhat as a surprise. I'm not really involved in Linux kernel development, and so the history of what led up to this email mostly passed unnoticed, at least for me; but that doesn't mean I cannot recognize how difficult this must have been to write for him.

As I know from experience, admitting that you have made a mistake is hard. Admitting that you have been making the same mistake over and over again is even harder. Doing so publicly? Even more so, since you're placing yourself in a vulnerable position, one that the less honorably inclined will take advantage of if you're not careful.

There isn't much I can contribute to the whole process, but there is this: Thanks, Linus, for being willing to work on those things, which can only make the community healthier as a result. It takes courage to admit things like that, and that is only to be admired. Hopefully this will have the result we're hoping for, too; but that, only time can tell.

September 14, 2018

Now that Debian has migrated away from alioth and towards a gitlab instance known as salsa, we get a pretty advanced Continuous Integration system for (almost) free. Having that, it might make sense to use that setup to autobuild and -test a package when committing something. I had a look at doing so for one of my packages, ola; the reason I chose that package is because it comes with an autopkgtest, so that makes testing it slightly easier (even if the autopkgtest is far from complete).

Gitlab CI is configured through a .gitlab-ci.yml file, which supports many options and may therefore be a bit complicated for first-time users. Since I've worked with it before, I understand how it works, so I thought it might be useful to show people how you can do things.

First, let's look at the .gitlab-ci.yml file which I wrote for the ola package:

stages:
  - build
  - autopkgtest

.build: &build
  before_script:
  - apt-get update
  - apt-get -y install devscripts adduser fakeroot sudo
  - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
  - adduser --disabled-password --gecos "" builduser
  - chown -R builduser:builduser .
  - chown builduser:builduser ..
  stage: build
  artifacts:
    paths:
    - built
  script:
  - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  after_script:
  - mkdir built
  - dcmd mv ../*ges built/

.test: &test
  before_script:
  - apt-get update
  - apt-get -y install autopkgtest
  stage: autopkgtest
  script:
  - autopkgtest built/*ges -- null

build:testing:
  <<: *build
  image: debian:testing

build:unstable:
  <<: *build
  image: debian:sid

test:testing:
  <<: *test
  dependencies:
  - build:testing
  image: debian:testing

test:unstable:
  <<: *test
  dependencies:
  - build:unstable
  image: debian:sid

That's a bit much. How does it work?

Let's look at every individual toplevel key in the .gitlab-ci.yml file:

stages:
  - build
  - autopkgtest

Gitlab CI has a "stages" feature. A stage can have multiple jobs, which will run in parallel, and gitlab CI won't proceed to the next stage unless and until all the jobs in the last stage have finished. Jobs from one stage can use files from a previous stage by way of the "artifacts" or "cache" features (which we'll get to later). However, in order to be able to use the stages feature, you have to create stages first. That's what we do here.

.build: &build
  before_script:
  - apt-get update
  - apt-get -y install devscripts adduser fakeroot sudo
  - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
  - adduser --disabled-password --gecos "" builduser
  - chown -R builduser:builduser .
  - chown builduser:builduser ..
  stage: build
  artifacts:
    paths:
    - built
  script:
  - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  after_script:
  - mkdir built
  - dcmd mv ../*ges built/

This tells gitlab CI what to do when building the ola package. The main bit is the script: key in this template: it essentially tells gitlab CI to run dpkg-buildpackage. However, before we can do so, we need to install all the build-dependencies and a few helper things, as well as create a non-root user (since ola refuses to be built as root). This we do in the before_script: key. Finally, once the packages have been built, we create a built directory, and use devscripts' dcmd to move the output of the dpkg-buildpackage command into the built directory.

Note that the name of this key starts with a dot. This signals to gitlab CI that it is a "hidden" job, which it should not start by default. Additionally, we create an anchor (the &build at the end of that line) that we can refer to later. This makes it a job template, not a job itself, that we can reuse if we want to.

The reason we split up the script to be run into three different scripts (before_script, script, and after_script) is simply so that gitlab can understand the difference between "something is wrong with this commit" and "we failed to even configure the build system". It's not strictly necessary, but I find it helpful.

Since we configured the built directory as the artifacts path, gitlab will do two things:

  • First, it will create a .zip file in gitlab, which allows you to download the packages from the gitlab webinterface (and inspect them if needs be). The length of time for which the artifacts are stored can be configured by way of the artifacts:expire_in key; if not set, it defaults to 30 days, or to whatever the salsa maintainers have configured (I'm not sure which).
  • Second, it will make the artifacts available in the same location on jobs in the next stage.

The first can be avoided by using the cache feature rather than the artifacts one, if preferred.
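As an illustration of the artifacts:expire_in key mentioned above, the build template could be extended like this (a sketch; the one-week value is an arbitrary choice of mine):

```yaml
.build: &build
  # ...before_script, stage and script as in the listing above...
  artifacts:
    paths:
    - built
    expire_in: 1 week
```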

.test: &test
  before_script:
  - apt-get update
  - apt-get -y install autopkgtest
  stage: autopkgtest
  script:
  - autopkgtest built/*ges -- null

This is very similar to the build template that we had before, except that it sets up and runs autopkgtest rather than dpkg-buildpackage, and that it does so in the autopkgtest stage rather than the build one, but there's nothing new here.

build:testing:
  <<: *build
  image: debian:testing

build:unstable:
  <<: *build
  image: debian:sid

These two use the build template that we defined before. This is done by way of the <<: *build line, which is YAML-ese to say "inject the other template here". In addition, we add extra configuration -- in this case, we simply state that we want to build inside the debian:testing docker image in the build:testing job, and inside the debian:sid docker image in the build:unstable job.

test:testing:
  <<: *test
  dependencies:
  - build:testing
  image: debian:testing

test:unstable:
  <<: *test
  dependencies:
  - build:unstable
  image: debian:sid

This is almost the same as the build:testing and the build:unstable jobs, except that:

  • We instantiate the test template, not the build one;
  • We say that the test:testing job depends on the build:testing one. This does not cause the job to start before the end of the previous stage (that is not possible); instead, it tells gitlab that the artifacts created in the build:testing job should be copied into the test:testing working directory. Without this line, all artifacts from all jobs from the previous stage would be copied, which in this case would create file conflicts (since the files from the build:testing job have the same name as the ones from the build:unstable one).

It is also possible to run autopkgtest in the same image in which the build was done. However, the downside of doing that is that if one of your built packages lacks a dependency that is an indirect dependency of one of your build dependencies, you won't notice; by blowing away the docker container in which the package was built and running autopkgtest in a pristine container, we avoid this issue.

With that, you have a complete working example of how to do continuous integration for Debian packaging. To see it work in practice, you might want to look at the ola repository.

UPDATE (2018-09-16): dropped the autoreconf call; it isn't needed (it was there because things didn't work on the first go, and I thought that might have been related, but that turned out to be a red herring, and I forgot to drop it).

Seven months ago, Matthew Grasmick published an article describing how hard it is to install Drupal. His article included the following measurements for creating a new application on his local machine, across four different PHP frameworks:

Platform Clicks Time
Drupal 20+ 15:00+
Symfony 3 1:55
WordPress 7 7:51
Laravel 3 17:28

The results from Matthew's blog were clear: Drupal is too hard to install. It required more than 15 minutes and 20 clicks to create a simple site.

Seeing these results prompted me to launch a number of initiatives to improve the evaluator experience at DrupalCon Nashville. Here is the slide from my DrupalCon Nashville presentation:

Improve technical evaluator experience

A lot has happened between then and now:

  • We improved the download page to improve the discovery experience on
  • We added an Evaluator Guide to
  • We added a quick-start command to Drupal 8.6
  • We added the Umami demo profile to Drupal 8.6
  • We started working on a more modern administration experience (in progress)

You can see the result of that work in this video:

Thanks to this progress, here is the updated table:

Platform Clicks Time
Drupal 3 1:27
Symfony 3 1:55
WordPress 7 7:51
Laravel 3 17:28

Drupal now requires the least time and is tied for the fewest clicks! You can now install Drupal in less than two minutes. Moreover, the Drupal site that gets created isn't an "empty canvas" anymore; it's a beautifully designed and fully functional application with demo content.

Copy-paste the following commands in a terminal window if you want to try it yourself:

mkdir drupal && cd drupal && curl -sSL | tar -xz --strip-components=1
php core/scripts/drupal quick-start demo_umami

For more detailed information on how we achieved these improvements, read Matthew's latest blog post: The New Drupal Evaluator Experience, by the numbers.

A big thank you to Matthew Grasmick (Acquia) for spearheading this initiative!

September 13, 2018

I published the following diary on “Malware Delivered Through MHT Files“:

What are MHT files? Microsoft is a wonderful source of multiple file formats. MHT files are web page archives. Usually, a web page is based on a piece of HTML code with links to external resources, images and other media. MHT files contain all the data related to a web page in a single place and are therefore very useful to archive them. Also called MHTML (MIME Encapsulation of Aggregate HTML Documents), they are encoded like email messages using MIME parts… [Read more]
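To make the diary's description concrete, here is the skeleton of a minimal MHT file; it really is structured like a MIME e-mail (the boundary string, URLs and content below are invented for illustration):

```
MIME-Version: 1.0
Content-Type: multipart/related; boundary="----=_NextPart_000_0000"

------=_NextPart_000_0000
Content-Type: text/html; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
Content-Location: http://example.com/index.html

<html><body><img src=3D"logo.png"></body></html>

------=_NextPart_000_0000
Content-Type: image/png
Content-Transfer-Encoding: base64
Content-Location: http://example.com/logo.png

iVBORw0KGgoAAAANSUhEUg...

------=_NextPart_000_0000--
```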

[The post [SANS ISC] Malware Delivered Through MHT Files has been first published on /dev/random]

Since the launch of Drupal 8.0, we have successfully launched a new minor release on schedule every six months. I'm very proud of the community for this achievement. Prior to Drupal 8, most significant new features were only added in major releases like Drupal 6 or Drupal 7. Thanks to our new release cadence we now consistently and predictably ship great new features twice a year in minor releases (e.g. Drupal 8.6 comes with many new features).

However, only the most recent minor release has been actively supported for both bug fixes and security coverage. With the release of each new minor version, we gave a one-month window to upgrade to the new minor. In order to give site owners time to upgrade, we would not disclose security issues with the previous minor release during that one-month window.

Old Drupal 8 security policy for minor releases
Illustration of the security policy since the launch of Drupal 8.0 for minor releases, demonstrating that previous minor releases receive one month of security coverage. Source: issue #2909665: Extend security support to cover the previous minor version of Drupal and the Drupal Europe DriesNote.

Over the past three years, we have learned that users find it challenging to update to the latest minor in one month. Drupal's minor updates can include dependency updates, internal API changes, or features being transitioned from contributed modules to core. It takes time for site owners to prepare and test these types of changes, and a window of one month to upgrade isn't always enough.

At DrupalCon Nashville we declared that we wanted to extend security coverage for minor releases. Throughout 2018, Drupal 8 release managers quietly conducted a trial. You may have noticed that we had several security releases against previous minor releases this year. This trial helped us understand the impact to the release process and learn what additional work remained ahead. You can read about the results of the trial at #2909665: Extend security support to cover the previous minor version of Drupal.

I'm pleased to share that the trial was a success! As a result, we have extended the security coverage of minor releases to six months. Instead of one month, site owners now have six months to upgrade between minor releases. It gives teams time to plan, prepare and test updates. Releases will have six months of normal bug fix support followed by six months of security coverage, for a total lifetime of one year. This is a huge win for Drupal site owners.

New Drupal 8 security policy for minor releases
Illustration of the new security policy for minor releases, demonstrating that the security coverage for minor releases is extended to six months. Source: issue #2909665: Extend security support to cover the previous minor version of Drupal and the Drupal Europe DriesNote.

It's important to note that this new policy only applies to Drupal 8 core starting with Drupal 8.5, and only applies to security issues. Non-security bug fixes will still only be committed to the actively supported release.

While the new policy will provide extended security coverage for Drupal 8.5.x, site owners will need to update to an upcoming release of Drupal 8.5 to be correctly notified about their security coverage.

Next steps

We still have some user experience issues we'd like to address around how site owners are alerted of a security update. We have not yet handled all of the potential edge cases, and we want to be very clear about the potential actions to take when updating.

We also know module developers may need to declare that a release of their project only works against specific versions of Drupal core. Resolving outstanding issues around semantic versioning support for contrib and module version dependency definitions will help developers of contributed projects better support this policy. If you'd like to get involved in the remaining work, the policy and roadmap issue on is a great place to find related issues and see what work is remaining.

Special thanks to Jess and Jeff Beeman for co-authoring this post.

The post HHVM is Ending PHP Support appeared first on

It looks like HHVM is becoming an independent language, free from the shackles of PHP.

HHVM v3.30 will be the last release series where HHVM aims to support PHP.

The key dates are:
  • 2018-12-03: branch cut: expect PHP code to stop working with master and nightly builds after this date
  • 2018-12-17: expected release date for v3.30.0
  • 2019-01-28: expected release date for v4.0.0, without PHP support
  • 2019-11-19: expected end of support for v3.30

Ultimately, we recommend that projects either migrate entirely to the Hack language, or entirely to PHP7 and the PHP runtime.

Source: Ending PHP Support, and The Future Of Hack | HHVM


September 12, 2018

We just released Drupal 8.6.0. With six minor releases behind us, it is time to talk about the long-term future of Drupal 8 (and therefore Drupal 7 and Drupal 9). I've written about when to release Drupal 9 in the past, but this time, I'm ready to provide further details.

The plan outlined in this blog has been discussed with the Drupal 7 Core Committers, the Drupal 8 Core Committers and the Drupal Security Team. While we feel good about this plan, we can't plan for every eventuality and we may continue to make adjustments.

Drupal 8 will be end-of-life by November 2021

Drupal 8's innovation model depends on introducing new functionality in minor versions while maintaining backwards compatibility. This approach is working so well that some people have suggested we institute minor releases forever, and never release Drupal 9 at all.

However that approach is not feasible. We need to periodically remove deprecated functionality to keep Drupal modern, maintainable, and performant, and we need to stay on secure, supported versions of Drupal 8's third-party dependencies. As Nathaniel Catchpole explained in his post "The Long Road to Drupal 9", our use of various third party libraries such as Symfony, Twig, and Guzzle means that we need to be in sync with their release timelines.

Our biggest dependency in Drupal 8 is Symfony 3, and according to Symfony's roadmap, Symfony 3 has an end-of-life date in November 2021. This means that after November 2021, security bugs in Symfony 3 will not get fixed. To keep your Drupal sites secure, Drupal must adopt Symfony 4 or Symfony 5 before Symfony 3 goes end-of-life. A major Symfony upgrade will require us to release Drupal 9 (we don't want to fork Symfony 3 and have to backport Symfony 4 or Symfony 5 bug fixes). This means we have to end-of-life Drupal 8 no later than November 2021.

Drupal 8 will be end-of-life by November 2021

Drupal 9 will be released in 2020, and it will be an easy upgrade

If Drupal 8 is end-of-life in November 2021, we have to release Drupal 9 before that. Working backwards from November 2021, we'd like to give site owners one year to upgrade from Drupal 8 to Drupal 9.

If November 2020 is the latest we could release Drupal 9, what is the earliest we could release Drupal 9?

We certainly can't release Drupal 9 next week or even next month. Preparing for Drupal 9 takes a lot of work: we need to adopt Symfony 4 and/or Symfony 5, we need to remove deprecated code, we need to allow modules and themes to declare compatibility with more than one major version, and possibly more. The Drupal 8 Core Committers believe we need more than one year to prepare for Drupal 9.

Therefore, our current plan is to release Drupal 9 in 2020. Because we still need to figure out important details, we can't be more specific at this time.

Drupal 9 will be released in 2020

If we release Drupal 9 in 2020, it means we'll certainly have Drupal 8.7 and 8.8 releases.

Wait, I will only have one year to migrate from Drupal 8 to 9?

Yes, but fortunately moving from Drupal 8 to 9 will be far easier than previous major version upgrades. The first release of Drupal 9 will be very similar to the last minor release of Drupal 8, as the primary goal of the Drupal 9.0.0 release will be to remove deprecated code and update third-party dependencies. By keeping your Drupal 8 sites up to date, you should be well prepared for Drupal 9.

And what about contributed modules? The compatibility of contributed modules is historically one of the biggest blockers to upgrading, so we will also make it possible for contributed modules to be compatible with Drupal 8 and Drupal 9 at the same time. As long as contributed modules do not use deprecated APIs, they should work with Drupal 9 while still being compatible with Drupal 8.
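As a concrete (hypothetical, not from this post) example of such a deprecation: drupal_set_message() was deprecated in Drupal 8.5 in favor of the messenger service, so a module fragment that already uses the replacement should work on both major versions. This is a Drupal API fragment, assuming a Drupal runtime, not a standalone script:

```php
<?php

// Deprecated since Drupal 8.5.0 and scheduled for removal in Drupal 9:
// drupal_set_message(t('Settings saved.'));

// The Drupal 8.5+ replacement, which also works in Drupal 9:
\Drupal::messenger()->addStatus(t('Settings saved.'));
```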

Drupal 7 will be supported until November 2021

Historically, our policy has been to only support two major versions of Drupal; Drupal 7 would ordinarily reach end of life when Drupal 9 is released. Because a large number of sites might still be using Drupal 7 by 2020, we have decided to extend support of Drupal 7 until November 2021. Drupal 7 will receive community support for three more years.

Drupal 7 will be supported until November 2021

We'll launch a Drupal 7 commercial Long Term Support program

In the past, commercial vendors have extended Drupal's security support. In 2015, a Drupal 6 commercial Long Term Support program was launched and continues to run to this day. We plan a similar paid program for Drupal 7 to extend support beyond November 2021. The Drupal Security Team will announce the Drupal 7 commercial LTS program information by mid-2019. Just like with the Drupal 6 LTS program, there will be an application for vendors.

We'll update Drupal 7 to support newer versions of PHP

The PHP team will stop supporting PHP 5.x on December 31st, 2018 (in 3 months), PHP 7.0 on December 3rd, 2018 (in 2 months), PHP 7.1 on December 1st, 2019 (in 1 year and 3 months) and PHP 7.2 on November 30th, 2020 (in 2 years and 2 months).

Drupal will drop official support for unsupported PHP versions along the way and Drupal 7 site owners may have to upgrade their PHP version. The details will be provided later.

We plan on updating Drupal 7 to support newer versions of PHP in line with their support schedule. Drupal 7 doesn't fully support PHP 7.2 yet as there have been some backwards-incompatible changes since PHP 7.1. We will release a version of Drupal 7 that supports PHP 7.2. Contributed modules and custom modules will have to be updated too, if not already.


If you are still using Drupal 7 and are wondering what to do, you currently have two options:

  1. Stay on Drupal 7 while also updating your PHP version. If you stay on Drupal 7 until after 2021, you can either engage a vendor for a long term support contract, or migrate to Drupal 9.
  2. Migrate to Drupal 8 by 2020, so that it's easier to update to Drupal 9 when it is released.

The announcements in this blog post make option (1) a lot more viable and hopefully help you better evaluate option (2).

If you are on Drupal 8, you just have to keep your Drupal 8 site up-to-date and you'll be ready for Drupal 9.

We plan to have more specifics by April 2019 (DrupalCon Seattle).

Thanks to the Drupal 7 Core Committers, the Drupal 8 Core Committers and the Drupal Security Team for their contributions to this blog post.

September 11, 2018

Wow, 10 years already! In a few weeks, it will be the 10th edition of BruCON, or the “0x0A edition“. If you know me or follow me, you probably know that I’ve been part of this wonderful experience since the first edition. I’m also sponsoring the conference through my company with a modest contribution: I host the websites and other technical kinds of stuff. Getting close to the event, we are still receiving many requests from people who are looking for tickets, but we have been sold out for a while. That’s good news (for us) but it may be frustrating (if you’re still looking for one).

As a sponsor, I still have a free ticket to give away… but it must be earned! I wrote a quick challenge for you. Neither too complicated nor too easy, because everybody must have a chance to solve it. If you have some skills in document analysis, it should not be too difficult.

Here are the rules of the game:

  • The ticket will be assigned to the first one who submits the flag by email ONLY (I’m easy to catch!)
  • There is no malicious activity required to solve the challenge, nothing relevant on this blog or any BruCON website.
  • Do not just submit the flag; explain how you found it (a quick wrap-up of a few lines is ok)
  • You’re free to play and solve the challenge, but only claim the ticket if you are certain to attend the conference! (Be fair-play)
  • All costs besides the free ticket are on you! (hotel, travel, …)

Ready? So, enjoy this file


[The post Wanna Come to BruCON? Solve This Challenge! has been first published on /dev/random]

The post Indie Hackers interview with Oh Dear! appeared first on

I recently did an interview with Indie Hackers about our newly launched Oh Dear! monitoring service.

Some fun questions that I hope give more insight into how the idea came to be, where we are now and what our plans are for the future.

Hi! My name is Mattias Geniar and together with my partner Freek Van der Herten we founded Oh Dear!, a new tool focused on monitoring websites.

We both got fed up with existing tools that didn't fit our needs precisely. Most came close, but there was always something — whether settings or design choices — that we just didn't like. Being engineers, the obvious next step was to just build it ourselves!

Oh Dear! launched in private beta earlier this year and has been running in production for a few months now.

Source: The Market Lacked the Tool We Wanted. So We Built it Ourselves. -- Indie Hackers

Some more details are available in the forum discussion about our launch.


The post Elasticsearch fails to start: access denied “/etc/elasticsearch/${instance}/scripts” appeared first on

Ran into a not-so-obvious issue with Elasticsearch today. After an update from 6.2.x to 6.4.x, an Elasticsearch instance failed to start with the following error in its logs.

$ tail -f instance.log
Caused by: java.security.AccessControlException:
  access denied ("java.io.FilePermission" "/etc/elasticsearch/production/scripts" "read")
   at java.lang.SecurityManager.checkPermission(

The logs indicated it couldn't read from /etc/elasticsearch/production/scripts, but all permissions were OK. Apparently, Elasticsearch doesn't follow symlinks for the scripts directory, even if it has all the required read permissions.

In this case, that scripts directory was indeed a symlink to a shared resource.

$ ls -alh /etc/elasticsearch/production/scripts 
/etc/elasticsearch/production/scripts -> /usr/share/elasticsearch/scripts

The fix was simple, as the shared directory didn't contain any scripts to begin with.

$ rm -f /etc/elasticsearch/production/scripts
$ mkdir /etc/elasticsearch/production/scripts
$ chown elasticsearch:elasticsearch /etc/elasticsearch/production/scripts

Turn that symlink into a normal directory & restart your instance.

If you run multiple instances, you'll have to repeat these steps for every instance that fails to start.
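The steps above can be wrapped in a small helper; a minimal sketch (the function name and the throwaway demo layout are mine, and on a real system you would also chown the recreated directory to elasticsearch:elasticsearch):

```shell
# fix_scripts_dir: replace a symlinked scripts directory with a real,
# empty directory, leaving normal directories untouched.
fix_scripts_dir() {
  dir="$1"
  if [ -L "$dir" ]; then   # only act on actual symlinks
    rm -f "$dir"
    mkdir "$dir"
  fi
}

# Demo on a throwaway layout; the real path would be
# /etc/elasticsearch/<instance>/scripts for each instance.
tmp=$(mktemp -d)
mkdir "$tmp/shared" "$tmp/production"
ln -s "$tmp/shared" "$tmp/production/scripts"
fix_scripts_dir "$tmp/production/scripts"
ls -ld "$tmp/production/scripts"   # now a real directory, not a symlink
```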


For the past two years, I've examined Drupal.org's commit data to understand who develops Drupal, how much of that work is sponsored, and where that sponsorship comes from.

I have now reported on this data for three years in a row, which means I can start to better compare year-over-year data. Understanding how an open-source project works is important because it establishes a benchmark for project health and scalability.

I would also recommend taking a look at the 2016 report or the 2017 report. Each report looks at data collected in the 12-month period between July 1st and June 30th.

This year's report affirms that Drupal has a large and diverse community of contributors. In the 12-month period between July 1, 2017 and June 30, 2018, 7,287 different individuals and 1,002 different organizations contributed code to Drupal.org. This includes contributions to Drupal core and all contributed projects on Drupal.org.

In comparison to last year's report, both the number of contributors and contributions has increased. Our community of contributors (including both individuals and organizations) is also becoming more diverse. This is an important area of growth, but there is still work to do.

For this report, we looked at all of the issues marked "closed" or "fixed" in our ticketing system in the 12-month period from July 1, 2017 to June 30, 2018. This includes Drupal core and all of the contributed projects on Drupal.org, across all major versions of Drupal. This year, 24,447 issues were marked "closed" or "fixed", a 5% increase from the 23,238 issues in the 2016-2017 period. This averages out to 67 feature improvements or bug fixes a day.

In total, we captured 49,793 issue credits across all 24,447 issues. This marks a 17% increase from the 42,449 issue credits recorded in the previous year. Of the 49,793 issue credits reported this year, 18% (8,822 credits) were for Drupal core, while 82% (40,971 credits) went to contributed projects.

"Closed" or "fixed" issues are often the result of multiple people working on the issue. We try to capture who contributes through's unique credit system. We used the data from the credit system for this analysis. There are a few limitations with this approach, which we'll address at the end of this report.

What is the credit system?

In the spring of 2015, after proposing ideas for giving credit and discussing various approaches at length, Drupal.org added the ability for people to attribute their work to an organization or customer in the issue queues. Maintainers of Drupal modules, themes, and distributions can award issue credits to people who help resolve issues with code, translations, documentation, design and more.

A screenshot of an issue comment on Drupal.org. You can see that jamadar worked on this patch as a volunteer, but also as part of his day job working for TATA Consultancy Services on behalf of their customer, Pfizer.

Credits are a powerful motivator for both individuals and organizations. Accumulating credits provides individuals with a way to showcase their expertise. Organizations can utilize credits to help recruit developers, to increase their visibility within the marketplace, or to showcase their Drupal expertise.

Who is working on Drupal?

In the 12-month period between July 1, 2017 and June 30, 2018, Drupal.org received code contributions from 7,287 different individuals and 1,002 different organizations.

Contributions by individuals vs organizations

While the number of individual contributors rose, a relatively small number of individuals still do the majority of the work. Approximately 48% of individual contributors received just one credit. Meanwhile, the top 30 contributors (the top 0.4%) account for more than 24% of the total credits. These individuals put an incredible amount of time and effort in developing Drupal and its contributed projects:


Out of the top 30 contributors featured, 15 were also recognized as top contributors in our 2017 report. These Drupalists' dedication and continued contribution to the project has been crucial to Drupal's development. It's also exciting to see 15 new names on the list. This mobility is a testament to the community's evolution and growth. It's also important to recognize that a majority of the 15 repeat top contributors are at least partially sponsored by an organization. We value the organizations that sponsor these remarkable individuals, because without their support, it could be more challenging to be in the top 30 year over year.

How diverse is Drupal?

Next, we looked at both the gender and geographic diversity of code contributors. While these are only two examples of diversity, this is the only available data that contributors can currently choose to share on their profiles. The reported data shows that only 7% of the recorded contributions were made by contributors who do not identify as male, which continues to indicate a steep gender gap. This is a one percent increase compared to last year. The gender imbalance in Drupal is profound and underscores the need to continue fostering diversity and inclusion in our community.

To address this gender gap, in addition to advancing representation across various demographics, the Drupal community is supporting two important initiatives. The first is to adopt more inclusive user demographic forms on Drupal.org. Adopting Open Demographics on Drupal.org will also allow us to improve reporting on diversity and inclusion, which in turn will help us better support initiatives that advance diversity and inclusion. The second initiative is supporting the Drupal Diversity and Inclusion Contribution Team, which works to better include underrepresented groups to increase code and community contributions. The DDI Contribution Team recruits team members from diverse backgrounds and underrepresented groups, and provides support and mentorship to help them contribute to Drupal.

It's important to reiterate that supporting diversity and inclusion within Drupal is essential to the health and success of the project. The people who work on Drupal should reflect the diversity of people who use and work with the software. While there is still a lot of work to do, I'm excited about the impact these various initiatives will have on future reports.

Contributions by gender

When measuring geographic diversity, we saw individual contributors from 6 different continents and 123 different countries:

Contributions by continent
Contributions by countryThe top 20 countries from which contributions originate. The data is compiled by aggregating the countries of all individual contributors behind each commit. Note that the geographical location of contributors doesn't always correspond with the origin of their sponsorship. Wim Leers, for example, works from Belgium, but his funding comes from Acquia, which has the majority of its customers in North America.

123 different countries is seven more than in the 2017 report. The new countries include Rwanda, Namibia, Senegal, Sierra Leone, Swaziland and Zambia. Seeing contributions from more African countries is certainly a highlight.

How much of the work is sponsored?

Issue credits can be marked as "volunteer" and "sponsored" simultaneously (shown in jamadar's screenshot near the top of this post). This could be the case when a contributor does the minimum required work to satisfy the customer's need, in addition to using their spare time to add extra functionality.

While Drupal started out as a 100% volunteer-driven project, today the majority of the code on is sponsored by organizations. Only 12% of the commit credits that we examined in 2017-2018 were "purely volunteer" credits (6,007 credits), in stark contrast to the 49% that were "purely sponsored". In other words, there were four times as many "purely sponsored" credits as "purely volunteer" credits.

Contributions by volunteer vs sponsored

A few comparisons between the 2017-2018 and the 2016-2017 data:

  • The credit system is being used more frequently. In total, we captured 49,793 issue credits across all 24,447 issues in the 2017-2018 period. This marks a 17% increase from the 42,449 issue credits recorded in the previous year. Between July 1, 2016 and June 30, 2017, 28% of all credits had no attribution while in the period between July 1, 2017 to June 30, 2018, only 25% of credits lacked attribution. More people have become aware of the credit system, the attribution options, and their benefits.
  • Sponsored credits are growing faster than volunteer credits. Both "purely volunteer" and "purely sponsored" credits grew, but "purely sponsored" credits grew faster. There are two reasons why this could be the case: (1) more contributions are sponsored and (2) organizations are more likely to use the credit system compared to volunteers.

No data is perfect, but it feels safe to conclude that most of the work on Drupal is sponsored. At the same time, the data shows that volunteer contribution remains very important to Drupal. Maybe most importantly, while the number of volunteers and sponsors has grown year over year in absolute terms, sponsored contributions appear to be growing faster than volunteer contributions. This is consistent with how open source projects grow and scale.

Who is sponsoring the work?

Now that we've established a majority of contributions to Drupal are sponsored, we want to study which organizations contribute to Drupal. While 1,002 different organizations contributed to Drupal, approximately 50% of them received four credits or less. The top 30 organizations (roughly the top 3%) account for approximately 48% of the total credits, which implies that the top 30 companies play a crucial role in the health of the Drupal project. The graph below shows the top 30 organizations and the number of credits they received between July 1, 2017 and June 30, 2018:

Top 30 organizations contributing to DrupalThe top 30 contributing organizations based on the number of commit credits.

While not immediately obvious from the graph above, a variety of different types of companies are active in Drupal's ecosystem:

Category Description
Traditional Drupal businesses Small-to-medium-sized professional services companies that primarily make money using Drupal. They typically employ fewer than 100 employees, and because they specialize in Drupal, many of these professional services companies contribute frequently and are a huge part of our community. Examples are Chapter Three and Lullabot (both shown on graph).
Digital marketing agencies Larger full-service agencies that have marketing-led practices using a variety of tools, typically including Drupal, Adobe Experience Manager, Sitecore, WordPress, etc. They tend to be larger, with the larger agencies employing thousands of people. Examples are Wunderman and Mirum.
System integrators Larger companies that specialize in bringing together different technologies into one solution. Example system integrators are Accenture, TATA Consultancy Services, Capgemini and CI&T (shown on graph).
Technology and infrastructure companies Examples are Acquia (shown on graph), Lingotek, BlackMesh, Rackspace, Pantheon and Platform.sh.
End-users Examples are Pfizer (shown on graph) or NBCUniversal.

A few observations:

  • Almost all of the sponsors in the top 30 are traditional Drupal businesses. Companies like MD Systems (12 employees), Valuebound (58 employees), Chapter Three (33 employees), Commerce Guys (13 employees) and PreviousNext (22 employees) are, despite their size, critical to Drupal's success.
  • Compared to these traditional Drupal businesses, Acquia has nearly 800 employees and at least ten full-time Drupal contributors. Acquia works to resolve some of the most complex issues on, many of which are not recognized by the credit system (e.g. release management, communication, sprint organizing, and project coordination). Acquia added several full-time contributors compared to last year, however, I believe that Acquia should contribute even more due to its comparative size.
  • No digital marketing agencies show up in the top 30, though some of them are starting to contribute. It's exciting that an increasing number of digital marketing agencies are delivering beautiful experiences using Drupal. As a community, we need to work to ensure that each of these firms is contributing back to the project with the same commitment that we see from firms like Commerce Guys, CI&T or Acro Media. Compared to last year, we have not made meaningful progress on growing contributions from digital marketing agencies. It would be interesting to see what would happen if more large organizations mandated contributions from their partners. Pfizer, for example, only works with agencies and vendors that contribute back to Drupal, and requires that its agency partners contribute to open source. If more organizations took this stance, it could have a big impact on the number of digital agencies that contribute to Drupal.
  • The only system integrator in the top 30 is CI&T, which ranked 3rd with 959 credits. As far as system integrators are concerned, CI&T is a smaller player with approximately 2,500 employees. However, we do see various system integrators outside of the top 30, including Globant, Capgemini, Sapient and TATA Consultancy Services. Each of these system integrators reported 30 to 85 credits in the past year. The top contributor is TATA with 85 credits.
  • Infrastructure and software companies also play an important role in our community, yet only Acquia appears in the top 30. While Acquia has a professional services division, more than 75% of the contributions come from the product organization. Other infrastructure companies include Pantheon and Platform.sh, which are both venture-backed, platform-as-a-service companies that were born from the Drupal community. Pantheon has 6 credits and Platform.sh has 47 credits. Amazee Labs, a company that is building an infrastructure business, reported 40 credits. Compared to last year, Acquia and Rackspace have slightly more credits, while Pantheon, Platform.sh and Amazee contributed less. Lingotek, a vendor that offers cloud-based translation management software, has 84 credits.
  • We also saw three end-users in the top 30 as corporate sponsors: Pfizer (491 credits, up from 251 credits the year before), Thunder (432 credits), and the German company, bio.logis (319 credits, up from 212 credits the year before). Other notable customers outside of the top 30, include Workday, Wolters Kluwer, Burda Media, YMCA and OpenY, and NBCUniversal. We also saw contributions from many universities, including University of Colorado Boulder, University of Waterloo, Princeton University, University of Adelaide, University of Sydney, University of Edinburgh, McGill University and more.
Contributions by technology companies

We can conclude that technology and infrastructure companies, digital marketing agencies, system integrators and end-users are not making significant code contributions to Drupal.org today. How can we explain this disparity in comparison to the traditional Drupal businesses that contribute the most? We believe the biggest reasons are:

  1. Drupal's strategic importance. A variety of the traditional Drupal agencies almost entirely depend on Drupal to support their businesses. Given both their expertise and dependence on Drupal, they are most likely to look after Drupal's development and well-being. Contrast this with most of the digital marketing agencies and system integrators who work with a diversified portfolio of content management platforms. Their well-being is less dependent on Drupal's success.
  2. The level of experience with Drupal and open source. Drupal aside, many organizations have little or no experience with open source, so it is important that we motivate and teach them to contribute.
  3. Legal reservations. We recognize that some organizations are not legally permitted to contribute, let alone attribute their customers. We hope that will change as open source continues to get adopted.
  4. Tools barriers. Drupal contribution still involves a patch-based workflow on Drupal.org's unique issue queue system. This presents a fairly steep learning curve to most developers, who primarily work with more modern and common tools such as GitHub. We hope to lower some of these barriers through our collaboration with GitLab.
  5. Process barriers. Getting code changes accepted into a Drupal project — especially Drupal core — is hard work. Peer reviews, gates such as automated testing and documentation, required sign-offs from maintainers and committers, knowledge of best practices and other community norms are a few of the challenges a contributor must face to get code accepted into Drupal. Collaborating with thousands of people on a project as large and widely-used as Drupal requires such processes, but new contributors often don't know that these processes exist, or don't understand why they exist.

We should do more to entice contribution

Drupal is used by more than one million websites. Everyone who uses Drupal benefits from work that thousands of other individuals and organizations have contributed. Drupal is great because it is continuously improved by a diverse community of contributors who are enthusiastic to give back.

However, the vast majority of the individuals and organizations behind these Drupal websites never participate in the development of the project. They might use the software as it is or don't feel the need to help drive its development. We have to provide more incentive for these individuals and organizations to contribute back to the project.

Consequently, this data shows that the Drupal community can do more to entice companies to contribute code to Drupal.org. The Drupal community has a long tradition of encouraging organizations to share code rather than keep it behind firewalls. While the spirit of the Drupal project cannot be reduced to any single ideology — not every organization can or will share their code — we would like to see organizations continue to prioritize collaboration over individual ownership.

We understand and respect that some can give more than others and that some might not be able to give back at all. Our goal is not to foster an environment that demands what and how others should give back. Our aim is not to criticize those who do not contribute, but rather to help foster an environment worthy of contribution. This is clearly laid out in Drupal's Values and Principles.

Given the vast amount of Drupal users, we believe continuing to encourage organizations and end-users to contribute is still a big opportunity. From my own conversations, it's clear that organizations still need education, training and help. They ask questions like: "Where can we contribute?", "How can we convince our legal department?", and more.

There are substantial benefits and business drivers for organizations that contribute: (1) it improves their ability to sell and win deals and (2) it improves their ability to hire. Companies that contribute to Drupal tend to promote their contributions in RFPs and sales pitches. Contributing to Drupal also results in being recognized as a great place to work for Drupal experts.

What projects have sponsors?

To understand where the organizations sponsoring Drupal put their money, I've listed the top 20 most sponsored projects:

Rank Project name Issues
1 Drupal core 5919
3 Drupal Commerce 607
4 Varbase: The Ultimate Drupal 8 CMS Starter Kit (Bootstrap Ready) 551
5 Commerce Point of Sale (POS) 324
7 Commerce Migrate 307
10 Open Social 222
11 Search API Solr Search 212
12 Drupal Connector for Janrain Identity Cloud 197
13 Drupal.org security advisory coverage applications 189
15 Open Y 162
17 Web Page Archive 154
18 Drupal core - JavaScript Modernization Initiative 145
20 XML sitemap 120

Who is sponsoring the top 30 contributors?

Rank Username Issues Volunteer Sponsored Not specified Sponsors
1 RenatoG 851 0% 100% 0% CI&T (850), Johnson & Johnson (23)
2 RajabNatshah 745 14% 100% 0% Vardot (653), Webship (90)
3 jrockowitz 700 94% 97% 1% The Big Blue House (680), Memorial Sloan Kettering Cancer Center (7), Rosewood Marketing (2), Kennesaw State University (1)
4 adriancid 529 99% 19% 0% Ville de Montréal (98)
5 bojanz 515 0% 98% 2% Commerce Guys (503), Torchbox (17), Adapt (6), Acro Media (4), Bluespark (1)
6 Berdir 432 0% 92% 8% MD Systems (396), (10), Acquia (2)
7 alexpott 414 13% 84% 10% Chapter Three (123), Thunder (120), Acro Media (103)
8 mglaman 414 5% 96% 1% Commerce Guys (393), Impactiv (17), Circle Web Foundry (16), Rosewood Marketing (14), LivePerson (13), Bluespark (4), Acro Media (4), (3), Thinkbean (2), Matsmart (2)
9 Wim Leers 395 8% 94% 0% Acquia (371)
10 larowlan 360 13% 97% 1% PreviousNext (350), University of Technology, Sydney (24), Charles Darwin University (10), Australian Competition and Consumer Commission (ACCC) (1), Department of Justice & Regulation, Victoria (1)
11 DamienMcKenna 353 1% 95% 5% Mediacurrent (334)
12 dawehner 340 48% 86% 4% Chapter Three (279), Torchbox (10), Drupal Association (5), Tag1 Consulting (3), Acquia (2), TES Global (1)
13 catch 339 1% 97% 3% Third and Grove (320), Tag1 Consulting (8)
14 heddn 327 2% 99% 1% MTech (325)
15 xjm 303 0% 97% 3% Acquia (293)
16 pifagor 284 32% 99% 1% GOLEMS GABB (423), Drupal Ukraine Community (73)
17 quietone 261 48% 55% 5% Acro Media (143)
18 borisson_ 255 93% 55% 3% Dazzle (136), Intracto digital agency (1), Acquia (1), DUG BE vzw (Drupal User Group Belgium) (1)
19 adci_contributor 255 0% 100% 0% ADCI Solutions (255)
20 volkswagenchick 254 1% 100% 0% Hook 42 (253)
21 drunken monkey 231 91% 22% 0% DBC (24), Vizala (20), Sunlime Web Innovations GmbH (4), Wunder Group (1), epiqo (1), Zebralog (1)
22 amateescu 225 3% 95% 3% Pfizer (211), Drupal Association (1), Chapter Three (1)
23 joachim 199 56% 44% 19% Torchbox (88)
24 mkalkbrenner 195 0% 99% 1% bio.logis (193), OSCE: Organization for Security and Co-operation in Europe (119)
25 chr.fritsch 185 0% 99% 1% Thunder (183)
26 gaurav.kapoor 178 0% 81% 19% OpenSense Labs (144), DrupalFit (55)
27 phenaproxima 177 0% 99% 1% Acquia (176)
28 mikeytown2 173 0% 0% 100%
29 joelpittet 170 28% 74% 16% The University of British Columbia (125)
30 timmillwood 169 1% 100% 0% Pfizer (169), Appnovation (163), Millwood Online (6)

We observe that the top 30 contributors are sponsored by 58 organizations. This kind of diversity is aligned with our desire to make sure that Drupal is not controlled by a single organization. These top contributors and organizations are from many different parts of the world, and work with customers large and small. Nonetheless, we will continue to benefit from an increased distribution of contribution.

Limitations of the credit system and the data

While the benefits are evident, it is important to note a few of the limitations in's current credit system:

  • Contributing to issues on Drupal.org is not the only way to contribute. Other activities, such as sponsoring events, promoting Drupal, and providing help and mentorship are also important to the long-term health of the Drupal project. Many of these activities are not currently captured by the credit system. For this post, we chose to only look at code contributions.
  • We acknowledge that parts of Drupal are developed on GitHub and therefore aren't fully credited on Drupal.org. The actual number of contributions and contributors could be significantly higher than what we report. The Drupal Association is working to integrate GitLab with Drupal.org. GitLab will provide support for "merge requests", which means contributing to Drupal will feel more familiar to the broader audience of open source contributors who learned their skills in the post-patch era. Some of GitLab's tools, such as inline editing and web-based code review, will also lower the barrier to contribution, and should help us grow both the number of contributions and contributors on Drupal.org.
  • Even when development is done on Drupal.org, the credit system is not used consistently. As using the credit system is optional, a lot of code committed on Drupal.org has no or incomplete contribution credits.
  • Not all code credits are the same. We currently don't have a way to account for the complexity and quality of contributions; one person might have worked several weeks for just one credit, while another person might receive a credit for ten minutes of work. In the future, we should consider issuing credit data in conjunction with issue priority, patch size, etc. This could help incentivize people to work on larger and more important problems and save coding standards improvements for new contributor sprints. Implementing a scoring system that ranks the complexity of an issue would also allow us to develop more accurate reports of contributed work.

Like Drupal itself, the credit system needs to continue to evolve. Ultimately, the credit system will only be useful when the community uses it, understands its shortcomings, and suggests constructive improvements.


Our data confirms that Drupal is a vibrant community full of contributors who are constantly evolving and improving the software. While we have amazing geographic diversity, we still need greater gender diversity, in addition to better representation across various demographic groups. Our analysis of the credit data concludes that most contributions to Drupal are sponsored. At the same time, the data shows that volunteer contribution remains very important to Drupal.

As a community, we need to understand that a healthy open source ecosystem includes more than the traditional Drupal businesses that contribute the most. We still don't see a lot of contribution from the larger digital marketing agencies, system integrators, technology companies, or end-users of Drupal — we believe that might come as these organizations build out their Drupal practices and Drupal becomes more strategic for them.

To grow and sustain Drupal, we should support those that contribute to Drupal and find ways to get those that are not contributing involved in our community. We invite you to help us continue to strengthen our ecosystem.

September 10, 2018

The post MySQL: Calculate the free space in IBD files appeared first on ma.ttias.be.

If you use MySQL with InnoDB, chances are you've seen growing IBD data files. Those are the files that actually hold your data within MySQL. By default, they only grow -- they don't shrink. So how do you know if you still have free space left in your IBD files?

There's a query you can use to determine that:

SELECT round((data_length+index_length)/1024/1024,2)
FROM information_schema.tables
WHERE table_schema='zabbix'
  AND table_name='history_text';

The above will check a database called zabbix for a table called history_text. The result is the size, in megabytes, that MySQL has "in use" in that file. If that returns 5.000 (five thousand) as a value, you have roughly 5GB of data in there.

In my example, it showed the data size to be 16GB. But the actual IBD file on disk was over 50GB.

$ ls -alh history_text.ibd
-rw-r----- 1 mysql mysql 52G Sep 10 15:26 history_text.ibd

In this example I had 36GB of wasted space on disk (52GB according to the OS, 16GB in use by MySQL). If you run MySQL with innodb_file_per_table=ON, you can shrink the IBD files individually. One way is to run an OPTIMIZE TABLE query on that table.
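That arithmetic can be condensed into a quick sketch (the helper name is mine, not anything from MySQL):

```python
# Reclaimable space is the gap between the .ibd file size reported by the OS
# and the data+index size reported by information_schema.tables.

def reclaimable_gb(ibd_file_bytes: int, in_use_bytes: int) -> float:
    """Return how many GB an OPTIMIZE could give back, at most."""
    wasted = max(ibd_file_bytes - in_use_bytes, 0)
    return round(wasted / 1024**3, 1)

# The numbers from this post: a 52GB file holding 16GB of data.
print(reclaimable_gb(52 * 1024**3, 16 * 1024**3))  # 36.0
```

The same subtraction works for any table: compare the file size on disk with the data_length+index_length reported by information_schema.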

Note: this can be a blocking operation, depending on your MySQL version. Writes and reads to the table can be blocked for the duration of the OPTIMIZE query.

MariaDB [zabbix]> OPTIMIZE TABLE history_text;
Stage: 1 of 1 'altering table'   93.7% of stage done
Stage: 1 of 1 'altering table'    100% of stage done

| Table               | Op       | Msg_type | Msg_text                                                          |
| zabbix.history_text | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| zabbix.history_text | optimize | status   | OK                                                                |
2 rows in set (55 min 37.37 sec)

The result is quite a big saving in file size:

$ ls -alh history_text.ibd
-rw-rw---- 1 mysql mysql 11G Sep 10 16:27 history_text.ibd

The file that was previously 52GB in size, is now just 11GB.


September 09, 2018

Thanks to updates from Vignesh Jayaraman, Anton Hillebrand and Rolf Eike Beer, a new release of cvechecker has been made available.

This new release (v3.9) is a bugfix release.

A little exercise in accounting transparency for the printing of the adventures of Aristide.

Thanks to your support, our Ulule project to back the adventures of Aristide is a resounding success, with more than 200 books ordered while we are only halfway through the campaign!

I am a strong advocate of transparency, whether in politics or in an endeavour that is, let's admit it, commercial. So I wanted to explain in detail why this milestone of 200 copies is a financial relief: it is from 200 copies onwards that we start to break even!

To print 200 copies, the minimum print run, the printer charges €12 per copy. Add to that the shipping costs (between €1 and €2 for packaging, plus €5-6 for shipping within Belgium and nearly €10 for shipping to France), and each copy costs us a minimum of €18. That is why I decided to deliver part of the run by bike: it is a nice challenge, but it also lets us make serious savings to finance shipping to readers in France!

The packaging alone would cost us €2 apiece if we ordered 150 boxes, but drops to €1 apiece if we order 450. That's 450 boxes, or one cubic metre to store at home while waiting for the books!

On top of that, add Ulule's 8% commission and various fixed costs (accountant, one-off administrative fees, etc.). It turns out that with 100 copies, we would have had to pay out of our own pockets. But Vinch and I wanted so badly to see Aristide exist in readers' hands!

Above 200 copies, we move into the printer's next price bracket (500 copies). And there, miraculously, each copy costs us only €7 (the other costs staying the same). All in all, this means that somewhere between 200 and 300 copies, the campaign breaks even and finances a surplus of books. We won't earn money directly, but we may be able to pay ourselves by selling the surplus copies after the campaign (which, after a quick calculation, will never pay for a hundredth of the time spent on this project, but it's better than paying out of our own pockets).
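The cost figures above can be condensed into a small break-even sketch (amounts in euros, taken from this post; the function and its thresholds are my simplification):

```python
# Minimum cost per copy, in EUR, using the figures from this post.
# The tier logic and the function itself are an illustrative simplification.

def min_unit_cost(print_run: int) -> int:
    if print_run >= 1000:      # at 1,000 copies the book costs 4 EUR to make
        printing = 4
    elif print_run >= 500:     # the 500-copy price bracket
        printing = 7
    else:                      # minimum run of 200 copies
        printing = 12
    packaging = 1              # at least 1 EUR per cardboard box
    shipping = 5               # cheapest option, within Belgium
    return printing + packaging + shipping

print(min_unit_cost(200))   # 18 EUR: the minimum quoted in the post
print(min_unit_cost(500))   # 13 EUR: only the printing gets cheaper
```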

From 1,000 copies onwards, the book would cost only €4 to produce. That is where it would get really interesting. But you still have to sell them…

That is perhaps why I love publishing my texts on my blog so much, encouraging you to support me at a price of your choosing on Tipeee, Patreon or Liberapay. Selling is a trade I am not really cut out for.

But Aristide wanted so badly to know paper, to be caressed by little hands covered in dirt and food, to be torn in a burst of enthusiasm, to exist in a moment of togetherness from which screens are banished.

Thank you for making that possible for him! The magic of the Internet is also that it allows two authors to exist without a single contact in the publishing world, and allows everyone to become an author and creator.

PS: now that we are sure to produce 500 copies, don't hesitate to recommend Aristide to your friends. It would hurt me too much to see a few hundred copies rot in boxes or, worse, be pulped! Only 20 days of campaign left!

I am @ploum, a speaker and disconnected electronic writer, paid at a price of your choosing via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE licence.

I’m happy to release two Clean Architecture + Bounded Contexts diagrams into the public domain (CC0 1.0).

I created these diagrams for Wikimedia Deutschland with the help of Jan Dittrich, Charlie Kritschmar and Hanna Petruschat. They represent the architecture of our fundraising codebase. I explain the rules of this architecture in my post Clean Architecture + Bounded Contexts. The new diagrams are based on the ones I published two years ago in my Clean Architecture Diagram post.

Diagram 1: Clean Architecture + DDD, generic version. Click to enlarge. Link: SVG version

Diagram 2: Clean Architecture + DDD, fundraising version. Click to enlarge. Link: SVG version


September 07, 2018

I published the following diary on the SANS Internet Storm Center: “Crypto Mining in a Windows Headless Browser“:

Crypto miners in the browser are not new. Delivery through malicious or compromised pieces of JavaScript code is common these days (see my previous diary about this topic). This time, it’s another way to deliver the crypto miner that we found in a sample reported by one of our readers… [Read more]

[The post [SANS ISC] Crypto Mining in a Windows Headless Browser has been first published on /dev/random]

The post Chrome stops showing “www”-prefix in URI appeared first on ma.ttias.be.

Almost 10 years ago I ranted about removing the WWW prefix from your sites. Today, Chrome is starting to push that a bit more.

Here's what a www site looks like now in Chrome's address bar:

But in reality, it's actually the full www-prefixed domain that has been loaded.

Since it's no longer shown in the most popular browser, why not drop the www-prefix altogether?

As a web developer or web agency, how many times will you have to explain what's going on here?

(Hat tip to @sebdedeyne, who showed this behaviour)


September 06, 2018

I published the following diary on the SANS Internet Storm Center: “Malicious PowerShell Compiling C# Code on the Fly“:

What I like when hunting is to discover how creative attackers are in finding new ways to infect their victims’ computers. I came across a PowerShell sample that looked new and interesting to me. First, let’s deobfuscate it the classic way.

It started with a simple PowerShell command with a big Base64-encoded string… [Read more]

[The post [SANS ISC] Malicious PowerShell Compiling C# Code on the Fly has been first published on /dev/random]

Last night, we shipped Drupal 8.6.0! I firmly believe this is the most significant Drupal 8 release to date. It is significant because we made a lot of progress on all twelve of Drupal 8 core's strategic initiatives. As a result, Drupal 8.6 delivers a large number of improvements for content authors, evaluators, site builders and developers.

What is new for content authors?

For content authors, Drupal 8.6 adds support for "remote media types". This means you can now easily embed YouTube or Vimeo videos in your content.

The Media Library in Drupal 8.6

Content authors want Drupal to be easy to use. We made incredible progress on a variety of features that will help to achieve that: we've delivered an experimental media library, added the Workspaces module as experimental, providing sophisticated content staging capabilities, and made great strides on the upcoming Layout Builder. The Layout Builder is shaping up to be a very powerful tool that solves a lot of authoring challenges, and is something many are looking forward to.

The Workspaces module in Drupal 8.6

Each initiative related to content authoring is making disciplined and steady progress. These features not only address the most requested authoring improvements, but also provide a solid foundation on which we can continue to innovate. This means we can provide better compatibility and upgradability for contributed modules.

The top 10 requested features for content creators, according to the 2016 State of Drupal survey.

What is new for evaluators?

Evaluators want an out-of-the-box experience that allows them to install and test drive Drupal in minutes. With Drupal 8.6, we have finally delivered on this need.

Prior to Drupal 8.6, downloading and installing Drupal was a complex and lengthy process that ended with an underwhelming "blank slate".

Now, you can install Drupal with the new "Umami demo profile". The Umami demo profile showcases some of Drupal's most powerful capabilities by providing a beautiful website filled with content right out of the box. A demo profile will not only help to onboard new users, but it can also be used by Drupal professionals and digital agencies to showcase Drupal to potential customers.

The new Umami demo profile together with the experimental Layout Builder in Drupal 8.6.

In addition to a new installation profile, we added a "quick-start" command that allows you to launch a Drupal site in one command using only one dependency, PHP. If you want to try Drupal, you no longer have to set up a web server, a database, containers, etc.
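As an illustration (the exact invocation may differ slightly between 8.6 point releases), the quick-start command looks like this from the root of a Drupal code base:

```shell
# Requires only PHP: installs Drupal into a local SQLite database,
# starts PHP's built-in web server and opens the site in your browser.
php core/scripts/drupal quick-start demo_umami
```

Here demo_umami is the machine name of the Umami demo profile; omit it to install the standard profile instead.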

Last but not least, the download experience and evaluator documentation on Drupal.org have been vastly improved.

With Drupal 8.6, you can download and install a fully functional Drupal demo application in less than two minutes. That is something to be very excited about.

What is new for developers?

You can now upgrade a single-language Drupal 6 or Drupal 7 site to Drupal 8 using the built-in user interface. While we saw good progress on multilingual migrations, they will remain experimental as we work on the final gaps.

I recently wrote about our progress in making Drupal an API-first platform, including an overview of REST improvements in Drupal 8.6, an update on JSON API, and the reasons why JSON API didn't make it into this release. I'm looking forward to JSON API being added in Drupal 8.7. Other decoupled efforts, including a React-based administration application and GraphQL support are still under heavy development, but making rapid progress.

We also converted almost all of our tests from SimpleTest to PHPUnit; and we've added Nightwatch.js and Prettier for JavaScript developers. While Drupal 8 has extensive back-end test coverage, using PHPUnit and Nightwatch.js provides a more modern platform that will make Drupal more familiar to PHP and JavaScript developers.

Drupal 8 continues to hit its stride

These are just some of the highlights that I'm most excited about. If you'd like to read more about Drupal 8.6.0, check out the official release announcement and important update information in the release notes. Over the next couple of months, I will write more detailed progress reports on initiatives that I didn't touch upon in this blog post.

In my Drupal 8.5.0 announcement, I talked about how Drupal is hitting its stride, consistently delivering improvements and new features:

In future releases, we plan to add a media library, support for remote media types like YouTube videos, support for content staging, a layout builder, JSON API support, GraphQL support, a React-based administration application and a better out-of-the-box experience for evaluators.

As you can see from this blog post, Drupal 8.6 delivered on a number of these plans and made meaningful progress on many others.

In future releases we plan to:

  • Stabilize more of the features targeting content authors
  • Add JSON API, allowing developers to more easily and rapidly create decoupled applications
  • Provide stable multilingual migrations
  • Make big improvements for developers with Composer and configuration management changes
  • Continually improve the evaluator experience
  • Iterate towards an entirely new decoupled administrative experience
  • ... and more

Releases like Drupal 8.6.0 only happen with the help of hundreds of contributors and organizations. Thank you to everyone that contributed to this release. Whether you filed issues, wrote code, tested patches, funded a contributor, tested pre-release versions, or cheered for the team from the sidelines, you made this release happen. Thank you!

September 05, 2018

When I met XyglZ (of the 3rd nebula), our conversations quickly turned to our respective ways of life. And one of the concepts that astonished him most was that of the state. Whatever I tried, the principle of a state levying taxes seemed unintelligible to him.

— It's simple, though, I tried to explain for the umpteenth time. The state is a sum of money pooled together to finance public services. Everyone pays a percentage of what they earn to the state; that's taxation. The more you earn, the higher your percentage. That way, the rich pay more.

Explained like that, nothing seemed more beautiful, simpler or fairer to me. Until the moment XyglZ (of the 3rd nebula) asked me how we calculated what each person earned.

— Well, for those who receive a fixed salary from a single person, the question is trivial. For the others, it's the money earned minus the money spent for professional purposes.

As I said this, I realized how absurd it was to define money earned in terms of how it would be spent.

XyglZ (of the 3rd nebula) was quick to point out that this arbitrarily divided society into two castes: those who received a fixed sum from a single other person, and those whose income was varied. The second caste could greatly reduce its taxes by arranging to spend the money it earned in a "professional" way, that notion being arbitrarily vague.

— But, I tried to justify, both castes can reduce their taxes by spending their money in certain ways the government wishes to encourage. That's a tax deduction.

XyglZ (of the 3rd nebula) remarked that this was unfair. Because those who earned less paid less tax, they had less opportunity to benefit from tax reductions. Tax reductions therefore always benefited the richest in one way or another. Not to mention the complexity all this created.

— Indeed, I argued. That's why it seems fair to me that taxes should be higher for those who earn more money.

XyglZ (of the 3rd nebula) was not convinced. According to him, it was enough to spend money in a professional way to make those taxes pointless. In fact, this system strongly encouraged people to earn as little money as possible, and therefore to spend it professionally as quickly as possible. Economically irrational behaviour suddenly became more advantageous.

— And you haven't heard the best part, I couldn't help adding. Our planet is divided into countries. Each country has completely different rules. The rich are free to settle wherever it is most profitable for them, or even to create companies in different countries. That is not the case for the poorest.

XyglZ (of the 3rd nebula) emitted a sound that, on his planet, corresponded to a mocking laugh. Between two hiccups, he asked me how we could manage such complexity. Did we have supercomputers managing all our lives?

— No, I said. Each country has a large administration that serves only to apply the tax rules, calculate the taxes and make sure everyone pays. A non-negligible percentage of a country's inhabitants is employed by this administration.

XyglZ (of the 3rd nebula) was visibly surprised. He asked me how these civil servants, so numerous and capable of applying such complex procedures, were paid.

— They are paid by taxes, I said.

XyglZ (of the 3rd nebula) slapped his forehead with his tentacle, climbed into his saucer and briskly flew off.

Photo by Jonas Verstuyft on Unsplash

I am @ploum, a speaker and disconnected electronic writer, paid at a price of your choosing via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE licence.

September 04, 2018

During this summer, I went to SANSFire, Defcon and BSidesLV. Usually, the month of September is lighter, without big events for me. This is to prepare for the next wave of conferences ahead! Of course, BruCON will be held in the first week of October but, especially, hack.lu, which remains one of my favourites year after year. First of all, because the relaxed atmosphere is favourable to networking: I meet friends there on a yearly basis, and there are meetups here and there, like the Belgian dinner, which has been organized for a few years now, mainly among Belgian infosec people (but we remain open to friends 😜). The second reason (the official one) is the number of high-quality talks scheduled on a single track. The organizers told me that they received a huge number of replies to the call for papers. It was not easy to prepare the schedule (days are not extensible!).

The agenda has been published today and I already made my list of favourite sessions. Here is the list of topics:

  • Ransomware and their economic model (how to quantify them)
  • MONARC, a risk analysis framework and the ability to share information
  • Threat intelligence in the real world, or the gap that exists between what the security community thinks users need, what users think they need, and what they actually need
  • Klara, the Kaspersky’s open source tool which allows building a YARA scanner in the cloud
  • Neuro-Hacking or how to perform efficient social-engineering
  • “WHAT THE FAX?!” or how to pwn a company with a 30-year-old technology
  • DDoS or how to optimize attacks based on a combination of big-data analysis and pre-attack analysis to defeat modern anti-DDoS solutions
  • Vulnerabilities related to IPC (“Inter-Process Communication”) in computers
  • How to improve your pentesting skills
  • Privilege escalation and post-exploitation with bash on Windows
  • Relations between techies and users in the world of Infosec
  • How to find the best threat intelligence provider?
  • Boats are also connected, “How to hack a Yacht?”
  • How to discover APIs?
  • Data exfiltration on air-gapped networks
  • Presentation about the Security Information Exchange (S.I.E.)
  • Practical experiences in operating large scale honeypot sensor networks
  • An overview of the security of industrial serial-to-Ethernet converters

Here is a link to the official agenda. Besides the presentations, there are 18 workshops scheduled across the three days. I’ll be there to write some wrap-ups! (Here is a list of the previous ones, starting from 2011.) See you there!

[The post hack.lu 2018 is ahead! has been first published on /dev/random]

Conscientiously, the young apprentice arranged the jars on the shelves, which intertwined as far as the eye could see.
— Check that they are properly closed! grumbled an old man busying himself with a large ledger.
— Of course, Mr Joutche, I always pay attention.
— You always pay attention, but do you know what happens when a jar stays open?
— Er… The contents escape?
— And what do you think of that?
— I don't really see the harm, Mr Joutche. Actually…
— What?

The old man leapt from his chair, knocking the inkwell over the ledger. His nostrils were quivering, and he seemed to pay no attention to the black liquid spreading out, forming a tiny lake of abysses.

— These jars contain dreams, reckless boy. Nothing less than dreams. If they escape, they get the chance to come true.
Pale and stammering, the apprentice tried to retort.
— But it's a good thing for dreams to come true, isn't it?
— Do you know how much it costs me to store all these dreams? Do you know how many dreams are in this shop? And the anarchy it would be if they all came true at the same time?

Fuming, he grabbed the young man by the sleeve and dragged him down an aisle. After a few turns, they stopped in front of a particular shelf.
— Look at these jars! These are your own dreams.
The apprentice was dumbfounded.
— Imagine what would happen if all your dreams could suddenly come true?
— It would be…
— No, it would be terrible! Believe me, it is important to keep dreams locked up. When someone wishes to realize one of their dreams, they come to the shop. I then analyse the consequences of the dream and set a price for realizing it. That way, everything is under control.
— So you just have to pay to see your dream come true?
— Not quite. Once the jar is opened, the dream gets the chance to come true. But the subject must still act to make it happen. We merely make a dream possible.
— And if the person has no money?
— Then we stick to small dreams with no impact. Actually, since it is Christmas Eve, I'm running a discount and will offer a modest dream to every customer who comes in today.

The old man had softened. His eyes sparkled as he gazed fondly over the rows of jars.

— But it is essential to control dreams, to limit the possibilities. Without that, we would fall into anarchy. That is why our trade is so important. Go on, it's Christmas Eve, you've worked well, you can go home. Tomorrow will be a big day. We cannot close on Christmas Day! People are inclined to spend much more on their dreams!

With slow steps, the apprentice let his feet guide him to his little house. He contemplated the cold hearth of his little fireplace and, as he did every Christmas, set out his shoes while saying a silent prayer.

— Father Christmas, I don't want presents, I just want things to become a little different.

He woke up in the early morning with a strange feeling. Father Christmas had heard him! He was sure of it! Jumping out of bed, he rushed into his small living room.

Alas, his shoes were empty.

Sadly, he got dressed, put them on and set off for the shop. The aisles seemed very dark and silent to him.
— Mr Joutche?

Intrigued, he began to walk along the shelves. He found his boss, face pale, mouth wide open, right in front of the dreams he had been shown, his own dreams.
— Is everything all right, Mr Joutche?

The old man spun around.

— What? You dare set foot in here again? Out! I never want to see you again! Unworthy employee! I have never seen anything like it in forty years! Do you hear me? Never! You're fired!
— But…
— Out!

The apprentice did not dare insist but, before turning away, he caught sight of the jars that contained his dreams. All of them were open. The lids had disappeared! He hadn't noticed before, but now it leapt out at him: all the dreams in the warehouse had escaped.
— Out! yelled Mr Joutche.

Without waiting to be told twice, the apprentice started to run. He left the shop and came out onto the street. Snowflakes were slowly swirling in the early Christmas morning.

The apprentice began to smile.
— Thank you, Father Christmas! What a beautiful present! he murmured.

Then, without losing his smile, he started running through the snow, letting out little cries of freedom…

Photo by Javardh on Unsplash.

I am @ploum, a speaker and electronic writer, paid at a price of your choosing via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE licence.

Today is the first day back from my two-week vacation. We started our vacation in Maine, and we ended our vacation with a few days in Italy.  

While I did some work on vacation, it was my first two-week vacation since starting Acquia 11 years ago.

This morning when the alarm went off I thought: "Why is my alarm going off in the middle of the night?". A few moments later, reality struck. It's time to go back to work.  

Going on vacation is like going to space. Lots of work before take-off, followed by serenity and peacefully floating around in space, eventually interrupted by an insane re-entry into the earth's atmosphere.

I got up early this morning to work on my "re-entry" and prioritize what I have to do this week. It's a lot!

Drupal Europe is only one week away and I have to make a lot of progress on my keynote presentation and prepare for other sessions and meetings. Between now and Drupal Europe, I also have two analyst meetings (Forrester and Gartner), three board meetings, and dozens of meetings to catch up with co-workers, projects, partners and customers.  Plus, I would love to write about the upcoming Drupal 8.6.0 release and publish my annual "Who sponsors Drupal development?" report. Lots to do this week, but all things I'm excited about.

If you're expecting to hear from me, know that it might take me several weeks to dig out.

August 31, 2018

As you might have read on the Drupal Association blog, Megan Sanicki, the Executive Director of the Drupal Association, has decided to move on.

Megan has been part of the Drupal Association for almost 8 years. She began as our very first employee responsible for DrupalCon Chicago sponsorship sales in 2011, and progressed to be our Executive Director, in charge of the Drupal Association.

It's easy to forget how far we've come in those years. When Megan started, the Drupal Association had little to no funding. During her tenure, the Drupal Association grew from one full-time employee to 17 full-time employees, and from $1.8 million in annual revenues to $4 million today. We have matured into a nonprofit that can support and promote the mission of the Drupal project.

Megan led the way. She helped grow, mature and professionalize every aspect of the Drupal Association. The last two years in her role as Executive Director she was the glue for our staff and the driving force for expanding the Drupal Association's reach and impact. She understood how important it is to diversify the community, and include more stakeholders such as content creators, marketers, and commercial organizations.

I'm very grateful for all of this and more, including the many less visible contributions that it takes to make a global organization run each day, respond to challenges, and, ultimately, to thrive. Her work impacted everyone involved with Drupal.

It's sad to see Megan go, both professionally and personally. I enjoyed working with Megan from our weekly calls, to our strategy sessions as well as our email and text messages about the latest industry developments, fun stories taking place in our community, and even the occasional frustration. Open source stewardship can be hard and I'm glad we could lean on each other. I'll miss our collaboration and her support but I also understand it is time for Megan to move on. I'm excited to see her continue her open source adventure at Google.

It will be hard to fill Megan's shoes, but we have a really great story to tell. The Drupal community and the Drupal Association are doing well. Drupal continues to be a role model in the Open Source world and impacts millions of people around the world. I'm confident we can find excellent candidates.

Megan's last day is September 21st. We have activated our succession plan: putting in place a transition team and readying for a formal search for a new Executive Director. An important part of this plan is naming Tim Lehnen Interim Executive Director, elevating him from his role as Director of Engineering. I'm committed to finding a new Executive Director who can take the Drupal Association to the next level. With the help of Tim and the staff, our volunteers, sponsors and the Board of Directors, the Drupal Association is in good hands.

August 30, 2018

I published the following diary on the SANS Internet Storm Center: “Crypto Mining Is More Popular Than Ever!“:

We already wrote some diaries about crypto miners, and they remain more popular than ever. Based on my daily hunting statistics, malicious scripts performing crypto mining operations remained at the top of the samples I detected over the last 24h… [Read more]

[The post [SANS ISC] Crypto Mining Is More Popular Than Ever! has been first published on /dev/random]