Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

September 28, 2016

In the past, after every major release of Drupal, most innovation would shift to two areas: (1) contributed modules for the current release, and (2) core development work on the next major release of Drupal. This innovation model was the direct result of several long-standing policies, including our culture of breaking backward compatibility between major releases.

In many ways, this approach served us really well. It put strong emphasis on big architectural changes, for a cleaner, more modern, and more flexible codebase. The downsides were lengthy release cycles, a costly upgrade path, and low incentive for core contributors (as it could take years for their contribution to be available in production). Drupal 8's development was a great example of this; the architectural changes in Drupal 8 really propelled Drupal's codebase to be more modern and flexible, but also came at the cost of four and a half years of development and a complex upgrade path.

As Drupal grows — in lines of code, number of contributed modules, and market adoption — it becomes harder and harder to rely purely on backward compatibility breaks for innovation. As a result, we decided to evolve our philosophy starting after the release of Drupal 8.

The only way to stay competitive is to have the best product and to help people adopt it more seamlessly. This means that we have to continue to be able to reinvent ourselves, but that we need to make the resulting changes less scary and easier to absorb. We decided that we wanted more frequent releases of Drupal, with new features, API additions, and an easy upgrade path.

To achieve these goals, we adopted three new practices:

  1. Semantic versioning: a major.minor.patch versioning scheme that allows us to add significant, backwards-compatible improvements in minor releases like Drupal 8.1.0 and 8.2.0.
  2. Scheduled releases: new minor releases are timed twice a year for predictability. To ensure quality, each of these minor releases gets its own beta releases and release candidates with strict guidelines on allowed changes.
  3. Experimental modules in core: optional alpha-stability modules shipped with the core package, which allow us to distribute new functionality, gather feedback, and iterate faster on the modules' planned path to stability.
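As a small illustration (not part of the original post), the major.minor.patch ordering that semantic versioning gives Drupal releases can be sketched like this:

```python
# Hypothetical sketch: comparing Drupal-style semantic versions by
# parsing "major.minor.patch" strings into comparable integer tuples.
def parse_semver(version: str):
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

# 8.1.0 and 8.2.0 are feature releases within major version 8, so
# neither implies a backward-compatibility break.
releases = ["8.0.0", "8.2.0", "8.1.0"]
print(sorted(releases, key=parse_semver))
```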

Now that Drupal 8 has been released for about 10 months and Drupal 8.2 is scheduled to be released next week, we can look back at how this new process worked. Drupal 8.1 introduced two new experimental modules (the BigPipe module and a user interface for data migration), various API additions, and usability improvements like spell checking in CKEditor. Drupal 8.2 further stabilizes the migration system and introduces numerous experimental alpha features, including significant usability improvements (e.g. block placement and block configuration), date range support, and advanced content moderation — among a long list of other stable and experimental improvements.

It's clear that these regular feature updates help us innovate faster — we can now add new capabilities to Drupal that previously would have required a new major version. With experimental modules, we can get features in users' hands early, get feedback quickly, and validate that we are implementing the right things. And with the scheduled release cycle, we can deliver these improvements more frequently and more predictably. In aggregate, this enables us to innovate continuously; we can bring more value to our users in less time in a sustainable manner, and we can engage more developers to contribute to core.

It is exciting to see how Drupal 8 transformed our capabilities to continually innovate with core, and I'm looking forward to seeing what we accomplish next! It also raises questions about what this means for Drupal 9 — I'll cover that in a future blog post.

September 27, 2016

A few days ago a vulnerability was reported in the SELinux sandbox user space utility. The utility is part of the policycoreutils package. Luckily, Gentoo's sys-apps/policycoreutils package is not vulnerable - and not because we were clairvoyant about this issue, but because we don't ship this utility.

What is the SELinux sandbox?

The SELinux sandbox utility, aptly named sandbox, is a simple C application which executes its arguments, but only after ensuring that the task it launches is going to run in the sandbox_t domain.

This domain is specifically crafted to allow applications most standard privileges needed for interacting with the user (so that the user can of course still use the application), but removes many permissions that might be abused, either to obtain information from the system or to try to exploit vulnerabilities and gain more privileges. It also hides a number of resources on the system through namespaces.

It was developed in 2009 for Fedora and Red Hat. Given the necessary SELinux policy support though, it was usable on other distributions as well, and thus became part of the SELinux user space itself.

What is the vulnerability about?

The SELinux sandbox utility used an execution approach that did not sufficiently shield off the user's terminal. In the PoC post we see that characters can be pushed into the terminal's input queue through the ioctl() function (which invokes the ioctl system call, used for input/output control operations against devices) and are eventually executed as commands when the sandboxed application finishes.
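The injection primitive involved here can be illustrated with a short sketch (this demonstrates TIOCSTI in general, not the actual PoC; it allocates its own pseudo-terminal so it never touches your real terminal, and it may fail on hardened kernels that disable TIOCSTI):

```python
import fcntl
import os
import termios

# Allocate a fresh pseudo-terminal pair; the slave side plays the role
# of the user's terminal in this illustration.
master, slave = os.openpty()

pid = os.fork()
if pid == 0:
    # Child: start a new session and adopt the pty as its controlling
    # terminal, so that an unprivileged TIOCSTI on it is permitted.
    os.setsid()
    fcntl.ioctl(slave, termios.TIOCSCTTY, 0)
    # TIOCSTI pushes bytes into the terminal's *input* queue, as if the
    # user had typed them -- this is the primitive being abused.
    for byte in b"id\n":
        fcntl.ioctl(slave, termios.TIOCSTI, bytes([byte]))
    os._exit(0)

os.waitpid(pid, 0)
# The injected bytes were echoed back to the master side (the "screen");
# a shell owning this terminal would read and execute them as input.
echoed = os.read(master, 16)
print(echoed)
```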

That's bad, of course. Hence the CVE-2016-7545 registration; a fix has also been committed upstream.

Why isn't Gentoo vulnerable / shipping with SELinux sandbox?

There is some history behind why Gentoo does not ship the SELinux sandbox (anymore).

First of all, Gentoo already has a command called sandbox, installed through the sys-apps/sandbox package. So back when we still shipped the SELinux sandbox, we continuously had to patch policycoreutils to use a different name for its sandbox application (we used sesandbox then).

But then we had a couple of security issues with the SELinux sandbox application. In 2011, CVE-2011-1011 was reported for a security issue in the seunshare_mount function. And in 2014, CVE-2014-3215 revealed - again - a security issue with seunshare.

At that point, I had had enough of this sandbox utility. It never quite worked out of the box on Gentoo (as it also requires a policy which is not part of the upstream release), and given its wide-open access approach (it was meant to contain various types of workloads, so security concessions had to be made), I decided to no longer support the SELinux sandbox in Gentoo.

No Gentoo SELinux user has ever asked me to add it back.

And that is why Gentoo is not vulnerable to this specific issue.

September 26, 2016

While working on the second edition of my first book, SELinux System Administration, I had to test a few commands on different Linux distributions to make sure that I wasn't creating instructions that only work on Gentoo Linux. After all, as awesome as Gentoo might be, the Linux world is a bit bigger. So I downloaded a few live systems to run in Qemu/KVM.

Some of these systems however use cloud-init which, while interesting to use, is not set up on my system yet. And without support for cloud-init, how can I get access to the system?

Mounting qemu images on the system

To resolve this, I want to mount the image on my system, and edit the /etc/shadow file so that the root account is accessible. Once that is accomplished, I can log on through the console and start setting up the system further.

Images that are in the qcow2 format can be mounted through the nbd driver, but that would require some updates on my local SELinux policy that I am too lazy to do right now (I'll get to them eventually, but first need to finish the book). Still, if you are interested in using nbd, see these instructions or a related thread on the Gentoo Forums.

Luckily, storage is cheap (even SSD disks), so I quickly converted the qcow2 images into raw images:

~$ qemu-img convert root.qcow2 root.raw

With the image now available in raw format, I can use the loop devices to mount the image(s) on my system:

~# losetup /dev/loop0 root.raw
~# kpartx -a /dev/loop0
~# mount /dev/mapper/loop0p1 /mnt

The kpartx command will detect the partitions and make them available through the device mapper: the first partition becomes available at /dev/mapper/loop0p1, the second at /dev/mapper/loop0p2, and so forth.
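To see where those partition device nodes come from: kpartx simply reads the partition table from the image's first sector. A small illustrative sketch (not from the original post) that parses one primary MBR entry; the 512-byte sector is synthesized in-place rather than read from a real image:

```python
import struct

# An MBR holds four 16-byte partition entries starting at offset 446.
# kpartx maps each entry's sector range to a /dev/mapper/loopXpN node.
def parse_mbr_entry(sector0: bytes, index: int = 0):
    entry = sector0[446 + 16 * index : 446 + 16 * (index + 1)]
    status = entry[0]                                   # 0x80 = bootable
    ptype = entry[4]                                    # partition type id
    lba_start = struct.unpack_from("<I", entry, 8)[0]   # first sector (LBA)
    num_sectors = struct.unpack_from("<I", entry, 12)[0]
    return {"bootable": status == 0x80, "type": ptype,
            "start": lba_start, "sectors": num_sectors}

# Synthesize a minimal MBR: one bootable Linux (0x83) partition
# starting at LBA 2048, 204800 sectors long.
mbr = bytearray(512)
mbr[510:512] = b"\x55\xaa"  # MBR boot signature
mbr[446:462] = struct.pack("<B3sB3sII", 0x80, b"\x00\x00\x00",
                           0x83, b"\x00\x00\x00", 2048, 204800)
print(parse_mbr_entry(bytes(mbr)))
```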

With the image now mounted, let's update the /etc/shadow file.

Placing a new password hash in the shadow file

A Google search quickly revealed that the following command generates a shadow-compatible hash for a password:

~$ openssl passwd -1 MyMightyPassword
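If you want to script that same step, a hedged sketch (the fixed salt is my addition, purely to make the output reproducible; openssl generates a random salt when none is given):

```python
import subprocess

# Generate an MD5-crypt ($1$) hash the way the post does, but from a
# script. Any Linux shadow file accepts this legacy scheme.
hashed = subprocess.run(
    ["openssl", "passwd", "-1", "-salt", "saltsalt", "MyMightyPassword"],
    capture_output=True, text=True, check=True,
).stdout.strip()

# MD5-crypt output has the form $1$<salt>$<22 base64 chars>.
print(hashed)
```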

The challenge wasn't to find the hash though, but to edit it:

~# vim /mnt/etc/shadow
vim: Permission denied

The image that I downloaded used SELinux (of course), which meant that the shadow file was labeled with shadow_t which I am not allowed to access. And I didn't want to put SELinux in permissive mode just for this (sometimes I /do/ have some time left, apparently).

So I remounted the image, but now with the context= mount option, like so:

~# mount -o context="system_u:object_r:var_t:s0" /dev/mapper/loop0p1 /mnt

Now all files are labeled with var_t which I do have permissions to edit. But I also need to take care that the files that I edited get the proper label again. There are a number of ways to accomplish this. I chose to create a .autorelabel file in the root of the partition. Red Hat based distributions will pick this up and force a file system relabeling operation.

Unmounting the file system

After making the changes, I can now unmount the file system again:

~# umount /mnt
~# kpartx -d /dev/loop0
~# losetup -d /dev/loop0

With that done, I had root access to the image and could start testing out my own set of commands.

It did trigger my interest in the cloud-init setup though...

September 25, 2016

At first I thought about writing a brief summary, but why not just post the whole thing?

Why am I blogging this? Because it touches the edges of our freedoms, namely medical confidentiality. I don't believe that breaching doctors' professional secrecy will get us any terrorist whatsoever. Yet we are going to have to give it up. I wonder whether people who really have psychological problems will be served by this.

(read the e-mail thread from bottom to top for full context)

On Wed, 2016-09-07 at 15:04 +0000, FMF Kabinet Info (ACA) wrote:
> Dear Sir,

> The shared professional secrecy that we want to introduce is an exception
> to the existing professional secrecy. It will make it possible for
> professions bound by confidentiality to share "secret information"
> within a specific context.

Under your proposal, will the people in those professions be trained to
handle the decision to share this secret information correctly?

What will the criteria be?

> It is explicitly not the intention that doctors can simply get preventive
> access to judicial information or to the police databases.

And the other way around? Will the police be able to get preventive access
to medical files without clear prior approval and a mandate from an
investigating judge (who tests proportionality and the like)?

> It is, however, the intention that consultation can take place (on a
> structural basis), including between doctors, police, prosecutors and
> administrative authorities, about e.g. at-risk patients, recurring
> incidents or hotspots, and that on that basis agreements can be made
> about security.

Between doctors thus means that this secret information will be shared
with a relatively large group of people? In other words, it will leak
fairly quickly, because securing all those people's systems is an
impossible task. Doctors' computer systems are already being targeted by,
among others, cybercriminals, and there are already reports of this data
being sold internationally on black markets. Even on a massive scale.

If you do this recklessly, with computer files ending up scattered across
all sorts of networks and individual doctors' computers, then in a few
years you will have people who can never find work again. Because it is
almost certain that this data will be sold to HR companies.

Just recently, a Belgian company that manages medical records for a Dutch
firm was hacked. Tens of thousands of confidential files were captured.

How are you going to make sure this does not happen? The budget for
cybersecurity is, moreover, bitterly low.

> I am therefore of the opinion that we must relax the legal restrictions,

Yes, but will the oversight of it become stricter?

Merely relaxing them will, though this is just my opinion, lead to
chaotic abuse.

> so that it becomes possible to share absolutely necessary information with
> the right partners, even if this information was obtained within the
> confidential relationship of the profession.

With the right partners.

Will there then, alongside Committee I and Committee P, be a Committee M
to oversee the doctors? Or will the severely understaffed Committee I do
this?

> This must be possible not only in the case of an acute emergency, but also
> to address recurring problems or risks. The assessment of the
> appropriateness of sharing this information within that legal framework
> then remains with the holder of the information.

So not with someone trained for this, such as a

> Your remarks are otherwise correct: the criterion "people who use(d)
> violence" would be far too vague to allow a sound risk analysis to be
> carried out.

Exactly. So the assessment of whether to share this information has to be
made by someone who is trained for it?

> And this would moreover be very stigmatizing and counterproductive with
> regard to the relationship of trust between doctor and patient and the
> willingness to undergo treatment.

So we agree that only someone trained for this can make that trade-off?
In other words, an investigating judge.

Because that person can already make this trade-off, as long as he or she
consults with the Order of Physicians.

Perhaps I have not understood the law on special intelligence methods
correctly, of course…

Kind regards,


From: FMF Kabinet Info (ACA) <>
To: <>
Subject: RE: The shared professional secrecy between doctors and police
Date: Wed, 7 Sep 2016 15:04:15 +0000 (09/07/2016 05:04:15 PM)

Dear Sir,

The shared professional secrecy that we want to introduce is an exception to the existing professional secrecy. It will make it possible for professions bound by confidentiality to share "secret information" within a specific context.

It is explicitly not the intention that doctors can simply get preventive access to judicial information or to the police databases.

It is, however, the intention that consultation can take place (on a structural basis), including between doctors, police, prosecutors and administrative authorities, about e.g. at-risk patients, recurring incidents or hotspots, and that on that basis agreements can be made about security.

I am therefore of the opinion that we must relax the legal restrictions, so that it becomes possible to share absolutely necessary information with the right partners, even if this information was obtained within the confidential relationship of the profession. This must be possible not only in the case of an acute emergency, but also to address recurring problems or risks. The assessment of the appropriateness of sharing this information within that legal framework then remains with the holder of the information.

Your remarks are otherwise correct: the criterion "people who use(d) violence" would be far too vague to allow a sound risk analysis to be carried out.

And this would moreover be very stigmatizing and counterproductive with regard to the relationship of trust between doctor and patient and the willingness to undergo treatment.

Kind regards,

On behalf of the Minister,

Trees Van Eykeren

Personal assistant to Minister Geens

Cabinet of the Minister of Justice Koen Geens
Waterloolaan 115
1000 Brussel
Tel +32 2 542 8011

-----Original message-----
From: Philip Van Hoof []
Sent: Saturday 20 August 2016 7:06 PM
To: FMF Kabinet Info (ACA)
Subject: The shared professional secrecy between doctors and police

Hello Koen,

When the trade-off for a shared professional secrecy means that, in exchange, the police must be able to get access in emergency situations to the medical information of people who use violence, I wonder what the criterion for "people who use violence" will be. Which conditions will you, as a Belgian citizen, have to meet in order to be a person who uses violence?

In other words, what will the definition of "using violence" be, before you become a citizen who used violence in the past?

In other words, from what point on are you a member of the group that doctors may regard as undesirable?

And what happens to the sharing of a citizen's file when you have been tried and subsequently punished for "violent offences", and your sentence has been served?

Will doctors keep permanent access to that file? In other words, will these people remain punished forever? You know, of course, that quite a few doctors will refuse to offer these people help.

How will this help the reintegration of these people? I thought our society stood for the principle that once convicted, punished, and once the sentence has been served, you are reintegrated into society. But does that then not apply, or does it, when it comes to medical care?

How do you ensure with these bills that people who need help will not, because these laws on file sharing between police and doctor exist, refrain from seeking help from an expert in the field?

In other words: when someone has psychological problems but is still lucid enough to realize that a psychologist or psychiatrist has a certain duty to report those psychological problems to the police, I think that person will refrain from seeking help. How will you ensure that your bills avoid this situation?

Do you, furthermore, expect to catch many psychopathic criminals with this new system?

Why would psychopathic people, who are generally intelligent, suddenly report their malicious thoughts to the doctor? Especially now that everyone (including psychopathic people with malicious thoughts) knows that the doctor will be all but obliged to share such malicious thoughts with the police.

Kind regards,



September 23, 2016

What do security analysts do when they aren't fighting fires? They hunt for malicious activity on networks and servers! A few days ago, some suspicious traffic was detected. It was an HTTP GET request to a URL like hxxp://xxxxxx.xx/south/fragment/subdir/… Let's try to access this site from a sandbox. Too bad, I landed on a login page which looked like a C&C. I tried some classic credentials, searched for the URL and some patterns on Google, in mailing lists and private groups: nothing! Too bad…

Then you start some stupid tricks, like moving to the previous directory in the path (like doing a "cd ..") again and again, to finally… find another (unprotected) page! This page was indexing screenshots sent by the malware from compromised computers. Let's do a quick 'wget -m' to recursively collect the data. I came back a few hours later, pressed 'F5', and the number of screenshots had increased. The malware was still in the wild. A few hours and some 'F5' later, again more screenshots! Unfortunately, the next day, the malicious content was removed from the server. Fortunately, I had my copies of the screenshots. Just based on them, it is possible to get interesting info about the attack / malware:

  • People from many countries were infected (speaking Chinese, Russian, German, Arabic, …)
  • It targeted mainly organizations
  • The malware was delivered via two files:
    • A “scan001.ace” archive containing a “scan001.exe” malicious PE file.
    • A “PR~Equipments-110 00012404.ace” file
  • The malicious file was opened on file servers and even a DC!
  • The malicious file was analyzed in sandboxes (easy to recognize them, Cuckoo & FireEye)

Here is a selection of interesting screenshots (anonymized). The original screenshots were named “<hostname>_<month>_<day>_<hour>_<min>_<sec>.jpg”. Based on the filename format, it seems that the malware is taking one screenshot per minute. I renamed all the files with their MD5 hash to prevent disclosure of sensitive info.
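The anonymisation step described above can be sketched as follows (the sample filename and content are made up for illustration; this is not the actual tooling used):

```python
import hashlib
import re

# Parse the reported filename format
# "<hostname>_<month>_<day>_<hour>_<min>_<sec>.jpg" and replace it with
# the MD5 hash of the file content, dropping hostname and timestamp.
PATTERN = re.compile(r"^(?P<host>.+)_(\d+)_(\d+)_(\d+)_(\d+)_(\d+)\.jpg$")

def anonymise(filename: str, content: bytes) -> str:
    match = PATTERN.match(filename)
    if match is None:
        raise ValueError(f"unexpected filename: {filename}")
    # Keep only a content hash so no sensitive info survives in the name.
    return hashlib.md5(content).hexdigest() + ".jpg"

print(anonymise("SRV-FILE01_09_23_14_05_33.jpg", b"fake screenshot bytes"))
```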

f7c75af9f6d84a761f979ebf490f921d ee517028d9b1bfaf2aae8abf6176735f e640309d8a27c14118906c3be7308363 e17d33f4f6969970d29f67063f416820 e6f74e098268b361261f26842fe05701 da5c267c26529951d914b1985b2b70df beae96aee2e7977bdda886c130c0d769 c0c429c65a61d6ef039b33c0b52263a2 c1f0b66cea6740c74b55b27e5eff72b7 c8d73ddafc18e8f3ecb1c2c69091b0bb d351e118cb3f9ce0e319ad9e527e650d d0344809b6b32ddec99d98eb96ff5995 b78c32559c276048e028e8af2b06f1ed b10b50a956d1dfd3952678161b9a8242 b1f39eaf121a3d7c9bb1093dc5e5e66b af66c8924f1bb047f44f0d3be39247f7 9643b3c28fa9cf71df8fbc1568e7d82e 957dc126433c79c71383a37ee3da4a5f 0134fc9dda9c6ffd2d3a2ed48c000851 81d74df34b1e85bd326570726dd6eacb 018b6037b4fa2ae9790e3c6fb98fb1e7 9fda6c140a772b5069bd07b7ee898dba 9ed4787a1e215f341aff9b5099846bfe 09c5cfb440193b35017ae2a5552cd748 8c64f33d219f5cd0eadd90e1fcdc97ec 8c7c1fd9938e9cb78b0e649079a714df 6b76b6456af4a2ab54c4bd5935a5726a 6a4c19fb2a13121ee03577c9b37924a9 5aaf455193b2d4bfd13128a5c2502db8 4ba9db95f7bbeb58f73969f2262eea8b 2c48880ea3a8644985ffe038fe9a1260

[The post Go Hunt for Malicious Activity! has been first published on /dev/random]

September 22, 2016

Intel NUC

I decided to move my home backup drives to ZFS because I wanted built-in file checksumming as a prevention against silent data corruption. I chose ZFS over Btrfs because I have considerable experience with ZFS on Solaris.

I knew that ZFS loves RAM, hence I upgraded my home “server” (NFS/Samba/Docker) from an old laptop with 2GB of RAM to the cheapest Intel NUC I could find with USB3, Gigabit ethernet and 8GB of RAM. The C5CPYH model fitted the bill.

Two remarks for those that want to install Linux on this barebones mini-pc:

  • Update the BIOS first, otherwise the Ubuntu 16.04 server USB installer won’t start. My model had a very recent BIOS version, but still I needed the latest. BIOS updates can be found here. (There is also an option to select Linux as the OS in the BIOS.)
  • Ubuntu 16.04 server did not find the network card at install time (missing Realtek drivers). Just finish the installation; once rebooted, the correct driver for the network card will already be loaded. Then finish the IP configuration in /etc/network/interfaces.
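For reference, a static-IP stanza in /etc/network/interfaces could look like this (interface name and addresses are placeholders, not the author's actual network):

```
# /etc/network/interfaces -- example static configuration
auto enp3s0
iface enp3s0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
```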


September 21, 2016

As soon as you're running some IT services, there is one thing you already know: you'll have downtimes, despite all your efforts to avoid them...

As the old joke goes: "What's up?" asked the Boss. "Hopefully everything!" answered the SysAdmin.

You probably know that the CentOS infra is itself widespread, and subject to quick changes too. Recently we had to announce an important DC relocation that impacts some of our crucial and publicly facing services. That one falls in the "scheduled and known outages" category, and can be prepared for. Such downtimes we always announce through several channels, like sending a mail to the centos-announce and centos-devel (and in this case also the ci-users) mailing lists. But even when we announce these in advance, some people forget about them, or people using (sometimes indirectly) the affected service are surprised and then ask about it (usually in #centos or #centos-devel).

In parallel to those scheduled outages, we also have the worst kind: the unscheduled ones. For those, depending on the impact/criticality of the affected service, and also the estimated RTO, we also send a mail to the concerned mailing lists (or not).

So we just decided to set up a very simple and public status dashboard for the CentOS Infra, covering only the publicly facing services, to give a quick overview of that part of the Infra. It's now live.

We use Zabbix to monitor our Infra (we build it for multiple arches, like x86_64, i386, ppc64, ppc64le, aarch64 and also armhfp), including through remote Zabbix proxies (because of our distributed network setup right now, with machines all around the world). For some of the services listed on that dashboard, we can manually announce a downtime/maintenance period, but Zabbix also updates the dashboard on its own. The simple way to link the two together was to use Zabbix custom alertscripts; you can even customize those to send specific macros and have the alertscript parse them and then update the dashboard.
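The shape of such a custom alertscript can be sketched as below (all names and the output format are assumptions for illustration, not the actual CentOS infra script). Zabbix invokes an alertscript with three positional arguments: recipient, subject and message body, where subject and body can carry custom macros describing the failing service:

```shell
# Minimal sketch of a Zabbix custom alertscript handler.
handle_alert() {
    recipient="$1"   # "send to" field of the Zabbix media type
    subject="$2"     # e.g. "PROBLEM" / "OK", or a custom macro
    body="$3"        # free-form message, can embed more macros
    # A real script would parse macros out of "$body" and POST the
    # result to the status dashboard; here we only format the update.
    printf 'dashboard-update: service=%s state=%s\n' "$recipient" "$subject"
}

handle_alert "web.centos.org" "PROBLEM" "http check failed"
```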

We hope to enhance that dashboard in the future, but it's a good start, and I have to thank again Patrick Uiterwijk who wrote that tool for Fedora initially (and that we adapted to our needs).

September 19, 2016

Four weeks ago we went on a vacation in Tuscany. I finally had some time to process the photos and write down our memories from the trip.

Day 1

Al magrini farmhouse

We booked a last-minute house in a vineyard called Fattoria di Fubbiano. The vineyard has been producing wine and olive oil since the 14th century. On the eastern edge of the estate is Al Magrini, a Tuscan farmhouse surrounded by vines and olive trees.

When we arrived, we were struck by the remoteness. We had to drive through dirt roads for 10 minutes to get to our house. But once we got there, we were awestruck. The property overlooks a valley of olive groves and vines. We could have lunch and dinner outside among the rose bushes, and enjoy our own swimming pool with its own sun beds, deck chairs and garden umbrellas.

While it was full of natural beauty, it was also very simple. We quickly realized there was no TV or internet, no living room, and only a basic kitchen; we couldn't run two appliances at the same time. But nothing some wine and cheese can't fix. After some local cheese, olives and wine, we went for a swim in the pool. Vacation had started!

We had dinner in a great little restaurant in the middle of nowhere. We ate some local, traditional food called "tordelli lucchesi". Nearly every restaurant in Lucca serves a version of this traditional Lucchesan dish. Tordelli look like ravioli, but that is where the resemblance ends. The filling is savory rather than cheesy, and the cinnamon- and sage-infused ragù with which the tordelli are served is distinctly Tuscan. The food was exceptional.

Day 2

Swimming pool

We were woken up by loud screaming from Stan: "Axl got hurt! He fell out of the window!". Our hearts skipped several beats because the bedrooms were on the second floor and we told them they couldn't go downstairs in the morning.

Turns out Axl and Stan wanted to surprise us by setting the breakfast table outside. They snuck downstairs and originally set the table inside, wrote a sweet surprise note in their best English, and made "sugar milk" for everyone -- yes, just like it sounds they added tablespoons full of sugar to the milk. Axl then decided he wanted to set the table outside instead. They overheard us saying how much we enjoyed eating breakfast outside last time we were in Italy. They couldn't open the door to the backyard so Axl decided to climb out of the window, thinking he could unlock the door from the outside. In the process, he fell out of the window from about one meter. Fortunately since it was a first floor window (ground level window), Axl got nothing but a few scratches. Sweet but scary.

Later on, we went to the grocery store and spent most of the day at the pool. The boys can't get enough of playing in the water with the inflatable crocodile "Crocky" raft Stan had received for his birthday two years ago. Vanessa can't get enough of the sun and she also confiscated my Kindle.

With no Kindle to read on, I discovered poop next to the pool. I thought it was from a wild horse and was determined to go looking for it in the coming days.

In the late afternoon, we had snacks and prosecco, something which became our daily tradition on vacation. The Italian cheese was great and the "meloni" was so sweet. The food was simple, but tasted so much better than at home. Maybe it's the taste of vacation.

Vanessa did our first load of laundry which needed to dry in the sun. The clothes were a little crunchy, but there was something fulfilling about the simplicity of it.

Day 3

Hike up the hill

In good tradition, I made coffee in the morning. As I headed downstairs, the morning light peeked through all the cracks of the house and highlighted the old brick and stone walls. The coffee machine is charmingly old school; we had to wait 20 minutes or so for the whole pot to brew.

Vanessa made french toast for breakfast. She liked to shout in Dutch "Het is vakantie!" during the breakfast preparation. Stan moaned repeatedly during breakfast - he loved the french toast! It made us laugh really hard.

Today was a national holiday in Italy, so everything was closed. We decided to spend the time at the pool; no one was complaining about that. Most weeks feel like a marathon at work, so it was nice to do absolutely nothing for a few days, not keep track of time, and just enjoy our time together.

To take a break from the pool, we decided to walk through the olive groves looking for those wild horses. Axl and Stan weren't especially fond of the walk as it started off uphill. Stan told us "I'm sweating" as if we would turn back. Instead of wild horses we found a small mountain village. The streets were empty and the shutters were closed to keep the peak heat of the day out. It seemed like we had stepped back in time 30-40 years.

Sitting next to the pool gave me a lot of time to think and reflect. It's nice to have some headspace. Our afternoon treat by the pool was iced coffee! We kept the leftover coffee from the morning to pour over ice for a refreshing drink. One of Vanessa's brilliant ideas.

Our evening BBQs are pretty perfect. We made Spanish style bruschetta; first grilling the bread, then rubbing it with garlic and tomato, drizzle some local olive oil over it, and add salt and pepper. After the first bite it was requested we make this more often.

We really felt we were all connecting. We even had an outdoor dance party as the sun was setting. Axl wrote in our diary: "Vanessa laughed so hard she almost peed her pants. LOL." Stan wanted to know if his moves made her laugh the hardest.

Every evening we would shower to wash off the bug spray, because mosquitos were everywhere. When it was finally my time to shower, we ran out of water -- just when I was all soaped up. Fortunately, we had a bottle of Evian that I could use to rinse off (just like the Kardashians).

Day 4

Italian house

We set the alarm for 7:30am so we could head to Lucca, a small city 30 minutes from our house -- 15 minutes of that is spent getting out of the vineyard and mountain trails. We were so glad we rented "Renny", our 4x4 Jeep Renegade, as there are no real paved roads in the vineyard.

We visited "La Boutique Dei Golosi", a tiny shop that sold local wines, olive oils and other Italian goods. The shop owner, Alain, opened bottles of wine and let us taste different olive oils on bread. He offered the boys samples of everything the adults tried and was impressed that they liked it. Interestingly enough, all four of us preferred the same olive oil. We shipped 5 bottles home, along with several bottles of wine, limoncello and 3 jars of white truffle paste. It was fun knowing a big box of Italian goods would arrive once we were home.

When we got back from Lucca, we fired up the grill and drank our daily bottle of prosecco. Every hour we heard bells ring from the little town up on the hill; the bells are how we kept track of time. The go-at-your-own-pace lifestyle is something all North Americans should experience. The rhythm of Tuscany's countryside is refreshing -- the people there know how to live.

Axl and Stan enjoyed the yard. When they weren't playing soccer or hunting for salamanders, they played ninjas using broomsticks. Axl was "Poison Ivy" and Stan was "Bamboo Sham". Apparently, they each have special moves that they can use once every battle.

Day 5

Wine tasting fattoria di fubbiano

Today we went wine tasting at our vineyard, Fattoria di Fubbiano, and got a tour of the cellar. It was great that the tour was in "inglese". We learned that they manage 45 hectares and produce 100,000 bottles of wine annually. We bought 21 of them and shipped them home, so there are only 99,979 left. The best part? We could walk home afterwards. :)

Our charcoal reserves are running low; a sign of a great vacation.

Day 6

Funicular montecatini alto

We visited Montecatini Alto, about a 40 minute drive from our house. To get to Montecatini Alto, we took a funicular built in 1898. They claim it is the oldest working cable car in the world. I believe them.

Montecatini Alto is a small medieval village that dates back to 1016, up on a hill. The views from the village are amazing, overlooking a huge plain. I closed my eyes and let my mind wander, trying to imagine what life was like back then, over a thousand years ago.

At the very top there was an old church where we lit a candle for Opa. I think about Opa almost every day. I imagined all of the stories and historic facts he would tell if he were still with us.

The city square was filled with typical restaurants, cafes and shops. We poked around in some of the shops and Stan found a wooden sword he wanted, but couldn't decide if that's what he wanted to spend his money on. To teach Axl and Stan about money, we let them spend €20 of their savings on vacation. Having to use their own money made them think long and hard about their purchases. Since the shops close from 1pm to 2:30pm, we went for lunch in one of the local restaurants on the central square while Stan contemplated his purchase. It was great to see Axl explore the menu and try new things. He ordered the carbonara and loved it. Stan finally decided he wanted the sword badly enough, so we went back and he bought it for €10.

When we got back to our vineyard, we spotted wild horses! Finally proof that they exist. Vanessa quickly named them Hocus, Pocus and Dominocus.

In the evening we had dinner in a nearby family restaurant called "Da Mi Pa". The boys had tordelli lucchesi and then tiramisu for dessert. Chances are slim but I hope that they will remember those family dinners. They talked about the things that are most important in life, as well as their passions (computer programming for Axl and soccer for Stan). The conversations were so fulfilling and a highlight of the vacation.

Day 7

Leaning tower of pisa

Spontaneous last minute decision on what to do today. We came up with a list of things to do and Axl came up with a voting system. We decided to visit the Leaning Tower of Pisa. We were all surprised how much the tower actually leans and of course we did the goofy photos to prove we were there. These won't be published.

Day 8

Ponte vecchio florence

Last day of the vacation. We're all a bit sad to go home. The longer we stayed, the happier we got -- happier not because of where we were, but because of how we connected.

Today, we're making the trek to Florence. One of the things Florence is known for is leather. Vanessa wanted to look for a leather jacket, and I wanted to look for a new duffel bag. We found a shop that was recommended to us; one of the shop owners is originally from the Greater Boston area. Enio, her husband, was very friendly and kind. He talked about swimming in Walden Pond, visiting Thoreau's house, and so on. The boys couldn't believe he had been to Concord, MA. Enio really opened up and gave us a private tour of his leather workshop, which consisted of small rooms filled with piles and piles of leather and all sorts of machinery and tools.

I had a belt made with my initials on it (on the back). Stan got a bracelet made out of the leftover leather from the belt. Axl also got a bracelet made, and both had their initials stamped on them. Vanessa bought a gorgeous brown leather jacket, a purse and a funky belt. And last but not least, I found a beautiful handmade ram-skin duffel bag in a cool green color. Enio explained that it takes him two full days to make the bag. It was expensive but will hopefully last for many, many years. I wanted to buy a leather jacket but as usual they didn't have anything my size.

We strolled across the Ponte Vecchio and took some selfies (like every other tourist). We had a nice lunch: pasta for Vanessa, Axl and myself. Stan still has an aversion to ragù even though he ate it 3 times that week and loved it every time. Then we had our "grand finale gelato" before we headed to the airport.

September 16, 2016


For Stoyan, my friend and a reader of this blog, who passed away on 8 September 2016.

Leave Me the Night!

Ever since I was very small,
I have been surrounded, and you taught me
That the sun is the source of all life.
Today, look, I have grown up.
So leave me the night.

Leave me my anger against traditions,
Against religions and all those stupid rules
We are supposed to obey without asking questions.
Let me howl at them, shout my name at them,
Shake them up with a great noise,
In a hubbub, an uproar,
Yes, leave me the night.

Leave me the night
Let me go into the dark
Leave me untouched by your hopes
Let me howl
Let me shock
Leave me even if you do not like it
Leave me even if you do not understand
Let me choose
Let me hate
Let me be misunderstood
Leave me the night

Do not look for someone to blame
Do not ask yourselves how to change me
Do not try to alter the past!
Love, you have given me plenty of it;
Understand that no one failed: I chose.
So, leave me the night.

When the illusion of eternity fades away,
When the veil of naivety dies,
When light gives way to darkness,
All that remains is an insatiable quest for freedom.
Leave me this path that I have chosen.
Leave me the freedom of the night.


Thank you for taking the time to read this freely-paid post. Feel free to support me with a few milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

September 15, 2016

During the last edition of the Troopers security conference in March, I attended a talk about “Just-Metadata”, a tool developed by Chris Truncer to perform open source intelligence against IP addresses. Since then, I have used this tool on a regular basis. Often, when you’re using a tool, you have ideas to improve it. Just-Metadata being written in Python and based on “modules”, it is easy to write your own. My first contribution to the project was a module to collect information from DShield.

Passive DNS is a nice technique to track the use of IP addresses, mainly to track malicious domains, but it can be combined with other techniques to get a better view of the “scope” of an IP address. My new contribution is a module which uses the Bing API to collect hostnames associated with an IP address. Indeed, Bing has a nice search query: ‘ip:x.x.x.x’. Having a good view of an IP address is key when you collect addresses to build blacklists: behind a single IP address, hundreds of websites can be hosted (in a shared environment). A good example is the big WordPress providers. Blacklisting an IP address might have multiple impacts:

  • Your users could be blocked while trying to visit a legitimate site hosted on the same IP address.
  • This could generate a lot of false-positive events in your log monitoring environment.

Here is an example of the module:

$ ./
# Just-Metadata #

[>] Please enter a command: load ip.tmp
[*] Loaded 1 systems
[>] Please enter a command: gather Bing_IP
Found 549 hostnames for
[>] Please enter a command: ip_info
...stuff deleted...
...stuff deleted...
[>] Please enter a command: exit
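Conceptually, the Bing_IP module boils down to running an ‘ip:x.x.x.x’ search and extracting the unique hostnames from the results. A minimal sketch of that parsing step in Python — the JSON shape and the `Url` field name here are assumptions for illustration, not the actual Bing API schema or the module's code:

```python
import json
from urllib.parse import urlparse

# Sample of what the search results for an 'ip:x.x.x.x' query could
# look like once decoded (field names are an assumption).
sample_results = json.loads("""
[
  {"Url": "http://blog.example.com/post/1"},
  {"Url": "https://shop.example.net/"},
  {"Url": "http://blog.example.com/post/2"}
]
""")

def extract_hostnames(results):
    """Deduplicate the hostnames seen in a list of search results."""
    return sorted({urlparse(r["Url"]).hostname for r in results})

hostnames = extract_hostnames(sample_results)
print("Found %d hostnames" % len(hostnames))
for h in hostnames:
    print(h)
```

Deduplicating on the hostname (rather than the full URL) is what makes the "hundreds of websites behind one IP" situation visible at a glance.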

I wrote a Dockerfile for Just-Metadata to run it in a container. To fully automate the gathering process and reporting, I wrote a second patch that allows the user to specify a filename to save the state of a research, or to export the results into the right volume. Build the Docker image with the following command:

# docker build -t justmetadata --build-arg SHODAN_APIKEY=<yourkey> --build-arg BING_APIKEY=<yourkey> .

Now, the analysis of IP addresses can be fully automated (with the target IP addresses stored in ip.tmp):

# docker run -it --rm -v /tmp:/data justmetadata \
             -l /data/ip.tmp -g All \
             -e /data/results.csv

As a bonus, CSV files can be indexed by a Splunk instance (or any other tool) for further processing:

Splunk IP Address Details

Here is an example of complete OSINT details about an IP address:

Splunk IP OSINT

Based on these details, you can generate more accurate blacklists. I sent a pull request to Chris with my changes. In the meantime, you can use my repository to use the “Bing_IP” module.



[The post IP Address Open Source Intelligence for the Win has been first published on /dev/random]

Copy/pasted straight from a support question:

Auto-deleting the cache would only solve one problem you’re having (disk space), but there are 2 other problems -which I consider more important- that auto-cleaning can never solve:
1. you will be generating new autoptimized JS very regularly, which slows your site down for users who happen to be the unlucky ones requesting that page
2. a visitor going from page X to page Y will very likely have to request a different autoptimized JS file for page Y instead of using the one from page X from cache, again slowing your site down

So I actually consider the cache-size warning like a canary in the coal mines; if the canary dies, you know there’s a bigger problem.

You don’t (or shouldn’t) really want me to take away the canary! :)


This just got posted to the varnish-announce mailing list.

We have just released Varnish Cache 5.0:

This release comes almost exactly 10 years after Varnish Cache 1.0,
and also marks 10 years without any significant security incidents.

Next major release (either 5.1 or 6.0) will happen on March 15 2017.


PS: Please help fund my Varnish habit:

Poul-Henning Kamp

varnish-announce mailing list

Lots of changes (HTTP/2, Shard director, ban lurker improvements, ...) and info on upgrading to Varnish 5!

I'll be working on updated configs for Varnish 5 (as I did for Varnish 4) as soon as I find some time for it.

The post Varnish Cache 5.0 Released appeared first on

Next week, the second edition of FOSS4G Belgium, the Belgian conference on open-source geospatial software, will take place at Tour & Taxis in Brussels.

The conference will bring together developers and users of open source geomatics software from Belgium and all over the world. Participation is free of charge but registration is required.

We have a varied program for you: in the plenary session we will present the current state of many of the major open-source GIS applications, including novelties demonstrated at the last FOSS4G conference in Bonn.

Next, in the main track, the different regions of Belgium will present their open (geo) data sets. Moreover, Christian Quest from OpenStreetMap France will explain how the OpenStreetMap community was actually part of the creation and maintenance of the address database in France.

In our parallel sessions we have presentations covering all aspects of open-source geospatial: presentations on the usage of some of the most widely used programs, such as QGIS, OpenLayers 3 and PostGIS, but also less well-known solutions for handling 3D data, building SDIs and doing advanced analyses. With the OpenStreetMap conference happening just the day after this conference, we also have a special track with specialised talks.

Last but not least, conferences are all about networking and meeting new people, and you will get the chance to do so during our breaks!

Hope to see many of you in Brussels!

September 14, 2016

I've made no secret of my interest in the open web, so it won't come as a surprise that I'd love to see more web applications and fewer native applications. Nonetheless, many argue that "the future of the internet isn't the web" and that it's only a matter of time before walled gardens like Facebook and Google — and the native applications which serve as their gatekeepers — overwhelm the web as we know it today: a public, inclusive, and decentralized common good.

I'm not convinced. Native applications seem to be winning because they offer a better user experience. So the question is: can open web applications, like those powered by Drupal, ever match up to the user experience exemplified by native applications? In this blog post, I want to describe inversion of control, a technique now common in web applications and that could benefit Drupal's own user experience.

Native applications versus web applications

Using a native application — for the first time — is usually a high-friction, low-performance experience because you need to download, install, and open the application (Android's streamed apps notwithstanding). Once installed, native applications offer unique access to smartphone capabilities such as hardware APIs (e.g. microphone, GPS, fingerprint sensors, camera), events such as push notifications, and gestures such as swipes and pinch-and-zoom. Unfortunately, most of these don't have corresponding APIs for web applications.

A web application, on the other hand, is a low-friction experience upon opening it for the first time. While native applications can require a large amount of time to download initially, web applications usually don't have to be installed and launched. Nevertheless, web applications do incur the constraint of low performance when there is significant code weight or dozens of assets that have to be downloaded from the server. As such, one of the unique challenges facing web applications today is how to emulate a native user experience without the drawbacks that come with a closed, opaque, and proprietary ecosystem.

Inversion of control

In the spirit of open source, the Drupal Association invited experts from the wider front-end community to speak at DrupalCon New Orleans, including from Ember and Angular. Ed Faulkner, a member of the Ember core team and contributor to the API-first initiative, delivered a fascinating presentation about how Drupal and Ember working in tandem can enrich the user experience.

One of Ember's primary objectives is to demonstrate how web applications can be indistinguishable from native applications. And one of the key ideas of JavaScript frameworks like Ember is inversion of control, in which the client side essentially "takes over" from the server side by driving requirements and initiating actions. In the traditional page delivery model, the server is in charge, and the end user has to wait for the next page to be delivered and rendered through a page refresh. With inversion of control, the client is in charge, which enables fluid transitions from one place in the web application to another, just like native applications.

Before the advent of JavaScript and AJAX, distinct states in web applications could be defined only on the server side as individual pages and requested and transmitted via a round trip to the server, i.e. a full page refresh. Today, the client can retrieve application states asynchronously rather than depending on the server for a completely new page load. This improves perceived performance. I discuss the history of this trend in more detail in this blog post.
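The shift described above can be made concrete with a toy model (not Drupal or Ember code; the names and shapes here are illustrative only): in the traditional model every interaction returns a whole new page, while under inversion of control the client owns the page shell and only fetches the state it needs:

```python
# Toy model contrasting server-driven page delivery with
# client-driven ("inversion of control") state fetching.
# "Server" responses are plain dicts; no real HTTP involved.

ARTICLES = {1: "Hello world", 2: "Inversion of control"}

def server_render_full_page(article_id):
    """Traditional model: the server builds the entire page every time."""
    return "<html><header>Site</header><main>%s</main></html>" % ARTICLES[article_id]

def server_state(article_id):
    """IoC model: the server returns only the state the client asked for."""
    return {"id": article_id, "body": ARTICLES[article_id]}

class ClientApp:
    """IoC model: the client keeps the shell and swaps states itself."""
    def __init__(self):
        self.shell = "<html><header>Site</header><main>{body}</main></html>"

    def navigate(self, article_id):
        state = server_state(article_id)              # async fetch in a real app
        return self.shell.format(body=state["body"])  # no full page refresh

# Traditional: two full page loads, the shell retransmitted both times.
page1 = server_render_full_page(1)
page2 = server_render_full_page(2)

# IoC: the shell is downloaded once; navigation only moves small states.
app = ClientApp()
view1 = app.navigate(1)
view2 = app.navigate(2)
assert view2 == page2  # same rendered result, far less transferred
```

The end result is identical markup; the difference is who is in charge of assembling it, which is exactly what enables seamless transitions and offline rendering on the client.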

Through inversion of control, JavaScript frameworks like Ember provide much more than seamless interactions and perceived performance enhancements; they also offer client-side storage and offline functionality when the client has no access to the server. As a result, inversion of control opens a door to other features requiring the empowerment of the client beyond just client-driven interactions. In fact, because the JavaScript code is run on a client such as a smartphone rather than on the server, it would be well-positioned to access other hardware APIs, like near-field communication, as web APIs become available.

Inversion of control in end user experiences

Application-like animation using Ember and Drupal
When a user clicks a teaser image on the homepage of an Ember-enhanced version of the site, the page seamlessly transitions into the full content page for that teaser, with the teaser image as a reference point, even though the URL changes.

In response to our recent evaluation of JavaScript frameworks and their compatibility with Drupal, Ed applied the inversion of control principle using Ember. Ed's goal was to enhance the site's end user experience with Ember to make it more application-like, while also preserving Drupal's editorial and rendering capabilities as much as possible.

Ed's changes are not in production, but in his demo, clicking a teaser image causes it to "explode" to become the hero image of the destination page. Pairing Ember with Drupal in this way allows a user to visually and mentally transition from a piece of teaser content to its corresponding page via an animated transition between pages — all without a page refresh. The animation is very impressive and the animated GIF above doesn't do it full justice. While this transition across pages is similar to behavior found in native mobile applications, it's not currently possible out of the box in Drupal without extensive client-side control.

Rather than the progressively decoupled approach, which embeds JavaScript-driven components into a Drupal-rendered page, Ed's implementation inverts control by allowing Ember to render what is emitted by Drupal. Ember maintains control over how URLs are loaded in the browser by controlling URLs under its responsibility; take a look at Ed's DrupalCon presentation to better understand how Drupal and Ember interact in this model.

These impressive interactions are possible using the Ember plugin Liquid Fire. Fewer than 20 lines of code were needed to build the animations in Ed's demo, much like how SDKs for native mobile applications provide easy-to-implement animations out of the box. Of course, Ember isn't the only tool capable of this kind of functionality. The RefreshLess module for Drupal by Wim Leers (Acquia) also uses client-side control to enable navigating across pages with minimal server requests. Unfortunately, RefreshLess can't tap into Liquid Fire or other Ember plugins.

Inversion of control in editorial experiences

In-place editing using Ember and Drupal
In CardStack Editor, an editorial interface with transitions and animations is superimposed onto the content page in a manner similar to outside-in, and the editor benefits from an in-context, in-preview experience that updates in real time.

We can apply this principle of inversion of control not only to the end user experience but also to editorial experiences. The last demos in Ed's presentation depict CardStack Editor, a fully decoupled Ember application that uses inversion of control to overlay an administrative interface to edit Drupal content, much like in-place editing.

CardStack Editor communicates with Drupal's web services in order to retrieve and manipulate content to be edited, and in this example Drupal serves solely as a central content repository. This is why the API-first initiative is so important; it enables developers to use JavaScript frameworks to build application-like experiences on top of and backed by Drupal. And with the help of SDKs like Waterwheel.js (a native JavaScript library for interacting with Drupal's REST API), Drupal can become a preferred choice for JavaScript developers.

Inversion of control as the rule or exception?

Those of you following the outside-in work might have noticed some striking similarities between outside-in and the work Ed has been doing: both use inversion of control. The primary purpose of our outside-in interfaces is to provide for an in-context editing experience in which state changes take effect live before your eyes; hence the need for inversion of control.

Thinking about the future, we have to answer the following question: does Drupal want inversion of control to be the rule or the exception? We don't have to answer that question today or tomorrow, but at some point we should.

If the answer to that question is "the rule", we should consider embracing a JavaScript framework like Ember. The constellation of tools we have in jQuery, Backbone, and the Drupal AJAX framework makes using inversion of control much harder to implement than it could be. With a JavaScript framework like Ember as a standard, implementation could accelerate by becoming considerably easier. That said, there are many other factors to consider, including the costs of developing and hosting two codebases in different languages.

In the longer term, client-side frameworks like Ember will allow us to build web applications which compete with and even exceed native applications with regard to perceived performance, built-in interactions, and a better developer experience. But these frameworks will also enrich interactions between web applications and device hardware, potentially allowing them to react to pinch-and-zoom, issue native push notifications, and even interact with lower-level devices.

In the meantime, I maintain my recommendation of (1) progressive decoupling as a means to begin exploring inversion of control and (2) a continued focus on the API-first initiative to enable application-like experiences to be developed on Drupal.


I'm hopeful Drupal can exemplify how the open web will ultimately succeed over native applications and walled gardens. Through the API-first initiative, Drupal will provide the underpinnings for web and native applications. But is it enough?

Inversion of control is an important principle that we can apply to Drupal to improve how we power our user interactions and build robust experiences for end users and editors that rival native applications. Doing so will enable us to enhance our user experience long into the future in ways that we may not even be able to think of now. I encourage the community to experiment with these ideas around inversion of control and consider how we can apply them to Drupal.

Special thanks to Preston So for contributions to this blog post and to Angie Byron, Wim Leers, Kevin O'Leary, Matt Grill, and Ted Bowman for their feedback during its writing.

September 10, 2016

Finally took the time to set up certificates for my sites. The SMTP server should now have a somewhat good TLS situation, too. But of course, whoever needed to tell me something very secret … just met with me face to face. Duh.

My colleague in crime introduced me to a tool for this, which indeed looks quite nice and fantastic.

Congratulations to the Let’s Encrypt initiative for making it really, really easy.

That certbot couldn’t parse my default-ssl in sites-available. No idea why. But it wasn’t in sites-enabled anyway. After removing that original Debian package file, it all worked fine.

They should probably also post a checksum of that “wget” thing there. When downloading and executing things on my server, I do want to quickly and easily check and double-check it all.
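Until an official checksum is published, one can at least pin a digest oneself: record the SHA-256 of the script on first download and compare on later ones. A minimal sketch of that check (the "script" content below is just the standard SHA-256 test vector, not real certbot content):

```python
import hashlib

def verify(content: bytes, expected_hex: str) -> bool:
    """Compare a downloaded file's SHA-256 digest against a recorded checksum."""
    return hashlib.sha256(content).hexdigest() == expected_hex

# Stand-in for the downloaded script; "abc" is the classic SHA-256 test vector.
script = b"abc"
published = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"

if verify(script, published):
    print("checksum OK, safe to execute")
else:
    raise SystemExit("checksum mismatch, refusing to run")
```

The same check is of course a one-liner with `sha256sum -c` on the shell; the point is simply to never pipe a fresh download straight into a root shell.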

The tool is also not super easy to use for anything that isn’t HTTPS. Especially SMTPS comes to mind.

September 09, 2016

I’m happy to release a few Clean Architecture related diagrams into the public domain (CC0 1.0).

These diagrams were created at Wikimedia Deutschland by Jan Dittrich, Charlie Kritschmar and myself for an upcoming presentation I’m doing on the Clean Architecture. There are plenty of diagrams available already if you include Onion Architecture and Hexagonal Architecture, which have essentially the same structure, though none I’ve found so far have a permissive license. Furthermore, I’m not so happy with the wording and structure of a lot of them. In particular, some bite off more than they can chew with the “dependencies point inward” rule, glossing over important restrictions which end up not being visualized at all.

These images are SVGs. Click them to go to Wikimedia Commons where you can download them.

Clean Architecture · Clean Architecture + Bounded Context · Clean Architecture + Bounded Contexts

I published the following diary on the SANS Internet Storm Center: “Collecting Users Credentials from Locked Devices”.

It’s a fact: when a device can be physically accessed, you may consider it compromised. And even if the device is properly hardened, it’s just a matter of time. The best hacks are the ones which use a feature or the way the computer is supposed to work. To illustrate this, let’s review an interesting blog post published yesterday[1]. It demonstrates how easy it is to steal credentials from a locked computer… [Read more]

[The post [SANS ISC Diary] Collecting Users Credentials from Locked Devices has been first published on /dev/random]

September 08, 2016

Update 2016/09/08: on recent Ubuntus (e.g. 16.04) you can use the graphical “Disks” application to create a LUKS+ext4 partition. The defaults are sane. However, it’s still advisable to write random data to the new disk before encrypting it. This howto is still useful for non-X setups.
Update 2012/03/18: up to date with Ubuntu 11.10.
Update 2010/04/30: Addition for the new 4KB block size drives.

If you are like me and use a laptop as your main computer, you will run out of space very soon. USB disks are a great alternative to store your photography or music collection or, simply, files you don’t use everyday. I always keep backups off-site (a USB disk) and I want to have those encrypted. This is what I did (open a shell):

  1. Install the cryptography software:
    $ sudo apt-get install cryptsetup
  2. Write some random data to your disk (we will assume it’s called /dev/sdx; type “dmesg” after inserting the disk to figure out the device name, or if it’s Windows-formatted and automounted, have a look at the output of “mount”):
    $ sudo dd if=/dev/random of=/dev/sdx bs=4K
    This will take a long time, at least a few days, as /dev/random blocks waiting for entropy (generate some I/O to feed it). A good, shorter compromise (about a day) is:
    $ sudo badblocks -c 10240 -s -w -t random -v /dev/sdx
  3. Create a new Linux partition table with cfdisk (create a new partition table if asked, choose New and assign the whole disk, using a primary partition).
    $ sudo cfdisk /dev/sdx
  4. Setup a partition using fdisk (compatible with the new 4KB block size drives):
    $ sudo fdisk -uc /dev/sdx

    Command (m for help): d
    Selected partition 1
    Command (m for help): n
    Command action
    e   extended
    p   primary partition (1-4)
    Partition number (1-4): 1
    First sector (2048-2930277167, default 2048):
    Using default value 2048
    Last sector, +sectors or +size{K,M,G} (2048-2930277167, default 2930277167):
    Using default value 2930277167
    Command (m for help): t
    Selected partition 1
    Hex code (type L to list codes): 83
    Command (m for help): p
    Disk /dev/sdx: 1500.3 GB, 1500301910016 bytes
    81 heads, 63 sectors/track, 574226 cylinders, total 2930277168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x4fabbfc4
    Device Boot      Start         End      Blocks   Id  System
    /dev/sdx1         2048  2930277167  1465137560   83  Linux
    Command (m for help): w
    The partition table has been altered!
    Calling ioctl() to re-read partition table.
    Syncing disks.
  5. Create the encrypted partition. Make the passphrase long and difficult to guess:
    $ sudo cryptsetup --verbose --verify-passphrase luksFormat /dev/sdx1 -c aes-cbc-essiv:sha256
  6. Create a filesystem (I am using ext4; the chosen device and label name is “disk5”, change it to your taste):
    $ sudo cryptsetup luksOpen /dev/sdx1 disk5
    $ sudo mkfs.ext4 /dev/mapper/disk5 -L disk5
    $ sudo cryptsetup luksClose disk5
  7. Mount it by going to “Computer” in Nautilus, double-clicking the disk and entering your passphrase. I chose not to let GNOME store the passphrase for automounting, as that would make the encryption as weak as your system password (and we know how to retrieve/change those)…


That’s it!
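For the non-X setups mentioned in the update above, Nautilus isn’t available; instead you can have the system prompt for the passphrase and mount the disk itself. A sketch of the relevant config, assuming the same /dev/sdx1 partition and “disk5” mapper name as above (using the partition’s UUID instead of /dev/sdx1 is more robust for removable disks):

```
# /etc/crypttab: <name> <device> <keyfile> <options>
disk5  /dev/sdx1  none  luks,noauto

# /etc/fstab: mount the opened mapper device
/dev/mapper/disk5  /mnt/disk5  ext4  defaults,noauto  0  2
```

With noauto set, you unlock and mount on demand (on Debian/Ubuntu) with “sudo cryptdisks_start disk5 && sudo mount /mnt/disk5”; drop noauto to be prompted at every boot instead.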


Let's say you have a configure.ac file which contains this:

PKG_CHECK_VAR([p11_moduledir], "p11-kit-1", "p11_module_path")

and that it goes with a Makefile.am which contains this:

dist_p11_module_DATA = foo.module

Then things should work fine, right? When you run make install, your modules install to the right location, and p11-kit will pick up everything the way it should.

Well, no. Not exactly. That is, it will work for the common case, but not for some other cases. You see, if you do that, then make distcheck will fail pretty spectacularly. At least if you run that as non-root (which you really really should do). The problem is that by specifying the p11_moduledir variable in that way, you hardcode it; it doesn't honour any $prefix or $DESTDIR variables that way. The result of that is that when a user installs your package by specifying --prefix=/opt/testmeout, it will still overwrite files in the system directory. Obviously, that's not desirable.

The $DESTDIR bit is especially troublesome, as it makes packaging your software for the common distributions complicated (most packaging software heavily relies on DESTDIR support to "install" your software in a staging area before turning it into an installable package).

So what's the right way then? I've been wondering about that myself, and asked for the right way to do something like that on the automake mailinglist a while back. The answer I got there wasn't entirely satisfying, and at the time I decided to take the easy way out (EXTRA_DIST the file, but don't actually install it). Recently, however, I ran against a similar problem for something else, and decided to try to do it the proper way this time around.

p11-kit, like systemd, ships pkg-config files which contain variables for the default locations to install files into. These variables' values are meant to be easy to use from scripts, so that no munging of them is required if you want to directly install to the system-wide default location. The downside of this is that, if you want to install to the system-wide default location by default from an autotools package (but still allow the user to --prefix your installation into some other place, accepting that then things won't work out of the box), you do need to do the aforementioned munging.

Luckily, that munging isn't too hard, provided whatever package you're installing for did the right thing:

PKG_CHECK_VAR([p11_moduledir], "p11-kit-1", "p11_module_path")
PKG_CHECK_VAR([p11kit_libdir], "p11-kit-1", "libdir")
if test -z "$ac_cv_env_p11_moduledir_set"; then
    p11_moduledir=$(echo $p11_moduledir|sed -e "s,$p11kit_libdir,\${libdir},g")
fi
AC_SUBST([p11_moduledir])

Whoa, what just happened?

First, we ask p11-kit-1 where it expects modules to be. After that, we ask p11-kit-1 what was used as "libdir" at installation time. Usually that should be something like /usr/lib or /usr/lib/<gnu arch triplet> or some such, but it could really be anything.

Next, we test to see whether the user set the p11_moduledir variable on the command line. If so, we don't want to munge it.

The next line looks for the value of whatever libdir was set to when p11-kit-1 was installed in the value of p11_module_path, and replaces it with the literal string ${libdir}.
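Pulled out of configure.ac, that substitution can be sketched on its own. The paths below are illustrative stand-ins (of the shape p11-kit typically reports), not values queried from a real installation:

```shell
# Illustrative values for the two pkg-config variables; in configure.ac
# they would come from PKG_CHECK_VAR instead.
p11kit_libdir="/usr/lib/x86_64-linux-gnu"
p11_moduledir="/usr/lib/x86_64-linux-gnu/pkcs11"

# Replace the literal libdir prefix with the string ${libdir}, so that
# make can later expand it against the user's $prefix and $DESTDIR.
p11_moduledir=$(echo "$p11_moduledir" | sed -e "s,$p11kit_libdir,\${libdir},g")
echo "$p11_moduledir"
```

This prints the munged value `${libdir}/pkcs11`, which is exactly the form the build system needs.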

Finally, we exit our if and AC_SUBST our value into the rest of the build system.

The resulting package will have the following semantics:

  • If someone installs p11-kit-1 and your package with the same prefix, the files will install to the correct location.
  • If someone installs both packages with a different prefix, then by default the files will not install to the correct location. This is what you'd want, however; using a non-default prefix is the only way to install something as non-root, and if root installed something into /usr, a normal user wouldn't be able to fix things.
  • If someone installs both packages with a different prefix, but sets the p11_moduledir variable to the correct location, at configure time, then things will work as expected.

I suppose it would've been easier if the PKG_CHECK_VAR macro could (optionally) do that munging by itself, but then, you can't have everything.
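For completeness, here is a hypothetical Makefile.am fragment consuming the substituted variable (the module name is made up). Because the substituted value contains ${libdir}, make install honours both --prefix and DESTDIR, so make distcheck and distro staging both work:

```makefile
# Makefile.am (sketch): install a hypothetical PKCS#11 module into the
# munged directory; ${libdir} inside p11_moduledir expands at make time.
p11dir = $(p11_moduledir)
p11_LTLIBRARIES = mymodule.la
mymodule_la_SOURCES = mymodule.c
mymodule_la_LDFLAGS = -module -avoid-version
```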

Two weeks from now, from 22 until 26 September, five big events from the open geo movement will be happening in Brussels.

On 22 September there will be two conferences. One is the FOSS4G Belgium conference, organised by the local OSGeo chapter, which focuses on software for geo. There will also be a lot of attention for open (geo) data sets provided by different government agencies in Belgium.

On the same day, in a different location, the Humanitarian OpenStreetMap Team will be gathering for their HOT Summit. After both events, forces will be joined for a mapathon.

For the next three days, the international OpenStreetMap conference, State of the Map, will be happening at the VUB.

Last but not least, on the Monday after all the events, there will be a hackday focused on OSM and OSGeo technologies.

Completely lost track? Check for an overview.

September 06, 2016

There exist millions of Open Source projects today, but many of them aren't sustainable. Scaling Open Source projects in a sustainable manner is difficult. A prime example is OpenSSL, which plays a critical role in securing the internet. Despite its importance, the entire OpenSSL development team is relatively small, consisting of 11 people, 10 of whom are volunteers. In 2014, security researchers discovered an important security bug, known as Heartbleed, that exposed millions of websites. Like OpenSSL, most Open Source projects fail to scale their resources. Notable exceptions are the Linux kernel, Debian, Apache, Drupal, and WordPress, which have foundations, multiple corporate sponsors and many contributors that help these projects scale.

We (Dries Buytaert is the founder and project lead of Drupal and co-founder and Chief Technology Officer of Acquia; Matthew Tift is a Senior Developer at Lullabot and Drupal 8 configuration system co-maintainer) believe that the Drupal community has a shared responsibility to build Drupal and that those who get more from Drupal should consider giving more. We examined commit data to help understand who develops Drupal, how much of that work is sponsored, and where that sponsorship comes from. We will illustrate that the Drupal community is far ahead in understanding how to sustain and scale the project. We will show that the Drupal project is a healthy project with a diverse community of contributors. Nevertheless, in Drupal's spirit of always striving to do better, we will also highlight areas where our community can and should do better.

Who is working on Drupal?

In the spring of 2015, after proposing ideas about giving credit and discussing various approaches at length, Drupal.org added the ability for people to attribute their work to an organization or customer in the issue queues. Maintainers of Drupal themes and modules can award issue credits to people who help resolve issues with code, comments, design, and more.

Example issue credit on Drupal.org

A screenshot of an issue comment on Drupal.org. You can see that jamadar worked on this patch as a volunteer, but also as part of his day job working for TATA Consultancy Services on behalf of their customer, Pfizer. Drupal.org's credit system captures all the issue activity on Drupal.org. This is primarily code contributions, but also includes some (but not all) of the work on design, translations, documentation, etc. It is important to note that contributing in the issues on Drupal.org is not the only way to contribute. There are other activities—for instance, sponsoring events, promoting Drupal, providing help and mentoring—important to the long-term health of the Drupal project. These activities are not currently captured by the credit system. Additionally, we acknowledge that parts of Drupal are developed on GitHub and that credits might get lost when those contributions are moved to Drupal.org. For the purposes of this post, however, we looked only at the issue contributions captured by the credit system on Drupal.org.

What we learned is that in the 12-month period from July 1, 2015 to June 30, 2016 there were 32,711 issue credits—both to Drupal core as well as all the contributed themes and modules—attributed to 5,196 different individual contributors and 659 different organizations.

Despite the large number of individual contributors, a relatively small number do the majority of the work. Approximately 51% of the contributors involved got just one credit. The top 30 contributors (the top 0.5% of contributors) account for over 21% of the total credits, indicating that these individuals put an incredible amount of time and effort into developing Drupal and its contributed modules:

[Graph: issue credits for the top 30 contributors; for example, Wim Leers ranks 5th with 382 credits and drunken monkey 10th with 248.]

How much of the work is sponsored?

As mentioned above, from July 1, 2015 to June 30, 2016, 659 organizations contributed code to Drupal.org. Drupal is used by more than one million websites. The vast majority of the organizations behind these Drupal websites never participate in the development of Drupal; they use the software as it is and do not feel the need to help drive its development.

Technically, Drupal started out as a 100% volunteer-driven project. But nowadays, the data suggests that the majority of the code on Drupal.org is sponsored by organizations in Drupal's ecosystem. For example, of the 32,711 commit credits we studied, 69% of the credited work is "sponsored".

We then looked at the distribution of how many of the credits are given to volunteers versus given to individuals doing "sponsored work" (i.e. contributing as part of their paid job):

Contributions top range

Looking at the top 100 contributors, for example, 23% of their credits are the result of contributing as volunteers and 56% of their credits are attributed to a corporate sponsor. The remainder, roughly 21% of the credits, are not attributed. Attribution is optional so this means it could either be volunteer-driven, sponsored, or both.

As can be seen on the graph, the ratio of volunteer versus sponsored credits doesn't meaningfully change as we look beyond the top 100—the only thing that changes is that more credits are not attributed. This might be explained by the fact that occasional contributors might not be aware of or understand the credit system, or could not be bothered to set up organizational profiles for their employer or customers.

As shown in jamadar's screenshot above, a credit can be marked as both volunteer and sponsored at the same time. This could be the case when someone does the minimum required work to satisfy the customer's need, but uses his or her spare time to add extra functionality. We can also look at the number of credits that are exclusively volunteer credits. Of the 7,874 credits that were marked volunteer, 43% (3,376 credits) only had the volunteer box checked and 57% (4,498 credits) were also partially sponsored. These 3,376 credits are one of our best metrics for measuring volunteer-only contributions. This suggests that only 10% of the 32,711 commit credits we examined were contributed exclusively by volunteers. This number stands in stark contrast to the 12,888 credits that were "purely sponsored", which account for 39% of the total credits. In other words, there were roughly four times as many "purely sponsored" credits as "purely volunteer" credits.
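As a sanity check on the arithmetic, those percentages can be recomputed from the raw counts given in the text:

```shell
# Figures from the post: 32,711 total credits, 3,376 exclusively-volunteer
# credits and 12,888 exclusively-sponsored credits.
awk 'BEGIN {
    total = 32711
    printf "volunteer-only: %.0f%%\n", 100 * 3376 / total
    printf "sponsored-only: %.0f%%\n", 100 * 12888 / total
}'
```

This prints "volunteer-only: 10%" and "sponsored-only: 39%", matching the rounded figures quoted above.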

When we looked at the 5,196 users, rather than credits, we found somewhat different results. A similar percentage of all users had exclusively volunteer credits: 14% (741 users). But the percentage of users with exclusively sponsored credits is only 50% higher: 21% (1077 users). Thus, when we look at the data this way, we find that users who only do sponsored work tend to contribute quite a bit more than users who only do volunteer work.

None of these methodologies are perfect, but they all point to a conclusion that most of the work on Drupal is sponsored. At the same time, the data shows that volunteer contribution remains very important to Drupal. We believe there is a healthy ratio between sponsored and volunteer contributions.

Who is sponsoring the work?

Because we established that most of the work on Drupal is sponsored, we know it is important to track and study what organizations contribute to Drupal. Despite 659 different organizations contributing to Drupal, approximately 50% of them got 4 credits or less. The top 30 organizations (roughly top 5%) account for about 29% of the total credits, which suggests that the top 30 companies play a crucial role in the health of the Drupal project. The graph below shows the top 30 organizations and the number of credits they received between July 1, 2015 and June 30, 2016:

Contributions top organizations

While not immediately obvious from the graph above, different types of companies are active in Drupal's ecosystem, and we propose the following categorization to discuss it.

Category Description
Traditional Drupal businesses Small-to-medium-sized professional services companies that make money primarily using Drupal. They typically employ less than 100 employees, and because they specialize in Drupal, many of these professional services companies contribute frequently and are a huge part of our community. Examples are Lullabot (shown on graph) or Chapter Three (shown on graph).
Digital marketing agencies Larger full-service agencies that have marketing led practices using a variety of tools, typically including Drupal, Adobe Experience Manager, Sitecore, WordPress, etc. They are typically larger, with the larger agencies employing thousands of people. Examples are Sapient (shown on graph) or AKQA.
System integrators Larger companies that specialize in bringing together different technologies into one solution. Example system agencies are Accenture, TATA Consultancy Services, Capgemini or CI&T.
Technology and infrastructure companies Examples are Acquia (shown on graph), Lingotek (shown on graph), BlackMesh, RackSpace, Pantheon or Platform.sh.
End-users Examples are Pfizer (shown on graph), Examiner.com (shown on graph) or NBC Universal.

Most of the top 30 sponsors are traditional Drupal companies. Sapient (120 credits) is the only digital marketing agency showing up in the top 30. No system integrator shows up in the top 30. The first system integrator is CI&T, which ranked 31st with 102 credits. As far as system integrators are concerned, CI&T is a smaller player with between 1,000 and 5,000 employees. Other system integrators with credits are Capgemini (43 credits), Globant (26 credits), and TATA Consultancy Services (7 credits). We didn't see any code contributions from Accenture, Wipro or IBM Global Services. We expect these will come, as most of them are building out Drupal practices. For example, we know that IBM Global Services already has over 100 people doing Drupal work.

Contributions by organization type

When we look beyond the top 30 sponsors, we see that roughly 82% of the code contribution on Drupal.org comes from the traditional Drupal businesses. About 13% of the contributions comes from infrastructure and software companies, though that category is mostly dominated by one company, Acquia. This means that the technology and infrastructure companies, digital marketing agencies, system integrators and end-users are not meaningfully contributing code to Drupal.org today. In an ideal world, the pie chart above would be sliced in equal sized parts.

How can we explain that imbalance? We believe the two biggest reasons are: (1) Drupal's strategic importance and (2) the level of maturity with Drupal and Open Source. Several of the traditional Drupal agencies have been involved with Drupal for 10 years and almost entirely depend on Drupal. Given both their expertise and dependence on Drupal, they are most likely to look after Drupal's development and well-being. These organizations are typically recognized as Drupal experts and sought out by organizations that want to build a Drupal website. Contrast this with most of the digital marketing agencies and system integrators, who have the size to work with a diversified portfolio of content management platforms and are just getting started with Drupal and Open Source. They deliver digital marketing solutions and aren't necessarily sought out for their Drupal expertise. As their Drupal practices grow in size and importance, this could change, and when it does, we expect them to contribute more. Right now many of the digital marketing agencies and system integrators have little or no experience with Open Source, so it is important that we motivate them to contribute and then teach them how to contribute.

There are two main business reasons for organizations to contribute: (1) it improves their ability to sell and win deals and (2) it improves their ability to hire. Companies that contribute to Drupal tend to promote their contributions in RFPs and sales pitches to win more deals. Contributing to Drupal also results in being recognized as a great place to work for Drupal experts.

We should also note that many organizations in the Drupal community contribute for reasons that would not seem to be explicitly economically motivated. More than 100 credits were sponsored by colleges or universities, such as the University of Waterloo (45 credits). More than 50 credits came from community groups, such as the Drupal Bangalore Community and the Drupal Ukraine Community. Other nonprofits and government organizations that appeared in our data include the Drupal Association (166 credits), the National Virtual Library of India (25 credits), the Center for Research Libraries (20 credits), and the Welsh Government (9 credits).

Infrastructure and software companies

Infrastructure and software companies play a different role in our community. These companies are less reliant on professional services (building Drupal websites) and primarily make money from selling subscription based products.

Acquia, Pantheon and Platform.sh are venture-backed Platform-as-a-Service companies born out of the Drupal community. Rackspace and AWS are public companies hosting thousands of Drupal sites each. Lingotek offers cloud-based translation management software for Drupal.

Contributions by technology companies

The graph above suggests that Pantheon and Platform.sh have barely contributed code on Drupal.org during the past year. (Platform.sh only became an independent company 6 months ago, after splitting off from Commerce Guys.) The chart also does not reflect sponsored code contributions on GitHub (such as drush), Drupal event sponsorship, or the wide variety of value that these companies add to Drupal and other Open Source communities.

Consequently, these data show that the Drupal community needs to do a better job of enticing infrastructure and software companies to contribute code to Drupal.org. The Drupal community has a long tradition of encouraging organizations to share code on Drupal.org rather than keep it behind firewalls. While the spirit of the Drupal project cannot be reduced to any single ideology (not every organization can or will share their code), we would like to see organizations continue to prioritize collaboration over individual ownership. Our aim is not to criticize those who do not contribute, but rather to help foster an environment worthy of contribution.

End users

We saw two end-users in the top 30 corporate sponsors: Pfizer (158 credits) and Examiner.com (132 credits). Other notable end-users that are actively giving back are Workday (52 credits), NBC Universal (40 credits), the University of Waterloo (45 credits) and CERN (33 credits). The end-users that tend to contribute to Drupal use Drupal for a key part of their business and often have an internal team of Drupal developers.

Given that there are hundreds of thousands of Drupal end-users, we would like to see more end-users among the top 30 sponsors. We recognize that a lot of digital agencies don't want, or are not legally allowed, to attribute their customers. We hope that will change as Open Source adoption continues to grow.

Given the vast number of Drupal users, we believe encouraging end-users to contribute could be a big opportunity. Being credited on Drupal.org gives them visibility in the Drupal community and recognizes them as a great place for Open Source developers to work.

The uneasy alliance with corporate contributions

As mentioned above, when community-driven Open Source projects grow, there is a growing need for organizations to help drive their development. This almost always creates an uneasy alliance between volunteers and corporations.

This theory played out in the Linux community well before it played out in the Drupal community. The Linux project is 25 years old now and has seen a steady increase in the number of corporate contributors for roughly 20 years. While Linux companies like Red Hat and SUSE rank highly on the contribution list, so do non-Linux-centric companies such as Samsung, Intel, Oracle and Google. The major theme in this story is that all of these corporate contributors were using Linux as an integral part of their business.

The 659 organizations that contribute to Drupal (a figure that includes corporations) is roughly three times the number of organizations that sponsor development of the Linux kernel, "one of the largest cooperative software projects ever attempted". In fairness, Linux has a different ecosystem than Drupal. The Linux business ecosystem has various large organizations (Red Hat, Google, Intel, IBM and SUSE) for whom Linux is very strategic. As a result, many of them employ dozens of full-time Linux contributors and invest millions of dollars in Linux each year.

In the Drupal community, Acquia has had people dedicated full-time to Drupal since nine years ago, when it hired Gábor Hojtsy to contribute to Drupal core full-time. Today, Acquia has about 10 developers contributing to Drupal full-time. They work on core, contributed modules, security, user experience, performance, best practices, and more. Their work has benefited untold numbers of people around the world, most of whom are not Acquia customers.

In response to Acquia’s high level of participation in the Drupal project, as well as to the number of Acquia employees that hold leadership positions, some members of the Drupal community have suggested that Acquia wields its influence and power to control the future of Drupal for its own commercial benefit. But neither of us believes that Acquia should contribute less. Instead, we would like to see more companies provide more leadership to Drupal and meaningfully contribute on Drupal.org.

Who is sponsoring the top 30 contributors?

Rank Username Issues Volunteer Sponsored Not specified Sponsors
1 dawehner 560 84.1% 77.7% 9.5% Drupal Association (182), Chapter Three (179), Tag1 Consulting (160), Cando (6), Acquia (4), Comm-press (1)
2 DamienMcKenna 448 6.9% 76.3% 19.4% Mediacurrent (342)
3 alexpott 409 0.2% 97.8% 2.2% Chapter Three (400)
4 Berdir 383 0.0% 95.3% 4.7% MD Systems (365), Acquia (9)
5 Wim Leers 382 31.7% 98.2% 1.8% Acquia (375)
6 jhodgdon 381 5.2% 3.4% 91.3% Drupal Association (13), Poplar ProductivityWare (13)
7 joelpittet 294 23.8% 1.4% 76.2% Drupal Association (4)
8 heykarthikwithu 293 99.3% 100.0% 0.0% Valuebound (293), Drupal Bangalore Community (3)
9 mglaman 292 9.6% 96.9% 0.7% Commerce Guys (257), Bluehorn Digital (14),, Inc. (12), LivePerson, Inc (11), Bluespark (5), DPCI (3), Thinkbean, LLC (3), Digital Bridge Solutions (2), Matsmart (1)
10 drunken monkey 248 75.4% 55.6% 2.0% Acquia (72), StudentFirst (44), epiqo (12), Vizala (9), Sunlime IT Services GmbH (1)
11 Sam152 237 75.9% 89.5% 10.1% PreviousNext (210), Code Drop (2)
12 borisson_ 207 62.8% 36.2% 15.9% Acquia (67), Intracto digital agency (8)
13 benjy 206 0.0% 98.1% 1.9% PreviousNext (168), Code Drop (34)
14 edurenye 184 0.0% 100.0% 0.0% MD Systems (184)
15 catch 180 3.3% 44.4% 54.4% Third and Grove (44), Tag1 Consulting (36), Drupal Association (4)
16 slashrsm 179 12.8% 96.6% 2.8% Examiner.com (89), MD Systems (84), Acquia (18), Studio Matris (1)
17 phenaproxima 177 0.0% 94.4% 5.6% Acquia (167)
18 mbovan 174 7.5% 100.0% 0.0% MD Systems (118), ACTO Team (43), Google Summer of Code (13)
19 tim.plunkett 168 14.3% 89.9% 10.1% Acquia (151)
20 rakesh.gectcr 163 100.0% 100.0% 0.0% Valuebound (138), National Virtual Library of India (NVLI) (25)
21 martin107 163 4.9% 0.0% 95.1%
22 dsnopek 152 0.7% 0.0% 99.3%
23 mikeryan 150 0.0% 89.3% 10.7% Acquia (112), Virtuoso Performance (22), Drupalize.Me (4), North Studio (4)
24 jhedstrom 149 0.0% 83.2% 16.8% Phase2 (124), Workday, Inc. (36), Memorial Sloan Kettering Cancer Center (4)
25 xjm 147 0.0% 81.0% 19.0% Acquia (119)
26 hussainweb 147 2.0% 98.6% 1.4% Axelerant (145)
27 stefan.r 146 0.7% 0.7% 98.6% Drupal Association (1)
28 bojanz 145 2.1% 83.4% 15.2% Commerce Guys (121), Bluespark (2)
29 penyaskito 141 6.4% 95.0% 3.5% Lingotek (129), Cocomore AG (5)
30 larowlan 135 34.1% 63.0% 16.3% PreviousNext (85), Department of Justice & Regulation, Victoria (14), amaysim Australia Ltd. (1), University of Adelaide (1)

We observe that the top 30 contributors are sponsored by 45 organizations. This kind of diversity is aligned with our desire not to see Drupal controlled by a single organization. The top 30 contributors and the 45 organizations are from many different parts of the world and work with customers large and small. We could still benefit from more diversity, though. The top 30 lacks digital marketing agencies, large system integrators and end-users, all of whom could contribute meaningfully to making Drupal better for them and others.

Evolving the credit system

The credit system gives us quantifiable data about where our community's contributions come from, but that data is not perfect. Here are a few suggested improvements:

  1. We need to find ways to recognize non-code contributions as well as code contributions outside of Drupal.org (e.g. on GitHub). Lots of people and organizations spend hundreds of hours putting together local events, writing documentation, translating Drupal, mentoring new contributors, and more—and none of that gets captured by the credit system.
  2. We'd benefit by finding a way to account for the complexity and quality of contributions; one person might have worked several weeks for just one credit, while another person might have gotten a credit for 30 minutes of work. We could, for example, consider the issue credit data in conjunction with Git commit data regarding insertions, deletions, and files changed.
  3. We could try to leverage the credit system to encourage more companies, especially those that do not contribute today, to participate in large-scale initiatives. Dries presented some ideas two years ago in his DrupalCon Amsterdam keynote and Matthew has suggested other ideas, but we are open to more suggestions on how we might bring more contributors into the fold using the credit system.
  4. We could segment out organization profiles between end users and different kinds of service providers. Doing so would make it easier to see who the top contributors are in each segment and perhaps foster more healthy competition among peers. In turn, the community could learn about the peculiar motivations within each segment.

Like Drupal the software, the credit system on Drupal.org is a tool that can evolve, but that ultimately will only be useful when the community uses it, understands its shortcomings, and suggests constructive improvements. In highlighting the organizations that sponsor work on Drupal.org, we hope to provoke responses that help evolve the credit system into something that incentivizes businesses to sponsor more work and that allows more people the opportunity to participate in our community, learn from others, teach newcomers, and make positive contributions. We view Drupal as a productive force for change and we wish to use the credit system to highlight (at least some of) the work of our diverse community of volunteers, companies, nonprofits, governments, schools, universities, individuals, and other groups.


Our data shows that Drupal is a vibrant and diverse community, with thousands of contributors, that is constantly evolving and improving the software. While here we have examined issue credits mostly through the lens of sponsorship, in future analyses we plan to consider the same issue credits in conjunction with other publicly-disclosed Drupal user data, such as gender identification, geography, seasonal participation, mentorship, and event attendance.

Our analysis of the credit data concludes that most of the contributions to Drupal are sponsored. At the same time, the data shows that volunteer contribution remains very important to Drupal.

As a community, we need to understand that a healthy Open Source ecosystem is a diverse ecosystem that includes more than traditional Drupal agencies. The traditional Drupal agencies and Acquia contribute the most but we don't see a lot of contribution from the larger digital marketing agencies, system integrators, technology companies, or end-users of Drupal—we believe that might come as these organizations build out their Drupal practices and Drupal becomes more strategic for them.

To grow and sustain Drupal, we should support those that contribute to Drupal, and find ways to get those that are not contributing involved in our community. We invite you to help us figure out how we can continue to strengthen our ecosystem.

We hope to repeat this work in 1 or 2 years' time so we can track our evolution. Special thanks to Tim Lehnen (Drupal Association) for providing us the credit system data and supporting us during our research.

I published the following diary on the SANS Internet Storm Center website: “Malware Delivered via ‘.pub’ Files”.

While searching for new scenarios to deliver their malware[1][2], attackers launched a campaign to deliver malicious code embedded in Microsoft Publisher[3] (.pub) files. Publisher is less well known than Word or Excel. This desktop publishing tool was released in 1991 (version 1.0) but it is still alive and included in the newest Office suite. It is not surprising that it also supports macros… [Read more]

[The post [SANS ISC Diary] Malware Delivered via ‘.pub’ Files has been first published on /dev/random]

September 05, 2016


There was an interesting discussion on #perl6 (freenode) about the use of rakudobrew as a way for end-users to install Rakudo Perl 6 (see how-to-get-rakudo).

rakudobrew, inspired by perlbrew, is a way to manage (and compile) different versions of rakudo. nine argued that it's primarily meant as a tool for rakudo developers: because of the increased complexity (e.g. when dealing with modules), it's not targeted at end-users. While being a big fan of rakudobrew, I agree with nine.

The problem is that there are no Linux binaries on the download page (there are for MacOS and Windows), so users are stuck with building from source (it can be fun, but after a while it isn’t).

rakudo-pkg is a GitHub project to help system administrators (and hopefully Rakudo release managers) easily provide native Linux packages for end users. So far, I have added support for creating Ubuntu 16.04 LTS amd64 and i386 packages and CentOS 7 amd64 packages. These are the systems I use the most. Feel free to add support for more distributions.

rakudo-pkg uses Docker. The use of containers means that there is no longer any need to chase dependencies, and no risk of installing files all over your system. It also means that as long as the build machine is a 64-bit Linux OS, you can build packages for *all* supported distributions.

Within the containers, rakudo-pkg uses fpm. The created packages are minimalistic by design: they don't run any pre/post scripts and all the files are installed in /opt/rakudo. You'll have to add /opt/rakudo/bin to your PATH. I also added two additional scripts to install Perl 6 module managers (both have similar functionalities).
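Since everything lands under /opt/rakudo, making the tools visible to your shell is a one-liner (add it to your shell profile to make it permanent):

```shell
# Prepend the packaged Rakudo binaries to the search path.
export PATH="/opt/rakudo/bin:$PATH"
```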

If you just want to create native packages, just go to the bin directory and execute the command. In this case there is no need to build the Docker images locally: you'll automatically retrieve the image from the rakudo namespace on Docker Hub. Of course, if you want to create the container images locally, you can use the supplied dockerfiles in the docker directory. Have a look at the README for more information.

You can find examples of packages created with rakudo-pkg here (they need to be moved to a more definitive URL).

Have fun.

Filed under: Uncategorized Tagged: deb, Perl, perl6, pkg, rakudo, rpm

This Thursday, 15 September 2016 at 7 p.m., the 51st Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: Exploiting data from Wikipedia

Theme: Internet

Audience: general public | companies | students | developers

Speaker: Robert Viseur (CETIC)

Venue: Université de Mons, Campus Plaine de Nimy, avenue Maistriau, Grands Amphithéâtres, Auditoire Curie (see this map on the UMONS website, or the OSM map).

Participation is free and only requires your registration, preferably in advance, or at the door. Please indicate your intention by registering via the page. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also benefit from the support of our partners: CETIC, Normation, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to consult the agenda and subscribe to the mailing list to receive all the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month, and are organised on the premises of, and in collaboration with, the Mons universities and colleges involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: Wikipedia is a collaborative project of reference. Published under a free license, it is fed by a very large number of contributors. It is also a formidable reservoir of data, usable for example in applications based on semantic web technologies. This talk will present a practical case of extracting and integrating biographical data. It will highlight the alternatives for extracting data from the encyclopedia, as well as the main technical difficulties. The results of an evaluation of the quality of the data extracted from Wikipedia, compared against reference data, will then be discussed. Finally, Wikipedia's recent evolutions in data management will be presented.

September 04, 2016

Yesterday evening I released Autoptimize 2.1, and the first Power-Up, which manages critical CSS, has been made available as an optional service. This short video explains some of the logic behind the Autoptimize Critical CSS Power-Up:

Watch this video on YouTube.

But let’s not forget about Autoptimize 2.1! The new features include:

  • Autoptimize now appears in the admin toolbar, with an easy view on cache size and the possibility to purge the cache (thanks to Pablo Custo)
  • A “More Optimization” tab is shown with info about optimization tools and services.
  • The settings screen now accepts a protocol-relative URL for the CDN base URL
  • The admin GUI was updated and made responsive
  • If the cache size becomes too big, a mail is sent to the site admin
  • Power users can enable Autoptimize to pre-gzip the autoptimized files with a filter
  • New (smarter) defaults for JS and CSS optimization

Although excluding jQuery from autoptimization by default might seem counter-intuitive, the “smarter” defaults should allow more Autoptimize installs to work out-of-the-box (including on sites run by people who might not be inclined to troubleshoot or reconfigure Autoptimize in the first place).

And thanks to the release I now have a better idea of the number of active installs (which lists as 100,000+); 2.1 was downloaded 3239 times yesterday evening and it is listed as running on 1.8% of sites. Simple math tells us that Autoptimize is currently active on approximately 180,000 WordPress websites. Let's aim for 200K by the end of 2016! :-)

September 02, 2016

September 01, 2016

In the East there is a shark which is larger than all other fish. It changes into a bird whose wings are like clouds filling the sky. When this bird moves across the land, it brings a message from Corporate Headquarters. This message it drops into the midst of the programmers, like a seagull making its mark upon the beach. Then the bird mounts on the wind and, with the blue sky at its back, returns home.


In February 2015, Pierre Valade, co-founder of the Sunrise calendar, asked me to collaborate with him on a text exploring the possible future of our use of an electronic calendar. Sunrise was later acquired by Microsoft, and the Sunrise calendar was unfortunately shut down for good on September 1, 2016. With Pierre's agreement, I decided to make this text public in order to celebrate, one last time, Sunrise and what it could have become: the calendar of the future!


— See you tonight, darling!

I shout from the hallway while putting on my jacket. My husband's reply reaches me, distant.

— See you tonight! I saw you weren't coming home too late. I'll take care of cooking us a nice meal.

With a faint smile, I step out, closing the door behind me. The car is just pulling up. A slight vibration on my wrist confirms that I should get in. The door opens and a neutral voice asks me if I am ready to leave.

— Yes, confirm immediate departure, I announce mechanically.

I stretch and settle comfortably into the seat. The estimated travel time is shown on a screen: 1h15. Today's interview will take place in the heart of New York, in the lounge of a grand hotel. A hotel that neither Pierre Valade, my interviewee, nor I know. But which, according to Sunrise's algorithms, is the most suitable for our meeting. I must say I merely sent a meeting request with a few explanations. Pierre accepted. Our calendars did the rest.

No sooner have I taken my tablet out of my bag than it suggests reading material and videos matching the length of the trip. Handy, but today I only want to daydream, watch the scenery go by, meditate. I feel particularly zen.

In just a few years, I have lost that reflex of permanent stress our conventions used to impose on us. Fear of being late, fear of missing a train or a plane, fear of not having time. We were so obsessed with the fear of wasting time that we spent most of it organizing our calendars and arriving early to our appointments. Our society was time-poor, and those who didn't make the most of it were seen as lazy, as time-wasters.

The use of relative time has, surprisingly, provided a solution to this paradox. Nowadays, I rarely know the absolute time. I only know how much time I have left before going somewhere. I don't even bother choosing means of transport anymore: I simply invite my husband to a romantic weekend in Paris, I accept a travel offer if it fits my budget and, after packing our bags following a reminder judiciously placed in my schedule by Sunrise, we board the car that takes us to the airport.

Coming gently to a stop, the car pulls me out of my reverie. A glance at my phone tells me that Pierre has also just arrived. I spot him at the back of the lobby.

— Hello Pierre!

— Hello, pleased to meet you.

After the customary introductions, I dive straight into the interview.

— Pierre, how did you come up with the idea of founding Sunrise?

— Being terribly absent-minded, I simply needed a very good calendar.

— What was unsatisfactory about the existing solutions? Most companies were quite happy with their Exchange calendar.

— Microsoft Exchange, like most tools of that era, sought to organize the problem, not to solve it. The goal of Exchange was to manage a calendar. The goal of Sunrise is to let you enjoy your time. That's very different.

— Concretely, how did Sunrise stand out? What was the major innovation?

— Sunrise is not a single, sudden invention. It is a set of continuous innovations, of perpetual improvements. Google, Microsoft, Facebook and Apple weren't really interested in the problem, so there was a place to be taken. Sunrise was born, and we gained experience, we became specialists, experts. We were the only ones!

— And what is your role in this adventure?

— I am something of a conductor. I have a precise vision and I try to recruit the people who will be able to take that vision from dream to reality.

— Could you give me a concrete example of your vision?

— Well, I was convinced that optimizing time was a relatively simple problem for a computer, while the existing tools were particularly laborious to use. So Sunrise focused on design and user interaction. No need for a smart algorithm if nobody can use your app!

— Indeed. And yet you introduced a lot of intelligence later on…

He nods before glancing mechanically at his phone.

— Say, I see in my calendar that a musical parade is passing two streets from here. Would you like to go see it?

— I had planned to get started on writing up your interview, but I can do that during my car ride back.

— It will pick you up over there. Besides, a musical parade in the streets of New York is an opportunity not to be missed. Might as well enjoy it!

Taking me by the arm, he leads me toward the exit. I resist for form's sake.

— By the way, what time is it? he asks me mysteriously.

— No idea! I reply, surprised.

— Perfect! Ignoring the time is the best way to enjoy the present moment.

— After all, as long as I'm home for the meal my husband is making me…

— You use Sunrise? Then there's no risk! he says with a knowing wink.

In the distance, I can already hear the first echoes of the brass band.


Photo by Clément Cousin. Also available in English.

Thank you for taking the time to read this pay-what-you-want post. Feel free to support me with a few milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.


A year ago I made the upgrade to a Pebble Time Steel. I have really fallen in love with the Pebble smartwatch; Android Wear and the Apple Watch were no valid candidates due to their shortcomings in battery life and user interfaces. The PTS finally upgrades the Pebble experience with a color screen (though readability indoors is disappointing), and the smartwatch really looks like a watch now (the Pebble OG looked like a plastic toy). This thing survives nine days on a single charge, which is one of the main advantages of Pebble hardware. I could never support a daily charge cycle on a smartwatch, which makes the Pebble Round (2 days on a charge) a no-go.

However, the Pebble Time still has its drawbacks: it carries a large bezel (which is now addressed in the Pebble Time 2), and the screen resolution remains far below competing Android/Apple devices.

The largest surprise was the Pebble Timeline in firmware 3.x: it puts your whole daily agenda at a glance behind a single button press, and this has become one of the most pleasant features of the smartwatch.


My old PC has survived for 10.5 years, mostly thanks to Linux and its low resource requirements. That is very impressive, but the box started to show its age: boot times of up to two minutes, and a hard drive that performed sub-par. Time for a new machine: an Intel i7 Skylake, SSD + 3TB HDD and 16GB of DDR4 RAM. Fast and furious.

I made the switch from Debian to Fedora as well, and I must admit that I'm quite charmed by Fedora. Stable, yet on the bleeding-edge side (my previous box was Debian Stable based, so your definition of "bleeding edge" may vary). Anyway, Fedora installed without a hitch, and the subsequent upgrade to Fedora 24 was one of the fastest PC upgrades I have experienced.

The machine is called Nostromo, after the spaceship in Alien. I guess I ran out of pronounceable Tolkien names, and didn't find any suitable Game of Thrones based names. Science fiction to the rescue.

A feature that has been requested quite a bit – transient variables – has landed in Beta3 of Activiti v6, which we released yesterday. In this post, I'll show you an example of how transient variables can be used to cover some advanced use cases that weren't possible (or optimal) before. So far, all variables […]

I published the following diary on “ (Ab)used As Anti-Analysis Technique“.

A long time ago I wrote a diary[1] about malware samples which use online geolocalization services. Such services are used to target only specific victims. If the malware detects that it is executed from a specific area, it just stops. This has been seen in Russian malware which did not infect people located in the same area … [Read more]


[The post [SANS ISC Diary] (Ab)used As Anti-Analysis Technique has been first published on /dev/random]

August 31, 2016

Like last year, FunKey Hotel is offering special rates for FOSDEM 2017 attendees. They offer our attendees a bed for 28 EUR per night. Their offer is valid for Friday, Saturday and Sunday night. Please check our accommodation page for more information.

I’m happy to announce the immediate availability of Maps 3.8. This feature release brings several enhancements and new features.

  • Added Leaflet marker clustering (by Peter Grassberger)
    • markercluster: Enables clustering; multiple markers are merged into one marker.
    • clustermaxzoom: The maximum zoom level where clusters may exist.
    • clusterzoomonclick: Whether clicking on a cluster zooms into it.
    • clustermaxradius: The maximum radius that a cluster will cover.
    • clusterspiderfy: At the lowest zoom level, markers are separated so you can see them all.
  • Added Leaflet fullscreen control (by Peter Grassberger)
  • Added OSM Nominatim Geocoder (by Peter Grassberger)
  • Upgraded Leaflet library to its latest version (1.0.0-r3) (by Peter Grassberger)
  • Made removal of marker clusters more robust. (by Peter Grassberger)
  • Unified system messages for several services (by Karsten Hoffmeyer)
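For wiki authors wanting to try the new clustering options: a hypothetical #display_map invocation could look as follows (the coordinates, titles and parameter values are invented for the example; check the Maps documentation for the exact syntax applicable to your setup):

```
{{#display_map:
  52.52,13.405~Berlin;
  48.86,2.35~Paris
  |markercluster=on
  |clustermaxzoom=12
}}
```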

Leaflet marker clusters

Google Maps API key

Due to changes to Google Maps, an API key now needs to be set. Upgrading to the latest version of Maps will not break the maps on your wiki, as the change really is on Google's end: if they are still working, you can keep running an older version of Maps. Of course it's safer to upgrade and set the API key anyway. In case you have a new wiki, or the maps broke for some reason, you will need to get Maps 3.8 or later and set the API key. See the installation configuration instructions for more information.

  • Added Google Maps API key egMapsGMaps3ApiKey setting (by Peter Grassberger)
  • Added Google Maps API version number egMapsGMaps3ApiVersion setting (by Peter Grassberger)


Since this is a feature release, there are no breaking changes, and you can simply run composer update, or replace the old files with the new ones.

Beware that as of Maps 3.6, you need MediaWiki 1.23 or later, and PHP 5.5 or later. If you choose to remain with an older version of PHP or MediaWiki, use Maps 3.5. Maps works with the latest stable versions of both MediaWiki and PHP, which are the versions I recommend you use.

August 29, 2016

Getting useful info from a log file should be a piece of cake… if the file is properly formatted! Usually, one event is written on a single line, with useful info delimited by a separator or extractable using regular expressions. But it's not always the case. Welcome to the log hell…

Sometimes, the log file contains the output of a script or a dump of another file and is split into multiple lines (think about a Java error, known to be extremely verbose). If the application does a good job, the dump can be identified by “tags” at the beginning and end of the interesting data. Here is a quick tip to extract them from the UNIX command line. Very useful to parse them or just send the output via email.

I’m a big fan of Security Onion. Amongst multiple tools to analyze your network traffic, it helps me to gather intelligence about new IDS signatures. One tool used by this distribution is Pulled Pork, which keeps a Snort / Suricata IDS rule base up-to-date. Executed daily, it generates a log file with the new and removed rules. Every day, the following data are appended to the file:

-=Begin Changes Logged for Mon Aug 29 07:26:33 2016 GMT=-
New Rules
    BROWSER-CHROME Google Chrome FileSystemObject clsid access (1:21446)

Deleted Rules
    BROWSER-CHROME Google Chrome FileSystemObject clsid access (1:0)

Set Policy: Disabled

Rule Totals

No IP Blacklist Changes

-=End Changes Logged for Mon Aug 29 07:26:33 2016 GMT=-

I like to get a notification with the daily added / removed IDS rules on my Security Onion box (I’m using the Emerging Threats feed). The power of the command line can help us to extract useful information from the log above. The goal is to search for the “Begin changes” line, the “End Changes” line and extract what’s in the middle. How? Thanks to the wonderful ‘awk‘ tool:

awk "
  /Begin Changes Logged for `date +'%a %b %d'`/ {echo=1}
  /End Changes Logged for `date +'%a %b %d'`/ {echo=0}
  echo
  " /var/log/nsm/sid_changes.log \
| mail -s "[SecurityOnion] Suricata Rules Update"

How does it work? awk searches for the starting header line (with the current date properly formatted). Once found, it sets the variable ‘echo’ to 1. As long as ‘echo’ is non-zero, every line read from the file is printed. A second search is performed for the ending header; when it matches, ‘echo’ is reset to 0 before the print test, so the ending line and everything after it are no longer printed.
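A quick way to sanity-check the begin/end trick is to run it against a small sample file; the file name and its contents below are invented for the demo:

```shell
# Create a small sample log to experiment with (content is illustrative)
cat > /tmp/sid_changes.sample <<'EOF'
unrelated preamble
-=Begin Changes Logged for Mon Aug 29 07:26:33 2016 GMT=-
New Rules
    BROWSER-CHROME Google Chrome FileSystemObject clsid access (1:21446)
-=End Changes Logged for Mon Aug 29 07:26:33 2016 GMT=-
unrelated trailer
EOF

# Print from the Begin marker (included) up to the End marker (excluded):
# the bare 'echo' pattern prints the current line whenever the flag is set
awk '
  /Begin Changes Logged/ {echo=1}
  /End Changes Logged/   {echo=0}
  echo
' /tmp/sid_changes.sample
```

This prints the Begin line plus the two rule lines, and suppresses everything outside the markers, including the End line itself.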

[The post Getting Useful Info From the Log Hell with Awk has been first published on /dev/random]

As a follow-up to Centrally managing your Let’s Encrypt certificates using the dns-01 challenge, in this article I’ll post a follow-up for Puppet users on how to distribute those certificates easily to your servers.

I’ve written a small Puppet module which installs your certificates in /etc/letsencrypt/live/<>, which is where the official client places them as well. This way you can easily use it as a drop-in replacement without having to change your daemon configuration files. The directories where the previous certificate versions are kept by the official client are not being maintained, but I don’t think anyone will miss them.

Do note that simply using this module will not generate the certificates automatically; it will only deploy already made certificates stored on the Puppet server. Certificate requests should still be done by the procedure discussed in the previous post. The rest of this article assumes that setup is already in place.

Note: This blog post has been updated since its first incarnation to account for the name change from to dehydrated, following a possible trademark violation by using the Let’s Encrypt name.


Check out the module on GitHub and place it in /etc/puppet/modules/letsencrypt.

Configure the dns-01 hook script to place the certificates in /etc/puppet/modules/letsencrypt/files and set permissions so Puppet can read them. In short, add the following to /root/dehydrated/config:
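Those lines concern the hook script's certificate-copy settings; a sketch, under the assumption that the hook uses the DESTINATION and CERT_OWNER style variables described in my earlier dns-01 article (values are illustrative):

```
DESTINATION=/etc/puppet/modules/letsencrypt/files
CERT_OWNER=puppet
CERTDIR_MODE=0755
```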



  class { 'letsencrypt': }

  # '' is a hypothetical example host name
  letsencrypt::certificate { '':
      ensure => present,
      notify => Service['apache2'],
  }

This snippet will deploy a certificate/key/chain combination for in /etc/letsencrypt/live/ on the target machine. Some other parameters are also accepted, which change the owner, group and access mode for the certificate files and their parent directory (defaults are root:root, 0644 and 0755 respectively).

The optional notify parameter allows you to make Puppet reload one or multiple services after updating the certificate file. This way your renewed certificate will automatically be loaded into your server software.

You can add as many certificates to one Puppet node as you want; obviously the name has to be unique for each.

I’ve distributed my free Let’s Encrypt certificates to a few of my hosts this way, and keep them up to date from a central location – I hope it’s useful for you as well. Feel free to leave any feedback!

Writing informative technical how-to documentation takes time, dedication and knowledge. Should my blog series have helped you in getting things working the way you want them to, or in configuring certain software step by step, feel free to tip me via PayPal ( or the Flattr button. Thanks!

August 26, 2016


I just uploaded the newest SysCast episode, it's available for your podcasting hearing pleasures.

In the latest episode I talk to Scott Arciszewski to discuss all things security: from the OWASP top 10 to cache timing attacks, SQL injection and local/remote file inclusion. We also talk about his secure CMS called Airship, which takes a different approach to over-the-air updates.

Go have a listen: SysCast -- 6 -- Application Security & Cryptography with Scott Arciszewski.

The post Podcast: Application Security, Cryptography & PHP appeared first on

Let’s Encrypt

In a previous post, I've already briefly touched on Let's Encrypt. It's a fairly new but already very well established Certificate Authority, providing anyone with free SSL certificates to use for sites and devices they own. This is a welcome change from the older CAs, who charge a premium to get that padlock into your visitors' browsers. Thanks to Let's Encrypt being free, the older CAs' prices have come down as well in the last year, which is great!

A fairly major stumbling block for some people is the fact that, out of security concerns, Let's Encrypt's certificates are only valid for 90 days, while paid certificates are usually valid for up to 3 years, so administrators can keep that part on autopilot for quite a while without intervention.

However, given the fact that it is possible for you to fully automate the renewals of Let’s Encrypt certificates, if you do it right, you may never have to manually touch any SSL certificate renewal ever again!

The ACME protocol

In that same previous post I’ve also touched on the fact that I don’t very much like the beginner-friendly software provided by Let’s Encrypt themselves. It’s nice for simple setups, but as it by default tries to mangle your Apache configuration to its liking, it breaks a lot of advanced set-ups. Luckily, the Let’s Encrypt system uses an open protocol called ACME (“Automated Certificate Management Environment“), so instead of using their own provided ACME client, we can use any other client that also speaks ACME. The client of my choice is dehydrated, which is written in bash and allows us to manage and control a lot more things. Last but not least, it allows the use of the dns-01 challenge type, which uses a DNS TXT entry to validate ownership of the domain/host name instead of a web server.

Note: This blog post has been updated since its first incarnation to account for the name change from to dehydrated, following a possible trademark violation by using the Let’s Encrypt name.

The dns-01 challenge

There are a few different reasons to use the dns-01 challenge instead of the http-01 challenge:

  • Non-server hardware: not all devices supporting SSL are fully under your control. It might be a router, for example, or even a management card of some sorts, where you can't just go in and install Let's Encrypt's ACME client, but you can (usually manually) upload SSL certificates to it. It would be nice to be able to request an “official” (non-self signed) certificate for anything that can use one, as otherwise the value of SSL communication is debatable (users quickly learn to dismiss certificate warnings and errors if they are trained to expect them).
  • Internally used systems: these don’t exist in outside DNS, and are likely not reachable from the internet on port 80 either, so the ACME server cannot contact the web server to validate the token.
  • Centralized configuration management: most if not all of my server configuration is centrally managed by Puppet, including distribution of SSL certificates and reloading daemons after certificate changes. I don’t feel much for running an ACME client on every single server, all managing its own certificates. Being able to retrieve all SSL certificates to this same system directly and coordinate redistribution from there is a big win, plus there’s only one ACME client on the entire network.

The DNS record creation challenge

When using the dns-01 challenge, the script needs to be able to update your public DNS server(s), to be able to insert (and remove) a TXT record for the zone(s) you want to secure with Let’s Encrypt. There are a few different ways of accomplishing this, depending on what DNS server software you use.

For example, if you use Amazon’s Route53, CloudFlare, or any other cloud-based system, you’ll have to use their API to manipulate DNS records. If you’re using PowerDNS with a database backend, you could modify the database directly (as this script by Joe Holden demonstrates for PowerDNS with MySQL backend). Other types of server may require you to (re)write a zone file and load it into the software.

RFC2136 aka Dynamic DNS Update

Luckily, there’s also somewhat of a standard solution to remote DNS updates, as detailed in RFC2136. This allows for signed (or unsigned) updates to happen on your DNS zones over the network, if your DNS server supports this and is configured to allow it. RFC2136-style updates are supported in ISC BIND, and since version 4.0 also in the PowerDNS authoritative server.

As I use PowerDNS for all my DNS needs, this next part will focus on setting up PowerDNS, but if you can configure your own DNS server to accept dynamic updates, the rest of the article will apply just the same.
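To make this concrete before diving into the configuration: an RFC2136 update is essentially a small command script fed to a client such as ‘nsupdate’. A hypothetical TXT-record insertion for a dns-01 challenge, with the server address, zone and token invented for the example, looks like:

```
server 53
update add 300 IN TXT "example-validation-token"
send
```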

Setting up PowerDNS for dynamic DNS updates

First things first, the requirements: RFC2136 is only available since version 4.0 of the PowerDNS Authoritative Server – it was available as an experimental option in 3.4.x already, but I do recommend running the latest incarnation. Also important is the backend support: as detailed on the Dynamic DNS Update documentation page only a number of backends can accept updates – this includes most database-based backends, but not the bind zone file backend, for example.

I will assume you already have a running PowerDNS server hosting at least one domain, and replication configured (database, AXFR, rsync, …) to your secondary name servers.

There are a number of ways in PowerDNS to secure dynamic DNS updates: you can allow specific IPs or IP ranges to modify either a single domain, or give them blanket authorization to modify records on all domains, or you can secure updates per domain with TSIG signatures.

In this example I went with the easiest route, giving my configuration management server full access for all domains hosted on the server.

Only 2 (extra) statements are required in your PowerDNS configuration:
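In PowerDNS 4.x these are the ‘dnsupdate’ switch plus an allow-list; a minimal sketch, with as a placeholder for the address of your configuration-management server:

```
dnsupdate=yes
allow-dnsupdate-from=
```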


This will enable the Dynamic DNS Updates functionality, and allow changes coming from the server only. Multiple entries (separated by spaces) and netmasks (i.e. are allowed.


Installing dehydrated

The script is hosted on GitHub; we can install it into /root/dehydrated with the following commands:

# apt-get install git
# cd /root; git clone

Configuring dehydrated

# cd /root/dehydrated
# echo HOOK=/root/dehydrated/ > config

The HOOK variable in the configuration above points to the hook script we will install for dns-01, so we don’t have to supply the path on every invocation.

Hook script requirements

As the hook script we will use is a simple bash script, it requires 2 binaries, one of which is the ‘nsupdate’ binary which will do the RFC2136-speaking for us, and the other is the ‘host’ binary, used to check propagation. In Debian and derivatives, these are contained in the ‘dnsutils’ and ‘bind9-host’ packages, respectively.

# apt-get install dnsutils bind9-host

The hook script

I’ve uploaded the hook script to GitHub; download it and save it as /root/dehydrated/
Make sure the script is executable, as otherwise it won’t be run by dehydrated.

# chmod a+x

This script will be called by dehydrated and will handle the creation and removal of the DNS entry using dynamic updates. It will also check if the record has correctly propagated to the outside world.

If you don’t have direct database replication between your master and its slaves, say you use AXFR with notifies, it will take a short while before all nameservers responsible for the domain are up to date and serving the new record.

I initially thought of iterating through all the NS records for the domain and check if they are all serving the correct TXT record, but after seeing Joe’s PowerDNS/MySQL script run the check against Google’s, I decided to do the same. If in the end it turns out there are too many failures, I might update the script to check every nameserver individually before continuing.

The hook script will load the configuration file used by dehydrated itself (/root/dehydrated/config), so you can add a number of configuration values for the hook script in there:

Required variables


This is the DNS server IP to send the dynamic update to.

Optional variables


This is the path for the nsupdate binary, the default is the correct path on Debian and derivatives.


The number of times to ask Google whether the DNS record propagation succeeded, before giving up.


The amount of time to wait (in seconds) before retrying the DNS propagation check.


This is the DNS server port to send the dynamic update to.


This is the TTL for the record we will be inserting, default is 5 minutes which should be fine.


This block defines where to copy the newly created certificates to after they have been received from Let’s Encrypt. A new directory inside DESTINATION will be created (named after the hostname) and the 3 files (key, certificate and full chain) will be copied into it. Leaving DESTINATION empty will disable the copy feature.

The CERT_OWNER, CERT_GROUP and CERT_MODE fields define the new owner of the files and their mode. Leaving CERT_OWNER empty will disable the chown functionality, leaving CERT_GROUP empty will change group ownership to the CERT_OWNER’s primary group, and leaving CERT_MODE empty will disable the chmod functionality.

CERTDIR_OWNER, CERTDIR_GROUP and CERTDIR_MODE offer the same functionality for the certificate files’ directory created inside DESTINATION.

I use this functionality to copy the files to the puppet configuration directory, and I need to change ownership and/or mode because the certificates generated are by default readable by root only, which means my Puppet install can not actually deploy them as it is running as the ‘puppet’ user.
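Put together, the copy-related section of /root/dehydrated/config could look roughly like this sketch (all values are illustrative):

```
# Where to copy freshly issued certificates; leave empty to disable copying
DESTINATION=/etc/puppet/modules/letsencrypt/files
# Ownership and mode for the certificate files themselves
CERT_OWNER=puppet
CERT_GROUP=puppet
CERT_MODE=0640
# Ownership and mode for the per-host directory created under DESTINATION
CERTDIR_OWNER=puppet
CERTDIR_GROUP=puppet
CERTDIR_MODE=0750
```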

Requesting a certificate

To request a certificate, run:

# ./dehydrated --cron --challenge dns-01 --domain <>

If everything goes well, you will end up with a brand new 90-day certificate from Let’s Encrypt for the host name you provided, copied into the destination directory of your choice.

Renewing your certificates automatically

The hook script adds any successful certificate creations to domains.txt. This file is used by dehydrated to automatically renew certificates if you don’t pass the --domain parameter on the command line.

# ./dehydrated --cron --challenge dns-01

To do this fully automatically, just add the command into a cron job.
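For example, a crontab entry along these lines (the schedule and log path are just an illustration) will attempt renewals nightly; dehydrated only renews certificates that are approaching expiry:

```
15 3 * * * cd /root/dehydrated && ./dehydrated --cron --challenge dns-01 >> /var/log/dehydrated.log 2>&1
```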


August 25, 2016

August 23, 2016

I published the following diary on “Voice Message Notifications Deliver Ransomware“.

Bad guys need to constantly find new ways to lure their victims. If billing notifications were very common for a while, not all people in a company are working with such kind of documents. Which types of notification do they have in common? All of them have a phone number and with modern communication channels (“Unified Communications”) like Microsoft Lync or Cisco, everybody can receive a mail with a voice mail notification. Even residential systems can deliver voice message notifications…[Read more]

[The post [SANS ISC Diary] Voice Message Notifications Deliver Ransomware has been first published on /dev/random]

Over the weekend, Drupal 8.2 beta was released. One of the reasons why I'm so excited about this release is that it ships with "more outside-in". In an "outside-in experience", you can click anything on the page, edit its configuration in place without having to navigate to the administration back end, and watch it take effect immediately. This kind of on-the-fly editorial experience could be a game changer for Drupal's usability.

When I last discussed turning Drupal outside-in, we were still in the conceptual stages, with mockups illustrating the concepts. Since then, those designs have gone through multiple rounds of feedback from Drupal's usability team and a round of user testing led by Cheppers. This study identified some issues and provided some insights which were incorporated into subsequent designs.

Two policy changes we introduced in Drupal 8 — semantic versioning and experimental modules — have fundamentally changed Drupal's innovation model starting with Drupal 8. I should write a longer blog post about this, but the net result of those two changes is ongoing improvements with an easy upgrade path. In this case, it enabled us to add outside-in experiences to Drupal 8.2 instead of having to wait for Drupal 9. The authoring experience improvements we made in Drupal 8 are well-received, but that doesn't mean we are done. It's exciting that we can move much faster on making Drupal easier to use.

In-place block configuration

As you can see from the image below, Drupal 8.2 adds the ability to trigger "Edit" mode, which currently highlights all blocks on the page. Clicking on one — in this case, the block with the site's name — pops out a new tray or sidebar. A content creator can change the site name directly from the tray, without having to navigate through Drupal's administrative interface to theme settings as they would have to in Drupal 7 and Drupal 8.1.

Editing the site name using outside-in

Making adjustments to menus

In the second image, the pattern is applied to a menu block. You can make adjustments to the menu right from the new tray instead of having to navigate to the back end. Here the content creator changes the order of the menu links (moving "About us" after "Contact") and toggles the "Team" menu item from hidden to visible.

Editing the menu using outside-in

In-context block placement

In Drupal 8.1 and prior, placing a new block on the page required navigating away from your front end into the administrative back end and noting the available regions. Once you discover where to go to add a block, which can in itself be a challenge, you'll have to learn about the different regions, and some trial and error might be required to place a block exactly where you want it to go.

Starting in Drupal 8.2, content creators can now just click "Place block" without navigating to a different page and knowing about available regions ahead of time. Clicking "Place block" will highlight the different possible locations for a block to be placed in.

Placing a block using outside-in

Next steps

These improvements are currently tagged "experimental". This means that anyone who downloads Drupal 8.2 can test these changes and provide feedback. It also means that we aren't quite satisfied with these changes yet and that you should expect to see this functionality improve between now and 8.2.0's release, and even after the Drupal 8.2.0 release.

As you probably noticed, things still look pretty raw in places; as an example, the forms in the tray are exposing too many visual details. There is more work to do to bring this functionality to the level of the designs. We're focused on improving that, as well as the underlying architecture and accessibility. Once we feel good about how it all works and looks, we'll remove the experimental label.

We deliberately postponed most of the design work to focus on introducing the fundamental concepts and patterns. That was an important first step. We wanted to enable Drupal developers to start experimenting with the outside-in pattern in Drupal 8.2. As part of that, we'll have to determine how this new pattern will apply broadly to Drupal core and the many contributed modules that would leverage it. Our hope is that once the outside-in work is stable and no longer experimental, it will trickle down to every Drupal module. At that point we can all work together, in parallel, on making Drupal much easier to use.

Users have proven time and again in usability studies to be extremely "preview-driven", so the ability to make quick configuration changes right from their front end, without becoming an expert in Drupal's information architecture, could be revolutionary for Drupal.

If you'd like to help get these features to stable release faster, please join us in the outside-in roadmap issue.

Thank you

I'd also like to thank everyone who contributed to these features and reviewed them, including Bojhan, yoroy, pwolanin, andrewmacpherson, gtamas, petycomp, zsofimajor, SKAUGHT, nod_, effulgentsia, Wim Leers, catch, alexpott, and xjm.

And finally, a special thank you to Acquia's outside-in team for driving most of the design and implementation: tkoleary, webchick, tedbow, Gábor Hojtsy, tim.plunkett, and drpal.

Acquia's outside in team
Acquia's outside-in team celebrating that the outside-in patch was committed to Drupal 8.2 beta. Go team!

August 22, 2016


Update 20160818: added Proximus RADIUS server.

The Belgian ISPs Proximus and Telenet both provide access to a network of hotspots. A nice recent addition is the use of alternative SSIDs for "automatic" connections, instead of a captive portal where you log in through a webpage. Sadly, their support pages provide next to no information on making a safe connection to these hotspots.

Proximus is a terrible offender. According to their support page, on a PC only Windows 8.1 is supported. Linux, OS X *and* Windows 8 (!) or 7 users are kindly encouraged to use the open wifi connection and log in through the captive portal. Oh, and no certificate information is given for Windows 8.1 either. That's pretty silly, as they use EAP-TTLS. Here is the setup to connect from whatever OS you use (terminology from gnome-network-manager):

Security: WPA2 Enterprise
Authentication: Tunneled TLS (TTLS)
Anonymous identity:
Certificate: GlobalSign Root CA (in Debian/Ubuntu in /usr/share/ca-certificates/mozilla/)
Inner Authentication: MSCHAPv2
Password: your_password_here
RADIUS server certificate (optional):
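For reference, the settings above can also be expressed as a wpa_supplicant network block. This is a sketch under assumptions: the SSID is taken from the post's tags and the identity is a placeholder, so verify both against your subscription; the CA path matches Debian/Ubuntu.

```conf
# Hypothetical wpa_supplicant.conf entry for the Proximus hotspots.
network={
    ssid="PROXIMUS_AUTO_PHONE"       # assumed SSID (from the post's tags), verify locally
    key_mgmt=WPA-EAP
    eap=TTLS
    phase2="auth=MSCHAPV2"
    identity="your_login_here"       # placeholder
    password="your_password_here"    # placeholder
    ca_cert="/usr/share/ca-certificates/mozilla/GlobalSign_Root_CA.crt"
}
```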

Telenet's support page is slightly better (no fake Windows 8.1 restriction), but pretty useless as well, with no certificate information whatsoever. Here is the information needed to use TelenetWifree with PEAP:

SSID: TelenetWifree
Security: WPA2 Enterprise
Authentication: Protected EAP (PEAP)
Certificate: GlobalSign Root CA (in Debian/Ubuntu in /usr/share/ca-certificates/mozilla/)
Inner Authentication: MSCHAPv2
Password: your_password_here
RADIUS server certificate (optional):

If you’re interested, screenshots of the relevant parts of the wireshark trace are attached here:

proximus_rootca telenet_rootca

Filed under: Uncategorized Tagged: GNU/Linux, Lazy support, proximus, PROXIMUS_AUTO_PHONE, telenet, TelenetWifree, Windows 7

If you’re a Vim user you probably use it for almost everything. Out of the box, Perl 6 support is rather limited. That’s why many people use editors like Atom for Perl 6 code.

What if, with a few plugins, you could configure Vim to be a great Perl 6 editor? I made the following notes while configuring Vim on my main machine running Ubuntu 16.04. The instructions should be trivially easy to port to other distributions or operating systems. Skip the steps that don't apply if you already have a working Vim setup (i.e. do not overwrite your .vimrc file).

I maintain my Vim plugins using pathogen, as it allows me to directly use git clones from GitHub. This is especially important for plugins in rapid development.
(If your .vim directory is a git repository, replace 'git clone' in the commands by 'git submodule add'.)

Basic vim Setup

Install vim with scripting support and pathogen. Create the directory where the plugins will live:
$ sudo apt-get install vim-nox vim-pathogen && mkdir -p ~/.vim/bundle

$ vim-addons install pathogen

Create a minimal .vimrc in your $HOME with at least this configuration (enabling pathogen). Lines starting with " are comments:

"Enable extra features (e.g. when run systemwide). Must be before pathogen
set nocompatible

"Enable pathogen
execute pathogen#infect()
"Enable syntax highlighting
syntax on
"Enable indenting
filetype plugin indent on

Additionally I use these settings (the complete .vimrc is linked at the end):

"Set line wrapping
set wrap
set linebreak
set nolist
set formatoptions+=l

"Enable 256 colours
set t_Co=256

"Set auto indenting
set autoindent

"Smart tabbing
set expandtab
set smarttab
set sw=4 " number of spaces used for indenting
set ts=4 " show \t as 4 spaces and treat 4 spaces as \t when deleting

"Set title of xterm
set title

" Highlight search terms
set hlsearch

"Strip trailing whitespace for certain types of files
autocmd BufWritePre *.{erb,md,pl,pl6,pm,pm6,pp,rb,t,xml,yaml,go} :%s/\s\+$//e

"Override tab settings for specific languages
autocmd Filetype ruby,puppet setlocal ts=2 sw=2

"Jump to the last position when reopening a file
au BufReadPost * if line("'\"") > 1 && line("'\"") <= line("$") |
    \ exe "normal! g'\"" | endif

"Add a coloured right margin for recent vim releases
if v:version >= 703
    set colorcolumn=80
endif

"Ubuntu suggestions
set showcmd    " Show (partial) command in status line.
set showmatch  " Show matching brackets.
set ignorecase " Do case insensitive matching
set smartcase  " Do smart case matching
set incsearch  " Incremental search
set autowrite  " Automatically save before commands like :next and :make
set hidden     " Hide buffers when they are abandoned
set mouse=v    " Enable mouse usage in visual mode

Install plugins

vim-perl for syntax highlighting:

$ git clone ~/.vim/bundle/vim-perl


vim-airline and themes for a status bar:
$ git clone ~/.vim/bundle/vim-airline
$ git clone ~/.vim/bundle/vim-airline-themes
In vim type :Helptags

In Ubuntu the 'fonts-powerline' package (sudo apt-get install fonts-powerline) installs fonts that enable nice glyphs in the statusbar (e.g. a line effect instead of '>', see the screenshot).

Add this to .vimrc for airline (the complete .vimrc is attached):
"airline statusbar
set laststatus=2
set ttimeoutlen=50
let g:airline#extensions#tabline#enabled = 1
let g:airline_theme='luna'
"In order to see the powerline fonts, adapt the font of your terminal
"In Gnome Terminal: "use custom font" in the profile. I use Monospace regular.
let g:airline_powerline_fonts = 1


Tabular for aligning text (e.g. blocks):
$ git clone ~/.vim/bundle/tabular
In vim type :Helptags

vim-fugitive for Git integration:
$ git clone ~/.vim/bundle/vim-fugitive
In vim type :Helptags

vim-markdown for markdown syntax support (e.g. the README of your module):
$ git clone ~/.vim/bundle/vim-markdown
In vim type :Helptags

Add this to .vimrc for markdown if you don't want folding (the complete .vimrc is attached):
"markdown support
let g:vim_markdown_folding_disabled=1

syntastic-perl6 for Perl 6 syntax checking support. I wrote this plugin to add Perl 6 syntax checking support to syntastic, the leading Vim syntax checking plugin. See the 'Call for Testers/Announcement' here. Instructions can be found in the repo, but I'll paste them here for your convenience:

You need to install syntastic to use this plugin.
$ git clone ~/.vim/bundle/syntastic
$ git clone ~/.vim/bundle/syntastic-perl6

Type ":Helptags" in Vim to generate Help Tags.

Syntastic and syntastic-perl6 vimrc configuration (comments start with "):

"airline statusbar integration if installed. De-comment if installed
"set laststatus=2
"set ttimeoutlen=50
"let g:airline#extensions#tabline#enabled = 1
"let g:airline_theme='luna'
"In order to see the powerline fonts, adapt the font of your terminal
"In Gnome Terminal: "use custom font" in the profile. I use Monospace regular.
"let g:airline_powerline_fonts = 1

"syntastic syntax checking
let g:syntastic_always_populate_loc_list = 1
let g:syntastic_auto_loc_list = 1
let g:syntastic_check_on_open = 1
let g:syntastic_check_on_wq = 0
set statusline+=%#warningmsg#
set statusline+=%{SyntasticStatuslineFlag()}
set statusline+=%*
"Perl 6 support
"Optional comma separated list of quoted paths to be included to -I
"let g:syntastic_perl6_lib_path = [ '/home/user/Code/some_project/lib', 'lib' ]
"Optional perl6 binary (defaults to perl6)
"let g:syntastic_perl6_interpreter = '/home/claudio/tmp/perl6'
"Register the checker provided by this plugin
let g:syntastic_perl6_checkers = [ 'perl6latest' ]
"Enable the perl6latest checker
let g:syntastic_enable_perl6latest_checker = 1


YouCompleteMe for fuzzy search autocomplete:

$ git clone ~/.vim/bundle/YouCompleteMe

Read the YouCompleteMe documentation for the dependencies for your OS and for the switches for additional non-fuzzy support for additional languages like C/C++, Go and so on. If you just want fuzzy complete support for Perl 6, the default is ok. If someone is looking for a nice project, a native Perl6 autocompleter for YouCompleteMe (instead of the fuzzy one) would be a great addition. You can install YouCompleteMe like this:
$ cd ~/.vim/bundle/YouCompleteMe && ./


That’s it. I hope my notes are useful to someone. The complete .vimrc can be found here.



Filed under: Uncategorized Tagged: Perl, perl6, vim

August 20, 2016

I think that Perl 6, as a fairly new language, needs good tooling not only to attract new programmers but also to make the job of Perl 6 programmers more enjoyable. If you’ve worked with an IDE before, you certainly agree that syntax checking is one of those things that we take for granted. Syntastic-perl6 is a plugin that adds Perl 6 syntax checking in Vim using Syntastic. Syntastic is the leading Vim plugin for syntax checking. It supports many programming languages.

If the plugin proves to be useful, I plan on a parallel track for Perl 6 support in Vim. On one hand, this plugin will track the latest Perl 6 Rakudo releases (while staying as backwards compatible as possible) and be the first to receive new functionality. On the other hand, once this plugin is well-tested and feature complete, it will hopefully be added to the main syntastic repo (it has its own branch upstream already) in order to provide out-of-the-box support for Perl 6.

So, what do we need to get there? We need testers and users, so they can make this plugin better by:

  • sending Pull Requests to make the code (vimscript) better where needed.
  • sending Pull Requests to add tests for error cases not yet tested (see the t directory) or, more importantly, not yet caught.
  • posting issues for bugs or errors not yet caught. In that case, copy-paste the error (e.g. within vim: :!perl6 -c %) and post a sample of the erroneous Perl 6 code in question.

The plugin, with installation instructions, is on its GitHub repo at syntastic-perl6. With a Vim plugin manager like pathogen you can directly use a clone of the repo.

Keep me posted!


Filed under: Uncategorized Tagged: Perl, perl6, vim

August 19, 2016

I published the following diary on "Data Classification For the Masses".

Data classification isn't a brand new topic. International organizations and the military have been doing "data classification" for a long time. It can be defined as:

A set of processes and tools to help the organization to know what data are used, how they are protected and what access levels are implemented

Military’s levels are well known: Top Secret, Secret, Confidential, Restricted, Unclassified.

But organizations are free to implement their own scheme, and there are deviations. NATO uses: Cosmic Top Secret (CTS), NATO Secret (NS), NATO Confidential (NC) and NATO Restricted (NR). EU institutions use: EU Top Secret, EU Secret, EU Confidential, EU Restricted. The most important thing is to have the right classification depending on your business… [Read more]

[The post [SANS ISC Diary] Data Classification For the Masses has been first published on /dev/random]

August 17, 2016


You may have backed up your music CDs to a single FLAC file per disc instead of a file for each track. In case you need to split such a CD FLAC file, do this:

Install the needed software:

$ sudo apt-get install cuetools shntool

Split the album flac file into separate tracks:

$ cuebreakpoints sample.cue | shnsplit -o flac sample.flac

Copy the flac tags (if present):

$ cuetag sample.cue split-track*.flac

The full howto can be found here (aidanjm).

Update (April 18th, 2009):
In case the cue file is not a separate file, but included in the flac file itself do this as the first step:

$ metaflac --show-tag=CUESHEET sample.flac | grep -v ^CUESHEET > sample.cue

(NB: The regular syntax is "metaflac --export-cuesheet-to=sample.cue sample.flac", however the cue sheet is often embedded in a tag instead of the CUESHEET block).

Posted in Uncategorized Tagged: flac, GNU/Linux, music

August 16, 2016

The post TCP vulnerability in Linux kernels pre 4.7: CVE-2016-5696 appeared first on

This is a very interesting vulnerability in the TCP stack of Linux kernels before 4.7. The bad news: there are a lot of systems online running those kernel versions. The bug/vulnerability is as follows.

Red Hat Product Security has been made aware of an important issue in
the Linux kernel's implementation of challenge ACKS as specified in
RFC 5961. An attacker which knows a connections client IP, server IP
and server port can abuse the challenge ACK mechanism
to determine the accuracy of a normally 'blind' attack on the client or server.

Successful exploitation of this flaw could allow a remote attacker to
inject or control a TCP stream contents in a connection between a
Linux device and its connected client/server.

* This does NOT mean that cryptographic information is exposed.
* This is not a Man in the Middle (MITM) attack.
[oss-security] CVE-2016-5389: linux kernel -- challange ack information leak

In short: a successful attack could hijack a TCP session, facilitate a man-in-the-middle attack, and allow the attacker to inject data, i.e. altering the content on websites, modifying responses from webservers, ...

This Stack Overflow post explains it very well.

The hard part of taking over a TCP connection is to guess the source port of the client and the current sequence number.

The global rate limit for sending Challenge ACKs (100/s in Linux), introduced together with Challenge ACKs (RFC 5961), makes it possible first to guess a source port used by the client's connection and then to guess the sequence number. The main idea is to open a connection to the server and send, from the attacker's own source, as many RST packets with the wrong sequence as possible, mixed with a few spoofed packets.

By counting how many Challenge ACKs get returned to the attacker, and by knowing the rate limit, one can infer how many of the spoofed packets resulted in a Challenge ACK to the spoofed client, and thus how many of the guesses were correct. This way one can quickly narrow down which values of port and sequence are correct. This attack can be done within a few seconds.

And of course the attacker needs to be able to spoof the IP address of the client, which is not possible in all environments. It might be possible in local networks (depending on the security measures), but ISPs will often block IP spoofing when done from the usual DSL/cable/mobile accounts.

TCP “off-path” Attack (CVE-2016-5696)

For RHEL (and CentOS derivatives), the following OS's are affected.


While it's no permanent fix, the following config will make it a lot harder to abuse this vulnerability.

$ sysctl -w net.ipv4.tcp_challenge_ack_limit=999999999

And make it permanent so it persists on reboot:

$ echo "net.ipv4.tcp_challenge_ack_limit=999999999" >> /etc/sysctl.d/net.ipv4.tcp_challenge_ack_limit.conf

While the attack isn't actually prevented, it is damn hard to reach the ACK limits.

Further reading:


Rio olympic stadium

As the 2016 Summer Olympics in Rio de Janeiro enters its second and final week, it's worth noting that the last time I blogged about Drupal and the Olympics was way back in 2008 when I called attention to the fact that Nike was running its sponsorship site on Drupal 6 and using Drupal's multilingual capabilities to deliver their message in 13 languages.

While watching some track and field events on television, I also spent a lot of time on my laptop with the NBC Olympics website. It is a site that has run on Drupal for several years, and this year I noticed they took it up a notch and did a redesign to enhance the overall visitor experience.

Last week NBC issued a news release that it has streamed over one billion minutes of sports via their site so far. That's a massive number!

I take pride in knowing that an event as far-reaching as the Olympics is being delivered digitally to a massive audience by Drupal. In fact, some of the biggest sporting leagues around the globe run their websites off of Drupal, including NASCAR, the NBA, NFL, MLS, and NCAA. Massive events like the Super Bowl, Kentucky Derby, and the Olympics run on Drupal, making it the chosen platform for global athletic organizations.

Rio website

Rio press release

Update on August 24: This week, the NBC Sports Group issued a press release stating that the Rio 2016 Olympics was the most successful media event in history! Digital coverage across and the NBC Sports app set records, with 3.3 billion total streaming minutes, 2.71 billion live streaming minutes, and 100 million unique users. According to the announcement, live streaming minutes for the Rio games nearly doubled that of all Olympic games combined, and digital coverage amassed 29 percent more unique users than the London Olympics four years prior. Drupal was proud to be a part of the largest digital sporting event in history. Looking forward to breaking more records in the years to come!

August 12, 2016

The post youtube-dl: download audio-only files from YouTube on Mac appeared first on

I may or may not have become addicted to a particular video on YouTube, and I wanted to download the MP3 for offline use.

(Whether it's allowed or not is up for debate, knowing copyright laws it probably depends per country.)

Luckily, I remember I featured a YouTube downloader once in cron.weekly issue #23 that I could use for this.

So, a couple of simple steps on Mac to download the MP3 from any YouTube video. All further commands assume the Brew package manager is installed on your Mac.

$ brew install ffmpeg youtube-dl

To download and convert to MP3:

$ youtube-dl --extract-audio --audio-format mp3 --prefer-ffmpeg
 3UOtF4J9wpo: Downloading webpage
 3UOtF4J9wpo: Downloading video info webpage
 3UOtF4J9wpo: Extracting video information
 3UOtF4J9wpo: Downloading MPD manifest
[download] 100% of 58.05MiB
[ffmpeg] Destination: 3UOtF4J9wpo.mp3

Deleting original file 3UOtF4J9wpo.webm

And bingo, all that remains is the MP3!


August 11, 2016

The post Mark a varnish backend as healthy, sick or automatic via CLI appeared first on

This is a useful little command for when you want to perform maintenance on a Varnish installation and want to dynamically mark backends as healthy or sick via the command line, without restarting or reloading varnish.

See varnish backend health status

To see all backends, there are 2 methods: a debug output and a normalized output.

$ varnishadm -S /etc/varnish/secret -T localhost:6082 backend.list
Backend name                   Refs   Admin      Probe
backend1(,,80)        1      probe      Sick 0/4
fallback(,,80)      12     probe      Healthy (no probe)

$ varnishadm -S /etc/varnish/secret -T localhost:6082
Backend backend1 is Sick
Current states  good:  0 threshold:  2 window:  4
Average responsetime of good probes: 0.000000
Oldest                                                    Newest
---------------------------------------------------------------- Happy

The backend.list command shows all backends, even those without a probe (= healthcheck) configured.

The command will show in-depth statistics on the varnish probes that are being executed, including the IPv4 connect state, whether a send/receive has worked and if the response code was HTTP/200.

For instance, a healthy backend will be shown like this, with each state of the check (IPv4, send, receive & HTTP response code) on a separate line.

$ varnishadm -S /etc/varnish/secret -T localhost:6082
Backend backend1 is Healthy
Current states  good:  5 threshold:  4 window:  5
Average responsetime of good probes: 0.014626
Oldest                                                    Newest
4444444444444444444444444444444444444444444444444444444444444444 Good IPv4

Now, to change backend statuses.

Mark a varnish backend as healthy or sick

In order to mark a particular backend as sick or healthy, thus overriding the probe, you can do so like this.

$ varnishadm -S /etc/varnish/secret -T localhost:6082 backend.set_health backend1 healthy

The above command will mark the backend named backend1 as healthy. Likewise, you can mark a backend as sick to prevent it from getting traffic.

$ varnishadm -S /etc/varnish/secret -T localhost:6082 backend.set_health backend1 sick

If you have multiple Varnish backends and they're configured in a director to load balance traffic, all traffic should gracefully be sent to the other backend(s). (see the examples in mattiasgeniar/varnish-4.0-configuration-templates)

If you mark a backend explicitly as sick, the backend.list output changes and the admin column removes the 'probe' and marks it as 'sick' explicitly, indicating it was changed via CLI.

$ varnishadm -S /etc/varnish/secret -T localhost:6082 backend.list
Backend name                   Refs   Admin      Probe
backend1(,,80)        1      sick       Sick 0/4
fallback(,,80)      12     probe      Healthy (no probe)

You can also change it back to let Varnish decide the backend health.

Mark the backend as 'varnish managed', let probes decide the health

To let Varnish decide the health itself, using its probes, mark the backend as auto again:

$ varnishadm -S /etc/varnish/secret -T localhost:6082 backend.set_health backend1 auto

So to summarise: the backend.set_health command in varnishadm allows you to manipulate the backend health state of Varnish backends, overriding the result of a probe.

Useful when you're trying to gracefully update several backend servers, by marking backends as sick one by one without waiting for the probes to discover that backends are sick. This method allows you to do things gracefully before the update.
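Putting the pieces together, a rolling-maintenance loop might look like this. This is a sketch: the backend names and the do_maintenance step are placeholders for your own update procedure, not part of the original post.

```shell
# Sketch: drain each backend, update it, then hand health control back to the probes.
# Backend names and do_maintenance() are placeholders.
drain_and_update() {
  VADM="varnishadm -S /etc/varnish/secret -T localhost:6082"
  for b in "$@"; do
    $VADM backend.set_health "$b" sick   # drain: traffic flows to the other backends
    do_maintenance "$b"                  # your update/restart step goes here
    $VADM backend.set_health "$b" auto   # let the probes decide again
  done
}

# Usage: drain_and_update backend1 backend2
```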


August 09, 2016

The post zsh: slow startup for new terminals appeared first on

I couldn't quite put my finger on why, but I was experiencing slower and slower startups of my terminal when using zsh (combined with the oh-my-zsh extension).

In my case, this was because of a rather long history file that gets loaded whenever you start a new terminal.

$  wc -l ~/.zsh_history
   10005 /Users/mattias/.zsh_history

Turns out, loading over 10k lines worth of shell history whenever you launch a new shell is hard for a computer.

This was my fix:

$ cp ~/.zsh_history ~/.zsh_history.1
$ echo '' > ~/.zsh_history

I had to use echo because the shortcut that would normally work in Bash didn't work here:

$ > ~/.zsh_history

Either way, that stopped zsh from starting slowly for me.
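If you'd rather not lose the history entirely, a gentler variant (my own suggestion, not part of the original fix) keeps only the newest entries:

```shell
# Trim ~/.zsh_history to its newest 1000 lines, keeping a backup copy first.
histfile="$HOME/.zsh_history"
if [ -f "$histfile" ]; then
  cp "$histfile" "$histfile.bak"              # safety copy
  tail -n 1000 "$histfile.bak" > "$histfile"  # keep only the most recent entries
fi
```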


The post Docker Cheat Sheet appeared first on

An interesting Docker cheat sheet just got posted on the @Docker Twitter account that's worth sharing. Because it got linked to a strange domain (, really?) I'll mirror it here -- I feel the original link will one day go down.


Alternative links:

  • Docker Cheat Sheet: PNG
  • Docker Cheat Sheet: PDF

Good stuff Docker, thanks for sharing!


August 08, 2016

The post Awk trick: show lines longer than X characters appeared first on

Here's a quick little awk trick to have in your arsenal: if you want to search through a bunch of files, but only want to show the lines that exceed X amount of characters, you can use awk's built-in length check.

For instance:

$ awk 'length > 350'
$ awk 'length < 50' 

If you combine this with a grep, you can do things like "show me all the lines that match TXT and that exceed 100 characters in length".

$ grep 'TXT' * | awk 'length > 100'

Super useful to quickly skim through a bunch of logs or text-files.
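If you also want to see how long each offending line actually is (handy for sorting), awk's length is available in the action part too. A small self-contained example (the sample text is made up):

```shell
# Print the length in front of every line longer than 30 characters.
printf 'short\nthis line is definitely much longer than thirty characters\n' \
  | awk 'length > 30 {print length, $0}'
```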


The post Podcast: Ansible config management & deploying code with James Cammarata appeared first on

I recorded a fun new episode on the SysCast podcast about Ansible. I'm joined by James Cammarata, head of Ansible core engineering, to discuss Ansible, push vs. pull scenarios, deploying code, testing your config management and much more.

You can find the Episode on the website or wherever you get your podcasts: SysCast #5: Ansible: config management & deploying code with James Cammarata from Red Hat.

Feedback appreciated!


August 04, 2016

August 02, 2016

The post Postfix mail queue: deliver e-mail to an alternate address appeared first on

Have you ever had an e-mail stuck in a Postfix queue that you'd like to re-route to a different address? This could be because the original 'To' address has e-mail delivery issues and you really want that e-mail delivered to an alternate recipient.

Here's a couple of steps to workaround that. It basically goes:

  1. Find the e-mail
  2. Mark mail as 'on hold' in the Postfix queue
  3. Extract the mail from the queue
  4. Requeue to a different recipient
  5. Delete the original mail from the queue after confirmed delivery

It's less work than it sounds. Let's go.

Find the mail ID in postfix

You need to find the mail ID of the mail you want to send to a different address.

$ postqueue -p | grep '' -B 2 | grep 'keyword'
CF452C1239FB    28177 Tue Aug  2 14:52:38  thesender@domain.tld

In this case, a mail was sent to me and I'd like to have it delivered to a different address. The identifier in front, CF452C1239FB, is what we'll need.

Mark the postfix queue item as 'on hold'

To prevent Postfix from trying to deliver it in the meanwhile.

$ postsuper -h CF452C1239FB
postsuper: CF452C1239FB: placed on hold
postsuper: Placed on hold: 1 message

Don't worry, your mail isn't deleted.

Extract the mail from the queue

Extract that email and save it to a temporary file. If you're paranoid, don't save to /tmp as everyone can read that mail while it's there.

$ postcat -qbh CF452C1239FB > /tmp/m.eml

Now, to resend.

Send queued mail to different recipient

Now that you've extracted that e-mail, you can have it be sent to a different recipient than the original.

$ sendmail -f $sender $recipient < /tmp/m.eml

Replace $sender and $recipient with real values. The sender should remain the same as the from address you saw with the postqueue -p command, the $recipient can be your modified address. For instance, in my example, I could do this.

$ sendmail -f thesender@domain.tld newrecipient@domain.tld < /tmp/m.eml

After a while, that mail should arrive at the new address.

Delete the 'on hold' mail from the postfix queue

After you've confirmed delivery to your new e-mail address, you can delete the mail from the 'on hold' queue in Postfix.

Warning: after this, the mail is gone forever from the postfix queue!

$ postsuper -d  CF452C1239FB
postsuper: CF452C1239FB: removed
postsuper: Deleted: 1 message

$ rm -f /tmp/m.eml

And you're good: you just resent a mail that got stuck in the postfix queue to a different address!

Fyi: there are alternatives using Postfix's smtp_generic_maps, but call me old fashioned - I still prefer this method.
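For repeat use, the five steps can be wrapped in a small shell function. This is a sketch under the same assumptions as the walkthrough above; it deliberately does not delete the queued mail, so you can confirm delivery first.

```shell
# Requeue a held Postfix message to a different recipient.
# Usage: requeue_mail <queue-id> <sender> <new-recipient>
requeue_mail() {
  qid=$1; sender=$2; recipient=$3
  tmp=$(mktemp)                                 # mktemp creates a private (0600) file
  postsuper -h "$qid"                           # hold it so Postfix stops retrying
  postcat -qbh "$qid" > "$tmp"                  # extract headers + body
  sendmail -f "$sender" "$recipient" < "$tmp"   # resend to the new address
  rm -f "$tmp"
  echo "Resent $qid to $recipient; after confirming delivery run: postsuper -d $qid"
}
```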


Autoptimize by default uses WordPress’ internal logic to determine if a URL should be HTTP or HTTPS. But in some cases WordPress may not be fully aware it is on HTTPS, or maybe you want part of your site HTTP and another part (cart & checkout?) in HTTPS. Protocol-relative URL’s to the rescue, except Autoptimize does not do those, right?

Well, not by default, no. But the following code snippet uses AO's API to output protocol-relative URL's (warning: not tested thoroughly in a production environment, but I'll be happy to assist in case of problems):

function protocollesser($urlIn) {
  // Strip the scheme, leaving a protocol-relative URL ("//example.com/...").
  $urlOut = preg_replace('/^https?:/i', '', $urlIn);
  return $urlOut;
}
// Attach the function to the relevant Autoptimize URL filter(s) with add_filter().

August 01, 2016

The post Chrome 52: return old backspace behaviour appeared first on

Remember when you could hit backspace and go back one page in your history? Those were the days!

If you're a Chrome 52 user, you might have noticed that no longer works. Instead, it'll show this screen.


This was discussed at length and the consensus was: it's mad to have such functionality, it does more harm than good, let's rethink it. And so, the backspace functionality has been removed.

But to ease our pain, Chrome 52 introduced a new material design, so all is good, right?


Well, if like me, you miss the old backspace functionality, you can get it back!

Quickest fix: a Chrome extension

You can get the back to back Chrome extension that fixes this for you.

But come on, using an extension for this feels wrong, no?

Add CLI argument to restore backspace

Add the following argument whenever you start Chrome to restore the old backspace functionality: --enable-blink-features=BackspaceDefaultHandler --test-type.

Because apparently we're the 0.04% of users that want this feature.


Remember when my webserver was acting up? Well, I was so fed up with it, that I took a preconfigured Bitnami WordPress image and ran that on AWS. I don’t care how Bitnami configured it, as long as it works.

As a minor detail, postfix/procmail/dovecot were of course not installed or configured. Meh. This annoyed the Mrs. a bit because she didn’t get her newsletters. But I was so fed up with all the technical problems, that I waited a month to do anything about it.

Doing sudo apt-get -y install postfix procmail dovecot-pop3d and copying over the configs from the old server solved that.

Did I miss email during that month? Not at all. People were able to contact me through Twitter, Facebook, Telegram and all the other social networks. And I had an entire month without spam. Wonderful!

The post Living without email for a month appeared first on