Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

August 17, 2018

Acquia was once again included in the Inc. 5000 list of fast-growing private U.S. companies. It's a nice milestone for us because it is the seventh year in a row that Acquia has been included. We first appeared on the list in 2012, when Acquia was ranked the eighth fastest-growing private company in the United States. It's easy to grow fast when you're getting started, but as you grow, it becomes increasingly challenging to sustain high growth rates. While there may be 4,700 companies ahead of us, we have kept a solid track record of growth ever since our debut seven years ago. I continue to be proud of the entire Acquia team, who are relentless in making these achievements possible. Kapow!

August 16, 2018

A very quick post about a new thread that was started yesterday on the OSS-Security mailing list. It's about a vulnerability affecting almost ALL SSH server versions. Quoted from the initial message:

It affects all operating systems, all OpenSSH versions (we went back as far as OpenSSH 2.3.0, released in November 2000)

It is possible to enumerate usernames on a server that offers SSH services publicly. Of course, it did not take long for a proof-of-concept to be posted. I just tested it and it works like a charm:

$ ./ test
[*] Invalid username
$ ./ xavier
[+] Valid username

This is very nice/evil (depending on which side you're working on). For Red Teams, it's nice to enumerate usernames and focus on the weakest ones ("guest", "support", "test", etc.). There are plenty of username lists available online to brute-force the server.

From a Blue Team point of view, how do you detect whether a host is targeted by this attack? Search for this type of event:

Aug 16 21:42:10 victim sshd[10680]: fatal: ssh_packet_get_string: incomplete message [preauth]

Note that the offending IP address is not listed in the error message. It’s time to keep an eye on your log files and block suspicious IP addresses that make too many SSH attempts (correlate with your firewall logs).
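For log hunting, the check can be scripted; a minimal sketch, assuming a Debian-style auth log (the sample file, its path, and the second log line are made up for illustration — only the sshd error message comes from the post):

```shell
# Build a small sample log containing the sshd error shown above,
# then count how many enumeration attempts it holds.
cat > /tmp/auth.sample <<'EOF'
Aug 16 21:42:10 victim sshd[10680]: fatal: ssh_packet_get_string: incomplete message [preauth]
Aug 16 21:42:11 victim sshd[10681]: Accepted password for guest from ssh2
EOF
grep -c 'ssh_packet_get_string: incomplete message \[preauth\]' /tmp/auth.sample
```

Against a real system you would point the grep at /var/log/auth.log (or your syslog equivalent) and correlate the matching timestamps with your firewall logs.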

[The post Detecting SSH Username Enumeration has been first published on /dev/random]

I published the following diary on “Truncating Payloads and Anonymizing PCAP files“:

Sometimes, you may need to provide PCAP files to third-party organizations like a vendor support team to investigate a problem with your network. I was looking for a small tool to anonymize network traffic but also to restrict data to packet headers (and drop the payload). Google pointed me to a tool called ‘TCPurify’… [Read more]


[The post [SANS ISC] Truncating Payloads and Anonymizing PCAP files has been first published on /dev/random]

August 14, 2018

In this follow-up to Implementing the Clean Architecture I introduce you to a combination of The Clean Architecture and the strategic DDD pattern known as Bounded Contexts.

At Wikimedia Deutschland we use this combination of The Clean Architecture and Bounded Contexts for our fundraising applications. In this post I describe the structure we have and the architectural rules we follow in the abstract. For the story on how we got to this point and a more concrete description, see my post Bounded Contexts in the Wikimedia Fundraising Software. In that post and at the end of this one I link you to a real-world codebase that follows the abstract rules described in this post.

If you are not yet familiar with The Clean Architecture, please first read Implementing the Clean Architecture.

Clean Architecture + Bounded Contexts

Diagram by Jeroen De Dauw, Charlie Kritschmar, Jan Dittrich and Hanna Petruschat

Diagram depicting Clean Architecture + Bounded Contexts

In the top layer of the diagram we have applications. These can be web applications, they can be console applications, they can be monoliths, they can be microservices, etc. Each application has presentation code which in bigger applications tends to reside in a decoupled presentation layer using patterns such as presenters. All applications also somehow construct the dependency graph they need, perhaps using a Dependency Injection Container or set of factories. Often this involves reading configuration from somewhere. The applications contain ALL framework binding, hence they are the place where you will find the Controllers if you are using a typical web framework.

Since the applications are in the top layer, and dependencies can only go down, no code outside of the applications is allowed to depend on code in the applications. That means there is 0 binding to mechanisms such as frameworks and presentation code outside of the applications.

In the second layer we have the Bounded Contexts, ideally one Bounded Context per subdomain. At the core of each BC we have the Domain Model and Domain Services, containing the business logic part of the subdomain. Dependencies can only point inwards, so the Domain Model, which is at the center, cannot depend on anything further out. Around the Domain Model are the Domain Services. These include interfaces for persistence services such as Repositories. The UseCases form the final ring. They can use both the Domain Model and the Domain Services. They also form a boundary around the two, meaning that no code outside of the Bounded Context is allowed to talk to the Domain Model or Domain Services.

The Bounded Contexts include their own Persistence Layer. The Persistence Layer can use a relational database, files on the file system, a remote web API, a combination of these, etc. It has implementations of domain services such as Repositories which are used by the UseCases. These implementations are the only thing that is allowed to talk to and know about the low-level aspects of the Persistence Layer. The only things that can use these service implementations are other Domain Services and the UseCases.

The UseCases, including their Request Models and Response Models, form the public interface of the Bounded Context. This means that there is 0 binding to the persistence mechanisms outside of the Bounded Context. It also means that the code responsible for the domain logic cannot be directly accessed elsewhere, such as in the presentation layer of an application.

The applications and Bounded Contexts contain all the domain-specific code. This code can make use of libraries and of course the runtime (i.e. PHP) itself.

As examples of Bounded Contexts following this approach, see the Donation Context and Membership Context. For an application following this architecture, see the FundraisingFrontend, which uses both the Donation Context and Membership Context. Both these contexts are also used by another application, whose code sadly is not currently public. You can also read the stories of how we rewrote the FundraisingFrontend to use the Clean Architecture and how we refactored towards Bounded Contexts.

Further reading

If you are not yet familiar with Bounded Contexts or how to design them well, I recommend reading Domain-Driven Design Distilled.

In this follow-up to rewriting the Wikimedia Deutschland fundraising I tell the story of how we reorganized our codebases along the lines of the DDD strategic pattern Bounded Contexts.

In 2016 the FUN team at Wikimedia Deutschland rewrote the Wikimedia Deutschland fundraising application. This new codebase uses The Clean Architecture and near the end of the rewrite got reorganized partially towards Bounded Contexts. After adding many new features to the application in 2017, we reorganized further towards Bounded Contexts, this time also including our other fundraising applications. In this post I explain the questions we had and which decisions we ended up making. I also link to the relevant code so you have real world examples of using both The Clean Architecture and Bounded Contexts.

This post is a good primer to my Clean Architecture + Bounded Contexts post, which describes the structure and architecture rules we now have in detail.

Our initial rewrite

Back in 2014, we had two codebases, each in their own git repository and not using code from the other. The first one being a user facing PHP web-app that allows people to make donations and apply for memberships, called FundraisingFrontend. The “frontend” here stands for “user facing”. This is the application that we rewrote in 2016. The second codebase mainly contains the Fundraising Operations Center, a PHP web-app used by the fundraising staff to moderate and analyze donations and membership applications. This second codebase also contains some scripts to do exports of the data for communication with third-party systems.

Both the FundraisingFrontend and Fundraising Operations Center (FOC) used the same MySQL database. They each accessed this database in their own way, using PDO, Doctrine or SQL directly. In 2015 we created a Fundraising Store component based on Doctrine, to be used by both applications. Because we rewrote the FundraisingFrontend, all data access code there now uses this component. The FOC codebase we have been gradually migrating away from random SQL or PDO in its logic to also using this component, a process that is still ongoing.

Hence as we started our rewrite of the FundraisingFrontend in 2016, we had 3 sets of code: FundraisingFrontend, FOC and Fundraising Store. After our rewrite the picture still looked largely the same. What we had done was turn FundraisingFrontend from a big ball of mud into a well-designed application with a proper architecture and separation of subdomains using Bounded Contexts. So while there were multiple components within the FundraisingFrontend, it was still all in one codebase / git repository. (Except for several small PHP libraries, but they are not relevant here.)

Need for reorganization

During 2017 we did a bunch of work on the FOC application and the associated export scripts, all the while doing gradual refactoring towards sanity. While refactoring we found ourselves implementing things such as a DonationRepository, things that already existed in the Bounded Contexts part of the FundraisingFrontend codebase. We realized that at least the subset of the FOC application that does moderation is part of the same subdomains that we created those Bounded Contexts for.

The inability to use these Bounded Contexts in the FOC app was forcing us to do double work by creating a second implementation of perfectly good code we already had. This also forced us to pay a lot of extra attention to avoid inconsistencies.  For instance, we ran into issues with the FOC app persisting Donations in a state deemed invalid by the donations Bounded Context.

To address these issues we decided to share the Bounded Contexts that we created during the 2016 rewrite with all relevant applications.

Sharing our Bounded Contexts

We considered several approaches to sharing our Bounded Contexts between our applications. The first approach we considered was having a dedicated git repository per BC. To do this we would need to answer what exactly would go into this repository and what would stay behind. We were concerned that for many changes we'd need to touch both the BC git repo and the application git repo, which requires more work and coordination than being able to make a change in a single repository. This led us to consider options such as putting all BCs together into a single repo, to minimize this cost, or simply putting all code (BCs and applications) into a single huge repository.

We ended up going with one repo per BC, though we started with a single BC to see how well this approach would work before committing to it. With this approach we still faced the question of what exactly should go into the BC repo. Should that be just the UseCases and their dependencies, or also the presentation layer code that uses the UseCases? We decided to leave the presentation layer code in the application repositories, to avoid extra (heavy) dependencies in the BC repo and because the UseCases provide a nice interface to the BC. Following this approach it is easy to tell whether code belongs in the BC repo or not: if it binds to the domain model, it belongs in the BC repo.

These are the Bounded Context git repositories:

The BCs still use the FundraisingStore component in their data access services. Since this is not visible from outside of the BC, we can easily refactor towards removing the FundraisingStore and having the data access mechanism of a BC be truly private to it (as it is supposed to be for BCs).

The new BC repos allow us to continue gradual refactoring of the FOC and swap out legacy code with UseCases from the BCs. We can do so without duplicating what we already did and, thanks to the proper encapsulation, also without running into consistency issues.

Clean Architecture + Bounded Contexts

During our initial rewrite we created a diagram to represent the flavor of The Clean Architecture that we were using. For an updated version that depicts the structure of The Clean Architecture + Bounded Contexts and describes the rules of this architecture in detail, see my blog post The Clean Architecture + Bounded Contexts.

Diagram depicting Clean Architecture + Bounded Contexts

August 10, 2018

So you have a WooCommerce shop which uses YouTube videos to showcase your products, but those same videos are slowing your site down, as YouTube embeds typically do? WP YouTube Lyte can go a long way towards fixing that issue, replacing the "fat" embedded YouTube player with a LYTE alternative.

LYTE will automatically detect and replace YouTube links (oEmbeds) and iFrames in your content (blog posts, pages, product descriptions) but is not active on content that does not hook into WordPress' the_content filter (e.g. category descriptions or short product descriptions). To have LYTE active on those as well, just hook the respective filters up to the lyte_parse function and you're good to go:

if ( function_exists( 'lyte_parse' ) ) {
	// example: also run LYTE on WooCommerce short product descriptions
	add_filter( 'woocommerce_short_description', 'lyte_parse' );
}

And a LYTE video, in case you're wondering, looks like this (in this case beautiful harmonies by David Crosby & Venice, filmed way back in 1999 on Dutch TV):

[Embedded LYTE YouTube video]

August 09, 2018

We now invite proposals for main track presentations, developer rooms, stands and lightning talks. FOSDEM offers open source and free software developers a place to meet, share ideas and collaborate. Renowned for being highly developer-oriented, the event brings together some 8000+ geeks from all over the world. The nineteenth edition will take place on Saturday 2nd and Sunday 3rd February 2019 at the usual location: ULB Campus Solbosch in Brussels. We will record and stream all main tracks, devrooms and lightning talks live. The recordings will be published under the same licence as all FOSDEM content (CC-BY). If, exceptionally, …

Aristide's problem is that he spends a bit too much time on the Internet. And spending time on the Internet gives you ideas. Ideas like going to space, exploring the planets, and taking one small step for a rabbit but one giant leap for rabbitkind.

It's settled: Aristide will not become a salesman in the family carrot-export business. He will be a cosmonaut!

To get there, he needs your help with our crowdfunding campaign.

Born in the imagination of yours truly and brought to life by the graphic talent of Vinch, the adventures of Aristide were conceived as a children's book aimed at adults: dense text, rich vocabulary, absurd humour and a healthy dose of irony.

But don't we infantilize children a bit too much? They too are capable of getting caught up in a longer story, of appreciating the colourful naivety of a space race unlike any other. Aristide is therefore a children's book for adults for children. With a very current question running through it: should you believe everything you read on the Internet? Sometimes yes, sometimes no, and sometimes it gives you ideas…

Although this project has required an enormous amount of work and effort, we chose the path of self-publishing in order to produce a genuine children's book (but for adults for children, are you still following?) of the Internet age. Rather than optimizing costs, we aim above all to produce a book of quality in every aspect (printing, recycled paper). The final choice of printer has not been made yet, so if you happen to have any leads, let us know!

In short, I could talk about this project for hours, but today we mostly need your support, both financially and in spreading the word about the project on social networks (especially the offline ones: friends, family, parents at school, and so on).

On Aristide's behalf, a huge thank-you in advance!

Did you enjoy this read? Support the author on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Even a symbolic donation makes all the difference! And let's meet up afterwards on Facebook, Twitter or Mastodon.

This text is published under the CC-BY BE license.

August 07, 2018


I finished my earlier work on build environment examples, illustrating how to do versioning of shared object files right with autotools, qmake, cmake and meson. You can find it here.

The DIR examples are examples for various build environments of how to create a good project structure that will build libraries that are versioned with libtool, or have versioning equivalent to what libtool would deliver, have a pkg-config file and have a so-called API version in the library's name.

What is right?

Information on this can be found in the autotools mythbuster docs, the libtool docs on versioning and FreeBSD's chapter on shared libraries. I tried to ensure that what is written here works with all of the build environments in the examples., what is what?

You'll notice that a library called 'package' will in your LIBDIR often be called something like

We call the 4.3 part the APIVERSION, and the 2.1.0 part the VERSION (the ABI version).

I will explain these examples using semantic versioning as APIVERSION and either libtool's current:revision:age or a semantic-versioning alternative as the field for VERSION (as in FreeBSD, and for build environments where compatibility with libtool's -version-info feature isn't a requirement).

Note that with libtool's -version-info feature the values that you fill in for current, age and revision will not necessarily be identical to what ends up as the suffix of the soname in LIBDIR. The formula that forms the filename's suffix is, for libtool, "(current – age).age.revision". This means that for soname, you would need current=3, revision=0 and age=1.
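The suffix formula can be sketched in a couple of lines of shell, using the numbers from the example:

```shell
# libtool -version-info values from the example above
current=3; revision=0; age=1
# the on-disk filename suffix is (current - age).age.revision
suffix="$((current - age)).$age.$revision"
echo "$suffix"   # prints 2.1.0, i.e. the installed file would be
```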

The VERSION part

In case you want compatibility with, or use, libtool's -version-info feature, the document libtool/version.html states:

The rules of thumb, when dealing with these values are:

  • Increase the current value whenever an interface has been added, removed or changed.
  • Always increase the revision value.
  • Increase the age value only if the changes made to the ABI are backward compatible.

The updating-version-info part of the libtool docs (on the -version-info feature) states:

  1. Start with version information of ‘0:0:0’ for each libtool library.
  2. Update the version information only immediately before a public release of your software. More frequent updates are unnecessary, and only guarantee that the current interface number gets larger faster.
  3. If the library source code has changed at all since the last update, then increment revision (‘c:r:a’ becomes ‘c:r+1:a’).
  4. If any interfaces have been added, removed, or changed since the last update, increment current, and set revision to 0.
  5. If any interfaces have been added since the last public release, then increment age.
  6. If any interfaces have been removed or changed since the last public release, then set age to 0.

When you don’t care about compatibility with libtool’s -version-info feature, then you can take the following simplified rules for VERSION:

  • SOVERSION = Major version
  • Major version: increase it if you break ABI compatibility
  • Minor version: increase it if you add ABI compatible features
  • Patch version: increase it for bug fix releases.
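With these simplified rules, deriving the SOVERSION from a semver-style VERSION is a one-liner; a small sketch:

```shell
version="2.1.0"            # Major.Minor.Patch
soversion="${version%%.*}" # SOVERSION = Major
echo "$soversion"          # prints 2
```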

Examples where these simplified rules are or can be applicable are build environments like cmake, meson and qmake. When you use autotools you will be using libtool, and then they aren't applicable.


The APIVERSION part

For the API version I will use the rules from You can also use the semver rules for your package's version:

Given a version number MAJOR.MINOR.PATCH, increment the:

  1. MAJOR version when you make incompatible API changes,
  2. MINOR version when you add functionality in a backwards-compatible manner, and
  3. PATCH version when you make backwards-compatible bug fixes.

When you have an API, that API can change over time. You typically want to version those API changes so that the users of your library can adopt newer versions of the API while other users still use older versions of it. For this we can follow section 4.3, called "multiple libraries versions", of the autotools mythbuster documentation. It states:

In this situation, the best option is to append part of the library's version information to the library's name, which is exemplified by Glib's soname. To do so, the declaration in the has to be like this:


libtest_1_0_la_LDFLAGS = -version-info 0:0:0
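Applied to the running package-4.3 example (VERSION 2.1.0), an equivalent fragment could look like this; the target and source names are hypothetical:

```
lib_LTLIBRARIES =
package_4_3_la_SOURCES = src/package.c
# current:revision:age = 3:0:1 yields the filename suffix (3-1).1.0 = 2.1.0
package_4_3_la_LDFLAGS = -version-info 3:0:1
```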

The pkg-config file

Many people use many build environments (autotools, qmake, cmake, meson, you name it). Nowadays almost all of those build environments support pkg-config out of the box, both for generating the file and for consuming the file to get information about dependencies.

I consider it a necessity to ship a useful and correct pkg-config .pc file. The filename should be /usr/lib/pkgconfig/package-APIVERSION.pc for soname In our example that means /usr/lib/pkgconfig/package-4.3.pc. We'd use the command pkg-config package-4.3 --cflags --libs, for example.
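As a sketch, such a package-4.3.pc could look like the following (the paths, description and version number are illustrative assumptions):

```
prefix=/usr
libdir=${prefix}/lib
includedir=${prefix}/include

Name: package
Description: Hypothetical example library
Version: 4.3.2
Libs: -L${libdir} -lpackage-4.3
Cflags: -I${includedir}/package-4.3
```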

An example is GLib's pkg-config file, located at /usr/lib/pkgconfig/glib-2.0.pc.

The include path

I consider it a necessity to ship API headers in a different location per API version (like, for example, GLib's at /usr/include/glib-2.0). This means that your API version number must be part of the include path.

For example, using the earlier mentioned API version 4.3: /usr/include/package-4.3 for /usr/lib/, which has /usr/lib/pkgconfig/package-4.3.pc.

What will the linker typically link with?

For -lpackage-4.3 the linker will typically link with /usr/lib/ or with – age). Note that the part that is calculated as (current – age) in this example is often, for example in cmake and meson, referred to as the SOVERSION. With SOVERSION, the soname template in LIBDIR is

What is wrong?

Not doing any versioning

Without versioning you can't make any API or ABI changes that won't break all your users' code in a way that could be manageable for them. If you do decide not to do any versioning, then at least don't put anything behind the .so part of your library's filename. That way, at least you won't break things in spectacular ways.

Coming up with your own versioning scheme

Knowing it better than the rest of the world will, in spectacular ways, make everything you do break with what the entire rest of the world does. You shouldn't congratulate yourself on that. The only thing that can be said about it is that it probably makes little sense, and that others will probably start ignoring your work. Your mileage may vary. Keep in mind that without a correct SOVERSION, certain things will simply not work correctly.

In case of libtool: using your package’s (semver) release numbering for current, revision, age

This is similarly wrong to ‘Coming up with your own versioning scheme’.

The Libtool documentation on updating version info is clear about this:

Never try to set the interface numbers so that they correspond to the release number of your package. This is an abuse that only fosters misunderstanding of the purpose of library versions.

This basically means that once you are using libtool, also use libtool’s versioning rules.

Refusing or forgetting to increase the current and/or SOVERSION on breaking ABI changes

The current part of the VERSION (current, revision and age) minus age, in other words the SOVERSION, is the most significant field. The current and age are usually involved in forming that SOVERSION, which in turn is used by the linker to know which ABI version to link with. That makes it ... damn important.

Some people think "all this is just too complicated for me, I will just refuse to do anything and always release using the same version numbers". That goes spectacularly wrong whenever you make ABI-incompatible changes. It's similarly wrong to 'Coming up with your own versioning scheme'.

That way, after your shared library gets updated, all programs that link with it can easily crash, can corrupt data, and might or might not work.

By updating the current and age, or the SOVERSION, you basically trigger the people who manage packages, and their tooling, to rebuild the programs that link with your shared library. You actually want that the moment you have made breaking ABI changes in a newer version of it.

When you don't want to care about libtool's -version-info feature, then there is also a set of simpler rules to follow. Those rules are, for VERSION:

  • SOVERSION = Major version (with these simplified set of rules, no subtracting of current with age is needed)
  • Major version: increase it if you break ABI compatibility
  • Minor version: increase it if you add ABI compatible features
  • Patch version: increase it for bug fix releases.

What isn’t wrong?

Not using libtool (but nonetheless doing ABI versioning right)

GNU libtool was made to make certain things easier. Nowadays many popular build environments also make things easier. Meanwhile, GNU libtool has been around for a long time, and its versioning rules, commonly known as the current:revision:age field passed as parameter to -version-info, got widely adopted.

What GNU libtool did was, however, not really a standard. It is one interpretation of how to do it. And a rather complicated one, at that.

Please let it be crystal clear that not using libtool does not mean that you can do ABI versioning wrong. Very often people seem to think that they can, and that they'll still get out safely while doing ABI versioning completely wrong. This is not the case.

Not having an APIVERSION at all

It isn't wrong not to have an APIVERSION in the soname. It does, however, mean that you promise never to break the API. Because the moment you break the API, you disallow your users to stay on the old API for a little longer. They might have programs that use the old API and programs that use the new one. Now what?

When you have an APIVERSION then you can allow the introduction of a new version of the API while simultaneously the old API remains available on a user’s system.

Using a different naming-scheme for APIVERSION

I used the MAJOR.MINOR version numbers from semver to form the APIVERSION. I did this because only the MAJOR and the MINOR are technically involved in API changes (unless you are doing semantic versioning wrong – in which case see ‘Coming up with your own versioning scheme’).

Some projects use only MAJOR. An example is Qt, which puts the MAJOR number behind the Qt part: for example (so that's "Qt" + MAJOR + Module). The GLib world, however, uses "g" + Module + "-" + MAJOR + ".0", as they have releases like 2.2, 2.3, 2.4 that are all called I guess they figured that maybe someday in their 2.x series, they could use that MINOR field?

DBus seems to use something similar to GLib, but without the MINOR suffix: For their GLib integration they also use it as

Who is right, who is wrong? It doesn't matter too much for your APIVERSION naming scheme, as long as there is a way to differentiate the API in a) the include path, b) the pkg-config filename and c) the library that will be linked with (the -l parameter during linking/compiling). Maybe someday a standard will be defined? Let's hope so.

Differences in interpretation per platform


FreeBSD's Shared Libraries section of Chapter 5, Source Tree Guidelines and Policies, states:

The three principles of shared library building are:

  1. Start from 1.0
  2. If there is a change that is backwards compatible, bump minor number (note that ELF systems ignore the minor number)
  3. If there is an incompatible change, bump major number

For instance, added functions and bugfixes result in the minor version number being bumped, while deleted functions, changed function call syntax, etc. will force the major version number to change.

I think that when using libtool on FreeBSD (when you use autotools), the platform will provide a variant of libtool's scripts that converts the earlier mentioned current, revision and age rules to FreeBSD's. The same goes for the VERSION variable in cmake and qmake. Meaning that with those three build environments, you can just use the rules for GNU libtool's -version-info.

I could be wrong about this, but I did find mailing list e-mails from around 2011 stating that this SNAFU is dealt with. Besides, the *BSD porters otherwise know what to do, and you could of course always ask them about it.

Note that FreeBSD’s rules are or seem to be compatible with the rules for VERSION when you don’t want to care about libtool’s -version-info compatibility. However, when you are porting from a libtoolized project, then of course you don’t want to let newer releases break against releases that have already happened.

Modern Linux distributions

Nowadays you sometimes see things like /usr/lib/$ARCH/ linking to /lib/$ARCH/ I have no idea how this mechanism works. I suppose this is done by the packagers of various Linux distributions? I also don't know whether there is a standard for it.

I will update the examples and this document the moment I know more and/or if upstream developers need to worry about it. I think that using GNUInstallDirs in cmake, for example, makes everything go right. I have not found much for this in qmake, meson seems to be doing this by default and in autotools you always use platform variables for such paths.

As usual, I hope standards will be made and that the build environment and packaging communities come to their senses and stop leaving this in the hands of developers. I'm especially thinking of qmake, which seems to have very little to say about using standardized installation paths (not even a proper way to define a prefix).

Questions that I can imagine already exist

Why is there a difference between APIVERSION and VERSION?

The API version is the version of your programmable interfaces. This means the version of your header files (if your programming language has such header files), the version of your pkg-config file, the version of your documentation. The API is what software developers need to utilize your library.

The ABI version can definitely be different, and it is what compiled and installed programs need to utilize your library.

An API breaks when recompiling a program that consumes package-4.3, without any changes to that program, is not going to succeed at compile time. The API got broken the moment any possible way in which package-4.3's API was used won't compile. Yes, any way. It means that a package-5.0 should be started.

An ABI breaks when, without recompiling the program, replacing a with a or a (or later) as is not going to succeed at runtime. For example because it would crash, or because the results would be wrong (in any way). It implies that shouldn't be overwritten, but that a should be started.

For example, when you change a parameter of a function in C from an integer to a floating point (or the other way around), then that's an ABI change but not necessarily an API change.

What is this SOVERSION about?

In most projects that got ported from an environment that uses GNU libtool (for example autotools) to cmake or meson (and in the rare cases that they did anything at all in a qmake based project), I saw people convert the current, revision and age parameters that they passed to the -version-info option of libtool into a VERSION string of “(current – age).age.revision”, with (current – age) as SOVERSION.
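This conversion can be sketched in a few lines of shell (plain arithmetic; the variable names are mine, not libtool's):

```shell
# Map libtool's -version-info current:revision:age to the VERSION and
# SOVERSION values used by the other build systems.
current=3; revision=0; age=1

soversion=$((current - age))                      # SOVERSION = current - age
version="${soversion}.${age}.${revision}"         # VERSION = (current - age).age.revision

echo "SOVERSION=${soversion} VERSION=${version}"  # prints: SOVERSION=2 VERSION=2.1.0
```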

I wanted to use the exact same versioning rules for all these examples, including autotools and GNU libtool. When you don’t have to (or want to) care about libtool’s set of (for some people, needlessly complicated) -version-info rules, it should be fine to derive SOVERSION and VERSION using these rules:

  • SOVERSION = Major version
  • Major version: increase it if you break ABI compatibility
  • Minor version: increase it if you add ABI compatible features
  • Patch version: increase it for bug fix releases.

I did, however, also sometimes see incomprehensible variations, with little explanation and magic foo invented on the spot. Those variations are probably wrong.

In the examples I made it so that you can change the numbers, and the calculation for the numbers, in the root build file of the project. However, do follow the rules for those correctly, as this versioning is about ABI compatibility. Doing this wrong can make things blow up in spectacular ways.

The examples

qmake in the qmake-example

Note that the VERSION variable must be filled in as “(current – age).age.revision” for qmake (to get 2.1.0 at the end, you need VERSION=2.1.0 when current=3, revision=0 and age=1)
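In the .pro file this boils down to something like the following (a hypothetical excerpt; the target name matches the pkg-config output below, the rest is illustrative):

```qmake
# qmake derives the .so file name and its symlinks from VERSION
TEMPLATE = lib
TARGET = qmake-example-4.3
VERSION = 2.1.0  # (current - age).age.revision, with current=3, revision=0, age=1
```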

To try this example out, go to the qmake-example directory and type

$ cd qmake-example
$ mkdir _test
$ qmake PREFIX=$PWD/_test
$ make
$ make install

This should give you this:

$ tree _test/
_test/
├── include
│   └── qmake-example-4.3
│       └── qmake-example.h
└── lib
    ├── libqmake-example-4.3.so -> libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.3.so.2 -> libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.3.so.2.1 -> libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.3.so.2.1.0
    └── pkgconfig
        └── qmake-example-4.3.pc

When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I’m replacing the current path with $PWD in the output each time):

$ export PKG_CONFIG_PATH=$PWD/_test/lib/pkgconfig
$ pkg-config qmake-example-4.3 --cflags
-I$PWD/_test/include/qmake-example-4.3
$ pkg-config qmake-example-4.3 --libs
-L$PWD/_test/lib -lqmake-example-4.3

And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment).

$ export LD_LIBRARY_PATH=$PWD/_test/lib
$ echo -e "#include <qmake-example.h>\nint main() {}" > test.cpp
$ g++ -fPIC test.cpp -o test.o `pkg-config qmake-example-4.3 --libs --cflags`

You can see that it got linked to libqmake-example-4.3.so.2, where that 2 at the end is (current – age).

$ ldd test.o
    linux-gate.so.1 => (0xb77b0000)
    libqmake-example-4.3.so.2 => $PWD/_test/lib/libqmake-example-4.3.so.2 (0xb77a6000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb75f5000)
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb759e000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb7580000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb73c9000)
    /lib/ld-linux.so.2 (0xb77b2000)

cmake in the cmake-example

Note that the VERSION property on your library target must be filled in with “(current – age).age.revision” for cmake (to get 2.1.0 at the end, you need VERSION=2.1.0 when current=3, revision=0 and age=1. Note that in cmake you must also fill in the SOVERSION property as (current – age), so SOVERSION=2 when current=3 and age=1).
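In CMake terms this might look like the following (a hypothetical excerpt; the target name matches the build output below, the rest is illustrative):

```cmake
# VERSION and SOVERSION control the .so file name and its symlinks
set_target_properties(cmake-example PROPERTIES
    VERSION 2.1.0  # (current - age).age.revision
    SOVERSION 2    # current - age
)
```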

To try this example out, go to the cmake-example directory and do

$ cd cmake-example
$ mkdir _test
$ cmake -DCMAKE_INSTALL_PREFIX:PATH=$PWD/_test .
-- Configuring done
-- Generating done
-- Build files have been written to: .
$ make
[ 50%] Building CXX object src/libs/cmake-example/CMakeFiles/cmake-example.dir/cmake-example.cpp.o
[100%] Linking CXX shared library libcmake-example-4.3.so
[100%] Built target cmake-example
$ make install
[100%] Built target cmake-example
Install the project...
-- Install configuration: ""
-- Installing: $PWD/_test/lib/libcmake-example-4.3.so.2.1.0
-- Up-to-date: $PWD/_test/lib/libcmake-example-4.3.so.2
-- Up-to-date: $PWD/_test/lib/libcmake-example-4.3.so
-- Up-to-date: $PWD/_test/include/cmake-example-4.3/cmake-example.h
-- Up-to-date: $PWD/_test/lib/pkgconfig/cmake-example-4.3.pc

This should give you this:

$ tree _test/
_test/
├── include
│   └── cmake-example-4.3
│       └── cmake-example.h
└── lib
    ├── libcmake-example-4.3.so -> libcmake-example-4.3.so.2
    ├── libcmake-example-4.3.so.2 -> libcmake-example-4.3.so.2.1.0
    ├── libcmake-example-4.3.so.2.1.0
    └── pkgconfig
        └── cmake-example-4.3.pc

When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I’m replacing the current path with $PWD in the output each time):

$ export PKG_CONFIG_PATH=$PWD/_test/lib/pkgconfig
$ pkg-config cmake-example-4.3 --cflags
-I$PWD/_test/include/cmake-example-4.3
$ pkg-config cmake-example-4.3 --libs
-L$PWD/_test/lib -lcmake-example-4.3

And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment):

$ export LD_LIBRARY_PATH=$PWD/_test/lib
$ echo -e "#include <cmake-example.h>\nint main() {}" > test.cpp
$ g++ -fPIC test.cpp -o test.o `pkg-config cmake-example-4.3 --libs --cflags`

You can see that it got linked to libcmake-example-4.3.so.2, where that 2 at the end is the SOVERSION. This is (current – age).

$ ldd test.o
    linux-gate.so.1 => (0xb7729000)
    libcmake-example-4.3.so.2 => $PWD/_test/lib/libcmake-example-4.3.so.2 (0xb771f000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb756e000)
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb7517000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb74f9000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb7342000)
    /lib/ld-linux.so.2 (0xb772b000)

autotools in the autotools-example

Note that with autotools you pass -version-info current:revision:age directly. libtool will translate that to (current – age).age.revision to form the .so filename (to get 2.1.0 at the end, you need current=3, revision=0, age=1).
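In Makefile.am this might look like the following (a hypothetical excerpt; the library name is taken from the pkg-config output below):

```makefile
lib_LTLIBRARIES = libautotools-example-4.3.la
libautotools_example_4_3_la_LDFLAGS = -version-info 3:0:1  # current:revision:age
```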

To try this example out, go to the autotools-example directory and do

$ cd autotools-example
$ mkdir _test
$ libtoolize
$ aclocal
$ autoheader
$ autoconf
$ automake --add-missing
$ ./configure --prefix=$PWD/_test
$ make
$ make install

This should give you this:

$ tree _test/
_test/
├── include
│   └── autotools-example-4.3
│       └── autotools-example.h
└── lib
    ├── libautotools-example-4.3.a
    ├── libautotools-example-4.3.so -> libautotools-example-4.3.so.2.1.0
    ├── libautotools-example-4.3.so.2 -> libautotools-example-4.3.so.2.1.0
    ├── libautotools-example-4.3.so.2.1.0
    └── pkgconfig
        └── autotools-example-4.3.pc

When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I’m replacing the current path with $PWD in the output each time):

$ export PKG_CONFIG_PATH=$PWD/_test/lib/pkgconfig
$ pkg-config autotools-example-4.3 --cflags
-I$PWD/_test/include/autotools-example-4.3
$ pkg-config autotools-example-4.3 --libs
-L$PWD/_test/lib -lautotools-example-4.3

And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment):

$ echo -e "#include <autotools-example.h>\nint main() {}" > test.cpp
$ export LD_LIBRARY_PATH=$PWD/_test/lib
$ g++ -fPIC test.cpp -o test.o `pkg-config autotools-example-4.3 --libs --cflags`

You can see that it got linked to libautotools-example-4.3.so.2, where that 2 at the end is (current – age).

$ ldd test.o
    linux-gate.so.1 => (0xb778d000)
    libautotools-example-4.3.so.2 => $PWD/_test/lib/libautotools-example-4.3.so.2 (0xb7783000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb75d2000)
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb757b000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb755d000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb73a6000)
    /lib/ld-linux.so.2 (0xb778f000)

meson in the meson-example

Note that the version property on your library target must be filled in with “(current – age).age.revision” for meson (to get 2.1.0 at the end, you need version=2.1.0 when current=3, revision=0 and age=1. Note that in meson you must also fill in the soversion property as (current – age), so soversion=2 when current=3 and age=1).
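In meson.build this might look like the following (a hypothetical excerpt; the source file name is illustrative):

```meson
# version and soversion control the .so file name and its symlinks
meson_example_lib = library('meson-example-4.3', 'meson-example.cpp',
  version: '2.1.0',  # (current - age).age.revision
  soversion: '2',    # current - age
  install: true)
```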

To try this example out, go to the meson-example directory and do

$ cd meson-example
$ mkdir -p _build/_test
$ cd _build
$ meson .. --prefix=$PWD/_test
$ ninja
$ ninja install

This should give you this:

$ tree _test/
_test/
├── include
│   └── meson-example-4.3
│       └── meson-example.h
└── lib
    └── i386-linux-gnu
        ├── libmeson-example-4.3.so -> libmeson-example-4.3.so.2.1.0
        ├── libmeson-example-4.3.so.2 -> libmeson-example-4.3.so.2.1.0
        ├── libmeson-example-4.3.so.2.1.0
        └── pkgconfig
            └── meson-example-4.3.pc

When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I’m replacing the current path with $PWD in the output each time):

$ export PKG_CONFIG_PATH=$PWD/_test/lib/i386-linux-gnu/pkgconfig
$ pkg-config meson-example-4.3 --cflags
-I$PWD/_test/include/meson-example-4.3
$ pkg-config meson-example-4.3 --libs
-L$PWD/_test/lib/i386-linux-gnu -lmeson-example-4.3

And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment):

$ echo -e "#include <meson-example.h>\nint main() {}" > test.cpp
$ export LD_LIBRARY_PATH=$PWD/_test/lib/i386-linux-gnu
$ g++ -fPIC test.cpp -o test.o `pkg-config meson-example-4.3 --libs --cflags`

You can see that it got linked to libmeson-example-4.3.so.2, where that 2 at the end is the soversion. This is (current – age).

$ ldd test.o
    linux-gate.so.1 => (0xb772e000)
    libmeson-example-4.3.so.2 => $PWD/_test/lib/i386-linux-gnu/libmeson-example-4.3.so.2 (0xb7724000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb7573000)
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb751c000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb74fe000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb7347000)
    /lib/ld-linux.so.2 (0xb7730000)

August 03, 2018

So it’s probably the crazy nineties, and you have Adriano Celentano in a live show, joined by a young Manu Chao (then the “leader” of Mano Negra), and they perform a weird mix of “Prisencolinensinainciusol” and “King Kong 5”, and they throw in an interview during the song, and there’s lots of people dancing, a pretty awkward upskirt shot and general craziness.

This must (have) be(en) Italian TV, mustn’t it?



August 01, 2018

Today, Acquia was named a leader in the 2018 Gartner Magic Quadrant for Web Content Management. Acquia has now been recognized as a leader for five years in a row.

Acquia recognized as a leader, next to Adobe and Sitecore, in the 2018 Gartner Magic Quadrant for Web Content Management.

Analyst reports like the Gartner Magic Quadrant are important because they introduce organizations to Acquia and Drupal. Last year, I explained it in the following way: "If you want to find a good coffee place, you use Yelp. If you want to find a nice hotel in New York, you use TripAdvisor. Similarly, if a CIO or CMO wants to spend $250,000 or more on enterprise software, they often consult an analyst firm like Gartner."

Our tenure as a top vendor is not only a strong endorsement of Acquia's strategy and vision, but also underscores our consistency. Drupal and Acquia are here to stay!

What I found interesting about this year's report is the increased emphasis on flexibility and ease of integration. I've been saying this for a few years now, but it's all about innovation through integration, rather than just innovation in the core platform itself.

An image of the Marketing Technology Landscape 2018. For reference, here are the 2011, 2012, 2014, 2015, 2016 and 2017 versions of the landscape. It shows how fast the marketing technology industry is growing.

Just look at the 2018 Martech 5000 — the supergraphic now includes 7,000 marketing technology solutions, which is a 27% increase from a year ago. This accelerated innovation isn't exclusive to marketing technology; it's happening across every part of the enterprise technology stack. From headless commerce integrations to the growing adoption of JavaScript frameworks and emerging cross-channel experiences, organizations have the opportunity to re-imagine customer experiences like never before.

It's not surprising that customers are looking for an open platform that allows for open innovation and unlimited integrations. The best way to serve this need is through open APIs, decoupled architectures and an Open Source innovation model. This is why Drupal can offer its users thousands of integrations, more than all of the other Gartner leaders combined.

Acquia Experience Platform

When you marry Drupal's community-driven innovation with Acquia's cloud platform and suite of marketing tools, you get an innovative solution across every layer of your technology stack. It allows our customers to bring powerful new experiences to market, across the web, mobile, native applications, chatbots and more. Most importantly, it gives customers the freedom to build on their own terms.

Thank you to everyone who contributed to this result!

When creating an application that follows The Clean Architecture you end up with a number of UseCases that hold your application logic. In this blog post I outline a testing pattern for effectively testing these UseCases and avoiding common pitfalls.

Testing UseCases

A UseCase contains the application logic for a single “action” that your system supports. For instance “cancel a membership”. This application logic interacts with the domain and various services. These services and the domain should have their own unit and integration tests. Each UseCase gets used in one or more applications, where it gets invoked from inside the presentation layer. Typically you want to have a few integration or edge-to-edge tests that cover this invocation. In this post I look at how to test the application logic of the UseCase itself.

UseCases tend to have “many” collaborators. I can’t recall any that had fewer than 3. For the typical UseCase the number is likely closer to 6 or 7, with more collaborators being possible even when the design is good. That means constructing a UseCase takes some work: you need to provide working instances of all the collaborators.

Integration Testing

One way to deal with this is to write integration tests for your UseCases. Simply get an instance of the UseCase from your Top Level Factory or Dependency Injection Container.

This approach often requires you to mutate the factory or DIC. Want to test that an exception from the persistence service gets handled properly? You’ll need to use some test double instead of the real service, or perhaps mutate the real service in some way. Want to verify a mail got sent? You definitely want to use a Spy here instead of the real service. Mutability comes with a cost, so it is better avoided.

A second issue with using real collaborators is that your tests get slow due to real persistence usage. Even using an in-memory SQLite database (that needs initialization) instead of a simple in-memory fake repository makes for a speed difference of easily two orders of magnitude.

Unit Testing

While there might be some cases where integration tests make sense, normally it is better to write unit tests for UseCases. This means having test doubles for all collaborators. Which leads us to the question of how to best inject these test doubles into our UseCases.

As an example I will use the CancelMembershipApplicationUseCase of the Wikimedia Deutschland fundraising application.

function __construct(ApplicationAuthorizer $authorizer, ApplicationRepository $repository, TemplateMailerInterface $mailer) {
    $this->authorizer = $authorizer;
    $this->repository = $repository;
    $this->mailer = $mailer;
}

This UseCase uses 3 collaborators: an authorization service, a repository (persistence service) and a mailing service. First it checks whether the operation is allowed with the authorizer, then it interacts with the persistence service and finally, if all went well, it uses the mailing service to send a confirmation email. Our unit test should test all this behavior and needs to inject test doubles for the 3 collaborators.

The most obvious approach is to construct the UseCase in each test method.

public function testGivenIdOfUnknownDonation_cancellationIsNotSuccessful(): void {
    $useCase = new CancelMembershipApplicationUseCase(
        new SucceedingAuthorizer(),
        $this->repository,
        new MailerSpy()
    );

    $response = $useCase->cancelApplication(
        new CancellationRequest( self::ID_OF_NON_EXISTING_APPLICATION )
    );

    $this->assertFalse( $response->cancellationWasSuccessful() );
}

public function testGivenIdOfCancellableApplication_cancellationIsSuccessful(): void {
    $useCase = new CancelMembershipApplicationUseCase(
        new SucceedingAuthorizer(),
        $this->repository,
        new MailerSpy()
    );

    $response = $useCase->cancelApplication(
        new CancellationRequest( $this->cancelableApplication->getId() )
    );

    $this->assertTrue( $response->cancellationWasSuccessful() );
}

Note how both these test methods use the same test doubles. This is not always the case, for instance when testing authorization failure, the test double for the authorizer service will differ, and when testing persistence failure, the test double for the persistence service will differ.

public function testWhenAuthorizationFails_cancellationFails(): void {
    $useCase = new CancelMembershipApplicationUseCase(
        new FailingAuthorizer(),
        $this->repository,
        new MailerSpy()
    );

    $response = $useCase->cancelApplication(
        new CancellationRequest( $this->cancelableApplication->getId() )
    );

    $this->assertFalse( $response->cancellationWasSuccessful() );
}

Normally a test function will only change a single test double.

UseCases tend to have, on average, two or more behaviors (and thus tests) per collaborator. That means for most UseCases you will be repeating the construction of the UseCase in a dozen or more test functions. That is a problem. Ask yourself why.

If the answer you came up with was DRY then think again and read my blog post on DRY 😉 The primary issue is that you couple each of those test methods to the list of collaborators. So when the constructor signature of your UseCase changes, you will need to do Shotgun Surgery and update all test functions. Even if those tests have nothing to do with the changed collaborator. A second issue is that you pollute the test methods with irrelevant details, making them harder to read.

Default Test Doubles Pattern

The pattern is demonstrated using PHP + PHPUnit and will need some adaptation when using a testing framework that does not work with a class based model like that of PHPUnit.

The coupling to the constructor signature and the resulting Shotgun Surgery can be avoided by having a default instance of the UseCase filled with the right test doubles. This can be done by having a newUseCase method that constructs the UseCase and returns it. A way to change specific collaborators is needed (e.g. a FailingAuthorizer to test handling of failing authorization).

private function newUseCase() {
    return new CancelMembershipApplicationUseCase(
        new SucceedingAuthorizer(),
        new InMemoryApplicationRepository(),
        new MailerSpy()
    );
}

Making the UseCase itself mutable is a big no-no. Adding optional parameters to the newUseCase method works in languages that have named parameters. Since PHP does not have named parameters, another solution is needed.

An alternative approach to getting modified collaborators into the newUseCase method is using fields. This is less nice than named parameters, as it introduces mutable state on the level of the test class. Since in PHP this approach gives us named fields and is understandable by tools, it is better than either using a positional list of optional arguments or emulating named arguments with an associative array (key-value map).

The fields can be set in the setUp method, which gets called by PHPUnit before the test methods. For each test method PHPUnit instantiates the test class, then calls setUp, and then calls the test method.

public function setUp() {
    $this->authorizer = new SucceedingAuthorizer();
    $this->repository = new InMemoryApplicationRepository();
    $this->mailer = new MailerSpy();

    $this->cancelableApplication = ValidMembershipApplication::newDomainEntity();
    $this->repository->storeApplication( $this->cancelableApplication );
}

private function newUseCase(): CancelMembershipApplicationUseCase {
    return new CancelMembershipApplicationUseCase(
        $this->authorizer,
        $this->repository,
        $this->mailer
    );
}
With this field-based approach individual test methods can modify a specific collaborator by writing to the field before calling newUseCase.

public function testWhenAuthorizationFails_cancellationFails(): void {
    $this->authorizer = new FailingAuthorizer();

    $response = $this->newUseCase()->cancelApplication(
        new CancellationRequest( $this->cancelableApplication->getId() )
    );

    $this->assertFalse( $response->cancellationWasSuccessful() );
}

public function testWhenSaveFails_cancellationFails() {
    // swap $this->repository for a double that fails on save

    $response = $this->newUseCase()->cancelApplication(
        new CancellationRequest( $this->cancelableApplication->getId() )
    );

    $this->assertFalse( $response->cancellationWasSuccessful() );
}

The choice of default collaborators is important. To minimize binding in the test functions, the default collaborators should not cause any failures. This is the case both when using the field-based approach and when using optional named parameters.

If the authorization service failed by default, most test methods would need to modify it, even if they have nothing to do with authorization. And it is not always self-evident they need to modify the unrelated collaborator. Imagine the default authorization service indeed fails and that the testWhenSaveFails_cancellationFails test method forgets to modify it. This test method would end up passing even if the behavior it tests is broken, since the UseCase will return the expected failure result even before getting to the point where it saves something.

This is why inside of the setUp function the example creates a “cancellable application” and puts it inside an in-memory test double of the repository.

I chose the CancelMembershipApplication UseCase as an example because it is short and easy to understand. For most UseCases it is even more important to avoid the constructor signature coupling as this issue becomes more severe with size. And no matter how big or small the UseCase is, you benefit from not polluting your tests with unrelated setup details.

You can view the whole CancelMembershipApplicationUseCase and CancelMembershipApplicationUseCaseTest.


July 31, 2018

I published the following diary on “Exploiting the Power of Curl“:

Didier explained in a recent diary that it is possible to analyze malicious documents with standard Linux tools. I’m using Linux for more than 20 years and, regularly, I find new commands or new switches that help me to perform recurring (boring?) tasks in a more efficient way. How to use these tools can be found by running them with the flag ‘-h’ or ‘–help’. They also have a corresponding man page that describes precisely how to use the numerous options available (just type ‘man <command>’ in your shell)… [Read more]

[The post [SANS ISC] Exploiting the Power of Curl has been first published on /dev/random]

That Francken, didn’t he, as State Secretary, swear an oath on our Belgian constitution?

Because claiming that his hypothetical assumptions stand above a decision of the judiciary goes against one of the principles of our constitution: the separation of powers. Someone who holds office, who has sworn an oath on that constitution and then goes completely against it, commits perjury and is punishable.

A State Secretary who cannot keep his oath and who has no respect for the Belgian constitution cannot, as far as I’m concerned, stay in office. No matter how popular his populist drivel makes him.

July 30, 2018

Digital backpack

I recently heard a heart-warming story from the University of California, Davis. Last month, UC Davis used Drupal to launch Article 26 Backpack, a platform that helps Syrian Refugees document and share their educational credentials.

Over the course of the Syrian civil war, more than 12 million civilians have been displaced. Hundreds of thousands of these refugees are students, who now have to overcome the obstacle of re-entering the workforce or pursuing educational degrees away from home.

Article 26 Backpack addresses this challenge by offering refugees a secure way to share their educational credentials with admissions offices, scholarship agencies, and potential employers. The program also includes face-to-face counseling to provide participants with academic advising and career development.

The UC Davis team launched their Drupal 8 application for Article 26 Backpack in four months. On the site, students can securely store their educational data, such as diplomas, transcripts and resumes. The next phase of the project will be to leverage Drupal's multilingual capabilities to offer the site in Arabic as well.

This is a great example of how organizations are using Drupal to prioritize impact. It's always inspiring to hear stories of how Drupal is changing lives for the better. Thank you to the UC Davis team for sharing their story, and keep up the good work!


No, despite what the title might suggest, this isn't a post about Microsoft. I'm talking about the Xiaomi M365 electric step. I've lately been looking to use my car far less, partly because parking space is very limited at the train station. Getting fined for not parking at designated places surely doesn't help either. It took me a while to obtain an M365 before summer, but eventually I got it. The e-step has an autonomy of 20 km (in my case), and this just suffices for the round trip from/to the station. The step has a maximum speed of 25 km/h, which is 'acceptable': I would have preferred a bit faster, as overtaking bikes sometimes takes a while.

This e-step is quite high-tech: it features cruise control, ABS and KERS, which makes me hardly use the brakes. Cruising at 25 km/h really is a blast, and I have become quite fond of my daily ride. Additionally, it allows me to explore different routes, is more versatile than a bike, and can be taken with me on the train (although its size, even when folded, is quite large).

There's quite an active group of 'developers' around this e-step, creating custom firmware that allows changing parameters such as the maximum speed or the KERS behaviour. I've tested out a few, but the additional speed comes with too much impact on the battery, so I decided to stick with the official firmware.

Mobile phones

I'll take back anything I said about the Xiaomi Mi Mix. Well, at least about the slippery part: five months after my purchase, I placed the phone on a pile of papers on my desk. Five minutes later, it must have noticed it wasn't placed 100% horizontally and decided to go for a walk. It fell on the floor: cracked screen. This thing *is* slippery as soap.

I still find the design of the phone the most beautiful I've ever seen, and the large screen is just gorgeous! But you cannot use this phone without a cover, and that just breaks the beauty of the phone. That, in combination with the fact that mobile reception is zero without band 20 support, and that putting a light sensor at the bottom of the screen is just irritating, was reason enough for me not to order another Mi Mix.

I've switched to a OnePlus 5T as a new phone. The design just pales in comparison with the Mi Mix, but the development support is fantastic, and my OnePlus One was brilliant. I hope I'll be able to say the same about the OP5T in three to four years.

July 27, 2018

On the heels of Microsoft acquiring GitHub for $7.5 billion, Google has partnered with Microsoft to provide a continuous integration and delivery platform for GitHub. While I predicted Microsoft would integrate build tools into GitHub, I didn't expect them to integrate with Google's as well. Google and GitHub probably partnered on this before the Microsoft acquisition, but I'm pleasantly surprised that Microsoft has decided to offer more than Azure-based solutions. It sends a strong message to anyone who was worried about Microsoft's acquisition of GitHub, and should help put worries about GitHub's independence to rest. Satya Nadella clearly understands and values the Open Source movement and continues to impress me. What an interesting time to be a developer and to observe the cloud wars!

July 26, 2018

I published the following diary on “Windows Batch File Deobfuscation“:

Last Thursday, Brad published a diary about a new ongoing campaign delivering the Emotet malware. I found another sample that looked the same. My sample was called ‘Order-42167322776.doc’ (SHA256: 4d600ae3bbdc846727c2922485f9f7ec548a3dd031fc206dbb49bd91536a56e3) and looked the same as the one analyzed by Brad. The infection chain was almost the same… [Read more]

[The post [SANS ISC] Windows Batch File Deobfuscation has been first published on /dev/random]

A few days ago, I published a diary on the SANS Internet Storm Center website about a Javascript file that was altered to deliver a cryptominer into the victim’s browser. Since my first finding, I’m hunting for more samples. The best way to identify them is to search for the following piece of code:

var foo = navigator['hardwareConcurrency'] || 0x4;

This is useful to detect the number of cores available. I already found plenty of samples that are most of the time standalone files.
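A simple way to sweep a corpus of samples for this marker is plain grep (the directory name here is illustrative, not from the diary):

```shell
# List JavaScript samples that probe the number of CPU cores the way
# these miners do. The brackets are escaped so grep matches them literally.
grep -rl "navigator\['hardwareConcurrency'\]" ./samples/
```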

Another interesting piece of code:

return /mobile|Android|webOS|iPhone|iPad|iPod|IEMobile|Opera Mini/i['test'](navigator['userAgent']);

This is used to avoid running the miner on mobile devices and wasting their resources.

This morning, I found an altered jquery.js file. JQuery is a very popular JavaScript library that helps developers to “write less, do more”, as stated on its website. The malicious file is based on a very old version of JQuery (1.7.1) that is still popular. The wdiff command (or “word diff”) returns interesting information when comparing the original file with the malicious one:

$ wdiff -s jquery.js jquery.js.malicious
jquery.js: 1244 words 1243 100% common 0 0% deleted 1 0% changed
jquery.js.malicious: 10457 words 1243 12% common 9212 88% inserted 2 0% changed

Note that the malicious file (SHA256: ec214629efdffce5031b105737a14778a275c7a178bf1330f700ea6254269276) has a very low score on VT: 2/60 and was submitted yesterday from the USA.

[The post Another Cryptominer Delivered Through Altered JQuery.js File has been first published on /dev/random]

Success and failure are not polar opposites: you often need to endure failure to enjoy success. In Google's 2004 Founders' IPO Letter, Larry Page wrote:

We will not shy away from high-risk, high-reward projects because of short term earnings pressure. Some of our past bets have gone extraordinarily well, and others have not. Because we recognize the pursuit of such projects as the key to our long term success, we will continue to seek them out. For example, we would fund projects that have a 10% chance of earning a billion dollars over the long term. Do not be surprised if we place smaller bets in areas that seem very speculative or even strange when compared to our current businesses. Although we cannot quantify the specific level of risk we will undertake, as the ratio of reward to risk increases, we will accept projects further outside our current businesses, especially when the initial investment is small relative to the level of investment in our current businesses.

Think big and fail well — fail fast, fail often, and learn from your mistakes.

July 24, 2018

One of the hallmarks of a great company is that it hires well, and makes it a priority to train and challenge its employees to become better. Great companies are a breeding ground for talent. As such, it's always sad when great talent leaves, but it's certainly rewarding to see alumni venture off to accomplish greater things.

The Paypal Mafia is an esteemed example of this; many of its early employees have gone off to do impactful things. There are many examples of this in Acquia's history as well.

In 2012, we hired Chris Comparato as Acquia's SVP of Customer Success. While at Acquia, Chris had been advising a local startup called Toast. I remember the day Chris came into my office and told me it was time for him to leave Acquia; he had been waking up thinking about how to help solve Toast's challenges instead of Acquia's. Chris ultimately went on to become the CEO of Toast and under his leadership, Toast is thriving. Just this month, Toast raised another $100 million in funding at a $1.4 billion valuation. Chris is right. If they can, people should try to do what they wake up thinking about. It's advice I try to live by every day. In fact, I still call it the "Comparato Principle".

Chris' story isn't unique. Last week, I was reminded of how meaningful it can be to see former colleagues grow after watching Nick Veenhof's video interview on The Modern CTO Podcast. Nick was hired at Acquia as an engineer to help build Acquia Search. Last year, Nick left to become CTO at Dropsolid, and now oversees a 25 person engineering team. While I miss Nick, it's great to see him thrive.

I feel lucky to witness the impact Chris, Nick and other ex-Acquians are making. Congratulations Chris and Nick. I look forward to your future success!

July 20, 2018

This past weekend Vanessa and I took our much-anticipated annual weekend trip to Cape Cod. It's always a highlight for us. We set out to explore a new part of the Cape, as we had already extensively explored the Upper Cape.

Stage Harbor lighthouse

We found The Platinum Pebble Inn, a small luxury bed and breakfast in West Harwich, by way of TripAdvisor. The owners, Mike and Stefanie Hogan, were extremely gracious hosts. Not only did they run the Inn and serve up delicious breakfasts, they would ask what we wanted to do, and then plan our adventure for the day with helpful tips.

On our first day we went on a 35 km (22 miles) bike ride out to Chatham, making stops along the way for ice cream, shopping and lobster rolls.

Bike ride

While we were at the Chatham Pier Fish Market, we watched the local fishermen offload their daily catch with seals and seagulls hovering to get some lunch of their own. Once we arrived back at the Inn, we were able to cool off in the pool and relax in the late afternoon sun.

Unloading fish at the Chatham Pier Fish Market

Saturday we were up for a hike, so the Hogans sent us to the Dune Shacks Trail in Provincetown. We were told to carry in whatever we would need as there weren't any facilities on the beach. So we stopped at an authentic French bakery in Wellfleet to get lunch to take on our hike — the baguette took me right back to being in France, and while I was tempted by the pain au chocolat and pain aux raisins, I didn't indulge. I had too much ice cream already.

After we picked up lunch, we continued up Route 6 and parked on the side of the road to begin our journey into the woods and up the first of many intense sand dunes. The trails were unmarked, but there are visible paths that pass the Dune Shacks, which date back to the early 1900s. After 45 minutes we finally reached the beach and ocean.

Dune Shacks Trail in Provincetown
Dune Shacks Trail in Provincetown

We rounded out the weekend with an afternoon sail on Nantucket Sound. It was a beautiful day and the conditions made for a very relaxing sailing experience.


It was a great weekend!

By way of experiment, I've just enabled the PKCS#11 v2.20 implementation in the eID packages for Linux, but for now only in the packages in the "continuous" repository. In the past, enabling this has caused issues; there have been a few cases where Firefox would deadlock when PKCS#11 v2.20 was enabled, rather than the (very old and outdated) v2.11 version that we support by default. We believe we have identified and fixed all outstanding issues that caused such deadlocks, but it's difficult to be sure. So, if you have a Belgian electronic ID card and are willing to help me out and experiment a bit, here's something I'd like you to do:

  • Install the eID software (link above) as per normal.
  • Enable the "continuous" repository and upgrade to the packages in that repository:

    • For Debian, Ubuntu, or Linux Mint: edit /etc/apt/sources.list.d/eid.list, and follow the instructions there to enable the "continuous" repository. Don't forget the dpkg-reconfigure eid-archive step. Then, run apt update; apt -t continuous upgrade.
    • For Fedora and CentOS: run yum --enablerepo=beid-continuous install eid-mw
    • For OpenSUSE: run zypper mr -e beid-continuous; zypper up

The installed version of the eid-mw-libs or libbeidpkcs11-0 package should be v4.4.3-42-gf78d786e or higher.
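To check whether the upgrade took, you can compare the installed version against the minimum. Here is a minimal sketch; the dpkg-query/rpm commands are the standard package-manager queries, and the "installed" value shown is made up, so substitute the real output:

```shell
# On Debian/Ubuntu/Mint, read the installed version with:
#   dpkg-query -W -f='${Version}' libbeidpkcs11-0
# On Fedora/CentOS: rpm -q --qf '%{VERSION}-%{RELEASE}' eid-mw-libs
required="4.4.3-42"
installed="4.4.3-50"   # example value; substitute the real output from above
# sort -V orders version strings; if the required version sorts first,
# the installed one is at least as new
oldest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)
if [ "$oldest" = "$required" ]; then
  echo "OK: $installed is new enough"
else
  echo "Too old: $installed < $required"
fi
```
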

One of the new features in version 2.20 of the PKCS#11 API is that it supports hotplugging of card readers; in version 2.11 of that API, this is not the case, since it predates USB (like I said, it is outdated). So, try experimenting with hotplugging your card reader a bit; it should generally work. Try leaving it installed and using your system (and webbrowser) for a while with that version of the middleware; you shouldn't have any issues doing so, but if you do I'd like to know about it.

Bug reports are welcome as issues on our github repository.


July 19, 2018

It's been 12 months since my last progress report on Drupal core's API-first initiative. Over the past year, we've made a lot of important progress, so I wanted to provide another update.

Two and a half years ago, we shipped Drupal 8.0 with a built-in REST API. It marked the start of Drupal's evolution to an API-first platform. Since then, each of the five new releases of Drupal 8 introduced significant web service API improvements.

While I was an early advocate for adding web services to Drupal 8 five years ago, I'm even more certain about it today. Important market trends endorse this strategy, including integration with other technology solutions, the proliferation of new devices and digital channels, the growing adoption of JavaScript frameworks, and more.

In fact, I believe that this functionality is so crucial to the success of Drupal, that for several years now, Acquia has sponsored one or more full-time software developers to contribute to Drupal's web service APIs, in addition to funding different community contributors. Today, two Acquia developers work on Drupal web service APIs full time.

Drupal core's REST API

While Drupal 8.0 shipped with a basic REST API, the community has worked hard to improve its capabilities, robustness and test coverage. Drupal 8.5 shipped 5 months ago and included new REST API features and significant improvements. Drupal 8.6 will ship in September with a new batch of improvements.

One Drupal 8.6 improvement is the move of the API-first code to the individual modules, instead of the REST module providing it on their behalf. This might not seem like a significant change, but it is. In the long term, all Drupal modules should ship with web service APIs rather than depending on a central API module to provide their APIs — that forces them to consider the impact on REST API clients when making changes.

Another improvement we've made to the REST API in Drupal 8.6 is support for file uploads. If you want to understand how much thought and care went into REST support for file uploads, check out API-first Drupal: file uploads. It's hard work to make file uploads secure, support large files, optimize for performance, and provide a good developer experience.


Adopting the JSON API module into core is important because JSON API is increasingly common in the JavaScript community.

We had originally planned to add JSON API to Drupal 8.3, which didn't happen. When that plan was originally conceived, we were only beginning to discover the extent to which Drupal's Routing, Entity, Field and Typed Data subsystems were insufficiently prepared for an API-first world. It's taken until the end of 2017 to prepare and solidify those foundational subsystems.

The same shortcomings that prevented the REST API from maturing also manifested themselves in JSON API, GraphQL and other API-first modules. Properly solving them at the root rather than adding workarounds takes time. However, this approach will make for a stronger API-first ecosystem and increasingly faster progress!

Despite the delay, the JSON API team has been making incredible strides. In just the last six months, they have released 15 versions of their module. They have delivered improvements at a breathtaking pace, including comprehensive test coverage, better compliance with the JSON API specification, and numerous stability improvements.

The Drupal community has been eager for these improvements, and the usage of the JSON API module has grown 50% in the first half of 2018. The fact that module usage has increased while the total number of open issues has gone down is proof that the JSON API module has become stable and mature.

As excited as I am about this growth in adoption, the rapid pace of development, and the maturity of the JSON API module, we have decided not to add JSON API as an experimental module to Drupal 8.6. Instead, we plan to commit it to Drupal core early in the Drupal 8.7 development cycle and ship it as stable in Drupal 8.7.


For more than two years I've advocated that we consider adding GraphQL to Drupal core.

While core committers and core contributors haven't made GraphQL a priority yet, a lot of great progress has been made on the contributed GraphQL module, which has been getting closer to its first stable release. Despite not having a stable release, its adoption has grown an impressive 200% in the first six months of 2018 (though its usage is still measured in the hundreds of sites rather than thousands).

I'm also excited that the GraphQL specification has finally seen a new edition that is no longer encumbered by licensing concerns. This is great news for the Open Source community, and can only benefit GraphQL's adoption.

Admittedly, I don't know yet if the GraphQL module maintainers are on board with my recommendation to add GraphQL to core. We purposely postponed these conversations until we stabilized the REST API and added JSON API support. I'd still love to see the GraphQL module added to a future release of Drupal 8. Regardless of what we decide, GraphQL is an important component to an API-first Drupal, and I'm excited about its progress.

OAuth 2.0

A web services API update would not be complete without touching on the topic of authentication. Last year, I explained how the OAuth 2.0 module would be another logical addition to Drupal core.

Since then, the OAuth 2.0 module was revised to exclude its own OAuth 2.0 implementation, and to adopt The PHP League's OAuth 2.0 Server instead. That implementation is widely used, with over 5 million installs. Instead of having a separate Drupal-specific implementation that we have to maintain, we can leverage a de facto standard implementation maintained by others.

API-first ecosystem

While I've personally been most focused on the REST API and JSON API work, with GraphQL a close second, it's also encouraging to see that many other API-first modules are being developed:

  • OpenAPI, for standards-based API documentation, now at beta 1
  • JSON API Extras, for shaping JSON API to your site's specific needs (aliasing fields, removing fields, etc)
  • JSON-RPC, for help with executing common Drupal site administration actions, for example clearing the cache
  • … and many more


Hopefully, you are as excited for the upcoming release of Drupal 8.6 as I am, and all of the web service improvements that it will bring. I am very thankful for all of the contributions that have been made in our continued efforts to make Drupal API-first, and for the incredible momentum these projects and initiatives have achieved.

Special thanks to Wim Leers (Acquia) and Gabe Sullice (Acquia) for contributions to this blog post and to Mark Winberry (Acquia) and Jeff Beeman (Acquia) for their feedback during the writing process.

July 17, 2018

I published the following diary on “Searching for Geographically Improbable Login Attempts“:

For the human brain, an IP address is not the best IOC because, like phone numbers, we are bad at remembering them. That’s why DNS was created. But many log management applications have features to enrich collected data. One possible enrichment for IP addresses is geolocation. Based on databases, it is possible to map an IP address to a country and/or city. This information is available in our DShield IP reputation database… [Read more]

[The post [SANS ISC] Searching for Geographically Improbable Login Attempts has been first published on /dev/random]

July 13, 2018

I published the following diary on “Cryptominer Delivered Though Compromized JavaScript File“:

Yesterday I found an interesting compromised JavaScript file that contains extra code to perform crypto mining activities. It started with a customer’s IDS alerts on the following URL:


This website is not referenced as malicious and the domain looks clean. When you point your browser to the site, it loads the JavaScript file. So, I performed some investigations on this URL. jquery.prettyphoto.js is a file from the prettyPhoto package[1], but the one hosted on safeyourhealth[.]ru was modified… [Read more]

[The post [SANS ISC] Cryptominer Delivered Though Compromized JavaScript File has been first published on /dev/random]

I’m using OSSEC to feed an instance of TheHive, where I investigate the security incidents it reports. To better categorize the alerts and merge similar events, I needed to add more observables. OSSEC alerts are delivered by email and contain interesting information for TheHive, so this was an interesting use case to play with custom observables.

So, I added a new feature to imap2thehive that lets you define custom observables. For OSSEC, I created the following ones:

  • ossec_rule (The rule ID)
  • ossec_asset (The asset – OSSEC agent)
  • ossec_level (The alert level, 0-10)
  • ossec_message (The alert description)

You can define those custom observables via a new section in the configuration file:

ossec_asset: Received From: \((\w+)\)\s
ossec_level: Rule: \w+ fired \(level (\d+)\)\s-
ossec_message: Rule: \w+ fired \(level \d+\)\s-> "(.*)"
ossec_rule: Rule: (\d+) fired \(level
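To sanity-check regexes like these before wiring them into the configuration, you can run them against a sample alert line. The alert text below is invented, and grep's \K/lookahead syntax stands in for the capture groups used above:

```shell
# A made-up OSSEC alert line in the format the regexes above expect
alert='Received From: (webserver01) any->/var/log/auth.log Rule: 5712 fired (level 10) -> "SSHD brute force trying to get access to the system."'
echo "$alert" | grep -oP 'Rule: \K\d+(?= fired)'        # ossec_rule  -> 5712
echo "$alert" | grep -oP 'fired \(level \K\d+(?=\))'    # ossec_level -> 10
echo "$alert" | grep -oP 'Received From: \(\K\w+(?=\))' # ossec_asset -> webserver01
```
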

Here is an example of alerts received in TheHive:

OSSEC Observables

Now that you have these new observables, you can also build your own dashboards to gain more visibility:

OSSEC Dashboard

The updated script is available here.

[The post Imap2TheHive: Support for Custom Observables has been first published on /dev/random]

July 12, 2018

If you've ever watched a Drupal Camp video to learn a new Drupal skill, technique or hack, you most likely have Kevin Thull to thank. To date, Kevin has traveled to more than 30 Drupal Camps, recorded more than 1,000 presentations, and shared them all on YouTube for thousands of people to watch. By recording and posting hundreds of Drupal Camp presentations online, Kevin has spread knowledge, awareness and a broader understanding of the Drupal project.

I recently attended a conference in Chicago, Kevin's hometown. I had the chance to meet with him, and to learn more about the evolution of his Drupal contributions. I was struck by his story, and decided to write it up on my blog, as I believe it could inspire others around the world.

Kevin began recording sessions during the first community events he helped organize: DrupalCamp Fox Valley in 2013 and MidCamp in 2014. At first, recording and publishing Drupal Camp sessions was an arduous process; Kevin had to oversee dozens of laptops, converters, splitters, camcorders, and trips to Fedex.

After these initial attempts, Kevin sought a different approach for recording sessions. He ended up developing a recording kit, a bundle of the equipment and technology needed to record a presentation. After researching various options, he discovered a lightweight, low-cost and foolproof solution. Kevin continued to improve this process after he tweeted that if you sponsored his travel, he would record Drupal Camp sessions. It's no surprise that numerous camps took Kevin up on his offer. With more road experience, Kevin has consolidated the recording kits to include just a screen recorder, an audio recorder and the corresponding cables. With this approach, the kit records a compressed mp4 file that can be uploaded directly to YouTube. In fact, Kevin often finishes uploading all presentation videos to YouTube before the camp is over!

Kevin Thull recording kitThis is one of Kevin Thull's recording kits used to record hundreds of Drupal presentations around the world. Each kit runs at about $450 on Amazon.

Most recently, Kevin has been buying and building more recording kits thanks to financial contributions from various Drupal Camps. He has started to send recording kits and documentation around the world for local camp organizers to use. Not only has Kevin recorded hundreds of sessions himself, he is now sharing his expertise and teaching others how to record and share sessions.

What is exciting about Kevin's contribution is that it reinforces what originally attracted him to Drupal. Kevin ultimately chose to work with Drupal after watching online video tutorials and listening to podcasts created by the community. Today, many people prefer to learn development through video tutorials. I can only imagine how many people have joined and started to contribute to Drupal after they have watched one of the many videos that Kevin has helped to publish.

Kevin's story is a great example of how everyone in the Drupal community has something to contribute, and how contributing back to the Drupal project is not exclusive to code.

This year, the Drupal community celebrated Kevin by honoring him with the 2018 Aaron Winborn Award. The Aaron Winborn award is presented annually to an individual who demonstrates personal integrity, kindness, and above-and-beyond commitment to the Drupal community. It's named after a long-time Drupal contributor Aaron Winborn, who lost his battle with Amyotrophic lateral sclerosis (ALS) in early 2015. Congratulations Kevin, and thank you for your incredible contribution to the Drupal community!

July 11, 2018

Enough with the political posts!

Making libraries that are both API- and libtool-versioned with qmake: how do they do it?

I started a project on github that will collect what I will call “doing it right” project structures for various build environments.

By “right” I mean that the library will have an API version in its library name, that the library will be libtoolized, and that a pkg-config .pc file gets installed for it.

I have in mind, for example, autotools, cmake, meson, qmake and plain make. The first example that I have finished is one for qmake.

Let’s get started working on a .pro file.

We get the PREFIX, MAJOR_VERSION, MINOR_VERSION and PATCH_VERSION from a project-wide include


We will use the standard lib template of qmake

TEMPLATE = lib

We need to set VERSION to a version for compile_libtool (in reality it should use what is called current, revision and age to form an API and ABI version number. In the actual example it’s explained in the comments, as this is too much for a small blog post).

VERSION = $${MAJOR_VERSION}"."$${MINOR_VERSION}"."$${PATCH_VERSION}

According to section 4.3 of Autotools Mythbuster, we should have the API version in the library’s name as target name

TARGET = qmake-example-$${MAJOR_VERSION}"."$${MINOR_VERSION}

We will write a define in config.h for access to the version as a double quoted string


Our example happens to use QDebug, so we need QtCore here

QT = core

This is of course optional

CONFIG += c++14

We will be using libtool style libraries

CONFIG += compile_libtool
CONFIG += create_libtool

These will create a pkg-config .pc file for us

CONFIG += create_pc create_prl no_install_prl

Project sources

SOURCES = qmake-example.cpp

Project’s public and private headers

HEADERS = qmake-example.h

We will install the headers in an API-specific include path

headers.path = $${PREFIX}/include/qmake-example-$${MAJOR_VERSION}"."$${MINOR_VERSION}

Here put only the publicly installed headers

headers.files = $${HEADERS}

Here we will install the library to

target.path = $${PREFIX}/lib

This is the configuration for generating the pkg-config file

QMAKE_PKGCONFIG_DESCRIPTION = An example that illustrates how to do it right with qmake
# This is our libdir
QMAKE_PKGCONFIG_LIBDIR = $$target.path
# This is where our API specific headers are
QMAKE_PKGCONFIG_INCDIR = $$headers.path
# These are dependencies that our library needs
QMAKE_PKGCONFIG_REQUIRES = Qt5Core

Installation targets (the pkg-config file seems to install automatically)

INSTALLS += headers target

This will be the result after make install:

├── include
│   └── qmake-example-3.2
│       └── qmake-example.h
└── lib
    ├── libqmake-example-3.2.so -> libqmake-example-3.2.so.3.2.1
    ├── libqmake-example-3.2.so.3 -> libqmake-example-3.2.so.3.2.1
    ├── libqmake-example-3.2.so.3.2 -> libqmake-example-3.2.so.3.2.1
    ├── libqmake-example-3.2.so.3.2.1
    └── pkgconfig
        └── qmake-example-3.pc
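To see what consumers get out of that .pc file, here is a sketch that recreates a plausible qmake-example-3.pc in a scratch directory and queries it with pkg-config; the prefix, Version and Libs values are assumptions for illustration only:

```shell
# Recreate a plausible .pc file in a scratch directory (values are invented)
demo=$(mktemp -d)
cat > "$demo/qmake-example-3.pc" <<'EOF'
prefix=/opt/qmake-example
libdir=${prefix}/lib
includedir=${prefix}/include/qmake-example-3.2

Name: qmake-example
Description: An example that illustrates how to do it right with qmake
Version: 3.2.1
Libs: -L${libdir} -lqmake-example-3.2
Cflags: -I${includedir}
EOF
# A consumer asks pkg-config for the right compiler and linker flags
PKG_CONFIG_PATH="$demo" pkg-config --cflags --libs qmake-example-3
```

A consumer's build system (autotools, cmake, meson, or qmake itself) runs exactly this kind of query instead of hard-coding install paths, which is the whole point of shipping the .pc file.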

PS. Dear friends working at their own customers: when I visit your customer, I no longer want to see that you produced completely wrong qmake-based projects for them. Libtoolize it all, get an API version in your library’s so-name, and do distribute a pkg-config .pc file. That’s the very least to pass your exam. Also read this document (and stop pretending that you don’t need to know this when, at the same time, you charge them real money while pretending that you know something about modern UNIX software development).

July 10, 2018

Quite a few people in the Drupal community are looking forward to seeing the JSON API module ship with Drupal 8 core.


  • they want to use it on their projects
  • the Admin UI & JS Modernization Initiative needs it
  • they want to see Drupal 8 ship with a more capable RESTful HTTP API
  • then Drupal will have a non-NIH (Not Invented Here) API but one that follows a widely used spec
  • it enables them to build progressively decoupled components

So where are things at?


Let’s start with a high-level timeline:

  1. The plan (intent) to move the JSON API module into Drupal core was approved by Drupal’s product managers and a framework manager 4 months ago, on March 19, 2018!
  2. A core patch was posted on March 29 (issue #2843147). My colleague Gabe and I had already been working full time for a few months at that point to make the JSON API modules more stable: several security releases, much test coverage and so on.
  3. Some reviews followed, but mostly the issue (#2843147) just sat there. Anybody was free to provide feedback. We encouraged people to review, test and criticize the JSON API contrib module. People did: another 1000 sites started using JSON API! Rather than commenting on the core issue, they filed issues against the JSON API contrib module!
  4. Since December 2017, Gabe and I had been working on it full time, with e0ipso contributing whenever his day job/free time allowed. Thanks to the test coverage Gabe and I had been adding, bugs were being fixed much faster than new ones were reported — and more often than not we found (long-existing) bugs before they were reported.
  5. Then, a week and a half ago, on June 28, we released JSON API 1.22, the final JSON API 1.x release. That same day, we branched the 2.x version. More about that below.
  6. The next day, on June 29, an updated core patch was posted. All feedback had been addressed!

June 29

I wrote in my comment:

Time to get this going again. Since #55, here’s what happened:

  1. Latest release at #55: JSON API 1.14
  2. Latest release today: JSON API 1.22
  3. 69 commits: ($ git log --oneline --since "March 30 2018 14:21 CET" | wc -l)
  4. Comprehensive test coverage completed (#2953318: Comprehensive JSON API integration test coverage phase 4: collections, filtering and sorting + #2953321: Comprehensive JSON API integration test coverage phase 5: nested includes and sparse field sets + #2972808: Comprehensive JSON API integration test coverage phase 6: POST/PATCH/DELETE of relationships)
  5. Getting the test coverage to that point revealed some security vulnerabilities (1.16), and many before it (1.14, 1.10 …)
  6. Ported many of the core REST improvements in the past 1.5 years to JSON API (1.15)
  7. Many, many, many bugfixes, and much, much clean-up for future maintainability (1.16, 1.17, 1.18, 1.19, 1.20, 1.21, 1.22)

That’s a lot, isn’t it? :)

But there’s more! All of the above happened on the 8.x-1.x branch. As described in #2952293: Branch next major: version 2, requiring Drupal core >=8.5 (and mentioned in #61), we have many reasons to start an 8.x-2.x branch. (That branch was created months ago, but we kept them identical for months.)
Why wait so long? Because we wanted all >6000 JSON API users to be able to gently migrate from JSON API 1.x (on Drupal <=8.5) to JSON API 2.x (on Drupal >=8.5). And what better way to do that than to write comprehensive test coverage and fix all known problems that surfaced? That’s what we’ve been doing the past few months! This massively reduces the risk of adding JSON API to Drupal core. We outlined a plan of must-have issues before going into Drupal core: #2931785: The path for JSON API to core — and they’re all DONE as of today! Dozens of bugs have been flushed out and fixed before they ever entered core. Important: in the past 6–8 weeks we’ve noticed a steep drop in the number of bug reports and support requests that have been filed against the JSON API module!

After having been tasked with maturing core’s REST API, finding the less-than-great state it was in when Drupal 8 shipped, and having experienced how hard it is to improve it or even just fix bugs, this was a hard requirement for me. I hope it gives core committers the same feeling of relief it gives me, to see that JSON API will be in much better shape on day one.

The other reason why it’s in much better shape is that the JSON API module now has no API surface other than the HTTP API! No PHP API (its sole API was dropped in the 2.x branch: #2982210: Move EntityToJsonApi service to JSON API Extras) at all, only the HTTP API as specified by the JSON API spec.

TL;DR: JSON API in contrib today is more stable, more reliable, more feature-rich than core’s REST API. And it does so while strongly complying with the JSON API spec: it’s far less of a Drupalism than core’s REST API.

So, with pride, and with lots of sweat (no blood and no tears fortunately), @gabesullice, @e0ipso and I present you this massively improved core patch!

EDIT: P.S.: 668K bytes of the 1.0M of bytes that this patch contains are for test coverage. That’s 2/3rds!

To which e0ipso replied:

So, with pride, and with lots of sweat (no blood and no tears fortunately), @gabesullice, @e0ipso and I present you this massively improved core patch!
So much pride! This was a long journey, that I walked (almost) alone for a couple of years. Then @Wim Leers and @gabesullice joined and carried this to the finish line. Such a beautiful collaboration!


July 9

Then, about 12 hours ago, core release manager xjm and core framework manager effulgentsia posted a comment:

(@effulgentsia and @xjm co-authored this comment.) It’s really awesome to see the progress here on JSON API! @xjm and @effulgentsia discussed this with other core committers (@webchick, @Dries, @larowlan, @catch) and with the JSON API module maintainers. Based on what we learned in these discussions, we’ve decided to target this issue for an early feature in 8.7 rather than 8.6. Therefore, we will set it to 8.7 in a few days when we branch 8.7. Reviews and comments are still welcome in the meantime, whether in this issue, or as individual issues in the jsonapi issue queue. Feel free to stop reading this comment here, or continue reading if you want to know why it’s being bumped to 8.7. First, we want to give a huge applause for everything that everyone working on the jsonapi contrib module has done. In the last 3-4 months alone (since 8.5.0 was released and #44 was written):
  • Over 100 issues in the contrib project have been closed.
  • There are currently only 36 open issues, only 7 of which are bug reports.
  • Per #62, the remaining bug fixes require breaking backwards compatibility for users of the 1.x module, so a final 1.x release has been released, and new features and BC-breaking bug fixes are now happening in the 2.x branch.
  • Also per #62, an amazing amount of test coverage has been written and correspondingly there’s been a drop in new bug reports and support requests getting filed.
  • The module is now extremely well-documented, both in the API documentation and in the handbook.
Given all of the above, why not commit #70 to core now, prior to 8.6 alpha? Well,
  1. We generally prefer to commit significant new core features early in the release cycle for the minor, rather than toward the end. This means that this month and the next couple are the best time to commit 8.7.x features.
  2. To minimize the disruption to contrib, API consumers, and sites of moving a stable module from core to contrib, we’d like to have it as a stable module in 8.7.0, rather than an experimental module in 8.6.0.
  3. Per above, we’re not yet done breaking BC. The mentioned spec compliance issues still need more work.
  4. While we’re still potentially evolving the API, it’s helpful to continue having the module in contrib for faster iteration and feedback.
  5. Since the 2.x branch of JSON API was just branched, there are virtually no sites using it yet (only 23 as compared with the 6000 using 1.x). An alpha release of JSON API 2.x once we’re ready will give us some quick real-world testing of the final API that we’re targeting for core.
  6. As @lauriii pointed out, an additional advantage of allowing a bit more time for API changes is that it allows more time for the Javascript Modernization Initiative, which depends on JSON API, to help validate that JSON API includes everything we need to have a fully decoupled admin frontend within Drupal core itself. (We wouldn’t block the module addition on the other initiative, but it’s an added bonus given the other reasons to target 8.7.)
  7. While the module has reached maturity in contrib, we still need the final reviews and signoffs for the core patch. Given the quality of the contrib module this should go well, but it is a 1 MB patch (with 668K of tests, but that still means 300K+ of code to review.) :) We want to give our review of this code the attention it deserves.
None of the above aside from the last point are hard blockers to adding an experimental module to core. Users who prefer the stability of the 1.x module could continue to use it from contrib, thereby overriding the one in core. However, in the case of jsonapi, I think there’s something odd about telling site builders to experiment with the one in core, but if they want to use it in production, to downgrade to the one in contrib. I think that people who are actually interested in using jsonapi on their sites would be better off going to the contrib project page and making an explicit 1.x or 2.x decision from there. Meanwhile, we see what issues, if any, people run into when upgrading from 1.x to 2.x. When we’re ready to commit it to core, we’ll consider it at least beta stability (rather than alpha). Once again, really fantastic work here.


So there you have it. JSON API will not be shipping in Drupal 8.6 this fall.
The primary reason being that it’s preferred for significant new core features to land early in the release cycle, especially ones shipping as stable from the start. This also gives the Admin UI & JS Modernization Initiative more time to actually exercise many parts of JSON API’s capabilities, and in doing so validate that it’s sufficiently capable to power it.

For us as JSON API module maintainers, it keeps things easier for a little while longer: once it’s in core, it’ll be harder to iterate: more process, slower test runs, commits can only happen by core committers and not by JSON API maintainers. Ideally, we’d commit JSON API to Drupal core with zero remaining bugs and tasks, with only feature requests being left. Good news: we’re almost there already: most open issues are feature requests!

For you as JSON API users, not much changes. Just keep using it. The 2.x branch introduced some breaking changes to better comply with the JSON API spec, and also received a few small new features. But we worked hard to make sure that disruption is minimal (example 1 2 3).1
Use it, try to break it, report bugs. I’m confident you’ll have to try hard to find bugs … and yes, that’s a challenge to y’all!

  1. If you want to stay on 1.x, you can — and it’s rock solid thanks to the test coverage we added. That’s the reason we waited so long to work on the 2.x branch: because we wanted the thousands of JSON API sites to be in the best state possible, not be left behind. Additionally, the comprehensive test coverage we added in 1.x guarantees we’re aware of even subtle BC breaks in 2.x! ↩︎

July 09, 2018

TheHive is an awesome tool to perform incident management. One of the software components linked to TheHive is Cortex, defined as a “Powerful observable analysis engine“. Let me explain why Cortex can save you a lot of time. When you are working on an incident in TheHive, observables are linked to it. An observable is an IP address, a hash, a domain, a filename, … (note: it is not an IOC, yet!). Let’s say you have an incident involving 10 IP addresses. It could be quite time-consuming (read: “boring”) to search for each IP address in reputation databases or websites like VirusTotal. Cortex is made for this purpose. It relies on small modules called “analyzers” that query a specific service for information about your observables, parse the returned data and pass the results to TheHive. There are already plenty of analyzers available today for most of the well-known online services (the complete list is available here) and, regularly, people submit new analyzers for specific online resources. Being a SANS ISC Handler, one of my favourite IP reputation databases is, of course, DShield, which has its own API. Surprisingly, there was no analyzer available for DShield. So, I wrote mine!

The analyzer is designed to work with IP addresses: [image: DShield Analyzer Status]

When you click on a DShield taxonomy, you get the details about this IP address: [image: DShield Report]

To install the analyzer, copy the files from my GitHub repo into your $CORTEX_ANALYZERS_PATH/analyzers/ and restart your Cortex instance. The analyzer will be listed and can be enabled (no further configuration is required). Enjoy!
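Under the hood, a Cortex analyzer is essentially a small script that queries a service and normalizes the answer. A rough, standalone sketch of the idea (not the actual analyzer code; the field names follow the public DShield API and may change):

```python
import json
import urllib.request

DSHIELD_API = "https://isc.sans.edu/api/ip/{ip}?json"

def fetch_ip_report(ip):
    """Query the DShield API for an IP reputation report (needs network access)."""
    with urllib.request.urlopen(DSHIELD_API.format(ip=ip)) as resp:
        return json.load(resp)["ip"]

def summarize(report):
    """Condense a DShield 'ip' record into a short, taxonomy-style summary."""
    return {
        "ip": report.get("number"),
        "attacks": int(report.get("attacks") or 0),   # distinct targets reported
        "count": int(report.get("count") or 0),       # total reports
        "as_name": report.get("asname"),
    }

# Offline example with a canned response, so no network call is needed:
sample = {"number": "", "attacks": 42, "count": 1337, "asname": "EXAMPLE-AS"}
print(summarize(sample))
```

The real analyzer additionally wraps this in the Cortex analyzer protocol (reading the observable from its input and emitting taxonomies), which is omitted here for brevity.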

(Note: I’ll submit a pull-request to the official repository)

[The post DShield Analyzer for Cortex has been first published on /dev/random]

July 08, 2018

I said it before, we shouldn’t finance the US’s war-industry any longer. It’s not a reliable partner.

I’m sticking to my guns on this one,

Let’s build ourselves a European army, utilizing European technology. Built, engineered and manufactured by Europeans.

We engineers are ready. Let us do it.

July 04, 2018

Day three started quietly (let’s call this the post-social-event effect) with a set of presentations around Blue Team activities. Alexandre Dulaunoy from CIRCL presented “Fail frequently to avoid disaster”, or how to organically build an open threat intelligence sharing standard to keep the intelligence community free and sane! He started with a nice quote: “There was never a plan. There was just a series of mistakes”. After a brief introduction to MISP, Alex came back to the history of the project and explained some mistakes they made. The philosophy is not to wait for a perfect implementation from the beginning but to start small and extend later. Standardisation is required when your tool is growing, but do not make the mistake of defining your own new standard. Use the ones that already exist. For example, MISP is able to export data in multiple open formats (CSV, XML, Bro, Suricata, Sigma, etc). Another issue was the way people use tags (the great failure of free-text tagging). They tend to be very creative when they have a playground. The perfect example is how TLP levels are written (TLP:Red, TLP-RED, TLP:RED, …). Taxonomies solved this creativity issue. MISP is designed with an object-template format which helps organisations exchange the specific information they want. Finally, be happy to get complaints about your software: it means that it’s being used!
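To see why taxonomies fix the free-text tagging problem, here is a hypothetical normalizer that maps the creative TLP spellings above onto a fixed machine-tag vocabulary (a sketch for illustration, not actual MISP code):

```python
import re

# The fixed taxonomy: the only accepted TLP levels (MISP-style machine tags)
TLP_LEVELS = {"white", "green", "amber", "red"}

def normalize_tlp(tag):
    """Map free-text variants (TLP:Red, TLP-RED, tlp red, ...) to 'tlp:<level>'."""
    m = re.match(r"tlp[:\-_ ]?(\w+)", tag.strip(), re.IGNORECASE)
    if m and m.group(1).lower() in TLP_LEVELS:
        return "tlp:" + m.group(1).lower()
    return None  # not a recognised TLP tag

print([normalize_tlp(t) for t in ["TLP:Red", "TLP-RED", "tlp white", "secret"]])
# ['tlp:red', 'tlp:red', 'tlp:white', None]
```

With a taxonomy, the normalization step disappears entirely: only tags from the controlled vocabulary can be attached in the first place.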
The next slot was assigned to Thomas Chopitea from Google, who presented FOSS tools to automate your DFIR process. As you can imagine, Google faces many incidents to investigate, and their philosophy is to write tools for their own use (first of all) but also to share them. Because they use the tools they develop, they know them and keep improving them. The following tools were reviewed:
  • GRR
  • Plaso
  • TimeSketch
  • dfTimeWolf
  • Turbinia
To demonstrate how they work, Thomas prepared his demos with a targeted attack scenario based on typo-squatting. All tools were used one by one, then the investigation was performed via dfTimeWolf, which is the “glue” between all the tools. Turbinia is less known: it automates forensic analysis tools in the cloud. Note that it is not restricted to the Google cloud. It was an excellent presentation. Have a look at it if you’re in the process of building your own DFIR toolbox.
After a short coffee break, a set of sessions related to secure programming started. The first one was about Landlock, by Mickaël Salaün from ANSSI. Landlock is a stackable Linux Security Module (LSM) that makes it possible to create security sandboxes. After a short demo of its capabilities, the solution was compared to other ones (SELinux, seccomp-bpf, namespaces). Only Landlock has all of these features: fine-grained control, embedded policy and unprivileged use. Then Mickaël dived into the code and explained how the module works. The idea is to provide user-space hardening:
  • access control
  • designed for unprivileged use
  • apply tailored access controls per process
  • make it evolve over time

This is ongoing research that is not yet completely implemented, but it’s already possible to install and play with it. It looks promising. Then, Pierre Chifflier (@) presented “Security, Performance, which one?”.

The last presentation about secure programming was “Immutable infrastructure and zero trust networking: designing your system for resilience” by Geoffroy Couprie. Here is the scenario used by Geoffroy: you just got pwned; your WordPress instance was compromised. Who accessed the server? Was it updated? Traditional operations rely on long-lived servers (sysadmins like big uptimes). Is it safe to reinstall the same server? There are techniques to make the server reinstall reproducible (Puppet, Ansible, Chef, …).
The idea presented by Geoffroy: why not reinstall from scratch on every update, with an immutable infrastructure (never modify a running server directly)? The image creation process is based on Exherbo; they remove unwanted software and build a statically linked kernel. The resulting image is simple and safe, and it boots in 7 seconds. Images are then deployed via BitTorrent to the hypervisors.
Machines are moving, so how to reach them? Via a home-made load balancer called “sozu”, which can be reconfigured live. A very interesting approach!
After the lunch, the topic switched to the security of IoT devices. Sébastien Tricaud presented some tests he performed via honeypots mimicking IoT devices. After a brief introduction to the (many) issues introduced by IoT devices, he explained how he deployed some honeypots and shared the results. The first example is called Gaspot. The second one is Conpot, which simulates a Siemens PLC or a Guardian AST device. Interesting fact: Nmap has a script to scan such devices:
nmap --script atg-info -p 10001 <host>
Sébastien ran a honeypot for only three months and got 5 unique IP addresses. The second test was to accept many more connection types (S7, Modbus or IPMI). In this case, he got many more hits, the first one after only three hours. The question is: are those IP addresses real attackers, bots (Shodan?) or other security researchers?
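Conceptually, a honeypot of this kind is just a listener that records whoever connects. A toy sketch of the logging idea (nothing like Conpot’s real protocol emulation; for the demo it connects to itself, so no external traffic is involved):

```python
import socket
import threading

def run_honeypot(hits, ready, host="", port=0, stop_after=1):
    """Accept connections and record the peer address -- a toy honeypot."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    ready["port"] = srv.getsockname()[1]  # port 0 = let the OS pick one
    ready["event"].set()
    for _ in range(stop_after):
        conn, addr = srv.accept()
        hits.append(addr[0])              # log the offending IP address
        conn.close()
    srv.close()

# Demo: connect to our own honeypot and check that the hit was logged
hits, ready = [], {"event": threading.Event()}
t = threading.Thread(target=run_honeypot, args=(hits, ready))
t.start()
ready["event"].wait()
c = socket.create_connection(("", ready["port"]))
c.close()
t.join()
print(hits)  # ['']
```

A real deployment would of course also record timestamps and payloads, and speak enough of the target protocol (S7, Modbus, IPMI, …) to keep the attacker talking.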
Rayna Stamboliyska was the next speaker and she presented “Io(M)T Security: A year in review”. Rayna focussed on connected sex toys but respected the code of conduct defined during the conference: no offensive content, just facts. Like any other “smart” device, they suffer from multiple vulnerabilities. And don’t think that it’s a niche market; there is a real business for connected sex toys. Rayna also presented her project called PiRanhalysis. It’s a suite of tools running on a Raspberry Pi that helps collect the traffic generated by IoT devices.
  • PiRogue collects all the traces
  • PiRahna automates install and capture
  • PiPrecious is the platform to store and version them
The last slot related to IoT was assigned to Aseem Jakhar, who presented his pentesting framework called “Expl-IoT”. It was interesting, but Aseem started by complaining about the huge number of frameworks available and then started his own!? Why not contribute to an existing one or just write Metasploit modules?
The last sessions were oriented to red teaming / pentesting. Ivan Kwiatkowski presented “Freedom Fighting Mode – Open Source Hacking Harness”, already presented at SSTIC a few weeks ago. Then, Antoine Cervoise presented some cool attack scenarios based on open source hardware like Teensy devices or Raspberry Pi computers. Niklas Abel presented his research on ShadowSocks, a secure SOCKS5 proxy which is… not so secure! He explained some vulnerabilities found in the tool and, last but not least, Jérémy Mousset explained how he compromised a GlassFish server via its admin interface.
The PST18 Crew
This closes the first edition of Pass-The-Salt. It seems that a second edition is already on its way, at the same location! The event ran smoothly in a very relaxed atmosphere. Put it on your agenda for next year: the event is free (important to remember) and the quality of the talks is high!

[The post Pass-The-Salt 2018 Wrap-Up Day #3 has been first published on /dev/random]

July 03, 2018

When you have a look at the schedule of infosec conferences, the number of events is already very high; there is at least one every week around the world. So, when a new one is born and is nice, it must be mentioned. “Pass-The-Salt” (SALT means “Security And Libre Talks“) is a fork of the security track of the RMLL. For different reasons, the team behind the security track decided to jump out of the RMLL organization and create their own independent event. What a challenge: finding a free time slot, finding a location, organizing a call for papers, finding sponsors (because the event is free for attendees). They released 200 tickets, which sold out in 5 days. Not bad for a first edition; congratulations to them! The event is split across three days. It started yesterday with some workshops and talks in the afternoon. Due to a very busy agenda, I was only able to reach Lille (in the north of France) yesterday evening. So no, it’s not a typo: there is no wrap-up of the first day!

I joined the conference venue on a sunny morning to attend some talks. After a quick registration and some coffee refills, let’s listen to the speakers! A good idea was to group talks by topic (network, web security, reverse, etc). This way, if you have less interest in a specific topic, you can easily attend a workshop instead. The day started with talks related to network security. The first speaker was Francois Serman, who works in the OVH anti-DDoS team. He explained in detail how to filter packets efficiently on Linux systems. Indeed, the traffic to be inspected keeps growing and can quickly become a bottleneck. Just for the story, OVH was targeted by a 1.3 Tb/s DDoS a few months ago. Francois started by reviewing the classic BPF filter that is used by tools like tcpdump or Wireshark. He explained with many examples how packets are inspected and decisions are made to drop/allow them. Then, he switched to eBPF (extended BPF). The issue remains almost the same because, even if iptables is powerful, it is implemented too late in the stack. Why not filter packets sooner? To achieve this, Francois presented “XDP”, or eXpress Data Path.

The next talk, by Eric Leblond from the Suricata project, covered the same topic. He explained why packet loss is a real pain for IDS systems: just one lost packet might lead to undetected suspicious traffic. A common problem is the “elephant flow” problem, a very big flow like a video stream. When we face a ring buffer overrun, we lose data. He explained how to implement bypass capabilities.

After the morning break, the keynote speaker was Pablo Neira Ayuso. He presented a talk named “A 10 years journey in Linux firewall“. Pablo is a core developer of Netfilter which, as he explained very well, is not only the well-known iptables module. He reviewed the classic iptables tool, then switched to the new nftables, which is much more powerful! A very interesting keynote!

The next slot was assigned to me. I presented my solution to perform full packet capture based on Moloch & Docker containers. Just after, there was a session of lightning talks (~10 presentations of 4 minutes each).

After the lunch break, the topic switched to web security. The first speaker was Stefan Eissing, who presented “Security and Self-Driving Computers“. The title sounded strange, but the talk was about mod_md, which implements Let’s Encrypt certificate support directly in Apache. Then, Julien Voisin, Thibault Koechlin and Simon Magnin-Feysot presented their project called Snuffleupagus (I already saw this talk at in 2017). Due to a last-minute change, Sébastien Larinier presented his work on how to cluster malware datasets with open source tools and machine learning.

The last part of the day was dedicated to “IAM”: Clément Oudot & Xavier Guimard presented how to integrate second-factor authentication in LemonLDAP::NG. Then Fraser Tweedale from RedHat presented “No way JOSE! Lessons for authors and implementers of open standards” and finally, Florence Blanc-Renaud closed the day with some tips to better protect your passwords and how to implement 2FA with RedHat tools.

The day ended with the social event in the center of Lille, followed by a dinner with friends. See you tomorrow for the third day.

[The post Pass-The-Salt 2018 Wrap-Up Day #2 has been first published on /dev/random]

During my DrupalCon Nashville keynote, I shared a brief video of Mike Lamb, the Senior Director of Architecture, Engineering & Development at Pfizer. Today, I wanted to share an extended version of my interview with Mike, where he explains why the development team at Pfizer has ingrained Open Source contribution into the way they work.

Mike had some really interesting and important things to share, including:

  1. Why Pfizer has chosen to standardize all of its sites on Drupal (from 0:00 to 03:19). Proprietary software isn't a match.
  2. Why Pfizer only works with agencies and vendors that contribute back to Drupal (from 03:19 to 06:25). Yes, you read that correctly; Pfizer requires that its agency partners contribute to Open Source!
  3. Why Pfizer doesn't fork Drupal modules (from 06:25 to 07:27). It's all about security.
  4. Why Pfizer decided to contribute to the Drupal 8's Workflow Initiative, and what they have learned from working with the Drupal community (from 07:27 to 10:06).
  5. How to convince a large organization (like Pfizer) to contribute back to Drupal (from 10:06 to 12:07).

Between Pfizer's direct contributions to Drupal (e.g. the Drupal 8 Workflow Initiative) and the mandate for its agency partners to contribute code back to Drupal, Pfizer's impact on the Drupal community is invaluable; it's measured in the millions of dollars per year. Just imagine what would happen to Drupal if ten other large organizations adopted Pfizer's contribution model!

Most organizations use Open Source, and don't think twice about it. However, we're starting to see more and more organizations not just use Open Source, but actively contribute to it. Open source offers organizations a completely different way of working, and fosters an innovation model that is not possible with proprietary solutions. Pfizer is a leading example of how organizations are starting to challenge the prevailing model and benefit from contributing to Open Source. Thanks for changing the status quo, Mike!

July 01, 2018

Two weeks ago, I stumbled upon a two-part blog post by Alex Russell, titled Effective Standards Work.

The first part (The Lay Of The Land) sets the stage. The second part (Threading the Needle) attempts to draw conclusions.

It’s worth reading if you’re interested in how Drupal is developed, or in how any consensus-driven open source project works (rather than the increasingly common “controlled by a single corporate entity” “open source”).

It’s written with empathy, modesty and honesty. It shows the struggle of somebody given the task and opportunity to help shape/improve the developer experience of many, but not necessarily the resources to make it happen. I’m grateful he posted it, because something like this is not easy to write nor publish — which he also says himself:

I’ve been drafting and re-drafting versions of this post for almost 4 years. In that time I’ve promised a dozen or more people that I had a post in process that talked about these issues, but for some of the reasons I cited at the beginning, it has never seemed a good time to hit “Publish”. To those folks, my apologies for the delay.


I hope you’ll find the incredibly many parallels with the open source Drupal ecosystem as fascinating as I did!

Below, I’ve picked out some of the most interesting statements and replaced only a few terms, and tadaaa! — it’s accurately describing observations in the Drupal world!

Go read those two blog posts first before reading my observations though! You’ll find some that I didn’t. Then come back here and see which ones I see, having been a Drupal contributor for >11 years and a paid full-time Drupal core contributor for >6.

Standards Theory

Design A new Drupal contrib module is the process of trying to address a problem with a new feature. Standardisation Moving a contributed module into Drupal core is the process of documenting consensus.

The process of feature design Drupal contrib module development is a messy, exciting exploration embarked upon from a place of trust and hope. It requires folks who have problems (web developers site builders) and the people who can solve them (browser engineers Drupal core/contrib developers) to have wide-ranging conversations.

The Forces at Play

Feature Drupal module design starts by exploring problems without knowing the answers, whereas participation in Working Groups Drupal core initiatives entails sifting a set of proposed solutions and integrating the best proposals competing Drupal modules. Late-stage iteration can happen there, but every change made without developer site builder feedback is dangerous — and Working Groups Drupal core initiatives aren’t set up to collect or prioritise it.

A sure way for a browser engineer Drupal core/contrib developer to attract kudos is to make existing content Drupal sites work better, thereby directly improving things for users site builders who choose your browser Drupal module.

Essential Ingredients

  • Participation by web developers site builders and browser engineers Drupal core/contrib developers: Nothing good happens without both groups at the table.
  • A venue outside a chartered Working Group Drupal core in which to design and iterate: Pre-determined outcomes rarely yield new insights and approaches. Long-term relationships of WG participants Drupal core developers can also be toxic to new ideas. Nobody takes their first tap-dancing lessons under Broadway’s big lights. Start small and nimble, build from there.
  • A path towards eventual standardisation stability & maintainability: Care must be taken to ensure that IP obligations API & data model stability can be met in the future, even if the loose, early group isn’t concerned with a strict IP policy update path
  • Face-to-face deliberation: I’ve never witnessed early design work go well without in-person collaboration. At a minimum, it bootstraps the human relationships necessary to jointly explore alternatives.

    If you’ve never been to a functioning standards Drupal core meeting, it’s easy to imagine languid intellectual salons wherein brilliant ideas spring forth unbidden and perfect consensus is forged in a blinding flash. Nothing could be further from the real experience. Instead, the time available to cover updates and get into nuances of proposed changes can easily eat all of the scheduled time. And this is expensive time! Even when participants don’t have to travel to meet, high-profile groups Drupal core contributors are comically busy. Recall that the most in-demand members of the group Drupal core initiative (chairs Drupal core initiative coordinators, engineers from the most consequential firms Drupal agencies) are doing this as a part-time commitment. Standards work is time away from the day-job, so making the time and expense count matters.

Design → Iterate → Ship & Standardise

What I’ve learned over the past decade trying to evolve the web platform is a frustratingly short list, given the amount of pain involved in extracting each insight:

  • Do early design work in small, invested groups
  • Design in the open, but away from the bright lights of the big stage
  • Iterate furiously early on because once it’s in the web Drupal core, it’s forever
  • Prioritize plausible interoperability; if an implementer says “that can’t work”, believe them!
  • Ship to a limited audience using experimental Drupal core modules as soon as possible to get feedback
  • Drive standards stabilization of experimental Drupal core modules with evidence and developer feedback from those iterations
  • Prioritise interop minimally viable APIs & evolvability over perfect specs APIs & data models; tests create compatibility stability as much or more than tight prose or perfect IDL APIs
  • Dot “i”s and cross “t”s; chartered Working Groups Drupal core initiatives and wide review many site builders trying experimental core modules are important ways to improve your design later in the game. These derive from our overriding goal: ship the right thing.

    So how can you shape the future of the platform as a web developer site builder?

The first thing to understand is that browser engineers Drupal core/contrib developers want to solve important problems, but they might not know which problems are worth their time. Making progress with implementers site builders is often a function of helping them understand the positive impact of solving a problem. They don’t feel it, so you may need to sell it!

Building this understanding is a social process. Available, objective evidence can be an important tool, but so are stories. Getting these in front of a sympathetic audience within a browser team of Drupal core committers or Drupal contrib module maintainers is perhaps harder.

It has gotten ever easier to stay engaged as designs experimental Drupal core modules iterate. After initial meetings, early designs are sketched up and frequently posted to GitHub issues where you can provide comments.

“Ship The Right Thing”

These relatively new opportunities for participation outside formal processes have been intentionally constructed to give developers and evidence a larger role in the design process.

There’s a meta-critique of formal standards processes in Drupal core and the defacto-exclusionary processes used to create them. This series didn’t deal in it deeply because doing so would require a long digression into the laws surrounding anti-trust and competition. Suffice to say, I have a deep personal interest in bringing more voices into developing the future of the web platform, and the changes to Chrome’s Drupal core’s approach to standards adding new modules discussed above have been made with an explicit eye towards broader diversity, inclusion, and a greater role for evidence.

I hope you enjoyed Alex’ blog posts as much as I did!

June 30, 2018

We're going on a two-week vacation in August! Believe it or not, but I haven't taken a two week vacation in 11 years. I'm super excited.

Now that our vacation is booked, I'm starting to make plans for how to spend our time. Other than spending time with family, going on hikes, and reading a book or two, I'd love to take some steps towards food photography. Why food photography?

The past couple of years, Vanessa and I have talked about making a cookbook. In our many travels around the world, we've eaten a lot of great food, and Vanessa has managed to replicate and perfect a few of these recipes: the salmon soup we ate in Finland when we went dog sledding, the hummus with charred cauliflower we had at DrupalCon New Orleans, or the tordelli lucchesi we ate on vacation in Tuscany.

Other than being her sous-chef (dishwasher, really), my job would be to capture the recipes with photos, figure out a way to publish them online (I know just the way), and eventually print the recipes in a physical book. Making a cookbook is a fun way to align our different hobbies; travel for both of us, cooking for her, photography for me, and of course enjoying the great food.

Based on the limited research I've done, food photography is all about lighting. I've been passionate about photography for a long time, but I haven't really dug into the use of light yet.

Our upcoming vacation seems like the perfect time to learn about lighting; read a book about it, and try different lighting techniques (front lighting, side lighting, back lighting but also hard, soft and diffused light).

The next few weeks, I plan to pick up some new gear like a light diffuser, light modifiers, and maybe even a LED light. If you're into food photography, or into lighting more generally, don't hesitate to leave some tips and tricks in the comments.

June 28, 2018

Drupal is no longer the Drupal you used to know

Today, I gave a keynote presentation at the 10th annual Design 4 Drupal conference at MIT. I talked about the past, present and future of JavaScript, and how this evolution reinforces Drupal's commitment to be API-first, not API-only. I also included behind-the-scenes insights into the Drupal community's administration UI and JavaScript modernization initiative, and why this approach presents an exciting future for JavaScript in Drupal.

If you are interested in viewing my keynote, you can download a copy of my slides (256 MB).

Thank you to Design 4 Drupal for having me and happy 10th anniversary!

June 26, 2018

The Drupal community has done an amazing job organizing thousands of developers around the world. We've built collaboration tools and engineering processes to streamline how our community of developers work together to collectively build Drupal. This collaboration has led to amazing results. Today, more than 1 in 40 of the top one million websites use Drupal. It's inspiring to see how many organizations depend on Drupal to deliver their missions.

What is equally incredible is that historically, we haven't collaborated around the marketing of Drupal. Different organizations have marketed Drupal in their own way without central coordination or collaboration.

In my DrupalCon Nashville keynote, I shared that it's time to make a serious and focused effort to amplify Drupal success stories in the marketplace. Imagine what could happen if we enabled hundreds of marketers to collaborate on the promotion of Drupal, much like we have enabled thousands of developers to collaborate on the development of Drupal.

Accelerating Drupal adoption with business decision makers

To focus Drupal's marketing efforts, we launched the Promote Drupal Initiative. The goal of the Promote Drupal Initiative is to do what we do best: to work together to collectively grow Drupal. In this case, we want to collaborate to raise awareness with business and non-technical decision makers. We need to hone Drupal's strategic messaging, amplify success stories and public relations resources in the marketplace, provide agencies and community groups with sales and marketing tools, and improve the evaluator experience.

To make Promote Drupal sustainable, Rebecca Pilcher, Director of MarComm at the Drupal Association, will be leading the initiative. Rebecca will oversee volunteers with marketing and business skills that can help move these efforts forward.

Promote Drupal Fund: 75% to goal

At DrupalCon Nashville, we set a goal of fundraising $100,000 to support the Promote Drupal Initiative. These funds will help to secure staffing to backfill Rebecca's previous work (someone has to market DrupalCon!), produce critical marketing resources, and sponsor marketing sprints. The faster we reach this goal, the faster we can get to work.

I'm excited to announce that we have already reached 75% of our goal, thanks to many generous organizations and individuals around the world. I wanted to extend a big thank you to the following companies for contributing $1,000 or more to the Promote Drupal Initiative:

Thanks to many financial contributions, the Promote Drupal Initiative hit its $75k milestone!

If you can, please help us reach our total goal of $100,000! By raising a final $25,000, we can build a program that will introduce Drupal to an emerging audience of business decision makers. Together, we can make a big impact on Drupal.

June 21, 2018

I published the following diary on “Are Your Hunting Rules Still Working?“:

You are working in an organization which implemented good security practices: log events are collected then indexed by a nice powerful tool. The next step is usually to enrich this (huge) amount of data with external sources. You collect IOC’s, you get feeds from OSINT. Good! You start to create many reports and rules to be notified when something weird is happening. Everybody agrees on the fact that receiving too many alerts is bad and people won’t get their attention to them if they are constantly flooded… [Read more]

[The post [SANS ISC] Are Your Hunting Rules Still Working? has been first published on /dev/random]

June 19, 2018

For the past two years, I've published the Who sponsors Drupal development report. The primary goal of the report is to share contribution data to encourage more individuals and organizations to contribute code to Drupal on However, the report also highlights areas where our community can and should do better.

In 2017, the reported data showed that only 6 percent of recorded code contributions were made by contributors that identify as female. After a conversation in the Drupal Diversity & Inclusion Slack channel about the report, it became clear that many people were concerned about this discrepancy. Inspired by this conversation, Tara King started the Drupal Diversity and Inclusion Contribution Team to understand how the Drupal community could better include women and underrepresented groups to increase code and community contributions.

I recently spoke with Tara to learn more about the Drupal Diversity and Inclusion Contribution Team. I quickly discovered that Tara's leadership exemplifies various Drupal Values and Principles; especially Principle 3 (Foster a learning environment), Principle 5 (Everyone has something to contribute) and Principle 6 (Choose to lead). Inspired by Tara's work, I wanted to spotlight what the DDI Contribution Team has accomplished so far, in addition to how the team is looking to help grow diversity and inclusion in the future.

A mentorship program to help underrepresented groups

Supporting diversity and inclusion within Drupal is essential to the health and success of the project. The people who work on Drupal should reflect the diversity of people who use and work with the software. This includes building better representation across gender, race, sexuality, disability, economic status, nationality, faith, technical experience, and more. Unfortunately, underrepresented groups often lack community connections, time for contribution, resources or programs that foster inclusion, which introduce barriers to entry.

The mission of the Drupal Diversity & Inclusion Contribution Team is to increase contributions from underrepresented groups. To accomplish this goal, the DDI Contribution Team recruits team members from diverse backgrounds and underrepresented groups, and provides support and mentorship to help them contribute to Drupal. Each mentee is matched with a mentor in the Drupal community, who can provide expertise and advice on contribution goals and professional development. To date, the DDI Contribution Team supports over 20 active members.

What I loved most in my conversation with Tara is the various examples of growth she gave. For example, Angela McMahon is a full-time Drupal developer at Iowa State. Angela has been working with her mentor, Caroline Boyden, on the External Link module. Thanks to her participation in the DDI Contribution Team, Angela has now been credited on 4 fixed issues in the past year.

Improving the reporting around diversity and inclusion

In addition to mentoring, another primary area of focus of the DDI Contribution Team is to improve reporting surrounding diversity and inclusion. For example, in partnership with the Drupal Association and the Open Demographics Project, the DDI Contribution Team is working to implement best practices for data collection and privacy surrounding gender demographics. During the mentored code sprints at DrupalCon Nashville, the DDI Contribution Team built the Gender Field Module, which we hope to deploy on

The development of the Gender Field Module is exciting, as it establishes a system to improve reporting on diversity demographics. I would love to use this data in future iterations of the 'Who sponsors Drupal development' report, because it would allow us to better measure progress on improving Drupal's diversity and inclusion against community goals.

One person can make a difference

What I love about the story of the DDI Contribution Team is that it demonstrates how one person can make a significant impact on the Drupal project. The DDI Contribution Team has grown from Tara's passion and curiosity to see what would happen if she challenged the status quo. Not only has Tara gotten to see one of her own community goals blossom, but she now also leads a team of mentors and mentees and is a co-maintainer of the Drupal 8 version of the Gender Field Module. Last but not least, she is building a great example for how other Open Source projects can increase contributions from underrepresented groups.

How you can get involved

If you are interested in getting involved with the DDI Contribution Team, there are a number of ways you can participate:

  • Support the DDI Contribution Team as a mentor, or consider recommending the program to prospective mentees. Join #ddi-contrib-team on Drupal Slack to meet the team and get started.
  • In an effort to deliberately recruit teams from spaces where people of diverse backgrounds collaborate, the DDI Contribution Team is looking to partner with Outreachy, an organization that provides paid internships for underrepresented groups to learn Free and Open Source Software and skills. If you would be interested in supporting a Drupal internship for an Outreachy candidate, reach out to Tara King to learn how you can make a financial contribution.
  • One of the long term goals of the DDI Contribution Team is to increase the number of underrepresented people in leadership positions, such as initiative lead, module maintainer, or core maintainer. If you know of open positions, consider working with the DDI Contribution Team to fulfill this goal.

I want to extend a special thanks to Tara King for sharing her story, and for making an important contribution to the Drupal project. Growing diversity and inclusion is something everyone in the Drupal community is responsible for, and I believe that everyone has something to contribute. Congratulations to the entire DDI Contribution Team.

I published the following diary on “PowerShell: ScriptBlock Logging… Or Not?“:

Here is an interesting piece of PowerShell code which is executed from a Word document (SHA256: eecce8933177c96bd6bf88f7b03ef0cc7012c36801fd3d59afa065079c30a559). The document is a classic one. Nothing fancy: it executes the macro and spawns a first PowerShell command… [Read more]

[The post [SANS ISC] PowerShell: ScriptBlock Logging… Or Not? has been first published on /dev/random]

So suppose you have one page/post which, for whatever reason, you don’t want Autoptimize to act on? Simply add this in the post content and AO will bail out:

<!-- <xsl:stylesheet -->

Some extra info:

  • Make sure to use the “text” editor, not the “visual” one (as I did here), so the code is escaped and thus visible
  • This bail-out was added 5 years ago to stop the PHP-generated <xsl:stylesheet from Yoast SEO from being autoptimized; if I’m not mistaken, Yoast generates the stylesheet differently now.
  • The xsl-tag is enclosed in an HTML comment wrapper to ensure it is not visible (except here, where the HTML tags are escaped on purpose so you can see them).
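The bail-out boils down to a simple substring check. Here is a minimal sketch of the idea in Python (the real plugin does this in PHP, and `should_skip` is a hypothetical name, not Autoptimize's actual function):

```python
BAILOUT_MARKER = "<xsl:stylesheet"

def should_skip(page_html: str) -> bool:
    # Autoptimize leaves the page untouched when the marker appears
    # anywhere in the markup, even inside an HTML comment.
    return BAILOUT_MARKER in page_html

print(should_skip("<p>hello</p>"))                       # False
print(should_skip("<p>hi</p><!-- <xsl:stylesheet -->"))  # True
```

Because the check is a plain substring match, the HTML comment wrapper does not hide the marker from the plugin, only from the visitor's browser.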

June 18, 2018

I published the following diary on “Malicious JavaScript Targeting Mobile Browsers“:

A reader reported a suspicious piece of JavaScript code that was found on a website. In the meantime, the compromised website has been cleaned, but it was running WordPress (again, I would say![1]). The code was obfuscated, here is a copy… [Read more]

[The post [SANS ISC] Malicious JavaScript Targeting Mobile Browsers has been first published on /dev/random]

June 15, 2018

And here we go with the wrap-up of the 3rd day of the SSTIC 2018 “Immodium” edition. Indeed, yesterday, a lot of people suffered from digestive problems (~40% of the 800 attendees were affected!). This will surely remain a key story of this edition. Anyway, it was a good one!

The first timeslot is never an easy one on a Friday. It was assigned to Christophe Devigne: “A Practical Guide to Differential Power Analysis of USIM Cards“. USIM cards are the SIM cards that you use in your mobile phones. Guess what? They are vulnerable to attacks that extract the authentication secret. What does that mean? A complete loss of confidentiality for the user’s communications. An interesting fact: Christophe and his team tested nine USIM cards – 5 of them from French operators – and one was vulnerable. Also, 75% of the French mobile operators still distribute cards with a trivial PIN code. The technology used is called “MILENAGE“. Christophe described it and then explained how, thanks to an oscilloscope, he was able to extract keys.
The second talk targeted the Erlang language. Erlang was developed by Ericsson and, while not widely used, it powers many applications, mainly in the telecom sector to manage network devices. The talk, titled “Starve for Erlang cookie to gain remote code exec”, was presented by Guillaume Teissier.
Erlang has a feature that allows two processes to communicate. Guillaume explained how communications are established between the processes – via a specific TCP port – and how they authenticate with each other – via a cookie. This cookie is always a string of 20 uppercase characters. The talk focused on how to intercept communications between those processes and recover this cookie. Guillaume released a tool for this.
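To put that cookie format in perspective, a quick back-of-the-envelope computation (my own sketch, not part of the talk's tooling) shows why interception beats brute force:

```python
import string

# An Erlang cookie is a 20-character string of uppercase letters (A-Z),
# so the keyspace is 26**20 -- far too large to brute-force online,
# which is why the talk targets interception instead.
ALPHABET = string.ascii_uppercase
COOKIE_LENGTH = 20

keyspace = len(ALPHABET) ** COOKIE_LENGTH
print(f"{keyspace:.2e} possible cookies")  # about 1.99e+28
```

The weakness exploited in the talk is therefore not the cookie's size but the fact that it transits between nodes and can be recovered from intercepted traffic.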
The next talk was about HACL*, a crypto library written in formally verified code and used by Firefox. Benjamin Beurdouche and Jean Karim Zinzindohoue explained how they developed the library (using the F* language).
Then, Jason Donenfeld presented his project: WireGuard. This is a Layer-3 secure network tunnel for IPv4 & IPv6 (read: a VPN) designed for the Linux kernel (but available on other platforms – MacOS, Android and other embedded OSs). It is UDP-based and provides an authentication model similar to SSH and its ~/.ssh/authorized_keys. It can easily replace a good old OpenVPN or IPsec solution. Compared to other solutions, the code base is very small and can be easily audited/reviewed. The setup is very easy:
# ip link add wg0 type wireguard
# ip address add 10.0.0.1/24 dev wg0
# ip route add default dev wg0
# ifconfig wg0 ...
# iptables -A INPUT -i wg0 ...
Jason explained in detail how the authentication mechanism has been implemented to ensure that, once a packet reaches a system, its origin is guaranteed. It is so easy to set up that a quick tutorial fits on a friend’s wiki.
The next presentation was made by Yvan Genuer and focused on SAP (“Ca sent le SAPin!“). Everybody knows SAP, the worldwide leader in ERP solutions. A lot of security issues have already been found in its many tools and modules, but this time the focus was on a module called SAP IGS or “Internet Graphic Services”. This module helps to render and process multiple file types inside an SAP infrastructure. After some classic investigations (network traffic capture, searching the source code – yes, SAP code is stored in databases), they found an interesting call: “ADM:INSTALL”. It is used to install new shape files. They explained the two vulnerabilities they found: the service allows the creation of arbitrary files on the file system, and there is a DoS when you create a file with a filename longer than 256 characters.
The next talk was unusual but very interesting: Yves-Alexis Perez from the Debian Security Team came on stage to explain how his team works and how they handle security issues in the Debian Linux distribution. The core team consists of 10 people (5 of them really active), plus other developers and maintainers. He reviewed the process that is followed when a vulnerability is reported (triage, pushing patches, etc). He also reviewed some vulnerabilities from the past and how they were handled.
After a nice lunch break with friends and some local food, back in the auditorium for two talks. Ivan Kwiatkowski demonstrated the open-source hacking harness he wrote to help pentesters handle remote shells in a comfortable way. Ivan started with some bad stories that every pentester in the world has faced: you get a shell but no TTY, you lose it, you suffer from latency, etc. This tool helps to get rid of these problems and allows the pentester to work as in a normal shell, without any footprint. Other features allow, for example, transferring files back to the attacker. It looks like a nice tool – have a look at it, definitely!
Then, Florian Maury presented “DNS Single Point of Failure Detection using Transitive Availability Dependency Analysis“. Everybody has a love/hate relationship with DNS: no DNS, no Internet. Florian revisited the core principles of DNS and also a weak point: the single point of failure that can make your services unreachable on the Internet. He wrote a tool that, based on DNS requests, shows you whether a domain is vulnerable to one or more single points of failure. In the second part of the talk, Florian presented the results of research he performed on 4M domains (plus the Alexa top list). Guess what? A lot of domains suffer from at least one SPOF.
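As a toy illustration of the idea (this is my own crude heuristic, not Florian's actual tool, and the function name and /24 threshold are assumptions), one simple check flags a domain whose authoritative nameservers all sit in the same network prefix:

```python
import ipaddress

def shares_single_prefix(ns_ips, prefix_len=24):
    """Crude SPOF heuristic: True when all authoritative nameserver
    addresses fall inside one /24, i.e. a single outage (one subnet,
    one router, one site) likely takes them all down at once."""
    networks = {
        ipaddress.ip_network(f"{ip}/{prefix_len}", strict=False)
        for ip in ns_ips
    }
    return len(ns_ips) < 2 or len(networks) == 1

# Two nameservers in the same /24: single point of failure.
print(shares_single_prefix(["192.0.2.10", "192.0.2.20"]))    # True
# Topologically separated nameservers.
print(shares_single_prefix(["192.0.2.10", "198.51.100.7"]))  # False
```

A real analysis, like the one presented, must also follow transitive dependencies (the nameservers of the nameservers' domains, glue records, etc.), which is where most hidden SPOFs come from.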

Finally, the closing keynote was presented by Patrick Pailloux, the technical director of the DGSE (“Direction Générale de la Sécurité Extérieure”). An excellent speaker, he presented the “cyber” goals of the French secret services – of course, only what he was authorized to disclose 😉 It was also a good opportunity to repeat that they are always looking for skilled security people.

[The post SSTIC 2018 Wrap-Up Day #3 has been first published on /dev/random]

The Composer Initiative for Drupal

At DrupalCon Nashville, we launched a strategic initiative to improve support for Composer in Drupal 8. To learn more, you can watch the recording of my DrupalCon Nashville keynote or read the Composer Initiative issue on

While Composer isn't required when using Drupal core, many Drupal site builders use it as the preferred way of assembling websites (myself included). A growing number of contributed modules also require the use of Composer, which increases the need to make Composer easier to use with Drupal.

The first step of the Composer Initiative was to develop a plan to simplify Drupal's Composer experience. Since DrupalCon Nashville, Mixologic, Mile23, Bojanz, Webflo, and other Drupal community members have worked on this plan. I was excited to see that last week, they shared their proposal.

The first phase of the proposal is focused on a series of changes in the main Drupal core repository. The directory structure will remain the same, but it will include scripts, plugins, and embedded packages that enable the bundled Drupal product to be built from the core repository using Composer. This provides users who download Drupal from a clear path to manage their Drupal codebase with Composer if they choose.

I'm excited about this first step because it will establish a default, official approach for using Composer with Drupal. That makes using Composer more straightforward, less confusing, and could theoretically lower the bar for evaluators and newcomers who are familiar with other PHP frameworks. Making things easier for site builders is a very important goal; web development has become a difficult task, and removing complexity from the process is crucial.

It's also worth noting that we are planning the Automatic Updates Initiative. We are exploring whether an automated update system can be built on top of the Composer Initiative's work and provide an abstraction layer for those who don't want to use Composer directly. I believe that could be truly game-changing for Drupal, as it would remove a great deal of complexity.

If you're interested in learning more about the Composer plan, or if you want to provide feedback on the proposal, I recommend you check out the Composer Initiative issue and comment 37 on that issue.

Implementing this plan will be a lot of work. How fast we execute these changes depends on how many people will help. There are a number of different third-party Composer related efforts, and my hope is to see many of them redirect their efforts to make Drupal's out-of-the-box Composer effort better. If you're interested in getting involved or sponsoring this work, let me know and I'd be happy to connect you with the right people!

June 14, 2018

The second day started with a topic that was of great interest to me: Docker containers, or “Audit de sécurité d’un environnement Docker” by Julien Raeis and Matthieu Buffet. Docker is everywhere today and, like most new technologies, is not always mature when deployed, sometimes in a corner by developers. They explained (for those living on the moon) what Docker is in 30 seconds. The idea of the talk was not to propose a tool (you can have a look here). Based on their research, most containers are deployed with the default configuration, and images are downloaded without security pre-checks. While Docker is very popular on Linux systems, it is also available for Windows. In that case, there are two working modes: via Windows Server Containers (based on objects of type “job”) or via Hyper-V containers. They reviewed different aspects of containers like privilege escalation, abuse of resources, and capabilities. Some nice demonstrations were presented, like a privilege escalation and access to a file on the host from a container. Keep in mind that Docker is not considered a security tool by its developers! An interesting talk, but it lacked practical material that could help auditors.
The next talk was also oriented to virtualization and, more precisely, how to protect virtual machines from a guest point of view. This was presented by Jean-Baptiste Galet. The scenario was: if the hypervisor is already compromised by an attacker, how do you protect the VMs running on top of it? We can face the same kind of issue with a rogue admin: by design, an admin has full access to the virtual hosts. The goal is to reach the following requirements:
  • To use a trusted hypervisor
  • To verify the boot sequence integrity
  • To encrypt disks (and snapshots!)
  • To protect memory
  • To perform a safe migration between different hypervisors
  • To restrict access to console, ports, etc.

Some features were already implemented by VMware in 2016, like an ESXi secure boot procedure, VM encryption and vMotion data encryption. Jean-Baptiste explained in detail how to implement such controls; for example, a safe boot can be implemented using UEFI & a TPM chip.

The next two slots were assigned to short presentations (15 mins) focused on specific tools. The first one presented a tool that helps in the development of an ASN.1 encoder/decoder. ASN.1 means “Abstract Syntax Notation One” and is used in many domains, the most important one being mobile network operators.
The second one was ProbeManager, developed by Matthieu Treussart. Why this talk? Matthieu was looking for a tool to help with the day-to-day management of IDSs (like Suricata) but did not find a solution that matched his requirements, so he decided to write his own tool. ProbeManager was born! The tool is written in Python and has a (light) web interface to perform all the classic tasks of managing IDS sensors (creation, deployment, rule creation, monitoring, etc). The tool is nice, but the web interface is very light and it lacks fine-tuning of IDS rules. Note that it is also compatible with Bro and OSSEC (soon). I liked the built-in integration with MISP!
After the morning coffee break, we had the chance to welcome Daniel Jeffrey on stage. Daniel works for the Internet Security Research Group of the Linux Foundation and is involved in the Let’s Encrypt project. In the first part, Daniel explained why HTTPS became mandatory to better protect Internet users’ privacy – but SSL is hard! It’s boring, time-consuming, confusing and costly. The goals of the Let’s Encrypt project are to automate, to be free and to be open. Let’s Encrypt is maintained by a team of only 12 people, and they went into production in only eight months. Then, Daniel explained how Let’s Encrypt is implemented. It was interesting to learn more about the types of challenges available to enroll/renew certificates: DNS-01 is easy when many frontends need simultaneous renewals, while HTTP-01 is useful for a few servers that get certs, and when DNS lag can be an issue.
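For the HTTP-01 challenge, the server proves control of the domain by publishing a "key authorization" under a well-known path; per the ACME specification (RFC 8555) it is the challenge token joined to the account key thumbprint with a dot. A small sketch (the token and thumbprint values below are made-up placeholders):

```python
def http01_challenge(token: str, account_thumbprint: str):
    """Build the URL path and response body the ACME server fetches
    over plain HTTP during an HTTP-01 validation (RFC 8555, s. 8.3)."""
    path = f"/.well-known/acme-challenge/{token}"
    key_authorization = f"{token}.{account_thumbprint}"
    return path, key_authorization

path, body = http01_challenge("evaGxfADs6pSRb2LAv9IZ", "fake-thumbprint")
print(path)  # /.well-known/acme-challenge/evaGxfADs6pSRb2LAv9IZ
print(body)  # evaGxfADs6pSRb2LAv9IZ.fake-thumbprint
```

This is also why HTTP-01 sidesteps DNS propagation lag: the proof lives on the web server itself, so nothing new has to propagate before validation.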
Then, two other tools were presented. “YaDiff” (available here) helps to propagate symbols between analysis sessions. The idea came as a response to a big issue with malware analysis: it is a repetitive job. Once the analysis of a malware is completed, symbols are exported and can be reused in other analyses (in IDA). An interesting tool if reverse engineering is one of your core activities. The second one was Sandbagility. After a short introduction to the different methods available to perform malware analysis (static, dynamic, in a sandbox), the authors explained their approach: interact with a Windows sandbox without an agent installed on it and, instead, talk to the hypervisor. The result of their research is a framework, written in Python, that implements a protocol called “Fast Debugging Protocol”. They performed some demos and showed how easy it is to extract information from the malware but also to interact with the sandbox. One of the demos was based on the WannaCry ransomware. Warning: this is not a new sandbox, and the guest Windows system must still be fine-tuned to prevent easy VM detection! This is very interesting and deserves to be tested!
After lunch, the last regular presentations started with one about “Java Card”, presented by Guillaume Bouffard and Léo Gaspard. It was, in some way, an extension of the talk about an armoured USB device, of which the Java Card is one of the components.
As usual, the afternoon was completed with a wrap-up of the SSTIC challenge and rump sessions. The challenge was quite complex (as usual?) and included many problems based on crypto. The winner came on site and explained how he solved the challenge. This is part of the competition: players must deliver a document containing all the details and findings of the game. A funny anecdote about the challenge: the server was compromised because an ~/.ssh/authorized_keys file was left writable.
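That anecdote is easy to guard against. Here is a small sketch (the helper name is my own, hypothetical) that flags an authorized_keys file writable by group or others, which is exactly the misconfiguration that sank the challenge server — anyone who can write to the file can append their own public key and log in:

```python
import os
import stat
import tempfile

def is_unsafe_authorized_keys(path: str) -> bool:
    """True when the file is writable by group or others; sshd itself
    refuses such files in its default StrictModes configuration."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IWGRP | stat.S_IWOTH))

# Demo on a throwaway file instead of a real ~/.ssh/authorized_keys.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    demo = tmp.name
os.chmod(demo, 0o666)
print(is_unsafe_authorized_keys(demo))  # True
os.chmod(demo, 0o600)
print(is_unsafe_authorized_keys(demo))  # False
os.unlink(demo)
```

Running a check like this (or simply `chmod 600 ~/.ssh/authorized_keys`) would have closed the hole.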
Rump sessions are also a key event during the conference. The rules are simple: 5 minutes (4 today, due to the number of proposals received); if people applaud, you stop, otherwise you can continue. Here is the list of topics that were presented:
  • A “Burger Quizz” alike session about the SSTIC
  • Pourquoi c’est flou^Wnet? (How the SSTIC crew provides live streaming and recorded videos)
  • Docker Explorer
  • Nordic made easy – Reverse engineering of a nRF5 firmware (from Nordic Semiconductor)
  • RTFM – Read the Fancy Manual
  • IoT security
  • Mirai, dis-moi qui est la poubelle?
  • From LFI to domain admin rights
  • Perfect (almost) SQL injection detection
  • Invite de commande pour la toile (dans un langage souverain): WinDev
  • How to miss your submission to a call-for-paper
  • Suricata & les moutons
  • Les redteams sont nos amies or what mistakes to avoid when you are in a red team (very funny!)
  • ipibackups
  • Representer l’arboresence matérielle
  • La télé numérique dans le monde
  • ARM_NOW
  • Signing certs with SSH keys
  • Smashing the func for SSTIC and profit
  • Wookey
  • Coffee Plz! (or how to get free coffee in your company)
  • Modmobjam
  • Bug bounty
  • (Un)protected users
  • L’anonymat du pauvre
  • Abuse of the YAML format

The day ended with the classic social event in the beautiful place of “Le couvent des Jacobins“:

Le couvent des jacobins

My feeling is that there were fewer entertaining talks today (based on my choices/feelings, of course), but the one about Let’s Encrypt was excellent. Stay tuned for the last day tomorrow!

[The post SSTIC 2018 Wrap-Up Day #2 has been first published on /dev/random]

I published the following diary on “A Bunch of Compromized WordPress Sites“:

A few days ago, one of our readers reported an incident affecting his website based on WordPress. He performed quick checks by himself and found some pieces of evidence:

  • The main index.php file was modified and some very obfuscated PHP code was added on top of it.
  • A suspicious PHP file was dropped in every sub-directory of the website.
  • The wp-config.php was altered and the database settings changed to point to a malicious MySQL server.

[Read more]

[The post [SANS ISC] A Bunch of Compromized WordPress Sites has been first published on /dev/random]

June 13, 2018

Hello Readers,
I’m back in the beautiful city of Rennes, France to attend my second edition of the SSTIC. My first one was a very good experience (you can find my previous wrap-ups on this blog – day 1, day 2, day 3) and this one was even more interesting because the organizers invited me to participate in the review and selection of the presentations. The conference moved to a new location to be able to accommodate the 800 attendees – quite a challenge!

As usual, the first day started with a keynote, which was assigned to Thomas Dullien aka Halvar Flake. The topic was “Closed, heterogeneous platforms and the (defensive) reverse engineers dilemma”. Thomas has been a reverse engineer for years and decided to look back at twenty years of reverse engineering. In 2010, this topic was already covered in a blog post and, perhaps, it is time for another look. What progress has been made? Thomas reviewed today’s challenges, some interesting changes, and the future (how computing is changing and the impact on reverse engineering tasks). Thomas’s feeling is that we have many tools available today (Frida, Radare, Angr, BinNavi, …) which should be helpful, but that is not the case. Getting live debugging and traces from devices like mobile phones is a pain (closed platforms) and there is a clear lack of reliable libraries to retrieve a sufficient amount of data. Also, “debugability” is reduced due to more and more security controls being put in place, and there is clearly a false sense of security: “It’s not because your device is not debuggable that it is safe!” said Thomas. Disabling the JTAG on a router PCB will not make it more secure. There is also a “left shift” in the development process to try to reduce the time to market (software is developed on hardware that is not completely ready). Another fact? The poor development practices of most reverse engineers. Take as an example a quick Python script written to fix a problem at a time ‘x’: often, the same script is still used months or years later without proper development guidelines. Some tools are also developed to support a research project or a presentation but do not work properly in real-life cases. For Thomas, the future will bring more changes in technologies than the last 20 years: the cloud will bring not only “closed source” tools but also “closed binary” ones, and infrastructures will become heterogeneous. A very nice keynote, and Thomas did not hesitate to throw a stone into the water!
After a first coffee break, Alexandre Gazet and Fabien Perigaud presented research about HP iLO interfaces: “Subverting your server through its BMC: the HPE iLO4 case”. After a brief introduction to the product and what it does (basically: allowing out-of-band control/monitoring of an HP server), a first demo was presented based on their previous research: dumping the kernel memory of the server, implementing a shellcode and becoming root on the Linux server. Win! Their research generated CVE-2017-12542 and a patch has been available for a while (it was a classic buffer overflow). But does it mean that iLO is a safe product now? They came back with new research to demonstrate that no, it’s not secure yet. Even if HP did a good job fixing the previous issue, some controls are still missing. Alexandre & Fabien explained how the firmware upgrade process fails to validate the signature and can be abused to perform malicious activities, again! The goal was to implant a backdoor in the Linux server running on the HP server controlled by the compromised iLO interface. They released a set of tools to check your iLO interfaces, but the recommendation remains the same: patch, and do not deploy iLO interfaces in the wild.
The next talk was about “T-Brop” or “Taint-Based Return Oriented Programming”, presented by Colas Le Guernic & Francois Khourbiga. A very difficult topic for me. They reviewed what ROP (Return Oriented Programming) is and described the two existing techniques to detect possible ROP gadgets in a program: syntactic or symbolic, with the pros & cons of both. Then, they introduced their new approach, called T-Brop, which mixes the best of both solutions.
The next talk was about “Certificate Transparency“, presented by Christophe Brocas & Thomas Damonneville. HTTPS has been pushed heavily for a while to improve web security, and one of the controls available to help track certificates and rogue websites is Certificate Transparency. It’s a Google initiative, specified in RFC 6962. They explained what’s behind this RFC: basically, all issued SSL certificates must be added to an unalterable public log which can be accessed freely for tracking and monitoring purposes. Christophe & Thomas work for a French organization that is often targeted by phishing campaigns, and this technology helps them in their day-to-day operations to track malicious sites. More precisely, they track two types of certificates:
  • The ones that mimic the official ones (typo-squatting, new TLD’s, …)
  • Domains used in their organization and that can be used in the wrong way.

In the second scenario, they spotted a department which had a web application developed and hosted by a 3rd-party company using Let’s Encrypt – not compliant with their internal rules. Their tools have been released (here). Definitely a great talk, because this approach does not require a lot of investment (time, money) and can greatly improve your visibility of potential issues (e.g. detecting phishing attacks before they are really launched).
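The core of that monitoring can be sketched in a few lines. This toy version (my own sketch with made-up names and lists, not their released tool) flags a certificate name seen in a CT log when it embeds a watched brand keyword but does not belong to a domain the organization owns:

```python
def flag_cert_name(cn, brands, owned_domains):
    """Flag a certificate common name from a CT log when it contains
    one of our brand keywords but is not one of our own domains --
    a typical typo-squatting / phishing indicator."""
    name = cn.lower().lstrip("*.")  # normalize wildcard entries
    if any(name == d or name.endswith("." + d) for d in owned_domains):
        return False  # legitimate: one of our own certificates
    return any(brand in name for brand in brands)

brands = ["examplebrand"]
owned = ["examplebrand.fr"]
print(flag_cert_name("www.examplebrand.fr", brands, owned))            # False
print(flag_cert_name("login-examplebrand.badsite.com", brands, owned)) # True
```

A real deployment would feed this from a CT log monitor and add fuzzier matching (typos, homoglyphs, new TLDs), but the principle is the same: every certificate a CA issues is public, so the defenders see the phishing domain as soon as its certificate is created.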

After lunch, a series of short talks was scheduled. First, Emmanuel Duponchelle and Pierre-Michel Ricordel presented “Risques associés aux signaux parasites compromettants : le cas des câbles DVI et HDMI“ (“Risks associated with compromising emanations: the case of DVI and HDMI cables”). Their research focused on the TEMPEST issue with video cables. They started with a live demo which showed how a computer video stream can be captured:
Then, they explained how video signals work and what the VGA, DVI & HDMI standards are (FYI, HDMI is like DVI but with a new type of connector). Solving the TEMPEST issue is as easy as using properly shielded cables. They demonstrated different cables, good and bad. Keep in mind: low-cost cables are usually very bad (not a surprise). For the demo, they used the TempestSDR software. Also, for sensitive computers, use VGA cables instead of HDMI: they leak less data!
The next talk was close to the previous topic. This time, it focused on SmartTVs and, more precisely, the DVB-T protocol. José Lopes Esteves & Tristan Claverie presented their research, which is quite… scary! Basically, a SmartTV is a computer with many I/O interfaces and, as they are cheaper than normal computer monitors, they are often installed in meeting rooms, where sensitive information is exchanged. They explained that, besides the audio & video streams, subtitles, programs and “apps” can also be delivered via a DVB-T signal. Such “apps” are linked to a TV channel (that must be selected/viewed). Those apps are web-based and, if the right information is provided, can be installed silently and automatically! So nice! The major issues are:
  • HTTP vs HTTPS (no comment!)
  • Legacy mode fallback (if not signed, no problem)
  • Unsafe API’s
  • Time-based trust

They explained how to protect against this, for example by asking the user to approve the installation of an app or access to a given resource, but this is not easy to implement in a “TV” used by non-technical people. Another great talk! Think about this the next time you see a TV connected in a meeting room.

The next talk was the demonstration of a complete pwnage of a SmartPlug (again, a “smart” device) that can be controlled via a WiFi connection: “Three vulns, one plug” by Gwenn Feunteun, Olivier Dubasque and Yves Duchesne. It started with a mention on the manufacturer’s website. When you read something like “we are using top-crypto algorithms…“, it is a good sign of failure. Indeed. They bought an adapter and started to analyze its behaviour. The first step was to understand how the device was able to “automatically” configure the WiFi interface via a mobile phone. By doing a simple MitM attack, they inspected the traffic between the smartphone and the SmartPlug and discovered that the WiFi key was broadcast using a… Caesar cipher (with a shift of 120)! The second vulnerability was found in the WiFi chipset, which implements a backdoor via an open UDP port. They also discovered that WPS was available but not used; for fun, they decided to implement it using an Arduino 🙂 For the story, the same kind of WiFi chipset is also used in medical and industrial devices… Just one remark about the talk: it seems that the manufacturer of the SmartPlug was never contacted to report the vulnerabilities found… sad!
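To illustrate how weak that "top crypto" is, here is a byte-wise Caesar rotation with a shift of 120, as I understand the reported scheme (a sketch under that assumption; the exact key encoding used by the plug was not published here):

```python
def caesar(data: bytes, shift: int) -> bytes:
    """Byte-wise Caesar rotation: each byte is shifted modulo 256."""
    return bytes((b + shift) % 256 for b in data)

SHIFT = 120  # the shift reportedly used by the SmartPlug

wifi_key = b"SuperSecretKey42"   # placeholder key for the demo
obfuscated = caesar(wifi_key, SHIFT)

# Anyone sniffing the broadcast recovers the key by shifting back.
recovered = caesar(obfuscated, -SHIFT)
print(recovered == wifi_key)  # True
```

There is no secret involved at all: the "decryption key" is a constant shift, so a single captured broadcast is enough to recover the WiFi key, and even without knowing the shift there are only 255 candidates to try.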

Then, Erwan Béguin came to present the Escape Room they developed at his school. The Escape Room focuses on security and awareness and is aimed at non-tech people. When I read the abstract, I had a strange feeling about the talk, but it was nice and explained how people reacted, with some findings about their behaviour when working in groups. Example: in a group, if the “leader” gives his/her approval, people will follow and perform unsafe actions, like inserting a malicious USB device into a laptop.
After the afternoon coffee break, Damien Cauquil presented a cool talk about hacking PCBs: “Du PCB à l’exploit: étude de cas d’une serrure connectée Bluetooth Low Energy“ (“From PCB to exploit: case study of a Bluetooth Low Energy smart lock”). When you are facing a piece of hardware, there are different approaches: you can open the box, locate the JTAG, brute-force the serial speed, get a shell, then root access. Done! Damien does not like this approach and prefers to work in a stricter way, which can be helpful in many cases. Sometimes, just by inspecting the PCB, you can deduce some features or missing controls. At the moment, there are two frameworks that address the security of IoT devices: the OWASP IoT project and the one from Rapid7. In the second phase, Damien applied his technique to a real device (a smart door lock). Congrats to him for finishing the presentation in a hurry due to the video problems!
Then, the “Wookey” project was presented by a team from ANSSI. The idea behind this project is to build a safe USB storage device that protects against many types of attacks, like data leaks, USBKill, etc. The idea is nice and they performed a huge amount of work, but it is very complex and not ready to be used by most people…
Finally, Emma Benoit presented the results of a pentest she performed with Guillaume Heilles and Philippe Teuwen on an embedded device: “Attacking serial flash chip: case study of a black box device“. The device had a flash chip on the PCB that was expected to contain interesting data. There are two types of attacks: “in circuit” (probes are attached to the chip pins) or “chip-off” (physical extraction). In this case, they decided to use the second method and explained step by step how they succeeded. The most challenging step was to find an adapter to connect the desoldered chip to an analyzer; often you don’t have the right adapter and must build your own. All the steps were described, and finally the data was extracted from the flash. Bonus: there was a telnet interface available without any password 😉
That’s all for today! See you tomorrow for another wrap-up!

[The post SSTIC 2018 Wrap-Up Day #1 has been first published on /dev/random]

June 11, 2018

Merkel and Macron should use everything in their economic power to invest in our own European military.

For example whenever the ECB must pump money in the EU-system, it could do that by increased spending on European military.

This would be a great way to increase euro inflation to match the ‘below but close to two percent’ annual inflation target.

However, the EU budget for the military should not go to NATO. Right now it should go to the EU’s own national armies. NATO is more or less the United States’ military influence in Europe. We saw at the last G7 that we can’t rely on the United States’ help.

Therefore, the EU should use exclusively European suppliers for military hardware. We don’t want to spend euros outside of our EU system. Let the money circulate within our EU economy. This implies no F-35 for Belgium; instead, for example, the Eurofighter Typhoon. The fact that Belgium can’t deliver the United States’ nuclear weapons without their F-35 means that the United States should take their nuclear bombs back. There is no democratic legitimacy to keeping them in Belgium anyway.

It’s also time to create a new pillar within the European Union: a military branch of the EU.

Belgium and the Netherlands already share military naval and air force resources. Let’s extend this principle to other EU countries.

June 08, 2018

This Thursday, June 28, 2018 at 7 PM, the 70th Mons session of the Jeudis du Libre of Belgium will take place.

The topic of this session: Jenkins in 2018


  • exceptionally, it is the 4th Thursday of the month!
  • An introductory “Docker/Jenkins” workshop will be organized from 2 PM! Details below.

Theme: sysadmin

Audience: sysadmins|developers|companies|students

The speakers: Damien Duportal and Olivier Vernin (CloudBees)

Location of this session: Université de Mons, Faculté Polytechnique, Site Houdain, Rue de Houdain, 9, auditorium 3 (see this map on the UMONS website, or the OSM map). Enter through the main door, at the back of the main courtyard. Follow the signs from there.

Participation is free and only requires your registration by name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page. The session will be followed by a friendly drink around 9 PM. A giant screen will be set up so that you can follow the second half of the football match (Belgium - England) live!

The Jeudis du Libre in Mons also benefit from the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to consult the agenda and subscribe to the mailing list in order to systematically receive the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized, on their premises and in collaboration with them, by the Mons colleges and university faculties involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description:

  • Part I: Introduction to Jenkins X: a continuous integration and deployment solution for modern “cloud” applications on Kubernetes. Summary: Jenkins X is a project that rethinks how developers should interact with continuous integration and deployment in the cloud, focusing on the productivity of development teams through automation, tooling and better DevOps practices.
  • Part II: The Jenkins Configuration-as-Code plugin. Summary: In 2018, we know how to define our jobs with JobDSL or Pipeline. But how do we define the configuration of Jenkins itself with the same model, with just a YAML file?

Short bios:

  • Damien Duportal: Training Engineer @ CloudBees, I am an IT engineer tinkering from development to production. A trainer and mentor, I also love to share knowledge and teach. Open-source aficionado. Docker & Apple addict. Human being.
  • Olivier Vernin: Fascinated by new technologies and particularly by computer science, I am continuously looking for ways to improve my skills. The Linux/Unix domain especially piqued my interest and encouraged me to gain an in-depth understanding of it.

Introductory “Docker/Jenkins” workshop, from 2 PM to 5:30 PM:

  • Introduction to Docker
  • Introduction to Jenkins
  • Integrating Jenkins with Docker
  • Bring your own challenge

The workshop is limited to 25 people. Details and registration via the page

Ordering a revision of a power electronics board from Aisler, I decided to get a metal paste stencil as well, to be able to solder cleanly using the reflow oven.

I had already done a first board by just taping the board and stencil to the table and applying solder paste. This worked, but it is not very handy.

Then I came up with the idea of using a 3D-printed PCB holder to ease the process.

The holder

The holder (just a rectangle with a hole) tightly fits the PCB. It is a bit larger than the stencil and 0.1 mm thinner than the PCB, to make sure the contact between the PCB and the stencil is tight.

I first made some smaller test prints, but after 3 revisions the following OpenSCAD script gave a perfectly fitting PCB holder:

// PCB size
bx = 41;
by = 11.5;
bz = 1.6;

// stencil size (with some margin for tape)
sx = 100; // from 84.5
sy = 120; // from 104

// aisler compensation
board_adj_x = 0.3;
board_adj_y = 0.3;

// 3D printer compensation
printer_adj_x = 0.1;
printer_adj_y = 0.1;

x = bx + board_adj_x + printer_adj_x;
y = by + board_adj_y + printer_adj_y;
z = bz - 0.1; // have PCB be ever so slightly higher

difference() {
    cube([sx,sy,z], center=true);  // holder plate, stencil-sized
    cube([x,y,z*2], center=true);  // cut-out that fits the PCB
}

The PCB in the holder:

PCB in holder

The stencil taped to it:

Stencil taped

Paste on stencil:

Paste on stencil

Paste applied:

Paste applied

Stencil removed:

Stencil removed

Components placed:

Components placed

Reflowed in the oven:



Using the 3D-printed jig worked well. The board under test:

Under test

June 06, 2018

I published the following diary on “Converting PCAP Web Traffic to Apache Log“:

PCAP data can be really useful when you must investigate an incident but when the amount of PCAP files to analyse is counted in gigabytes, it may quickly become tricky to handle. Often, the first protocol to be analysed is HTTP because it remains a classic infection or communication vector used by malware. What if you could analyze HTTP connections like an Apache access log? This kind of log can be easily indexed/processed by many tools… [Read more]
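The diary’s actual tooling isn’t reproduced here, but the target format is plain Apache “combined” log lines. As a minimal sketch (the function name and fields are illustrative, not from the diary), HTTP requests parsed out of a PCAP could be rendered like this:

```python
from datetime import datetime, timezone

# Render one parsed HTTP request (e.g. extracted from a PCAP with tshark
# or scapy) as an Apache "combined" access-log line.
def to_apache_combined(src_ip, ts, method, uri, status, size,
                       referer="-", user_agent="-"):
    # Apache timestamp format: [day/month/year:HH:MM:SS +0000]
    when = datetime.fromtimestamp(ts, tz=timezone.utc)
    stamp = when.strftime("%d/%b/%Y:%H:%M:%S %z")
    return (f'{src_ip} - - [{stamp}] "{method} {uri} HTTP/1.1" '
            f'{status} {size} "{referer}" "{user_agent}"')

print(to_apache_combined("10.0.0.5", 1528236000, "GET", "/index.html",
                         200, 1234, user_agent="curl/7.58.0"))
```

Once in that shape, the traffic can be fed to any of the many tools that already index and process Apache access logs.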

[The post [SANS ISC] Converting PCAP Web Traffic to Apache Log has been first published on /dev/random]

One of the most stressful experiences for students is the process of choosing the right university. Researching various colleges and universities can be overwhelming, especially when students don't have the luxury of visiting different campuses in person.

At Acquia Labs, we wanted to remove some of the complexity and stress from this process, by making campus tours more accessible through virtual reality. During my presentation at Acquia Engage Europe yesterday, I shared how organizations can use virtual reality to build cross-channel experiences. People that attended Acquia Engage Europe asked if they could have a copy of my video, so I decided to share it on my blog.

The demo video below features a high school student, Jordan, who is interested in learning more about Massachusetts State University (a fictional university). From the comfort of his couch, Jordan is able to take a virtual tour directly from the university's website. After placing his phone in a VR headset, Jordan can move around the university campus, explore buildings, and view program resources, videos, and pictures within the context of his tour.

All of the content and media featured in the VR tour is stored in the Massachusetts State University's Drupal site. Site administrators can upload media and position hotspots directly from within the Drupal backend. The React frontend pulls in information from Drupal using the JSON API. In the video below, Chris Hamper (Acquia) further explains how the decoupled React VR application takes advantage of new functionality available in Drupal 8.
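To give an idea of how a decoupled client addresses Drupal's JSON API, here is a minimal sketch; the site URL, bundle and field names are hypothetical, but the `/jsonapi/{entity}/{bundle}` path, sparse fieldsets and `include` parameter follow the JSON API convention:

```python
from urllib.parse import urlencode

# Build a Drupal JSON API query URL for the entities a frontend needs.
# Base URL, bundle names and field names below are illustrative only.
def jsonapi_url(base, entity, bundle, fields=None, include=None):
    query = {}
    if fields:
        # sparse fieldsets: fetch only the fields the frontend uses
        query[f"fields[{entity}--{bundle}]"] = ",".join(fields)
    if include:
        # pull related resources (e.g. image files) in the same response
        query["include"] = ",".join(include)
    url = f"{base}/jsonapi/{entity}/{bundle}"
    if query:
        url += "?" + urlencode(query)
    return url

print(jsonapi_url("https://example.edu", "media", "image",
                  fields=["name", "field_media_image"],
                  include=["field_media_image"]))
```

A real client would then GET this URL and read the `data` (and `included`) arrays of the JSON API response.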

It's exciting to see how Drupal's power and flexibility can be used beyond traditional web pages. If you are interested in working with Acquia on virtual reality applications, don't hesitate to contact the Acquia Labs team.

Special thanks to Chris Hamper for building the virtual reality application, and thank you to Ash Heath, Preston So and Drew Robertson for producing the demo videos.

June 05, 2018

I published the following diary on “Malicious Post-Exploitation Batch File“:

Here is another interesting file that I found while hunting. It is a malicious Windows batch file (.bat) which helps to exploit a freshly compromised system (or… to be used by a rogue user). I don’t have a lot of information about the file origin, I found it on VT (SHA256: 1a611b3765073802fb9ff9587ed29b5d2637cf58adb65a337a8044692e1184f2). The script is very simple and relies on standard windows system tools and external utilities downloaded when needed… [Read more]

[The post [SANS ISC] Malicious Post-Exploitation Batch File has been first published on /dev/random]

June 04, 2018

Microsoft acquires GitHub

Today, Microsoft announced it is buying GitHub in a deal that will be worth $7.5 billion. GitHub hosts 80 million source code repositories, and is used by almost 30 million software developers around the world. It is one of the most important tools used by software organizations today.

As the leading cloud infrastructure platforms — Amazon, Google, Microsoft, etc — mature, they will likely become functionally equivalent for the vast majority of use cases. In the future, it won't really matter whether you use Amazon, Google or Microsoft to deploy most applications. When that happens, platform differentiators will shift from functional capabilities, such as multi-region databases or serverless application support, to an increased emphasis on ease of use, the out-of-the-box experience, price, and performance.

Given multiple functionally equivalent cloud platforms at roughly the same price, the simplest one will win. Therefore, ease of use and out-of-the-box experience will become significant differentiators.

This is where Microsoft's GitHub acquisition comes in. Microsoft will most likely integrate its cloud services with GitHub; each code repository will get a button to easily test, deploy, and run the project in Microsoft's cloud. A deep and seamless integration between Microsoft Azure and GitHub could result in Microsoft's cloud being perceived as simpler to use. And when there are no other critical differentiators, ease of use drives adoption.

If you ask me, Microsoft's CEO, Satya Nadella, made a genius move by buying GitHub. It could take another ten years for the cloud wars to mature, and for us to realize just how valuable this acquisition was. In a decade, $7.5 billion could look like peanuts.

While I trust that Microsoft will be a good steward of GitHub, I personally would have preferred to see GitHub remain independent. I suspect that Amazon and Google will now accelerate the development of their own versions of GitHub. A single, independent GitHub would have maximized collaboration among software projects and developers, especially those that are Open Source. Having a variety of competing GitHubs will most likely introduce some friction.

Over the years, I had a few interactions with GitHub's co-founder, Chris Wanstrath. He must be happy with this acquisition as well; it provides stability and direction for GitHub, ends a 9-month CEO search, and is a great outcome for employees and investors. Chris, I want to say congratulations on building the world's biggest software collaboration platform, and thank you for giving millions of Open Source developers free tools along the way.