Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

July 20, 2018

This past weekend Vanessa and I took our much-anticipated annual weekend trip to Cape Cod. It's always a highlight for us. This year we set out to explore a new part of the Cape, as we've already explored the Upper Cape extensively.

Stage Harbor lighthouse

We found The Platinum Pebble Inn, a small luxury bed and breakfast in West Harwich, by way of TripAdvisor. The owners, Mike and Stefanie Hogan, were extremely gracious hosts. Not only did they run the Inn and serve up delicious breakfasts, they also asked what we wanted to do each day and then crafted our adventure with helpful tips.

On our first day we went on a 35 km (22 miles) bike ride out to Chatham, making stops along the way for ice cream, shopping and lobster rolls.

Bike ride

While we were at the Chatham Pier Fish Market, we watched the local fishermen offload their daily catch, with seals and seagulls hovering to get some lunch of their own. Once we arrived back at the Inn, we were able to cool off in the pool and relax in the late afternoon sun.

Unloading fish at the Chatham Pier Fish Market

Saturday we were up for a hike, so the Hogans sent us to the Dune Shacks Trail in Provincetown. We were told to carry in whatever we would need as there weren't any facilities on the beach. So we stopped at an authentic French bakery in Wellfleet to get lunch to take on our hike — the baguette took me right back to being in France, and while I was tempted by the pain au chocolat and pain aux raisins, I didn't indulge. I had too much ice cream already.

After we picked up lunch, we continued up Route 6 and parked on the side of the road to begin our journey into the woods and up the first of many intense sand dunes. The trails are unmarked, but there are visible paths that pass the Dune Shacks, which date back to the early 1900s. After 45 minutes we finally reached the beach and the ocean.

Dune Shacks Trail in Provincetown

Dune Shacks Trail in Provincetown

We rounded out the weekend with an afternoon sail on Nantucket Sound. It was a beautiful day, and the conditions made for a very relaxing sailing experience.




It was a great weekend!

By way of experiment, I've just enabled the PKCS#11 v2.20 implementation in the eID packages for Linux, but for now only in the packages in the "continuous" repository. In the past, enabling this has caused issues; there have been a few cases where Firefox would deadlock when PKCS#11 v2.20 was enabled, rather than the (very old and outdated) v2.11 version that we support by default. We believe we have identified and fixed all outstanding issues that caused such deadlocks, but it's difficult to be sure. So, if you have a Belgian electronic ID card and are willing to help me out and experiment a bit, here's something I'd like you to do:

  • Install the eID software (link above) as per normal.
  • Enable the "continuous" repository and upgrade to the packages in that repository:

    • For Debian, Ubuntu, or Linux Mint: edit /etc/apt/sources.list.d/eid.list, and follow the instructions there to enable the "continuous" repository. Don't forget the dpkg-reconfigure eid-archive step. Then, run apt update; apt -t continuous upgrade.
    • For Fedora and CentOS: run yum --enablerepo=beid-continuous install eid-mw
    • For OpenSUSE: run zypper mr -e beid-continuous; zypper up

The installed version of the eid-mw-libs or libbeidpkcs11-0 package should be v4.4.3-42-gf78d786e or higher.

One of the new features in version 2.20 of the PKCS#11 API is that it supports hotplugging of card readers; version 2.11 of that API does not, since it predates USB (like I said, it is outdated). So, try experimenting with hotplugging your card reader a bit; it should generally work. Try leaving it installed and using your system (and web browser) for a while with that version of the middleware; you shouldn't have any issues doing so, but if you do, I'd like to know about it.

Bug reports are welcome as issues on our GitHub repository.


July 19, 2018

It's been 12 months since my last progress report on Drupal core's API-first initiative. Over the past year, we've made a lot of important progress, so I wanted to provide another update.

Two and a half years ago, we shipped Drupal 8.0 with a built-in REST API. It marked the start of Drupal's evolution to an API-first platform. Since then, each of the five new releases of Drupal 8 introduced significant web service API improvements.

While I was an early advocate for adding web services to Drupal 8 five years ago, I'm even more certain about it today. Important market trends endorse this strategy, including integration with other technology solutions, the proliferation of new devices and digital channels, the growing adoption of JavaScript frameworks, and more.

In fact, I believe that this functionality is so crucial to the success of Drupal, that for several years now, Acquia has sponsored one or more full-time software developers to contribute to Drupal's web service APIs, in addition to funding different community contributors. Today, two Acquia developers work on Drupal web service APIs full time.

Drupal core's REST API

While Drupal 8.0 shipped with a basic REST API, the community has worked hard to improve its capabilities, robustness and test coverage. Drupal 8.5 shipped 5 months ago and included new REST API features and significant improvements. Drupal 8.6 will ship in September with a new batch of improvements.

One Drupal 8.6 improvement is the move of the API-first code into the individual modules, instead of the REST module providing it on their behalf. This might not seem like a significant change, but it is. In the long term, all Drupal modules should ship with web service APIs rather than depending on a central API module to provide them; that forces each module to consider the impact on REST API clients when making changes.

Another improvement we've made to the REST API in Drupal 8.6 is support for file uploads. If you want to understand how much thought and care went into REST support for file uploads, check out API-first Drupal: file uploads. It's hard work to make file uploads secure, support large files, optimize for performance, and provide a good developer experience.

JSON API

Adopting the JSON API module into core is important because JSON API is increasingly common in the JavaScript community.

We had originally planned to add JSON API to Drupal 8.3, which didn't happen. When that plan was originally conceived, we were only beginning to discover the extent to which Drupal's Routing, Entity, Field and Typed Data subsystems were insufficiently prepared for an API-first world. It's taken until the end of 2017 to prepare and solidify those foundational subsystems.

The same shortcomings that prevented the REST API from maturing also manifested themselves in JSON API, GraphQL and other API-first modules. Properly solving them at the root rather than adding workarounds takes time, but this approach makes for a stronger API-first ecosystem and increasingly faster progress!

Despite the delay, the JSON API team has been making incredible strides. In just the last six months, they have released 15 versions of their module. They have delivered improvements at a breathtaking pace, including comprehensive test coverage, better compliance with the JSON API specification, and numerous stability improvements.

The Drupal community has been eager for these improvements, and the usage of the JSON API module has grown 50% in the first half of 2018. The fact that module usage has increased while the total number of open issues has gone down is proof that the JSON API module has become stable and mature.

As excited as I am about this growth in adoption, the rapid pace of development, and the maturity of the JSON API module, we have decided not to add JSON API as an experimental module to Drupal 8.6. Instead, we plan to commit it to Drupal core early in the Drupal 8.7 development cycle and ship it as stable in Drupal 8.7.

GraphQL

For more than two years I've advocated that we consider adding GraphQL to Drupal core.

While core committers and core contributors haven't made GraphQL a priority yet, a lot of great progress has been made on the contributed GraphQL module, which has been getting closer to its first stable release. Despite not having a stable release, its adoption has grown an impressive 200% in the first six months of 2018 (though its usage is still measured in the hundreds of sites rather than thousands).

I'm also excited that the GraphQL specification has finally seen a new edition that is no longer encumbered by licensing concerns. This is great news for the Open Source community, and can only benefit GraphQL's adoption.

Admittedly, I don't know yet if the GraphQL module maintainers are on board with my recommendation to add GraphQL to core. We purposely postponed these conversations until we stabilized the REST API and added JSON API support. I'd still love to see the GraphQL module added to a future release of Drupal 8. Regardless of what we decide, GraphQL is an important component to an API-first Drupal, and I'm excited about its progress.

OAuth 2.0

A web services API update would not be complete without touching on the topic of authentication. Last year, I explained how the OAuth 2.0 module would be another logical addition to Drupal core.

Since then, the OAuth 2.0 module was revised to exclude its own OAuth 2.0 implementation, and to adopt The PHP League's OAuth 2.0 Server instead. That implementation is widely used, with over 5 million installs. Instead of having a separate Drupal-specific implementation that we have to maintain, we can leverage a de facto standard implementation maintained by others.

API-first ecosystem

While I've personally been most focused on the REST API and JSON API work, with GraphQL a close second, it's also encouraging to see that many other API-first modules are being developed:

  • OpenAPI, for standards-based API documentation, now at beta 1
  • JSON API Extras, for shaping JSON API to your site's specific needs (aliasing fields, removing fields, etc)
  • JSON-RPC, for help with executing common Drupal site administration actions, for example clearing the cache
  • … and many more


Hopefully, you are as excited as I am for the upcoming release of Drupal 8.6 and all of the web service improvements that it will bring. I am very thankful for all of the contributions that have been made in our continued efforts to make Drupal API-first, and for the incredible momentum these projects and initiatives have achieved.

Special thanks to Wim Leers (Acquia) and Gabe Sullice (Acquia) for contributions to this blog post and to Mark Winberry (Acquia) and Jeff Beeman (Acquia) for their feedback during the writing process.

July 17, 2018

I published the following diary on “Searching for Geographically Improbable Login Attempts“:

For the human brain, an IP address is not the best IOC because, like phone numbers, we are bad at remembering them. That’s why DNS was created. But many log management applications have features to enrich collected data. One of the possible enrichments for IP addresses is geolocation. Using databases, it is possible to map an IP address to a country and/or a city. This information is available in our DShield IP reputation database… [Read more]
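
To make the idea behind the diary concrete, here is a minimal sketch of how geographically improbable logins can be flagged once IP addresses have been enriched with coordinates. This is not the diary's actual code; the function names and the 1000 km/h threshold are illustrative assumptions.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def improbable_travel(login_a, login_b, max_speed_kmh=1000.0):
    """Flag two logins for the same account if the implied travel speed
    between their geolocated IPs exceeds what an airliner could manage.
    Each login is (epoch_seconds, latitude, longitude)."""
    (t1, lat1, lon1), (t2, lat2, lon2) = sorted([login_a, login_b])
    hours = max((t2 - t1) / 3600.0, 1e-6)  # guard against division by zero
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_speed_kmh

# Brussels at 09:00 UTC, then Sydney twenty minutes later: clearly improbable.
print(improbable_travel((1531040400, 50.85, 4.35), (1531041600, -33.87, 151.21)))  # → True
```

The same check with the second login in Paris a few hours later returns False, since the implied speed is easily achievable.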

[The post [SANS ISC] Searching for Geographically Improbable Login Attempts has been first published on /dev/random]

July 13, 2018

I published the following diary on “Cryptominer Delivered Though Compromized JavaScript File“:

Yesterday I found an interesting compromised JavaScript file that contains extra code to perform crypto mining activities. It started with a customer’s IDS alerts on the following URL:


This website is not referenced as malicious and the domain looks clean. When you point your browser to the site, it loads the JavaScript file. So, I performed some investigations on this URL. jquery.prettyphoto.js is a file from the package pretty photo[1] but the one hosted on safeyourhealth[.]ru was modified… [Read more]

[The post [SANS ISC] Cryptominer Delivered Though Compromized JavaScript File has been first published on /dev/random]

I’m using OSSEC to feed an instance of TheHive in order to investigate the security incidents that OSSEC reports. To better categorize the alerts and merge similar events, I needed to add more observables. OSSEC alerts are delivered by email with information that is interesting for TheHive, so this was a good use case to play with custom observables.

So, I added a new feature that lets you define your own custom observables. For OSSEC, I created the following ones:

  • ossec_rule (The rule ID)
  • ossec_asset (The asset – OSSEC agent)
  • ossec_level (The alert level, 0-10)
  • ossec_message (The alert description)

You can define those custom observables via a new section in the configuration file:

ossec_asset: Received From: \((\w+)\)\s
ossec_level: Rule: \w+ fired \(level (\d+)\)\s-
ossec_message: Rule: \w+ fired \(level \d+\)\s-> "(.*)"
ossec_rule: Rule: (\d+) fired \(level
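
As a quick illustration of how these patterns behave, here is a small Python sketch that applies the four regexes to an alert body. This is my own example, not the imap2thehive code itself, and the sample alert text is made up (shaped like a typical OSSEC email alert).

```python
import re

# The four custom-observable regexes from the configuration section above.
OBSERVABLE_PATTERNS = {
    "ossec_asset": r"Received From: \((\w+)\)\s",
    "ossec_level": r"Rule: \w+ fired \(level (\d+)\)\s-",
    "ossec_message": r'Rule: \w+ fired \(level \d+\)\s-> "(.*)"',
    "ossec_rule": r"Rule: (\d+) fired \(level",
}

def extract_observables(alert_body):
    """Return {observable_name: first captured group} for each matching pattern."""
    found = {}
    for name, pattern in OBSERVABLE_PATTERNS.items():
        match = re.search(pattern, alert_body)
        if match:
            found[name] = match.group(1)
    return found

alert = ('Received From: (webserver01) 192.168.1.10->/var/log/auth.log\n'
         'Rule: 5710 fired (level 5) -> "sshd: Attempt to login using a non-existent user"\n')

print(extract_observables(alert))
```

Each capture group becomes the value of the corresponding custom observable that is then attached to the case in TheHive.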

Here is an example of alerts received in TheHive:

OSSEC Observables

Now that you have these new observables, you can also build your own dashboards to gain more visibility:

OSSEC Dashboard

The updated script is available here.

[The post Imap2TheHive: Support for Custom Observables has been first published on /dev/random]

July 12, 2018

If you've ever watched a Drupal Camp video to learn a new Drupal skill, technique or hack, you most likely have Kevin Thull to thank. To date, Kevin has traveled to more than 30 Drupal Camps, recorded more than 1,000 presentations, and shared them all on YouTube for thousands of people to watch. By recording and posting hundreds of Drupal Camp presentations online, Kevin has spread knowledge, awareness and a broader understanding of the Drupal project.

I recently attended a conference in Chicago, Kevin's hometown. I had the chance to meet with him, and to learn more about the evolution of his Drupal contributions. I was struck by his story, and decided to write it up on my blog, as I believe it could inspire others around the world.

Kevin began recording sessions during the first community events he helped organize: DrupalCamp Fox Valley in 2013 and MidCamp in 2014. At first, recording and publishing Drupal Camp sessions was an arduous process; Kevin had to oversee dozens of laptops, converters, splitters, camcorders, and trips to Fedex.

After these initial attempts, Kevin sought a different approach for recording sessions. He ended up developing a recording kit, which is a bundle of the equipment and technology needed to record a presentation. After researching various options, he discovered a lightweight, low cost and foolproof solution. Kevin continued to improve this process after he tweeted that if you sponsored his travel, he would record Drupal Camp sessions. It's no surprise that numerous camps took Kevin up on his offer. With more road experience, Kevin has consolidated the recording kits to include just a screen recorder, audio recorder and corresponding cables. With this approach, the kit records a compressed mp4 file that can be uploaded directly to YouTube. In fact, Kevin often finishes uploading all presentation videos to YouTube before the camp is over!

Kevin Thull's recording kit

This is one of Kevin Thull's recording kits, used to record hundreds of Drupal presentations around the world. Each kit runs at about $450 on Amazon.

Most recently, Kevin has been buying and building more recording kits thanks to financial contributions from various Drupal Camps. He has started to send recording kits and documentation around the world for local camp organizers to use. Not only has Kevin recorded hundreds of sessions himself, he is now sharing his expertise and teaching others how to record and share sessions.

What is exciting about Kevin's contribution is that it reinforces what originally attracted him to Drupal. Kevin ultimately chose to work with Drupal after watching online video tutorials and listening to podcasts created by the community. Today, a majority of people prefer to learn development through video tutorials. I can only imagine how many people have joined and started to contribute to Drupal after they have watched one of the many videos that Kevin has helped to publish.

Kevin's story is a great example of how everyone in the Drupal community has something to contribute, and how contributing back to the Drupal project is not exclusive to code.

This year, the Drupal community celebrated Kevin by honoring him with the 2018 Aaron Winborn Award. The Aaron Winborn Award is presented annually to an individual who demonstrates personal integrity, kindness, and above-and-beyond commitment to the Drupal community. It's named after long-time Drupal contributor Aaron Winborn, who lost his battle with amyotrophic lateral sclerosis (ALS) in early 2015. Congratulations Kevin, and thank you for your incredible contribution to the Drupal community!

July 11, 2018

Enough with the political posts!

How do you make libraries that are both API-versioned and libtool-versioned with qmake?

I started a project on github that will collect what I will call “doing it right” project structures for various build environments.

By “right” I mean that the library will have an API version in its library name, that the library will be libtoolized, and that a pkg-config .pc file gets installed for it.

I have in mind, for example, autotools, cmake, meson, qmake and plain make. First example that I have finished is one for qmake.

Let’s get started working on the qmake project file.

We get the PREFIX, MAJOR_VERSION, MINOR_VERSION and PATCH_VERSION from a project-wide include


We will use the standard lib template of qmake

TEMPLATE = lib

We need to set VERSION to a version for compile_libtool (in reality it should use what is called current, revision and age to form an API and ABI version number. In the actual example it’s explained in the comments, as this is too much for a small blog post).

VERSION = $${MAJOR_VERSION}.$${MINOR_VERSION}.$${PATCH_VERSION}

According to section 4.3 of Autotools Mythbuster, the target name should include the API version in the library’s name

TARGET = qmake-example-$${MAJOR_VERSION}"."$${MINOR_VERSION}

We will write a define in config.h for access to the version as a double quoted string

QMAKE_SUBSTITUTES += config.h.in

Our example happens to use QDebug, so we need QtCore here

QT = core

This is of course optional

CONFIG += c++14

We will be using libtool style libraries

CONFIG += compile_libtool
CONFIG += create_libtool

These will create a pkg-config .pc file for us

CONFIG += create_pc create_prl no_install_prl

Project sources

SOURCES = qmake-example.cpp

Project’s public and private headers

HEADERS = qmake-example.h

We will install the headers in a API specific include path

headers.path = $${PREFIX}/include/qmake-example-$${MAJOR_VERSION}"."$${MINOR_VERSION}

Here put only the publicly installed headers

headers.files = $${HEADERS}

Here we will install the library to

target.path = $${PREFIX}/lib

This is the configuration for generating the pkg-config file

QMAKE_PKGCONFIG_DESCRIPTION = An example that illustrates how to do it right with qmake
# This is our libdir
QMAKE_PKGCONFIG_LIBDIR = $$target.path
# This is where our API specific headers are
QMAKE_PKGCONFIG_INCDIR = $$headers.path
# These are dependencies that our library needs
QMAKE_PKGCONFIG_REQUIRES = Qt5Core

Installation targets (the pkg-config file seems to be installed automatically)

INSTALLS += headers target

This will be the result after make install

├── include
│   └── qmake-example-3.2
│       └── qmake-example.h
└── lib
    ├── ->
    ├── ->
    ├── ->
    └── pkgconfig
        └── qmake-example-3.pc

PS. Dear friends working at your own customers: when I visit your customer, I no longer want to see that you produced completely wrong qmake-based projects for them. Libtoolize it all, get an API version in your library’s so-name, and do distribute a pkg-config .pc file. That’s the very least to pass your exam. Also read this document (and stop pretending that you don’t need to know this while at the same time charging them real money and pretending that you know something about modern UNIX software development).

July 10, 2018

Quite a few people in the Drupal community are looking forward to seeing the JSON API module ship with Drupal 8 core.


  • they want to use it on their projects
  • the Admin UI & JS Modernization Initiative needs it
  • they want to see Drupal 8 ship with a more capable RESTful HTTP API
  • then Drupal will have a non-NIH (Not Invented Here) API but one that follows a widely used spec
  • it enables them to build progressively decoupled components

So where are things at?


Let’s start with a high-level timeline:

  1. The plan (intent) to move the JSON API module into Drupal core was approved by Drupal’s product managers and a framework manager 4 months ago, on March 19, 2018!
  2. A core patch was posted on March 29 (issue #2843147). My colleague Gabe and I had already been working full time for a few months at that point to make the JSON API modules more stable: several security releases, much test coverage and so on.
  3. Some reviews followed, but mostly the issue (#2843147) just sat there. Anybody was free to provide feedback. We encouraged people to review, test and criticize the JSON API contrib module. People did: another 1000 sites started using JSON API! Rather than commenting on the core issue, they filed issues against the JSON API contrib module!
  4. Since December 2017, Gabe and I have been working on it full time, and e0ipso whenever his day job and free time allowed. Thanks to the test coverage Gabe and I had been adding, bugs were being fixed much faster than new ones were reported, and more often than not we found (long-existing) bugs before they were reported.
  5. Then, a week and a half ago, on June 28, we released JSON API 1.22, the final JSON API 1.x release. That same day, we branched the 2.x version. More about that below.
  6. The next day, on June 29, an updated core patch was posted. All feedback had been addressed!

June 29

I wrote in my comment:

Time to get this going again. Since #55, here’s what happened:

  1. Latest release at #55: JSON API 1.14
  2. Latest release today: JSON API 1.22
  3. 69 commits: ($ git log --oneline --since "March 30 2018 14:21 CET" | wc -l)
  4. Comprehensive test coverage completed (#2953318: Comprehensive JSON API integration test coverage phase 4: collections, filtering and sorting + #2953321: Comprehensive JSON API integration test coverage phase 5: nested includes and sparse field sets + #2972808: Comprehensive JSON API integration test coverage phase 6: POST/PATCH/DELETE of relationships)
  5. Getting the test coverage to that point revealed some security vulnerabilities (1.16), and many before it (1.14, 1.10 …)
  6. Ported many of the core REST improvements in the past 1.5 years to JSON API (1.15)
  7. Many, many, many bugfixes, and much, much clean-up for future maintainability (1.16, 1.17, 1.18, 1.19, 1.20, 1.21, 1.22)

That’s a lot, isn’t it? :)

But there’s more! All of the above happened on the 8.x-1.x branch. As described in #2952293: Branch next major: version 2, requiring Drupal core >=8.5 (and mentioned in #61), we have many reasons to start a 8.x-2.x branch. (That branch was created months ago, but we kept them identical for months.)
Why wait so long? Because we wanted all >6000 JSON API users to be able to gently migrate from JSON API 1.x (on Drupal <=8.5) to JSON API 2.x (on Drupal >=8.5). And what better way to do that than to write comprehensive test coverage and fix all known problems that surfaced? That’s what we’ve been doing the past few months! This massively reduces the risk of adding JSON API to Drupal core. We outlined a plan of must-have issues before going into Drupal core: #2931785: The path for JSON API to core — and they’re all DONE as of today! Dozens of bugs have been flushed out and fixed before they ever entered core. Important: in the past 6–8 weeks we’ve noticed a steep drop in the number of bug reports and support requests that have been filed against the JSON API module!

After having been tasked with maturing core’s REST API, finding the less-than-great state it was in when Drupal 8 shipped, and experiencing how hard it is to improve it or even just fix bugs, this was a hard requirement for me. I hope it gives core committers the same feeling of relief it gives me to see that JSON API will be in much better shape on day one.

The other reason why it’s in much better shape is that the JSON API module now has no API surface other than the HTTP API! No PHP API at all (its sole API was dropped in the 2.x branch: #2982210: Move EntityToJsonApi service to JSON API Extras), only the HTTP API as specified by the JSON API specification.

TL;DR: JSON API in contrib today is more stable, more reliable, more feature-rich than core’s REST API. And it does so while strongly complying with the JSON API spec: it’s far less of a Drupalism than core’s REST API.

So, with pride, and with lots of sweat (no blood and no tears fortunately), @gabesullice, @e0ipso and I present you this massively improved core patch!

EDIT: P.S.: 668K bytes of the 1.0M of bytes that this patch contains are for test coverage. That’s 2/3rds!

To which e0ipso replied:

So, with pride, and with lots of sweat (no blood and no tears fortunately), @gabesullice, @e0ipso and I present you this massively improved core patch!
So much pride! This was a long journey, that I walked (almost) alone for a couple of years. Then @Wim Leers and @gabesullice joined and carried this to the finish line. Such a beautiful collaboration!


July 9

Then, about 12 hours ago, core release manager xjm and core framework manager effulgentsia posted a comment:

(@effulgentsia and @xjm co-authored this comment.) It’s really awesome to see the progress here on JSON API! @xjm and @effulgentsia discussed this with other core committers (@webchick, @Dries, @larowlan, @catch) and with the JSON API module maintainers. Based on what we learned in these discussions, we’ve decided to target this issue for an early feature in 8.7 rather than 8.6. Therefore, we will set it to 8.7 in a few days when we branch 8.7. Reviews and comments are still welcome in the meantime, whether in this issue, or as individual issues in the jsonapi issue queue. Feel free to stop reading this comment here, or continue reading if you want to know why it’s being bumped to 8.7. First, we want to give a huge applause for everything that everyone working on the jsonapi contrib module has done. In the last 3-4 months alone (since 8.5.0 was released and #44 was written):
  • Over 100 issues in the contrib project have been closed.
  • There are currently only 36 open issues, only 7 of which are bug reports.
  • Per #62, the remaining bug fixes require breaking backwards compatibility for users of the 1.x module, so a final 1.x release has been released, and new features and BC-breaking bug fixes are now happening in the 2.x branch.
  • Also per #62, an amazing amount of test coverage has been written and correspondingly there’s been a drop in new bug reports and support requests getting filed.
  • The module is now extremely well-documented, both in the API documentation and in the handbook.
Given all of the above, why not commit #70 to core now, prior to 8.6 alpha? Well,
  1. We generally prefer to commit significant new core features early in the release cycle for the minor, rather than toward the end. This means that this month and the next couple are the best time to commit 8.7.x features.
  2. To minimize the disruption to contrib, API consumers, and sites of moving a stable module from core to contrib, we’d like to have it as a stable module in 8.7.0, rather than an experimental module in 8.6.0.
  3. Per above, we’re not yet done breaking BC. The mentioned spec compliance issues still need more work.
  4. While we’re still potentially evolving the API, it’s helpful to continue having the module in contrib for faster iteration and feedback.
  5. Since the 2.x branch of JSON API was just branched, there are virtually no sites using it yet (only 23 as compared with the 6000 using 1.x). An alpha release of JSON API 2.x once we’re ready will give us some quick real-world testing of the final API that we’re targeting for core.
  6. As @lauriii pointed out, an additional advantage of allowing a bit more time for API changes is that it allows more time for the JavaScript Modernization Initiative, which depends on JSON API, to help validate that JSON API includes everything we need to have a fully decoupled admin frontend within Drupal core itself. (We wouldn’t block the module addition on the other initiative, but it’s an added bonus given the other reasons to target 8.7.)
  7. While the module has reached maturity in contrib, we still need the final reviews and signoffs for the core patch. Given the quality of the contrib module this should go well, but it is a 1 MB patch (with 668K of tests, but that still means 300K+ of code to review.) :) We want to give our review of this code the attention it deserves.
None of the above aside from the last point are hard blockers to adding an experimental module to core. Users who prefer the stability of the 1.x module could continue to use it from contrib, thereby overriding the one in core. However, in the case of jsonapi, I think there’s something odd about telling site builders to experiment with the one in core, but if they want to use it in production, to downgrade to the one in contrib. I think that people who are actually interested in using jsonapi on their sites would be better off going to the contrib project page and making an explicit 1.x or 2.x decision from there. Meanwhile, we see what issues, if any, people run into when upgrading from 1.x to 2.x. When we’re ready to commit it to core, we’ll consider it at least beta stability (rather than alpha). Once again, really fantastic work here.


So there you have it. JSON API will not be shipping in Drupal 8.6 this fall.
The primary reason being that it’s preferred for significant new core features to land early in the release cycle, especially ones shipping as stable from the start. This also gives the Admin UI & JS Modernization Initiative more time to actually exercise many parts of JSON API’s capabilities, and in doing so validate that it’s sufficiently capable to power it.

For us as JSON API module maintainers, it keeps things easier for a little while longer: once it’s in core, it’ll be harder to iterate: more process, slower test runs, and commits can only be made by core committers rather than by the JSON API maintainers. Ideally, we’d commit JSON API to Drupal core with zero remaining bugs and tasks, with only feature requests left. Good news: we’re almost there already, as most open issues are feature requests!

For you as JSON API users, not much changes. Just keep using JSON API. The 2.x branch introduced some breaking changes to better comply with the JSON API spec, and also received a few small new features. But we worked hard to make sure that the disruption is minimal (example 1 2 3).1
Use it, try to break it, report bugs. I’m confident you’ll have to try hard to find bugs … and yes, that’s a challenge to y’all!

  1. If you want to stay on 1.x, you can — and it’s rock solid thanks to the test coverage we added. That’s the reason we waited so long to work on the 2.x branch: because we wanted the thousands of JSON API sites to be in the best state possible, not be left behind. Additionally, the comprehensive test coverage we added in 1.x guarantees we’re aware of even subtle BC breaks in 2.x! ↩︎

July 09, 2018

TheHive is an awesome tool to perform incident management. One of the software components linked to TheHive is Cortex, defined as a “powerful observable analysis engine“. Let me explain why Cortex can save you a lot of time. When you are working on an incident in TheHive, observables are linked to it. An observable is an IP address, a hash, a domain, a filename, … (note: it is not an IOC, yet!). Let’s say you have an incident involving 10 IP addresses. It could be quite time-consuming (read: “boring”) to search for each IP address in reputation databases or websites like VirusTotal. Cortex is made for this purpose. It relies on small modules called “analyzers” that query a specific service for information about your observables, parse the returned data and pass it to TheHive. There are already plenty of analyzers available today for most of the well-known online services (the complete list is available here) and, regularly, people submit new analyzers for specific online resources. Being a SANS ISC Handler, one of my favourite IP reputation databases is, of course, DShield, which has its own API. Surprisingly, there was no analyzer available for DShield. So, I wrote mine!

The analyzer is provided to work with IP addresses:
DShield Analyzer Status

When you click on a DShield taxonomy, you get the details about this IP address:
DShield Report

To install the analyzer, copy the files from my Github repo into your $CORTEX_ANALYZERS_PATH/analyzers/ directory and restart your Cortex instance. The analyzer will be listed and can be enabled (no further configuration is required). Enjoy!

(Note: I’ll submit a pull-request to the official repository)

[The post DShield Analyzer for Cortex has been first published on /dev/random]

July 08, 2018

I said it before, we shouldn’t finance the US’s war-industry any longer. It’s not a reliable partner.

I’m sticking to my guns on this one,

Let’s build ourselves a European army, utilizing European technology. Built, engineered and manufactured by Europeans.

We engineers are ready. Let us do it.

July 04, 2018

Day three started quietly (let’s call this the post-social-event effect) with a set of presentations around Blue Team activities. Alexandre Dulaunoy from CIRCL presented “Fail frequently to avoid disaster”, or how to organically build an open threat intelligence sharing standard to keep the intelligence community free and sane! He started with a nice quote: “There was never a plan. There was just a series of mistakes”. After a brief introduction to MISP, Alex came back to the history of the project and explained some mistakes they made. The philosophy is not to wait for a perfect implementation from the beginning but to start small and extend later. Standardisation is required when your tool is growing, but do not make the mistake of defining your own new standard: use the ones that already exist. For example, MISP is able to export data in multiple open formats (CSV, XML, Bro, Suricata, Sigma, etc). Another issue was the way people use tags (the great failure of free-text tagging). They tend to be very creative when they have a playground. The perfect example is how TLP levels are written (TLP:Red, TLP-RED, TLP:RED, …). Taxonomies solved this creativity issue. MISP is designed with an object-template format which helps organisations exchange the specific information they want. Finally, be happy to get complaints about your software: it means that it’s being used!
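The free-text tagging problem is easy to picture in code. The sketch below is purely illustrative (it is not MISP code): it normalizes the creative free-text spellings of TLP levels onto the machine-tag form (namespace:predicate, e.g. tlp:red) that MISP taxonomies enforce.

```python
import re

# The four classic TLP levels from the taxonomy.
TLP_LEVELS = {"white", "green", "amber", "red"}

def normalize_tlp(tag):
    """Map free-text spellings like 'TLP:Red', 'TLP-RED' or 'tlp red'
    onto the canonical machine tag 'tlp:<level>', or None if unknown."""
    m = re.match(r"^\s*tlp[\s:_-]+([a-z]+)\s*$", tag.lower())
    if m and m.group(1) in TLP_LEVELS:
        return "tlp:" + m.group(1)
    return None  # not a recognisable TLP tag
```

A fixed vocabulary like this is exactly what a taxonomy gives you for free: every variant collapses to one canonical tag, so searching and filtering stay reliable.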
The next slot was assigned to Thomas Chopitea from Google who presented FOSS tools to automate your DFIR process. As you can imagine, Google is facing many incidents to be investigated and their philosophy is to write tools for their own usage (first of all) but also to share them. As they use the tools they are developing, it means they know them and improve them. The following tools were reviewed:
  • GRR
  • Plaso
  • TimeSketch
  • dfTimeWolf
  • Turbinia
To demonstrate how they work, Thomas prepared his demos with a targeted attack scenario based on typo-squatting. All tools were used one by one, then the investigation was performed via dfTimeWolf, which is the “glue” between all the tools. Turbinia is less known: it automates forensic analysis tools in the cloud. Note that it is not restricted to the Google cloud. It was an excellent presentation. Have a look at it if you’re in the process of building your own DFIR toolbox.
After a short coffee break, a set of sessions related to secure programming started. The first one was about Landlock by Mickaël Salaün from ANSSI. Landlock is a stackable Linux Security Module (LSM) that makes it possible to create security sandboxes. After a short demo of its capabilities, the solution was compared to other ones (SELinux, seccomp-bpf, namespaces). Only Landlock offers all the desired features: fine-grained control, embedded policy and non-privileged use. Then Mickaël dived into the code and explained how the module works. The idea is to have user-space hardening:
  • access control
  • designed for unprivileged use
  • apply tailored access controls per process
  • make it evolve over time

This is ongoing research that is not yet completely implemented, but it’s already possible to install and play with it. It looks promising. Then, Pierre Chifflier presented “Security, Performance, which one?”.

The last presentation about secure programming was “Immutable infrastructure and zero trust networking: designing your system for resilience” by Geoffroy Couprie. Here is the scenario used by Geoffroy: you just got pwned, your WordPress instance was compromised. Who accessed the server? Was it updated? Traditional operations rely on long-lived servers (sysadmins like big uptimes). Is it safe to reinstall the same server? There are techniques to make the server reinstall reproducible (Puppet, Ansible, Chef, …).
The idea presented by Geoffroy: why not reinstall from scratch on every update, with an immutable infrastructure (do not modify a running server directly)? The image creation process is based on Exherbo: they remove unwanted software and build a statically-linked kernel. The resulting image is simple, safe, and it boots in 7 seconds. Images are then deployed via BitTorrent to hypervisors.
Machines are moving, so how do you reach them? Via a home-made load balancer called “sozu”, which can be reconfigured live. A very interesting approach!
After the lunch, the topic switched to the security of IoT devices. Sébastien Tricaud presented some tests he performed via honeypots mimicking IoT devices. After a brief introduction to the (many) issues introduced by IoT devices, he explained how he deployed some honeypots and shared the results. The first example is called GasPot. The second one is Conpot, which simulates a Siemens PLC or a Guardian AST device. Interesting fact: Nmap has a script to scan such devices:
nmap --script atg-info -p 10001 <host>
Sébastien ran a honeypot for only 3 months and got 5 unique IP addresses. The second test was to accept many more kinds of connections (S7, Modbus or IPMI). In this case, he got many more hits, the first one after only three hours. The question is: are those IP addresses real attackers, bots (Shodan?) or other security researchers?
Rayna Stamboliyska was the next speaker and she presented “Io(M)T Security: A year in review”. Rayna focussed on connected sex toys but respected the code of conduct defined during the conference: no offensive content, just facts. Like any other “smart” device, they suffer from multiple vulnerabilities. And don’t think that it’s a niche market, there is a real business for connected sex toys. Rayna also presented her project called PiRanhalysis. It’s a suite of tools running on a Raspberry Pi that helps to collect traffic generated by IoT devices.
  • PiRogue collects all the traces
  • PiRahna automates install and capture
  • PiPrecious is the platform to store and version them
The last IoT slot was assigned to Aseem Jakhar, who presented his pentesting framework called “Expl-IoT”. It was interesting, but Aseem started by complaining about the huge number of frameworks available and then… presented his own!? Why not contribute to an existing one or just write Metasploit modules?
The last sessions were oriented to red teaming / pentesting. Ivan Kwiatkowski presented “Freedom Fighting Mode – Open Source Hacking Harness”, already presented at SSTIC a few weeks ago. Then, Antoine Cervoise presented some cool attack scenarios based on open source hardware like Teensy devices or Raspberry Pi computers. Niklas Abel presented his research on ShadowSocks, a secure SOCKS5 proxy which is… not so secure! He explained some vulnerabilities found in the tool and, last but not least, Jérémy Mousset explained how he compromised a Glassfish server via its admin interface.
The PST18 Crew
This closes the first edition of Pass-The-Salt. It seems that a second edition is already on its way, at the same location! The event ran smoothly in a very relaxed atmosphere. Put it on your agenda for next year: the event is free (important to remember), yet the quality of the talks is high!

[The post Pass-The-Salt 2018 Wrap-Up Day #3 has been first published on /dev/random]

July 03, 2018

When you have a look at the schedule of infosec conferences, the number of events is already very high: there is at least one every week around the world. So, when a new one is born and turns out to be nice, it must be mentioned. “Pass-The-Salt” (SALT means “Security And Libre Talks“) is a fork of the security track of the RMLL. For different reasons, the team behind the security track decided to jump out of the RMLL organization and create their own independent event. What a challenge: find a free time slot, find a location, organize a call for papers, find sponsors (because the event is free for attendees). They released 200 tickets that were sold out in 5 days. Not bad for a first edition, congratulations to them! The event is split across three days. It started yesterday with some workshops and talks in the afternoon. Due to a very busy agenda, I was only able to reach Lille (in the north of France) yesterday evening. So, no, it’s not a typo: there is no wrap-up of the first day!

I joined the conference venue on a sunny morning to attend some talks. After a quick registration and some coffee refills, let’s listen to the speakers! A good idea was to group talks by topics (network, web security, reverse, etc). This way, if you have less interest in a specific topic, you can easily attend a workshop instead. The day started with talks related to network security. The first speaker was Francois Serman, who works for the OVH anti-DDoS team. He explained in great detail how to filter packets in an efficient way on Linux systems. Indeed, the traffic to be inspected keeps growing and can quickly become a bottleneck. Just for the story, OVH was targeted by a 1.3Tb/s DDoS a few months ago. Francois started by reviewing the classic BPF filter that is used by tools like tcpdump or Wireshark. He explained with a lot of examples how packets are inspected and decisions are made to drop/allow them. Then, he switched to eBPF (extended BPF). The issue remains almost the same because, even if iptables is powerful, it is implemented too late in the stack. Why not filter packets sooner? To achieve this, Francois presented “XDP”, or eXpress Data Path.
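To illustrate the idea (and only the idea: real XDP programs are eBPF bytecode verified and executed inside the kernel, not Python), here is a toy re-implementation of the drop/pass decision an XDP program makes on the raw Ethernet frame, before the network stack does any further work. The blocklisted source address is a made-up example.

```python
import struct

XDP_DROP, XDP_PASS = 1, 2          # symbolic verdicts, mirroring XDP's actions
BLOCKLIST = {"198.51.100.7"}       # example source address (TEST-NET-2 range)

def xdp_filter(frame):
    """Decide the fate of a raw Ethernet frame from its first bytes only."""
    if len(frame) < 34:            # Ethernet header (14) + minimal IPv4 (20)
        return XDP_DROP
    (ethertype,) = struct.unpack_from("!H", frame, 12)
    if ethertype != 0x0800:        # not IPv4: let the normal stack handle it
        return XDP_PASS
    src = ".".join(str(b) for b in frame[26:30])  # IPv4 source address field
    return XDP_DROP if src in BLOCKLIST else XDP_PASS
```

The point of XDP is that this decision runs at the earliest possible point, in the driver, so a dropped packet never costs the kernel a socket-buffer allocation or an iptables traversal.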

The next talk was on the same topic, with Eric Leblond from the Suricata project. He explained why packet loss is a real pain for IDS systems: just one lost packet might lead to undetected suspicious traffic. A common problem is the “elephant flow” problem: a very big flow, like a video stream. When we face a ring buffer overrun, we lose data. He explained how to implement bypass capabilities.

After the morning break, the keynote speaker was Pablo Neira Ayuso. He presented a talk named “A 10 years journey in Linux firewall“. Pablo is a core developer of Netfilter which is, as he explained very well, not only the well-known iptables module. He reviewed the classic iptables tool, then switched to the new nftables, which is much more powerful! Very interesting keynote!

The next slot was assigned to me. I presented my solution to perform full packet capture based on Moloch & Docker containers. Just after, there was a session of lightning talks (~10 presentations of 4 minutes each).

After the lunch break, the topic switched to web security. The first speaker was Stefan Eissing, who presented “Security and Self-Driving Computers“. The title was strange, but the talk was about mod_md, which implements Let’s Encrypt certificate support directly in Apache. Then, Julien Voisin, Thibault Koechlin and Simon Magnin-Feysot presented their project called Snuffleupagus (I already saw this talk in 2017). Due to a last-minute change, Sébastien Larinier presented his work on how to clusterize malware datasets with open source tools and machine learning.

The last part of the day was dedicated to “IAM”: Clément Oudot & Xavier Guimard presented how to integrate second-factor authentication in LemonLDAP::NG. Then Fraser Tweedale from RedHat presented “No way JOSE! Lessons for authors and implementers of open standards” and finally, Florence Blanc-Renaud closed the day with some tips to better protect your passwords and how to implement 2FA with RedHat tools.

The day ended with the social event in the center of Lille, followed by a dinner with friends. See you tomorrow for the third day.

[The post Pass-The-Salt 2018 Wrap-Up Day #2 has been first published on /dev/random]

During my DrupalCon Nashville keynote, I shared a brief video of Mike Lamb, the Senior Director of Architecture, Engineering & Development at Pfizer. Today, I wanted to share an extended version of my interview with Mike, where he explains why the development team at Pfizer has ingrained Open Source contribution into the way they work.

Mike had some really interesting and important things to share, including:

  1. Why Pfizer has chosen to standardize all of its sites on Drupal (from 0:00 to 03:19). Proprietary software isn't a match.
  2. Why Pfizer only works with agencies and vendors that contribute back to Drupal (from 03:19 to 06:25). Yes, you read that correctly; Pfizer requires that its agency partners contribute to Open Source!
  3. Why Pfizer doesn't fork Drupal modules (from 06:25 to 07:27). It's all about security.
  4. Why Pfizer decided to contribute to the Drupal 8's Workflow Initiative, and what they have learned from working with the Drupal community (from 07:27 to 10:06).
  5. How to convince a large organization (like Pfizer) to contribute back to Drupal (from 10:06 to 12:07).

Between Pfizer's direct contributions to Drupal (e.g. the Drupal 8 Workflow Initiative) and the mandate for its agency partners to contribute code back to Drupal, Pfizer's impact on the Drupal community is invaluable; it's measured in the millions of dollars per year. Just imagine what would happen to Drupal if ten other large organizations adopted Pfizer's contribution model!

Most organizations use Open Source, and don't think twice about it. However, we're starting to see more and more organizations not just use Open Source, but actively contribute to it. Open source offers organizations a completely different way of working, and fosters an innovation model that is not possible with proprietary solutions. Pfizer is a leading example of how organizations are starting to challenge the prevailing model and benefit from contributing to Open Source. Thanks for changing the status quo, Mike!

July 01, 2018

Two weeks ago, I stumbled upon a two-part blog post by Alex Russell, titled Effective Standards Work.

The first part (The Lay Of The Land) sets the stage. The second part (Threading the Needle) attempts to draw conclusions.

It’s worth reading if you’re interested in how Drupal is developed, or in how any consensus-driven open source project works (rather than the increasingly common “controlled by a single corporate entity” “open source”).

It’s written with empathy, modesty and honesty. It shows the struggle of somebody given the task and opportunity to help shape/improve the developer experience of many, but not necessarily the resources to make it happen. I’m grateful he posted it, because something like this is not easy to write nor publish — which he also says himself:

I’ve been drafting and re-drafting versions of this post for almost 4 years. In that time I’ve promised a dozen or more people that I had a post in process that talked about these issues, but for some of the reasons I cited at the beginning, it has never seemed a good time to hit “Publish”. To those folks, my apologies for the delay.


I hope you’ll find the incredibly many parallels with the open source Drupal ecosystem as fascinating as I did!

Below, I’ve picked out some of the most interesting statements and replaced only a few terms, and tadaaa! — it’s accurately describing observations in the Drupal world!

Go read those two blog posts first before reading my observations though! You’ll find some that I didn’t. Then come back here and see which ones I see, having been a Drupal contributor for >11 years and a paid full-time Drupal core contributor for >6.

Standards Theory

~~Design~~ A new Drupal contrib module is the process of trying to address a problem with a new feature. ~~Standardisation~~ Moving a contributed module into Drupal core is the process of documenting consensus.

The process of ~~feature design~~ Drupal contrib module development is a messy, exciting exploration embarked upon from a place of trust and hope. It requires folks who have problems (~~web developers~~ site builders) and the people who can solve them (~~browser engineers~~ Drupal core/contrib developers) to have wide-ranging conversations.

The Forces at Play

~~Feature~~ Drupal module design starts by exploring problems without knowing the answers, whereas participation in ~~Working Groups~~ Drupal core initiatives entails sifting a set of proposed solutions and integrating ~~the best proposals~~ competing Drupal modules. Late-stage iteration can happen there, but every change made without ~~developer~~ site builder feedback is dangerous — and ~~Working Groups~~ Drupal core initiatives aren’t set up to collect or prioritise it.

A sure way for a ~~browser engineer~~ Drupal core/contrib developer to attract kudos is to make existing ~~content~~ Drupal sites work better, thereby directly improving things for ~~users~~ site builders who choose your ~~browser~~ Drupal module.

Essential Ingredients

  • Participation by ~~web developers~~ site builders and ~~browser engineers~~ Drupal core/contrib developers: Nothing good happens without both groups at the table.
  • A venue outside a ~~chartered Working Group~~ Drupal core in which to design and iterate: Pre-determined outcomes rarely yield new insights and approaches. Long-term relationships of ~~WG participants~~ Drupal core developers can also be toxic to new ideas. Nobody takes their first tap-dancing lessons under Broadway’s big lights. Start small and nimble, build from there.
  • A path towards eventual ~~standardisation~~ stability & maintainability: Care must be taken to ensure that ~~IP obligations~~ API & data model stability can be met in the future, even if the loose, early group isn’t concerned with a strict ~~IP policy~~ update path.
  • Face-to-face deliberation: I’ve never witnessed early design work go well without in-person collaboration. At a minimum, it bootstraps the human relationships necessary to jointly explore alternatives.

    If you’ve never been to a functioning ~~standards~~ Drupal core meeting, it’s easy to imagine languid intellectual salons wherein brilliant ideas spring forth unbidden and perfect consensus is forged in a blinding flash. Nothing could be further from the real experience. Instead, the time available to cover updates and get into nuances of proposed changes can easily eat all of the scheduled time. And this is expensive time! Even when participants don’t have to travel to meet, high-profile ~~groups~~ Drupal core contributors are comically busy. Recall that the most in-demand members of the ~~group~~ Drupal core initiative (~~chairs~~ Drupal core initiative coordinators, engineers from the most consequential ~~firms~~ Drupal agencies) are doing this as a part-time commitment. Standards work is time away from the day-job, so making the time and expense count matters.

Design → Iterate → Ship & Standardise

What I’ve learned over the past decade trying to evolve the web platform is a frustratingly short list, given the amount of pain involved in extracting each insight:

  • Do early design work in small, invested groups
  • Design in the open, but away from the bright lights of the big stage
  • Iterate furiously early on because once it’s in ~~the web~~ Drupal core, it’s forever
  • Prioritize plausible interoperability; if an implementer says “that can’t work”, believe them!
  • Ship to a limited audience using experimental Drupal core modules as soon as possible to get feedback
  • Drive ~~standards~~ stabilization of experimental Drupal core modules with evidence and developer feedback from those iterations
  • Prioritise ~~interop~~ minimally viable APIs & evolvability over perfect ~~specs~~ APIs & data models; tests create ~~compatibility~~ stability as much or more than tight prose or perfect ~~IDL~~ APIs
  • Dot “i”s and cross “t”s; ~~chartered Working Groups~~ Drupal core initiatives and ~~wide review~~ many site builders trying experimental core modules are important ways to improve your design later in the game. These derive from our overriding goal: ship the right thing.

    So how can you shape the future of the platform as a ~~web developer~~ site builder?

The first thing to understand is that ~~browser engineers~~ Drupal core/contrib developers want to solve important problems, but they might not know which problems are worth their time. Making progress with ~~implementers~~ site builders is often a function of helping them understand the positive impact of solving a problem. They don’t feel it, so you may need to sell it!

Building this understanding is a social process. Available, objective evidence can be an important tool, but so are stories. Getting these in front of a sympathetic audience within a ~~browser~~ team of Drupal core committers or Drupal contrib module maintainers is perhaps harder.

It has gotten ever easier to stay engaged as ~~designs~~ experimental Drupal core modules iterate. After initial meetings, early designs are sketched up and frequently posted to GitHub issues where you can provide comments.

“Ship The Right Thing”

These relatively new opportunities for participation outside formal processes have been intentionally constructed to give developers and evidence a larger role in the design process.

There’s a meta-critique of formal standards processes in Drupal core and the defacto-exclusionary processes used to create them. This series didn’t deal in it deeply because doing so would require a long digression into the laws surrounding anti-trust and competition. Suffice to say, I have a deep personal interest in bringing more voices into developing the future of the web platform, and the changes to ~~Chrome’s~~ Drupal core’s approach to ~~standards~~ adding new modules discussed above have been made with an explicit eye towards broader diversity, inclusion, and a greater role for evidence.

I hope you enjoyed Alex’ blog posts as much as I did!

June 30, 2018

We're going on a two-week vacation in August! Believe it or not, I haven't taken a two-week vacation in 11 years. I'm super excited.

Now that our vacation is booked, I'm starting to make plans for how to spend our time. Other than spending time with family, going on hikes, and reading a book or two, I'd love to take some steps towards food photography. Why food photography?

The past couple of years, Vanessa and I have talked about making a cookbook. In our many travels around the world, we've eaten a lot of great food, and Vanessa has managed to replicate and perfect a few of these recipes: the salmon soup we ate in Finland when we went dog sledding, the hummus with charred cauliflower we had at DrupalCon New Orleans, or the tordelli lucchesi we ate on vacation in Tuscany.

Other than being her sous-chef (dishwasher, really), my job would be to capture the recipes with photos, figure out a way to publish them online (I know just the way), and eventually print the recipes in a physical book. Making a cookbook is a fun way to align our different hobbies; travel for both of us, cooking for her, photography for me, and of course enjoying the great food.

Based on the limited research I've done, food photography is all about lighting. I've been passionate about photography for a long time, but I haven't really dug into the use of light yet.

Our upcoming vacation seems like the perfect time to learn about lighting: read a book about it, and try different lighting techniques (front lighting, side lighting and back lighting, but also hard, soft and diffused light).

The next few weeks, I plan to pick up some new gear like a light diffuser, light modifiers, and maybe even an LED light. If you're into food photography, or into lighting more generally, don't hesitate to leave some tips and tricks in the comments.

June 28, 2018

Drupal is no longer the Drupal you used to know

Today, I gave a keynote presentation at the 10th annual Design 4 Drupal conference at MIT. I talked about the past, present and future of JavaScript, and how this evolution reinforces Drupal's commitment to be API-first, not API-only. I also included behind-the-scenes insights into the Drupal community's administration UI and JavaScript modernization initiative, and why this approach presents an exciting future for JavaScript in Drupal.

If you are interested in viewing my keynote, you can download a copy of my slides (256 MB).

Thank you to Design 4 Drupal for having me and happy 10th anniversary!

June 26, 2018

The Drupal community has done an amazing job organizing thousands of developers around the world. We've built collaboration tools and engineering processes to streamline how our community of developers work together to collectively build Drupal. This collaboration has led to amazing results. Today, more than 1 in 40 of the top one million websites use Drupal. It's inspiring to see how many organizations depend on Drupal to deliver their missions.

What is equally incredible is that historically, we haven't collaborated around the marketing of Drupal. Different organizations have marketed Drupal in their own way without central coordination or collaboration.

In my DrupalCon Nashville keynote, I shared that it's time to make a serious and focused effort to amplify Drupal success stories in the marketplace. Imagine what could happen if we enabled hundreds of marketers to collaborate on the promotion of Drupal, much like we have enabled thousands of developers to collaborate on the development of Drupal.

Accelerating Drupal adoption with business decision makers

To focus Drupal's marketing efforts, we launched the Promote Drupal Initiative. The goal of the Promote Drupal Initiative is to do what we do best: to work together to collectively grow Drupal. In this case, we want to collaborate to raise awareness with business and non-technical decision makers. We need to hone Drupal's strategic messaging, amplify success stories and public relations resources in the marketplace, provide agencies and community groups with sales and marketing tools, and improve the evaluator experience.

To make Promote Drupal sustainable, Rebecca Pilcher, Director of MarComm at the Drupal Association, will be leading the initiative. Rebecca will oversee volunteers with marketing and business skills that can help move these efforts forward.

Promote Drupal Fund: 75% to goal

At DrupalCon Nashville, we set a goal of fundraising $100,000 to support the Promote Drupal Initiative. These funds will help to secure staffing to backfill Rebecca's previous work (someone has to market DrupalCon!), produce critical marketing resources, and sponsor marketing sprints. The faster we reach this goal, the faster we can get to work.

I'm excited to announce that we have already reached 75% of our goal, thanks to many generous organizations and individuals around the world. I wanted to extend a big thank you to the following companies for contributing $1,000 or more to the Promote Drupal Initiative:

Thanks to many financial contributions, the Promote Drupal Initiative hit its $75k milestone!

If you can, please help us reach our total goal of $100,000! By raising a final $25,000, we can build a program that will introduce Drupal to an emerging audience of business decision makers. Together, we can make a big impact on Drupal.

June 21, 2018

I published the following diary on “Are Your Hunting Rules Still Working?“:

You are working in an organization which implemented good security practices: log events are collected then indexed by a nice powerful tool. The next step is usually to enrich this (huge) amount of data with external sources. You collect IOC’s, you get feeds from OSINT. Good! You start to create many reports and rules to be notified when something weird is happening. Everybody agrees on the fact that receiving too many alerts is bad and people won’t pay attention to them if they are constantly flooded… [Read more]

[The post [SANS ISC] Are Your Hunting Rules Still Working? has been first published on /dev/random]

June 19, 2018

For the past two years, I've published the Who sponsors Drupal development report. The primary goal of the report is to share contribution data to encourage more individuals and organizations to contribute code to Drupal. However, the report also highlights areas where our community can and should do better.

In 2017, the reported data showed that only 6 percent of recorded code contributions were made by contributors that identify as female. After a conversation in the Drupal Diversity & Inclusion Slack channel about the report, it became clear that many people were concerned about this discrepancy. Inspired by this conversation, Tara King started the Drupal Diversity and Inclusion Contribution Team to understand how the Drupal community could better include women and underrepresented groups to increase code and community contributions.

I recently spoke with Tara to learn more about the Drupal Diversity and Inclusion Contribution Team. I quickly discovered that Tara's leadership exemplifies various Drupal Values and Principles; especially Principle 3 (Foster a learning environment), Principle 5 (Everyone has something to contribute) and Principle 6 (Choose to lead). Inspired by Tara's work, I wanted to spotlight what the DDI Contribution Team has accomplished so far, in addition to how the team is looking to help grow diversity and inclusion in the future.

A mentorship program to help underrepresented groups

Supporting diversity and inclusion within Drupal is essential to the health and success of the project. The people who work on Drupal should reflect the diversity of people who use and work with the software. This includes building better representation across gender, race, sexuality, disability, economic status, nationality, faith, technical experience, and more. Unfortunately, underrepresented groups often lack community connections, time for contribution, or resources and programs that foster inclusion, which introduces barriers to entry.

The mission of the Drupal Diversity & Inclusion Contribution Team is to increase contributions from underrepresented groups. To accomplish this goal, the DDI Contribution Team recruits team members from diverse backgrounds and underrepresented groups, and provides support and mentorship to help them contribute to Drupal. Each mentee is matched with a mentor in the Drupal community, who can provide expertise and advice on contribution goals and professional development. To date, the DDI Contribution Team supports over 20 active members.

What I loved most in my conversation with Tara is the various examples of growth she gave. For example, Angela McMahon is a full-time Drupal developer at Iowa State. Angela has been working with her mentor, Caroline Boyden, on the External Link Module. Thanks to her participation in the DDI Contribution Team, Angela has now been credited on 4 fixed issues in the past year.

Improving the reporting around diversity and inclusion

In addition to mentoring, another primary area of focus of the DDI Contribution Team is to improve reporting surrounding diversity and inclusion. For example, in partnership with the Drupal Association and the Open Demographics Project, the DDI Contribution Team is working to implement best practices for data collection and privacy surrounding gender demographics. During the mentored code sprints at DrupalCon Nashville, the DDI Contribution Team built the Gender Field Module, which we hope to deploy on

The development of the Gender Field Module is exciting, as it establishes a system to improve reporting on diversity demographics. I would love to use this data in future iterations of the 'Who sponsors Drupal development' report, because it would allow us to better measure progress on improving Drupal's diversity and inclusion against community goals.

One person can make a difference

What I love about the story of the DDI Contribution Team is that it demonstrates how one person can make a significant impact on the Drupal project. The DDI Contribution Team has grown from Tara's passion and curiosity to see what would happen if she challenged the status quo. Not only has Tara gotten to see one of her own community goals blossom, but she now also leads a team of mentors and mentees and is a co-maintainer of the Drupal 8 version of the Gender Field Module. Last but not least, she is building a great example for how other Open Source projects can increase contributions from underrepresented groups.

How you can get involved

If you are interested in getting involved with the DDI Contribution Team, there are a number of ways you can participate:

  • Support the DDI Contribution Team as a mentor, or consider recommending the program to prospective mentees. Join #ddi-contrib-team on Drupal Slack to meet the team and get started.
  • In an effort to deliberately recruit teams from spaces where people of diverse backgrounds collaborate, the DDI Contribution Team is looking to partner with Outreachy, an organization that provides paid internships for underrepresented groups to learn Free and Open Source Software and skills. If you would be interested in supporting a Drupal internship for an Outreachy candidate, reach out to Tara King to learn how you can make a financial contribution.
  • One of the long-term goals of the DDI Contribution Team is to increase the number of underrepresented people in leadership positions, such as initiative lead, module maintainer, or core maintainer. If you know of open positions, consider reaching out to the DDI Contribution Team to see how you can help fulfill this goal.

I want to extend a special thanks to Tara King for sharing her story, and for making an important contribution to the Drupal project. Growing diversity and inclusion is something everyone in the Drupal community is responsible for, and I believe that everyone has something to contribute. Congratulations to the entire DDI Contribution Team.

I published the following diary on “PowerShell: ScriptBlock Logging… Or Not?“:

Here is an interesting piece of PowerShell code which is executed from a Word document (SHA256: eecce8933177c96bd6bf88f7b03ef0cc7012c36801fd3d59afa065079c30a559). The document is a classic one. Nothing fancy: it just executes the macro and spawns a first PowerShell command… [Read more]

[The post [SANS ISC] PowerShell: ScriptBlock Logging… Or Not? has been first published on /dev/random]

So suppose you have one page/ post which for whatever reason you don’t want Autoptimize to act on? Simply add this in the post content and AO will bail out;

<!-- <xsl:stylesheet -->

Some extra info:

  • Make sure to use the “text” editor, not the “visual” one (which I used here to get the code escaped and thus visible).
  • This bailing out was added 5 years ago to stop the PHP-generated <xsl:stylesheet from Yoast SEO from being autoptimized; if I’m not mistaken, Yoast generates the stylesheet differently now.
  • The xsl-tag is enclosed in an HTML comment wrapper to ensure it is not visible (except here, where the HTML tags are escaped on purpose so you can see them).

June 18, 2018

I published the following diary on “Malicious JavaScript Targeting Mobile Browsers“:

A reader reported a suspicious piece of JavaScript code that was found on a website. In the meantime, the compromised website has been cleaned, but it was running WordPress (again, I would say![1]). The code was obfuscated; here is a copy… [Read more]

[The post [SANS ISC] Malicious JavaScript Targeting Mobile Browsers has been first published on /dev/random]

June 15, 2018

And here we go with the wrap-up of the 3rd day of the SSTIC 2018 “Immodium” edition. Indeed, yesterday, a lot of people suffered from digestive problems (~40% of the 800 attendees were affected!). This will for sure remain a key story of this edition. Anyway, it was a good edition!

The first timeslot is never an easy one on a Friday. It was assigned to Christophe Devigne: “A Practical Guide to Differential Power Analysis of USIM Cards“. USIM cards are the SIM cards that you use in your mobile phone. Guess what? They are vulnerable to some types of attacks that extract the authentication secret. What does it mean? A complete loss of confidentiality for the user’s communications. An interesting fact: Christophe and his team tested several USIM cards (9), 5 of them from French operators, and one was vulnerable. Also, 75% of the French mobile operators still distribute cards with a trivial PIN code. The technology used is called “MILENAGE“. Christophe described it and then explained how, thanks to an oscilloscope, he was able to extract keys.
The second talk targeted the Erlang language. Erlang, developed by Ericsson, is not widely used; it serves many applications, but mainly in the telecom sector to manage network devices. The talk, titled “Starve for Erlang cookie to gain remote code exec“, was presented by Guillaume Teissier.
Erlang has a feature that allows two processes to communicate. Guillaume explained how communications are established between the processes (via a specific TCP port) and how they authenticate to each other (via a cookie). This cookie is always a string of 20 uppercase characters. The talk focussed on how to intercept communications between those processes and recover this cookie. Guillaume released a tool for this.
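To see why leaking the cookie is game over, here is a minimal sketch (in Python; the function name is mine) of the digest computation used in the Erlang distribution handshake, where a node proves knowledge of the cookie by hashing it together with the peer's challenge:

```python
import hashlib

def erlang_digest(cookie: str, challenge: int) -> bytes:
    # In the Erlang distribution handshake, a node answers a random integer
    # challenge with MD5(cookie ++ decimal representation of the challenge).
    # Anyone who recovers the cookie can therefore authenticate as a node.
    return hashlib.md5((cookie + str(challenge)).encode()).digest()
```

Since the digest is a plain MD5 over the cookie and a value sent on the wire, intercepting the traffic is enough to mount offline guessing against weak cookies.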
The next talk was about HACL*, a crypto library written in formally verified code and used by FireFox. Benjamin Beurdouche and Jean Karim Zinzindohoue explained how they developed the library (using the F* language).
Then, Jason Donenfeld presented his project: WireGuard. This is a layer-3 secure network tunnel for IPv4 & IPv6 (read: a VPN) designed for the Linux kernel (but available on other platforms: macOS, Android and other embedded OSes). It is UDP-based and provides an authentication model similar to SSH and its ~/.ssh/authorized_keys. It can replace a good old OpenVPN or IPsec solution without problem. Compared to other solutions, the code base is very small and can be easily audited/reviewed. The setup is very easy:
# ip link add wg0 type wireguard
# ip address add 10.0.0.1/24 dev wg0
# ip route add default dev wg0
# ifconfig wg0 ...
# iptables -A input -i wg0 ...
Jason explained in detail how the authentication mechanism has been implemented to ensure that, once a packet reaches a system, we are sure of its origin. It is so easy to set up that there is even a quick tutorial on a friend’s wiki.
The next presentation was made by Yvan GENUER and focussed on SAP (“Ca sent le SAPin!“). Everybody knows SAP, the worldwide leader in ERP solutions. A lot of security issues have already been found in multiple tools or modules. But this time, the focus was on a module called SAP IGS or “Internet Graphic Services”. This module helps to render and process multiple files inside an SAP infrastructure. After some classic investigations (network traffic capture, searches in the source code; yes, SAP code is stored in databases), they found an interesting call: “ADM:INSTALL”. It is used to install new shape files. They explained the two vulnerabilities they found: the service allows the creation of arbitrary files on the file system, and there is a DoS when you create a file with a filename longer than 256 characters.
The next talk was unusual but very interesting: Yves-Alexis Perez from the Debian Security Team came on stage to explain how his team works and how they handle security issues in the Debian Linux distribution. The core team consists of 10 people (5 of them really active), plus other developers and maintainers. He reviewed the process that is followed when a vulnerability is reported (triage, pushing patches, etc.). He also reviewed some vulnerabilities from the past and how they were handled.
After a nice lunch break with friends and some local food, back in the auditorium for two talks. Ivan Kwiatkowski demonstrated the tool he wrote to help pentesters handle remote shells in a comfortable way: “Hacking Harness open-source“. Ivan started with some bad stories that every pentester in the world has faced: you get a shell but no TTY, you lose it, you suffer from latency, etc. This tool helps to get rid of these problems and allows the pentester to work as in a normal shell without any footprint. Other features allow, for example, transferring files back to the attacker. It looks like a nice tool; have a look at it, definitely!
Then, Florian Maury presented “DNS Single Point of Failure Detection using Transitive Availability Dependency Analysis“. Everybody has a love/hate relationship with DNS. No DNS, no Internet. Florian came back to the core principles of DNS and also a weak point: a single point of failure can make your services unreachable on the Internet. He wrote a tool that, based on DNS requests, shows whether a domain is vulnerable to one or more single points of failure. In the second part of the talk, Florian presented the results of research he performed on 4M domains (plus the Alexa top list). Guess what? A lot of domains suffer from at least one SPoF.
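As an illustration of the kind of dependency analysis involved (this toy heuristic is mine, not the logic of Florian's actual tool), a domain whose name servers all sit in the same /24 network obviously has a single point of failure:

```python
import ipaddress

def single_point_of_failure(ns_addrs):
    # Toy check: if every authoritative name server of a domain lives in
    # the same /24, one routing or provider incident can take them all
    # offline at once, making the whole domain unreachable.
    nets = {ipaddress.ip_network(f"{a}/24", strict=False) for a in ns_addrs}
    return len(nets) <= 1
```

A real analysis, as presented in the talk, must also follow transitive dependencies: the name servers of the name servers' domains, and so on.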

Finally, the closing keynote was presented by Patrick Pailloux, the technical director of the DGSE (“Direction Générale de la Sécurité Extérieure”). An excellent speaker, he presented the “Cyber” goals of the French secret services, of course only what he was authorized to disclose 😉 It was also a good opportunity to repeat that they are always looking for skilled security people.

[The post SSTIC 2018 Wrap-Up Day #3 has been first published on /dev/random]

The Composer Initiative for Drupal

At DrupalCon Nashville, we launched a strategic initiative to improve support for Composer in Drupal 8. To learn more, you can watch the recording of my DrupalCon Nashville keynote or read the Composer Initiative issue on Drupal.org.

While Composer isn't required when using Drupal core, many Drupal site builders use it as the preferred way of assembling websites (myself included). A growing number of contributed modules also require the use of Composer, which increases the need to make Composer easier to use with Drupal.

The first step of the Composer Initiative was to develop a plan to simplify Drupal's Composer experience. Since DrupalCon Nashville, Mixologic, Mile23, Bojanz, Webflo, and other Drupal community members have worked on this plan. I was excited to see that last week, they shared their proposal.

The first phase of the proposal is focused on a series of changes in the main Drupal core repository. The directory structure will remain the same, but it will include scripts, plugins, and embedded packages that enable the bundled Drupal product to be built from the core repository using Composer. This gives users who download Drupal from Drupal.org a clear path to manage their Drupal codebase with Composer if they choose.

I'm excited about this first step because it will establish a default, official approach for using Composer with Drupal. That makes using Composer more straightforward, less confusing, and could theoretically lower the bar for evaluators and newcomers who are familiar with other PHP frameworks. Making things easier for site builders is a very important goal; web development has become a difficult task, and removing complexity from the process is crucial.

It's also worth noting that we are planning the Automatic Updates Initiative. We are exploring whether an automated update system can be built on top of the Composer Initiative's work and provide an abstraction layer for those who don't want to use Composer directly. I believe that could be truly game-changing for Drupal, as it would remove a great deal of complexity.

If you're interested in learning more about the Composer plan, or if you want to provide feedback on the proposal, I recommend you check out the Composer Initiative issue and comment 37 on that issue.

Implementing this plan will be a lot of work. How fast we execute these changes depends on how many people will help. There are a number of different third-party Composer related efforts, and my hope is to see many of them redirect their efforts to make Drupal's out-of-the-box Composer effort better. If you're interested in getting involved or sponsoring this work, let me know and I'd be happy to connect you with the right people!

June 14, 2018

The second day started with a topic that was of great interest to me: Docker containers, or “Audit de sécurité d’un environnement Docker” by Julien Raeis and Matthieu Buffet. Docker is everywhere today and, like many new technologies, is not always mature when deployed, sometimes in a corner by developers. They explained (for those living on the moon) what Docker is in 30 seconds. The idea of the talk was not to propose a tool (you can have a look here). Based on their research, most containers are deployed with the default configuration, and images are downloaded without security pre-checks. While Docker is very popular on Linux systems, it is also available for Windows. In that case, there are two working modes: via Windows Server Containers (based on objects of type “job”) or via Hyper-V containers. They reviewed different aspects of containers like privilege escalation and abuse of resources and capabilities. Some nice demonstrations were presented, like a privilege escalation and access to a file on the host from a container. Keep in mind that Docker is not considered a security tool by its developers! An interesting talk, but with a lack of practical material that could help auditors.
The next talk was also about virtualization and, more precisely, how to protect virtual machines from a guest point of view. It was presented by Jean-Baptiste Galet. The scenario was: if the hypervisor is already compromised by an attacker, how do you protect the VMs running on top of it? We can face the same kind of issue with a rogue admin: by design, an admin has full access to the virtual hosts. The goal is to meet the following requirements:
  • To use a trusted hypervisor
  • To verify the boot sequence integrity
  • To encrypt disks (and snapshots!)
  • To protect memory
  • To perform a safe migration between different hypervisors
  • To restrict access to console, ports, etc.

Some features were already implemented by VMware in 2016, like an ESXi secure boot procedure, VM encryption and vMotion data encryption. Jean-Baptiste explained in detail how to implement such controls. For example, to implement a safe boot, UEFI & a TPM chip can be used.

The next two slots were assigned to short presentations (15 mins) focussed on specific tools. The first one was a tool that helps in the development of an ASN.1 encoder/decoder. ASN.1 means “Abstract Syntax Notation One” and is used in many domains, the most important one being mobile network operators.
The second one was ProbeManager, developed by Matthieu Treussart. Why this talk? Matthieu was looking for a tool to help with the day-to-day management of IDS (like Suricata) but did not find a solution that matched his requirements. So, he decided to write his own tool: ProbeManager was born! The tool is written in Python and has a (light) web interface to perform all the classic tasks to manage IDS sensors (creation, deployment, creation of rules, monitoring, etc). The tool is nice, but the web interface is very light and it lacks fine-tuning of IDS rules. Note that it is also compatible with Bro and (soon) OSSEC. I liked the built-in integration with MISP!
After the morning coffee break, we had the chance to welcome Daniel Jeffrey on stage. Daniel works for the Internet Security Research Group of the Linux Foundation and is involved in the Let’s Encrypt project. In the first part, Daniel explained why HTTPS became mandatory to better protect Internet users’ privacy, but SSL is hard! It’s boring, time-consuming, confusing and costly. The goals of the Let’s Encrypt project are to automate certificate management, to offer it for free and to be open. Let’s Encrypt is maintained by a team of only 12 people! They went into production in only eight months. Then, Daniel explained how Let’s Encrypt is implemented. It was interesting to learn more about the types of challenges available to enroll/renew certificates: DNS-01 is easy when many frontends need simultaneous renewals; HTTP-01 is useful for a few servers that get certs and when DNS lag can be an issue.
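For reference, the HTTP-01 challenge boils down to serving a small string derived from the challenge token and the ACME account key; a minimal sketch following RFC 8555 (function and parameter names are mine):

```python
import base64
import hashlib

def key_authorization(token: str, account_jwk: bytes) -> str:
    # HTTP-01 (RFC 8555): the web server must answer
    #   GET /.well-known/acme-challenge/<token>
    # with "<token>.<base64url(SHA-256(canonical account JWK))>",
    # proving control of both the hostname and the ACME account key.
    thumbprint = base64.urlsafe_b64encode(
        hashlib.sha256(account_jwk).digest()
    ).rstrip(b"=").decode()
    return f"{token}.{thumbprint}"
```

This is also why HTTP-01 is awkward behind many frontends: every frontend must be able to serve that response, while DNS-01 only needs one TXT record.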
Then, two other tools were presented. “YaDiff” (available here) helps to propagate symbols between analysis sessions. The idea of the tool came as a response to a big issue with malware analysis: it is a repetitive job. Once the analysis of a malware sample is completed, symbols are exported and can be reused in other analyses (in IDA). An interesting tool if you perform reverse engineering as a core activity. The second one was Sandbagility. After a short introduction to the different methods available to perform malware analysis (static, dynamic, in a sandbox), the authors explained their approach: interact with a Windows sandbox without an agent installed on it, by talking to the hypervisor instead. The result of their research is a framework, written in Python, that implements a protocol called “Fast Debugging Protocol”. They performed some demos and showed how easy it is to extract information from the malware but also to interact with the sandbox. One of the demos was based on the WannaCry ransomware. Warning, this is not a new sandbox: the guest Windows system must still be fine-tuned to prevent easy VM detection! This is very interesting and deserves to be tested!
After lunch, the last regular presentations started with one about “Java Card”, presented by Guillaume Bouffard and Léo Gaspard. It was, in some way, an extension of the talk about an armoured USB device, of which the Java Card is one of the components.
As usual, the afternoon was completed with a wrap-up of the SSTIC challenge and rump sessions. The challenge was quite complex (as usual?) and included many problems based on crypto. The winner came on site and explained how he solved the challenge. This is part of the competition: players must deliver a document containing all the details and findings of the game. A funny anecdote about the challenge: the server was compromised because an ~/.ssh/authorized_keys file was left writable.
Rump sessions are also a key event during the conference. Rules are simple: 5 minutes (4 today due to the number of proposals received), if people applaud, you stop otherwise you can continue. Here is the list of topics that were presented:
  • A “Burger Quizz”-like session about SSTIC
  • Pourquoi c’est flou^Wnet? (How the SSTIC crew provides live streaming and recorded videos)
  • Docker Explorer
  • Nordic made easy – Reverse engineering of a nRF5 firmware (from Nordic Semiconductor)
  • RTFM – Read the Fancy Manual
  • IoT security
  • Mirai, dis-moi qui est la poubelle? (Mirai, tell me who is the trash can?)
  • From LFI to domain admin rights
  • Perfect (almost) SQL injection detection
  • Invite de commande pour la toile (dans un langage souverain): WinDev (a command prompt for the web, in a “sovereign” language: WinDev)
  • How to miss your submission to a call-for-paper
  • Suricata & les moutons (Suricata & the sheep)
  • Les redteams sont nos amies or what mistakes to avoid when you are in a red team (very funny!)
  • ipibackups
  • Représenter l’arborescence matérielle (representing the hardware device tree)
  • La télé numérique dans le monde (digital TV around the world)
  • ARM_NOW
  • Signing certificates with SSH keys
  • Smashing the func for SSTIC and profit
  • Wookey
  • Coffee Plz! (or how to get free coffee in your company)
  • Modmobjam
  • Bug bounty
  • (Un)protected users
  • L’anonymat du pauvre (the poor man’s anonymity)
  • Abuse of the YAML format

The day ended with the classic social event in the beautiful setting of “Le couvent des Jacobins“:

Le couvent des jacobins

My feeling is that there were fewer entertaining talks today (based on my choices/feelings, of course), but the one about Let’s Encrypt was excellent. Stay tuned for the last day tomorrow!

[The post SSTIC 2018 Wrap-Up Day #2 has been first published on /dev/random]

I published the following diary on “A Bunch of Compromized WordPress Sites“:

A few days ago, one of our readers contacted us and reported an incident affecting his WordPress-based website. He performed quick checks by himself and found some pieces of evidence:

  • The main index.php file was modified and some heavily obfuscated PHP code was added at the top of it.
  • A suspicious PHP file was dropped in every sub-directory of the website.
  • The wp-config.php was altered and the database settings changed to point to a malicious MySQL server.

[Read more]

[The post [SANS ISC] A Bunch of Compromized WordPress Sites has been first published on /dev/random]

June 13, 2018

Hello Readers,
I’m back in the beautiful city of Rennes, France to attend my second edition of SSTIC. My first one was a very good experience (you can find my previous wrap-ups on this blog: day 1, day 2, day 3) and this one was even more interesting because the organizers invited me to participate in the review and selection of the presentations. The conference moved to a new location to be able to accept the 800 attendees, quite a challenge!

As usual, the first day started with a keynote, which was assigned to Thomas Dullien aka Halvar Flake. The topic was “Closed, heterogeneous platforms and the (defensive) reverse engineers dilemma”. Thomas has been a reverse engineer for years and decided to look back at twenty years of reverse engineering. In 2010, this topic was already covered in a blog post and, perhaps, it was time to take another look. What progress has been made? Thomas reviewed today’s challenges, some interesting changes, and the future (how computing is changing and the impact on reverse engineering tasks). Thomas’s feeling is that we have many tools available today (Frida, Radare, Angr, BinNavi, …) which should be helpful, but it’s not the case. Getting live debugging and traces from devices like mobile phones is a pain (closed platforms), and there is a clear lack of reliable libraries to retrieve a sufficient amount of data. Also, “debugability” is reduced due to more and more security controls in place; there is clearly a false sense of security here: “It’s not because your device is not debuggable that it is safe!” said Thomas. Disabling the JTAG on a router PCB will not make it more secure. There is also a “left shift” in the development process to try to reduce the time to market (software is developed on hardware that is not completely ready). Another fact? The poor development practices of most reverse engineers. Take as an example a quick Python script written to fix a problem at a time ‘x’: often, the same script is still used months or years later without proper development guidelines. Some tools are also developed as support for a research project or a presentation but do not work properly in real-life cases. For Thomas, the future will bring more changes in technologies than in the last 20 years: the cloud will bring not only “closed source” tools but also “closed binary” ones, and infrastructures will become heterogeneous. A very nice keynote, and Thomas did not hesitate to throw a stone into the water!
After a first coffee break, Alexandre Gazet and Fabien Perigaud presented research about HP iLO interfaces: “Subverting your server through its BMC: the HPE iLO4 case”. After a brief introduction to the product and what it does (basically: allow out-of-band control/monitoring of an HP server), a first demo was presented based on their previous research: dump the kernel memory of the server, implant a shellcode and become root on the Linux server. Win! Their research generated CVE-2017-12542 and a patch has been available for a while (it was a classic buffer overflow). But does it mean that iLO is a safe product now? They came back with new research to demonstrate that no, it’s not secure yet. Even if HP did a good job fixing the previous issue, some controls are still lacking. Alexandre & Fabien explained how the firmware upgrade process fails to validate the signature and can be abused to perform malicious activities, again! The goal was to implant a backdoor in the Linux system running on the HP server controlled by the compromised iLO interface. They released a set of tools to check your iLO interfaces, but the recommendation remains the same: patch, and do not deploy iLO interfaces in the wild.
The next talk was about “T-Brop” or “Taint-Based Return Oriented Programming”, presented by Colas Le Guernic & Francois Khourbiga. A very difficult topic for me. They reviewed what “ROP” (Return Oriented Programming) is and described the two existing techniques to detect possible ROP in a program, syntactic or symbolic, with the pros & cons of both. Then, they introduced their new approach, called T-Brop, which mixes the best of both solutions.
The next talk was about “Certificate Transparency“, presented by Christophe Brocas & Thomas Damonneville. HTTPS has been pushed to the front of the stage for a while to improve web security, and one of the controls available to help track certificates and rogue websites is Certificate Transparency, a Google initiative known as RFC 6962. They explained what’s behind this RFC: basically, all issued SSL certificates must be added to an unalterable, append-only log which can be accessed freely for tracking and monitoring purposes. Christophe & Thomas work for a French organization that is often targeted by phishing campaigns, and this technology helps them in their day-to-day operations to track malicious sites. More precisely, they track two types of certificates:
  • Certificates that mimic their official ones (typo-squatting, new TLDs, …)
  • Domains used by their organization that could be used in the wrong way.

In the second scenario, they spotted a department which developed a web application hosted by a 3rd-party company and using Let’s Encrypt, which is not compliant with their internal rules. Their tools have been released (here). Definitely a great talk, because it does not require a lot of investment (time, money) and can greatly improve your visibility of potential issues (e.g. detecting phishing attacks before they are really launched).
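The first kind of monitoring (spotting look-alike names in the certificate stream) can be approximated with a simple edit-distance filter; this sketch is mine, not their released tooling:

```python
def edit_distance(a: str, b: str) -> int:
    # Classic Levenshtein distance, computed row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_like_squat(candidate: str, brand: str) -> bool:
    # Flag CT-logged names that are one or two edits away from a watched
    # domain (e.g. "examp1e.fr" vs "example.fr") but not the domain itself.
    d = edit_distance(candidate, brand)
    return 0 < d <= 2
```

Fed with the stream of newly logged certificates, even such a crude filter can catch typo-squatted names before the phishing site goes live.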

After lunch, a bunch of short talks was scheduled. First, Emmanuel Duponchelle and Pierre-Michel Ricordel presented “Risques associés aux signaux parasites compromettants : le cas des câbles DVI et HDMI“ (risks associated with compromising emanations: the case of DVI and HDMI cables). Their research focused on the TEMPEST issue with video cables. They started right away with a live demo showing how a computer’s video output can be captured:
Then, they explained how video signals work and what the VGA, DVI & HDMI standards are (FYI, HDMI is like DVI but with a new type of connector). Solving TEMPEST issues is as easy as using properly shielded cables. They demonstrated different cables, good and bad. Keep in mind: low-cost cables are usually very bad (not a surprise). For the demo, they used software called TempestSDR. Also, for sensitive computers, use VGA cables instead of HDMI; they leak less data!
The next talk was close to the previous topic. This time, it focussed on SmartTVs and, more precisely, the DVB-T protocol. José Lopes Esteves & Tristan Claverie presented their research, which is quite… scary! Basically, a SmartTV is a computer with many I/O interfaces and, as they are cheaper than normal computer monitors, they are often installed in meeting rooms, where sensitive information is exchanged. They explained that, besides the audio & video streams, subtitles, programs and “apps” can also be delivered via a DVB-T signal. Such “apps” are linked to a TV channel (that must be selected/viewed). Those apps are web-based and, if the right info is provided, can be installed silently and automatically! So nice! The major issues are:
  • HTTP vs HTTPS (no comment!)
  • Legacy mode fallback (if not signed, no problem)
  • Unsafe APIs
  • Time-based trust

They explained how to protect against this, like asking the user to approve the installation of an app or access to certain resources, but that is not easy to implement in a “TV” used by non-technical people. Another great talk! Think about this when you see a TV in a meeting room.

The next talk was the demonstration of a complete pwnage of a SmartPlug (again, a “smart” device) that can be controlled via a WiFi connection: “Three vulns, one plug” by Gwenn Feunteun, Olivier Dubasque and Yves Duchesne. It started with a mention on the manufacturer’s website. When you read something like “we are using top-crypto algorithms…“, it is a good sign of failure. Indeed. They bought an adapter and started to analyze its behaviour. The first step was to understand how the device was able to “automatically” configure its WiFi interface via a mobile phone. By doing a simple MitM attack, they checked the traffic between the smartphone and the SmartPlug. They discovered that the WiFi key was broadcast using a… Caesar cipher (with a shift of 120)! The second vulnerability was found in the WiFi chipset, which implements a backdoor via an open UDP port. They also discovered that WPS was available but not used. For fun, they decided to implement it using an Arduino 🙂 For the story, the same kind of WiFi chipset is also used in medical and industrial devices… Just one remark about the talk: it seems that the manufacturer of the SmartPlug was never contacted to report the vulnerabilities found… sad!
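A byte-wise Caesar cipher is trivially reversible; assuming the shift is applied to every byte of the key (my reading of the description), recovering the WiFi key is a one-liner:

```python
def caesar_shift(data: bytes, shift: int) -> bytes:
    # "Encrypt" or "decrypt" by adding a constant to every byte modulo 256;
    # with shift=120 this models the SmartPlug's scheme, and shift=-120
    # undoes it. This offers no real confidentiality at all.
    return bytes((b + shift) % 256 for b in data)
```

An attacker sniffing the provisioning traffic only has to try the 255 possible shifts, or simply the known value of 120, to get the WiFi key in clear.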

Then, Erwan Béguin came to present the Escape Room they developed at his school. The Escape Room focusses on security and awareness and is aimed at non-tech people. When I read the abstract, I had a strange feeling about the talk, but it was nice and explained how people reacted, with some findings about their behaviour when working in groups. Example: in a group, if the “leader” gives his/her approval, people will follow and perform unsafe actions like inserting a malicious USB device into a laptop.
After the afternoon coffee break, Damien Cauquil presented a cool talk about hacking PCBs: “Du PCB à l’exploit: étude de cas d’une serrure connectée Bluetooth Low Energy“ (from the PCB to the exploit: a case study of a Bluetooth Low Energy smart lock). When you are facing a piece of hardware, there are different approaches: you can open the box, locate the JTAG or serial port, brute-force the serial speed, get a shell, root access. Done! Damien does not like this approach and prefers to work in a more rigorous way, which can be helpful in many cases. Sometimes, just by inspecting the PCB, you can deduce some features or missing controls. At the moment, there are two frameworks that address the security of IoT devices: the OWASP IoT project and the one from Rapid7. In the second phase, Damien applied his technique to a real device (a smart door lock). Congrats to him for finishing the presentation in a hurry due to the video problems!
Then, the “Wookey” project was presented by a team from ANSSI. The idea behind this project is to build a secure USB storage device that protects against all types of attacks (data leaks, USBKill, etc.). The idea is nice and they performed a huge amount of work, but it is very complex and not ready to be used by most people…
Finally, Emma Benoit presented the results of a pentest she performed with Guillaume Heilles and Philippe Teuwen on an embedded device: “Attacking serial flash chip: case study of a black box device“. The device had a flash chip on the PCB that should contain interesting data. There are two types of attacks: “in circuit” (probes are connected to the chip pins) or “chip-off” (physical extraction). In this case, they decided to use the second method and explained step by step how they succeeded. The most challenging step was to find an adapter to connect the unsoldered chip to an analyzer. Often, you don’t have the right adapter and you must build your own. All the steps were described and, finally, the data was extracted from the flash. Bonus: there was a telnet interface available without any password 😉
That’s all for today! See you tomorrow for another wrap-up!

[The post SSTIC 2018 Wrap-Up Day #1 has been first published on /dev/random]

June 11, 2018

Merkel and Macron should use everything in their economic power to invest in our own European military.

For example, whenever the ECB must pump money into the EU system, it could do so through increased spending on the European military.

This would be a great way to increase euro inflation to match the ‘below, but close to, two percent’ annual inflation target.

However, the EU budget for the military should not go to NATO. Right now it should go to the EU’s own national armies. NATO is more or less the United States’ military influence in Europe. We saw at the last G7 that we can’t rely on the United States’ help.

Therefore, it should use exclusively European suppliers for military hardware. We don’t want to spend euros outside of our EU system; let the money circulate within our EU economy. This implies no F-35 for Belgium, but instead, for example, the Eurofighter Typhoon. The fact that Belgium can’t deliver the United States’ nuclear weapons without their F-35 means that the United States should take their nuclear bombs back. There is no democratic legitimacy to keep them in Belgium anyway.

It’s also time to create a pillar similar to the European Union: a military branch of the EU.

Belgium and the Netherlands are already sharing naval and air force resources. Let’s extend this principle to other EU countries.

June 08, 2018

Jenkins logo. This Thursday, June 28, 2018 at 7 p.m., the 70th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: Jenkins in 2018


  • Exceptionally, it takes place on the 4th Thursday of the month!
  • An introductory “Docker/Jenkins” workshop will be held from 2 p.m.! Details below.

Theme: sysadmin

Audience: sysadmins|developers|companies|students

Speakers: Damien Duportal and Olivier Vernin (CloudBees)

Location of this session: Université de Mons, Faculté Polytechnique, Site Houdain, Rue de Houdain 9, auditorium 3 (see this map on the UMONS website, or the OSM map). Enter through the main door, at the back of the main courtyard. Follow the signs from there.

Participation is free and only requires your registration, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page. The session will be followed by a friendly drink around 9 p.m. A giant screen will be set up so that the second half of the football match (Belgium - England) can be followed live!

The Jeudis du Libre in Mons also benefit from the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to consult the agenda and to subscribe to the mailing list in order to systematically receive the announcements.

As a reminder, the Jeudis du Libre are intended as spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized on the premises of, and in collaboration with, Mons universities and colleges involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description:

  • Part I: Introduction to Jenkins X, a continuous integration and continuous deployment solution for modern “cloud” applications on Kubernetes. Summary: Jenkins X is a project that rethinks how developers should interact with continuous integration and deployment in the cloud, focusing on development team productivity through automation, tooling and better DevOps practices.
  • Part II: The Jenkins Configuration-as-Code plugin. Summary: In 2018, we know how to define our jobs with JobDSL or Pipeline. But how do we define the configuration of Jenkins with the same model, using just a YAML file?

Short bios:

  • Damien Duportal: Training Engineer @ CloudBees. I am an IT engineer tinkering from development to production. Trainer and mentor, I also love to transmit and teach. Open-source aficionado. Docker & Apple addict. Human being.
  • Olivier Vernin: Fascinated by new technologies, particularly computer science, I am continuously looking for ways to improve my skills. The Linux/Unix domain especially piqued my interest and encourages me to gain an in-depth understanding.

Introductory “Docker/Jenkins” workshop, from 2 p.m. to 5:30 p.m.:

  • Introduction to Docker
  • Introduction to Jenkins
  • Integrating Jenkins with Docker
  • Bring your own challenge

The workshop is limited to 25 people. Details and registration via the page

When ordering a revision of a power electronics board from Aisler, I decided to also get a metal paste stencil so I could solder cleanly using the reflow oven.

For a first board I had simply taped the board and stencil to the table and applied solder paste. This worked, but it is not very handy.

Then I came up with the idea of using a 3D-printed PCB holder to ease the process.

The holder

The holder (just a rectangle with a hole) tightly fits the PCB. It is a bit larger than the stencil and 0.1 mm thinner than the PCB to make sure the stencil sits tightly on the PCB.

I first made some smaller test prints, but after 3 revisions the following OpenSCAD script gave a perfectly fitting PCB holder:

// PCB size
bx = 41;
by = 11.5;
bz = 1.6;

// stencil size (with some margin for tape)
sx = 100; // from 84.5
sy = 120; // from 104

// aisler compensation
board_adj_x = 0.3;
board_adj_y = 0.3;

// 3D printer compensation
printer_adj_x = 0.1;
printer_adj_y = 0.1;

x = bx + board_adj_x + printer_adj_x;
y = by + board_adj_y + printer_adj_y;
z = bz - 0.1; // have PCB be ever so slightly higher

// stencil-sized plate minus a PCB-sized hole
difference() {
    cube([sx,sy,z], center=true);
    cube([x,y,z*2], center=true);
}


The PCB in the holder:

PCB in holder

The stencil taped to it:

Stencil taped

Paste on stencil:

Paste on stencil

Paste applied:

Paste applied

Stencil removed:

Stencil removed

Components placed:

Components placed

Reflowed in the oven:



Using the 3D-printed jig worked well. The board under test:

Under test

June 06, 2018

I published the following diary on “Converting PCAP Web Traffic to Apache Log“:

PCAP data can be really useful when you must investigate an incident but when the amount of PCAP files to analyse is counted in gigabytes, it may quickly become tricky to handle. Often, the first protocol to be analysed is HTTP because it remains a classic infection or communication vector used by malware. What if you could analyze HTTP connections like an Apache access log? This kind of log can be easily indexed/processed by many tools… [Read more]
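The diary itself covers the actual conversion; purely as an illustration of the target format, here is a hypothetical Python helper (all names and values are made up, not taken from the diary) that renders one extracted HTTP transaction as an Apache "combined" log line:

```python
from datetime import datetime, timezone

def apache_log_line(client_ip, when, method, uri, status, size,
                    referer="-", user_agent="-"):
    """Render one HTTP transaction in Apache 'combined' log format."""
    ts = when.strftime("%d/%b/%Y:%H:%M:%S %z")
    return (f'{client_ip} - - [{ts}] "{method} {uri} HTTP/1.1" '
            f'{status} {size} "{referer}" "{user_agent}"')

# Example transaction as it might be extracted from a PCAP:
line = apache_log_line(
    "", datetime(2018, 6, 6, 12, 0, 0, tzinfo=timezone.utc),
    "GET", "/index.html", 200, 1234, user_agent="curl/7.58.0")
print(line)
# → - - [06/Jun/2018:12:00:00 +0000] "GET /index.html HTTP/1.1" 200 1234 "-" "curl/7.58.0"
```

Once HTTP transactions are in this shape, any tool that understands Apache access logs can index and process them.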

[The post [SANS ISC] Converting PCAP Web Traffic to Apache Log has been first published on /dev/random]

One of the most stressful experiences for students is the process of choosing the right university. Researching various colleges and universities can be overwhelming, especially when students don't have the luxury of visiting different campuses in person.

At Acquia Labs, we wanted to remove some of the complexity and stress from this process, by making campus tours more accessible through virtual reality. During my presentation at Acquia Engage Europe yesterday, I shared how organizations can use virtual reality to build cross-channel experiences. People that attended Acquia Engage Europe asked if they could have a copy of my video, so I decided to share it on my blog.

The demo video below features a high school student, Jordan, who is interested in learning more about Massachusetts State University (a fictional university). From the comfort of his couch, Jordan is able to take a virtual tour directly from the university's website. After placing his phone in a VR headset, Jordan can move around the university campus, explore buildings, and view program resources, videos, and pictures within the context of his tour.

All of the content and media featured in the VR tour is stored in the Massachusetts State University's Drupal site. Site administrators can upload media and position hotspots directly from within the Drupal backend. The React frontend pulls in information from Drupal using JSON API. In the video below, Chris Hamper (Acquia) further explains how the decoupled React VR application takes advantage of new functionality available in Drupal 8.

It's exciting to see how Drupal's power and flexibility can be used beyond traditional web pages. If you are interested in working with Acquia on virtual reality applications, don't hesitate to contact the Acquia Labs team.

Special thanks to Chris Hamper for building the virtual reality application, and thank you to Ash Heath, Preston So and Drew Robertson for producing the demo videos.

June 05, 2018

I published the following diary on “Malicious Post-Exploitation Batch File“:

Here is another interesting file that I found while hunting. It is a malicious Windows batch file (.bat) which helps to exploit a freshly compromised system (or… to be used by a rogue user). I don’t have a lot of information about the file origin, I found it on VT (SHA256: 1a611b3765073802fb9ff9587ed29b5d2637cf58adb65a337a8044692e1184f2). The script is very simple and relies on standard windows system tools and external utilities downloaded when needed… [Read more]

[The post [SANS ISC] Malicious Post-Exploitation Batch File has been first published on /dev/random]

June 04, 2018

Microsoft acquires GitHub

Today, Microsoft announced it is buying GitHub in a deal that will be worth $7.5 billion. GitHub hosts 80 million source code repositories, and is used by almost 30 million software developers around the world. It is one of the most important tools used by software organizations today.

As the leading cloud infrastructure platforms — Amazon, Google, Microsoft, etc — mature, they will likely become functionally equivalent for the vast majority of use cases. In the future, it won't really matter whether you use Amazon, Google or Microsoft to deploy most applications. When that happens, platform differentiators will shift from functional capabilities, such as multi-region databases or serverless application support, to an increased emphasis on ease of use, the out-of-the-box experience, price, and performance.

Given multiple functionally equivalent cloud platforms at roughly the same price, the simplest one will win. Therefore, ease of use and out-of-the-box experience will become significant differentiators.

This is where Microsoft's GitHub acquisition comes in. Microsoft will most likely integrate its cloud services with GitHub; each code repository will get a button to easily test, deploy, and run the project in Microsoft's cloud. A deep and seamless integration between Microsoft Azure and GitHub could result in Microsoft's cloud being perceived as simpler to use. And when there are no other critical differentiators, ease of use drives adoption.

If you ask me, Microsoft's CEO, Satya Nadella, made a genius move by buying GitHub. It could take another ten years for the cloud wars to mature, and for us to realize just how valuable this acquisition was. In a decade, $7.5 billion could look like peanuts.

While I trust that Microsoft will be a good steward of GitHub, I personally would have preferred to see GitHub remain independent. I suspect that Amazon and Google will now accelerate the development of their own versions of GitHub. A single, independent GitHub would have maximized collaboration among software projects and developers, especially those that are Open Source. Having a variety of competing GitHubs will most likely introduce some friction.

Over the years, I had a few interactions with GitHub's co-founder, Chris Wanstrath. He must be happy with this acquisition as well; it provides stability and direction for GitHub, ends a 9-month CEO search, and is a great outcome for employees and investors. Chris, I want to say congratulations on building the world's biggest software collaboration platform, and thank you for giving millions of Open Source developers free tools along the way.

June 03, 2018

Isn't it about time that our centre for cybersecurity required government services such as the Belgian army to always communicate using e-mails that are (at least) signed, and hopefully also encrypted, with e.g. PGP? Yes, yes. We can even encrypt them. High tech in Belgium. Just imagine. Madness!

Imagine: you could verify both the e-mail (the content, the message itself) and the sender, and the contents could be encrypted during transit and storage of the message. In the event of an "independent" investigation, we would have (mathematical) guarantees that everything is exactly as it was sent back then.

All things that would have been very handy in the saga about the e-mails on whether or not our F-16 aircraft can keep flying longer.

The ICT services of the opposition parties could then receive half an hour of training on how, with PGP in hand, they can cryptographically verify all of this.

P.S. I know very well that, in the circles this is about, the very fact that certain things cannot be traced afterwards is seen as a valuable feature.

We have the best cryptographers in the world in Leuven. But our Belgian army cannot implement this for its e-mails?

The title of this blog post comes from a recent Platformonomics article that analyzes how much Amazon, Google, Microsoft, IBM and Oracle are investing in their cloud infrastructure. It does that analysis based on these companies' publicly reported CAPEX numbers.

Capital expenditures, or CAPEX, is money used to purchase, upgrade, improve, or extend the life of long-term assets. Capital expenditures generally take two forms: maintenance expenditures (money spent on normal upkeep and maintenance) and expansion expenditures (money used to buy assets to grow the business, or to buy assets to actually sell). This could include buying a building, upgrading computers, acquiring a business, or, in the case of cloud infrastructure vendors, buying the hardware needed to grow their cloud infrastructure.

Building this analysis on CAPEX spending is far from perfect, as it includes investments that are not directly related to scaling cloud infrastructure. For example, Google is building subsea cables to improve their internet speed, and Amazon is investing a lot in its package and shipping operations, including the build-out of its own cargo airline. These investments don't advance their cloud services businesses. Despite these inaccuracies, CAPEX is still a useful indicator for measuring the growth of their cloud infrastructure businesses, simply because these investments dwarf others.

The Platformonomics analysis prompted me to do a bit of research on my own.

The evolution of Amazon, Alphabet, Google, IBM and Oracle's CAPEX between 2008 and 2018

The graph above shows the trailing twelve months (TTM) CAPEX spending for each of the five cloud vendors. The CAPEX numbers don't lie: cloud infrastructure services is clearly a three-player race. Only three cloud infrastructure companies are really growing: Amazon, Google (Alphabet) and Microsoft. Oracle and IBM are far behind, and their spending is not enough to keep pace with Amazon, Microsoft or Google.

Amazon's growth in CAPEX is the most impressive. This becomes really clear when you look at the percentage growth:

The percentage growth of Amazon, Alphabet, Google, IBM and Oracle's CAPEX between 2008 and 2018

Amazon's CAPEX has exploded over the past 10 years. In relative terms, it has grown more than all other companies' CAPEX combined.

The scale is hard to grasp

To put the significance of these investments in cloud services in perspective: over the last 12 months, Amazon and Alphabet's CAPEX was almost 10x the size of Coca-Cola's, a company whose products are available in every grocery store, gas station, and vending machine in every town and country in the world. More than 3% of all beverages consumed around the world are Coca-Cola products. The amount of money cloud infrastructure vendors are investing in CAPEX is simply hard to grasp.

The CAPEX of Amazon, Alphabet, Google vs Coca-Cola between 2008 and 2018
Disclaimers: As a public market investor, I'm long Amazon, Google and Microsoft. Also, Amazon is an investor in my company, Acquia.

June 02, 2018

This is an ode to the Drupal Association.


  1. Yesterday, I stumbled upon Customizing DrupalCI Testing for Projects, written by Ryan “Mixologic” Aslett. It contains detailed, empathic 1 explanations. He also landed d.o/node/2969363 to make Drupal core use this capability, and to set an example.
  2. I’ve been struggling in d.o/project/jsonapi/issues/2962461 to figure out why an ostensibly trivial patch would not just fail tests, but cause the testing infrastructure to fail in inexplicable ways after 110 minutes of execution time, despite JSON API test runs normally taking 5 minutes at most! My state of mind: (ノಠ益ಠ)ノ彡┻━┻
    Three days ago, Mixologic commented on the issue and did some DrupalCI infrastructure-level digging. I didn’t ask him. I didn’t ping him. He just showed up. He’s just monitoring the DrupalCI infrastructure!

In 2015 and 2016, I must have pinged Mixologic (and others, but usually him) dozens of times in the #drupal-infrastructure IRC channel about testbot/DrupalCI being broken yet again. Our testing infrastructure was frequently having troubles then; sometimes because Drupal was making changes, sometimes because DrupalCI was regressing, and surprisingly often because Amazon Web Services was failing.

Thanks to those two things in the past few days, I realized something: I can’t remember the last time I had to ping somebody about DrupalCI being broken! I don’t think I did it once in 2018. I’m not even sure I did in 2017! This shows what a massive improvement the Drupal Association contributed to the velocity of the Drupal project!

Of course, many others at the Drupal Association help make this happen, not just Ryan.

For example Neil “drumm” Drumm. He has >2800 commits on the customizations project! Lately, he’s done things like making newer & older releases visible on project release pages, exposing all historical issue credits, providing nicer URLs for issues and giving project maintainers better issue queue filtering. BTW, Neil is approaching his fifteenth Drupal anniversary!
Want to know about new features as they go live? Watch the change records (an RSS feed is available).

In a moment of frustration, I tweeted fairly harshly (uncalled for … sorry!) to @drupal_infra, and got a forgiving and funny tweet in response:

(In case it wasn’t obvious yet: Ryan is practically a saint!)

Thank you!

I know that the Drupal Association does much more than the above (an obvious example is organizing DrupalCons). But these are the ways in which they are most visible to me.

When things are running as smoothly as they are, it’s easy to forget that it takes hard work to get there and stay there. It’s easy to take this for granted. We shouldn’t. I shouldn’t. I did for a while, then realized … this blog post is the result!

A big thanks to everyone who works/worked at the Drupal Association! You’ve made a tangible difference in my professional life! Drupal would not be where it is today without you.

  1. Not once is there a Just do [jargon] and it’ll magically work in there, for example! There’s screenshots showing how to navigate Jenkins’ (peculiar) UI to get at the data you need. ↩︎

May 30, 2018

Most VMware appliances (vCenter Appliance, VMware Support Appliance, vRealize Orchestrator) have the so-called VAMI: the VMware Appliance Management Interface, generally served via HTTPS on port 5480. VAMI offers a variety of functions, including "check updates" and "install updates". Some appliances offer to check/install updates from a connected CD ISO, but the default is always to check online. How does that work?
VMware uses a dedicated website to serve the updates. Each appliance is configured with a repository URL. The PRODUCT-ID is a hexadecimal code specific to the product: vRealize Orchestrator uses 00642c69-abe2-4b0c-a9e3-77a6e54bffd9, VMware Support Appliance uses 92f44311-2508-49c0-b41d-e5383282b153, and vCenter Server Appliance uses 647ee3fc-e6c6-4b06-9dc2-f295d12d135c. The VERSION-ID contains the current appliance version with ".latest" appended.
The appliance checks for updates by retrieving /manifest/manifest-latest.xml from the repository URL. This XML contains the latest available version in fullVersion and version (fullVersion includes the build number), pre- and post-install scripts, the EULA, and a list of updated RPM packages. Each entry has a location that can be appended to the repository URL and downloaded. The update procedure downloads the manifest and RPMs, verifies checksums on the downloaded RPMs, executes the preInstallScript, runs rpm -U on the downloaded RPM packages, executes the postInstallScript, displays the exit code and prompts for a reboot.
With this information, you can set up your own local repository (for cases where internet access is impossible from the virtual appliances), or you can even execute the procedure manually. Be aware that a manual update would be unsupported. Using a different repository is supported by a subset of VMware appliances (e.g. VCSA, VRO) but not all (the VMware Support Appliance does not support it).
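As a sketch of the URL scheme described above (the base URL and exact path layout here are assumptions for illustration, not VMware documentation), resolving the manifest location for an appliance could look like:

```python
# Hypothetical helper showing how a VAMI-style repository URL is assembled
# from the PRODUCT-ID and VERSION-ID, per the description above. The base
# URL and path layout are made up for illustration.
def manifest_url(repo_base, product_id, version):
    version_id = f"{version}.latest"  # VERSION-ID = appliance version + ".latest"
    return f"{repo_base}/{product_id}/{version_id}/manifest/manifest-latest.xml"

# vCenter Server Appliance PRODUCT-ID from the text; base URL is a placeholder.
url = manifest_url("https://updates.example.com/vai",
                   "647ee3fc-e6c6-4b06-9dc2-f295d12d135c", "6.7.0")
print(url)
```

A local mirror only needs to serve the same manifest and RPM paths under its own base URL for the appliance to consume it.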

Firefox 60 was released a few weeks ago and now comes with support for the upcoming Web Authentication (WebAuthn) standard.

Other major web browsers weren't far behind. Yesterday, the release of Google Chrome 67 also included support for the Web Authentication standard.

I'm excited about it because it can make the web both easier and safer to use.

The Web Authentication standard will make the web easier, because it is a big step towards eliminating passwords on the web. Instead of having to manage passwords, we'll be able to use web-based fingerprints, facial authentication, voice recognition, a smartphone, or hardware security keys like the YubiKey.

It will also make the web safer, because it will help reduce or even prevent phishing, man-in-the-middle attacks, and credential theft. If you are interested in learning more about the security benefits of the Web Authentication standard, I recommend reading Adam Langley's excellent analysis.
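For a rough idea of what the standard involves on the server side, here is a minimal, illustrative Python sketch of the relying party generating registration options that the browser would pass to navigator.credentials.create(). The field names follow the WebAuthn spec's PublicKeyCredentialCreationOptions; everything else, including the helper itself, is made up:

```python
import base64
import json
import os

def registration_options(rp_name, rp_id, user_name, user_id):
    # The relying party generates a fresh random challenge per ceremony
    # to prevent replay; it must be verified when the response comes back.
    challenge = base64.urlsafe_b64encode(os.urandom(32)).decode().rstrip("=")
    return {
        "challenge": challenge,
        "rp": {"name": rp_name, "id": rp_id},
        "user": {"id": user_id, "name": user_name, "displayName": user_name},
        # -7 is the COSE identifier for ES256 (ECDSA w/ SHA-256).
        "pubKeyCredParams": [{"type": "public-key", "alg": -7}],
    }

opts = registration_options("Example Site", "example.com", "alice", "user-1")
print(json.dumps(opts, indent=2))
```

The authenticator (fingerprint reader, security key, etc.) then signs over the challenge, so the server never stores or receives a password.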

When I have a bit more time for side projects, I'd like to buy a YubiKey 4C to see how it fits in my daily workflow, in addition to what it would look like to add Web Authentication support to Drupal and

May 29, 2018

I'm taking a brief look at the cheap, good-quality PCB providers OshPark, Aisler and JLCPCB.

all 3

PCB quality

All three provide nice-quality, good-looking PCBs. (Click on a picture to see the full-scale photo.)


all 3

As always in beautiful OshPark purple. The only small downside compared to the others is the rough break-off edges.


all 3

Looks great.


all 3

Looks great as well. No gold finish, but it still soldered great. It's a bit sad that they include a production code on the silkscreen, which could be a problem for PCBs that remain visible.

Ease of Order


Just upload the .kicad_pcb file, very convenient. Shows a drawing of how your board will look.


Same, just upload the .kicad_pcb file and shows a drawing. Option to get a stencil.


Upload gerbers and shows a drawing. Option to get a stencil.


This is where it gets a bit more tricky to compare ;)


3 boards $1.55 shipped, as cheap as it gets for a 20 x 10 mm board.


3 boards 5.70 Euro shipped, still a good price.


This is of course a bigger board (the other two were the same).

10 boards $2 + $5.7 shipping gives $7.7.



  • ordered: April 30th 2018
  • shipped: May 8th 2018 (from USA)
  • arrived: May 15th 2018 (in Belgium)

Took 15 days from order to arrival.


  • ordered: April 30th 2018
  • shipped: May 9th 2018 (from Germany)
  • arrived: May 11th 2018 (in Belgium)

Took 11 days from order to arrival.


  • ordered: May 5th 2018
  • shipped: May 7th 2018 (from Singapore)
  • arrived: May 18th 2018 (in Belgium)

Took 13 days from order to arrival.


All three show impressively fast delivery and good-quality boards. OshPark is still the king of cheap for tiny boards. JLCPCB gives you 10 boards and could be cheaper for bigger boards. Aisler is the fastest, but only marginally.

Both Aisler and JLC have an option for a stencil which is interesting.

I'll be using all of them depending on the situation (need for stencil, quantity, board size, rush shipping, ...)

At the beginning of the year I started doing some iOS development for my POSSE plan. As I was new to iOS development, I decided to teach myself by watching short, instructional videos. Different people learn in different ways, but for me, video tutorials were the most effective way to learn.

Given that recent experience, I'm very excited to share that all of the task tutorials in the Drupal 8 User Guide are now accompanied by video tutorials. These videos are embedded directly into every user guide page on You can see an example on the "Editing with the in-place editor" page.

These videos provide a great introduction to installing, administering, site building and maintaining the content of a Drupal-based website — all important skills for a new Drupalist to learn. Supplementing user guides with video tutorials is an important step towards improving our evaluator experience, as video can often convey a lot more than text.

Creating high-quality videos is hard and time-consuming work. Over the course of six months, the team at Drupalize.Me has generously contributed a total of 52 videos! I want to give a special shout-out to Joe Shindelar and the Drupalize.Me team for creating these videos and to Jennifer Hodgdon and Neil Drumm (Drupal Association) for helping to get each video posted on

What a fantastic gift to the community!

May 28, 2018

As I predicted, our government is being sued because it does too little to bring the children of Syria fighters to safety.

However difficult this subject may be, we must never condemn innocent children. These children did not choose what their parents are guilty of. Our country is responsible for taking these children in, caring for them and offering them safety.

Even after the Second World War we did not make such a fuss about the children of collaborators. We cannot do this.

For me, this is unacceptable. Arbitrarily punishing innocent children ought to be a punishable offence. It is a violation of human rights.

What is indecent?

It was such a beautiful day at the Belgian coast that we decided to go on a bike ride. We ended up doing a 44 km (27 miles) ride that took us from the North Sea beach, through the dunes into the beautiful countryside around Bruges.

Riante polder route

The photo shows the seemingly endless rows of poplar trees along a canal in Damme. The canal (left of the trees, not really visible in the photo) was constructed by Napoleon Bonaparte to enable the French army to move around much faster and to transport supplies more rapidly. At the time, canal boats were drawn by horses on roads alongside the canal. Today, many of these narrow roads have been turned into bike trails.

May 26, 2018

May 25, 2018

Since the release of Drupal 8.0.0 in November 2015, the Drupal 8 core committers have been discussing when and how we'll release Drupal 9. Nat Catchpole, one of Drupal 8's core committers, shared some excellent thoughts about what goes into making that decision.

The driving factor in that discussion is security support for Drupal 8’s third party dependencies (e.g. Symfony, Twig, Guzzle, jQuery, etc). Our top priority is to ensure that all Drupal users are using supported versions of these components so that all Drupal sites remain secure.

In his blog, Nat uses Symfony as an example. The Symfony project announced that it will stop supporting Symfony 3 in November 2021, which means that Symfony 3 won't receive security updates after that date. Consequently, by November 2021, we need to prepare all Drupal sites to use Symfony 4 or later.

Nothing has been decided yet, but the current thinking is that we have to move Drupal to Symfony 4 or later, release that as Drupal 9, and allow enough time for everyone to upgrade to Drupal 9 by November 2021. Keep in mind that this is just looking at Symfony, and none of the other components.

This proposal builds on top of work we've already done to make Drupal upgrades easy, so upgrades from Drupal 8 to Drupal 9 should be smooth and much simpler than previous upgrades.

If you're interested in the topic, check out Nat's post. He goes in more detail about potential release timelines, including how this impacts our thinking about Drupal 7, Drupal 8 and even Drupal 10. It's a complicated topic, but the goal of Nat's post is to raise awareness and to solicit input from the broader community before we decide our official timeline and release dates on

I published the following diary on “Antivirus Evasion? Easy as 1,2,3“:

For a while, ISC handlers have demonstrated several obfuscation techniques via our diaries. We always told you that attackers are trying to find new techniques to hide their content to not be flagged as malicious by antivirus products. Such of them are quite complex. And sometimes, we find documents that have a very low score on VT. Here is a sample that I found (SHA256: bac1a6c238c4d064f8be9835a05ad60765bcde18644c847b0c4284c404e38810)… [Read more]

[The post [SANS ISC] Antivirus Evasion? Easy as 1,2,3 has been first published on /dev/random]

Last weekend, over 29 million people watched the Royal Wedding of Prince Harry and Meghan Markle. While there is a tremendous amount of excitement surrounding the newlyweds, I was personally excited to learn that the royal family's website is built with Drupal! is the official website of the British royal family, and is visited by an average of 12 million people each year. Check it out at!

Royal uk

May 24, 2018

I published the following diary on “Blocked Does Not Mean Forget It“:

Today, organisations are facing regular waves of attacks which are targeted… or not. We deploy tons of security controls to block them as soon as possible before they successfully reach their targets. Due to the amount of daily generated information, most of the time, we don’t care for them once they have been blocked. A perfect example is blocked emails. But “blocked” does not mean that we can forget them, there is still valuable information in those data… [Read more]

[The post [SANS ISC] “Blocked” Does Not Mean “Forget It” has been first published on /dev/random]

We’re finalizing work on Autoptimize 2.4 with beta-3, which was just released. There are misc. changes under the hood, but the main functional change is the inclusion of image optimization!

This feature uses Shortpixel’s smart image optimization proxy: requests for images are routed through their URL and automagically optimized. The image proxy is hosted on one of the best-performing CDNs with a truly global presence and obviously uses HTTP/2 for parallelized requests; the parameters are part of the URL rather than a query string; and behind the scenes it uses the same superb image optimization logic found in the existing Shortpixel plugin. So you can expect great results from just ticking that “optimize image” checkbox on the “Extra” tab :-)

Image optimization will be completely free during the 2.4 Beta-period. After the official 2.4 release this will remain free up until a still to be defined threshold per domain, after which additional service can be purchased at Shortpixel’s.

If you’re already running AO 2.4 beta-2, you’ll be prompted to upgrade. And if you’d like to join the beta to test image optimization, you can download the zip-file here. The only thing I ask in return is feedback (bugs, praise, yay’s, meh’s, …)  ;-)


May 23, 2018

May 22, 2018

Adobe acquires Magento for $1.68 billion

Yesterday, Adobe announced that it agreed to buy Magento for $1.68 billion. When I woke up this morning, 14 different people had texted me asking for my thoughts on the acquisition.

Adobe acquiring Magento isn't a surprise. One of our industry's worst-kept secrets is that Adobe first tried to buy Hybris, but lost the deal to SAP; subsequently Adobe tried to buy DemandWare and lost out against Salesforce. It's evident that Adobe has been hungry to acquire a commerce platform for quite some time.

The product motivation behind the acquisition

Large platform companies like Salesforce, Oracle, SAP and Adobe are trying to own the digital customer experience market from top to bottom, which includes providing support for marketing, commerce, personalization, and data management, in addition to content and experience management and more.

Compared to the other platform companies, Adobe was missing commerce. With Magento under its belt, Adobe can better compete against Salesforce, Oracle and SAP.

While Salesforce, SAP and Oracle offer good commerce capability, they lack satisfactory content and experience management capabilities. I expect that Adobe closing the commerce gap will compel Salesforce, SAP and Oracle to act more aggressively on their own content and experience management gap.

While Magento has historically thrived in the SMB and mid-market, the company recently started to make inroads into the enterprise. Adobe will bring a lot of operational maturity: how to sell into the enterprise, how to provide enterprise-grade support, etc. Magento stands to benefit from this expertise.

The potential financial outcome behind the acquisition

According to Adobe press statements, Magento has achieved "approximately $150 million in annual revenue". We also know that in early 2017, Magento raised $250 million in funding from Hillhouse Capital. Let's assume that $180 million of that is still in the bank. If we do a simple back-of-the-envelope calculation, we can subtract this $180 million from the $1.68 billion, and determine that Magento was valued at roughly $1.5 billion, or a 10x revenue multiple on Magento's trailing twelve months of revenue. That is an incredible multiple for Magento, which is primarily a licensing business today.

Compare that with Shopify, which is trading at a $15 billion valuation on $760 million of trailing-twelve-month revenue, good for roughly a 20x multiple. Shopify deserves the higher multiple because it's the better business: all of its business is delivered in the cloud, and at 65% year-over-year revenue growth, it is growing much faster than Magento.

Regardless, one could argue that Adobe got a great deal, especially if it can accelerate Magento's transformation from a licensing business into a cloud business.
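The back-of-the-envelope math above is easy to reproduce; a quick sketch, using only the figures quoted in this post:

```python
# Back-of-the-envelope revenue multiples from the figures quoted above.
price   = 1.68e9   # Adobe's acquisition price
cash    = 180e6    # assumed cash still on Magento's balance sheet
revenue = 150e6    # Magento's trailing-twelve-month revenue

magento_value    = price - cash              # ~$1.5 billion
magento_multiple = magento_value / revenue   # ~10x

shopify_multiple = 15e9 / 760e6              # ~20x

print(f"Magento valued at ${magento_value / 1e9:.1f}B, "
      f"a {magento_multiple:.0f}x revenue multiple")
print(f"Shopify trades at roughly a {shopify_multiple:.0f}x revenue multiple")
```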

Most organizations prefer best-of-breed

While both the product and financial motivations behind this acquisition are seemingly compelling, I'm not convinced organizations want an integrated approach.

Instead of being confined to proprietary vendors' prescriptive suites and roadmaps, global brands are looking for an open platform that allows organizations to easily integrate with their preferred technology. Organizations want to build content-rich shopping journeys that integrate their experience management solution of choice with their commerce platform of choice.

We see this first hand at Acquia. These integrations can span various commerce platforms, including IBM WebSphere Commerce, Salesforce Commerce Cloud/Demandware, Oracle/ATG, SAP/hybris, Magento and even custom transaction platforms. Check out Quicken (Magento), Weber (Demandware), Motorola (Broadleaf Commerce) and Tesla (custom to order a car, and Shopify to order accessories) as great examples of Drupal and Acquia working with various commerce platforms. And of course, we have quite a few projects with Drupal's native commerce solution, Drupal Commerce.

Owning Magento gives Adobe a disadvantage, because commerce vendors will be less likely to integrate with Adobe Experience Manager moving forward.

It's all about innovation through integration

Today, there is an incredible amount of innovation taking place in the marketing technology landscape (full-size image), and it is impossible for a single vendor to have the most competitive product suite across all of these categories. The only way to keep up with this unfettered innovation is through integrations.

An image of the Marketing Technology Landscape 2018. For reference, here are the 2011, 2012, 2014, 2015, 2016 and 2017 versions of the landscape. It shows how fast the marketing technology industry is growing.

Most customers want an open platform that allows for open innovation and unlimited integrations. It's why Drupal and Acquia are winning, why the work on Drupal's web services is so important, and why Acquia remains committed to a best-of-breed strategy for commerce. It's also why Acquia has strong conviction around Acquia Journey as a marketing integration platform. It's all about innovation through integration, making those integrations easy, and removing friction from adopting preferred technologies.

If you acquire a commerce platform, acquire a headless one

If I were Adobe, I would have looked to acquire a headless commerce platform such as Elastic Path, Commerce Tools, Moltin, Reaction Commerce or even Salsify.

Today, there is a lot of functional overlap between Magento and Adobe Experience Manager: content editing, content workflows, page building, user management, search engine optimization, theming, and much more. The competing functionality between the two solutions makes for a poor developer experience and a poor merchant experience.

In a headless approach, the front end and the back end are decoupled, which means the experience or presentation layer is separated from the commerce business layer. There is a lot less overlap of functionality in this approach, and it provides a better experience for merchants and developers.

Alternatively, you could go for a deeply integrated approach like Drupal Commerce. It has zero overlap between its commerce, content management and experience building capabilities.

For Open Source, it could be good or bad

How Adobe will embrace Magento's Open Source community is possibly the most intriguing part of this acquisition — at least for me.

For a long time, Magento operated as Open Source in name, but wasn't very Open Source in practice. Over the last couple of years, the Magento team worked hard to rekindle its Open Source community. I know this because I attended and keynoted one of its conferences on this topic. I have also spent a fair amount of time with Magento's leadership team discussing this. Like other projects, Magento has been taking inspiration from Drupal.

For example, the introduction of Magento 2 allowed the company to move to GitHub for the first time, which gave the community a better way to collaborate on code and other important issues. The latest release of Magento cited 194 contributions from the community. While that is great progress, it is small compared to Drupal.

My hope is that these Open Source efforts continue now that Magento is part of Adobe. If they do, that would be a tremendous win for Open Source.

On the other hand, if Adobe makes Magento cloud-only, radically changes their pricing model, limits integrations with Adobe competitors, or doesn't value the Open Source ethos, it could easily alienate the Magento community. In that case, Adobe bought Magento for its install base and the Magento brand, and not because it believes in the Open Source model.

This acquisition also signals a big win for PHP. Adobe now owns a $1.68 billion PHP product, and this helps validate PHP as an enterprise-grade technology.

Unfortunately, Adobe has a history of being "Open Source"-second and not "Open Source"-first. It acquired Day Software in July 2010. This technology was largely made using open source frameworks — Apache Sling, Apache Jackrabbit and more — and was positioned as an open, best-of-breed solution for developers and agile marketers. Most of that has been masked and buried over the years and Adobe's track record with developers has been mixed, at best.

Will the same happen to Magento? Time will tell.

In March, during TROOPERS’18, I discovered a very nice tiny device developed by Luca Bongiorni (see my wrap-up here): the WiFi HID Injector. To summarize what’s behind this name: a small evil USB device which offers a wireless access point for configuration and data exfiltration, an HID device simulating a keyboard (like a Teensy or Rubber Ducky), and a serial port. This is perfect for the Evil Mouse Project!

Just after the conference, I bought my first device and started to play with it. The idea is to be able to control an air-gapped computer and exfiltrate data. The modus operandi is the following:

  1. Connect the evil USB device to the victim’s computer or ask them to do it (with some social engineering tricks)
  2. Once inserted, the USB device adds a new serial port and fires up the wireless network
  3. The attacker, located close enough to receive the wireless signal, takes control of the device and loads the payload
  4. The HID (keyboard) injects the payload by simulating keystrokes (like a Teensy) and executes it
  5. The payload sends data to the newly created serial port; the data is saved to a flat file on the USB device storage
  6. The attacker can download those files
By using the serial port, no suspicious network traffic is generated by the host, but the WiFi exfiltration feature is, of course, available if more speed is required. Note that everything is configurable and the WHID can also automatically connect to another wireless network.
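As a rough illustration of step 5 above (this is not Luca’s actual payload, and the device path is an assumption; on Windows it would be a COM port), a victim-side exfiltration routine can be as simple as copying a file to the serial device:

```python
# Hypothetical payload sketch: stream a file to the serial port exposed by
# the WHID, where it ends up as a flat file on the device's storage.
# "/dev/ttyACM0" is an assumed device path (on Windows it would be e.g. COM3).
import shutil

def exfiltrate(src_path: str, port: str = "/dev/ttyACM0") -> None:
    with open(src_path, "rb") as src, open(port, "wb") as link:
        # Copy in small chunks: the serial link is slow but inconspicuous.
        shutil.copyfileobj(src, link, length=256)
```

On Linux the serial device can be written to like an ordinary file, which keeps the payload tiny and dependency-free.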

During his presentation, Luca explained how he weaponized another USB device to hide the WHID. For fun, he chose a USB fridge, because people like this kind of goodie. IMHO, a USB fridge will not fit on all types of desks, especially those of C-levels… Why not use a device that everybody needs: a mouse?

Fortunately, most USB mice have enough free space inside to hide extra cables and the WHID. To connect both USB devices to the same cable, I found a nano-USB hub with two ports:

Nano-USB Hub

This device does a perfect job on a circuit board of only 12x12mm! I bought a second WHID device and found an old USB mouse ready to be weaponized and start a new life. The idea is the following: cut the original USB cable and solder it to the nano hub, then reconnect the mouse and the WHID to the available ports (to gain some space, the USB connector was unsoldered).

My soldering-fu is not good enough to assemble such small components myself, so my friend @doegox did a wonderful job, as you can see from the pictures below. Thanks to him!

Inside the Evil Mouse

Inside the Evil Mouse

Once all the cables were reorganized properly inside the mouse, it looks completely safe (ok, it is an old Microsoft mouse used for this proof-of-concept) and is 100% ready to use. Just plug it into the victim’s computer and have fun:

The Evil Mouse

If you need to build one for a real pentest or red-team exercise, here is a list of the components:

  • A new mouse labelled with a nice brand – it will be even more trusted (9.69€ on Amazon)
  • A WHID USB device (13.79€ on AliExpress)
  • A nano-USB hub (8.38€ on Tindie)

At this low price, you can leave the device on site; no need to recover it after the engagement. Offer it to the victim!

Conclusion: Never trust USB devices (and not only storage devices…)

[The post The Evil Mouse Project was first published on /dev/random]

I published the following diary on the SANS Internet Storm Center: “Malware Distributed via .slk Files”:

Attackers are always trying to find new ways to infect computers by luring not only potential victims but also security controls like anti-virus products. Do you know what SYLK files are? SYmbolic LinK files (they use the .slk extension) are Microsoft files used to exchange data between applications, specifically spreadsheets… [Read more]

[The post [SANS ISC] Malware Distributed via .slk Files was first published on /dev/random]

May 21, 2018

Sometimes, a security incident starts with an email. A suspicious email can be provided to a security analyst for further investigation. Most of the time, the mail is provided in EML or “Electronic Mail Format”. EML files store the complete message in a single file: SMTP headers, mail body and all MIME content. Opening such a file in a mail client can be dangerous if dynamic content is loaded (remember the EFAIL vulnerability disclosed a few days ago?), and reading a big file in a text editor does not make it easy to quickly get an overview of the mail content. To help with this task, I wrote a Python script that parses an EML file and generates a PNG image based on its content. In a few seconds, an analyst will be able to “see” what’s in the mail and can decide whether further investigation is needed. Here is an example of a generated image:

EML Rendering Sample

The script reads the SMTP headers and extracts the most important ones. It extracts the body and the “text/plain” and “text/html” MIME parts. Attached images are decoded and displayed. If other MIME parts are found, they are just listed below the email. The image is generated using wkhtmltoimage, which requires some dependencies. To make deployment easier, I built a Docker container ready to use:

$ docker pull rootshell/emlrender
$ docker run rootshell/emlrender

The container runs a web service via HTTPS (with a self-signed certificate at the moment). It provides a classic web interface and a REST API. Once deployed, the first step is to configure users to access the rendering engine. Initialize the users’ database with an ‘admin’ user and create additional users if required:

$ curl -k -X POST https://<host>/init -d '{"password":"strongpw"}'
[{"message": "Users database successfully initialized"}, {"password": "strongpw"}]
$ curl -k -u admin:strongpw -X POST https://<host>/users/add -d '{"username":"john", "password":"strongpw"}'
[{"message": "Account successfully created", "username": "john", "password": "strongpw"}]

Note: if you don’t provide a password, a random one will be generated for you.

The REST API provides the following endpoints, whose names are self-explanatory:

  • POST /init
  • GET /users/list
  • POST /users/add
  • POST /users/delete
  • POST /users/resetpw

To render EML files, use the REST API or a browser. Via the REST API:

$ curl -k -u john:strongpw -F file=@"spam.eml" -o spam.png
$ curl -k -u john:strongpw -F file=@"" -F password=infected \
  -o malicious.png

You can see that EML files can be submitted as is or in a ZIP archive protected by a password (which is common when security analysts exchange samples).

Alternatively, point your browser to the web interface, where a small help page is also available.

EMLRender is not a sandbox. If links are present in the HTML code, they will be visited. It is recommended to deploy the container on a “wild” Internet connection and not your corporate network.

The code is available on my GitHub repository and a ready-to-use Docker image is available on Docker hub. Feel free to post improvement ideas!
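For readers who want to experiment with EML parsing themselves, the header and MIME-part extraction described above can be sketched with Python’s standard email module (this is a minimal illustration, not the actual EMLRender code):

```python
# Minimal EML triage sketch using only the standard library: pull the key
# SMTP headers and list the MIME parts, as EMLRender's description outlines.
# This is an illustrative sketch, not the EMLRender implementation itself.
from email import policy
from email.parser import BytesParser

def summarize_eml(raw: bytes) -> dict:
    msg = BytesParser(policy=policy.default).parsebytes(raw)
    return {
        # The "most important" headers extracted here are a simple choice.
        "headers": {h: msg.get(h, "") for h in ("From", "To", "Subject", "Date")},
        # Leaf MIME parts only (multipart containers are skipped).
        "parts": [p.get_content_type() for p in msg.walk()
                  if not p.is_multipart()],
    }
```

Feeding the resulting summary to an HTML template and a renderer like wkhtmltoimage is then a separate, purely presentational step.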

[The post Rendering Suspicious EML Files was first published on /dev/random]