Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

April 22, 2018

There are many tools to implement this, and yeah, this is not the fastest. But the advantage is that you don't need extra tools beyond "bash" and "ping"...

for i in $(seq 1 254); do
  if ping -W 1 -c 1 192.168.0.$i > /dev/null 2>&1; then
    HOST[$i]=1
  fi
done

echo ${!HOST[@]}

will give you the host addresses for the machines that are live on a given network...

April 21, 2018

20 years ago I was somewhat into techno and stuff, especially the left-of-center music (and DJ sets) guys like Carl Craig and Stacey Pullen released.

Times change and that jazz thing has caught on pretty big the last couple of years for me. And now there’s Japanese Jazz-maestro Toshio Matsuura covering Carl Craig’s “At Les”. How great is that, right!?

YouTube Video

April 20, 2018

April 19, 2018

It's several months after FOSDEM by now, but our video transcoding infrastructure is still up and running, in order to deal with the long tail of videos with issues that was still outstanding. This will change next week. Currently, 644 videos have been signed off on, while 25 are marked as needing some kind of intervention or further investigation. These mostly require us to read data from our backups. Unfortunately, due to various complications, it turns out that getting data from those backups is more involved than we initially thought. Now, almost three months after the event, it is not…

April 18, 2018

I’m just back from the 2018 edition of the FIRST TC (“Technical Colloquium”) organized in Amsterdam. This was the second edition for me. The format was the same: one day of workshops and two days of regular presentations. And always 100% free! During the quick introduction, Jeff Bollinger from Cisco, which hosted the event, gave some numbers about this edition: 28 countries represented, 91 organizations (mainly CERTs, SOCs, etc.) and 31 FIRST teams amongst them; 3 trainers, 14 speakers and 6 sponsors. Here is my quick review of the talks that I followed. Some of them were flagged as TLP:RED.

Alan Neville (Symantec) presented “The (makes me) Wannacry investigation“. He reviewed the infection, which started in May 2017, but also explained that Symantec had already identified previous versions of the malware in February. Of course, Alan also covered the story of the famous kill-switch and gave a good piece of advice: create a sinkhole in your organization, set up a web server and capture all the traffic sent to it. You could detect interesting stuff!
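Alan's sinkhole tip can be sketched in a few lines of Python. This is a hypothetical, minimal capture server (the port and the in-memory log are illustrative; a real deployment would ship hits to a SIEM): point your internal DNS for known-bad domains at the host running it, and every request it records is a potential infected client.

```python
import http.server
import logging

logging.basicConfig(level=logging.INFO)
hits = []  # in a real deployment, write these to your SIEM instead


class SinkholeHandler(http.server.BaseHTTPRequestHandler):
    """Answer every request with a dummy page and record who sent it."""

    def do_GET(self):
        # Each hit is a potential infected client resolving a sinkholed domain.
        hits.append((self.client_address[0], self.headers.get("Host"), self.path))
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"sinkhole")

    def log_message(self, fmt, *args):
        logging.info("sinkhole hit: %s", fmt % args)


def serve(port=8080):
    # Blocks forever; run this on the host your internal DNS points bad domains to.
    http.server.HTTPServer(("", port), SinkholeHandler).serve_forever()
```

Reviewing the `hits` list (or the equivalent SIEM events) then tells you which internal machines tried to reach known-bad domains.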

Mathias Seitz (Switch) presented an updated version of his talk about DNS firewalling (“DNS RPZ intro and examples“). He re-explained how it works and why a DNS firewall can be a great security control that is not difficult to put in place. Don’t forget that you must have procedures and processes in place to support your users and handle the (always possible) false positives. FYI, here is a list of DNS RPZ providers active in 2018:

  • Farsight Security
  • (Infoblox)
  • Spamhaus
  • Switch
  • ThreatSTOP

And organizations providing DFaaS (“DNS Firewall as a Service“) if you don’t run a resolver:

  • Akamai AnswerX
  • Cisco OpenDNS Umbrella
  • Comodo Secure DNS
  • Neustar Recursive DNS
  • ThreatSTOP
  • Verisign
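To make the DNS RPZ idea concrete, here is a minimal sketch of what a policy zone can look like in BIND (the zone name and addresses are illustrative, not from Mathias' talk; in RPZ, `CNAME .` rewrites the answer to NXDOMAIN, while local data redirects the client):

```
# named.conf: activate the policy zone on the resolver
options {
    response-policy { zone "rpz.example"; };
};
zone "rpz.example" {
    type master;
    file "rpz.example.db";
};

; rpz.example.db: the policy rules themselves
$TTL 300
@               SOA   ns.rpz.example. admin.rpz.example. 1 3600 600 86400 300
                NS    ns.rpz.example.
badhost.example CNAME .            ; answer NXDOMAIN for this domain
evil.example    A     192.0.2.10   ; redirect clients to a walled garden / sinkhole
```

The providers listed above essentially feed such zones to your resolver so you don't have to maintain the bad-domain list yourself.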
Krassimir Tzvetanov (Fastly) presented “Investigator Opsec“. For me, it was one of the best presentations of this edition. Krassimir explained the mistakes that we all make when performing investigations or threat hunting. Those mistakes can help attackers detect that they are being tracked or, worse, disclose some details about you! For example, the first piece of advice was not to block the attacker too quickly, because you teach them how to improve! Sandboxes must be hardened, and tools must be properly configured and used. Example: data enrichment may resolve domain names or contact IP addresses without your prior consent! Do we have to say something about services like VirusTotal? It was very constructive and opened my eyes… Yes, I’m making a lot of mistakes too!

Melanie Rieback (Radically Open Security) presented “Pentesting ChatOps“. Also a very interesting talk, in which Melanie explained how she started a fully “virtual” business based on freelancers. There is no physical office, and all communications between consultants, and also with customers, happen through chat rooms! Even logs are sent to chat rooms.

Abhishta (University of Twente) presented “Economic impact of DDoS attacks: How can we measure it?“. It started with a description of the DDoS business. DDoS attacks are very profitable, even more so with the huge number of vulnerable IoT devices that are easy to compromise. In 75% of cases, the cost of building a DDoS infrastructure is ~1% of the revenue! What’s the main impact of DDoS? Reputation! But it can quickly turn into money loss. Why? As a victim of DDoS, the organization gets negative coverage in the media, demand for its stock decreases, and the stock price falls! Abhishta also explained how they use Google Alerts to get news about companies and correlate it with DDoS reports.

Jaeson Schultz (Cisco – Talos) presented “It’s a Trap“. He explained how spammers work today and how campaigns are organized. Then, he explained how to build an effective spam trap (tip: the choice of domain names is critical to attract as much spam as possible).

The last presentation for me was an update of a previous one by Tom Ueltschi (Swiss Post): “Advanced Incident Detection and Threat hunting with Sysmon & Splunk“. Tom explained again how he successfully deployed Sysmon together with Splunk on all end-points in his company, and how he is able to detect a lot of malicious activity thanks to the power of Splunk.
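As an illustration of the kind of hunting Tom described, a Splunk search over Sysmon process-creation events might look like the sketch below (the index and field names are assumptions — they depend on how your Sysmon data is ingested, e.g. via the Splunk Add-on for Sysmon):

```
index=sysmon EventCode=1 Image="*\\powershell.exe" CommandLine="*-enc*"
| stats count by host, ParentImage, CommandLine
| sort -count
```

EventCode 1 is Sysmon's process-creation event; filtering on encoded PowerShell command lines and grouping by parent process is a classic way to surface suspicious execution chains.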

The audience was mainly composed of incident handlers, so it was a good opportunity for me to grow my network and make new friends, as well as to discuss new projects and crazy ideas 😉

[The post FIRST Technical Colloquium Amsterdam 2018 Wrap-Up has been first published on /dev/random]

I published the following diary on “Webshell looking for interesting files“:

Yesterday, I found on Pastebin a bunch of samples of a webshell that integrates an interesting feature: It provides a console mode that you can use to execute commands on the victim host. The look and feel of the webshell is classic… [Read more]

[The post [SANS ISC] Webshell looking for interesting files has been first published on /dev/random]

April 17, 2018

On March 28th, the Drupal Security Team released a bug fix for a critical security vulnerability, named SA-CORE-2018-002. Over the past week, various exploits have been identified, as attackers have attempted to compromise unpatched Drupal sites. Hackers continue to try to exploit this vulnerability, and Acquia's own security team has observed more than 100,000 attacks a day.

The SA-CORE-2018-002 security vulnerability is highly critical; it allows an unauthenticated attacker to perform remote code execution on most Drupal installations. When the Drupal Security Team made the security patch available, there were no publicly known exploits or attacks against SA-CORE-2018-002.

That changed six days ago, after Checkpoint Research provided a detailed explanation of the SA-CORE-2018-002 security bug, in addition to step-by-step instructions that explain how to exploit the vulnerability. A few hours after Checkpoint Research's blog post, Vitalii Rudnykh, a Russian security researcher, shared a proof-of-concept exploit on GitHub. Later that day, Acquia's own security team began to witness attempted attacks.

The article by Checkpoint Research and Rudnykh's proof-of-concept code have spawned numerous exploits, which are written in different programming languages such as Ruby, Bash, Python and more. As a result, the number of attacks have grown significantly over the past few days.

Fortunately, Acquia deployed a platform level mitigation for all Acquia Cloud customers one hour after the Drupal Security Team made the SA-CORE-2018-002 release available on March 28th. Over the past week, Acquia has observed over 500,000 attacks from more than 3,000 different IP addresses across our fleet of servers and customer base. To the best of our knowledge, every attempted exploitation of an Acquia customer has failed.

SA-CORE-2018-002 timeline of events as seen by Acquia

The scale and the severity of this attack suggests that if you failed to upgrade your Drupal sites, or your site is not supported by Acquia Cloud or another trusted vendor that provides platform level fixes, the chances of your site being hacked are very high. If you haven't upgraded your site yet and you are not on a protected platform then assume your site is compromised. Rebuild your host, reinstall Drupal from a backup taken before the vulnerability was announced and upgrade before putting the site back online. (Update: restoring a Drupal site from backup may not be sufficient as some of the exploits reinstall themselves from crontab. You should assume the whole host is compromised.)

Drupal's responsible disclosure policy

It's important to keep in mind that all software has security bugs, and fortunately for Drupal, critical security bugs are rare. It's been nearly four years since the Drupal Security Team published a security release for Drupal core that is this critical.

What matters is how software projects or software vendors deal with security bugs. The Drupal Security Team follows a "coordinated disclosure policy": issues remain private until there is a published fix. A public announcement is made when the threat has been addressed and a secure version of Drupal core is also available. Even when a bug fix is made available, the Drupal Security Team is very thoughtful with its communication. The team is careful to withhold as many details about the vulnerability as possible to make it difficult for hackers to create an exploit, and to buy Drupal site owners as much time as possible to upgrade. In this case, Drupal site owners had two weeks before the first public exploits appeared.

Historically, many proprietary CMS vendors have executed a different approach, and don't always disclose security bugs. Instead, they often fix bugs silently. In this scenario, secrecy might sound like a good idea; it prevents sites from being hacked and it avoids bad PR. However, hiding vulnerabilities provides a false sense of security, which can make matters much worse. This approach also functions under the assumption that hackers can't find security problems on their own. They can, and when they do, even more sites are at risk of being compromised.

Drupal's approach to security is best-in-class — from fixing the bug, testing the solution, providing advance notice, coordinating the release, being thoughtful not to over communicate too many details, being available for press inquiries, and repeatedly reminding everyone to upgrade.

Acquia's platform level fix

In addition to the Drupal Security Team's responsible disclosure policy, Acquia's own security team has been closely monitoring attempted attacks on our infrastructure. Following the release of the Checkpoint Research article, Acquia has tracked the origin of the 500,000 attempted attacks:

SA-CORE-2018-002 map of attacks against Acquia Cloud customers

This image captures the geographic distribution of SA-CORE-2018-002 attacks against Acquia's customers. The number denoted in each bubble is the total number of attacks that came from that location.

To date, over 50 percent of the attempted attacks Acquia has witnessed originate from the Ukraine:

SA-CORE-2018-002 countries as seen by Acquia

At Acquia, we provide customers with automatic security patching of both infrastructure and Drupal code, in addition to platform level fixes for security bugs. Our commitment to keeping our customers safe is reflected in our push to release a platform level fix one hour after the Drupal Security Team made SA-CORE-2018-002 available. This mitigation covered all customers with Acquia Cloud Free, Acquia Cloud Professional, Acquia Cloud Enterprise, and Acquia Cloud Site Factory applications; giving our customers peace of mind while they upgraded their Drupal sites, with or without our help. This means that when attempted exploits and attacks first appeared in the wild, Acquia's customers were safe. As a best practice, Acquia always recommends that customers upgrade to the latest secure version of Drupal core, in addition to platform mitigations.

This blog post was co-authored by Dries Buytaert and Cash Williams.

I just pushed out an update of the Autoptimize 2.4 beta branch on GitHub, with this in the changelog;

  • new: Autoptimize “listens” to page caches being cleared, upon which it purges its own cache as well. Support depends on known action hooks fired by the page cache; supported by Hyper Cache, WP Rocket, W3 Total Cache and KeyCDN Cache Enabler, with support confirmed as coming in WP Fastest Cache and Comet Cache.
  • new: Google Fonts can now be “aggregated & preloaded”; this uses CSS which is loaded in a non-render-blocking way
  • improvement: “remove all Google Fonts” is now more careful (avoiding removing entire CSS)
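The “aggregate & preload” approach for Google Fonts typically relies on the loadCSS-style preload pattern. A sketch of that pattern (the font URL is a hypothetical example, not necessarily what Autoptimize emits):

```html
<!-- Load the Google Fonts CSS without blocking rendering; the font family is illustrative -->
<link rel="preload" as="style"
      href="https://fonts.googleapis.com/css?family=Open+Sans"
      onload="this.onload=null;this.rel='stylesheet'">
<!-- Fallback for browsers with JavaScript disabled -->
<noscript>
  <link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Open+Sans">
</noscript>
```

The browser fetches the stylesheet at preload priority without blocking first paint, then the `onload` handler switches it to a live stylesheet once it has arrived.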

We’re still looking for beta-testers, and some of these new features might convince you to jump on board. You can download the zip-file here; installing it is a simple one-time process:

  1. Deactivate Autoptimize 2.3.x
  2. Go to Plugins -> New -> Upload
  3. Select the downloaded zipfile and upload
  4. Click activate
  5. Go to Settings -> Autoptimize to review some of the new settings

In case of any problems, we’re actively looking for feedback in the GitHub issue queue :-)


Cowboy Dries at DrupalCon Nashville

© Yes Moon

Last week, I shared my State of Drupal presentation at DrupalCon Nashville. In addition to sharing my slides, I wanted to provide more information on how you can participate in the various initiatives presented in my keynote, such as growing Drupal adoption or evolving our community values and principles.

Drupal 8 update

During the first portion of my presentation, I provided an overview of Drupal 8 updates. Last month, the Drupal community celebrated an important milestone with the successful release of Drupal 8.5, which ships with improved features for content creators, site builders, and developers.

Drupal 8 continues to gain momentum, as the number of Drupal 8 sites has grown 51 percent year-over-year:

Drupal 8 site growth

This graph depicts the number of Drupal 8 sites built since April 2015. Last year there were 159,000 sites and this year there are 241,000 sites, representing a 51% increase year-over-year.

Drupal 8's module ecosystem is also maturing quickly, as 81 percent more Drupal 8 modules have become stable in the past year:

Drupal 8 module readiness

This graph depicts the number of modules now stable since January 2016. This time last year there were 1,028 stable projects and this year there are 1,860 stable projects, representing an 81% increase year-over-year.

As you can see from the Drupal 8 roadmap, improving the ease of use for content creators remains our top priority:

Drupal 8 roadmap

This roadmap depicts Drupal 8.5, 8.6, and 8.7+, along with a column for "wishlist" items that are not yet formally slotted. The contents of this roadmap can be found at

Four ways to grow Drupal adoption

Drupal 8 was released at the end of 2015, which means our community has had over two years of real-world experience with Drupal 8. It was time to take a step back and assess additional growth initiatives based on what we have learned so far.

In an effort to better understand the biggest hurdles facing Drupal adoption, we interviewed over 150 individuals around the world that hold different roles within the community. We talked to Drupal front-end and back-end developers, contributors, trainers, agency owners, vendors that sell Drupal to customers, end users, and more. Based on their feedback, we established four goals to help accelerate Drupal adoption.

Lets grow Drupal together

Goal 1: Improve the technical evaluation process

Matthew Grasmick recently completed an exercise in which he assessed the technical evaluator experience of four different PHP frameworks, and discovered that Drupal required the most steps to install. Having a good technical evaluator experience is critical, as it has a direct impact on adoption rates.

To improve the Drupal evaluation process, we've proposed the following initiatives:

Initiative | Issue link | Stakeholders | Initiative coordinator | Status
Better discovery experience on | roadmap | Drupal Association | hestenet | Under active development
Better "getting started" documentation | #2956879 | Documentation Working Group | grasmash | In planning
More modern administration experience | #2957457 | Core contributors | ckrina and yoroy | Under active development

To become involved with one of these initiatives, click on its "Issue link" in the table above. This will take you to the corresponding issue, where you can contribute by sharing your ideas or lending your expertise to move the initiative forward.

Goal 2: Improve the content creator experience

Throughout the interview process, it became clear that ease of use is a feature now expected of all technology. For Drupal, this means improving the content creator experience through a modern administration user interface, drag-and-drop media management and page building, and improved site preview functionality.

The good news is that all of these features are already under development through the Media, Workflow, Layout and JavaScript Modernization initiatives.

Most of these initiative teams meet weekly on Drupal Slack (see the meetings calendar), which gives community members an opportunity to meet team members, receive information on current goals and priorities, and volunteer to contribute code, testing, design, communications, and more.

Goal 3: Improve the site builder experience

Our research also showed that to improve the site builder experience, we should focus on improving the three following areas:

  • The configuration management capabilities in core need to support more common use cases out-of-the-box.
  • Composer and Drupal core should be better integrated to empower site builders to manage dependencies and keep Drupal sites up-to-date.
  • We should provide a longer grace period between required core updates so development teams have more time to prepare, test, and upgrade their Drupal sites after each new minor Drupal release.

We plan to make all of these aspects easier for site builders through the following initiatives:

Initiative | Issue link | Stakeholders | Initiative coordinator | Status
Composer & Core | #2958021 | Core contributors + Drupal Association | Coordinator needed! | Proposed
Config Management 2.0 | #2957423 | Core contributors | Coordinator needed! | Proposed
Security LTS | #2909665 | Core committers + Drupal Security Team + Drupal Association | Core committers and Security team | Proposed, under discussion

Goal 4: Promote Drupal to non-technical decision makers

The fourth initiative is unique as it will help our community to better communicate the value of Drupal to the non-technical decision makers. Today, marketing executives and content creators often influence the decision behind what CMS an organization will use. However, many of these individuals are not familiar with Drupal or are discouraged by the misconception that Drupal is primarily for developers.

With these challenges in mind, the Drupal Association has launched the Promote Drupal Initiative. This initiative will include building stronger marketing and branding, demos, events, and public relations resources that digital agencies and local associations can use to promote Drupal. The Drupal Association has set a goal of fundraising $100,000 to support this initiative, including the hiring of a marketing coordinator.

$54k raised for the Promote Drupal initiative

Megan Sanicki and her team have already raised $54,000 from over 30 agencies and 5 individual sponsors in only 4 days. Clearly this initiative resonates with Drupal agencies. Please consider how you or your organization can contribute.

Fostering community with values and principles

This year at DrupalCon Nashville, over 3,000 people traveled to the Music City to collaborate, learn, and connect with one another. It's at events like DrupalCon where the impact of our community becomes tangible for many. It also serves as an important reminder that while Drupal has grown a great deal since the early days, the work needed to scale our community is never done.

Prompted by feedback from our community, I have spent the past five months trying to better establish the Drupal community's principles and values. I have shared an "alpha" version of Drupal's values and principles at As a next step, I will be drafting a charter for a new working group that will be responsible for maintaining and improving our values and principles. In the meantime, I invite every community member to provide feedback in the issue queue of the Drupal governance project.

Values and principles alpha

An overview of Drupal's values with supporting principles.

I believe that taking time to highlight community members that exemplify each principle can make the proposed framework more accessible. That is why it was very meaningful for me to spotlight three Drupal community members that demonstrate these principles.

Principle 1: Optimize for Impact - Rebecca Pilcher

Rebecca shares a remarkable story about Drupal's impact on her Type 1 diabetes diagnosis:

Principle 5: Everyone has something to contribute - Mike Lamb

Mike explains why Pfizer contributes millions to Drupal:

Principle 6: Choose to Lead - Mark Conroy

Mark tells the story of his own Drupal journey, and how his experience inspired him to help other community members:

Watch the keynote or download my slides

In addition to the community spotlights, you can also watch a recording of my keynote (starting at 19:25), or you can download a copy of my slides (164 MB).

April 16, 2018

The post Upcoming presentation at LOADays: Varnish Internals – Speeding up a site x100 appeared first on

I'll be speaking at LOADays next Sunday about Varnish.

If you happen to be around, come say hi -- I'll be there all day!

Varnish Internals -- Speeding up a site x100

In this talk we'll look at the internals of Varnish, a reverse proxy with powerful caching abilities.

We'll walk through an HTTP request end-to-end and manipulate it in ways that no one should ever do in production -- but it'll prove how powerful Varnish can be.

Varnish is a load balancer and caching engine with its own scripting language, and a fun way to deep-dive into the HTTP protocol.
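For a taste of what that scripting language looks like, here is a minimal VCL sketch (the backend address and the rules are illustrative, not from the talk) that manipulates both the incoming request and the backend response:

```
vcl 4.0;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Strip cookies from static assets so Varnish can cache them
    if (req.url ~ "\.(css|js|png|jpg|gif)$") {
        unset req.http.Cookie;
    }
}

sub vcl_backend_response {
    # Cache everything the backend allows for one hour
    set beresp.ttl = 1h;
}
```

Each `sub` hooks into a stage of Varnish's request lifecycle, which is exactly the end-to-end walk-through the talk promises.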

Source: Varnish Internals -- Speeding up a site x100 (Mattias Geniar)


April 11, 2018

This Thursday, 19 April 2018 at 7 p.m., the 68th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: Data Science

Theme: Data Science | Big Data

Audience: general public

Speaker: Xavier Tordoir

Venue for this session: Université de Mons, Campus Plaine de Nimy, avenue Maistriau, Grands Amphithéâtres, Auditoire Curie (see this map on the UMONS website, or the OSM map).

Participation is free and only requires registration by name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also benefit from the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to consult the agenda and subscribe to the mailing list in order to receive all the announcements.

As a reminder, the Jeudis du Libre are intended as spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized on the premises of, and in collaboration with, Mons universities and colleges involved in computer science education (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: Data Science is a term that has become popular in recent years. Data Science has become a strategic focus in companies, a field of study in academia, and a highly valued job for practitioners. The presentation aims to cover what Data Science is, from data collection to decision making based on models built on that data. Many open source solutions make it possible to build a large-scale Data Science architecture in the enterprise. We will cover these solutions as well as the tools and methods at the heart of data modeling and the contexts in which they are used (for example predictions, recommendations, image and text analysis, etc.).

Short bio: Xavier, PhD in Physics, is a consultant and developer of Data Science and Big Data solutions. He designs and teaches training programs in Data Science, machine learning and data pipelines. He was first active in academia as a researcher in computational genomics and a lead developer of distributed storage and computing solutions, and then in various projects at large companies and start-ups: marketing, IoT, high-frequency trading, banking and insurance.

April 10, 2018

Values and principles balloons

Since its founding, Drupal has grown a great deal, and today there are thousands of contributors and organizations that make up our community. Over the course of seventeen years, we have spent a great amount of time and effort scaling our community. As a result, Drupal has evolved into one of the largest open source projects in the world.

Today, the Drupal project serves as a role model to many other open source projects; from our governance and funding models, to how we work together globally with thousands of contributors, to our 3,000+ person conferences. However, the work required to scale our community is a continuous process.

Prompted by feedback from the Drupal community, scaling Drupal will be a key focus for me throughout 2018. I have heard a lot of great ideas about how we can scale our community, in addition to improving how we all work together. Today, I wanted to start by better establishing Drupal's values and principles, as it is at the core of everything we do.

Remarkably, after all these years, our values (what guides our behaviors) and our principles (our most important behaviors) are still primarily communicated through word of mouth.

In recent years, various market trends and challenging community events have inspired a variety of changes in the Drupal community. It's in times like these that we need to rely on our values and principles the most. However, that is very difficult to do when our values and principles aren't properly documented.

Over the course of the last five months, I have tried to capture our fundamental values and principles. Based on more than seventeen years of leading and growing the Drupal project, I tried to articulate what I know are "fundamental truths": the culture and behaviors members of our community uphold, how we optimize technical and non-technical decision making, and the attributes shared by successful contributors and leaders in the Drupal project.

Capturing our values and principles as accurately as I could was challenging work. I spent many hours writing, rewriting, and discarding them, and I consulted numerous people in the process. After a lot of consideration, I ended up with five value statements, supported by eleven detailed principles.

I shared both the values and the principles on as version 1.0-alpha (archived PDF). I labeled it alpha, because the principles and values aren't necessarily complete. While I have strong conviction in each of the Drupal principles and corresponding values, some of our values and principles are hard to capture in words, and by no means will I have described them perfectly. However, I arrived at a point where I wanted to share what I have drafted, open it up to the community for feedback, and move the draft forward more collaboratively.

Values and principles alpha

An overview of Drupal's values with supporting principles.

While this may be the first time I've tried to articulate our values and principles in one document, many of these principles have guided the project for a very long time. If communicated well, these principles and values should inspire us to be our best selves, enable us to make good decisions fast, and guide us to work as one unified community.

I also believe this document is an important starting point and framework to help address additional (and potentially unidentified) needs. For example, some have asked for clearer principles about what behavior will and will not be tolerated in addition to defining community values surrounding justice and equity. I hope that this document lays the groundwork for that.

Throughout the writing process, I consulted the work of the Community Governance Group and the feedback that was collected in discussions with the community last fall. The 1.0-alpha version was also reviewed by the following people: Tiffany Farriss, George DeMet, Megan Sanicki, Adam Goodman, Gigi Anderson, Mark Winberry, Angie Byron, ASH Heath, Steve Francia, Rachel Lawson, Helena McCabe, Adam Bergstein, Paul Johnson, Michael Anello, Donna Benjamin, Neil Drumm, Fatima Khalid, Sally Young, Daniel Wehner and Ryan Szrama. I'd like to thank everyone for their input.

As a next step, I invite you to provide feedback. The best way to provide feedback is in the issue queue of the Drupal governance project, but there will also be opportunities to provide feedback at upcoming Drupal events, including DrupalCon Nashville.

FOSDEMx is a small scale spin-off of FOSDEM, combining a workshop track geared towards students and anyone interested in the topic, with a main track for a broader audience. FOSDEMx 0 will take place on Thursday the 3rd of May 2018 at ULB Campus de la Plaine from 16:00 onward. This first edition will introduce attendees to the Python ecosystem. As this is a smaller event and seats for the workshops are limited, we ask attendees to register upfront. The main track sessions will be open to anyone. Attendance is free of charge for all sessions. Head over to…

April 08, 2018

This blog post summarizes the 572 comments spanning 5 years and 2 months to get REST file upload support in #1927648 committed. Many thanks to everyone who contributed!

From February 2013 until the end of March 2017, issue #1927648 mostly … lingered. On April 3 of 2017, damiankloip posted an initial patch for an approach he’d been working on for a while, thanks to Acquia (my employer) sponsoring his time. Exactly one year later his work is committed to Drupal core. Shaped by the input of dozens of people! Just *look at that commit message!*

Background: API-First Drupal: file uploads!

  • Little happened between February 2013 (opening of issue) and November 2015 (shipping of Drupal 8).
  • Between February 2013 and April 2014, only half a dozen comments were posted, until moshe weitzman aptly said: “Still a gaping hole in our REST support. Come on Internets …”
  • The first proof-of-concept patch followed in August 2014 by juampynr, but was still very rough. A fair amount of iteration occurred that month, between juampynr and Arla. It used base64 encoding, which means it needed 33% more bytes on the wire to transfer a file than if it were transmitted in binary rather than base64.
  • Then again a period of silence. Remember that this was around the time when we were trying to get Drupal 8 to a shippable state: the #1 priority was to stabilize, fix critical bugs. Not to add missing features, no matter how important. To the best of my knowledge, the funding for those who originally worked on Drupal 8’s REST API had also dried up.
  • In May 2015, another flurry of activity occurred, this time fueled by marthinal. Comment #100 was posted. Note that all patches up until this point had zero validation logic! Which of course was a massive security risk. marthinal is the first to state that this is really necessary, and does a first iteration of that.
  • A few months of silence, and then again progress in September, around DrupalCon Barcelona 2015. dawehner remarked in a review on the lack of tests for the validation logic.
  • In February 2016 I pointed out that I’m missing integration tests that prove the patch actually works. To which Berdir responded that we’d first need to figure out how to deal with File entity type access control!
  • Meanwhile, marthinal works on the integration test coverage in 2016. And … we reached comment #200.
  • In May 2016, I did a deep review, and found many problems. Quick iterations fixed those problems! But then damiankloip pointed out that despite the issue being about the general File (de)serialization problem, it actually only worked for the HAL normalization. We also ended up realizing that the issue so far was about stand-alone File entity creation, even though those entities cannot be viewed stand-alone nor can they be created stand-alone through the existing Drupal UI: they can only be created to be referenced from file fields. Consequently, we had no access control logic for this yet, nor was it clear how access control should work; nor how validation should work! Berdir explained this well in comment 232. This led us to explore moving parts of into core (which would be a hard blocker). The issue then went quiet again.
  • In July 2016, garphy pointed out that large file uploads still were not yet supported. Some work around that happened. In September, kylebrowning stressed this again, and provided a more detailed rationale.
  • Then … silence. Until damiankloip posted comment #281 on April 3, 2017. Acquia was sponsoring him to work on this issue. Damian is the maintainer of the serialization.module component and therefore of course wanted to see this issue get fixed. My employer Acquia agreed with my proposal to sponsor Damian to work on REST file upload support. Because after 280 comments, some fundamental capabilities were still absent: this was such a complex issue, with so many concerns and needs to balance, that it was nigh impossible to finish it without dedicated time.
    To get this going, I asked Damian to look at the documentation for a bunch of well-known sites to observe how they handle file uploads. I also asked him to read the entire issue. Combined, this should give him a good mental map of how to approach this.
  • #281 was a PoC patch that only barely worked but did support binary (non-base64) uploads. damiankloip articulated the essential things yet to be figured out: validation and access checking. Berdir chimed in with his perspective on that in #291 … in which he basically outlined what ended up in core! Besides Berdir, dagmar, dawehner, garphy, dabito and ibustos all chimed in and influenced the patch. Berdir, damiankloip and I had a meeting about how to deal with validation, and I disagreed with both of them. And turned out to be very wrong! More feedback was provided by the now familiar names, and the intense progress/activity continued for two months, until comment #376!
  • Damian got stuck on test coverage — and since I’d written most of the REST test coverage in the preceding months, it made sense for me to pick up the baton from Damian. So I did that in July 2017, just making trivial changes that were hard to figure out. Damian then continued again, expanding test coverage and finding a core bug in the process! And so comment #400 was reached!
  • At the beginning of August, the patch was looking pretty good, so I did an architectural review. For the first time, we realized that we first needed to fix the normalization of File entities before this could land. And many more edge cases needed to be tested for us to be confident that there were no security vulnerabilities. blainelang did manual testing and posted super helpful feedback based on his experience. Blaine and Damian tag-teamed for a good while, then garphy chimed in again, and we entered September. Then dawehner chimed in once more, followed by tedbow.
  • On September 6 2017, in comment #452 I marked the issue postponed on two other issues, stating that it otherwise looked tantalizingly close to RTBC. aheimlich found a problem nobody else had spotted yet, which Damian fixed.
  • Silence while the other issues get fixed … and December 21 2017 (comment #476), it finally was unblocked! Lots of detailed reviews by tedbow, gabesullice, Berdir and myself followed, as well as rerolls to address them, until I finally RTBC‘d it … in comment #502 on February 1 2018.
  • Due to the pending Drupal 8.5 release, the issue mostly sat waiting in RTBC for about two months … and then got committed on April 3 2018!!!

Damian’s first comment (preceded by many hours of research) was on April 3, 2017. Exactly one year later his work is committed to Drupal core. Shaped by the input of dozens of people! Just look at that commit message!

Drupal 8’s REST API has been maturing steadily since Drupal 8.0.0 was released in November 2015. One of the big missing features has been file upload support. As of April 3 2018, Drupal 8.6 will support it, when it ships in September 2018! See the change record for the practical consequences:

It doesn’t make sense for me to repeat what is already written in that change record: that already has both a tl;dr and a practical example.

What I’m going to do instead, is give you a high-level overview of what it took to get to this point: why it took so long, which considerations went into it, why this particular approach was chosen. You could read the entire issue (#1927648), but … it’s one of the longest issues in Drupal history, at 572 comments1. You would probably need at least an entire workday to read it all! It’s also one of the longest commit messages ever, thanks to the many, many people who shaped it over the years:

Issue #1927648 by damiankloip, Wim Leers, marthinal, tedbow, Arla, alexpott, juampynr, garphy, bc, ibustos, eiriksm, larowlan, dawehner, gcardinal, vivekvpandya, kylebrowning, Sam152, neclimdul, pnagornyak, drnikki, gaurav.goyal, queenvictoria, kim.pepper, Berdir, clemens.tolboom, blainelang, moshe weitzman, linclark, webchick, Dave Reid, dabito, skyredwang, klausi, dagmar, gabesullice, pwolanin, amateescu, slashrsm, andypost, catch, aheimlich: Allow creation of file entities from binary data via REST requests

Thanks to all of you in that commit message!

I hope it can serve as a reference not just for people interested in Drupal, but also for people outside the Drupal community: there is no One Best Practice Way to handle file uploads for RESTful APIs. There is a surprising spectrum of approaches2. Some avoid this problem space entirely, by only allowing files to be “uploaded” by sending a publicly accessible URL to the file. Read on if you’re interested. Otherwise, go and give it a try!

Design rationale


  • Request with Content-Type: application/octet-stream aka “raw binary” as its body, because base64-encoded means 33% more bytes, implying both slower uploads and more memory consumption. Uploading videos (often hundreds of megabytes or even gigabytes) is not really feasible with base64 encoding.
  • Request header Content-Disposition: file; filename="cat.jpg" to name the uploaded file. See the Mozilla docs. This also implies you can only upload one file per request. But of course, a client can issue multiple file upload requests in parallel, to achieve concurrent/batch uploading.
  • The two points above mean we reuse as much as possible from existing HTTP infrastructure.
  • Of course it does not make sense to have a Content-Type: application/octet-stream as the response. Usually, the response is of the same MIME type as the request. File uploads are the sensible exception.
  • This is meant for the raw file upload only; any metadata (for example: source or licensing) cannot be associated in this request: all you can provide is the name and the data for the file. To associate metadata, a second request to “upgrade” the raw file into something richer would be necessary. The performance benefit mentioned above more than makes up for the RTT of a second request in almost all cases.
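The 33% figure follows directly from how base64 works: every 3 input bytes become 4 output characters. A quick shell check (the temporary file and its 3000-byte size are arbitrary choices for illustration):

```shell
# base64 encodes every 3 bytes as 4 ASCII characters, hence the ~33% overhead
tmp=$(mktemp)
head -c 3000 /dev/urandom > "$tmp"
raw=$(wc -c < "$tmp")
b64=$(base64 "$tmp" | tr -d '\n' | wc -c)
echo "raw=$raw bytes, base64=$b64 bytes"   # 3000 raw bytes -> 4000 base64 characters
rm -f "$tmp"
```

For a multi-gigabyte video, that overhead translates into gigabytes of extra transfer and memory pressure, which is why raw binary uploads were chosen.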


  • Read the request body via php://input, because anything else is limited by the PHP memory limit.


  • In the case of Drupal, we know that it always represents files as File entities. They don’t contain metadata (fields), at least not with just Drupal core; it’s the file fields (@FieldType=file or @FieldType=image) that contain the metadata (because the same image may need different captions depending on its use, for example).
  • When a file is uploaded for a field on a bundle on an entity type, a File entity is created with status=false. The response contains the serialized File entity.
  • You then need a second request to make the referencing entity “use” the File entity, which will cause the File entity to get status=true.
  • Validation: Drupal core only has the infrastructure in place to use files in the context of an entity type/bundle’s file field (or derivatives thereof, such as image fields). This is why files can only be uploaded by specifying an entity type ID, bundle ID and field name: that’s the level where we have settings and validation logic in place. While not ideal, it’s pragmatic: first allowing generic file uploads would be a big undertaking and somewhat of a security nightmare.
  • Access control is similar: you need create access for the referencing entity type and field edit access for the file field.


If we combine all these choices, then we end up with a new file_upload @RestResource plugin, which enables clients to upload a file:

  1. by POSTing the file’s contents
  2. to the path /file/upload/{entity_type_id}/{bundle}/{field_name}, which means that we’re uploading a file to be used by the file field of the specified entity type+bundle, and the settings/constraints of that field will be respected.
  3. … don’t forget to include a ?_format URL query argument; this determines the format of the response
  4. sending the file data as an application/octet-stream binary data stream, that is, with a Content-Type: application/octet-stream request header. (This allows uploads of arbitrary size, including uploads larger than the PHP memory limit.)
  5. and finally, naming the file using the Content-Disposition: file; filename="filename.jpg" header
  6. the five preceding steps result in a successfully uploaded file with status=false — all that remains is to perform a second request to actually start using the file in the referencing entity!
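Putting those steps together, such a request could look roughly like the sketch below. This is a dry run that only assembles and prints the curl command; example.com and the node/article/field_image route segments are hypothetical, and real requests additionally need authentication (omitted here):

```shell
# assemble a Drupal 8.6 file-upload request (hypothetical host, entity type, bundle and field)
URL="https://example.com/file/upload/node/article/field_image?_format=json"
CMD="curl -X POST --data-binary @cat.jpg -H 'Content-Type: application/octet-stream' -H 'Content-Disposition: file; filename=\"cat.jpg\"' $URL"
echo "$CMD"   # dry run: prints the command; add credentials and run it to actually upload
```

The response would then be the serialized File entity with status=false, ready to be referenced from a file field in a second request.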

Four years in the making — summarizing 572 comments

From February 2013 until the end of March 2017, issue #1927648 mostly … lingered. On April 3 of 2017, damiankloip posted an initial patch for an approach he’d been working on for a while, thanks to Acquia (my employer) sponsoring his time. Exactly one year later his work is committed to Drupal core. Shaped by the input of dozens of people! Just look at that commit message!

Want to actually read a summary of those 572 comments? I got you covered!

  1. It currently is the fifth longest Drupal core issue of all time! The first page, with ~300 comments, is >1 MB of HTML↩︎

  2. Examples: Contentful, Twitter, Dropbox and others↩︎

April 06, 2018

What is missing* from the amendments to the Dutch government's proposed tapping law is a judicial review of the proportionality of proceeding with a digital search. Such a search is, as far as I am concerned, comparable to a house search.

This is indispensable in an enlightened society in which the three powers are separated.

Spinoza, the man who prepared the very ground on which Enlightenment philosophy was built, was an Amsterdammer. For those who occupy themselves with philosophy, it is therefore an earthquake to witness that the Netherlands no longer takes part.

Note* that the review committee consists of two Dutch judges. Two such judges can never carry out a proper proportionality assessment for all requests.


The post Chalk Talk #2: how does Varnish work? appeared first on

In the first Chalk Talk video, we looked at what Varnish can do. In this second video, I explain how Varnish does this.

As usual, if you like a Dutch written version, have a look at the company blog.

Next videos will focus more on the technical internals, like how the hashing works, how to optimize your content & site and how to debug Varnish.

The post Chalk Talk #2: how does Varnish work? appeared first on

I am pleased to announce the creation of SiguéPloum, a company co-founded with my life partner.

These last 12 months have been a profound transformation for me, both personally and professionally. My new role as a father constantly reminds me of the importance of education, at any age. Over the years, the audiences of my talks and, more recently, my students at the university have made me realize how important it is to share my knowledge. Not as an absolute truth or infallible knowledge, but rather as a snapshot of what I have learned, of who I am today.

I no longer seek to explain what I know, but rather to share the current state of my thinking on a subject, with the goal of saving my audience time. If it took me 5, 10 or 15 years to reach the conclusions I share with you, you can take them up to move faster, go beyond them, go further.

Through SiguéPloum, this taste for sharing knowledge and for education becomes a profession, a trade.

As you might expect, among my favourite subjects will always be blockchain, free software, innovation, futurology and the impact of technology on society. It is therefore with great pleasure that I will continue to give talks and training courses on these topics.

But, thanks to the involvement of my partner, a psychosocial risk prevention advisor by training, SiguéPloum's mission extends to helping companies and individuals question their organisation, their impact and the influence of the tools they use. In an age of complex, multidisciplinary questions such as online privacy or digital dependence and burn-out, the complementarity of our couple struck us as a strength we wanted to share.

We would also like to share these subjects with younger audiences, who are often the source of fresh insights or particularly original thinking.

If SiguéPloum is one more hat for Ploum the blogger, lecturer at the École Polytechnique de Louvain, researcher and speaker, I am proud to say that all these hats share a common mission: "Learn and share."

Learning and sharing together.

My pleasure, my vocation and, from now on, my profession.


Enjoyed this read? Support the author on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Even a symbolic donation makes all the difference! Let's meet afterwards on Facebook, Twitter or Mastodon.

This text is published under the CC-By BE licence.

April 04, 2018

I published the following diary on “A Suspicious Use of certutil.exe“:

The Microsoft operating system is full of command line tools that help to perform administrative tasks. Some can be easily installed, like the SysInternal suite[1] and psexec.exe, others are builtin in Windows and available to everybody. The presence of calls to such tools can help to detect suspicious behaviours. Why reinvent the wheel, if a tool can achieve what you need? I recently upgraded my hunting rules on VirusTotal to collect samples that are (ab)using the “certutil.exe” tool… [Read more]

[The post [SANS ISC] A Suspicious Use of certutil.exe has been first published on /dev/random]

April 03, 2018

The post Varnish: same hash, different results? Check the Vary header! appeared first on

I'll admit I get bitten by the Vary header once every few months. It's something a lot of CMSes add seemingly at random, and it has a serious impact on how Varnish handles and treats requests.

For instance, here's a request I was troubleshooting that had these varnishlog hash() data:

-   VCL_call       HASH
-   Hash           "/images/path/to/file.jpg%00"
-   Hash           "http%00"
-   Hash           "www.yoursite.tld%00"
-   Hash           "/images/path/to/file.jpg.jpg%00"
-   Hash           "www.yoursite.tld%00"
-   VCL_return     lookup
-   VCL_call       MISS

A new request, giving the exact same hashing data, would return a different page from the cache/backend. So why does a request with the same hash return different data?

Let me introduce the Vary header.

In this case, the page I was requesting added the following header:

Vary: Accept-Encoding,User-Agent

This instructs Varnish to keep a separate version of each page for every value of Accept-Encoding and User-Agent it finds.

The Accept-Encoding would make sense, but Varnish already handles that internally. A gzipped/plain version will return different data, which makes sense. There's no real point in adding that header for Varnish, but other proxies in between might still benefit from it.

The User-Agent is plain nonsense: why would you serve a different version of a page per browser? If you consider that a typical User-Agent string contains text like Mozilla/5.0 (Macintosh; Intel Mac OS X...) AppleWebKit/537.xx (KHTML, like Gecko) Chrome/65.x.y.z Safari/xxx, it's practically unique per visitor.

So, quick hack in this case, I remove the Vary header altogether.

sub vcl_backend_response {
  unset beresp.http.Vary;
}
No more variations of the cache based on what a random CMS does or says.

The post Varnish: same hash, different results? Check the Vary header! appeared first on

The post Varnishlog: show the hash() data in output appeared first on

Since Varnish 4.x, the default varnishlog output doesn't show the vcl_hash() data anymore.

Your varnishlog will look like this:

-   VCL_call       HASH
-   VCL_return     lookup
-   Hit            103841868
-   VCL_call       HIT
-   VCL_return     deliver

If you're debugging, this isn't very useful.

To include the "old" behaviour of Varnish and add all the hash elements as well, start Varnish with the following extra parameter:

 -p vsl_mask=+Hash

After a Varnish restart, this will produce varnishlog output like this:

-   VCL_call       HASH
-   Hash           "/images/path/to/file.jpg%00"
-   Hash           "http%00"
-   Hash           "www.yoursite.tld%00"
-   Hash           "/images/path/to/file.jpg%00"
-   Hash           "www.yoursite.tld%00"
-   VCL_return     lookup
-   VCL_call       MISS

In this example, the hash included the request URL, the scheme (http) and the hostname (www.yoursite.tld).

The %00 are null characters that act as separators, to prevent cache conflicts between, say, hash('123'); hash('456'); and hash('1'); hash('23456');.
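You can see why the separators matter with any hash function: without them, differently split inputs concatenate to the same byte stream. Here sha256sum merely stands in for Varnish's internal hashing:

```shell
# "123"+"456" and "1"+"23456" concatenate identically without a separator...
a=$(printf '%s%s' 123 456 | sha256sum)
b=$(printf '%s%s' 1 23456 | sha256sum)
# ...but differ once each element is NUL-terminated, as Varnish does
c=$(printf '%s\0%s\0' 123 456 | sha256sum)
d=$(printf '%s\0%s\0' 1 23456 | sha256sum)
[ "$a" = "$b" ] && echo "collision without separator"
[ "$c" != "$d" ] && echo "no collision with NUL separators"
```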

The post Varnishlog: show the hash() data in output appeared first on

Facebook has been in the news for a few days following the disclosure of the Cambridge Analytica scandal. A few days ago, another wave of rumours revealed that the Facebook app could collect your private data. Facebook denied it and a ping-pong game started. True or false? The fact is that more people started to take care of their data and uninstalled the app from their mobile phones. Hashtags also appeared on Twitter, like #leavefacebook

If you’re curious about what Facebook knows about you, they allow you to download an archive of all (let’s trust them?) data they have about you. Go to your profile settings, general and click “Download a copy of your Facebook data”. You must provide your password and a few minutes later (depending on your activity on Facebook), you’ll get an email with a temporary link to download a nice ZIP archive. It contains a tree of subdirectories. In my case:


The index.htm can be used from your browser to navigate through your data:

(Facebook archive index)

The downloaded archive name has the following format: facebook-<string>.zip. Based on my findings, <string> can be your Facebook username or ID. With interesting content like this, it was tempting to start searching for Facebook users’ history files on the net. What’s the best place to search for files uploaded by users, or without their explicit consent? VirusTotal of course!

I created a simple YARA hunting rule to search for interesting Facebook archives:

rule FacebookArchive
{
    strings:
        $s1 = "messages/" nocase wide ascii
        $s2 = "photos/" nocase wide ascii
    condition:
        new_file and any of ($s*) and file_type contains "zip" and file_name contains "facebook-"
}

Guess what? Less than 24 hours later, I already got one hit. At the moment, I collected five archives that contain:

  • Profile details
  • Timeline of activity
  • Photos / Videos
  • Friends (when added)
  • Messaging  conversations
  • Events (accepted, declined)
  • Account activity (with IP addresses, Browser UA)
  • Ads displayed to the users (with the one he/she clicked)
  • Applications installed

I contacted Facebook via their bug bounty program to report the issue just to raise the potential issue which is, I agree, not directly related to their services. Users are responsible for the privacy of the data once downloaded. Not surprisingly, their answer was negative:

Hi Xavier,
Thanks for contacting us. Keep in mind that this queue is specifically for security vulnerabilities. Since what you describe doesn’t appear to be a security vulnerability, you can provide feedback or suggestions regarding a feature here:

So, they consider that it’s not a security issue but a feature. My suggestion is:

  • To generate a unique filename (example: based on the file SHA256)
  • To encrypt the ZIP archive with the current user’s password.
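The first suggestion is something users can even apply themselves in the meantime: renaming the archive after its own SHA-256 makes the filename unguessable. A small sketch (the original filename is hypothetical):

```shell
# rename a downloaded archive to facebook-<sha256>.zip so the name can't be guessed
tmp=$(mktemp -d)
printf 'dummy archive contents' > "$tmp/facebook-janedoe.zip"
hash=$(sha256sum "$tmp/facebook-janedoe.zip" | cut -d' ' -f1)
mv "$tmp/facebook-janedoe.zip" "$tmp/facebook-$hash.zip"
ls "$tmp"
```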

Conclusion (and reminder): do not post sensitive files to VirusTotal and, when you’re using a tool that relies on VirusTotal, be sure that files will not be sent without your prior consent!

[The post Facebook Archives Predictive Name: Some Found Online has been first published on /dev/random]

March 29, 2018

FOSDEM 2019 will take place at ULB Campus Solbosch on Saturday 2 and Sunday 3 February 2019. Further details and calls for participation will be announced in the coming months.

Autoptimize 2.3.3 was just released. It is a minor release with the following minor improvements & fixes:

  • improvement: updated to latest version of Filamentgroup’s loadCSS
  • improvement: by default exclude wp-content/cache and wp-content/uploads from CSS optimization (Divi, Avada & possibly others store page-specific CSS there)
  • bugfix: stop double try/catch-blocks
  • misc. bugfixes (see GitHub commit log)

This is (supposed to be) the last minor release of the 2.3 branch, 2.4 is a major change with some big under-the-hood and functional changes. That new version has been in the works for some months already and if you’re up for it you can install by simply downloading this zip-file from GitHub. The GitHub version will also auto-update thanks to the great Plugin Update Checker library.

We released new versions of Drupal 7 and Drupal 8 yesterday that fixed a highly critical security bug. All software has security bugs, and fortunately for Drupal, critical security bugs are rare. What matters is how you deal with security releases.

I have the utmost respect for how the Drupal Security Team manages a security release like this — from fixing the bug, testing the solution, providing advance notice, coordinating the release, to being available for press inquiries and more.

The amount of effort, care and dedication that the Drupal Security Team invests to keep Drupal secure is unparalleled, and makes Drupal's security best-in-class. Thank you!

Imagine the following problem. You are working on an application running inside a docker container and you feel the need to inspect the http requests being executed by your application. Coming from an app development background myself, it can be a huge benefit to use proxy tooling like mitmproxy to see the actual request being made instead of staying stuck trying to figure out what’s wrong. Or, use the proxy to alter requests and easily test use cases or reproduce bug reports. All in all, there’s a significant gain in lead time when using the right tool for the job. Otherwise we’d still be adding print statements everywhere just for the sake of debugging (I know you’re doing it).

In this case, I have a Python Django web app running in a Debian Jessie container and I needed to see the flow of requests being made towards the backend REST API. Installing mitmproxy is a breeze; as it’s a cross-platform tool, you can follow the installation guide for your platform. I’m running it using WSL on Windows myself (Ubuntu). Before doing anything else, start mitmproxy and get comfortable. The default port for capturing traffic is 8080; change it using the -p <portnumber> option in case of conflicts.

$ mitmproxy

Docker provides proxy support out of the box, which is convenient. As we’ve got mitmproxy running on our host machine, we want to route all traffic exiting the container to that one. To do so, I created a new docker-compose.override.yml file which holds all settings and (almost all) logic.

version: '2'
services:
  web:                        # hypothetical service name; use your app's service
    environment:
      - HTTP_PROXY=mitmproxy:8080
      - HTTPS_PROXY=mitmproxy:8080
    extra_hosts:
      - "mitmproxy:"          # append your host machine's IP address here
    command: sh -c 'python3 && python3 runserver'

In the above example I only added the settings relevant to illustrate the solution. You can see two environment variables, HTTP_PROXY and HTTPS_PROXY. These tell the container to route all exiting traffic towards mitmproxy, which is added as an extra host. The IP address is the one of my host machine; change it to whatever yours is.

Now, you might notice there’s another little item in there, run with python3 just before we start the Django app. If you’d only use the proxy settings, http requests would enter mitmproxy just fine, but https requests would not show up. The reason is that mitmproxy uses its own certificate to be able to decrypt the traffic flowing through. As a result, the requests get blocked, as the OS doesn’t trust that certificate (like it should).

Initially I tried to add the certificate to the container’s OS trusted certificate store, but that didn’t work out as planned. I could run a curl command inside the container to an https endpoint and it flowed through mitmproxy like a boss, but when I started the Django app, https requests kept being aborted. It took me a while to figure out what was going on. Our Django app uses the Requests library to execute http calls. This one in turn uses certifi to build its trusted certificate store, which didn’t have a copy of mitmproxy’s root certificate, causing all https calls to be silenced and never exit the container.

The solution, once you know it, is straightforward. Add a script called to your container (either by volume mapping, or as a build step) together with mitmproxy’s root certificate (located at ~/.mitmproxy/mitmproxy-ca-cert.pem) and edit the docker-compose.override.yml file to execute before starting the Django app.

Copy mitmproxy’s root certificate to your Django app’s root folder:

$ cp ~/.mitmproxy/mitmproxy-ca-cert.pem <project root>

Create a new file called in the same project root:

import certifi

# append mitmproxy's root certificate to certifi's trusted store
caroot = certifi.where()
with open('mitmproxy-ca-cert.pem', 'rb') as mitm:
    cert =
with open(caroot, 'ab') as out:
    out.write(cert)
print('w00t, added mitmproxy root cert to certifi!')

Now, when you up the docker container with compose, you’ll see all traffic flowing through mitmproxy! Happy debugging!

Small additions

Multiple docker-compose files

When using a docker-compose.override.yml file and you’ve already specified another non-standard docker-compose file, make sure to use -f twice to load both (or more) settings files. So, imagine your settings file is called docker-compose.standalone.yml and you’ve added all files following this tutorial, the command to up your container becomes:

docker-compose -f .\docker-compose.standalone.yml -f .\docker-compose.override.yml up

Our override file should come last so that none of its settings can be overwritten by earlier files: the last one wins.

Check firewall settings

As I’m running this on Windows, I still had to open up port 8080 to allow the container to connect to mitmproxy. Double check on whatever system you’re using that this isn’t the thing causing a headache before proceeding 😑.

But I’m not using Python or Requests

In case you want to do this but don’t have a Python app using Requests, you can try to add the certificate to the container’s OS as I mentioned before. For some reason, I also had to set it on every container start, but that’s not too much hassle. Instead of, create a file called with the following content:

mkdir -p /usr/local/share/ca-certificates/extra
cp mitmproxy-ca-cert.pem /usr/local/share/ca-certificates/extra/root.cert.crt
update-ca-certificates

Then, change the docker-compose.override.yml file to execute this one instead of

command: sh -c './ && <whatever the command>'

The example assumes a Debian-based container; make sure to alter it to add the root certificate to the correct location according to your container’s OS.

Have fun!

March 28, 2018

I published the following diary on “How are Your Vulnerabilities?“:

Scanning assets for known vulnerabilities is a mandatory process in many organisations. This topic comes in the third position of the CIS Top-20. The major issue with a vulnerability scanning process is not on the technical side but more on the process side. Indeed, the selection of the tool and its deployment is not very complicated (well, in not too complex environments, to be honest): Buy a solution or build a solution based on free tools, define the scope, schedule the scan and it’s done… [Read more]

[The post [SANS ISC] How are Your Vulnerabilities? has been first published on /dev/random]

Last month, Matthew Grasmick sparked an important conversation surrounding Drupal's evaluator experience and our approach to documentation. It's become clear that we need to evolve our documentation governance model, in addition to formalizing best practices.

After receiving a tremendous amount of support and feedback, Matthew has continued the conversation by publishing a Documentation Initiative Proposal. Matt's proposal includes an evaluation of our current state, a picture of what success could look like, and what you can do to get involved. If you are passionate about improving Drupal's evaluator experience, I would encourage you to read Matt's proposal!

March 27, 2018

Actually, I should post something again about how we do all kinds of things with Qt and QML, shouldn't I?

After so many years of asking recruiters to look for a German-speaking Qt/QML developer in Eindhoven, Heidenhain would do well to show a bit of what we're up to. In my opinion. But well, it is and remains a company that likes to be a little secretive.

Lately it's the details of the Klartext editor that are getting attention. In other words, the part that really matters: what operators use to enter a workpiece into a CNC machine. That takes a while, because the TNC640 has quite a lot of specific little things that users of the machines have internalized over many years.

In other words, I have plenty of things I could post about. But I'm lazy, plus I have to be careful about what I do and don't make public. The funny thing is that Nokia was much stricter, but also much clearer. That's why I could say much more in my Nokia days than I can now. Nokia was extremely strict in its NDAs (up to 180,000 euros in damages, and more), but clear: this you may say, this you may not.

March 26, 2018

I published the following diary on “Windows IRC Bot in the Wild“:

Last weekend, I caught on VirusTotal a trojan disguised as a Windows IRC bot. It was detected thanks to my ‘psexec’ hunting rule, which looks for a definitively interesting keyword (see my previous diary). I detected the first occurrence on 2018-03-24 15:48:00 UTC. The file was submitted for the first time from the US. The strange fact is that the initial file already has a good score on VT (55/67) and is detected by most of the classic antivirus tools… [Read more]

[The post [SANS ISC] Windows IRC Bot in the Wild has been first published on /dev/random]

March 25, 2018

The post Chalk Talk #1: An introduction to Varnish appeared first on

At work we started a new video series called Chalk Talk, where we (try to) explain complicated technologies with a single chalkboard as guidance.

Our first series is on Varnish and will be hosted by yours truly.

Here's the first episode, giving an introduction to Varnish.

If you'd like a Dutch-language version of the video, it's available at the Nucleus blog.

There will be 5 videos in the Varnish series, each posted weekly, and every episode will dive deeper into the Varnish internals and offer practical tips on getting started. They'll appear on both the Nucleus blog & this one.


March 23, 2018

I published the following diary on “Extending Hunting Capabilities in Your Network“:

Today’s diary is an extension to the one I posted yesterday about hunting for malicious files crossing your network. Searching for new IOCs is nice but there are risks of missing important pieces of information! Indeed, the first recipe could miss some malicious files in the following scenarios… [Read more]

[The post [SANS ISC] Extending Hunting Capabilities in Your Network has been first published on /dev/random]

March 22, 2018

GDAL is one of the cornerstones of the open source geospatial stack (and actually of many of the proprietary systems as well).
If you want to use or test the latest features this can be done quite easily by setting a few environment variables:
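The export lines themselves did not survive syndication; here is a minimal sketch of what such a file typically contains, assuming the GDAL dev build was installed under /usr/local (the default prefix for `make install`). Adjust the paths to your own build:

```shell
# Assumption: GDAL dev was built and installed with prefix /usr/local.
# If you used ./configure --prefix=... with another path, adjust accordingly.
export PATH="/usr/local/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/lib:$LD_LIBRARY_PATH"
export GDAL_DATA="/usr/local/share/gdal"
```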

I save these commands in a file. After sourcing that file, you will be able to use all the latest GDAL tools.

johan@x1:~$ source ~/ 
johan@x1:~$ gdalinfo --version
GDAL 2.3.0dev, released 2017/99/99

Actually if you use this command, also tools dynamically linked to gdal will be using this latest version.

johan@x1:~$ saga_cmd io_gdal

[SAGA ASCII-art banner]

SAGA Version: 6.4.0

Library: GDAL/OGR
Category: Import/Export
File: /usr/local/lib/saga/
Interface to Frank Warmerdam's Geospatial Data Abstraction Library (GDAL).
Version 2.3.0dev

Note that this will work well for packages dynamically linked to the C API. If packages are linked to the C++ API, they may need recompilation.

If you are on windows, you could have a look at SDKShell.bat (part of the builds from Tamas Szekeres).

Note: I actually wrote this blog post because, in his FOSDEM presentation, Jeremy Mayeres recommended using Docker to run the latest version of GDAL. I think that solution is overkill for desktop usage; using environment variables is easier.

I published the following diary on “Automatic Hunting for Malicious Files Crossing your Network“:

If classic security controls remain mandatory (antivirus, IDS, etc), it is always useful to increase your capacity to detect suspicious activities occurring in your networks.

Here is a quick recipe that I’m using to detect malicious files crossing my networks. The different components are… [Read more]

[The post [SANS ISC] Automatic Hunting for Malicious Files Crossing your Network has been first published on /dev/random]

March 21, 2018

I published the following diary on “Surge in blackmailing?“:

What’s happening with blackmail? For those who don’t know the word, it is a piece of mail sent to a victim to ask for money in return for not revealing compromising information about him/her. For a few days, we noticed a peak of such malicious emails. One of our readers reported one during the weekend, and Johannes Ullrich received one too. A campaign targeted people in The Netherlands… [Read more]

[The post [SANS ISC] Surge in blackmailing? has been first published on /dev/random]

March 20, 2018

A very thorough explanation of how to build responsive accessible HTML tables. I'd love to compare this with Drupal's out-of-the-box approach to evaluate how we are doing.

Yesterday was Father's Day in Belgium. As a gift, Stan drew me this picture. In addition to the thoughtful gesture, I love learning more about the way our kids see us. For me, it's special that Stan understands two important communities in my life.

Father day drawing stan

I published the following diary on “Administrator’s Password Bad Practice“:

Just a quick reminder about some bad practices while handling Windows Administrator credentials. I’m constantly changing my hunting filters on VT. A few days ago, I started to search for files/scripts that use the Microsoft SysInternals tool psexec. For system administrators, this is a great tool to execute programs on remote systems, but it is also used by attackers to pivot internally. This morning, my filter returned an interesting file with a VT score of 11/66. The file is a compiled AutoIT script… [Read more]

[The post [SANS ISC] Administrator’s Password Bad Practice has been first published on /dev/random]

March 19, 2018

The Call for Presentations, Workshops and Ignites is open for submission. The CfP has been extended and will be open until the 8th of April 2018. We decided to run the CfP in git, more specifically GitLab. Please follow the process below.

Submit a Merge Request by following these steps:

  • Fork …

March 17, 2018

As heard on: Mopo – Tökkö. The sound is vaguely reminiscent of Morphine, no doubt due to the instrumentation, but this Finnish trio is more into jazz (their bio states: “The band draws their inspiration from jazz, punk and the Finnish nature”). They rock big time nonetheless!

YouTube Video
Watch this video on YouTube.

If you have time watch a concert of theirs on YouTube and if you dig them; they’ll be playing in Ghent and Leuven this month!

March 15, 2018

Hello Readers, here is my wrap-up of the second day. Usually, the second day is harder in the morning due to the social events but, at TROOPERS, they organize a hacker run starting at 06:45 for the most motivated of us. Today, the topic of the 3rd track switched from SAP to Active Directory. But I remained in tracks #1 and #2 to follow a mix of offensive vs defensive presentations. By the way, to get a good idea of the TROOPERS atmosphere, here is their introduction video for 2018:

As usual, the second day also started with a keynote, this time given by Rossella Mattioli, who works at ENISA. She does a lot of promotion for this European agency across multiple security conferences, but TROOPERS is special because their motto is the same: “Make the world (the Internet) a safer place”. ENISA’s role is to close the gap between industries, the security community and the EU Member States. Rossella reviewed the projects and initiatives promoted by ENISA, like the development of papers for all types of industries (power, automation, public transport, etc). To demonstrate that security must today be addressed at a global scale, she gave the following example: think about your journey to TROOPERS and list all the actions that you performed with online systems. The list is quite long! Another role of ENISA is to make CSIRTs work better together. Did you know that there are 272 different CSIRTs in the European Union? And they don’t always communicate in an efficient way. That’s why ENISA is working on common taxonomies to help them. Their website has plenty of useful documents that are available for free, just have a look!

After a short coffee break and interesting chats with peers, I moved to the defensive track to follow Matt Graeber, who presented “Subverting Trust in Windows”. Matt warned that the talk was a “hands-on” edition of his complete research, which is available online; today’s presentation focused on live demos. First, what is “trust” in the context of software?
  • Is the software from a reputable vendor?
  • What is the intent of the software?
  • Can it be abused in any way?
  • What is the protection status of signing keys?
  • Is the certificate issuer reputable?
  • Is the OS validating signer origin and code integrity properly?
Trust maturity level can be matched with enforcement level:
Trust Maturity Levels
But what is the intent of code signing? To attest to the origin and integrity of software. It is NOT an attestation of trustworthiness or intent! It can, however, be used as an enforcement mechanism for previously established trust. Some bad assumptions reported by Matt:
  • Signed == trusted
  • Non-robust signature verification
  • No warning/enforcement of known bad certs
One of the challenges is to detect malicious files and, across millions of events generated daily, how to take advantage of signed code? A signature can be valid but the path suspicious (e.g. C:\Windows\Tasks\notepad.exe).
A bad approach is to just ignore a file because it is signed… Matt demonstrated why! The first attack was based on Subject Interface Package (SIP) hijacking (attacking the validation infrastructure). He manually added a signature to a non-signed PE file. Brilliant! The second attack scenario was certificate cloning plus Root CA installation. Here again, very nice, but it’s trickier to achieve because the victim has to install the Root CA on his computer. The conclusion of the talk was that even signed binaries can’t be trusted… All the details of the attacks are available here:
Then, I switched back to the offensive track to listen to Salvador Mendoza and Leigh-Anne Galloway, who presented “NFC payments: The art of relay and replay attacks“. They started with a recap of NFC technology. Payments via NFC are not new: the first implementation was in 1996 in Korea (public transport). In 2015, Apple Pay was launched. And today, 40% of non-cash operations are performed over NFC! Such attacks are also interesting because the attacker can easily gain some money, banks accept the risk of losing a percentage of transactions, limits on the amount are higher in some countries and, finally, there is no additional cardholder identification. They explained two types of attacks: replay and relay. Both are based on NFC readers coupled with Raspberry Pi devices. They tried to perform live demos but they failed (it’s always touchy to play live with NFC). Fortunately, they had pre-recorded videos. The demos were interesting but, in my opinion, there was a lack of detail for people who don’t play with NFC every day, just like me!

After the lunch, Matt Domko and Jordan Salyer started the last half-day with an interesting topic: “Blue team sprint: Let’s fix these 3 things on Monday”. Matt presented a nice tool last year, called Bropy. The idea was to automatically detect unusual traffic on your networks. This year, he came back with more stuff. The idea of the talk was: “What can you do with a limited amount of resources but in a very effective way?“ Why? Many companies don’t have the time, the staff and/or the budget to deploy commercial solutions. The first idea was to answer the following question: “What the hell is on my network?”, based on a new version of his tool, rewritten in Python 3 and now supporting IPv6 (based on last year’s comments). The next question was: “Why are all my client systems mining bitcoin?”. Jordan explained how to deploy Microsoft’s AppLocker to implement a whitelist of applications that can be executed on a client computer. Many examples were provided, many commands based on PowerShell. I recommend you have a look at the slides when they become available online. Finally, the 3rd question to be addressed was the management of logs based on an ELK stack… classic stuff! Here, Matt gave a very nice tip for when you’re deploying a log management solution: always split the storage of logs from the tools used to process them. This way, if you deploy a new tool in the future, you won’t have to reconfigure all your clients (e.g. if you decide to move from ELK to Splunk because you got some budget).

Like Matt (see above), the next speaker uses Bro. Joe Slowik presented “Mind the gap, Bro: using network monitoring to overcome lack of host visibility in ICS environments“. What does it mean? Monitoring of hosts and networks is mandatory but sometimes it’s not easy to get a complete view of all the hosts present on a network. Environments can be completely different: a Microsoft Windows-based network does not have the behaviour of an ICS network. The idea is to implement Bro to collect data passing across the wire. Bro is very powerful at extracting files from (clear-text) protocols. As Joe said: “If you can see it, Bro can read it“. Next to Bro, Joe deployed a set of YARA rules to analyze the files carved by Bro. This is quite powerful. He then presented some examples of malware impacting ICS networks and how to detect them with his solution (Trisis, Dymalloy and CrashOverride). The conclusion of the presentation was that reducing the response time can limit an infection.

The last time slot for me was in the offensive track, where Raphaël Vinot presented “Ads networks are following you, follow them back”. Raphaël presented a tool he developed to better understand how many (not to say all) websites integrate components from multiple 3rd party providers. While the goal is often completely legit (to gain some revenue), it has already been discovered that ad networks were compromised to distribute malicious content. Based on this, we can consider that any website is potentially dangerous. The situation today, as described by Raphaël:
  • Some website homepages are very big (close to 10MB for some of them!)
  • The content is extremely dynamic
  • Dozen of 3rd party components are loaded
  • There was a lack of tools to analyze such websites.

The result is “Lookyloo”, which downloads the provided homepage and all its objects and presents them in a tree. This kind of analysis is very important in the daily tasks of a CERT. The tool completely emulates the browser and stores data (HTML code, cookies, etc.) in an HTML Archive (.har) that is processed to be displayed in a nice way. The tool is available online and a live demo is available here: This is a very nice tool that should certainly be in your incident handling toolbox!


This is the end of the 2018 edition of TROOPERS… Congratulations to the team for the perfect organization!

[The post TROOPERS 18 Wrap-Up Day #2 has been first published on /dev/random]

The post Enable the slow log in Elastic Search appeared first on

Elasticsearch is pretty cool: you can just fire off HTTP commands to it to change (most of) its settings on the fly, without restarting the service. Here's how you can enable the slowlog to log queries that exceed a certain time threshold.

These are enabled per index you have, so you can be selective about it.

Get all indexes in your Elasticsearch cluster

To start, get a list of all your Elasticsearch indexes. I'm using jq here for the JSON formatting (get jq here).

$ curl -s -XGET ''
green open index1 BV8NLebPuHr6wh2qUnp7XpTLBT 2 0 425739 251734  1.3gb  1.3gb
green open index2 3hfdy8Ldw7imoq1KDGg2FMyHAe 2 0 425374 185515  1.2gb  1.2gb
green open index3 ldKod8LPUOphh7BKCWevYp3xTd 2 0 425674 274984  1.5gb  1.5gb

This shows you have 3 indexes, called index1, index2 and index3.

Enable slow log per index

Make a PUT HTTP call to change the settings of a particular index. In this case, index index3 will be changed.

$ curl -XPUT -d '{"index.search.slowlog.threshold.query.warn" : "50ms","index.search.slowlog.threshold.fetch.warn": "50ms","index.indexing.slowlog.threshold.index.warn": "50ms"}' | jq

If you pretty-print the JSON payload, it looks like this:

{
  "index.search.slowlog.threshold.query.warn" : "50ms",
  "index.search.slowlog.threshold.fetch.warn": "50ms",
  "index.indexing.slowlog.threshold.index.warn": "50ms"
}

Which essentially means: log all queries, fetches and index rebuilds that exceed 50ms with a severity of "warning".

Enable global warning logging

To make sure those warning logs get written to your logs, make sure you enable that logging in your cluster.

$ curl -XPUT -d '{"transient" : {"logger.index.search.slowlog" : "WARN", "logger.index.indexing.slowlog" : "WARN" }}' | jq

Again, pretty-printed payload:

{
  "transient" : {
    "logger.index.search.slowlog" : "WARN",
    "logger.index.indexing.slowlog" : "WARN"
  }
}

These settings aren't persisted across restarts; they are only written to memory and active for the currently running Elasticsearch instance.
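If you want the logger levels to survive a restart, the cluster settings API also accepts a "persistent" block instead of "transient". A sketch of the payload (the logger names assume the same slowlog loggers configured above):

```json
{
  "persistent" : {
    "logger.index.search.slowlog" : "WARN",
    "logger.index.indexing.slowlog" : "WARN"
  }
}
```

Persistent settings are stored in the cluster state and re-applied after a full cluster restart.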


Google Search Console showed me that I have some duplicate content issues on, so I went ahead and tweaked my use of the rel="canonical" link tag.

When you have content that is accessible under multiple URLs, or even on multiple websites, and you don't explicitly tell Google which URL is canonical, Google makes the choice for you. By using the rel="canonical" link tag, you can tell Google which version should be prioritized in search results.

Doing canonicalization well improves your site's SEO, and doing canonicalization wrong can be catastrophic. Let's hope I did it right!
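As a concrete illustration, a canonical declaration is a single <link> element in the page's head; a sketch with a placeholder URL:

```html
<!-- Placeholder URL: replace with the preferred (canonical) address of this page -->
<link rel="canonical" href="https://example.com/blog/my-post" />
```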

While working on my POSSE plan, I realized that my site no longer supported "RSS auto-discovery". RSS auto-discovery is a technique that makes it possible for browsers and RSS readers to automatically find a site's RSS feed. For example, when you enter a site's address in an RSS reader or browser, it should automatically discover that site's feed. It's a small adjustment, but it helps improve the usability of the open web.

To make your RSS feeds auto-discoverable, add a <link> tag inside the <head> tag of your website. You can even include multiple <link> tags, which allows you to make multiple RSS feeds auto-discoverable at the same time. Here is what it looks like for my site:
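A minimal example of what such tags look like (the titles and feed URLs here are placeholders):

```html
<head>
  <!-- Placeholder feed URLs; one <link> per auto-discoverable feed -->
  <link rel="alternate" type="application/rss+xml" title="Posts" href="https://example.com/rss.xml" />
  <link rel="alternate" type="application/atom+xml" title="Posts (Atom)" href="https://example.com/atom.xml" />
</head>
```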

Pretty easy! Make sure to check your own websites — it helps the open web.

March 14, 2018

I’m back in Heidelberg (Germany) for my yearly trip to the TROOPERS conference. I really like this event and I’m glad to be able to attend it again (thanks to the crew!). So, here is my wrap-up for the first day. The conference organization remains the same, with a good venue. It’s a multi-track event, and it’s not easy to manage your schedule because you’re always facing critical choices between two interesting talks scheduled in parallel. As in previous editions, TROOPERS is renowned for its badge! This year, it is a radio receiver that you can use to receive live audio feeds from the different tracks but also some hidden information. Just awesome!

TROOPERS Hospitality

After a short introduction to this edition by Nicki Vonderwell, the main stage was given to Mike Ossmann, the founder of Great Scott Gadgets. The introduction was based on his personal story: “I should not be here, I’m a musician”. Mike is usually better at giving technical talks but, this time, it was a completely different exercise for him and he did well! He tried to open the eyes of the security community about how to share ideas. Indeed, the best way to learn new stuff is to explore what you don’t know: assign time to research and you will find. And, once you have found something new, how do you share it with the community? According to Mike, it’s time to change how we share ideas. For Mike, “Ideas have value, proportionally to how they are shared…”. Sometimes ideas are found by mistake, or found in parallel by two different researchers. Progress is made in small steps and by many people. How do you properly use your idea to get investors coming to you? It takes work to turn an idea into a product! His idea is to abolish patents, which prevent many ideas from becoming real products. Hackers are good at open source but they fall short at writing papers and documentation (compared to academia). Really good and inspiring ideas!

Mike on stage

Then I remained in the “offensive” track to attend Luca Bongiorni’s (@) presentation: “How to bring HID attacks to the next level”. Let’s start with the conclusion of this talk: “Afterwards, you will be more paranoid about USB devices“. What are HIDs, or “Human Interface Devices”? Devices like a mouse, keyboard, game controllers, drawing tablets, etc. Most of the time, no driver is required to use them, they’re whitelisted by DLP tools and they’re not under the AV’s scope. What could go wrong? Luca started with a review of the USB attack landscape. Yes, these attacks have been around for a while:

  • 1st generation: Teensy (2009) and RubberDucky (2010) (both simple keyboard emulators). The RubberDucky was able to  change the VID/PID to evade filters (to mimic another keyboard type)
  • 2nd generation: BadUSB (2014), TurnipSchool (2015)
  • 3rd generation: WHID Injector (2017), P4wnP1 (2017)

But, often, the challenge is to entice the victim to plug the USB device into his/her computer. To help, it’s possible to weaponise cool USB gadgets. Using social engineering, it’s possible to find out what looks interesting to the victim. He likes beer? Just weaponise a USB fridge; chances are higher that he will connect it. Other good candidates are USB plasma balls. The second part of the talk was a review of the cool features and demos of the WHID and P4wnP1. The final idea was to compromise an air-gapped computer. How?

  • Entice the user to connect the USB device
  • It creates a wireless access point that the attacker can use or connect to an SSID and phone home to a C2 server
  • It creates a COM port
  • It creates a keyboard
  • Powershell payload is injected using the keyboard feature
  • Data is sent to the device via the COM port
  • Data is exfiltrated to the attacker
  • Win!

Luca on stage

It’s quite difficult to protect against this kind of attack because people need USB devices! So, never trust USB devices and use a USB condom. There are tools like DuckHunt on Windows or USBguard/USBdeath on Linux. Good tip: Microsoft has an event log ID which reports the connection of a new USB device: 6416.

For the rest of the day, I switched to the defence & management track. The next presentation was given by Ivan Pepelnjak: “Real-Life Network and Security Automation“. Ivan is a network architect and a regular speaker at TROOPERS. I like the way he presents his ideas because he always rolls back to the conclusion of his previous talk. This time, he explained that he received feedback from people who were sceptical about the automation process he explained in 2017. Challenge accepted: he came up with a new idea, to create a path for network engineers to help them implement network automation. Ivan reviewed multiple examples based on real situations. Network engineers contacted him about problems they were facing for which no tool was available. If no tool exists, just create yours. It does not need to be complex; sometimes a bunch of scripts can save you hours of headache! It’s mandatory to find solutions when vendors are not able to provide the right tools. Many examples were provided by Ivan, most of them based on Python & expect scripts. They usually generate text files that are easier to process and index, are searchable and can be fed into a versioning tool. He started with simple examples, like checking the network configuration of thousands of switches (e.g. verifying they use the right DNS or timezone), up to the complex project of completely automating the deployment of new devices in a data center. Even if the talk was not really a “security” talk, it was excellent and provided many ideas!

After the lunch, Nate Warfield, working at the Microsoft Security Response Center, presented “All your cloud are belong to us – Hunting compromise in Azure“. Azure is a key player amongst the cloud providers. With 1.6M IP addresses facing the Internet, it is normal that many hosts running on it are compromised. But how to find them easily? A regular port scan is not doable, would return too many false positives and would take way too long. Nate explained, with many examples, how easy it is to search for bad stuff across a big network. The first example was NoSQL databases. Such services should not be facing the Internet but, guess what, people expose them! NoSQL databases have been nice targets in the past months (MongoDB, Redis, CouchDB, Elasticsearch, …). At a certain point in time, Azure had 2500 compromised DBs. How to find them? Use Shodan of course! A good point is that compromised databases have common names, so it’s convenient to use them as IOCs. Shodan can export results in JSON, which is easily parsable. The next example was only one week old: the Exim RCE (CVE-2018-6789). 1221 vulnerable hosts were found in 5 minutes, again thanks to Shodan and some scripting. And more examples were given, like MQTT and WannaCry or default passwords. The next part of the presentation focused on 3rd party images that can be used by anybody to spawn a VM in a minute. Keep in mind that, with IaaS (“Infrastructure as a Service“), your security is your responsibility. Other features that can be abused are “ExpressRoute” or “Direct Connect”. These features act basically as a VPN and allow your LAN to connect transparently to your Azure resources. That’s nice but… some people don’t know which IP addresses to allow and… they allow the complete Azure IP space to connect to their LAN!? Another threat with 3rd party images: most of them are outdated (more than a few months old), so when you use them, they are lacking the latest security patches.
A best practice is to deploy them, patch them and review the security settings! A very interesting and entertaining talk! Probably the best of the day for me.

The next talk was presented by Mara Tam: “No royal roads: Advanced Analysis for Advanced Threats“. Mara is also a recurring speaker at TROOPERS. A small disappointment: based on the title, I expected something more technical about threat analysis. But it was interesting; Mara’s knowledge of nation-state attacks is really deep and she gave interesting tips. I liked the following observations: “All offensive problems are technical problems and all defensive problems are political problems” and “Attribution is hard, and binary analysis is not enough“. So true!

After a break, Birk Kauer and Flo Grunow, both working for ERNW, presented the results of their research: “Security Appliances Internals“. This talk was scheduled in the defensive track because takeaways and recommendations were given. As Wikipedia says, the term “appliance” comes from “home appliance”, which are usually closed and sealed. Really? Appliances are critical components of the infrastructure because they are connected to the core network, they handle sensitive data and they perform security operations. So we should expect a high security level. Alas, there are two main threats with an appliance:

  • The time to market (vendors must react faster than competitors)
  • Complexity (they are more and more complex and, therefore, difficult to maintain)

Appliance vendors can be classified into two types:

  • Vendor class 1: They do everything on their own (with no or little external dependencies). The best example is the Bluecoat SGOS which has its own custom filesystem and bootloader.
  • Vendor class 2: They integrate 3rd party software (just integration)

There are pros & cons to both models (keeping full control and a smaller attack surface, but people are difficult to hire and maintenance is harder; versus easier patching, etc). Then some old vulnerabilities were shown to prove that even big players can be at risk. Examples were based on previous FireEye and Palo Alto appliances. The next part of the talk focused on new vulnerabilities affecting Check Point firewalls. One of them was in SquirrelMail, used in some Check Point products. The vulnerability was solved by Check Point but the original SquirrelMail product is still unpatched today. Despite multiple attempts to contact the developers (since May 2017!), they decided to go for full disclosure. The next example targeted a SIEM appliance and NXLog. The talk concluded with a series of recommendations and questions to ask to challenge your vendors:

  • How do they handle disclosure, security community? (A no answer is an answer…)
  • Do they train staff?
  • Do they implement features?
  • Do they use 3rd party code?
  • Average time to patch?
  • What about the cloud?

The day ended with a set of lightning talks (20 minutes each). The first one was presented by Bryan K, who explained why layer 8 is often weaponized. For years, security people have considered users dumb idiots who always make mistakes… He reviewed simple human tricks that simply… work! Like USB sticks, indicators of bullshit (money), fear (“Your account will be deleted”) or urgency (“Do it now!”).

The next lightning talk was presented by Jasper Bongertz: “Verifying network IOC hits in millions of packets“. In his daily job, Jasper must often analyze gigabytes of PCAP files for specific events. Usually, an IDS alert matches on a single packet. Example: a website returns a 403 error. That’s nice, but how do you know which HTTP request caused this error? We need to inspect the complete flow. To help with this, Jasper presented the tool he developed. Basically, it takes IDS alerts and a bunch of PCAP files, and extracts the relevant packets to directories tied to the IDS alerts, making it easy to spot the issue. New PCAPs are saved in the PCAPng format with comments to easily find the interesting packets. The tool is called TraceWrangler. A really nice tool to keep in your toolbox!

Finally, Daniel Underhay presented his framework to easily clone RFID cards: Walrus. In red team engagements, it’s often necessary to try to clone an employee’s access card to access some facilities. There are plenty of tools to achieve this, but there was no single tool that could handle multiple readers and store cloned cards in one database. This is the goal of Walrus. The tool could be used by a 3-year-old kid! The project is available here.

As usual, the day ended with the social event and a nice dinner where new as well as old friends met! See you tomorrow for the second wrap-up. Nice talks are on the schedule!

[The post TROOPERS 18 Wrap-Up Day #1 has been first published on /dev/random]

March 13, 2018

David Clough, author of the Async JavaScript WordPress plugin, contacted me on March 5th to ask if I was interested in taking over ownership of his project. Fast-forward to the present: I will release a new version of Async JavaScript on March 13th, which will:

  • integrate all “pro” features (that’s right, free for all)
  • include some rewritten code for easier maintenance
  • be fully i18n-ready (lots of strings to translate :-) )

I will provide support on the forum (be patient though, I don’t have a deep understanding of the code, functionality & quirks yet). I also have some more fixes, smaller changes and micro-improvements in mind (well, a Trello board really) for the next release, but I am not planning major changes or new functionality. Then again, I vaguely remember saying something similar about Autoptimize a long time ago, and look where that got me…

Anyway, kudos to David for a great plugin with a substantial user base (over 30K active installations) and for doing “the right thing” (as in not putting it on the plugin market in search of the highest bidder). I hope I’ll do you proud, man!

March 12, 2018

It's been 5 years since I wrote my last blogpost. Until today.

Over the past years my interest in blogging declined. Combined with the regular burden of Drupal security updates, I seriously considered simply scrapping everything and replacing the website with a simple one-pager. Although a lot of my older blog posts are not very useful (to say the least) or are very out of date, some of them bring back memories of long-gone times. Needless to say, I didn't like the idea of deleting everything from these past 15 years.

Meanwhile, I also have the feeling that social networks, Facebook in particular, have too much control over the content we create. This gave me the idea to start using this blog again. By writing content on my own blog, I can take back control and still easily post it to Facebook, Twitter and so on. A few days later, Dries wrote a post about his very similar plans. Apparently this idea even has a name: the IndieWeb movement.

I decided …

I published the following diary on “Payload delivery via SMB“:

This weekend, while reviewing the data collected over the last few days, I found an interesting way to drop a payload on a victim. This is not brand new and the attack surface is (in my humble opinion) very restricted, but it may be catastrophic. Let’s see why… [Read more]


[The post [SANS ISC] Payload delivery via SMB has been first published on /dev/random]

March 11, 2018


We hit an interesting Laravel "issue" while developing Oh Dear! concerning a MySQL table.

Consider the following database migration to create a new table with some timestamp fields.

Schema::create('downtime_periods', function (Blueprint $table) {
    $table->increments('id');
    $table->unsignedInteger('site_id');
    $table->timestamp('started_at');
    $table->timestamp('ended_at')->nullable();
    $table->timestamps();
});

This turns into a MySQL table like this.

mysql> DESCRIBE downtime_periods;
| Field      | Type             | Null | Key | Default           | Extra                       |
| id         | int(10) unsigned | NO   | PRI | NULL              | auto_increment              |
| site_id    | int(10) unsigned | NO   | MUL | NULL              |                             |
| started_at | timestamp        | NO   |     | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
| ended_at   | timestamp        | YES  |     | NULL              |                             |
| created_at | timestamp        | YES  |     | NULL              |                             |
| updated_at | timestamp        | YES  |     | NULL              |                             |
6 rows in set (0.00 sec)

Notice the Extra column on the started_at field? That was unexpected. On every save/modification to a row, the started_at would be auto-updated to the current timestamp.

The fix in Laravel to avoid this behaviour is to add nullable() to the started_at column in the migration, like this.

$table->timestamp('started_at')->nullable();

To fix an already created table, remove the Extra behaviour with a SQL query.

mysql> ALTER TABLE downtime_periods
CHANGE started_at started_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP;

Afterwards, the table looks like you'd expect:

mysql> describe downtime_periods;
| Field      | Type             | Null | Key | Default | Extra          |
| id         | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| site_id    | int(10) unsigned | NO   | MUL | NULL    |                |
| started_at | timestamp        | YES  |     | NULL    |                |
| ended_at   | timestamp        | YES  |     | NULL    |                |
| created_at | timestamp        | YES  |     | NULL    |                |
| updated_at | timestamp        | YES  |     | NULL    |                |
6 rows in set (0.00 sec)

Lesson learned!

The post Laravel & MySQL auto-adding “on update current_timestamp()” to timestamp fields appeared first on

March 10, 2018

The like is cowardly, useless, lazy. It serves no purpose; it keeps us in a state of stupor.

I am not brave enough to commit, to reshare with my audience. I am not willing enough to reply to the author. So I like. And I chase a higher number of likes. Or of followers who, subtly, are in fact also represented by likes on Facebook pages.

I no longer seek anything but to flatter my ego and, like a grand prince, I occasionally grant a little cheap ego to others. Here, peasants! Have a like: you amused me, moved me or touched me! Your cause is important and deserves my miserable, useless like.

I want to go fast. I consume without thinking. I like a photo for the emotion it immediately stirs, but without ever really digging deeper, without reflecting on what it actually means.

My critical mind melts away, disappears, buried under the likes, which is a windfall for advertisers. And then I end up buying likes. Likes that no longer mean anything, followers who do not exist.

Inevitably, from the moment an observable (the number of likes) is supposed to represent a much harder concept (popularity, success), the system will do everything to maximize the observable while decoupling it from what it is meant to represent.

Followers and likes are now all fake. They carry only the religious meaning we grant them. We sit in our little bubble, interacting with a few ideas that flatter us, that stroke us the right way, that make us addicted. Perhaps not yet addicted enough, according to some who take pride in making that addiction even stronger.

We must break out of this infernal spiral.

The next social network, the finally-social network, will be anti-like. There will be no likes. Just a reshare, a reply or a voluntary micropayment. No statistics, no reshare counter, no number of “people reached”.

I will go even further: there will be no follower counts! Some followers might even follow you invisibly (you would not know they follow you). No more racing after an artificial audience. Our value will come from our content, not from our pseudo-popularity.

When that network comes into being, using it will feel distressing, meaningless. We will have the impression that our writings, our words, sink into a bottomless pit. Am I being read? Am I speaking into the void? Paradoxically, that void might allow us to reclaim the voice we surrendered to advertisers in exchange for a little ego, to free ourselves from our prisons of 140-character populism. And sometimes, miraculously, a few cents of some cryptocurrency will arrive to thank us for our shares. Anonymous, trivial. But so meaningful.

The first version of a network of this kind has existed for centuries. It is called “the book”. Initially, it was reserved for a small elite. As it became popular, it was corrupted, turned into a commercial machine full of statistics that tries to print the same few books, called “best-sellers”, millions of times. But the true book, the anti-like book, still exists, independent, self-published, waiting for its moment to be reborn on our networks, on our e-readers.

The book of the 21st century is here, in our imaginations, under our keyboards, waiting for a network that would combine the simplicity of Twitter with the depth of an epub file, that would blend the spirit of Mastodon with that of Liberapay. A network that would be free, decentralized, uncensorable, uncontrollable. And that would banish forever the dreadful, antisocial “like” button, the one that turned our urges to communicate into spineless interactions monetized by advertisers.


Photo by Alessio Lin on Unsplash

Did you enjoy this read? Support the author on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Even a symbolic donation makes all the difference! Then let's meet on Facebook, Twitter or Mastodon.

This text is published under the CC-By BE license.

March 08, 2018

I published the following diary on “CRIMEB4NK IRC Bot“:

Yesterday, I got my hands on the source code of an IRC bot written in Perl. Yes, IRC (“Internet Relay Chat”) is still alive! If the chat protocol is less used today to handle communications between malware and their C2 servers, it remains an easy way to interact with malicious bots that provide interesting services to attackers. I had a quick look at the source code (poorly written) and found some interesting information… [Read more]

[The post [SANS ISC] CRIMEB4NK IRC Bot has been first published on /dev/random]

March 07, 2018

Short version: Don't ever use zonal statistics as a table in ArcGIS. It manages to get simple calculations, such as calculating an average, wrong.

Example of using zonal statistics between grid cells (from the ArcGIS manual)

Long version: I was hired by a customer to determine why they got unexpected results for their analyses. These analyses led to an official map with legal consequences. After investigating their whole procedure, a number of issues were found. But the major source of errors was one I found very unlikely: it turns out that the algorithm used by ArcGIS Spatial Analyst to determine the average grid value in a shape is wrong. Not just a little wrong. Very wrong. And no, I am not talking about the no-data handling, which was very wrong as well; I'm talking about how the algorithm compares vectors and rasters. Interestingly, this seems to be known, as the ArcGIS manual states:
It is recommended to only use rasters as the zone input, as it offers you greater control over the vector-to-raster conversion. This will help ensure you consistently get the expected results.
So how does ArcGIS compare vectors and rasters? In fact, one could invent a number of algorithms:

  • Use the centers of the pixels and compare those to the vectors (most frequently used and fastest).
  • Use the actual area of each pixel that is covered by the vector.
  • Use only those pixels of which the majority of the area is covered by the vector.

None of these algorithms matches the results we saw from ArcGIS, even though the documentation seems to suggest the first method is used. So what is happening? It seems that ArcGIS first converts your vector file to a raster, not necessarily in the same grid system as the grid you compare it to. It then interpolates your own grid (using an undocumented method) and takes the average of those interpolated values whose cells match the raster it generated. This means pixels outside your shape can influence the result. This mainly seems to occur when large areas are mapped (e.g. Belgium at 5 m).
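To make the alternatives concrete, here is a minimal sketch of the first option, the pixel-center method, in Python with NumPy. This is an illustration, not ArcGIS's algorithm; the 4x4 raster, the unit grid and the triangular zone are made up for the example.

```python
import numpy as np

# Hypothetical 4x4 raster on a unit grid; values are just 0..15 for illustration.
raster = np.arange(16, dtype=float).reshape(4, 4)

# Pixel centers: cell (row, col) has its center at (col + 0.5, row + 0.5).
rows, cols = np.indices(raster.shape)
cx, cy = cols + 0.5, rows + 0.5

# Zone: the triangle with vertices (0, 0), (4, 0) and (0, 4),
# i.e. all points with x >= 0, y >= 0 and x + y <= 4.
mask = (cx >= 0) & (cy >= 0) & (cx + cy <= 4)

# Pixel-center zonal mean: average only the cells whose center lies in the zone.
zonal_mean = raster[mask].mean()
print(zonal_mean)  # 5.0: mean of the 10 cells whose centers fall in the triangle
```

The point of the sketch is that the result depends only on which cell centers fall inside the polygon, so pixels outside the shape can never influence the average, unlike the behaviour described above.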

The average of this triangle is calculated by ArcGIS as 5.47

I don't understand how the market leader in GIS can get such a basic operation so wrong, and this whole search also convinced me how important it is to open the source (or at least the algorithm used) to get reproducible results. Anyway, if you are still stuck with ArcGIS, know that you can install SAGA GIS as a toolbox. It contains sensible algorithms for vector/raster intersection, and they are also approximately 10 times faster than the ArcGIS versions. Or you can have a look at how GRASS and QGIS implement this. All of this, of course, only if you consistently want to get the expected results...

And if your government also uses ArcGIS for determining taxes or other policies, perhaps they too should consider switching to a product which consistently gives the expected results.

Update, March 18 2018: make sure you check out the comments below from Steve Kopp (Spatial Analyst development team) and the ensuing discussion; it is interesting.

It took me way too long (Autoptimize and related stuff is great fun, but it eats a lot of time), but I just pushed out an update to my first ever plugin: WP YouTube Lyte. From the changelog:

So there you have it: lite YouTube embeds, 2018 style, and an example Lyte embed of a 1930s-style Blue Monday …

YouTube Video
Watch this video on YouTube.

Octave is a good choice for getting some serious computing done (it’s largely an open-source Matlab). But for interactive exploration, it feels a bit awkward. If you’ve done any data science work lately, you’ll undoubtedly have used the fantastic Jupyter.

There’s a way to combine both and have the great UI of Jupyter with the processing core of Octave:

Jupyter lab with an Octave kernel

I’ve built a variant of the standard Jupyter Docker images that uses Octave as a kernel, to make it trivial to run this combination. You can find it here.

Comments | More on | @rubenv on Twitter

March 06, 2018

Does everybody still remember the huge impact that WannaCry had on many companies in 2017? The ransomware exploited the vulnerability described in MS17-010, which abuses the SMBv1 protocol. One of the recommendations to protect against this kind of attack was simply to disable SMBv1 (besides NOT exposing it on the Internet ;-).

Last week, I was trying to connect an HP MFP (“Multi-Function Printer”) to an SMB share to store scanned documents. This is very convenient for automating many tasks: the printer saves PDF files in a dedicated Windows share that can be monitored by a script to perform extra processing on the documents, like indexing them, extracting useful data, etc.

I spent a long time trying to understand why it was not working. SMB server, OK! Credentials, OK! Permissions on the share, OK! Firewall, OK! So what? In such cases, Google often remains your best friend, and I found this post on an HP forum:

HP Forum

So, HP still requires SMBv1 to work properly? Starting with the Windows 10 Fall Creators Update, SMBv1 is no longer installed by default, which can lead to incompatibility issues. HP support tweeted this:

HP Support Tweet

The link points to a document that explains how to enable SMBv1… Really? Do you think that I’ll re-enable this protocol? HP has an online document that lists all printers and their compatibility with the different versions of SMB. Of course, mine is flagged as “SMBv1 only”. Is it so difficult to provide an updated firmware?

Microsoft created a nice page that lists all the devices/applications that still use SMBv1. The page is called “SMB1 Product Clearinghouse” and the list is quite impressive!

So, how do you “kill” an obsolete protocol if it is still required by many devices and applications? If users must scan and store documents and it does not work, guess what the average administrator will do? Just re-enable SMBv1…

[The post SMBv1, The Phoenix of Protocols? has been first published on /dev/random]