Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

March 17, 2018

As heard on: Mopo – Tökkö. The sound is vaguely reminiscent of Morphine, no doubt due to the instrumentation, but this Finnish trio is more into jazz (their bio states: “The band draws their inspiration from jazz, punk and the Finnish nature”). They rock big time nonetheless!


If you have time, watch one of their concerts on YouTube, and if you dig them: they’ll be playing in Ghent and Leuven this month!

March 15, 2018

Hello Readers, here is my wrap-up of the second day. Usually, the second day is harder in the morning due to the social events but, at TROOPERS, they organized a hacker run at 06:45 for the most motivated of us. Today, the topic of the 3rd track switched from SAP to Active Directory, but I remained in tracks #1 and #2 to follow a mix of offensive and defensive presentations. By the way, to get a good idea of the TROOPERS atmosphere, here is their introduction video for 2018:

As usual, the second day also started with a keynote, which was assigned to Rossella Mattioli, working at ENISA. She does a lot of promotion for this European agency across multiple security conferences, but TROOPERS is special because their motto is the same: “Make the world (the Internet) a safer place”. ENISA’s role is to close the gap between industries, the security community and the EU Member States. Rossella reviewed the projects and initiatives promoted by ENISA, like the development of papers for all types of industries (power, automation, public transport, etc). To demonstrate that today security must be addressed at a global scale, she gave the following example: think about your journey to TROOPERS and list all the actions you performed with online systems. The list is quite long! Another role of ENISA is to help CSIRTs work better together. Did you know that there are 272 different CSIRTs in the European Union? And they don’t always communicate in an efficient way. That’s why ENISA is working on common taxonomies to help them. Their website has plenty of useful documents available for free, just have a look!

After a short coffee break and interesting chats with peers, I moved to the defensive track to follow Matt Graeber, who presented “Subverting trust in Windows”. Matt warned that the talk was a “hands-on” edition of his complete research, which is available online; the presentation focused on live demos. First, what is “trust” in the context of software?
  • Is the software from a reputable vendor?
  • What is the intent of the software?
  • Can it be abused in any way?
  • What is the protection status of signing keys?
  • Is the certificate issuer reputable?
  • Is the OS validating signer origin and code integrity properly?
Trust maturity level can be matched with enforcement level:
Trust Maturity Levels
But what is the intent of code signing? To attest to the origin and integrity of software. It is NOT an attestation of trust or intent! But it can be used as an enforcement mechanism for previously established trust. Some bad assumptions reported by Matt:
  • Signed == trusted
  • Non-robust signature verification
  • No warning/enforcement of known bad certs
One of the challenges is to detect malicious files and, across millions of events generated daily, how to take advantage of signed code? A signature can be valid but the path suspicious (e.g. C:\Windows\Tasks\notepad.exe).
A bad approach is to ignore a file just because it is signed… Matt demonstrated why! The first attack was based on Subject Interface Package hijacking (attacking the validation infrastructure): he manually added a valid signature to an unsigned PE file. Brilliant! The second attack scenario was certificate cloning combined with a Root CA installation. Here again, very nice, but it’s trickier to achieve because the victim has to install the Root CA on his computer. The conclusion of the talk was that even signed binaries can’t be blindly trusted… All the details of the attacks are available in Matt’s research online.
Then, I switched back to the offensive track to listen to Salvador Mendoza and Leigh-Anne Galloway, who presented “NFC payments: The art of relay and replay attacks“. They started with a recap of the NFC technology. Payments via NFC are not new: the first implementation was in 1996 in Korea (public transport). In 2015, ApplePay was launched. And today, 40% of non-cash operations are performed over NFC! Such attacks are interesting because the attacker can easily gain some money: banks accept the risk of losing a percentage of transactions, amount limits are higher in some countries and, finally, there is no additional cardholder identification. They explained two types of attacks, replay and relay, both based on NFC readers coupled with Raspberry Pi devices. They tried to perform live demos but they failed (it’s always touchy to play live with NFC). Fortunately, they had pre-recorded videos. The demos were interesting but, in my opinion, they lacked details for people who don’t play with NFC every day, just like me!

After the lunch, Matt Domko and Jordan Salyer started the last half-day with an interesting topic: “Blue team sprint: Let’s fix these 3 things on Monday”. Matt presented a nice tool last year, called Bropy, whose idea was to automatically detect unusual traffic on your networks. This year, he came back with more stuff. The idea of the talk was: “What can you do with a limited amount of resources but in a very effective way?“ Why? Many companies don’t have the time, the staff and/or the budget to deploy commercial solutions. The first question to answer was: “What the hell is on my network?”, addressed with a new version of his tool, rewritten in Python 3 and now supporting IPv6 (based on last year’s comments). The next question was: “Why are all my client systems mining bitcoin?”. Jordan explained how to deploy Microsoft’s AppLocker to implement a whitelist of applications that can be executed on a client computer. Many examples were provided, most of them PowerShell commands; I recommend having a look at the slides when they are available online. Finally, the 3rd question addressed the management of logs with an ELK stack… classic stuff! Here, Matt gave a very nice tip for deploying a log management solution: always split the storage of logs from the tools used to process them. This way, if you deploy a new tool in the future, you won’t have to reconfigure all your clients (ex: if you decide to move from ELK to Splunk because you got some budget).

If Matt is using Bro (see above), so is the next speaker. Joe Slowik presented “Mind the gap, Bro: using network monitoring to overcome lack of host visibility in ICS environments“. What does it mean? Monitoring hosts and networks is mandatory but, sometimes, it’s not easy to get a complete view of all the hosts present on a network, and environments can be completely different: a Microsoft Windows-based network does not behave like an ICS network. The idea is to deploy Bro to collect data passing across the wire. Bro is very powerful at extracting files from (clear-text) protocols. As Joe said: “If you can see it, Bro can read it“. Next to Bro, Joe deployed a set of YARA rules to analyze the files carved by Bro. This is quite powerful. Then, he presented some examples of malware impacting ICS networks (Trisis, Dymalloy and CrashOverride) and how to detect them with his solution. The conclusion of the presentation was that reducing the response time can limit an infection.

The last time slot for me was in the offensive track, where Raphaël Vinot presented “Ads networks are following you, follow them back”. Raphaël presented a tool he developed to better understand how many (to not say all) websites integrate components from multiple 3rd party providers. While the goal is often completely legit (to gain some revenue), it has already happened that ad networks were compromised to distribute malicious content. Based on this, we can consider any website potentially dangerous. The situation today, as described by Raphaël:
  • Some website homepages are very big (close to 10MB for some of them!)
  • The content is extremely dynamic
  • Dozens of 3rd party components are loaded
  • There was a lack of tools to analyze such websites.

The result is “Lookyloo”, which downloads the provided homepage with all its objects and presents them in a tree. This kind of analysis is very important during the daily tasks of a CERT. The tool fully emulates the browser and stores data (HTML code, cookies, etc) in an HTML Archive (.har) that is processed to be displayed in a nice way. The tool is available online, together with a live demo. This is a very nice tool that certainly deserves a place in your incident handling toolbox!


This is the end of the 2018 edition of TROOPERS… Congratulations to the team for the perfect organization!

[The post TROOPERS 18 Wrap-Up Day #2 has been first published on /dev/random]

The post Enable the slow log in Elastic Search appeared first on

Elasticsearch is pretty cool: you can just fire off HTTP commands to it to change (most of) its settings on the fly, without restarting the service. Here's how you can enable the slow log to log queries that exceed a certain time threshold.

These are enabled per index you have, so you can be selective about it.

Get all indexes in your Elasticsearch

To start, get a list of all your Elasticsearch indexes. I'm using jq here for the JSON formatting (get jq here).

$ curl -s -XGET 'http://localhost:9200/_cat/indices'
green open index1 BV8NLebPuHr6wh2qUnp7XpTLBT 2 0 425739 251734  1.3gb  1.3gb
green open index2 3hfdy8Ldw7imoq1KDGg2FMyHAe 2 0 425374 185515  1.2gb  1.2gb
green open index3 ldKod8LPUOphh7BKCWevYp3xTd 2 0 425674 274984  1.5gb  1.5gb

This shows you have 3 indexes, called index1, index2 and index3.

Enable slow log per index

Make a PUT HTTP call to change the settings of a particular index. In this case, index index3 will be changed.

$ curl -s -XPUT 'http://localhost:9200/index3/_settings' -H 'Content-Type: application/json' -d '{"index.search.slowlog.threshold.query.warn" : "50ms","index.search.slowlog.threshold.fetch.warn": "50ms","index.indexing.slowlog.threshold.index.warn": "50ms"}' | jq

If you pretty-print the JSON payload, it looks like this:

{
  "index.search.slowlog.threshold.query.warn" : "50ms",
  "index.search.slowlog.threshold.fetch.warn": "50ms",
  "index.indexing.slowlog.threshold.index.warn": "50ms"
}

Which essentially means: log all queries, fetches and index rebuilds that exceed 50ms with a severity of "warning".
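Since the thresholds live in each index's settings, it is easy to script the PUT across many indices. A minimal Python sketch of that loop, assuming the default http://localhost:9200 endpoint (the helper names are mine):

```python
import json
import urllib.request

def slowlog_payload(threshold="50ms"):
    """Build the _settings payload covering query, fetch and index slow logs."""
    return {
        "index.search.slowlog.threshold.query.warn": threshold,
        "index.search.slowlog.threshold.fetch.warn": threshold,
        "index.indexing.slowlog.threshold.index.warn": threshold,
    }

def enable_slowlog(index, threshold="50ms", host="http://localhost:9200"):
    """PUT the slow log thresholds for a single index."""
    req = urllib.request.Request(
        "%s/%s/_settings" % (host, index),
        data=json.dumps(slowlog_payload(threshold)).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example: enable_slowlog("index3", "50ms") mirrors the curl call above.
```

Looping over the names returned by _cat/indices then enables the slow log everywhere, or only on the indices you care about.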

Enable global warning logging

To make sure those warning logs get written to your logs, make sure you enable that logging in your cluster.

$ curl -s -XPUT 'http://localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{"transient" : {"logger.index.search.slowlog" : "WARN", "logger.index.indexing.slowlog" : "WARN" }}' | jq

Again, pretty-printed payload:

{
  "transient" : {
    "logger.index.search.slowlog" : "WARN",
    "logger.index.indexing.slowlog" : "WARN"
  }
}
These settings aren't persisted on restart: they are only written to memory and active for the currently running Elasticsearch instance.


Google Search Console showed me that I have some duplicate content issues on my site, so I went ahead and tweaked my use of the rel="canonical" link tag.

When you have content that is accessible under multiple URLs, or even on multiple websites, and you don't explicitly tell Google which URL is canonical, Google makes the choice for you. By using the rel="canonical" link tag, you can tell Google which version should be prioritized in search results.

Doing canonicalization well improves your site's SEO, and doing canonicalization wrong can be catastrophic. Let's hope I did it right!

While working on my POSSE plan, I realized that my site no longer supported "RSS auto-discovery". RSS auto-discovery is a technique that makes it possible for browsers and RSS readers to automatically find a site's RSS feed. For example, when you enter a site's homepage URL in an RSS reader or browser, it should automatically discover that site's feed URL. It's a small adjustment, but it helps improve the usability of the open web.

To make your RSS feeds auto-discoverable, add a <link> tag inside the <head> tag of your website. You can even include multiple <link> tags, which allows you to make multiple RSS feeds auto-discoverable at the same time. Such a tag looks like this (with a generic title and feed path): <link rel="alternate" type="application/rss+xml" title="RSS feed" href="/rss.xml">

Pretty easy! Make sure to check your own websites — it helps the open web.

March 14, 2018

I’m back in Heidelberg (Germany) for my yearly trip to the TROOPERS conference. I really like this event and I’m glad to be able to attend it again (thanks to the crew!). So, here is my wrap-up for the first day. The conference organization remains the same, with a good venue. It’s a multi-track event, and it's not easy to manage your schedule because you’re constantly facing critical choices between two interesting talks running in parallel. As in previous editions, TROOPERS is renowned for its badge! This year, it is a radio receiver that you can use to receive live audio feeds from the different tracks, but also some hidden information. Just awesome!

TROOPERS Hospitality

After a short introduction to this edition by Nicki Vonderwell, the main stage was given to Mike Ossmann, the founder of Great Scott Gadgets. The introduction was based on his personal story: “I should not be here, I’m a musician”. Mike is usually better at giving technical talks but, this time, it was a completely different exercise for him and he did well! He tried to open the eyes of the security community about how to share ideas. Indeed, the best way to learn new stuff is to explore what you don’t know: assign time to research and you will find. And, once you have found something new, how do you share it with the community? According to Mike, it’s time to change how we share ideas: “Ideas have value, proportionally to how they are shared…”. Sometimes ideas are found by mistake, or found in parallel by two different researchers. Progress is made in small steps and by many people. How do you properly use your idea to get investors coming to you? It takes work to turn an idea into a product! Mike's idea is to abolish patents, which prevent many ideas from becoming real products. Hackers are good at open source but they lag at writing papers and documentation (compared to academics). Really good and inspiring ideas!

Mike on stage

Then I remained in the “offensive” track to attend Luca Bongiorni’s presentation: “How to bring HID attacks to the next level”. Let’s start with the conclusion of this talk: “After, you will be more paranoid about USB devices“. What are HIDs, or “Human Interface Devices”? Things like mice, keyboards, game controllers, drawing tablets, etc. Most of the time, no driver is required to use them, they’re whitelisted by DLP tools and they're not under the AV’s scope. What could go wrong? Luca started with a review of the USB attack landscape. Yes, these attacks have been around for a while:

  • 1st generation: Teensy (2009) and RubberDucky (2010), both simple keyboard emulators. The RubberDucky was able to change its VID/PID to evade filters (to mimic another keyboard type)
  • 2nd generation: BadUSB (2014), TurnipSchool (2015)
  • 3rd generation: WHID Injector (2017), P4wnP1 (2017)

But, often, the challenge is to entice the victim to plug the USB device into his/her computer. To help, it’s possible to weaponise cool USB gadgets. Using social engineering, you can find out what looks interesting to the victim. He likes beer? Just weaponise a USB fridge: there is a better chance that he will connect it. Other good candidates are USB plasma balls. The second part of the talk was a review of the cool features and demos of the WHID and P4wnP1. The final idea was to compromise an air-gapped computer. How?

  • Entice the user to connect the USB device
  • It creates a wireless access point that the attacker can use, or it connects to a known SSID and phones home to a C2 server
  • It creates a COM port
  • It creates a keyboard
  • A PowerShell payload is injected using the keyboard feature
  • Data is sent to the device via the COM port
  • Data is exfiltrated to the attacker
  • Win!

Luca on stage

It’s quite difficult to protect against this kind of attack because people need USB devices! So, never trust USB devices and use a USB condom. There are tools like DuckHunt on Windows or USBGuard/USBdeath on Linux. Good tip: Windows writes an event which reports the connection of a new external device: event ID 6416.

For the rest of the day, I switched to the defence & management track. The next presentation was given by Ivan Pepelnjak: “Real-Life network and security automation“. Ivan is a network architect and a regular speaker at TROOPERS. I like the way he presents his ideas because he always rolls back to the conclusion of his previous talk. This time, he explained that he received feedback from people who were sceptical about the automation process he presented in 2017. Challenge accepted: he came back with a new idea, to create a path for network engineers to help them implement network automation. Ivan reviewed multiple examples based on real situations, where network engineers contacted him about problems they were facing and no tool was available. If no tool exists, just create yours! It does not need to be complex, sometimes a bunch of scripts can save you hours of headaches! It’s mandatory to find solutions when vendors are not able to provide the right tools. Many examples were provided by Ivan, most of them based on Python & Expect scripts. They usually generate text files that are easier to process, index and search, and can be fed into a versioning tool. He started with simple examples, like checking the network configuration of thousands of switches (ex: verify that they use the right DNS or timezone), up to a complex project to completely automate the deployment of new devices in a data center. Even if the talk was not really a “security” talk, it was excellent and provided many ideas!

After the lunch, Nate Warfield, working at the Microsoft Security Response Center, presented “All your cloud are belong to us – Hunting compromise in Azure“. Azure is a key player amongst the cloud providers. With 1.6M IP addresses facing the Internet, it is normal that many hosts running on it are compromised. But how to find them easily? A regular port scan is not doable: it would return too many false positives and take way too long. Nate explained, with many examples, how easy it is to search for bad stuff across a big network. The first example was NoSQL databases. Such services should not face the Internet but, guess what, people expose them! NoSQL databases have been nice targets in the past months (MongoDB, Redis, CouchDB, Elasticsearch, …). At a certain point in time, Azure had 2500 compromised databases. How to find them? Use Shodan of course! A good point is that compromised databases have common names, so it’s convenient to use those names as IOCs. Shodan can export results in JSON, which is easily parsable. The next example was only one week old: the Exim RCE (CVE-2018-6789). 1221 vulnerable hosts were found in 5 minutes, again thanks to Shodan and some scripting. More examples were given, like MQTT, WannaCry and default passwords. The next part of the presentation focused on 3rd party images that anybody can use to spawn a VM in a minute. Keep in mind that, in IaaS (“Infrastructure as a Service“), your security is your responsibility. Other features that can be abused are “Express Route” or “Direct Connect”. These features basically act as a VPN and allow your LAN to connect transparently to your Azure resources. That’s nice but… some people don’t know which IP addresses to allow and… they allow the complete Azure IP space to connect to their LAN!? Another threat with 3rd party images: most of them are outdated (more than a few months old), so when you use them, they lack the latest security patches. A best practice is to deploy them, patch them and review their security settings! A very interesting and entertaining talk, probably the best of the day for me.
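To illustrate how parsable those Shodan exports are, here is a short sketch (my own illustration; the IOC names are hypothetical) that walks a JSON export, one JSON object per line, and flags banners matching known-bad database names, using Shodan's ip_str, port and data banner fields:

```python
import json

# Hypothetical names left behind in compromised databases (illustration only).
IOC_NAMES = {"hacked_by", "please_read_me"}

def parse_export(lines):
    """Yield (ip, port, banner) tuples from a Shodan JSON export
    (one JSON object per line)."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        banner = json.loads(line)
        yield banner["ip_str"], banner["port"], banner.get("data", "")

def flag_compromised(lines):
    """Return hosts whose banner contains one of the known-bad names."""
    return [(ip, port) for ip, port, data in parse_export(lines)
            if any(ioc in data.lower() for ioc in IOC_NAMES)]
```

Feed it the file produced by a Shodan export and you get a ready-made IOC hit list in a few seconds.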

The next talk was presented by Mara Tam: “No royal roads: Advanced Analysis for Advanced Threats“. Mara is also a recurring speaker at TROOPERS. A small disappointment: based on the title, I expected something more technical about threat analysis. But it was interesting; Mara’s knowledge of nation-state attacks is really deep and she gave interesting tips. I liked the following facts: “All offensive problems are technical problems and all defensive problems are political problems”, and this one: “Attribution is hard, and binary analysis is not enough“. So true!

After a break, Birk Kauer and Flo Grunow, both working for ERNW, presented the results of their research: “Security appliances internals“. This talk was scheduled in the defensive track because takeaways and recommendations were given. As Wikipedia says, the term “appliance” comes from “home appliance”: devices that are usually closed and sealed. Really? Appliances are critical components of the infrastructure because they are connected to the core network, they handle sensitive data and they perform security operations. So we should expect a high level of security. Alas, there are two main threats with an appliance:

  • The time to market (vendors must react faster than competitors)
  • Complexity (they are more and more complex and, therefore, difficult to maintain)

An appliance can be classified into two types:

  • Vendor class 1: They do everything on their own (with no or little external dependencies). The best example is the Bluecoat SGOS which has its own custom filesystem and bootloader.
  • Vendor class 2: They integrate 3rd party software (just integration)

There are pros & cons for both models (like keeping full control and a smaller attack surface, but skilled people are difficult to hire and maintenance is harder, versus easier patching, etc). Then some old vulnerabilities were shown to prove that even big players can be at risk, with examples based on previous FireEye and PaloAlto appliances. The next part of the talk focused on new vulnerabilities affecting CheckPoint firewalls. One of them was in SquirrelMail, which is used in some CheckPoint products. The vulnerability was solved by CheckPoint but the original SquirrelMail product is still unpatched today. After multiple unsuccessful attempts to contact the developers (since May 2017!), they decided to go for full disclosure. The next example targeted a SIEM appliance and NXLog. The conclusion of the talk was a series of recommendations and questions to ask to challenge your vendors:

  • How do they handle disclosure and the security community? (No answer is also an answer…)
  • Do they train staff?
  • Do they implement features?
  • Do they use 3rd party code?
  • Average time to patch?
  • What about the cloud?

The day ended with a set of lightning talks (20 minutes each). The first one was presented by Bryan K, who explained why layer 8 is often weaponized. For years, security people have considered users dumb idiots who always make mistakes… He reviewed simple human tricks that simply… work! Like USB sticks, Indicators of Bullshit (money), fear (“Your account will be deleted”) or urgency (“Do it now!”).

The next lightning talk was presented by Jasper Bongertz: “Verifying network IOCs hits in millions of packets“. In his daily job, Jasper must often analyze gigabytes of PCAP files for specific events. Usually, an IDS alert matches on a single packet. Example: a website returns a 403 error. That’s nice, but how do you know which HTTP request this error is coming from? You need to inspect the complete flow. To help him, Jasper presented the tool he developed. Basically, it takes IDS alerts plus a bunch of PCAP files and extracts the relevant packets into directories tied to the IDS alerts, making it easy to spot the issue. The new PCAPs are saved in the PCAPng format, with comments to easily find the interesting packets. The tool is called TraceWrangler. Really nice tool to keep in your toolbox!
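Conceptually, the matching step of such a tool boils down to grouping packets by a direction-insensitive 5-tuple and mapping each alert to its flow. A simplification of mine (not TraceWrangler's actual code; packets are plain dicts here rather than parsed PCAP records):

```python
from collections import defaultdict

def flow_key(pkt):
    """Direction-insensitive 5-tuple, so both halves of a conversation share a key."""
    a = (pkt["src"], pkt["sport"])
    b = (pkt["dst"], pkt["dport"])
    return (pkt["proto"],) + tuple(sorted((a, b)))

def packets_per_alert(alerts, packets):
    """Map each IDS alert (by signature ID) to all packets of the flow it fired on."""
    flows = defaultdict(list)
    for pkt in packets:
        flows[flow_key(pkt)].append(pkt)
    return {alert["sid"]: flows.get(flow_key(alert), []) for alert in alerts}
```

Because the key sorts the two endpoints, the 403 response is grouped together with the request that triggered it, which is exactly what you need to see the full exchange behind an alert.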

Finally, Daniel Underhay presented his framework to easily clone RFID cards: Walrus. In red team engagements, it’s often necessary to try to clone an employee’s access card to access some facilities. There are plenty of tools to achieve this, but there was a lack of a single tool that could handle multiple readers and store cloned cards in a unique database. This is the goal of Walrus. The tool could be used by a 3-year-old kid! The project is available online.

As usual, the day ended with the social event and a nice dinner where new as well as old friends were met! See you tomorrow for the second wrap-up. Nice talks are on the schedule!

[The post TROOPERS 18 Wrap-Up Day #1 has been first published on /dev/random]

Stephen Hawking passed away this morning at age 76. He was an inspiration in so many ways: his contributions to science unlocked a universe of exploration and he helped to dismantle stigma surrounding disability. Perhaps most importantly, he dedicated his life to meaningful work that he was deeply passionate about; a message that is important for all. Rest in peace, Professor.

March 13, 2018

I'm excited to share that Acquia plans to increase the capacity of our global research and development team by 60 percent in 2018. Last year, we saw adoption of Acquia Lift more than double, and adoption of Acquia Cloud Site Factory nearly double. Increasing our investment in engineering will allow us to support this growth, in addition to accelerating our innovation. Acquia's product teams have an exciting year ahead!

David Clough, author of the Async JavaScript WordPress plugin, contacted me on March 5th to ask if I was interested in taking over ownership of his project. Fast-forward to the present: I will release a new version of Async JavaScript on March 13th, which will:

  • integrate all “pro” features (that’s right, free for all)
  • include some rewritten code for easier maintenance
  • be fully i18n-ready (lots of strings to translate :-) )

I will provide support on the forum (be patient though, I don’t have a deep understanding of the code, functionality & quirks yet). I also have some more fixes/smaller changes/micro-improvements in mind (well, a Trello board really) for the next release, but I am not planning major changes or new functionality. Then again, I vaguely remember saying something similar about Autoptimize a long time ago and look where that got me …

Anyway, kudos to David for a great plugin with a substantial user base (over 30K active installations) and for doing “the right thing” (as in not putting it on the plugin market in search of the highest bidder). I hope I’ll do you proud, man!

The goal of this tutorial is to show how to use Drupal 8.5's new off-canvas dialog in your own Drupal modules.

The term "off-canvas" refers to the ability for a dialog to slide in from the side of the page, in addition to resizing the page so that no part of it is obstructed by the dialog. You can see the off-canvas dialog in action in this animated GIF:

Animated GIF of Drupal off-canvas dialog tutorial

This new Drupal 8.5 feature allows us to improve the content authoring and site building experience by turning Drupal outside-in. We can use the off-canvas dialog to enable the content creator or site builder to seamlessly edit content or configuration in-place, and see any changes take effect immediately. There is no need to navigate to the administrative backend to make edits. As you'll see in this tutorial, it's easy to use the off-canvas dialog in your own Drupal modules.

I use a custom album module on my site for managing my photo albums and for embedding images in my posts. With Drupal 8.5, I can now take advantage of the new off-canvas dialog to edit the title, alt-attribute and captions of my photos. As you can see in the animated GIF above, every photo gets an "Edit"-link. Clicking the "Edit"-link opens up the off-canvas dialog. This allows me to edit a photo in context, without having to go to another page to make changes.

So how did I do that?

Step 1: Create your form, the Drupal way

Every image on my site has its own unique path, and each image can be edited at its own edit URL. For example, one of those paths gives you the image of the Niagara Falls; if you had the right permissions, you could edit that image at its edit URL (you don't 😉). Because you don't have the right permissions, I'll show you a screenshot of the edit form instead:

Drupal off-canvas dialog tutorial: regular Drupal form

I created those paths (or routes) using Drupal's routing system, and I created the form using Drupal's regular form API. I'm not going to explain how to create a Drupal form in this post, but you can read more in the routing system documentation and the form API documentation. Here is the code for creating the form:

  public function buildForm(array $form, FormStateInterface $form_state, $image = NULL) {
    $form['path'] = [
      '#type' => 'hidden',
      '#value' => $image->getUrlPath(),  // Unique ID of the image
    ];
    $form['title'] = [
      '#type' => 'textfield',
      '#title' => t('Title'),
      '#default_value' => $image->getTitle(),
    ];
    $form['alt'] = [
      '#type' => 'textfield',
      '#title' => t('Alt'),
      '#default_value' => $image->getAlt(),
    ];
    $form['caption'] = [
      '#type' => 'textarea',
      '#title' => t('Caption'),
      '#default_value' => $image->getCaption(),
    ];
    $form['submit'] = [
      '#type' => 'submit',
      '#value' => t('Save image'),
    ];

    return $form;
  }

  public function submitForm(array &$form, FormStateInterface $form_state) {
    $values = $form_state->getValues();

    $image = Image::loadImage($values['path']);
    if ($image) {
      // Update the image with the submitted title, alt and caption values.
    }

    $form_state->setRedirectUrl(Url::fromUserInput('/album/' . $image->getUrlPath()));
  }


Step 2: Add an edit link to my images

First, I want to overlay an "Edit"-button over my image:

Drupal off-canvas dialog tutorial:

If you were to look at the HTML code, the image link uses a tag like this: <a href="…" class="edit-button">Edit</a>

Clicking the link doesn't open the off-canvas dialog yet. The class="edit-button" is used to style the button with CSS and to overlay it on top of the image.

Step 3: Opening the off-canvas dialog

Next, we have to tell Drupal to open the form in the off-canvas dialog when the "Edit"-link is clicked. To open the form in the off-canvas dialog, extend that tag to: <a href="…" class="edit-button use-ajax" data-dialog-type="dialog" data-dialog-renderer="off_canvas" data-dialog-options='{"width": 400}'>Edit</a>

Some extra HTML in the tag is all it took; it took my regular Drupal form, showed it in an off-canvas dialog, and even styled it! As I wrote above, it is easy to use the off-canvas dialog in your own modules. Hopefully you'll be inspired to take advantage of this new functionality.

Drupal off-canvas dialog tutorial: open dialog

There are several things being added though, so let's break it down. First we add a class called use-ajax:

  • use-ajax is the class that is necessary for any link including dialogs that use Drupal's Ajax API.

We also added some data-dialog-* attributes: data-dialog-type and data-dialog-renderer select Drupal's off-canvas dialog renderer, while data-dialog-options passes dialog options such as the width.

You can create a link like the example above using a Drupal render array; it will open a form page in the off-canvas dialog and redirect the submitted form back to the current page:

$elements['link'] = [
  '#title' => 'Edit image',
  '#type' => 'link',
  '#url' => Url::fromRoute('album_image', ['album' => $album, 'image' => $image],
    ['query' => \Drupal::service('redirect.destination')->getAsArray()]),
  '#attributes' => [
    'class' => ['use-ajax'],
    'data-dialog-type' => 'dialog',
    'data-dialog-renderer' => 'off_canvas',
    'data-dialog-options' => Json::encode(['width' => 400]),
  ],
  '#attached' => [
    'library' => [
      'core/drupal.dialog.ajax',
    ],
  ],
];
Because the dialog functionality might not be needed on every page, Drupal won't load it unless needed. We use the #attached element to tell Drupal that we want the JavaScript dialog system to be loaded for this page. It's a bit more work, but it keeps Drupal efficient.

Improving the developer experience

Applying the off-canvas dialog to my blog and writing this tutorial uncovered several opportunities to improve the developer experience. It seems unnecessary to set 'class' => ['use-ajax'] when data-dialog-type is set. Why do I need to specify both a data-dialog-type and a data-dialog-renderer? And why can't Drupal automatically attach core/drupal.dialog.ajax when data-dialog-type is set?

When I discussed these challenges with Ted Bowman, one of the developers of the off-canvas dialog, he created an issue to work on off-canvas developer experience improvements. Hopefully, in a future version of Drupal, you will be able to create an off-canvas dialog link as simply as:

$link = Link::createFromRoute('Edit image', 'album_image', 
   ['album' => $album, 'image' => $image])->openInOffCanvasDialog();
$elements['link'] = $link->toRenderable();

Special thanks to Ted Bowman and Samuel Mortenson for their feedback to this blog post.

March 12, 2018

It's been 5 years since I wrote my last blogpost. Until today.

The past few years my interest in blogging declined. Combined with the regular burden of Drupal security updates, I seriously considered simply scrapping everything and replacing the website with a simple one-pager. Although a lot of my older blog posts are not very useful (to say the least) or are very out of date, some of them bring back memories of long-gone times. Needless to say, I didn't like the idea of deleting everything from the past 15 years.

Meanwhile I also have the feeling that social networks, Facebook in particular, have too much control over the content we create. This gave me the idea to start using this blog again. By writing content on my own blog, I can take control back and still easily post it to Facebook, Twitter and so on. A few days later Dries wrote a post about his very similar plans. Apparently this idea even has a name: The IndieWeb movement.

I decided …

I published the following diary on “Payload delivery via SMB“:

This weekend, while reviewing the data collected over the last few days, I found an interesting way to drop a payload on a victim. This is not brand new and the attack surface is (in my humble opinion) very restricted, but it may be catastrophic. Let’s see why… [Read more]


[The post [SANS ISC] Payload delivery via SMB has been first published on /dev/random]

March 11, 2018

Talking about the many contributors to Drupal 8.5, a few of them shouted out on social media that they got their first patch in Drupal 8.5. They were excited but admitted it was more challenging than anticipated. It's true that contributing to Drupal can be challenging, but it is also true that it will accelerate your learning, and that you will likely feel an incredible sense of reward and excitement. And maybe best of all, through your collaboration with others, you'll forge relationships and friendships. I've been contributing to Open Source for 20 years and can tell you that that combined "passion + learning + contribution + relationships"-feeling is one of the most rewarding feelings there is.

I just updated my site to Drupal 8.5 and spent some time reading the Drupal 8.5 release notes. Seeing all the different issues and contributors in the release notes is a good reminder that many small contributions add up to big results. When we all contribute in small ways, we can make a lot of progress together.

The post Laravel & MySQL auto-adding “on update current_timestamp()” to timestamp fields appeared first on

We hit an interesting Laravel "issue" while developing Oh Dear! concerning a MySQL table.

Consider the following database migration to create a new table with some timestamp fields.

Schema::create('downtime_periods', function (Blueprint $table) {
    $table->increments('id');
    $table->unsignedInteger('site_id');
    $table->timestamp('started_at');
    $table->timestamp('ended_at')->nullable();
    $table->timestamps();
});

This turns into a MySQL table like this.

mysql> DESCRIBE downtime_periods;
| Field      | Type             | Null | Key | Default           | Extra                       |
| id         | int(10) unsigned | NO   | PRI | NULL              | auto_increment              |
| site_id    | int(10) unsigned | NO   | MUL | NULL              |                             |
| started_at | timestamp        | NO   |     | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
| ended_at   | timestamp        | YES  |     | NULL              |                             |
| created_at | timestamp        | YES  |     | NULL              |                             |
| updated_at | timestamp        | YES  |     | NULL              |                             |
6 rows in set (0.00 sec)

Notice the Extra column on the started_at field? That was unexpected. On every save/modification to a row, the started_at would be auto-updated to the current timestamp.
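This is MySQL's legacy TIMESTAMP behavior, not something Laravel adds: unless explicit_defaults_for_timestamp is enabled, the first NOT NULL TIMESTAMP column in a table implicitly receives both clauses. A quick sketch of what MySQL does behind your back:

```sql
-- Declared as:
CREATE TABLE demo (started_at TIMESTAMP NOT NULL);

-- With explicit_defaults_for_timestamp = OFF, MySQL treats started_at as:
--   started_at TIMESTAMP NOT NULL
--     DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
SHOW CREATE TABLE demo;
```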

The fix in Laravel to avoid this behaviour is to add nullable() to the migration, like this.
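In Laravel's schema builder the change amounts to marking the column nullable, something like this (a sketch, not necessarily the post's verbatim code):

```php
$table->timestamp('started_at')->nullable();
```

A nullable timestamp column no longer triggers MySQL's implicit DEFAULT/ON UPDATE clauses.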


To fix an already created table, remove the Extra behaviour with a SQL query.

mysql> ALTER TABLE downtime_periods
CHANGE started_at started_at TIMESTAMP NULL DEFAULT NULL;

Afterwards, the table looks like you'd expect:

mysql> describe downtime_periods;
| Field      | Type             | Null | Key | Default | Extra          |
| id         | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| site_id    | int(10) unsigned | NO   | MUL | NULL    |                |
| started_at | timestamp        | YES  |     | NULL    |                |
| ended_at   | timestamp        | YES  |     | NULL    |                |
| created_at | timestamp        | YES  |     | NULL    |                |
| updated_at | timestamp        | YES  |     | NULL    |                |
6 rows in set (0.00 sec)

Lesson learned!


March 10, 2018

The like is cowardly, useless, lazy. It serves no purpose; it keeps us in a state of stupor.

I am not brave enough to take a stand, to reshare to my audience. I am not willing enough to reply to the author. So I like. And I try to increase my number of likes. Or of followers, who, subtly, are in fact represented by likes on Facebook pages.

I no longer seek anything but to flatter my ego and, magnanimously, I occasionally grant a little cheap ego to others. Here, peasants! Have a like, you amused, moved or touched me! Your cause is important and deserves my miserable, useless like.

I want to go fast. I consume without thinking. I like a photo for the emotion it immediately stirs, but without ever really digging deeper, without thinking about what it actually means.

My critical mind melts away, disappears, buried under the likes, which is a godsend for advertisers. And then I end up buying likes. Likes that no longer mean anything, followers who do not exist.

Inevitably, from the moment an observable (the number of likes) is supposed to represent a much harder concept (popularity, success), the system will do everything to maximize the observable while decoupling it from what it represents.

Followers and likes are now all fake. They carry only the religious meaning we grant them. We sit in our little bubble, interacting with a few ideas that flatter us, stroke us the right way, make us addicted. Perhaps not addicted enough, according to some who take pride in making that addiction even stronger.

We must break out of this infernal spiral.

The next social network, the finally-social network, will be anti-like. There will be no likes. Just a reshare, a reply or a voluntary micropayment. No statistics, no reshare counter, no number of "people reached".

I will go even further: there will be no follower counters! Some followers might even follow you invisibly (you would not know they follow you). No more chasing artificial audiences. Our value will come from our content, not from our pseudo-popularity.

When that network comes to life, using it will feel distressing, meaningless. We will feel as if our writings, our words are sinking into a bottomless pit. Am I being read? Am I speaking into the void? Paradoxically, that void might allow us to reclaim the voice we handed over to advertisers in exchange for a little ego, to free ourselves from our prisons of 140-character populism. And sometimes, miraculously, a few cents of some cryptocurrency will arrive to thank us for what we shared. Anonymous, trivial. But so meaningful.

The first version of such a network has existed for centuries. It is called "the book". Initially it was reserved for a minority elite. As it became popular, it was corrupted, turned into a commercial machine full of statistics that tries to print the same few books, called "best-sellers", millions of times. But the true book, the anti-like book, still exists: independent, self-published, waiting for its moment to be reborn on our networks, on our e-readers.

The book of the 21st century is here, in our imaginations, under our keyboards, waiting for a network that would combine the simplicity of Twitter with the depth of an epub file, that would blend the spirit of Mastodon with that of Liberapay. That would be free, decentralized, uncensorable, uncontrollable. And that would banish forever the dreadful, antisocial "like" button, the one that turned our urges to communicate into craven interactions monetized by advertisers.


Photo by Alessio Lin on Unsplash

Did you enjoy this read? Support the author on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Even a symbolic donation makes all the difference! Let's meet afterwards on Facebook, Twitter or Mastodon.

This text is published under the CC-By BE license.

March 08, 2018

I recently had the opportunity to read Tiffany Farriss' Drupal Association Retrospective. In addition to being the CEO of, Tiffany also served on the Drupal Association Board of Directors for nine years. In her retrospective post, Tiffany shares what the Drupal Association looked like when she joined the board in 2009, and how the Drupal Association continues to grow today.

What I really appreciate about Tiffany's retrospective is that it captures the evolution of the Drupal Association. It's easy to forget how far we've come. What started as a scrappy advisory board, with little to no funding, has matured into a nonprofit that can support and promote the mission of the Drupal project. While there is always work to be done, Tiffany's retrospective is a great testament of our community's progress.

I feel very lucky that the Drupal Association was able to benefit from Tiffany's leadership for nine years; she truly helped shape every aspect of the Drupal Association. I'm proud to have worked with Tiffany; she has been one of the most influential, talented members of our Board, and has been very generous by contributing both time and resources to the project.

I published the following diary on “CRIMEB4NK IRC Bot“:

Yesterday, I got my hands on the source code of an IRC bot written in Perl. Yes, IRC (“Internet Relay Chat”) is still alive! If the chat protocol is less used today to handle communications between malware and their C2 servers, it remains an easy way to interact with malicious bots that provide interesting services to attackers. I had a quick look at the source code (poorly written) and found some interesting information… [Read more]

[The post [SANS ISC] CRIMEB4NK IRC Bot has been first published on /dev/random]

Earlier today, we released Drupal 8.5.0, which ships with improved features for content authors, site builders and developers.

Content authors can benefit from enhanced media support and content moderation workflows. It is now easier to upload, manage and reuse media assets, in addition to moving content between different workflow states (e.g. draft, archived, published, etc).

Drupal 8.5.0 also ships with a Settings Tray module, which improves the experience for site builders. Under the hood, the Settings Tray module uses Drupal 8.5's new off-canvas dialog library; Drupal module developers are encouraged to start using these new features to improve the end-user experience of their modules.

It's also exciting to see additional improvements to Drupal's REST API. With every new release, Drupal continues to extend investments in being an API-first platform, which makes it easier to integrate with JavaScript frameworks, mobile applications, marketing solutions and more.

Finally, Drupal 8.5 also ships with significant improvements for Drupal 7 to Drupal 8 migration. After four years of work, 1,300+ closed issues and contributions from over 570 Drupalists, the migrate system's underlying architecture in Drupal 8.5 is fully stable. With the exception of sites with multilingual content, the migration path is now considered stable. Needless to say, this is a significant milestone.

These are just a few of the major highlights. For more details about what is new in Drupal 8.5, please check out the official release announcement and the detailed release notes.

What I'm probably most excited about is the fact that the new Drupal 8 release system is starting to hit its stride. The number of people contributing to Drupal continues to grow and the number of new features scheduled for Drupal 8.6 and beyond is exciting.

In future releases, we plan to add a media library, support for remote media types like YouTube videos, support for content staging, a layout builder, JSON API support, GraphQL support, a React-based administration application and a better out-of-the-box experience for evaluators. While we have made important progress on these features, they are not yet ready for core inclusion and/or production use. The layout builder is available in Drupal 8.5 as an experimental module; you can beta test the layout builder if you are interested in trying it out.

I want to extend a special thank you to the many contributors that helped make Drupal 8.5 possible. Hundreds of people and organizations have contributed to Drupal 8.5. It can be hard to appreciate what you can't see, but behind every bugfix and new feature there are a number of people and organizations that have given their time and resources to contribute back. Thank you!

March 07, 2018

Short version: Don't ever use Zonal Statistics as Table in ArcGIS. It manages to get even a simple calculation, such as an average, wrong.

Example of using zonal statistics between grid cells (from the ArcGIS manual)

Long version: I was hired by a customer to determine why they got unexpected results for their analyses. These analyses led to an official map with legal consequences. After investigating their whole procedure, a number of issues were found. But the major source of errors was one I found very unlikely: it turns out that the algorithm used by ArcGIS Spatial Analyst to determine the average grid value in a shape is wrong. Not just a little wrong. Very wrong. And no, I am not talking about the no-data handling, which was very wrong as well; I'm talking about how the algorithm compares vectors and rasters. Interestingly, this seems to be known, as the ArcGIS manual states:
It is recommended to only use rasters as the zone input, as it offers you greater control over the vector-to-raster conversion. This will help ensure you consistently get the expected results.
So how does ArcGIS compare vectors and rasters? In fact, one could think of a number of algorithms:

  • Use the centers of the pixels and compare those to the vectors (most frequently used and fastest).
  • Use the actual area of the pixels 
  • Use those pixels of which the majority of the area is covered by the vector. 

None of these algorithms matches the results we saw from ArcGIS, even though the documentation seems to suggest the first method is used. So what is happening? It seems that ArcGIS first converts your vector file to a raster, not necessarily in the same grid system as the grid you compare to. Then it interpolates your own grid (using an undocumented method) and takes the average of those interpolated values where their cells match the raster you supplied. This means pixels outside your shape can influence the result. This mainly seems to occur when large areas are mapped (e.g. Belgium at 5 m).
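To make the first method in the list above concrete, here is a minimal, self-contained sketch (toy data, plain Python, no GIS library) of a pixel-center zonal mean; this is the behavior one would expect from such a tool:

```python
# A pixel-center zonal mean: classify each cell by whether its *center*
# falls inside the polygon, then average the values of those cells.

def point_in_polygon(x, y, poly):
    """Ray-casting test: is (x, y) inside the polygon (list of vertices)?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does the horizontal ray from (x, y) cross edge (x1,y1)-(x2,y2)?
        if (y1 > y) != (y2 > y):
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def zonal_mean(raster, poly, cell_size=1.0):
    """Average the cells whose centers fall inside poly.
    Cell (row, col) has its center at ((col + 0.5), (row + 0.5)) * cell_size."""
    values = []
    for row, line in enumerate(raster):
        for col, value in enumerate(line):
            if point_in_polygon((col + 0.5) * cell_size,
                                (row + 0.5) * cell_size, poly):
                values.append(value)
    return sum(values) / len(values) if values else None

# Toy example: a 4x4 raster and a triangle covering its lower-left half (x + y < 4).
raster = [[1, 2, 3, 4],
          [5, 6, 7, 8],
          [1, 2, 3, 4],
          [5, 6, 7, 8]]
triangle = [(0, 0), (4, 0), (0, 4)]
print(zonal_mean(raster, triangle))  # → 3.0
```

Libraries such as rasterstats (or the SAGA/GRASS/QGIS tools mentioned below) implement exactly this kind of deterministic vector/raster intersection.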

The average of this triangle is calculated by ArcGIS as 5.47

I don't understand how the market leader in GIS can get such a basic operation so wrong, and this whole search also convinced me how important it is to open the source (or at least the algorithm used) to get reproducible results. Anyway, if you are still stuck with ArcGIS, know that you can install SAGA GIS as a toolbox. It contains sensible algorithms for vector/raster intersection, and they are also approximately 10 times faster than the ArcGIS versions. Or you can have a look at how GRASS and QGIS implement this. All of this, of course, only if you consistently want to get the expected results...

And if your government also uses ArcGIS for determining taxes or other policies, perhaps they too should consider switching to a product which consistently gives the expected results.

It took me way too long (Autoptimize and related stuff is great fun, but it is eating a lot of time), but I just pushed out an update to my first ever plugin; WP YouTube Lyte. From the changelog:

So there you have it; Lite YouTube Embeds 2018 style and an example Lyte embed of a 1930’s style Blue Monday …

YouTube Video
Watch this video on YouTube.

Octave is a good choice for getting some serious computing done (it’s largely an open-source Matlab). But for interactive exploration, it feels a bit awkward. If you’ve done any data science work lately, you’ll undoubtedly have used the fantastic Jupyter.

There’s a way to combine both and have the great UI of Jupyter with the processing core of Octave:

Jupyter lab with an Octave kernel

I’ve built a variant of the standard Jupyter Docker images that uses Octave as a kernel, to make it trivial to run this combination. You can find it here.


March 06, 2018

Does everybody still remember the huge impact that WannaCry had on many companies in 2017? The ransomware exploited the vulnerability, described in MS17-010, which abuses the SMBv1 protocol. One of the requirements to protect against this kind of attack was simply to disable SMBv1 (besides NOT exposing it on the Internet ;-).
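For reference, on current Windows versions SMBv1 can be checked and disabled with standard PowerShell cmdlets (verify against your own environment before rolling this out):

```powershell
# Check whether the SMBv1 server component is enabled
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol

# Disable SMBv1 on the server side
Set-SmbServerConfiguration -EnableSMB1Protocol $false

# Remove the optional feature entirely (Windows 10)
Disable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol
```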

Last week, I was trying to connect an HP MFP (“Multi-Functions Printer”) to an SMB share to store scanned documents. This is very convenient to automate many tasks. The printer saves PDF files in a dedicated Windows share that can be monitored by a script to perform extra processing on documents like indexing them, extracting useful data, etc.

I spent a long time trying to understand why it was not working. SMB server, ok! Credentials, ok! Permissions on the share, ok! Firewall, ok! So what? In such cases, Google often remains your best friend, and I found this post on an HP forum:

HP Forum

So, HP still requires SMBv1 to work properly? Starting with the Windows 10 Fall Creators Update, SMBv1 is not installed by default, but this can lead to incompatibility issues. HP support tweeted this:

HP Support Tweet

The link points to a document that explains how to enable SMBv1… Really? Do you think that I’ll re-enable this protocol? HP has an online document that lists all printers and their compatibility with the different versions of SMB. Of course, mine is flagged as “SMBv1 only”. Is it so difficult to provide a firmware update?

Microsoft created a nice page that lists all the devices/applications that still use SMBv1. The page is called “SMB1 Product Clearinghouse” and the list is quite impressive!

So, how do you “kill” an obsolete protocol if it is still required by many devices and applications? If users must scan and store documents and it does not work, guess what the average administrator will do? Just re-enable SMBv1…

[The post SMBv1, The Phoenix of Protocols? has been first published on /dev/random]

When I'm home, one of the devices I use most frequently is the Amazon Echo. I use it to play music, check the weather, set timers, check traffic, and more. It's a gadget that is beginning to inform many of my daily habits.

Discovering how organizations can use a device like the Amazon Echo is a big part of my professional life too. For the past two years, Acquia Labs has been helping customers take advantage of conversational interfaces, beacons and augmented reality to remove friction from user experiences. One of the most exciting examples of this was the development of Ask GeorgiaGov, an Alexa skill that enables Georgia state residents to use an Amazon Echo to easily interact with government agencies.

The demo video below shows another example. It features a shopper named Alex, who has just returned from Freshland Market (a fictional grocery store). After selecting a salmon recipe from Freshland Market's website, Alex has all the ingredients she needs to get started. Alex begins by asking Alexa how to make her preferred salmon recipe for eight people. The recipe on Freshland Market's Drupal website is for four people, so the Freshland Market Alexa skill automatically adjusts the number of ingredients needed to accommodate eight people. By simply asking Alexa a series of questions, Alex is able to preheat the oven, make ingredient substitutions and complete the recipe without ever looking at her phone or laptop. With Alexa, Alex is able to stay focused on the joy of cooking, instead of following a complex recipe.

This project was easy to implement because the team took advantage of the Alexa integration module, which allows Drupal to respond to Alexa skill requests. Originally created by Jakub Suchy (Acquia) and maintained by Chris Hamper (Acquia), the Alexa integration module enables Drupal to respond to custom voice commands, otherwise known as "skills".

Cooking with Amazon Echo and Drupal

Once an Amazon Echo user provides a verbal query, known as an "utterance", this vocal input is converted into a text-based request (the "intent") that is sent to the Freshland Market website (the "endpoint"). From there, a combination of custom code and the Alexa module for Drupal 8 responds to the Amazon Echo with the requested information.
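The Drupal-side code is not shown here, but the utterance/intent/endpoint exchange can be sketched independently of the module's API. The intent name, slot names and recipe data below are hypothetical; only the envelope follows the Alexa Skills Kit request/response JSON format:

```python
# Minimal sketch of an Alexa "endpoint" handler: it receives an
# IntentRequest, scales a recipe to the requested number of servings,
# and returns an Alexa-shaped response envelope.

RECIPES = {"salmon": {"servings": 4,
                      "ingredients": {"salmon fillet (g)": 600, "lemons": 2}}}

def speech(text):
    """Wrap plain text in the Alexa response envelope."""
    return {"version": "1.0",
            "response": {"outputSpeech": {"type": "PlainText", "text": text},
                         "shouldEndSession": False}}

def handle_alexa_request(event):
    """Map an Alexa request to a spoken answer."""
    request = event["request"]
    if request["type"] != "IntentRequest":
        return speech("Welcome to Freshland Market.")
    intent = request["intent"]
    if intent["name"] == "ScaleRecipeIntent":
        recipe = RECIPES[intent["slots"]["Recipe"]["value"]]
        servings = int(intent["slots"]["Servings"]["value"])
        factor = servings / recipe["servings"]
        scaled = {k: v * factor for k, v in recipe["ingredients"].items()}
        text = "For %d people you need: %s." % (
            servings, ", ".join("%g %s" % (v, k) for k, v in scaled.items()))
        return speech(text)
    return speech("Sorry, I did not understand that.")
```

In the real project, the Alexa integration module does this routing inside Drupal, where the recipe content already lives.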

Over the past year, it's been very exciting to see the Acquia Labs team build a connected customer journey using chatbots, augmented reality and now, voice assistance. It's a great example of how organizations can build cross-channel customer experiences that take place both online and offline, in store and at home, and across multiple touch points. While Freshland Market is a fictional store, any organization could begin creating these user experiences today.

Special thanks to Chris Hamper and Preston So for building the Freshland Market Alexa skill, and thank you to Ash Heath and Drew Robertson for producing the demo videos.

March 05, 2018

Josh Gee, former product manager for the City of Boston's Department of Innovation and Technology, recently shared how the city identified 425 PDF forms used by citizens and moved 122 of them online. Not only did it improve the accessibility of the city's government services, it also saved residents roughly 10,000 hours of filling out forms. While the site is a Drupal website, the city opted to use SeamlessDocs for creating online forms, which could provide inspiration for Drupal's Webform module. Josh's blog provides an interesting view into what it takes for a government to go paperless and build constituent-centric experiences.

I published the following diary on “Malicious Bash Script with Multiple Features“:

It’s not common to find a complex malicious bash script. Usually, bash scripts are used to download a malicious executable and start it. This one was spotted by @michalmalik, who tweeted about it. I had a quick look at it. The script currently has a score of 13/50 on VT. First of all, the script installs some tools and dependencies. ‘apt-get’ and ‘yum’ are used, which means that multiple Linux distributions are targeted… [Read more]

[The post [SANS ISC] Malicious Bash Script with Multiple Features has been first published on /dev/random]

I was struggling with an occasional loss of reported traffic in SlimStat, due to my pages being cached by WP Super Cache (which is not active for logged-in users) without having SlimStatParams and the SlimStat JS file in them. I tried changing different settings in SlimStat, but the problem still occurred. As I was not able to pinpoint the exact root cause, I ended up using this code snippet to prevent such pages from being cached by WP Super Cache:

function slimstat_checker( $bufferIn ) {
  if ( strpos( $bufferIn, "<html" ) && strpos( $bufferIn, "SlimStatParams" ) === false ) {
    define( "DONOTCACHEPAGE", "no slimstat no cache" );
    error_log( "no slimstat = no cache" );
  }
  return $bufferIn;
}
// Register the callback on WP Super Cache's output buffer
// (assumed hook-up; the original snippet did not show it):
add_filter( 'wpsupercache_buffer', 'slimstat_checker' );
Changing the condition in the if statement allows one to stop caching based on whatever is (or is not) present in the HTML.

Article co-written with Mathieu Jamar.

You can hardly have missed the many articles comparing the energy consumption of the Bitcoin network to that of various countries. And all of them insist on what an ecological disaster Bitcoin is.

Bitcoin consumes more than all the countries in orange! The horror!

If Bitcoin consumes as much electricity as Morocco, that's a catastrophe, right?


"Bitcoin pollutes enormously and will destroy the planet" is to Bitcoin what "Foreigners are stealing our jobs" is to immigration: a belief that is easy to instill but wrong on so many levels that it becomes hard to enumerate the errors. Which is nevertheless what I am going to attempt.

To keep things simple, I will divide the reasoning errors into four parts:
1. The false simplification that consumption equals pollution
2. Only comparing what is comparable
3. No, Bitcoin's energy is not wasted
4. Optimization in engineering

Electricity consumption is not pollution

First of all, consuming electricity does not pollute. What pollutes are certain means of producing electricity.

That's the same thing, you will say. Not at all!

If it were the same, we would fight electric cars and encourage petrol cars (which would be more efficient, since their energy is produced directly, with a better overall yield).

You are surely convinced that electric cars, which consume electricity, are an asset in preserving the environment. Yet, depending on conditions, a Tesla consumes between 5,000 and 10,000 kWh for 20,000-30,000 km per year.

That means that if half the motorists of a small country like Belgium bought a Tesla, that half alone would consume more than the entire Bitcoin network! And I am only talking about half of one very small country! Can you imagine the catastrophe if there were more Teslas?
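That claim is easy to sanity-check with back-of-the-envelope arithmetic. The fleet size below is an assumption (roughly the order of Belgium's passenger-car fleet), not an official statistic:

```python
# Back-of-the-envelope estimate for "half of Belgian motorists in a Tesla".
belgian_cars = 5_700_000          # assumed total passenger cars in Belgium
teslas = belgian_cars // 2        # "half of the motorists"
kwh_per_tesla_per_year = 7_500    # midpoint of the 5,000-10,000 kWh range above

total_twh = teslas * kwh_per_tesla_per_year / 1e9  # kWh -> TWh
print(round(total_twh, 1))  # → 21.4 TWh per year for that half of the fleet
```

That is tens of terawatt-hours per year: the same order of magnitude as the Bitcoin estimates circulating at the time.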

But enough dubious comparisons: why is electricity consumption not necessarily polluting? And why are electric cars ecologically interesting?

First, because sometimes the electricity is there, unused. That is the case with solar panels, hydroelectric dams and nuclear plants, which produce electricity no matter what: you cannot simply switch them ON and OFF. And electricity is, for now, hard to transport.

According to a study by Bitmex, a large share of the electricity used by Bitcoin today actually comes from under-used hydroelectric infrastructure in China, initially dedicated to aluminium production, which dropped drastically after demand for that material fell. Let me repeat: Bitcoin has benefited from a large amount of unused and therefore very cheap electricity, ecologically as well as economically.

In some cases, reducing electricity consumption can even be problematic. During my studies I was told that the excessive lighting of Belgian motorways was designed to absorb the nightly surplus of the nuclear plants. I do not know whether it is true, but it was stated as such by university professors.

Secondly, if we focus only on CO2 pollution (far from the only kind of pollution, but currently the most publicized), it is simply impossible to draw an equivalence: so much electricity produced equals so much CO2 emitted. The CO2 emitted depends entirely on the means of production used. Worse, not every molecule of CO2 is equivalent! As I explained before, the only problematic CO2 is the CO2 that comes from fossil carbon (coal, oil, gas). The rest is carbon that is already part of the natural carbon cycle (the CO2 you breathe out, or the CO2 produced by a cogeneration plant, for example).

I know we have been brainwashed for years with "consuming electricity is bad, turn off the lights", but that is a shortcut of a shortcut that can sometimes be plain wrong. In his book "Sapiens", Harari argues that trying to consume less electricity makes no sense; what must be improved are the means of production. He adds that a few square kilometres of solar panels in the Sahara would be enough to power the whole planet. Unfortunately that solution is not feasible as-is, because electricity remains hard to move over long distances, but it illustrates well that the challenge lies in improving production and transport, not in a hypothetical reduction of consumption.

In short, it is simply wrong to say that consuming electricity pollutes. It may sometimes (or even often) be the case, but it is very far from a general truth.

And yes, that is very hard to accept when you have spent 10 years installing energy-saving bulbs and measuring every kilowatt-hour of your house to feel like a saviour of the planet. But those actions have only a tiny impact compared to other individual actions (such as reducing the use of a petrol car).

Only compare what is comparable

The comparisons I have seen are each more absurd than the last. Bitcoin supposedly consumes as much as Morocco. Bitcoin's consumption has supposedly cancelled out the gains from mandatory energy-saving light bulbs in Europe. Or even "Bitcoin consumes more than 159 countries" (forgetting to specify that this is a ranking, not the sum of those countries).

Put like that, it sounds catastrophic.

But do you actually know how much Morocco consumes? Do you know that Morocco has 33 million inhabitants, and that the number of Bitcoin users is estimated at between 13 and 30 million? In short, that the orders of magnitude are entirely comparable!

As for the light bulbs, doesn't that simply mean that making energy-saving bulbs mandatory was a token measure that achieved very little in terms of savings? I am simply saying it is an entirely plausible possibility.

In short, one must compare what is comparable.

Today, Bitcoin is above all a system for exchanging value. It should therefore be compared to currencies, to banks and to gold.

Surprise: Bitcoin consumes barely more than the production of coins and banknotes! And remember that coins and notes represent only 8% of the total money supply, and more specifically 6.2% for the euro zone.

It also consumes nearly 8 times less than gold mining, and 50 times less than aluminium production. And beyond its energy consumption, gold mining is extremely contaminating (notably with heavy metals such as mercury).

Source: @beetcoin

The pollution tied to gold seems all the more pointless when you know that 17% of all the gold ever mined in human history sits in state vaults and never moves. As Warren Buffett put it, gold is dug out of a hole in Africa only to be put into another hole, guarded day and night, in another country. If you add the gold bought by individuals (to keep in a safe or stash under a mattress) or used in jewellery (whose utility is thus purely aesthetic), only 10% of annual gold production is used by industry!

If Bitcoin replaced the gold stored in vaults, even partially, it would therefore be an ecological marvel, comparable to replacing every petrol car in the world with an electric one.

And if Bitcoin were compared to the banking system, with its thousands of air-conditioned buildings and its millions of employees commuting by car (or by private jet), I think Bitcoin would no longer look merely ecological but downright miraculous.

Not to mention that we are only at the beginning! Bitcoin has the potential to become a true decentralized platform that could completely replace the web as we know it and change our social and political interactions: votes, communications, exchanges beyond the control of any centralized authority.

Have you ever wondered about the energy consumption of a YouTube whose main purpose is to show you ads between two funny videos (and therefore make you consume and pollute more)? Well, the consumption of Google's data centers in 2015 was higher than Bitcoin's consumption in 2017! And that does not even include the intermediate routers, your computers/phones/tablets, or all the Google offices other than the data centers!

Faced with these numbers, what do you think would be an acceptable consumption for a worldwide decentralized platform capable of replacing hyper-polluting gold mining, the banks, the centralized Internet, or even states? Or, quite simply, of protecting some of our fundamental liberties? Before criticizing Bitcoin's consumption, we first need to quantify what we would consider a "normal" consumption for such a system.

A waste of energy?

Another misunderstanding, a subtler one, is that Bitcoin's energy is wasted. It is true that the miners all try to solve a complicated mathematical problem, all consuming electricity, yet only the first one to find the solution wins and everything starts again from scratch each time.

Seen that way, it does look like waste.

But competition between miners is an essential element that guarantees Bitcoin's decentralisation. If the miners agreed to cooperate, they would control the network, which would no longer be decentralised.

This "waste", called "Proof of Work", is therefore a fundamental component. Every watt used goes towards guaranteeing the decentralisation of the Bitcoin system.

It would be like saying that police officers should stay locked inside their station and only go out after a call. Car patrols do pollute and are mostly useless (only a tiny percentage of car patrols lead to a police intervention). Yet we accept that these "useless" police patrols are an essential element of the security of a city or neighbourhood (provided you trust the police force).

For Bitcoin, it's the same: the "useless" computations guarantee security and decentralisation. Alternatives to Proof of Work are being studied, but none has yet been shown to work. It isn't even certain that such alternatives are possible!
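To make the mechanism concrete, here is a minimal proof-of-work sketch in Python. This illustrates the principle only; real Bitcoin mining double-SHA-256-hashes block headers against a dynamically adjusted numeric target, not a fixed prefix of zeros as below.

```python
import hashlib

def proof_of_work(data: str, difficulty: int) -> int:
    """Find a nonce such that SHA-256(data + nonce) starts with
    `difficulty` zero hex digits. Illustrative only: a toy version
    of the puzzle Bitcoin miners race to solve."""
    nonce = 0
    prefix = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

nonce = proof_of_work("hello", 4)  # takes tens of thousands of attempts on average
# Anyone can verify the work with a single hash:
assert hashlib.sha256(f"hello{nonce}".encode()).hexdigest().startswith("0000")
```

The point of the design is the asymmetry: finding the nonce takes many hash attempts (the electricity being "burned"), while verifying it takes a single hash, and each extra zero digit of difficulty multiplies the expected work by 16.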

Optimisation, a necessarily late step

One of the major rules of engineering is that before optimising anything in a project, you must first ensure that the project is correct.

You don't try to speed up a computer program that returns wrong values. You don't streamline an aircraft that can't fly. You don't try to reduce the fuel consumption of an engine that won't start.

During the experimental phase, resource consumption is at its highest.

Elon Musk used an entire rocket just to send a car into space. Not out of wastefulness, but because designing a rocket requires "empty" test flights.

Bitcoin is still in that experimental phase. The system is so complex, and must guarantee such a level of security, that it first has to prove it works before anyone considers optimising its electricity consumption. As Antonopoulos put it, saying "By 2020, Bitcoin will consume more than the entire world does today" amounts to telling a pregnant woman: "Madam, at this rate, in 5 years your belly will be the size of this room."

So yes, like any human system, Bitcoin could no doubt pollute less. And I am certain that proposals to that effect will emerge in the coming years. That is already the case if you consider consumption per transaction: the "Lightning Network" upgrade will make it possible to carry out thousands of Bitcoin transactions at near-zero cost, including in terms of electricity. But per-transaction cost comparisons are mostly dishonest anyway, because they generally fail to account for the entire banking infrastructure that solutions like VISA or MasterCard rely on.

Another rule of optimisation is that before optimising anything, you must measure, in order to target the most critical points. There is no point in cutting the electricity consumption of light bulbs by 90% if bulbs account for only 0.01% of total consumption while air conditioning, say, accounts for 30%. I'm making these numbers up, but you get the idea. It's a principle well known to software developers who, after days spent halving a function's running time, realise that the function accounts for only a thousandth of the total execution time. It follows that before trying to optimise Bitcoin at all costs, we need to see what its consumption will be under future usage and to measure whether it really is the biggest consumer. It might turn out to be our smartphones. Or those glowing advertising screens that blind us at night.

Why such relentless attacks?

I hope that by this point you have grasped how many logical errors it takes to reach the conclusion "Bitcoin is a pollution monster that will destroy the world and the baby seals".

So why such relentless attacks?

For two reasons.

First, it's sensationalism, and sensationalism sells. Saying "Bitcoin consumes as much as Morocco" means nothing rationally, but it gets a reaction: it makes people indignant, pushes them to share the article and keeps alive the advertisers who fund the media. Do you have any idea how much electricity online advertising consumes?

Second, the media are funded by advertisers and by governments. All of them sense that Bitcoin could shake things up. No conspiracy theory is needed here, but it goes without saying that a story that discredits a technology threatening state sovereignty, while indulging in sensationalism, is a godsend. Fear and ignorance have become the twin engines of a press that is now little more than a machine for manipulating our emotions.

You will rarely read articles titled "Bitcoin consumed less in 2017 than Google's data centers did in 2015" or "If half the Belgian car fleet were Teslas, they would consume more than the Bitcoin network". Yet these headlines are no less accurate than the ones you usually read. But writing an article like the one you are reading right now takes both competence and time. Time is a luxury that journalists, squeezed by advertising profitability, can no longer afford. If we billed for our work, no ad-funded outlet could pay us a reasonable rate for the time spent.


Surprisingly, the people most convinced that Bitcoin is an ecological disaster are those who know absolutely nothing about the field.

The academic literature even suggests that Bitcoin could potentially become the most efficient way of converting electricity into money, preventing states from subsidising electricity to keep prices artificially low and attract large factories. In the long run, Bitcoin could thus force an optimisation of the global electricity grid and favour the development of more local, less centralised electricity production (see for example "Bitcoin and Cryptocurrency Technologies", pp. 122-123, by Narayanan, Bonneau, Felten, Miller and Goldfeder).

It is not impossible that Bitcoin will spell the end of coal-fired plants and of giant nuclear power stations. But that kind of possibility is simply never mentioned in the rags sponsored by the state and the car dealers.

This one example should alert you to how easily we are manipulated, how easily we are made to swallow anything, especially in fields we know nothing about.

And if you have invested in a "green competitor to Bitcoin that will overtake it", well, hard as it may be to admit, you have probably been had (which may not stop you from making a profit at the expense of other investors who arrived after you).

So yes, Bitcoin consumes electricity. But that is normal: it is a very complex system that offers an enormous amount. It still consumes less electricity than many other systems that we accept, systems that are arguably less useful or that simply were not picked as the media's whipping boy. How often have you read articles about the electricity consumption of the gold, aluminium or frozen-food industries? What if, for what it offers, Bitcoin were in fact extremely ecological?

That is no reason not to encourage improvements aimed at making Bitcoin consume less. But perhaps we can stop focusing on that one aspect and turn to questions that are certainly more interesting. For example: what will Bitcoin (or its successors) change in our lives and our societies?


Enjoyed this read? Support the author on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins: 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Even a symbolic donation makes all the difference! And let's meet again on Facebook, Twitter or Mastodon.

This text is published under the CC-By BE licence.

March 04, 2018

Over 3 years ago Autoptimize added support for critical CSS and one and a half year ago the first “power-up” was released for Critical CSS rules creation.

But time flies and it's time for a new evolution: automated creation of critical CSS, using a deep integration with their powerful API! A first version of the plugin is ready and the admin page looks like this (look to the right of this paragraph):

The plan:

  1. beta-test (asap)
  2. release as separate plugin on (shooting for April)
  3. release as part of Autoptimize 2.5 (target mid 2018)

This new power-up has been tested on a couple of sites already (including this little blog of mine) and we are now looking for a small group of beta-testers to help test for that first target. Beta-testers will be able to use the service for free during the test (i.e. for one month). If you're interested, head over to the contact form and tell me what kind of site you would test this on (main plugins + theme; I'm very interested in advanced plugins like WooCommerce, BuddyPress and some of the major themes such as Avada, Divi, Astra, GeneratePress, …) and I'll get back to you with further instructions.


I published the following diary on “The Crypto Miners Fight For CPU Cycles“:

I found an interesting piece of Powershell code yesterday. The purpose is to download and execute a crypto miner, but the code also implements a detection mechanism to find other miners, security tools or greedy processes (in terms of CPU cycles). Indeed, crypto miners make intensive use of your CPUs, and the more CPU resources they can (ab)use, the more money will be generated… [Read more]

[The post [SANS ISC] The Crypto Miners Fight For CPU Cycles has been first published on /dev/random]

March 03, 2018

Caribbean sunset

Spending a few days at the Cayman Islands, where the WiFi is weak and the rum is strong. When I relax, I become creative and reflective. Writing helps me develop these creative thoughts and deepens my personal reflection. It's a virtuous cycle, because when I see my world, myself and my ideas more clearly, it helps me relax even more. Relaxing makes me write and writing makes me relax.

With the configuration baseline for a technical service being described fully (see the first, second and third post in this series), it is time to consider the validation of the settings in an automated manner. The preferred method for this is to use Open Vulnerability and Assessment Language (OVAL), which is nowadays managed by the Center for Internet Security, abbreviated as CISecurity. Previously, OVAL was maintained and managed by Mitre under NIST supervision, and Google searches will often still point to the old sites. However, documentation is now maintained on CISecurity's github repositories.

But I digress...

Read-only compliance validation

One of the main ideas with OVAL is to have a language (XML-based) that represents state information (what something should be) which can be verified in a read-only fashion. Even more, from an operational perspective, it is very important that compliance checks do not alter anything, but only report.

Within its design, OVAL engineering has considered how to properly manage huge sets of assessment rules, and how to document this in an unambiguous manner. In the previous blog posts, ambiguity was resolved through writing style, and not much through actual, enforced definitions.

OVAL enforces this. You can't write a generic or ambiguous rule in OVAL. It is very specific, but that also means that it is daunting to implement the first few times. I've written many OVAL sets, and I still struggle with it (although that's because I don't do it enough in a short time-frame, and need to reread my own documentation regularly).

The capability to perform read-only validation with OVAL leads to a number of possible use cases; the 5.10 specification provides several. Basically, it boils down to:

  • vulnerability discovery (is a system vulnerable or not)
  • patch management (is the system patched accordingly or not)
  • configuration management (are the settings according to the rules or not)
  • inventory management (detect what is installed on the system or what the system's assets are)
  • malware and threat indicators (detect if a system has been compromised or particular malware is active)
  • policy enforcement (verify if a client system adheres to particular rules before it is granted access to a network)
  • change tracking (regularly validating the state of a system and keeping track of changes)
  • security information management (centralizing results of an entire organization or environment and doing standard analytics on it)

In this blog post series, I'm focusing on configuration management.

OVAL structure

Although the OVAL standard (just like the XCCDF standard, actually) entails a number of major components, I'm going to focus on the OVAL definitions. Be aware though that the results of an OVAL scan are also in a standardized format, as are the results of XCCDF scans, for instance.

OVAL definitions have 4 to 5 blocks in them:

  • the definition itself, which describes what is being validated and how. It refers to one or more tests that are to be executed or validated for the definition result to be calculated.
  • the test or tests, which are referred to by the definition. Each test contains at least a reference to an object (what is being tested) and optionally to a state (what the object should look like).
  • the object, which is a unique representation of a resource or resources on the system (a file, a process, a mount point, a kernel parameter, etc.). Object definitions can refer to multiple resources, depending on the definition.
  • the state, which is a sort of value mapping or validation that is applied to an object to see if it is configured correctly.
  • the variable, an optional definition which is what it sounds like: a variable that substitutes an abstract definition with an actual one, allowing more reusable tests.

Let's get an example going, but without the XML structure, so in human language. We want to define that the Kerberos definition on a Linux system should allow forwardable tickets by default. This is accomplished by ensuring that, inside the /etc/krb5.conf file (which is an INI-style configuration file), the value of the forwardable key inside the [libdefaults] section is set to true.
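Concretely, the fragment of /etc/krb5.conf being described would look like this (a minimal sketch; a real krb5.conf contains more sections and keys):

```ini
[libdefaults]
    forwardable = true
```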

In OVAL, the definition itself will document the above in human readable text, assign it a unique ID (like oval:com.example.oval:def:1) and mark it as being a definition for configuration validation (compliance). Then, it defines the criteria that need to be checked in order to properly validate if the rule is applicable or not. These criteria include validation if the OVAL statement is actually being run on a Linux system (as it makes no sense to run it against a Cisco router) which is Kerberos enabled, and then the criteria of the file check itself. Each criteria links to a test.

The test of the file itself links to an object and a state. There are a number of ways how we can check for this specific case. One is that the object is the forwardable key in the [libdefaults] section of the /etc/krb5.conf file, and the state is the value true. In this case, the state will point to those two entries (through their unique IDs) and define that the object must exist, and all matches must have a matching state. The "all matches" here is not that important, because there will generally only be one such definition in the /etc/krb5.conf file.

Note however that a different approach to the test can be declared as well. We could state that the object is the [libdefaults] section inside the /etc/krb5.conf file, and the state is the value true for the forwardable key. In this case, the test declares that multiple objects must exist, and (at least) one must match the state.

As you can see, the OVAL language tries to make definitions unambiguous. So, what does this look like in OVAL XML?

The OVAL XML structure

The full example contains a few more entries than those we declare next, in order to be complete. The most important definitions though are documented below.

Let's start with the definition. As stated, it will refer to tests that need to match for the definition to be valid.

  <definition id="oval:com.example.oval:def:1" version="1" class="compliance">
    <metadata>
      <title>libdefaults.forwardable in /etc/krb5.conf must be set to true</title>
      <affected family="unix">
        <platform>Red Hat Enterprise Linux 7</platform>
      </affected>
      <description>By default, tickets obtained from the Kerberos environment must be forwardable.</description>
    </metadata>
    <criteria operator="AND">
      <criterion test_ref="oval:com.example.oval:tst:1" comment="Red Hat Enterprise Linux is installed"/>
      <criterion test_ref="oval:com.example.oval:tst:2" comment="/etc/krb5.conf's libdefaults.forwardable is set to true"/>
    </criteria>
  </definition>

The first thing to keep in mind is the (weird) identification structure. Just like with XCCDF, it is not sufficient to have your own id convention. You need to start an id with oval: followed by the reverse domain definition (here com.example.oval), followed by the type (def for definition) and a sequence number.

Also, take a look at the criteria. Here, two tests need to be compliant (hence the AND operator). However, more complex operations are possible as well. It is even allowed to nest multiple criteria and to refer to previous definitions, like so (taken from the ssg-rhel6-oval.xml file):

<criteria comment="package hal removed or service haldaemon is not configured to start" operator="OR">
  <extend_definition comment="hal removed" definition_ref="oval:ssg:def:211"/>
  <criteria operator="AND" comment="service haldaemon is not configured to start">
    <criterion comment="haldaemon runlevel 0" test_ref="oval:ssg:tst:212"/>
    <criterion comment="haldaemon runlevel 1" test_ref="oval:ssg:tst:213"/>
    <criterion comment="haldaemon runlevel 2" test_ref="oval:ssg:tst:214"/>
    <criterion comment="haldaemon runlevel 3" test_ref="oval:ssg:tst:215"/>
    <criterion comment="haldaemon runlevel 4" test_ref="oval:ssg:tst:216"/>
    <criterion comment="haldaemon runlevel 5" test_ref="oval:ssg:tst:217"/>
    <criterion comment="haldaemon runlevel 6" test_ref="oval:ssg:tst:218"/>
  </criteria>
</criteria>

Next, let's look at the tests.

  <unix:file_test id="oval:com.example.oval:tst:1" version="1" check_existence="all_exist" check="all" comment="/etc/redhat-release exists">
    <unix:object object_ref="oval:com.example.oval:obj:1" />
  </unix:file_test>

  <ind:textfilecontent54_test id="oval:com.example.oval:tst:2" check="all" check_existence="all_exist" version="1" comment="The value of forwardable in /etc/krb5.conf">
    <ind:object object_ref="oval:com.example.oval:obj:2" />
    <ind:state state_ref="oval:com.example.oval:ste:2" />
  </ind:textfilecontent54_test>

There are two tests defined here. The first test just checks if /etc/redhat-release exists. If not, then the test will fail and the definition itself will result to false (as in, not compliant). This isn't actually a proper definition, because you want the test to not run when it is on a different platform, but for the sake of example and simplicity, let's keep it as is.

The second test will check for the value of the forwardable key in /etc/krb5.conf. For it, it refers to an object and a state. The test states that all objects must exist (check_existence="all_exist") and that all objects must match the state (check="all").

The object definition looks like so:

  <unix:file_object id="oval:com.example.oval:obj:1" comment="The /etc/redhat-release file" version="1">
    <unix:filepath>/etc/redhat-release</unix:filepath>
  </unix:file_object>

  <ind:textfilecontent54_object id="oval:com.example.oval:obj:2" comment="The forwardable key" version="1">
    <ind:filepath>/etc/krb5.conf</ind:filepath>
    <ind:pattern operation="pattern match">^\s*forwardable\s*=\s*((true|false))\w*</ind:pattern>
    <ind:instance datatype="int" operation="equals">1</ind:instance>
  </ind:textfilecontent54_object>
The first object is a simple file reference. The second is a text file content object. More specifically, it matches the line inside /etc/krb5.conf that contains forwardable = true or forwardable = false. A subexpression is captured on it, so that the state can refer to it.

The state looks like so:

  <ind:textfilecontent54_state id="oval:com.example.oval:ste:2" version="1">
    <ind:subexpression datatype="string">true</ind:subexpression>
  </ind:textfilecontent54_state>

This state refers to the subexpression and requires it to be true.
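To see exactly what the pattern and its subexpression capture, here is a quick check of the regular expression in Python (Python's re module is close enough to OVAL's regex dialect for this simple pattern):

```python
import re

# The pattern from the OVAL object definition above.
pattern = re.compile(r"^\s*forwardable\s*=\s*((true|false))\w*", re.MULTILINE)

krb5_conf = """\
[libdefaults]
    forwardable = true
    dns_lookup_realm = false
"""

match = pattern.search(krb5_conf)
assert match is not None
# Group 1 is the subexpression the OVAL state compares against "true".
assert match.group(1) == "true"
```

Note how the "false" in dns_lookup_realm = false is not matched: the anchored "forwardable" key restricts the match to the one line we care about.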

Testing the checks with Open-SCAP

The Open-SCAP tool is able to test OVAL statements directly. For instance, with the above definition in a file called oval.xml:

~$ oscap oval eval --results oval-results.xml oval.xml
Definition oval:com.example.oval:def:1: true
Evaluation done.

The output of the command shows that the definition was evaluated successfully. If you want more information, open up the oval-results.xml file which contains all the details about the test. This results file is also very useful while developing OVAL as it shows the entire result of objects, tests and so forth.

For instance, the /etc/redhat-release file was only checked to see if it exists, but the results file shows what other parameters can be verified with it as well:

<unix-sys:file_item id="1233781" status="exists">
  <unix-sys:group_id datatype="int">0</unix-sys:group_id>
  <unix-sys:user_id datatype="int">0</unix-sys:user_id>
  <unix-sys:a_time datatype="int">1515186666</unix-sys:a_time>
  <unix-sys:c_time datatype="int">1514927465</unix-sys:c_time>
  <unix-sys:m_time datatype="int">1498674992</unix-sys:m_time>
  <unix-sys:size datatype="int">52</unix-sys:size>
  <unix-sys:suid datatype="boolean">false</unix-sys:suid>
  <unix-sys:sgid datatype="boolean">false</unix-sys:sgid>
  <unix-sys:sticky datatype="boolean">false</unix-sys:sticky>
  <unix-sys:uread datatype="boolean">true</unix-sys:uread>
  <unix-sys:uwrite datatype="boolean">true</unix-sys:uwrite>
  <unix-sys:uexec datatype="boolean">false</unix-sys:uexec>
  <unix-sys:gread datatype="boolean">true</unix-sys:gread>
  <unix-sys:gwrite datatype="boolean">false</unix-sys:gwrite>
  <unix-sys:gexec datatype="boolean">false</unix-sys:gexec>
  <unix-sys:oread datatype="boolean">true</unix-sys:oread>
  <unix-sys:owrite datatype="boolean">false</unix-sys:owrite>
  <unix-sys:oexec datatype="boolean">false</unix-sys:oexec>
  <unix-sys:has_extended_acl datatype="boolean">false</unix-sys:has_extended_acl>
</unix-sys:file_item>

Now, this is just at the OVAL level. The final step is to link it in the XCCDF file.

Referring to OVAL in XCCDF

The XCCDF Rule entry allows for a check element, which refers to an automated check for compliance.

For instance, the above rule could be referred to like so:

<Rule id="xccdf_com.example_rule_krb5-forwardable-true">
  <title>Enable forwardable tickets on RHEL systems</title>
  <check system="">
    <check-content-ref href="oval.xml" name="oval:com.example.oval:def:1" />
  </check>
</Rule>

With this set in the Rule, Open-SCAP can validate it while checking the configuration baseline:

~$ oscap xccdf eval --oval-results --results xccdf-results.xml xccdf.xml
Title   Enable forwardable kerberos tickets in krb5.conf libdefaults
Rule    xccdf_com.example_rule_krb5-forwardable-tickets
Ident   RHEL7-01007
Result  pass

A huge advantage here is that, alongside the detailed results of the run, there is also better human readable output as it shows the title of the Rule being checked.

The detailed capabilities of OVAL

In the above example I've used two checks: a file validation (against /etc/redhat-release) and a file content check (against /etc/krb5.conf). However, OVAL supports many more checks, and it also has constraints that you need to be aware of.

In the OVAL Project github account, the Language repository keeps track of the current documentation. By browsing through it, you'll notice that the OVAL capabilities are structured based on the target technology that you can check. Right now, this is AIX, Android, Apple iOS, Cisco ASA, Cisco CatOS, VMWare ESX, FreeBSD, HP-UX, Cisco iOS and iOS-XE, Juniper JunOS, Linux, MacOS, NETCONF, Cisco PIX, Microsoft SharePoint, Unix (generic), Microsoft Windows, and independent.

The independent one contains tests and support for resources that are often reusable toward different platforms (as long as your OVAL and XCCDF supporting tools can run it on those platforms). A few notable supporting tests are:

  • filehash58_test which can check for a number of common hashes (such as SHA-512 and MD5). This is useful when you want to make sure that a particular (binary or otherwise) file is available on the system. In enterprises, this could be useful for license files, or specific library files.
  • textfilecontent54_test which can check the content of a file, with support for regular expressions.
  • xmlfilecontent_test which is a specialized test for XML files.

Keep in mind though that, as we have seen above, INI files specifically have no specialization available. It would be nice if CISecurity would develop support for common textual data formats, such as CSV (although that one is easily interpretable with the existing ones), JSON, YAML and INI.
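Until such support exists, one pragmatic workaround (outside OVAL itself) is to pre-process the INI file with a small script and feed the result to a simpler check. As a sketch, using Python's standard-library configparser:

```python
import configparser

def krb5_forwardable_is_true(path: str) -> bool:
    """Return True if [libdefaults] forwardable is set to true in the
    given krb5.conf-style INI file. A sketch only: real krb5.conf files
    support include directives and nested brace sections that
    configparser does not understand."""
    parser = configparser.ConfigParser()
    parser.read(path)
    # fallback="" makes a missing section or key count as non-compliant.
    return parser.get("libdefaults", "forwardable", fallback="").lower() == "true"
```

Calling krb5_forwardable_is_true("/etc/krb5.conf") then gives a read-only pass/fail answer, in the same spirit as the OVAL test above.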

The unix one contains tests specific to Unix and Unix-like operating systems (so yes, it is also useful for Linux), and together with the linux one a wide range of configurations can be checked. This includes support for generic extended attributes (fileextendedattribute_test) as well as SELinux specific rules (selinuxboolean_test and selinuxsecuritycontext_test), network interface settings (interface_test), runtime processes (process58_test), kernel parameters (sysctl_test), installed software tests (such as rpminfo_test for RHEL and other RPM enabled operating systems) and more.

I published the following diary on “Beware of the “Cloud”“:

Today, when you buy a product, there are chances that it will be "connected" and use cloud services for at least one of its features. I'd like to tell you about a bad experience I had this week. Just to raise your awareness… I won't mention any product or service because the same story could happen with many alternative solutions and my goal is not to blame them… [Read more]

[The post [SANS ISC] Reminder: Beware of the “Cloud” has been first published on /dev/random]

March 02, 2018

I published the following diary on “Common Patterns Used in Phishing Campaigns Files“:

Phishing campaigns remain a common way to infect computers. Every day, I’m receiving plenty of malicious documents pretending to be sent from banks, suppliers, major Internet actors, etc. All those emails and their payloads are indexed and this morning I decided to have a quick look at them just by the name of the malicious files. Basically, there are two approaches used by attackers:

  • They randomize the file names by adding a trailing random string (e.g. aaf_438445.pdf) or by randomizing the complete filename.
  • They make the filename “juicy” to entice the user to open it by using common words.

[Read more]

[The post [SANS ISC] Common Patterns Used in Phishing Campaigns Files has been first published on /dev/random]
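As a rough illustration of the first pattern described in the diary, a triage script might flag filenames that end in an underscore followed by a long numeric suffix. The regex below is a hypothetical heuristic for demonstration, not taken from the diary, and it will produce false positives on legitimate names like report_20180315.docx:

```python
import re

# Hypothetical heuristic: a short alphabetic base name, an underscore,
# then 4 or more digits before a common document extension
# (e.g. "aaf_438445.pdf").
RANDOM_SUFFIX = re.compile(
    r"^[A-Za-z]+_\d{4,}\.(pdf|doc|docx|xls|xlsx|zip)$", re.IGNORECASE
)

def looks_randomized(filename: str) -> bool:
    """Return True if the filename matches the randomized-suffix pattern."""
    return RANDOM_SUFFIX.match(filename) is not None

assert looks_randomized("aaf_438445.pdf")
assert not looks_randomized("invoice.pdf")
```

A real triage pipeline would combine several such heuristics (entropy of the base name, dictionary checks on the "juicy" words) rather than rely on one regex.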

Now that Drupal 8’s REST API 1 has reached the next level of maturity, I think a concise blog post summarizing the most important API-First Initiative improvements for every minor release is going to help a lot of developers. Drupal 8.5.0 will be released next week and the RC was tagged last week. So, let’s get right to it!

The REST API made a big step forward with the 5th minor release of Drupal 8 — I hope you’ll like these improvements :)

Thanks to everyone who contributed!

  1. text fields’ computed processed property exposed #2626924

    No more need to re-implement this in consumers, nor to rely on work-arounds.


  2. uri field on File gained a computed url property #2825487

    "uri":{"value":"public://cat.png"} → "uri":{"url":"/files/cat.png","value":"public://cat.png"}

  3. Term POSTing requires non-admin permission #1848686

    administer taxonomy permission → create terms in %vocabulary% permission

    Analogously for PATCH and DELETE: you need edit terms in %vocabulary% and delete terms in %vocabulary%, respectively.

  4. Vocabulary GETting requires non-admin permission #2808217

    administer taxonomy permission → access taxonomy overview permission

  5. GET → decode → modify field → encode → PATCH: 403 → 200 #2824851

    You can now GET a response, modify the bits you want to change, and then send back exactly that, without needing to remove fields you're not allowed to modify. Any fields that you're not allowed to modify can still be sent without resulting in a 403 response, as long as you send exactly the same values. Drupal's REST API now implements the robustness principle.

  6. 4xx GET responses cacheable: more scalable + faster #2765959

    Valuable for use cases where you have many (for example a million) anonymous consumers hitting the same URL. When the response is not cacheable, it also cannot be cached by a reverse proxy in front of it, meaning hundreds of thousands of requests hit the origin, which can bring down the origin server.

  7. Comprehensive integration tests + test coverage test coverage

    This massively reduces the risk of REST API regressions/changes/BC breaks making it into a Drupal 8 release. It allows us to improve things faster, because we can be confident that most regressions will be detected. That even includes the support for XML serialization, for the handful of you who are using that! We take backwards compatibility seriously.
    Even better: we have test coverage test coverage: tests that ensure we have integration tests for every entity type that Drupal core’s stable modules ship with!
    Details at API-First Drupal — really!. Getting to this point took more than a year and required fixing bugs in many other parts of Drupal!

Want more nuance and detail? See the REST: top priorities for Drupal 8.5.x issue on

Are you curious what we’re working on for Drupal 8.6? Want to follow along? Click the follow button at REST: top priorities for Drupal 8.6.x — whenever things on the list are completed (or when the list gets longer), a comment gets posted. It’s the best way to follow along closely!²

The other thing that we’re working on for 8.6 besides the REST API is getting the JSON API module polished to core-worthiness. All of the above improvements help JSON API either directly or indirectly! More about that in a future blog post.

Was this helpful? Let me know in the comments!

Thanks to Ted for reviewing a draft of this blog post! And sorry for not changing the title to API First Drupal in 8.5.0: Amazing progress because of tedbow’s work on setInternal properties!!!!!!!!! even though that would’ve been totally accurate :D

For reference, historical data:

  1. This consists of the REST and Serialization modules. ↩︎

  2. ~50 comments per six months — so very little noise. ↩︎

February 28, 2018

Do you know how Facebook makes money? Through advertising, of course. But what is the exact mechanism? Who pays, and in exchange for what? And what are the consequences of this model?

Facebook pages

On Facebook, users like you and me are limited to a maximum of 5,000 friends. Becoming friends requires acceptance by both parties.

Pages, on the other hand, are not limited to physical persons and can be "liked" by an unlimited number of people. A page's content is always public, and its goal is to reach as many people as possible. For this reason, Facebook gives a page's administrators complete statistics about the number of users who have interacted with a post.

It therefore seems logical for any business, even a local one, to create a page. The same goes for associations, artists, politicians, citizen groups and bloggers. A Facebook page seems like the ideal tool to make your voice heard.

There is, however, a small subtlety. If someone likes a hundred pages that each publish ten pieces of content per day, it is impossible for that person to read a thousand posts, not counting those of their friends.

Promoting content

To solve this problem, Facebook decides which content is "most interesting" based on what has already been liked by other users. New and original content therefore has very little chance of spreading: to be liked, it has to be shown, and to be shown, it has to be liked.

The page's administrator will be sad and disappointed when checking the statistics. 1,000 people like his page, and yet his content has only been read a dozen times!

At that point, Facebook offers our administrator the chance to pay to be seen more often. After all, what is €10 for 1,000 extra views?

When I created my first website, in the 20th century, it was common to have a counter showing the number of visits received. The higher the counter, the more successful the site appeared to be. It was not unusual to refresh pages just to increment your own counter.

Facebook has literally managed to make content creators pay to increment their counter. While providing the counter itself!

The consequences of this model

This very profitable system has several consequences.

First, it means Facebook has every interest in making sure content from non-paying pages is rarely seen. If you have a Facebook page and, like me, you don't pay, your content will be shown very little, even to those who like your page. Barring the exceptional case where a piece of content turns out to be very popular, the people who like your page... will not see your content, because Facebook has no interest in distributing it.

Despite 50 likes and 17 shares (which, for me, is a real success), the number of "people reached" is barely the number of people who like my page. And by "person reached", Facebook means "person whose feed displayed my post". That does not mean the person actually saw my post, let alone read it.

If you pay, which I have tested, Facebook will show you that the numbers go up, but that you would need to pay just a bit more to really reach a wide audience. Except for posts aimed at a very local and therefore limited audience (a special case for which Facebook works very well thanks to geographic targeting), it is tempting to pay a little more every time.

The second consequence of this model is that the system automatically favors commercial content for which paying for advertising is profitable: essentially e-commerce sites or commercial events. Worse: since the number of potential readers is limited, the more page managers pay, the fewer people each of them reaches! But an author like me has no interest in paying, because there is nothing to gain directly. The same goes for citizen movements or non-commercial events. Non-commercial content therefore matters less to Facebook! Facebook is turning our world into a gigantic shopping mall where non-commercial voices are drowned out!

Above a certain level of spending, Facebook does have to deliver some value for the money. So it lets you target your audience as precisely as possible. Want to reach men aged 35 to 40 in the Paris region who like a cycling-related page? That's possible. Want to target young people who feel bad about themselves in order to increase the impact of your brand? That's possible too.

Finally, the last consequence of this system is that every message you post on Facebook serves to attract your friends so that paid content can be displayed to them. That is why non-commercial content still exists: it keeps you coming back to Facebook again and again. And it allows Facebook to study you better in order to refine its ad targeting.

Television producers say they make shows in order to slip in advertisements, to offer brain time to advertisers. Facebook no longer even has to go to that trouble: we are the producers, and we sell our friends' brains to the advertisers. In doing so, we make our own brains available as well. It's brilliant, obviously.


While giving up Facebook may not be desirable for everyone, it seems important to me to at least be aware of how this social network works and of what we give it every time we post a message or check our feed. It is also essential to understand that such a social network reinforces purely commercial interactions at the expense of the entire remaining spectrum of human exchanges.

Facebook unfortunately remains a hard-to-avoid tool, and I have explained how we can use it to our advantage. But I also encourage you to follow me on ad-free networks such as Mastodon and Diaspora, to help break the hegemony of the all-commercial. And to contact me on Signal or on Wire (@ploum), so that the fact that we interact does not become one more data point in our advertising profiles (as a reminder, WhatsApp belongs to Facebook).


Did you enjoy reading this? Support the author on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Even a symbolic donation makes all the difference! Then let's meet again on Facebook, Twitter or Mastodon.

This text is published under the CC-By BE license.

February 27, 2018

Last week, Matthew Grasmick stepped into the shoes of a developer who has no Drupal experience, and attempted to get a new "Hello world!" site up and running using four different PHP frameworks: WordPress, Laravel, Symfony and Drupal. He shared his experience in a transparent blog post. In addition to detailing the inefficiencies in Drupal's download process and end-user documentation, Matt also shows that out of the four frameworks, Drupal required the most steps to get installed.

While it is sobering to read, I'm glad Matthew brought this problem to the forefront. Having a good evaluator experience is critical as it has a direct impact on adoption rates. A lot goes into a successful evaluator experience: from learning what Drupal is, to understanding how it works, getting it installed and getting your first piece of content published.

So how can we make some very necessary improvements to Drupal's evaluator experience?

I like to think of the evaluator experience as a conversion funnel, similar to the purchase funnel developed in 1898 by E. St. Elmo Lewis. It maps an end-user journey from the moment a product attracts the user's attention to the point of use. It's useful to visualize the process as a funnel, because it helps us better understand where the roadblocks are and where to focus our efforts. For example, we know that more than 13 million people visited in 2017 (top of the funnel) and that approximately 75,000 new Drupal 8 websites launched in production (bottom of the funnel). A very large number of evaluators were lost as they moved down the conversion funnel. It would be good to better understand what goes on in between.

An example Drupal conversion funnel and the primary stakeholders for each level.

As you can see from the image above, the Drupal Association plays an important role at the top of the funnel; from educating people about Drupal, to providing a streamlined download experience on, to helping users find themes and modules, and much more.

The Drupal Association could do more to simplify the evaluator experience. For example, I like the idea of the Drupal Association offering and promoting a hosted, one-click trial service. This could be built by extending a service like into a hosted evaluation service, especially when combined with the upcoming Umami installation profile. (The existing "Try Drupal" program could evolve into a "Try hosting platforms" program. This could help resolve the expectation mismatch with the current "Try Drupal" program, which is currently more focused on showcasing hosting offerings than providing a seamless Drupal evaluation experience.)

The good news is that the Drupal Association recognizes the same needs, and in the past months, we have been working together on plans to improve Drupal's conversion funnel. The Drupal Association will share its 2018 execution plans in the upcoming weeks. As you'll see, the plans address some of the pain points for evaluators (though not necessarily through a hosted trial service, as that could take significant engineering and infrastructure resources).

The Documentation Working Group also plays a very important role in this process. After reading Matthew's post, I reached out to Joe Shindelar, who is a member of the Drupal Documentation Working Group. He explained that the Documentation Working Group has not been meeting regularly nor coordinating initiatives for some time.

It is time to rethink our approach around Drupal's documentation. Adam Hoenich, a long-time Drupal contributor, recommends that documentation becomes a full-fledged core initiative, including the addition of a Documentation Maintainer to the Core Committer team. His proposal includes blocking commits to Drupal on documentation.

I've no doubt that we have to evolve our governance model surrounding documentation. It's hard to write world-class documentation by committee without good governance and Adam's recommendations are compelling. Drupal's API documentation, for example, is governed by the Core Committers; while there is always room for improvement, it's really well-maintained. Some of you might remember that we had an official Documentation Maintainer role in the past, filled by Jennifer Hodgdon. Reinstating this position could bring documentation back into clearer focus and provide the necessary governance. I also suspect that a stronger emphasis on coordination, governance and recognition for documentation work, would inspire more contributors to help.

Last but not least, this also affects the Drupal (Core) Contributors. Evaluators often spend hours trying to get their web server configured, PHP installed or their database setup. As a community, we could help alleviate this pain by deciding to have a single, recommended default installation environment. For example, we could maintain and promote a Docker container (including an official docker-compose.yml) that ships with the latest version of Drupal. It would simplify many of our documentation efforts, and eliminate many roadblocks from the evaluation process.
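
As an illustration of what such a single recommended environment could look like, here is a hypothetical docker-compose.yml using the official drupal and mariadb images from Docker Hub. This is a sketch, not an official recommendation; the port and credentials below are arbitrary choices for evaluation only:

```yaml
# Hypothetical sketch of a one-command Drupal evaluation environment.
version: "3"
services:
  drupal:
    image: drupal:latest      # official Drupal image on Docker Hub
    ports:
      - "8080:80"             # browse to http://localhost:8080
    depends_on:
      - db
  db:
    image: mariadb:latest
    environment:
      MYSQL_DATABASE: drupal
      MYSQL_USER: drupal
      MYSQL_PASSWORD: drupal  # throwaway credentials, evaluation only
      MYSQL_RANDOM_ROOT_PASSWORD: "1"
```

With a file like this in place, `docker-compose up` would be the entire installation story for an evaluator.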

To narrow down my recommendations, I would propose the following three steps:

  1. A single, recommended default installation environment (e.g. Docker container) for evaluators or developers taking their first steps in Drupal development.
  2. Infrastructure budget and engineering resources for the Drupal Association so they can offer a true hosted "Try Drupal" service.
  3. A Documentation Maintainer who can focus on end-user documentation, is a member of the Core Committer team and is responsible for defining the scope of what should be documented. Given the amount of work this position would entail, it would be ideal if this person could be at least partially funded.
Of course, there are many other solutions, but these are the areas I'd focus on. As always, success depends on our ability to align on solutions, coordinate all the work, and allocate the time and money to implement the necessary improvements. If you think you can help with any of the proposed steps, please let us know in the comments, and we'll help you get involved.

February 26, 2018

The post Clear systemd journal appeared first on

On any server, the logs can start to add up and take a considerable amount of disk space. Systemd conveniently stores these in /var/log/journal, and the journalctl command can help clear them.

Take this example:

$ du -hs /var/log/journal/
4.1G	/var/log/journal/

4.1GB worth of journal files, with the oldest dating back over 2 months.

$ ls -lath /var/log/journal/*/ | tail -n 2
-rw-r-x---+ 1 root systemd-journal 8.0M Dec 24 05:15 user-xxx.journal

On this server, I really don't need that many logs, so let's clean them out. There are generally 2 ways to do this.

Clear systemd journals older than X days

The first one is time-based, clearing everything older than, say, 10 days.

$ journalctl --vacuum-time=10d
Vacuuming done, freed 2.3G of archived journals on disk.

Alternatively, you can limit its total size.

Clear systemd journals if they exceed X storage

This example will keep 2GB worth of logs, clearing everything that exceeds this.

$ journalctl --vacuum-size=2G
Vacuuming done, freed 720.0M of archived journals on disk.

Afterwards, your /var/log/journal should be much smaller.

$ du -hs /var/log/journal
1.1G	/var/log/journal

Saves you some GBs on disk!
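
To keep the journal from growing back, you can also cap its size permanently. journald reads /etc/systemd/journald.conf, and SystemMaxUse is the documented option for this (the 2G below is just an example value):

```ini
# /etc/systemd/journald.conf -- persistent size cap for the journal
[Journal]
SystemMaxUse=2G
```

Restart journald afterwards (systemctl restart systemd-journald) to apply the new limit.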

Earlier this month, I uninstalled Facebook from my phone. I also made a commitment to own my own content and to share shorter notes on my site. Since then, I shared my POSSE plan and I already posted several shorter notes.

One thing that I like about Facebook and Twitter is how easy it is to share quick updates, especially from my phone. If I'm going to be serious about POSSE, having a good iOS application that removes friction from the publishing process is important.

I always wanted to learn some iOS development, so I decided to jump in and start building a basic iOS application to post notes and photos directly to my site. I've already made some progress; so far my iOS application shares the state of my phone battery at This is what it looks like:

My phone's battery level is 78% and it is unplugged

This was inspired by Aaron Parecki, who not only tracks his phone battery but also tracks his sleep, his health patterns and his diet. Talk about owning your own data and liking tacos!

Sharing the state of my phone's battery might sound silly but it's a first step towards being able to publish notes and photos from my phone. To post the state of my phone's battery on my Drupal site, my iOS application reads my battery status, wraps it in a JSON object and sends it to a new REST endpoint on my Drupal site. It took less than 100 lines of code to accomplish this. More importantly, it uses the same web services approach as posting notes and photos will.
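
As a sketch of the client side of that request, the payload can be a tiny JSON object. The field names and the endpoint path below are my assumptions for illustration, not the actual implementation:

```shell
# Build a hypothetical battery-status payload.
# Field names ("level", "state") are assumptions, not the real schema.
BATTERY_LEVEL=78
BATTERY_STATE="unplugged"
PAYLOAD=$(printf '{"level": %d, "state": "%s"}' "$BATTERY_LEVEL" "$BATTERY_STATE")
echo "$PAYLOAD"
# Sending it to a Drupal REST endpoint would then be a single request;
# the URL below is hypothetical, so it is left commented out:
# curl -X POST -H 'Content-Type: application/json' \
#      -d "$PAYLOAD" https://example.com/api/battery?_format=json
```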

In an unexpected turn of events (yes, unnecessary scope creep), I decided it would be neat to keep my status page up-to-date with real-time battery information. In good tradition, the scope creep ended up consuming most of my time. Sending periodic battery updates turned out to be difficult because when a user is not actively using an iOS application, iOS suspends the application. This means that the applications can't listen to battery events nor send data using web services. This makes sense: suspending the application helps improve battery life, and allows iOS to devote more system resources to the active application in the foreground.

The old Linux hacker in me wasn't going to be stopped though; I wanted my application to keep sending regular updates, even when it's not the active application. It turns out iOS makes a few exceptions and allows certain categories of applications – navigation, music and VOIP applications – to keep running in the background. For example, Waze continues to provide navigation instructions and Spotify will play music even when they are not the active application running in the foreground.

You guessed it; the solution was to turn my note-posting-turned-battery application into a navigation application. This requires the application to listen to location update events from my phone's GPS. Doing so prevents the application from being suspended. As a bonus, I can use my location information for future use cases such as geotagging posts.

This navigation hack works really well, but the bad news is that my application drains my phone battery rather quickly. The good news is that I can easily instruct Drupal to send me email notifications to remind me to charge my phone.

Scope creep aside, it's been fun to work with Drupal 8's built-in REST API and to learn some iOS application development. Now that I can post my phone's battery status to my site, posting notes or photos shouldn't be much more difficult. I'm guessing that a "minimum viable implementation" will require no more than another 100 lines of code and four hours of my time. My current thinking is to go ahead with that so I can start posting from my phone. That gives me more time to evaluate whether I want to use the Micropub standard (probably) and the Indigenous iOS application (depends) instead of building and maintaining my own.

February 24, 2018

The post Announcing “Oh Dear!”: a new innovative tool to monitor your websites appeared first on

We've just completed our first month of public beta at Oh Dear!, a new website monitoring tool I've been working on for the last couple of months with my buddy Freek. I'm excited to show it to you!

What is Oh Dear!?

Our project is called "Oh Dear!" and it's a dead-simple tool you can use to help monitor the availability of your websites. And we define availability pretty broadly.

We offer multi-location uptime monitoring, mixed content & broken links detection and SSL certificate & transparency reporting. There's an API and kick-ass documentation too.

Think of it as any other monitoring solution, but tailored specifically to site owners that care about every detail about their site.

What does it look like?

I guess the more obvious first question would be "what can it do?", but since we're pretty damn proud of our layout & design, I want to brag about that first.

Here's our homepage.

There's extensive documentation on the API, every feature & the different notification methods.

A beautiful in-app dashboard to give you a birds-eye view of the status of all your sites.

An overview of the advanced notification features.

The whole design just looks & feels right.

If you've been following me for a while, you know I have no experience whatsoever with design. That's why we got external help from an Antwerp web agency called Spatie to help lay out our app. I couldn't be happier with the results!

What can Oh Dear! do?

The heart & soul of Oh Dear! is about monitoring websites. So once you add your first site:

  • We will monitor its uptime from independent world-wide locations (downtime gets verified from another, independent location -- different country, different cloud provider).
  • We validate its HTTPS health: certificate expirations dates, invalid certificate chains, revoked certs, certificate transparency reporting, ...
  • We crawl your site and report any broken links (internal & external) and will report errors (like HTTP/500) to you
  • Everything can be reported to you via HipChat, Slack, Email, Push notifications (Pushover) or SMS (Nexmo)
  • There's an API to automate Oh Dear! from start to finish & webhooks to tie in to your own monitoring systems

All of this in -- but I'm biased here -- the easiest interface to set it all up. Everything is tailored to website developers and maintainers: web agencies or larger firms that run multiple sites. We know & feel the pain involved in managing those sites; we built Oh Dear! specifically for that use case.

If you have clients or users that can create their own pages in a CMS, create links to other pages, ... you'll eventually get a situation where a user makes an unforeseen error. Oh Dear! can catch those, as we crawl your site and report any errors we find.

Can I try it for free?

Yes, a no-strings-attached, no credit card required free trial. Head over to Oh Dear! to give it a try.

The trial lasts for 10 days, we'll bug you twice to give a heads-up that your trial is about to expire and then we leave you alone.


Give it a try & let me know what you think, what bugs you found or what features you're interested in.

It's already pretty feature-complete, but new code is deployed on a weekly basis and it contains a lot of good new ideas that have been reported by our community.

In the next few posts I'll detail how every check is performed and explain the technicals behind it.

Stay in touch via or via @OhDearApp on Twitter.

February 22, 2018

Today, Commerce Guys shared a new initiative for Drupal Commerce. They launched a "Drupal Commerce distribution configurator": a web-based UI that generates a composer.json file and builds a version of Drupal Commerce that is tailored to an evaluator's needs. The ability to visually configure your own distribution or demo is appealing. I like the idea of a Drupal distribution configurator for at least three reasons: (1) its development could lead to improvements to Drupal's Composer-based workflow, which would benefit all of Drupal, (2) it could help make distributions easier to build and maintain, which is something I'm passionate about, and last but not least, (3) it could grow into a more polished Drupal evaluator tool.

During 2016 and 2017 I worked in a small three-person dev team at Wikimedia Deutschland, aptly named the FUN team. This is the team that rewrote our fundraising application and implemented the Clean Architecture. Shortly after we started working on new features, we noticed room for improvement in many areas of our collaboration. We talked about these and created a Best Practices document, the contents of which I share in this blog post.

We created this document in October 2016, and it focused on the issues we were having at the time or things we were otherwise concerned about. Hence it is by no means a comprehensive list, and it might contain things not applicable to other teams due to culture/value differences or different modes of working. That said, I think it can serve as a source of inspiration for other teams.

Make sure others can pick up a task where it was left

  • Make small commits
  • Publish code quickly and don’t go home with unpublished code
  • When partially finishing a task, describe what is done and still needs doing
  • Link Phabricator on Pull Requests and Pull Requests on Phabricator

Make sure others can (start) work(ing) on tasks

  • Try to describe new tasks in such a way others can work on them without first needing to inquire about details
  • Respond to emails and comments on tasks (at least) at the start and end of your day
  • Go through the review queue (at least) at the start and end of your day
  • Indicate if a task was started or is being worked on so everyone can pick up any task without first checking it is not being worked upon. At least pull the task into the “doing” column of the board, better yet assign yourself.

Make sure others know what is going on

  • Discuss introduction of new libraries or tools and creation of new projects
  • Only reviewed code gets deployed or used as critical infrastructure
  • If review does not happen, insist on it rather than adding to big branches
  • Discuss (or Pair Program) big decisions or tasks before investing a lot of time in them

Shared commitment

  • Try working on the highest priority tasks (including any support work such as code review of work others are doing)
  • Actively look for and work on problems and blockers others are running into
  • Follow-up commits containing suggested changes are often nicer than comments

February 21, 2018

Update 20180216: remark about non-Ubuntu extensions & stability.

I like what the Ubuntu people did when adopting Gnome as the new Desktop after the dismissal of Unity. When the change was announced some months ago, I decided to move to Gnome and see if I liked it. I did.

It’s a good idea to take advantage of the small changes Ubuntu made to Gnome 3. Forking dash-to-dock was a great idea, so that untested (e.g. upstream) updates don’t break the desktop. I won’t discuss settings you can change through the “Settings” application (Ubuntu Dock settings) or through “Tweaks”:

$ sudo apt-get install gnome-tweak-tool

It’s a good idea, though, to remove third party extensions so you are sure you’re using the ones provided and adapted by Ubuntu. You can always add new extensions later (the most important ones are even packaged).
$ rm -rf ~/.local/share/gnome-shell/extensions/*

Working with Gnome 3, and to a lesser extent with MacOS, taught me that I prefer bars and docks to autohide. I never did in the past, but I feel that Gnome (and MacOS) got this right. I certainly don’t like the full-height dock: make it only as tall as it needs to be. You can use the graphical “dconf Editor” tool to make the changes, but I prefer the safer command line (you won’t make a change by accident).

To prevent the Ubuntu Dock from taking up all the vertical space (i.e., most of it being just an empty bar):

$ dconf write /org/gnome/shell/extensions/dash-to-dock/extend-height false

A neat Dock trick: when hovering over an icon on the dock, cycle through the windows of that application by scrolling (or using two fingers). Way faster than click + select:

$ dconf write /org/gnome/shell/extensions/dash-to-dock/scroll-action "'cycle-windows'"

I set the dock to autohide in the regular “Settings” application. An extension is needed to do the same for the Top Bar (you need to log out, and then enable it through the “Tweaks” application):

$ sudo apt-get install gnome-shell-extension-autohidetopbar

Update: I don’t install extensions any more besides the ones forked by Ubuntu. In my experience, they make the desktop unstable under Wayland; that said, I haven’t seen crashes related to autohidetopbar. I also moved back to Xorg (an option at the login screen) because Wayland feels less stable. The next Ubuntu release (18.04) will default to Xorg as well, meaning that at least until 18.10 Wayland won’t be the default session.

Oh, just to be safe (e.g., in case you broke something), you can reset all the gnome settings with:

$ dconf reset -f /

Have a look at the comments for some extra settings (that I personally do not use, but many do).

Some options that I don’t use, but that people have asked me about (here and elsewhere):

Especially with the scrolling setting above, you may want to only switch between windows of the same application in the active workspace. You can isolate workspaces with:

$ dconf write /org/gnome/shell/extensions/dash-to-dock/isolate-workspaces true

Hide the dock all the time, instead of only when needed. You can do this by disabling “intellihide”:

$ dconf write /org/gnome/shell/extensions/dash-to-dock/intellihide false

February 20, 2018

The post Update a docker container to the latest version appeared first on

Here's a simple one, but something you might have to look up if you're new to Docker. On this server, I run Nginx as a Docker container using the official nginx:alpine version.

I was running a fairly outdated version:

$ docker images | grep nginx
nginx    none                5a35015d93e9        10 months ago       15.5MB
nginx    latest              46102226f2fd        10 months ago       109MB
nginx    1.11-alpine         935bd7bf8ea6        18 months ago       54.8MB

In order to make sure I had the latest version, I ran pull:

$ docker pull nginx:alpine
alpine: Pulling from library/nginx
550fe1bea624: Pull complete
d421ba34525b: Pull complete
fdcbcb327323: Pull complete
bfbcec2fc4d5: Pull complete
Digest: sha256:c8ff0187cc75e1f5002c7ca9841cb191d33c4080f38140b9d6f07902ababbe66
Status: Downloaded newer image for nginx:alpine

Now, my local repository contains an up-to-date Nginx version:

$ docker images | grep nginx
nginx    alpine              bb00c21b4edf        5 weeks ago         16.8MB

To use it, you have to launch a new container based on that particular image. The currently running container will still be using the original (old) image.

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED
4d9de6c0fba1        5a35015d93e9        "nginx -g 'daemon ..."   9 months ago

In my case, I re-created my HTTP/2 nginx container like this;

$ docker stop nginx-container
$ docker rm nginx-container
$ docker run --name nginx-container \ 
    --net="host" \
    -v /etc/nginx/:/etc/nginx/ \
    -v /etc/ssl/certs/:/etc/ssl/certs/ \
    -v /etc/letsencrypt/:/etc/letsencrypt/ \
    -v /var/log/nginx/:/var/log/nginx/ \
    --restart=always \
    -d nginx:alpine

And the Nginx/container upgrade was completed.

February 19, 2018

One thing that one has to like about Enterprise distributions is the stable api/abi during the distro's lifetime. If you have an application that works, you know that it will continue to work.

But in parallel, one can't always decide which application to run on that distro with the built-in components. I was personally faced with this recently, when I needed to migrate our Bug Tracker to a new version. Let's use that example to see how we can use "newer" php pkgs distributed through the distro itself.

The application that we use for is MantisBT, and by reading their requirements list it was clear that a CentOS 7 default setup would not work: as a reminder, the default php pkg for .el7 is 5.4.16, so it is not supported anymore by "modern" application[s].

That's where SCLs (Software Collections) come to the rescue! With such "collections", one can install those without overwriting the base pkgs, and can even run multiple parallel instances of such a "stack", based on configuration.

Let's just start simple with our MantisBT example: forget about the traditional php-* packages (including "php", which provides mod_php for Apache). It's up to you to leave those installed if you need them, but in my case I'll default to php 7.1.x for the whole vhost. Also worth knowing: I wanted to integrate php with the default httpd from the distro (to ease the configuration management side, to expect finding the .conf files at $usual_place).

The good news is that those collections are built, tested and released through our CentOS Infra, so you don't have to care about anything else! (kudos to the SCLo SIG!) You can see the available collections here.

So, how do we proceed? Easy! First, let's add the repository:

yum install centos-release-scl

And from that point, you can just install what you need. For our case, MantisBT needs php, php-xml, php-mbstring, php-gd (for the captcha, if you want to use it), and a DB driver, so php-mysql (if you target MySQL, of course). You just have to "translate" that into SCL pkgs: in our case, php becomes rh-php71 (a meta pkg), php-xml becomes rh-php71-php-xml, and so on (one remark though: php-mysql became rh-php71-php-mysqlnd!)
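That translation rule is mechanical enough to sketch in a few lines of shell. The `scl_pkg` helper name is mine, not part of the SCL tooling; only the php-mysql rename needs special-casing:

```shell
# Map a base php package name to its rh-php71 SCL equivalent.
scl_pkg() {
  case "$1" in
    php)       echo "rh-php71" ;;             # the meta package
    php-mysql) echo "rh-php71-php-mysqlnd" ;; # the one exception
    php-*)     echo "rh-php71-$1" ;;          # generic rule
    *)         echo "$1" ;;                   # non-php pkgs unchanged
  esac
}

# yum install httpd $(for p in php php-xml php-mbstring php-gd php-mysql; do scl_pkg "$p"; done)
```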

So here we go :

yum install httpd rh-php71 rh-php71-php-xml rh-php71-php-mbstring rh-php71-php-gd rh-php71-php-soap rh-php71-php-mysqlnd rh-php71-php-fpm

As said earlier, we'll target the default httpd pkg from the distro, so we just have to "link" php and httpd. Remember that mod_php isn't available anymore; instead we'll use the php-fpm pkg (see rh-php71-php-fpm), so all requests are sent to that FastCGI Process Manager daemon.

Let's do this :

systemctl enable httpd --now
systemctl enable rh-php71-php-fpm --now
cat > /etc/httpd/conf.d/php-fpm.conf << EOF
AddType text/html .php
DirectoryIndex index.php
<FilesMatch \.php$>
    # 127.0.0.1:9000 is the default listen address of rh-php71-php-fpm;
    # adjust it if you changed the FPM pool configuration.
    SetHandler "proxy:fcgi://127.0.0.1:9000"
</FilesMatch>
EOF
systemctl restart httpd

And from this point it's all basic: the application is now using the php 7.1.x stack. That's a basic "howto", but you can also run multiple versions in parallel and tune php-fpm itself. If you're interested, I'll let you read Remi Collet's blog post about this (thank you again, Remi!)

Hope this helps; strangely, I couldn't easily find a simple howto for this, as "scl enable rh-php71 bash" wouldn't help a lot with httpd (which is probably the most used scenario).

I was working on my POSSE plan when Vanessa called and asked if I wanted to meet for a coffee. Of course, I said yes. In the car ride over, I was thinking about how I made my first website over twenty years ago. HTML table layouts were still cool and it wasn't clear if CSS was going to be widely adopted. I decided to learn CSS anyway. More than twenty years later, the workflows, the automated toolchains, and the development methods have become increasingly powerful, but also a lot more complex. Today, you simply npm your webpack via grunt with vue babel or bower to react asdfjkl;lkdhgxdlciuhw. Everything is different now, except that I'm still at my desk learning CSS.

The post Show IDN punycode in Firefox to avoid phishing URLs appeared first on

Pop quiz: can you tell the difference between these 2 domains?

Both host a version of the popular crypto exchange Binance.

The second image is the correct one; the first one is a phishing link with the letter 'n' replaced by 'n with a dot below it' (U+1E47). It's not a piece of dirt on your screen, it's an attempt to trick you into believing it's the official site.

Firefox has a very interesting option called network.IDN_show_punycode. You can enable it in about:config.

Once enabled, it'll make that phishing domain look like this:

Doesn't look that legit now anymore, does it?

I wish Chrome offered a similar option too; it could prevent quite a few phishing attempts.
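You don't need a browser to spot this class of lookalike, by the way. A domain containing bytes outside the plain ASCII domain alphabet deserves a second look; here is a minimal shell sketch (the `is_suspicious` helper is my own naming, not an existing tool):

```shell
# Return success (0) if the domain contains bytes outside the
# plain ASCII a-z/0-9/./- alphabet, i.e. a candidate IDN homograph.
is_suspicious() {
  printf '%s' "$1" | LC_ALL=C grep -q '[^a-zA-Z0-9.-]'
}

is_suspicious "binance.com" || echo "binance.com is plain ASCII"
is_suspicious "biṇance.com" && echo "biṇance.com contains non-ASCII bytes"
```

Note that this only flags raw Unicode domains; a domain already rendered in its ASCII xn-- form would pass, which is exactly why the punycode view in Firefox is useful.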



"Get your HTTPS on" because Chrome will mark all HTTP sites as "not secure" starting in July 2018. Chrome currently displays a neutral icon for sites that aren't using HTTPS, but starting with Chrome 68, the browser will warn users in the address bar.

Fortunately, HTTPS has become easier to implement through services like Let's Encrypt, who provide free certificates and aim to eliminate the complexity of setting up and maintaining HTTPS encryption.

February 18, 2018

TL;DR Autoptimize 2.4 will be a major change. Tomaš Trkulja (aka zytzagoo) has cleaned up and modernized the code significantly, making it easier to read and maintain, switched to the latest and greatest minifiers, and added automated testing. And as if that isn't enough, we're also adding new optimization options! The downside: we will be dropping support for PHP < 5.3 and for the "legacy minifiers". AO 2.4 will first be made available as a separate "Autoptimize Beta" plugin via GitHub and will see more releases/iterations with new options/optimizations there before being promoted to the official "Autoptimize".

Back in March 2015 zytzagoo forked Autoptimize, rewriting the CDN-replacement logic (and a lot of autoptimizeStyles::minify, really) and started adding automated testing. I kept an eye on the fork and later that year I contacted Tomaš via GitHub to see how we could collaborate. We have been in touch ever since; some of his improvements have already been integrated, and he is my go-to man to discuss coding best practices, bugfixes and security.

FFWD to the near future: Autoptimize 2.4 will be based on Tomaš' fork and will include the following major changes:

  • New: option to only minify JS/CSS without combining them
  • New: excluded JS- or CSS-files will be automatically minified
  • Improvement: switching to the current version of JSMin and, more importantly, of the YUI CSS compressor PHP port, which itself has some major performance improvements
  • Improvement: all create_function() instances have been replaced by anonymous functions (PHP 7.2 started issuing warnings about create_function being deprecated)
  • Improvement: all code in autoptimize/classlesses/* (my stuff) has been rewritten into OOP and is now in autoptimize/classes/*
  • Improvement: use of autoload instead of manual conditional includes
  • Improvement: a nice amount of test cases (although no 100% coverage yet), allowing for Travis continuous integration-tests being done
  • dropping support for PHP below 5.3 (you really should be on PHP 7.x, it is way faster)
  • dropping support for the legacy minifiers

These improvements will be released in a separate "Autoptimize Beta" plugin soon (albeit not in the official plugin directory, as "beta" plugins are not allowed there). You can already download it from GitHub here. We will start adding additional optimization options there, releasing at a higher pace. The goal is to create a healthy beta-user pool, allowing us to move code from AO Beta to AO proper with more confidence. So what new optimization options would you like to see added to Autoptimize 2.4 and beyond? :-)

[corrected 19/02; does not allow beta-plugins]

February 17, 2018

I published the following diary on “Malware Delivered via Windows Installer Files“:

For some days, I collected a few samples of malicious MSI files. MSI files are Windows installer files that users can execute to install software on a Microsoft Windows system. Of course, you can replace “software” with “malware”. MSI files look less suspicious and they could bypass simple filters based on file extensions like “(com|exe|dll|js|vbs|…)”. They also look less dangerous because they are Composite Document Files… [Read more]

[The post [SANS ISC] Malware Delivered via Windows Installer Files has been first published on /dev/random]

February 16, 2018

My website plan

In an effort to reclaim my blog as my thought space and take back control over my data, I want to share how I plan to evolve my website. Given the incredible feedback on my previous blog posts, I want to continue the conversation and ask for feedback.

First, I need to find a way to combine longer blog posts and status updates on one site:

  1. Update my site navigation menu to include sections for "Blog" and "Notes". The "Notes" section would resemble a Twitter or Facebook livestream that catalogs short status updates, replies, interesting links, photos and more. Instead of posting these on third-party social media sites, I want to post them on my site first (POSSE). The "Blog" section would continue to feature longer, more in-depth blog posts. The front page of my website will combine both blog posts and notes in one stream.
  2. Add support for Webmention, a web standard for tracking comments, likes, reposts and other rich interactions across the web. This way, when users retweet a post on Twitter or cite a blog post, mentions are tracked on my own website.
  3. Automatically syndicate to 3rd party services, such as syndicating photo posts to Facebook and Instagram or syndicating quick Drupal updates to Twitter. To start, I can do this manually, but it would be nice to automate this process over time.
  4. Streamline the ability to post updates from my phone. Sharing photos or updates in real-time only becomes a habit if you can publish something in 30 seconds or less. It's why I use Facebook and Twitter often. I'd like to explore building a simple iOS application to remove any friction from posting updates on the go.
  5. Streamline the ability to share other people's content. I'd like to create a browser extension to share interesting links along with some commentary. I'm a small investor in Buffer, a social media management platform, and I use their tool often. Buffer makes it incredibly easy to share interesting articles on social media, without having to actually open any social media sites. I'd like to be able to share articles on my blog that way.

Second, as I begin to introduce a larger variety of content to my site, I'd like to find a way for readers to filter content:

  1. Expand the site navigation so readers can filter by topic. If you want to read about Drupal, click "Drupal". If you just want to see some of my photos, click "Photos".
  2. Allow people to subscribe by interest. Drupal 8 makes it easy to offer an RSS feed by topic. However, it doesn't look nearly as easy to allow email subscribers to receive updates by interest. Mailchimp's RSS-to-email feature, my current mailing list solution, doesn't seem to support this, and neither do the obvious alternatives.

Implementing this plan is going to take me some time, especially because it's hard to prioritize this over other things. Some of the steps I've outlined are easy to implement thanks to the fact that I use Drupal. For example, creating new content types for the "Notes" section, adding new RSS feeds and integrating "Blogs" and "Notes" into one stream on my homepage are all easy – I should be able to get those done in my next free evening. Other steps, like building an iPhone application, building a browser extension, or figuring out how to filter email subscriptions by topic, are going to take more time. Setting up my POSSE system is a nice personal challenge for 2018. I'll keep you posted on my progress – much of that might happen via short status updates, rather than on the main blog. ;)

February 15, 2018

I just published a quick update of my imap2thehive tool. Files attached to an email can now be processed and uploaded as an observable attached to a case. It is possible to specify which MIME types to process via the configuration file. The example below will process PDF & EML files:

files: application/pdf,message/rfc822

The script is available here.

[The post Imap2TheHive: Support of Attachments has been first published on /dev/random]

This Thursday, March 15, 2018 at 7 p.m., the 67th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: ReactJS – Presentation of the library and prototyping of a "clicker"-style game

Theme: Internet|Graphics|sysadmin|community

Audience: General public|sysadmin|companies|students|…

Speaker: Michaël Hoste (80LIMIT)

Venue: Technical campus (ISIMs) of the Haute Ecole en Hainaut, Avenue V. Maistriau, 8a, Salle Académique, 2nd building (see this map on the ISIMs website, and here on the OpenStreetMap map).

Attendance is free and only requires your registration, by name and preferably in advance, or at the entrance to the session. Please indicate your intention by signing up via the registration page. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also enjoy the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to consult the agenda and to subscribe to the mailing list in order to systematically receive the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized on the premises of, and in collaboration with, Mons institutions of higher education involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: JavaScript development is a real jumble of technical solutions. Not a month goes by without a supposedly revolutionary new framework appearing.

When ReactJS came out in 2013, backed by a Facebook then unaccustomed to releasing open-source solutions, few expected to see a real challenger arrive. Its peculiarity of mixing HTML and JavaScript code within a single file even went against common sense. Yet four years later, this solution remains ideal for delivering ever more fluid and complex web experiences. ReactJS and its underlying mechanisms have brought a breath of fresh air to the front-end web world.

During this session, we will go over the history of ReactJS and explain why this library, minimalist though it is, tackles the problems of the web in a sensible way.

Once the logical foundations have been covered, we will get our hands on the keyboard and develop a concrete example in the form of a "clicker"-style game (Clicker Heroes, Cookie Clicker, etc.).

Short bio: Michaël Hoste holds a degree in Computer Science from the University of Mons. In 2012 he founded 80LIMIT, a web and mobile application development company focused on the Ruby and JavaScript languages. He likes to split his time between client projects and internal Software-as-a-Service projects. He also chairs the Creative Monkeys non-profit, which allows several companies to thrive in a shared open space in the center of Mons.

February 14, 2018

My blog as my thought space

Last week, I shared my frustration with using social media websites like Facebook or Twitter as my primary platform for sharing photos and status updates. As an advocate of the open web, this has bothered me for some time so I made a commitment to prioritize publishing photos, updates and more to my own site.

I'm excited to share my plan for how I'd like to accomplish this, but before I do, I'd like to share two additional challenges I face on my blog. These struggles factor into some of the changes I'm considering implementing, so I feel compelled to share them with you.

First, I've struggled to cover a wide variety of topics lately. I've been primarily writing about Drupal, Acquia and the Open Web. However, I'm also interested in sharing insights on startups, investing, travel, photography and life outside of work. I often feel inspired to write about these topics, but over the years I've grown reluctant to expand outside of professional interests. My blog is primarily read by technology professionals — from Drupal users and developers, to industry analysts and technology leaders — and in my mind, they do not read my blog to learn about a wider range of topics. I'm conflicted because I would like my blog to reflect both my personal and professional interests.

Secondly, I've been hesitant to share short updates, such as a two sentence announcement about a new Drupal feature or an Acquia milestone. I used to publish these kinds of short updates quite frequently. It's not that I don't want to share them anymore, it's that I struggle to post them. Every time I publish a new post, it goes out to more than 5,000 people that subscribe to my blog by email. I've been reluctant to share short status updates because I don't want to flood people's inbox.

Throughout the years, I worked around these two struggles by relying on social media; while I used my blog for in-depth posts specific to my professional life, I used social media for short updates, sharing photos and starting conversations about a wider variety of topics.

But I never loved this division.

I've always written for myself, first. Writing pushes me to think, and it is the process I rely on to flesh out ideas. This blog is my space to think out loud, and to start conversations with people considering the same problems, opportunities or ideas. In the early days of my blog, I never considered restricting my blog to certain topics or making it fit specific editorial standards.

Om Malik published a blog post last week that echoes my frustration. For Malik, blogs are thought spaces: a place for writers to share original opinions that reflect "how they view the world and how they are thinking". As my blog has grown, it has evolved, and along the way it has become less of a public thought space.

My commitment to implementing a POSSE approach on my site has brought these struggles to the forefront. I'm glad it did because it requires me to rethink my approach and to return to my blogging roots. After some consideration, here is what I want to do:

  1. Take back control of more of my data; I want to share more of my photos and social media data on my own site.
  2. Find a way to combine longer in-depth blog posts and shorter status updates.
  3. Enable readers and subscribers to filter content based on their own interests so that I can cover a larger variety of topics.

In my next blog post, I plan to outline more details of how I'd like to approach this. Stay tuned!

February 10, 2018

Today I want to draw attention to a Belgian law on selling at a loss. Our country prohibits, by law, any merchant from selling a good at a loss. That is the rule in our Belgium.

That rule (rightly) has exceptions. The definition of an exception means that it is not the rule: in Belgium, selling at a loss is only permitted by way of exception:

  • on the occasion of end-of-season or clearance sales;
  • in order to dispose of goods that are liable to deteriorate rapidly, when their preservation can no longer be assured;
  • as a result of external circumstances;
  • for goods that are technically obsolete or damaged;
  • out of competitive necessity.

I suspect that our law exists to combat unfair competition. A merchant thus cannot sell a certain product (e.g. a game console) at a loss in order to gain market dominance for another product in his range (e.g. games), for instance with the aim of keeping competitors out of the market.

In my view, this means that if a game console manufacturer were to sell a console at a loss, that would be illegal in Belgium.

Let us assume that game console manufacturers that are active in (sales in) Belgium follow Belgian law. It then follows that they do not sell their game consoles at a loss. So they make a profit. If they did not, they would have to meet the exceptional conditions in the (previously mentioned) Belgian law that allow them to make a loss. In all other cases they would be acting unlawfully. That is Belgian law.

This means that when I, as a Belgian consumer, purchase such a game console, the manufacturer and seller have made a certain profit on my purchase. There is therefore no question of a loss. Unless the manufacturer or seller in Belgium is involved in unlawful dealings.

Now let us assume that, after purchase, we want to run different software on such a console. The manufacturer/seller then cannot claim that their profit comes from things that would be sold afterwards (by means of, for example, original software).

In other words, their profit has already been made. On the game console itself. If not, the manufacturer or seller would be acting unlawfully (in Belgium). We assume that this is not what happened, because otherwise they would not have been allowed to sell the good. The good was sold. In accordance with Belgian law (right?).

If not, then the manufacturer and/or seller is responsible. In no case the consumer.

February 09, 2018

A quick blog post about a module that I wrote to interconnect the malware analysis framework Viper with the A1000, the malware analysis platform from ReversingLabs.

The module can perform two actions at the moment: submitting a new sample for analysis and retrieving the analysis results (classification):

viper sample.exe > a1000 -h
usage: a1000 [-h] [-s] [-c]

Submit files and retrieve reports from a ReversingLab A1000

optional arguments:
-h, --help show this help message and exit
-s, --submit Submit file to A1000
-c, --classification Get classification of current file from A1000
viper sample.exe > a1000 -s
[*] Successfully submitted file to A1000, task ID: 393846

viper sample.exe > a1000 -c
[*] Classification
- Threat status : malicious
- Threat name : Win32.Trojan.Fareit dw eldorado
- Trust factor : 5
- Threat level : 2
- First seen : 2018-02-09T13:03:26Z
- Last seen : 2018-02-09T13:07:00Z

The module is available on my GitHub repository.

[The post Viper and ReversingLabs A1000 Integration has been first published on /dev/random]