Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

December 14, 2018

Gift drive

Every December, Acquia organizes a gift drive on behalf of the Wonderfund. The gift drive supports children that otherwise wouldn't be receiving gifts this holiday season. This year, more than 120 Acquians collected presents for 205 children in Massachusetts.

Acquia's annual gift drive always stands out as a heartwarming and meaningful effort. It's a wonderful example of how Acquia is committed to "Give back more". Thank you to every Acquian who participated, and to Wonderfund for its continued partnership. Happy Holidays!

December 13, 2018

Autoptimize by default excludes inline JS and jquery.js from optimization. Inline JS is excluded because it is a typical cache-buster (it often contains changing variables), and because inline JS frequently requires jQuery to be available, jquery.js has to be excluded as well. The result of this “safe default”, however, is that jquery.js is a render-blocking resource. So even if you’re doing “inline & defer CSS”, your Start-Render time (or one of the variations thereof) will be sub-optimal.

Jonas, the smart guy behind the idea, proposed embedding inline JS that requires jQuery in a function that executes after the DOMContentLoaded event. So I created a small proof-of-concept snippet which hooks into Autoptimize’s API, and it seems to work just fine.

The next step is having some cutting-edge Autoptimize users test this in the wild. You can view/download the code from this gist and add it as a code snippet (or, if you insist, in your theme’s functions.php). Your feedback is more than welcome; I’m sure you know where to find me!
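
The core trick can be sketched in a few lines of JavaScript. This is my own minimal illustration (the function name is made up; the actual proof of concept hooks into Autoptimize’s PHP API): take the body of a jQuery-dependent inline script and wrap it so it only runs once the DOM is ready and the deferred jquery.js has executed.

```javascript
// Minimal sketch (illustrative, not Autoptimize's actual API): wrap the body
// of a jQuery-dependent inline <script> in a DOMContentLoaded listener, so it
// runs only after the deferred jquery.js has been loaded and executed.
function wrapJqueryDependentJs(inlineJs) {
  return "document.addEventListener('DOMContentLoaded',function(){" +
         inlineJs +
         "});";
}

// Example: a typical jQuery-dependent inline snippet.
console.log(wrapJqueryDependentJs("jQuery('.menu').slideToggle();"));
```

Because the listener fires after HTML parsing, and deferred scripts execute before DOMContentLoaded, jQuery is guaranteed to be available by the time the wrapped code runs.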

Gabe, Mateu and I just released the third RC of JSON:API 2, so time for an update! The last update is from three weeks ago.

What happened since then? In a nutshell:


Curious about RC3? From RC2 to RC3 there are five key changes:

  1. ndobromirov is all over the issue queue to fix performance issues: he fixed a critical performance regression in 2.x vs 1.x that is only noticeable when requesting responses with hundreds of resources (entities); he also fixed another performance problem that manifests itself only in those circumstances, but also exists in 1.x.
  2. One major bug was reported by dagmar: the ?filter syntax that we made less confusing in RC2 was a big step forward, but we had missed one particular edge case!
  3. A pretty obscure broken edge case was discovered, but probably fairly common for those creating custom entity types: optional entity reference base fields that are empty made the JSON:API module stumble. Turns out optional entity reference fields get different default values depending on whether they’re base fields or configured fields! Fortunately, three people gave valuable information that led to finding this root cause and the solution! Thanks, olexyy, keesee & caseylau!
  4. A minor bug was fixed that occurs only when installing JSON:API Extras and configuring it in a certain way.
  5. Version 1.1 RC1 of the JSON:API spec was published; it includes two clarifications to the existing spec. We already were doing one of them correctly (test coverage added to guarantee it), and the other one we are now complying with too. Everything else in version 1.1 of the spec is additive; this is the only thing that could be disruptive, so we chose to do it ASAP.
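
For context, the ?filter syntax mentioned in point 2 builds nested query parameters like the following (the resource type and field names here are illustrative examples of the syntax, not the reported edge case itself):

```javascript
// Illustrative only: constructing a JSON:API collection filter query string.
// The resource path ("node/article"), filter label and field are made-up examples.
const params = new URLSearchParams({
  "filter[title-filter][condition][path]": "title",
  "filter[title-filter][condition][operator]": "CONTAINS",
  "filter[title-filter][condition][value]": "drupal",
});
const url = "/jsonapi/node/article?" + params.toString();
console.log(url);
```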

So … now is the time to update to 2.0-RC3. We’d love the next release of JSON:API to be the final 2.0 release!

P.S.: if you want fixes to land quickly, follow dagmar’s example:

  1. Note that usage statistics on are an underestimation! Any site can opt out from reporting back, and composer-based installs don’t report back by default. ↩︎

  2. Since we’re in the RC phase, we’re limiting ourselves to only critical issues. ↩︎

  3. This is the first officially proposed JSON:API profile! ↩︎

I published the following diary on “Phishing Attack Through Non-Delivery Notification”:

Here is a nice example of a phishing attack that I found while reviewing data captured by my honeypots. We all know that phishing is a pain and attackers are always searching for new tactics to entice the potential victim to click on a link, disclose personal information or more… [Read more]

[The post [SANS ISC] Phishing Attack Through Non-Delivery Notification has been first published on /dev/random]

If you're driving into Boston, you might notice something new on I-90. Acquia has placed ads on two local billboards; more than 120,000 cars drive past these billboards every day. This is the first time in Acquia's eleven years that we've taken out a highway billboard and dipped our toes in more traditional media advertising. Personally, I find that exciting, because it means that more and more people will be introduced to Acquia. If you find yourself on the Mass Pike, keep an eye out!


December 12, 2018

The post Our Gitlab CI pipeline for Laravel applications – Oh Dear! blog appeared first on

We've built an extensive Gitlab CI Pipeline for our testing at Oh Dear! and we're open sourcing our configs.

This can be applied to any Laravel application and will significantly speed up configuring your entire pipeline if you're just getting started.

We're releasing our Gitlab CI pipeline that is optimized for Laravel applications.

It contains all the elements you'd expect: building (composer, yarn & webpack), database seeding, PHPUnit & copy/paste (mess) detectors & some basic security auditing of our 3rd party dependencies.
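
As a rough sketch of what such a pipeline looks like, here is a minimal .gitlab-ci.yml for a Laravel app. This is my own illustration, not the actual Oh Dear! config (which is in the linked post); the image and job names are assumptions.

```yaml
# Minimal illustrative .gitlab-ci.yml for a Laravel app (not the actual
# Oh Dear! config; image, stages and job names are assumptions).
stages:
  - build
  - test

build:
  stage: build
  image: php:7.2
  script:
    - curl -sS https://getcomposer.org/installer | php
    - php composer.phar install --prefer-dist --no-interaction
  artifacts:
    paths:
      - vendor/

phpunit:
  stage: test
  image: php:7.2
  dependencies:
    - build
  script:
    - php vendor/bin/phpunit --colors=never
```

The full config adds the yarn/webpack build, database seeding, mess/copy-paste detection and dependency auditing jobs on top of this skeleton.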

Source: Our Gitlab CI pipeline for Laravel applications -- Oh Dear! blog


At Drupal Europe, I announced that Drupal 9 will be released in 2020. Although I explained why we plan to release in 2020, I wasn't very specific about when we plan to release Drupal 9 in 2020. Given that 2020 is less than thirteen months away (gasp!), it's time to be more specific.

Shifting Drupal's six month release cycle

[Image: a timeline that shows how we shifted Drupal 8's release windows.] We shifted Drupal 8's minor release windows so we can adopt Symfony's releases faster.

Before I talk about the Drupal 9 release date, I want to explain another change we made, which has a minor impact on the Drupal 9 release date.

As announced over two years ago, Drupal 8 adopted a 6-month release cycle (two releases a year). Symfony, a PHP framework which Drupal depends on, uses a similar release schedule. Unfortunately the timing of Drupal's releases has historically occurred 1-2 months before Symfony's releases, which forces us to wait six months to adopt the latest Symfony release. To be able to adopt the latest Symfony releases faster, we are moving Drupal's minor releases to June and December. This will allow us to adopt the latest Symfony releases within one month. For example, Drupal 8.8.0 is now scheduled for December 2019.

We hope to release Drupal 9 on June 3, 2020

Drupal 8's biggest dependency is Symfony 3, which has an end-of-life date in November 2021. This means that after November 2021, security bugs in Symfony 3 will not get fixed. Therefore, we have to end-of-life Drupal 8 no later than November 2021. Or put differently, by November 2021, everyone should be on Drupal 9.

Working backwards from November 2021, we'd like to give site owners at least one year to upgrade from Drupal 8 to Drupal 9. While we could release Drupal 9 in December 2020, we decided it was better to try to release Drupal 9 on June 3, 2020. This gives site owners 18 months to upgrade. Plus, it also gives the Drupal core contributors an extra buffer in case we can't finish Drupal 9 in time for a summer release.

[Image: a timeline that shows we hope to release Drupal 9 in June 2020.] Planned Drupal 8 and 9 minor release dates.

We are building Drupal 9 in Drupal 8

Instead of working on Drupal 9 in a separate codebase, we are building Drupal 9 in Drupal 8. This means that we are adding new functionality as backwards-compatible code and experimental features. Once the code becomes stable, we deprecate any old functionality.

Let's look at an example. As mentioned, Drupal 8 currently depends on Symfony 3. Our plan is to release Drupal 9 with Symfony 4 or 5. Symfony 5's release is less than one year away, while Symfony 4 was released a year ago. Ideally Drupal 9 would ship with Symfony 5, both for the latest Symfony improvements and for longer support. However, Symfony 5 hasn't been released yet, so we don't know the scope of its changes, and we will have limited time to try to adopt it before Symfony 3's end-of-life.

We are currently working on making it possible to run Drupal 8 with Symfony 4 (without requiring it). Supporting Symfony 4 is a valuable stepping stone to Symfony 5, as it brings new capabilities for sites that choose to use it, and it reduces the amount of Symfony 5 upgrade work for Drupal core developers. In the end, our goal is for Drupal 8 to work with Symfony 3, 4 or 5 so we can identify and fix any issues before we start requiring Symfony 4 or 5 in Drupal 9.

Another example is our support for reusable media. Drupal 8.0.0 launched without a media library. We are currently working on adding a media library to Drupal 8 so content authors can select pre-existing media from a library and easily embed them in their posts. Once the media library becomes stable, we can deprecate the use of the old file upload functionality and make the new media library the default experience.

The upgrade to Drupal 9 will be easy

Because we are building Drupal 9 in Drupal 8, the technology in Drupal 9 will have been battle-tested in Drupal 8.

For Drupal core contributors, this means that we have a limited set of tasks to do in Drupal 9 itself before we can release it. Releasing Drupal 9 will only depend on removing deprecated functionality and upgrading Drupal's dependencies, such as Symfony. This will make the release timing more predictable and the release quality more robust.

For contributed module authors, it means they already have the new technology at their service, so they can work on Drupal 9 compatibility earlier (e.g. they can start updating their media modules to use the new media library before Drupal 9 is released). Finally, their Drupal 8 know-how will remain highly relevant in Drupal 9, as there will not be a dramatic change in how Drupal is built.

But most importantly, for Drupal site owners, this means that it should be much easier to upgrade to Drupal 9 than it was to upgrade to Drupal 8. Drupal 9 will simply be the last version of Drupal 8, with its deprecations removed. This means we will not introduce new, backwards-compatibility breaking APIs or features in Drupal 9 except for our dependency updates. As long as modules and themes stay up-to-date with the latest Drupal 8 APIs, the upgrade to Drupal 9 should be easy. Therefore, we believe that a 12- to 18-month upgrade period should suffice.

So what is the big deal about Drupal 9, then?

The big deal about Drupal 9 is … that it should not be a big deal. The best way to be ready for Drupal 9 is to keep up with Drupal 8 updates. Make sure you are not using deprecated modules and APIs, and where possible, use the latest versions of dependencies. If you do that, your upgrade experience will be smooth, and that is a big deal for us.

Special thanks to Gábor Hojtsy (Acquia), Angie Byron (Acquia), xjm (Acquia), and catch for their input in this blog post.

December 11, 2018

December 10, 2018

This morning, I received a mail from Cisco to tell me that I’ve been nominated as a finalist for their IT Blog Awards (category: “Most Inspirational”). I maintain this blog just for fun and to share useful (I hope) information with my readers; I don’t do this to get rewards, but it’s always nice to get such feedback. The final competition is now open, so if you have a few minutes, just vote for me!

Votes are open here. Thank you!

[The post Nominated for the IT Blog Awards has been first published on /dev/random]

December 08, 2018

This week I was in New York for a day. At lunch, Sir Martin Sorrell pointed out that Microsoft overtook Apple as the world's most valuable company as measured by market capitalization. It's a close call, but Microsoft is now worth $805 billion while Apple is worth $800 billion.

What is interesting to me are the radical "ebbs and flows" of each organization.

In the 80s, Apple's market cap was twice that of Microsoft. Microsoft overtook Apple in the early 90s, and by the late 90s, Microsoft's valuation was a whopping thirty-five times Apple's. With a 35x difference in valuation, no one would have guessed that Apple would ever regain the number-one position. However, Apple did the unthinkable and regained its crown in market capitalization. By 2015, Apple was, once again, valued two times more than Microsoft.

And now, eighteen years after Apple took the lead, Microsoft has taken the lead again. Everything old is new again.

As you'd expect, the change in market capitalization corresponds with the evolution and commercial success of their product portfolios. In the 90s, Microsoft took the lead based on the success of the Windows operating system. Apple regained the crown in the 2000s based on the success of the iPhone. Today, Microsoft benefits from the rise of cloud computing, Software-as-a-Service and Open Source, while Apple is trying to navigate the saturation of the smartphone market.

It's unclear if Microsoft will maintain and extend its lead. On one hand, the market trends are certainly in Microsoft's favor. On the other hand, Apple still makes a lot more money than Microsoft. I believe Apple to be slightly undervalued and Microsoft to be overvalued. The current valuation difference is not justified.

At the end of the day, what I find to be most interesting is how both organizations have continued to reinvent themselves. This reinvention has happened roughly every ten years. During these periods of reinvention, organizations can fall out of favor for long stretches of time. However, as both organizations prove, it pays off to reinvent yourself and to be patient product and market builders.

December 07, 2018

And the conference is over! I’m flying back home tomorrow morning, so I have time to write my third wrap-up. The last day of the conference is always harder for many attendees due to the late parties, but I was present on time to attend the last set of presentations. The first one was presented by Wu Tiejun and Zhao Guangyan: “WASM Security Analysis Reverse Engineering”. They started with an introduction to WASM or “Web Assembly”. It’s a portable technology deployed in browsers which provides an efficient binary format available on many different platforms. An interesting URL they mentioned is WAVM, a standalone VM for WebAssembly. They also covered CVE-2018-4121. I’m sorry for the lack of details, but the speakers were reading their slides and it was very hard to follow them. Sad, because I’m sure that they have deep knowledge of this technology. If you’re interested, have a look at their slides once published, or here is another resource.
The next speaker was Charles IBRAHIM, who presented an interesting usage of a botnet. The title of his presentation was “Red Teamer 2.0: Automating the C&C Set up Process”. Botconf is a conference dedicated to fighting botnets, but this time it was about building one! By definition, a botnet can be very useful: sharing resources, executing commands on remote hosts, collecting data, etc. All those operations can be very interesting while conducting red-team exercises. Indeed, red teams need tools to perform operations in a smooth way and have to remain below the radar. There are plenty of tools available for red teamers, but there is a lack of aggregation. Charles presented the botnet they developed. The goal is to reduce the time required to build the infrastructure, to easily execute common actions, to log operations and to reduce OPSEC risks. The C&C infrastructure provides user authentication, logging capabilities, remote agent deployment and administration, and covers communication techniques. The steps of the red-team process were reviewed:
  • Reconnaissance: via recon-ng
  • Weaponization: installation of a RAT with AV alarms, empire agent
  • Delivery: EC2 instance creation, Gophish
  • Exploitation & post-exploitation: receives the RAT connection, launches discovery commands
Very interesting approach to an alternative use of a botnet.
Then, Rommel Joven came on stage to talk about Mirai: “Beyond the Aftermath“. Mirai was a huge botnet that affected many IoT devices. Since it was discovered, what has been going on? Mirai was also used to DDoS major sites like Twitter, Spotify, Reddit, etc. Later, the source code was released. Why does it affect IoT devices?

  • Easy to exploit
  • 24/7 availability
  • Powerful enough for DDoS
  • Rarely monitored / patched
  • Low security awareness
  • Malware source code available for reuse
The last point was the key of the presentation. Rommel explained that, since the leak, many new malware families have been developed reusing functions or parts of the code present in Mirai. 80K samples were detected in 2018 so far with 49% of the Mirai code. Malware developers are like common developers: why reinvent the wheel if you can borrow some code somewhere else? Then Rommel reviewed some malware samples that are known to reuse the Mirai code (or at least part of it):
  • Hajime – same user/password combinations
  • IoTReaper – use of 9 exploits for infection, Lua integration
  • Persirai/Http81 – borrows the port scanner from Mirai as well as utility functions; similar strings found
  • BashLite – MiraiScanner(), MiraiIPRanges(), …
  • HideNSeek – configuration table similarity, utility functions, data exfiltration capability
  • ADB.Miner – port scanning from Mirai, adds a Monero miner
When you deploy a botnet, the key is to monetize it. How is this achieved with Mirai & its alternatives? By performing cryptomining operations, by stealing ETH coins, or by installing proxies or booters.
Let’s continue with Piotr BIAŁCZAK, who presented “Leaving no Stone Unturned – in Search of HTTP Malware Distinctive Features“. The idea behind Piotr’s research is to analyze HTTP requests to try to identify which ones are performed by a regular browser and which by malware (Windows malware samples), and then to try to build families. The research was based on a huge number of PCAP files that were analyzed through the following process:
PCAP > HTTP request > IDS > SID assigned to request > Analysis > Save to the database
The data source of the PCAP files was CERT Polska’s sandbox system; HTTP traffic was generated via popular browsers and access to the Alexa top 500 via Selenium. About numbers: 36K+ PCAP files were analyzed and 2.5M+ alerts generated. Traffic from malware samples came from many known families like Locky, Zbot, Ursnif, Dreambot, Pony, Nemucod, … 172 families were identified in total, with 19% of requests coming from unknown malware. To analyze the results, they searched for errors, features inherent to malicious operations (example: obfuscation) and features which reflect differences in data exchange.
About headers, interesting features are:
  • Occurrence frequency
  • Misspelling
  • Lack of headers
  • Protocol version
  • Destination port
  • Strange user-agent
  • Presence of non-printable characters
About payloads:
  • Length
  • Entropy
  • Non-printable characters
  • Obfuscation
  • Presence of request pipelining
Some of the findings:
  • Lack of colon in header line
  • Unpopular whitespace character + space before comma
  • End of header line other than CR+LF
  • Non-ASCII characters in header line
  • Destination port (other ports used by some malware families)
  • Prevalence of HTTP 1.0 requests by malware samples
  • Non-ASCII characters in payload (downloaders, bankers and trojans)
  • Entropy
  • GET request with payload
  • POST without Referer header
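
Two of the payload features above, entropy and non-printable characters, are easy to compute. Here is a minimal sketch (my own illustration, not the authors’ tooling):

```javascript
// Minimal sketch of two payload features mentioned above (my own
// illustration): Shannon entropy and presence of non-printable characters.
function shannonEntropy(bytes) {
  const counts = new Map();
  for (const b of bytes) counts.set(b, (counts.get(b) || 0) + 1);
  let h = 0;
  for (const c of counts.values()) {
    const p = c / bytes.length;
    h -= p * Math.log2(p); // sum of -p*log2(p) over byte values
  }
  return h;
}

function hasNonPrintable(bytes) {
  // Allow printable ASCII plus TAB/LF/CR; flag everything else.
  return [...bytes].some(
    (b) => (b < 0x20 && b !== 0x09 && b !== 0x0a && b !== 0x0d) || b > 0x7e
  );
}

const payload = Buffer.from("GET /index.html HTTP/1.0\r\n\r\n");
console.log(shannonEntropy(payload).toFixed(2), hasNonPrintable(payload));
```

Plain-text payloads sit in a fairly narrow entropy band; encrypted or packed payloads push the entropy toward 8 bits/byte and usually contain non-printable bytes, which is why these two features help separate the families.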
The research was interesting, but I don’t see the point for a malware developer in making malformed HTTP requests instead of using a standard library to make HTTP requests.
Yoshihiro ISHIKAWA & Shinichi NAGANO presented “Let’s Go with a Go RAT!”. The wellmess malware is written in Go and was not detected by AV before June 2018. Mirai is one of the most famous malware families written in this language. They performed a deep review of wellmess:
  • It’s a RAT
  • C2 : RCE, upload and download files
  • Identify: Go & .net (some binaries)
  • Windows 32/64 bits and ELF X64
  • Compiled with Ubuntu
  • The “wellmess” name is coming from “Welcome Message”
  • Usage of IRC terms
  • They made some now-classic typos 🙂
    • choise.go
    • wellMess
    • Mozzila
  • Specific user-agents
  • C&C infrastructure (no domains, only IP addresses)
  • Lateral movement not by default but performed via another tool called gost (Go Simple Tunnel)
  • Some versions are in .Net
  • Bot command syntax: using XML messages
They performed a live demo of the botnet and C&C comms. A very deep analysis. They also provided Suricata IDS and YARA rules to detect the malware (check the slides).
After the lunch break, James Wyke presented “Tracking Actors through their Webinjects”. He started with a recap about banking malware and webinjects; webinjects are not simple to write because web apps are complex, which is why off-the-shelf solutions are available. The idea of the research: can we classify malware families based on their webinjects? Some have been popular for years (Zeus, Gozi). James reviewed many webinjects:
  • Yummba
  • Tables
  • inj_inj
  • adm_ssl
  • concert_all
  • delsrc

For each of them, he gave details like the targets, the origin, an explanation of the name, a YARA rule to detect them, and much more information.

Then Łukasz Siewierski presented “Triada: the Past, the Present, the (Hopefully not Existing) Future“. He explained in detail the history of the Triada malware, present on many Android smartphones. It was discovered in 2016 but evolved over time.
Matthieu Faou presented “The Snake Keeps Reinventing Itself”. It was a very nice overview of the Turla espionage group. A lot of details were provided, especially about the exploitation of Outlook. I won’t give more details here; have a look at my wrap-up from 2018 where Matthieu gave the same presentation.
Finally, the schedule was completed with Ya Liu’s presentation: “How many Mirai variants are there?“. Again, a presentation about Mirai and alternative malware that reuses the same source code. There was some overlap with Rommel’s presentation (see above), but the approach was more technical. Ya explained how to automate the extraction of configurations, and what the attack methods and dictionaries are. From 21K analyzed samples, they extracted configurations and attack methods. Based on this data, they created five classification schemes. More info was also published here.
As usual, there was a small closing ceremony with more information about this edition: 26 talks for a total of 1080(!) minutes, 400 attendees coming from all over the world. Note already the date of the 2019 edition: 3-6 December. The event will be organized in Bordeaux!

[The post Botconf 2018 Wrap-Up Day #3 has been first published on /dev/random]

We’ve had a week of heated discussion within the Perl 6 community. It is the type of debate where everyone seems to lose. It is not the first time we have had this debate and it certainly won’t be the last; it seems to me that we have one of those about every six months. I decided not to link to the many reiterations of the debate in order not to feed the fire.

Before defining sides in the discussion, it is important to identify the problems that drive the fears and hopes of the community. I don’t think that the latest round of discussions was about the Perl 6 alias (Raku) in itself, but rather about the best strategy to answer two underlying problems:

  • Perl 5 is steadily losing popularity.
  • Perl 6, despite the many enhancements and a steady growth, is not yet a popular language and does not seem to have a niche or killer app in sight yet to spur rapid adoption.

These are my observations and I don’t present them as facts set in stone. However, to me, they are the two elephants in the room. As an indication we could refer to the likes of TIOBE where Perl 5 fell off the top 10 in just 5 years (from 8th to 16th, see “Very Long Time History” at the bottom) or compare Github stars and Reddit subscribers of Perl 5 and 6 with languages on the same level of popularity on TIOBE.

Perl 5 is not really active on Github and the code mirror there has, as of today, only 378 stars. Rakudo, developed on Github, understandably has more: 1022. On Reddit, 11726 people subscribe to Perl threads (mostly Perl 5) and only 1367 to Perl 6. By comparison, Go has 48970 stars on Github and 57536 subscribers on Reddit. CPython, Python’s main implementation, has 20724 stars on Github and 290,308 Reddit subscribers. Or put differently: if you don’t work in a Perl shop, can you remember when a young colleague last knew about Perl 5 other than by reputation? Have you met people in the wild that code in Perl 6?

Maybe this isn’t your experience or maybe you don’t care about popularity and adoption. If this is the case, you probably shrugged at the discussions, frowned at the personal attacks and just continued hacking. You plan to reopen IRC/Twitter/Facebook/Reddit when the dust settles. Or you may have lost your patience and moved on to a more popular language. If this is the case, this post is not for you.

I am under the impression that the participants of what I would call “cyclical discussions” *do* agree with this evaluation of the situation of Perl 5 and 6. What is discussed most of the time is clearly not a technical issue. The arguments reflect different (and in many cases opposing) strategies to alleviate the aforementioned problems. The strategies I can uncover are as follows:

Perl 5 and Perl 6 carry on as they have for longer than a decade and a half

Out of inertia, this status-quo view is what we’ve seen in practice until today. While this vision honestly subscribes to the “sister languages narrative” (Perl is made up of two languages, in order to explain the weird major version situation), it doesn’t address the perceived problems; it chooses to ignore them. The flip side is that with every real or perceived threat to the status quo, the debate resurges.

Perl 6 is the next major version of Perl

This is another status-quo view. The “sister languages narrative” is dead: Perl 6 is the next major version of Perl 5. While a lot of work is needed to make this happen, it’s work that is already happening: make Perl 6 fast. The target, however, is defined by Perl 5: it must be faster than the previous release. Perl consists of many layers and VMs are interchangeable: it’s culturally still Perl if you replace the Perl 5 runtime with one from Perl 6. This view is not well received by many people in the Perl 5 community, and certainly by those emotionally or professionally invested in “Perl”, with “Perl” most of the time meaning Perl 5.

Both Perl 5 and Perl 6 are qualified/renamed

This is a renaming view that looks for a compromise between both communities. The “sister languages narrative” is the real-world experience, and both languages can stand on their own feet while being one big community. By renaming both projects and keeping Perl in the name (e.g. Rakudo Perl, Pumpkin Perl), the investment in the Perl name is kept, while the next-major-version dilemma is dissolved. However, this strategy is not an answer for those in the Perl 6 community who feel that the (unjustified) reputation of Perl 5 is hurting Perl 6’s adoption. On the Perl 5 side, there is some resentment about why good old “Perl” needs to be renamed when Perl 6 is the newcomer.

Rename Perl 6

Perl 6’s adoption is hindered by Perl 5’s reputation and, at the same time, Perl 6’s major number “squatting” places Perl 5 in limbo. The “sister language narrative” is the real-world situation: Perl 5 is not going away, and it should not. The unjustified reputation Perl 5 has among some people is not something Perl 6 needs to fix. Only action by the Perl 6 community is required in this view. However, a “sisterly” rename will benefit Perl 5. Liberating the next major version will not fix Perl 5’s decline, but it may be a small piece of the puzzle of the recovery. Renaming will result in more loosely coupled communities, but Perl communities are not mutually exclusive and the relationship may improve without the version dilemma. The “sister language narrative” becomes a proud origin story. This strategy was opposed mostly by Perl 6 people heavily invested in both the Perl *and* the Perl 6 brand.

Alias Perl 6

While very similar to the strategy above, this view is less ambitious, as it only addresses the fact that Perl 6’s adoption is hindered by Perl 5’s reputation. It’s up to Perl 5 to fix their major version problem. It’s a compromise between (a number of) people in the Perl 6 community. Whether or not the alias catches on remains to be proven; the renaming of Perl 6 should stay on the table.

Every single strategy will result in people being angry or disappointed, because they honestly believe it hurts the strategy that they feel is necessary to alleviate Perl’s problems. We need to acknowledge that the fears and hopes are genuine and often related. Without going into detail, so as not to reignite the fire (again), the tone of many of the arguments I heard this week from people opposing the Raku alias rang very close to the arguments Perl 5 users have against the Perl 6 name: being a victim of injustice at the hands of people that don’t care about an investment of years, and a feeling of not being listened to.

By losing sight of the strategies in play, I feel the discussion degenerated very early into personal accusations that certainly leave scars while not resulting in even a hint of progress. We are not unique in this situation; see the recent example of the toll it took on Guido van Rossum. I can only sympathize with what Larry is feeling these days.

While the heated debates may continue for years to come, it’s important to keep an eye on people that silently leave. The way to irrelevance is a choice.


(I disabled comments on this entry, feel free to discuss it on Reddit or the like. However respect the tone of this message and refrain from personal attacks.)

December 06, 2018

I’m just back from the reception that was held at the Cité de l’Espace, such a great place, with animations and exhibitions of space-related devices. It’s time for my wrap-up of the second day. This morning, after a coffee refill, the first talk of the day was given by Jose Miguel ESPARZA: “Internals of a Spam Distribution Botnet“. This talk had content flagged as a mix of TLP:Amber and TLP:Red, so no disclosure here. Jose started with an introduction to well-known spam distribution botnets like Necurs or Emotet: what their features are, some volumetric statistics and how they behave. Then, he dived into a specific one, Onliner, well known for its huge number of email accounts: 711 million! He reviewed how the bot works, how it communicates with its C&C infrastructure, the panel, and the people behind the botnet. A nice review and a lot of useful information! The conclusion was that spam bots still remain a threat: they are not only used to send spam but also to deliver malware.

Then, Jan SIRMER & Adolf STREDA came on stage to present “Botception: Botnet distributes script with bot capabilities”. They presented their research about a bot acting like in the “Inception” movie: a bot that distributes a script that acts like… a bot! The “first” bot is Necurs, which was discovered in 2012. It’s one of the largest botnets and was used to distribute huge amounts of spam as well as other malware campaigns like ransomware. They explained how the bot behaves and, especially, how it communicates with its C&C servers. In the second part, they explained how they created a tracker to learn more about the botnet. Based on the results, they discovered the infection chain:

Spam email > Internet shortcut > VBS control panel > C&C > Download & execute > Flawed Ammyy.

The core component that was analyzed is the VBS control panel, which is accessed via SMB (file://) in an Internet shortcut file. Thanks to the SMB protocol, they got access to all the files and even grabbed payloads in advance! The behaviour is classic: hardcoded C&C addresses, features like install, upgrade, kill or execute, and a watchdog. Interestingly, the code was properly documented, which is rare for malicious code. Different versions were compared. At the end, the malware drops a RAT: Flawed Ammyy.

The next talk was “Stagecraft of Malicious Office Documents – A Look at Recent Campaigns”, presented by Deepen DESAI, Tarun DEWAN & Dr. Nirmal SINGH. Malicious Office documents (or “maldocs”) have been a very common infection vector for a while. But how do they evolve over time? The speakers focused their research on analyzing many maldocs. Today, approximately 1 million documents are used daily in enterprise transactions. The typical infection path is:

Maldoc > Social engineering > Execute macro > Download & execute payload

Why “social engineering”? Since Office 2007, macros are disabled by default and the attacker must use techniques to lure the victim into disabling this default protection.

They analyzed ~1200 documents with a low AV detection rate (both manually and in sandboxes). They looked at URLs, filenames, time frames and obfuscation techniques. What are the findings? They categorized the documents into campaigns that were reviewed one by one:

Campaign 1: “AppRun” – because they used Application.Run
Campaign 2: “ProtectedMacro” – because the PowerShell code was stored in document elements like boxes
Campaign 3: “LeetMX” – because leet text encoding was used
Campaign 4: “OverlayCode” – because encrypted PowerShell code is accessed using bookmarks
Campaign 5: “xObjectEnum” – because the macro code in the documents used enum values from different built-in classes in VBA objects
Campaign 6: “PingStatus” – because the document used the Win32_PingStatus WMI class and %userdomain% to detect sandboxes
Campaign 7: “Multiple embedded macros” – because the malicious RTF contained multiple embedded Excel sheets
Campaign 8: “HideInProperlty” – because PowerShell code was hidden in the document properties
Campaign 9: “USR-KL” – because they used specific User-Agents: USR-KL & TST-DC

This was a very nice study and recap about malicious documents.

Then, Tom Ueltschi came to present “Hunting and Detecting APTs using Sysmon and PowerShell Logging”. Tom is a recurring speaker at Botconf and always presents interesting stuff for hunting bad guys. This year, he came with new recipes (based on Sigma!). But, as he explained, to be able to track bad behaviour, it’s mandatory to prepare your environment for investigations (log everything, but also specific stuff like auditing, PowerShell module, script block and transcription logging). The MITRE ATT&CK framework was used as a reference in Tom’s presentation. He reviewed three techniques that deserve to be detected:

  • Malware persistence installation through WMI Event Subscription (it needs an event filter, an event consumer and a binding between the two)
  • Persistence installation through login scripts
  • Any suspicious usage of PowerShell

For each technique, Tom described what to log and how to search the events to spot the bad guys. The third technique was covered in more depth, with examples to track many common evasion techniques. They are not easy to describe in a few lines here; my recommendation, if you are dealing with this kind of environment, is to have a look at Tom’s slides. Usually, he publishes them quickly. Excellent talk, as usual!

Rustam Mirkasymov’s talk was the last one of the first half-day: “Hunting for Silence“. There was no abstract, and I was expecting a presentation on threat hunting. Nope, it was a review of the “Silence” trojan which targeted financial institutions in Ukraine in 2017. After a first analysis, the trojan was attributed to APT28, but that was not the case. The attacker did not have the exploit builder but was able to modify an existing sample. Rustam did a classic review of the malware: available commands, communications with the C&C infrastructure, persistence mechanism, … A common thread across many presentations this year: slides usually contained some mistakes made by the malware developers.

After the lunch break, the keynote was given by Colonel Jean-Dominique Nollet from the French Gendarmerie. The title was “Cybercrime fighting in the Gendarmerie”. He explained the role of law enforcement authorities in France and how they work to improve the security of all citizens. This is not an easy task because they have to explain very technical topics (like botnets!) to non-technical people (citizens as well as other members of the Gendarmerie). Their missions are:

  • Make sure that the people dealing with cyber issues have the right knowledge and tools (support) at a local level (and France is a big country!)
  • Intelligence! But cops cannot hack back! (like many researchers do)
  • Investigations

Coordination is key! For example, to fight child pornography, they have a database of 11 million pictures that can help identify victims or bad guys. They have already rescued eight children! The evolution cycle is also important:

Information > R&D > Experience > Validate > Industrialize

The key is speed! Finally, another key point was a request for more collaboration between security researchers and law enforcement.

The next speaker was Dennis Schwarz, who presented “Everything Panda Banker“. The name comes from references to Panda in the code and the control panel. The first sample was found in 2016, uploaded from Norway, but the malware is still alive: new releases were found until June 2018. Dennis explained the protections in place, like Windows API calls resolved via a hash function (an obfuscation technique), encrypted strings, how configurations are stored, the DGA mechanism and other features like man-in-the-browser and web injects. Good content with a huge amount of data that deserves a re-read, because the talk was given at light speed! I didn’t even have time to read all the information on each slide!

Thomas Siebert came to present “Judgement Day”. Here again, no abstract was provided. The content of the talk was amazing but released as TLP:Red, sorry! Trust me, it was awesome!

After the afternoon break, Romain Dumont and Hugo Porcher presented “The Dark Side of the ForSSHe”. The presentation covered the Windigo malware, well-known for attacking UNIX servers through an SSH backdoor. Once connected to the victim, the bot used a Perl script piped through the connection (so, without any file stored on disk). The malware was Ebury. They deployed honeypots to collect samples and review the script’s features. The common OpenSSH backdoor features found are:

  • Client & server modified
  • Credential stealing
  • Hook functions that manipulate clear-text credentials
  • Write collected passwords to a file
  • From the SSH client, steal only the private key
  • Exfiltration: through GET or POST, DNS, SMTP or custom protocol (TCP or UDP)
  • Backdoor mode using hardcoded credentials
  • Log evasion by hooking proper functions

Then, they reviewed specific families like:

  • Kamino: steals usernames and passwords, exfiltrates via HTTP, the C&C can be upgraded remotely, XOR encryption, the attacker can log in as root, anti-logging, victims identified by UUID.
  • Kessel: has a bot feature: commands via DNS TXT records, an SSH tunnel between the host and any server.
  • Bonadan: bot module, kills existing cryptominers, custom protocol, cryptominer

From a remediation perspective, the advice is always the same: use keys instead of passwords, disable root login, enable 2FA, and monitor file descriptors and outbound connections from the SSH daemon.
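As a rough illustration (my own sketch, not from the talk), that advice could translate into an sshd_config fragment like this; the 2FA lines assume an OTP PAM module (e.g. pam_google_authenticator) is installed and configured:

```
# /etc/ssh/sshd_config -- hardening sketch based on the advice above
PubkeyAuthentication yes
PasswordAuthentication no          # keys instead of passwords
PermitRootLogin no                 # disable root login
# 2FA: require a key *and* a one-time code (assumes a PAM OTP module)
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive
```

Monitoring file descriptors and outbound connections of the daemon still needs to happen outside sshd itself (auditd, netflow, ...).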

The day ended with the classic lightning talks session. The principle remains the same: 3 minutes max and any topic (as long as it relates to malware, botnets or security in general). Here is a quick list of covered topics:

  • Onyphe, how to unhide the hidden Internet
  • Kodi media player plugins vulnerabilities
  • Spam bots
  • 3VE botnet
  • VTHunting
  • MISP to Splunk app
  • Anatomy of a booter
  • mwdb v2
  • TSurugi / Bento toolkit
  • A very funny one but TLP:Red
  • We need your IP space (from the Shadowserver foundation)
  • Evil maid attacks (hotel rooms): detecting them via power-cycle counts

The best lightning talk (according to the audience) was the TLP:Red one (it was crazy!). I really liked the one about a simple Python script that can detect if your computer was rebooted without your consent.

That’s all for today, see you tomorrow for the third wrap-up!

[The post Botconf 2018 Wrap-Up Day #2 has been first published on /dev/random]

The post PHP 7.3.0 Release Announcement appeared first on

A new PHP release is born: 7.3!

The PHP development team announces the immediate availability of PHP 7.3.0. This release marks the third feature update to the PHP 7 series.

PHP 7.3.0 comes with numerous improvements and new features such as:

  • Flexible Heredoc and Nowdoc Syntax
  • PCRE2 Migration
  • Multiple MBString Improvements
  • LDAP Controls Support
  • Improved FPM Logging
  • Windows File Deletion Improvements
  • Several Deprecations

Source: PHP: PHP 7.3.0 Release Announcement


December 05, 2018

Here is my first wrap-up for the 6th edition of the Botconf security conference. Like the previous editions, the event is organized in a different location in France. This year, the beautiful city of Toulouse saw 400 people flying in from all over the world to attend the conference dedicated to botnets and how to fight them. Attendees come from many countries (USA, Canada, Brazil, Japan, China, Israel, etc.). The opening session was given by Eric Freyssinet. Same rules as usual: no harassment, respect the TLP policy. Let’s start with the review of the first talks.

No keynote on the first day (the keynote speaker is scheduled for tomorrow). The first talk went to Emilien LE JAMTEL from the CERT EU. He presented his research about cryptominers: “Swimming in the Monero Pools”. Attackers have two key requirements: obfuscation of data and efficient mining on all kinds of hardware. Monero, being obfuscated by default and not requiring specific ASICs, is a nice choice for attackers. Even a smartphone can be used as a miner. Criminals are very creative in dropping miners everywhere, but the common attacks remain phishing (emails) and exploiting vulnerabilities in applications (like WebLogic). Emilien explained how he hunts for new samples. He wrote a bunch of scripts (available here) as well as YARA rules. Once the collection process is done, he extracts information like hardcoded wallet addresses and searches for outbound connections to mining pools. So far, he has collected 15K samples and is able to generate IOCs like C2 communications, persistence mechanisms, specific strings and TTPs. The next step was to explain how you can deobfuscate data hidden in the code and configuration files (config.js or global.js). He concluded with funny examples of malware samples that killed themselves, or another that contained usernames in the compilation path of the source code. A nice topic to smoothly start the day.

The next talk was given by Aseel KAYAL: “APT Attack against the Middle East: The Big Bang”. She gave many details about a malware sample her team found targeting the Middle East. The campaign was attributed to APT-C-23, a threat group targeting Palestinians. She explained in a very educational way how the malware was delivered and how it infects the victim’s computer. The malware was delivered as a fake Word document that was in fact a self-extracting archive containing a decoy document and a malicious PE file. She gave details about the malware itself, then more about the “context”. It was called “The Big Bang” due to the unusual module names. Aseel and her team also tracked the people behind the campaign and found many references to TV shows. It was a nice presentation, not simply delivering (arte)facts but also telling a story.

Daniel PLOHMANN presented the Malpedia project at Botconf last year (see my previous wrap-up). This year, he came back with news about the project and how it evolved over the past 12 months. The presentation was called “Code Cartographer’s Diary”. The platform now has 850 users and 2900+ contributions. The new version has a REST API (which helps to integrate Malpedia with third-party tools like TheHive – just saying). The second part of the talk was based on ApiScout, a tool that helps to detect how the Windows API is used in malware samples. Based on many samples, Daniel gave statistics about API usage. If you don’t know Malpedia, have a look; it’s an interesting tool for security analysts and malware researchers.

The next speaker was Renato MARINHO, a fellow SANS Internet Storm Center handler, who presented “Cutting the Wrong Wire: how a Clumsy Attacker Revealed a Global Cryptojacking Campaign”. This was the second talk about cryptominers in half a day. After a quick recap about this kind of attack, Renato explained how he discovered a new campaign affecting servers. While analyzing a timeline, he found suspicious files in /tmp (config.json) as well as a binary. The binary was running with the privileges of the WebLogic server running on the box, which had been compromised using the WebLogic exploit. He tracked the attacker using the hardcoded wallet address found in the binary: the bad guy generated $204K in two months! How was the malware detected? Due to a stupid mistake by the developer, the malware automatically killed running Java processes… including the WebLogic application itself!

After the lunch break, Brett STONE-GROSS & Tillmann WERNER presented “Chess with Pyotr”. This talk was a summary of a blog post they published. Basically, they reviewed previous botnets like Storm Worm, Waledac, Storm 2.0 and… Kelihos, and gave multiple details about them. Kelihos offered many services: spam, credential theft, DDoS, fast-flux DNS, click fraud, SOCKS proxying, mining, and pay-per-install (PPI). The last part of the talk was dedicated to attribution: the main threat actor behind this botnet is Peter Yuryevich Levashov, a member of an underground forum where he communicated about his botnet.

Then, Rémi JULLIAN came to present “In-depth Formbook Malware Analysis”. In-depth was really the keyword of the presentation! FormBook is a well-known malware that is very popular and still active. It targets 92(!) different applications (via a password stealer or form grabber). It is also offered on demand in a MaaS model (“Malware as a Service”); the price for a full version is around $29/week. This malware is often in the top 10 of threats detected by security solutions like sandboxes. Rémi reviewed the multiple anti-analysis techniques deployed by FormBook, like string obfuscation and encryption, manually mapping NTDLL (to defeat tools like Cuckoo), checking for debuggers, checking for inline hooks, etc. The code injection and process hollowing techniques were also explained. On the feature side, we have: browser hooking to access data before it is encrypted, a keylogger, a clipboard data stealer, and password harvesting from the filesystem. Communication with the C&C was also explained. Interesting finding: FormBook uses fake C&C servers during sandbox analysis to mislead the analyst. This was a great presentation full of useful details!

The next speaker was Antoine REBSTOCK, who presented “How Much Should You Pay for your own Botnet?”. This was not a technical presentation (though it had plenty of mathematical formulas) but more of a legal one. The idea was interesting: let’s assume that we decide to build a botnet to DDoS a target; what would be the total price (hosting, bandwidth, etc.)? After the theory, he compared different providers: Orange, Amazon, Microsoft and Google. Even if the approach is not easy to put in the context of a real attacker, the idea was interesting. But way too many formulas for me 😉

After the welcome coffee break, Jakub SOUČEK & Jakub TOMANEK presented “Collecting Malicious Particles from Neutrino Botnets”. The Neutrino bot is not new: it was discovered in 2014 but is still alive today, with many changes. A lot of articles have been written about this botnet but, according to the speakers, some information was missing, like how the bot behaves during an investigation and how configuration files are received. Many bots are still running in parallel and they wanted to learn more about them. Newly introduced features are: a modular structure, obfuscated API calls, a network data stealer, a CC scraper, encryption of modules, a new control flow, persistence and support for new web injects. The botnet is sold to many cybercriminals, so there are many builds. How to classify them into groups? What can be collected that is useful to classify the botnet?

  • The C&C
  • Version
  • Bot name
  • Build ID

Only the build ID is relevant; the name, for example, is “NONE” in 95% of cases. They found 120 different build IDs classified into 41 unique botnets, 18 of them really active, and 3 special cases. They reviewed some botnets and named them with their own convention. Of course, they found some funny stories, like a botnet that injected “Yaaaaaar” in front of all strings in the web-inject module. They also found misused commands, disclosure of data, and debugging information left in the code. Conclusion: malware developers make mistakes too.

The next slot was assigned to Joie SALVIO & Floser BACURIO Jr. with “Trickbot: The Trick is On You!”. They gave the same kind of presentation as earlier today, but this time on the banking malware Trickbot. Discovered in 2016, it also evolved with new features. They paid particular attention to the communication channels used by the malware.

Finally, the day ended with Ivan KWIATKOWSKI & Ronan MOUCHOUX, who presented “Automation, structured knowledge in Tactical Threat Intelligence”. After an introduction and a definition of “intelligence” (it’s a consumer-driven activity), they explained what Tactical Threat Intelligence is and how to implement it. Just one remark about the slides: they were designed with a poor colour palette, making them difficult to read.

That’s all for today, be ready for my second wrap-up tomorrow!

[The post Botconf 2018 Wrap-Up Day #1 has been first published on /dev/random]


Last week, WordPress Tavern picked up my blog post about Drupal 8's upcoming Layout Builder.

While I'm grateful that WordPress Tavern covered Drupal's Layout Builder, it is not surprising that the majority of WordPress Tavern's blog post alludes to the potential challenges with accessibility. After all, Gutenberg's lack of accessibility has been a big topic of debate, and a point of frustration in the WordPress community.

I understand why organizations might be tempted to de-prioritize accessibility. Making a complex web application accessible can be a lot of work, and the pressure to ship early can be high.

In the past, I've been tempted to skip accessibility features myself. I believed that because accessibility features benefited a small group of people only, they could come in a follow-up release.

Today, I've come to believe that accessibility is not something you do for a small group of people. Accessibility is about promoting inclusion. When the product you use daily is accessible, it means that we all get to work with a greater number and a greater variety of colleagues. Accessibility benefits everyone.

As you can see in Drupal's Values and Principles, we are committed to building software that everyone can use. Accessibility should always be a priority. Making capabilities like the Layout Builder accessible is core to Drupal's DNA.

Drupal's Values and Principles translate into our development process, as what we call an accessibility gate, where we set a clearly defined "must-have bar". Prioritizing accessibility also means that we commit to trying to iteratively improve accessibility beyond that minimum over time.

Together with the accessibility maintainers, we jointly agreed that:

  1. Our first priority is WCAG 2.0 AA conformance. This means that in order to be released as a stable system, the Layout Builder must reach Level AA conformance with WCAG. Without WCAG 2.0 AA conformance, we won't release a stable version of Layout Builder.
  2. Our next priority is WCAG 2.1 AA conformance. We're thrilled at the greater inclusion provided by these new guidelines, and will strive to achieve as much of it as we can before release. Because these guidelines are still new (formally approved in June 2018), we won't hold up releasing the stable version of Layout Builder on them, but are committed to implementing them as quickly as we're able to, even if some of the items are after initial release.
  3. While WCAG AAA conformance is not something currently being pursued, there are aspects of AAA that we are discussing adopting in the future. For example, the new 2.1 AAA "Animations from Interactions", which can be framed as an achievable design constraint: anywhere an animation is used, we must ensure designs are understandable/operable for those who cannot or choose not to use animations.

Drupal's commitment to accessibility is one of the things that makes Drupal's upcoming Layout Builder special: it will not only bring tremendous and new capabilities to Drupal, it will also do so without excluding a large portion of current and potential users. We all benefit from that!

December 04, 2018

The post Deploying laravel-websockets with Nginx reverse proxy and supervisord appeared first on

There is a new PHP package available for Laravel users called laravel-websockets that allows you to quickly start a websocket server for your applications.

The added benefit is that it's fully written in PHP, which means it will run on pretty much any system that already runs your Laravel code, without additional tools. Once installed, you can start a websocket server as easily as this:

$ php artisan websockets:serve

That'll open a locally available websocket server, running on

This is great for development, but it also performs pretty well in production. To make that more manageable, we'll run this as a supervisor job with an Nginx proxy in front of it, to handle the SSL part.

Supervisor job for laravel-websockets

The first thing we'll do is make sure that process keeps running forever. If it were to crash (out of memory, killed by someone, throwing exceptions, ...), we want it to automatically restart.

For this, we'll use supervisor, a versatile task runner that is ideally suited for this. Technically, systemd would work equally well for this purpose, as you could quickly add a unit file to run this job.

First, install supervisord.

# On Debian / Ubuntu
apt install supervisor

# On Red Hat / CentOS
yum install supervisor
systemctl enable supervisor

Once installed, add a job for managing this websocket server.

$ cat /etc/supervisord.d/ohdear_websocket_server.ini
[program:websockets]
command=/usr/bin/php /var/www/vhosts/ websockets:serve
autostart=true
autorestart=true

This example is taken from, where it's running in production.

Once the config has been made, instruct supervisord to load the configuration and start the job.

$ supervisorctl update
$ supervisorctl start websockets

Now you have a running websocket server, but it will still only listen to, which is not very useful for your public visitors who want to connect to that websocket.

Note: if you are expecting a higher number of users on this websocket server, you'll need to increase the maximum number of open files supervisord can open. See this blog post: Increase the number of open files for jobs managed by supervisord.
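For reference, raising that limit is a one-line change in the main supervisord configuration; the value below is only an example, size it to your expected number of concurrent connections:

```
; /etc/supervisord.conf
[supervisord]
minfds=10240    ; default is 1024; raise it before serving many websocket connections
```

Restart supervisord afterwards so the new limit applies to the jobs it manages.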

Add an Nginx proxy to handle the TLS

Let your websocket server run locally and add an Nginx configuration in front of it, to handle the TLS portion. Oh, and while you're at it, add that domain to Oh Dear! to monitor your certificate expiration dates. ;-)

The configuration looks like this, assuming you already have Nginx installed.

$ cat /etc/nginx/conf.d/
server {
  listen        443 ssl;
  listen        [::]:443 ssl;

  access_log    /var/log/nginx/ main;
  error_log     /var/log/nginx/ error;

  # Start the SSL configurations
  ssl                         on;
  ssl_certificate             /etc/letsencrypt/live/;
  ssl_certificate_key         /etc/letsencrypt/live/;
  ssl_session_timeout         3m;
  ssl_session_cache           shared:SSL:30m;
  ssl_protocols               TLSv1.1 TLSv1.2;

  # Diffie-Hellman performance improvements
  ssl_ecdh_curve              secp384r1;

  location / {
    proxy_pass      ;
    proxy_set_header Host               $host;
    proxy_set_header X-Real-IP          $remote_addr;

    proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto  https;
    proxy_set_header X-VerifiedViaNginx yes;
    proxy_read_timeout                  60;
    proxy_connect_timeout               60;
    proxy_redirect                      off;

    # Specific for websockets: force the use of HTTP/1.1 and set the Upgrade header
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
  }
}

Everything that connects to over TLS will be proxied to a local service on port 6001, in plain text. This offloads all the TLS (and certificate management) to Nginx, keeping your websocket server configuration as clean and simple as possible.

This also makes automation via Let's Encrypt a lot easier, as there are already implementations that will manage the certificate configuration in your Nginx and reload them when needed.


The post Benchmarking websocket server performance with Artillery appeared first on

We recently deployed a custom websocket server for Oh Dear! and wanted to test its ability to handle requests under load. This blog post explains how we performed that load test using Artillery.

Installing Artillery

There's a powerful tool called Artillery that allows you to -- among other things -- stress test a websocket server. This includes as well as regular websocket servers.

First, install it using either npm or yarn. This assumes you have nodejs already installed.

# Via npm
$ npm install -g artillery

# Via Yarn
$ yarn global add artillery

Once installed, it's time for the fun part.

Creating your scenario for the load test

There are a few ways you can load test your websockets. You can, rather naively, just launch a bunch of requests -- much like ab (Apache Bench) would -- and see how many hits/second you can get.

This is relatively easy with Artillery. First, create a simple scenario you want to launch.

$ cat loadtest1.yml
config:
  target: "ws://"
  phases:
    - duration: 20      # Test for 20 seconds
      arrivalRate: 10   # Every second, add 10 users
      rampTo: 100       # Ramp it up to 100 users over the 20s period
      name: "Ramping up the load"
scenarios:
  - engine: "ws"
    flow:
      - send: "hello"
      - think: 5

This will do a few things, as explained by the comments:

  • Run the test for 20 seconds
  • Every second, it will add 10 users until it reaches 100 users
  • Once connected, the user will send a message over the channel with the string "hello"
  • Every user will hold the connection open for 5 seconds

To start this scenario, run this artillery command.

$ artillery run loadtest1.yml
Started phase 0 (Ramping up the load), duration: 20s @ ...

However, this is a fairly naive approach, as your test is sending garbage (the string "hello") and will just test the connection limits of both your client and the server. It'll mostly stress test the TCP stack, not so much the server itself, as it isn't doing anything (besides accepting some connections).

A test like this can quickly give you a few thousand connected users, without too much hassle (assuming you've increased your max open files limits).

Testing a real life sample

It would be far more useful if you could test an actual websocket implementation. One that would look like this:

  1. Connect to a websocket
  2. Make it subscribe to a channel for events
  3. Receive the events

Aka: what a real browser would do.

To test this, consider the following scenario.

$ cat loadtest2.yml
config:
  target: "wss://"
  phases:
    - duration: 60      # Test for 60 seconds
      arrivalRate: 10   # Every second, add 10 users
      rampTo: 100       # And ramp it up to 100 users in total over the 60s period
      name: "Ramping up the load"
    - duration: 120     # Then resume the load test for 120s
      arrivalRate: 100  # With those 100 users we ramped up to in the first phase
      rampTo: 100       # And keep it steady at 100 users
      name: "Pushing a constant load"
scenarios:
  - engine: "ws"
    flow:
      - send: '{"event":"pusher:subscribe","data":{"channel":"public"}}'  # Subscribe to the public channel
      - think: 15       # Every connection will remain open for 15s

This example is similar to our first one, but the big difference is the message we send: {"event":"pusher:subscribe","data":{"channel":"public"}}. This is an actual JSON payload that subscribes the client to events sent on the channel public.

In our example, this subscribes to the counter of the live number of health checks being run on Oh Dear!. Every second or so, we publish a new number on the public channel, so every socket listening to that channel receives data. More importantly: our server is forced to send that data to every connected user.

With an example like this, we're testing our full websocket stack: data needs to be sent to our websocket server, which needs to relay it to every client subscribed to that channel. This is what causes the actual load (on both client and server) and this is what you'll want to test.

Now, it's a matter of slowly increasing the number of users in the scenario and finding the breaking point. ;-)


Dear NV-A,

If you cannot keep your propaganda dogs in check, you are unfit to be a governing party. If you launched that campaign deliberately, you are racist and rude. It is aimed against people. It has nothing to do with offering solutions, and everything to do with hatred and fear.

In either case, you do not belong in the Belgian government.

So go ahead and leave.

Enough said.

Concerning the very short-notice release announcement of WordPress 5.0 with Gutenberg for Dec 6th: I’m with Yoast; he has a great “should I update” checklist and conclusion in this blog post:

  • Is now the right time to update?
  • Can your site work with Gutenberg?
  • Do you need it?

So our advice boils down to: if you can wait, wait. 

So if you have a busy end-of-year, if you’re not 100% sure your site will work with Gutenberg, or if you don’t really need Gutenberg in the first place: wait (while WordPress 5.0 stabilizes with some minor releases).

Apple had a rough year; its stock price has fallen 25% since the beginning of the year. Apple also reported a weaker than expected outlook and shared that it will no longer report individual unit sales, which many consider a bearish signal within a saturated smartphone market.

It's no surprise that this has introduced some skepticism with current Apple shareholders. A friend recently asked me if she should hold or sell her Apple stock. Her financial advisor suggested she should consider selling. Knowing that Apple is the largest position in my own portfolio, she asked what I'm planning to do in light of the recent troubles.

Every time I make an investment decision, I construct a simple narrative based on my own analysis and research of the stock in question. I wrote down my narrative so I could share it with her. I decided to share it on my blog as well to give you a sense of how I develop these narratives. I've shared a few others in the past — documenting my "investment narratives" is useful as it helps me learn from my mistakes and institutes accountability.

As a brief disclaimer, this post should be considered general information, and not a formal investment recommendation. Before making any investment decisions, you should do your own proper due diligence.

Over the last five years, Apple grew earnings per share at 16% annually. This is a combination of about 10% growth in net profit, combined with almost 6% growth as the result of share buybacks.

Management has consistently used cash to buy back five to six percent of the company's outstanding shares every year. At the current valuation and with the current strength of Apple's balance sheet, buybacks are a good use of a portion of their cash. Apple will likely see similar levels of cash generation in the coming years so I expect Apple will continue to buy back five to six percent of its outstanding shares annually. By reducing the number of shares on the market, buybacks lift a company's earnings per share by the same amount.

Next, I predict that Apple will grow net profits by six to seven percent a year. Apple can achieve six to seven percent growth in profits by growing sales and improving its margins. Margins are likely to grow due to the shift in revenue from hardware to software services. This is a multi-year shift so I expect margins to slowly improve over time. I believe that six to seven percent growth in net profits is both reasonable and feasible. It's well below the average 10% growth in net profits that we've seen in recent years.

Add 5-6% growth as the result of share buybacks to 6-7% growth in profit as a result of sales growth and margin expansion, and you have a company growing earnings per share by 12% a year.

If Apple sustains that 12% earnings per share growth for five years, earnings per share will grow from $11.88 today to $20.94 by the end of 2023. At the current P/E ratio of 15, one share of Apple would be worth $314 by then. Add the roughly $20 in dividends that you'd collect along the way, and you are likely looking at market-beating returns.
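The arithmetic above is easy to verify. A minimal sketch, using the inputs from the narrative (these are assumptions, not a forecast):

```python
# Back-of-the-envelope check of the EPS compounding described above.
eps_today = 11.88   # current earnings per share
growth = 0.12       # ~12% annual EPS growth (profit growth + buybacks)
years = 5
pe = 15             # current price/earnings ratio

eps_2023 = eps_today * (1 + growth) ** years
price_2023 = eps_2023 * pe
print(round(eps_2023, 2), round(price_2023))  # 20.94 314
```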

The returns could be better still, as there is an opportunity for P/E expansion. I see at least two drivers for that: (a) the potential for margin improvement as a result of Apple shifting its revenue mix, and (b) Apple's growing cash position (e.g. if you subtract the cash per share from the share price, the P/E increases).

Let's assume that the P/E expands from the current 15 to 18. Now, all of a sudden, you're looking at a per-share price of $397 by the end of 2023, and an average annual return of 18%. If that plays out, every dollar invested in Apple today would double in five years — and that excludes the dividends you'd collect along the way!

Needless to say, this isn't an advanced forecasting model. Regardless, my narrative shows that with a few very reasonable assumptions, Apple could deliver a great return over the next five years.

While Apple's day of disruption might be behind it, it remains one of the greatest cash machines of all time. Modest growth combined with a large buyback program and a relatively low valuation, can make for a great investment.

I'm not selling my Apple stock and I'd be tempted to buy more if the share price were to drop below $155 a share.

Disclaimer: I'm long AAPL. Before making any investment decisions, you should do your own proper due diligence. Any material in this article should be considered general information, and not a formal investment recommendation.

November 30, 2018

I’m writing this quick wrap-up in Vienna, Austria, where I attended my first DeepSec conference. This event had been on my radar for a while, but I never had a chance to attend. This year, I submitted a training and was accepted! A good opportunity to visit the beautiful city of Vienna. Like many security conferences, the event started with a set of trainings on Tuesday and Wednesday. My training topic was using OSSEC for threat hunting.

On Thursday and Friday, regular talks were scheduled and split across three tracks: two tracks for regular presentations and a third one, called “Roots”, dedicated to academic research and papers. There was a good balance between offensive and defensive presentations.

The keynote speaker was Peter Zinn, who presented a very entertaining keynote called “We’re all gonna die”. Basically, the main idea was to review how our world is changing and which new threats are coming: climate change, magnetic fields, Donald Trump, etc. But also from an information-technology point of view. Peter revealed that we have to face four types of “cyber-zombies”:

  • People
  • Inequality
  • Operational technology and IOT
  • Artificial Intelligence (here is a funny video that demonstrates how AI may fail)

Later, we will face the “IoP” or “Internet of People”. IT will be present inside our bodies (RFID implants, sensors, contact lenses, …) and we’ll have to deal with it. Nice keynote!

Here is a quick recap of the talks that I attended. First, Fernando Arnaboldi with “Uncovering Vulnerabilities in Secure Coding Guidelines”. The idea behind this talk was to demonstrate that, even if you follow all the well-known development guidelines (like OWASP, CWE or NIST), you can still fail. He gave several snippets of code as examples. Personally, I liked the mention of the new KPI: “WTFs/minute”.

Then, Werner Schober presented the “Internet of Dildos”. It is always entertaining to have a talk focusing on “exotic” IoT devices. He explained the different vulnerabilities that he found in a sex toy and the associated mobile app & website in Germany. Basically, he showed how it was possible to access all the (hot) pictures uploaded by users, how to activate (make vibrate) any connected device in the world or, worse, how to access customers’ personal data…

Then, Eric Leblond talked about the new features that are constantly being added to the Suricata IDS, with a focus on eBPF filters. I had already seen Eric’s presentation a few months ago, but he added more material, like a crazy idea to use BCC (the “BPF Compiler Collection”) to generate BPF filters from C code embedded directly in a Python script!

Joe Slowik came to speak about ICS attacks. More and more ICS attacks are reported in the news because there is some kind of aura of sophistication around them. Joe started with a recap of the major ICS attacks that industries have faced in recent years. Many attacks succeed because the IT components used to control the ICS components are vulnerable, and the same tools are abused to compromise them (like Mimikatz, PsExec, etc.). Note that the talk was a mix of offensive & defensive.

Benjamin Ridgway (from the Microsoft Incident Response Center) came to speak about incident handling. The abstract was not clear, and a lot of people expected a talk about selecting and using the right tools for incident management, but it was completely different and not technical. Benjamin explained how to implement your incident-handling process with a focus on the following points:

  • Human psychological response to stressful and/or dangerous situations
  • Strategies for effectively managing human factors during a crisis
  • Policies and structures that set up incident response teams for success
  • Tools for building a healthy and happy incident response team

It was an excellent presentation, one of my favourites!

Then, Dr. Silke Holtmanns from Nokia Bell Labs came to speak about new attack vectors for mobile core networks. The problem for people who are not in the field of mobile networks is the complexity of the terms and abbreviations used. It’s crazy! But Silke explained the basics very well: how roaming works, how billing profiles are managed. Of course, the idea was then to explain some attacks. I liked the one showing how to change a billing plan when you’re abroad to reduce the roaming costs. Very didactic!

The next speaker was Mark Baenziger, who works in incident handling. He explained the challenges that incident handlers may face when handling personal data (and thus how to protect people’s privacy), and how, in some cases, security teams have failed to do this properly.

The last slot was assigned to Paula de la Hoz Garrido (a student in Spain). She presented her project of bundling network monitoring tools on a Raspberry Pi. Interesting, but the practical part (how to build the project on the Pi) was missing; the talk was more a review of the tools used to capture and process packets.

The second day started with a nice talk called “Everything is connected: how to hack Bank Account using Instagram”. The idea is to abuse the phone services that some banks provide to let their customers perform a lot of basic operations (through IVR). Aleksandr Kolchanov explained the attacks he performed against a Ukrainian bank. Some services are available based only on the caller ID, and this information can easily be spoofed using online services. Funny but crazy!

Then, I switched to the “Roots” room to attend a talk about transmitting data over sound, more precisely ultrasonic sound. Matthias Zeppelzauer explained his research into this technology, which is used more than we might expect! It’s possible to collect interesting information (ex: how people watch television programs) or to deliver ads to people entering a shop. He also presented the “SoniControl” project, a kind of ultrasonic firewall to protect users’ privacy.

My next choice was “RFID Chip Inside the Body: Reflecting the Current State of Usage, Triggers, and Ethical Issues”, presented by Ulrike Hugl. RFID implants in human bodies are not new, but what’s the status today? Are people ready to have this kind of hardware under their skin? There is no massive deployment yet; some companies try to convince people to use the technology, but it usually remains limited to tests or fun projects.

Finally, my last choice was “Global Deep Scans – Measuring Vulnerability Levels across Organizations, Industries, and Countries” by Luca Melette & Fabian Bräunlein. I was curious when I read the abstract. The idea behind this research was to scan the Internet, classify the scanned IP addresses by location and business, and then use an algorithm to compute a “hackability” level. Indeed, from a defender’s perspective, it’s interesting to learn how safe your competitors are; from an attacker’s point of view, it’s nice to know which targets are the juiciest. The result of their research is available here.

This was a very quick wrap-up of my first DeepSec (and I hope not the last one!). The conference size is nice, not too many attendees (my rough estimate is ~200 people), and it was properly managed by the crew. Thanks to them!

[The post DeepSec 2018 Wrap-Up has been first published on /dev/random]

Dries interviews Eric Black from NBC

Many of Acquia's customers have hundreds or even thousands of sites, which vary in terms of scale, functionality, longevity and complexity.

One thing that is unique about Acquia is that we can help organizations scale from small to extremely large, from one site to many, and from coupled to decoupled. This scalability and flexibility allows organizations to standardize on a single web platform. Standardizing on a single web platform not only removes the complexity of having to manage dozens of different technology stacks and teams, but also enables organizations to innovate faster.

Acquia Engage 2018 - NBC Sports

A great example is NBC Sports Digital. Not only does NBC Sports Digital have to manage dozens of sites across 30,000 sporting events each year, but it also has some of the most trafficked sites in the world.

In 2018, Acquia supported NBC Sports Digital as it provided fans with unparalleled coverage of Super Bowl LII, the Pyeongchang Winter Games and the 2018 World Cup. As quoted in NBC Sports' press release, NBC Sports Digital streamed more than 4.37 billion live minutes of video, served 93 million unique users, and delivered 721 million minutes of desktop video. These are some of the highest-trafficked events in the history of the web, and I'm very proud that they are powered by Drupal and Acquia.

To learn more about how Acquia helps NBC Sports Digital deliver more than 10,000 sporting events every year, watch my conversation with Eric Black, CTO of NBC Sports Digital, in the video below:

Not every organization gets to entertain 100 million viewers around the world, but every business has its own World Cup. Whether it's Black Friday, Mother's Day, a new product launch or breaking news, we offer our customers the tools and services necessary to optimize efficiency and provide flexibility at any scale.

Congratulations to two of my best friends, Klaas and Joke, for selling their company, Inventive Designers. The sale is a great outcome for Inventive Designers' customers, partners and employees. It also means I will no longer serve on Inventive Designers' Board of Directors. I'll miss our board meetings, but I'm sure Klaas, Joke and I will replace them with nights in a local bar. Congratulations, Klaas and Joke!

November 29, 2018

The post Increase the number of open files for jobs managed by supervisord appeared first on

In Linux, a non-privileged user by default can only open 1024 files on a machine. This includes handles to log files, but also local sockets, TCP ports, ... everything's a file and the usage is limited, as a system protection.

Normally, we can increase the number of files a particular user can open by raising the system limits. This is configured in /etc/security/limits.d/.

For instance, this allows the user john to open up to 10,000 files.

$ cat /etc/security/limits.d/john.conf
john		soft		nofile		10000

You would assume that once configured, this would apply to all the commands that run as the user john. Alas, that's not the case if you use supervisord to run a process.

Take the following supervisor job for instance:

$ cat /etc/supervisord.d/john.ini
[program:john]
command=/usr/bin/php /path/to/script.php
user=john

This would add a job to supervisor to always keep the task /usr/bin/php /path/to/script.php running as the user john, and if it were to crash or stop, it would automatically restart it.

However, if we were to inspect the actual limits being enforced on that process, we'd find the following.

$ cat /proc/19153/limits
Limit                     Soft Limit           Hard Limit           Units
Max open files            1024                 4096                 files

The process has a soft limit of 1024 files and a hard limit of 4096, despite the increase we configured in our limits.d directory.

The reason is that supervisord has a setting of its own, minfds, that it uses to set the number of files it can open. That setting is inherited by all the children supervisord spawns, so it overrides anything you set in limits.d.

Its default value is 1024, and it can be increased to anything you'd like (or need).

$ cat /etc/supervisord.conf
[supervisord]
minfds=10000

On Debian or Ubuntu systems you'll find this file at /etc/supervisor/supervisord.conf. Either add or modify the minfds parameter, restart supervisord (which will restart all your spawned jobs, too) and you'll notice the limits have actually been increased.
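To confirm the new limit actually reached your process, you can read its limits at runtime. A small sketch (here from Python, since the example job runs a script; it reads the same values that /proc/&lt;pid&gt;/limits shows for "Max open files"):

```python
# Print the soft and hard open-file limits of the current process.
# These are the numbers that supervisord's minfds setting overrides.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"Max open files: soft={soft} hard={hard}")
```

Run this from inside a supervisord-managed job before and after changing minfds to see the difference.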


November 28, 2018

I published the following diary on “More obfuscated shell scripts: Fake MacOS Flash update”:

Yesterday, I wrote a diary about a nice obfuscated shell script. Today, I found another example of a malicious shell script embedded in an Apple .dmg file (an Apple Disk Image). The file was delivered through a fake Flash update webpage… [Read more]

[The post [SANS ISC] More obfuscated shell scripts: Fake MacOS Flash update has been first published on /dev/random]

Dries interviews Erica Bizzari from Paychex

One trend I've noticed time and time again is that simplicity wins. Today, customers expect every technology they interact with to be both functionally powerful and easy to use.

A great example is Acquia's customer, Paychex. Paychex' digital marketing team recently replatformed using Drupal and Acquia. The driving factor was the need for more simplicity.

They completed the replatforming work in under four months, and beat the original launch goal by a long shot. By leveraging Drupal 8's improved content authoring capabilities, Paychex also made it a lot simpler for the marketing team to publish content, which resulted in doubled year-over-year growth in site traffic and leads.

Acquia Engage 2018 - Paychex

To learn more about how Paychex accomplished its ambitious digital and marketing goals, watch my Q&A with Erica Bizzari, digital marketing manager at Paychex.

November 27, 2018

I published the following diary on “Obfuscated bash script targeting QNap boxes”:

One of our readers, Nathaniel Vos, shared an interesting shell script with us (thanks to him!). He found it on an embedded Linux device, more precisely a QNap NAS running QTS 4.3. After some quick investigation, it appeared that the script was not brand new: we found references to it posted as early as September 2018. Such shell scripts are less common: usually they are not obfuscated and only perform basic tasks like downloading and installing binaries. So, I took the time to look at it… [Read more]

[The post [SANS ISC] Obfuscated bash script targeting QNap boxes has been first published on /dev/random]

The post My Laracon EU talk: Minimum Viable Linux appeared first on

Here's the recording of the presentation I gave at Laracon EU last summer.

The title is, in hindsight, a badly chosen one. I tried to make it a pun on "Minimum Viable Product" (you know, the startup-y stuff), but it may have given the audience the false expectation that it was about minimal Linux distros, which it wasn't.

I plan to give this talk a few more times, but I'll change a few things:

  • Skip the Linux landscape intro (does anyone still care?)
  • Change the title to something more like "Troubleshooting PHP as a Linux sysadmin"
  • Perhaps skip the SSH stuff, just keep the tunnel information (to help in debugging)
  • Focus more on live-troubleshooting with strace and introduce sysdig

I learned a lot of lessons from this talk; it was by far the talk that received the most wide-ranging feedback, from 0/10 to 10/10. Most of it -- I think -- was related to wrong expectations from the start (hence the title change).

The goal is -- and will continue to be -- to show debugging techniques that are more sysadmin-focused, and to offer a different perspective on debugging applications.


November 26, 2018

Hi Kettle and Neo4j fans!

So maybe that title is a little bit over the top but it sums up what this transformation does:

Here is what we do in this transformation:

  • Read a text file
  • Calculate unique values over 2 columns
  • Create Neo4j nodes for each unique value

To do this we first normalize the columns, effectively doubling the number of rows in the set.  Then we do some cleanup (remove double quotes).  The secret sauce is then to do a partitioned unique-value calculation (5 partitions means 5 parallel threads).  By partitioning the data on the single column we guarantee that the same data ends up in the same partition (step copy).  For this we use a hash set (the Unique Hash Set step), which uses memory to avoid a costly “Sort/Unique” operation.  While we have the data in parallel step copies, we also load it in parallel into Neo4j.  Make sure you drop indexes on the Node/Label you’re loading to avoid transaction issues.

This allowed me to condense 28M rows at the starting point into 5M unique values and load those in just over 1 minute on my laptop.  I’ll post a more comprehensive walk-through example later, but I wanted to show you this strategy because it can help those of you in need of decent data-loading capacity.
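The partitioning trick can be sketched in a few lines of plain Python (hash-based partitioning, analogous to Kettle's partitioned step copies; the function name is ours):

```python
# Hash-partition values into N buckets: identical values always hash to the
# same bucket, so each bucket can deduplicate independently and in parallel,
# with no global sort/unique pass needed.
def partition_unique(values, parts=5):
    buckets = [set() for _ in range(parts)]
    for v in values:
        buckets[hash(v) % parts].add(v)
    return buckets

buckets = partition_unique(["a", "b", "a", "c", "b"])
print(sum(len(b) for b in buckets))  # 3 unique values across all partitions
```

Because a value can only ever land in one bucket, the per-partition unique sets never overlap, which is exactly why the parallel step copies are safe.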


Dries interviews Mike Mancuso from Wendy's

During the Innovation Showcase at Acquia Engage, I invited Mike Mancuso, head of digital analytics at Wendy's, on stage. The Wendy's website is a Drupal site running on Acquia Cloud, and welcomes 30 million unique visitors a year. Wendy's also uses Acquia Lift to deliver personalized and intelligent experiences to all 30 million visitors.

In the 8-minute video below, Mike explains how Wendy's engages with its customers online.

For the occasion, the team at Wendy's decided to target Acquia Engage attendees. If you visited the site from Acquia Engage, you got the following personalized banner. It's a nice example of what you can do with Acquia Lift.

Wendy's used Acquia Lift to target Acquia Engage attendees on their website

As part of my keynote, we also demoed the next generation of Acquia Lift, which will be released in early 2019. In 2018, we decided that user experience always has to come first. We doubled our design and user experience team and changed our product development process to reflect this priority. The upcoming version of Acquia Lift is the first example of that. It offers more than just a fresh UI; it also ships with new features to simplify how marketers create campaigns. If you want a preview, have a look at the 9-minute video below!

November 24, 2018

On Thursday, December 13, 2018 at 7 p.m., the 73rd Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: Children learn programming: Scratch v3

Theme: Programming|Education

Audience: general public

Speaker: Laurence Baclin (HELHa)

Venue: Université de Mons, Faculté Polytechnique, Site Houdain, Rue de Houdain 9, auditorium 12 (see this map on the UMONS website, or the OSM map). Enter through the main door, at the back of the courtyard, and follow the signs from there.

Participation is free and only requires your registration, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page. The session will be followed by a friendly drink (everything will be over by 10 p.m. at the latest).

The Jeudis du Libre in Mons are also supported by our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to check the agenda and subscribe to the mailing list in order to receive all announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month, and are organized in premises of, and in collaboration with, the universities and colleges in Mons involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, active in the promotion of free software.

Description: What's new in the world of programming for children? Version 3 of Scratch!

You have never heard of Scratch, but you are a parent, grandparent, godmother, godfather, school teacher, youth-movement or homework-club leader... in short, if you educate children aged 5 to 15, this talk concerns you.

You already know Scratch, but you have questions: why is it important to learn programming early? What benefits can be expected? Why start with Scratch? What activities can you do with children?

Whether you are a computing novice or an expert, the goal of this talk is to discuss the pedagogical benefits of learning to program, the role of Scratch and ScratchJr in that learning, and to remind everyone that programming is for everybody: the young, the very young and the not-so-young, and that Scratch is a free (libre) tool specifically designed for beginners!

Short bio: Laurence Baclin claims a double identity: that of engineer and that of educator. She is fascinated by learning processes (with, among other things, a background in institutional/Freinet pedagogy) and passionate about technology, especially radio transmission and electronics. Day to day, her students are the future industrial engineers of HELHa and future management engineers at UCLouvain FUCaM Mons.

November 22, 2018

The post Tracking SQL queries appeared first on

This is a very clever technique that allows you to add meta data (or "comments") to SQL queries from within your application or framework.

It lets you track down where in your application a query was issued, and can give someone troubleshooting slow queries a lot more insight.

[...] if this is a specific query you might know which URL or method is sending it, but what if someone not so familiar with the application needed to investigate it -- would they be able to find it?

One option is to add a comment to your query that contains some useful information such as class name, method name or even a request ID.

Source: Tracking SQL queries · Jacob Bednarz
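A minimal sketch of the idea (the helper name and tag format are ours): append a comment to the SQL so the call site travels with the query and shows up in slow-query logs or process lists. Shown here with sqlite3 for a self-contained example; the same comment syntax works with MySQL and PostgreSQL.

```python
import sqlite3

def tagged(sql, origin):
    """Append an origin comment; SQL comments are ignored by the engine
    but remain visible in query logs and process lists."""
    return f"{sql} /* origin: {origin} */"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(tagged("SELECT * FROM users", "UserController.index req=8f3a"))
```

A framework would typically generate the origin string automatically from the current class, method or request ID rather than passing it by hand.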


I published the following diary on “Divided Payload in Multiple Pasties”:

In politics, there is a strategy called “divide and conquer”. The same is true for some pieces of malware, which spread their malicious code across multiple sources. One of our readers shared a sample of PowerShell code found on Pastebin that applies exactly this technique… [Read more]

[The post [SANS ISC] Divided Payload in Multiple Pasties has been first published on /dev/random]

November 21, 2018

Gabe, Mateu and I just released the second RC of JSON:API 2, so time for an update! The last update is from a month ago.

What happened since then? In a nutshell:

  • Usage has gone up a lot! 2.0-RC1: 0 → 250, 2.x: ~200 → ~330 1
  • 2.0-RC2 released :)
  • RC1 had about half a dozen issues reported. Most are fixed in RC2. 2
  • The file uploads feature: 80% → 100%, will ship in 2.1!
  • The revisions feature 3: 80% → 90%, will ship in 2.2!
  • New core patch to bring JSON:API to Drupal core: #2843147-78


Curious about RC2? RC1 → RC2 has four changes:

  1. Renamed from JSON API to JSON:API, matching recent spec changes.
  2. One critical bug was reported by brand new Drupalist Peter van Dijk (welcome, and thanks for noticing something we had gotten used to!): the ?filter syntax was very confusing and effectively required the knowledge (and use) of Drupal internals. Now you no longer need to do /jsonapi/node/article?filter[uid.uuid]=c42…e37 to get all articles by a given author, but just /jsonapi/node/article?filter[]=c42…e37. This matches the JSON:API document structure: the abstraction is no longer leaky! The updated ?filter syntax is still 95% the same. 4
  3. JSON:API responses were no longer being cached by Page Cache. This was of course a major performance regression. Fortunately long-time Drupalist yobottehg discovered this!
  4. Massive performance increase for requests using sparse fieldsets such as ?fields[node--article]=title. Thanks to long-time Drupalist jibran doing quite a bit of profiling on JSON:API, something which Mateu, Gabe and I haven’t gotten around to yet. Moving 2 lines of code made his particular use case 90% faster! We’re sure there’s more low-hanging fruit, but this one was so tasty and so low-risk that we decided to commit it despite being in RC.

So … now is the time to update to 2.0-RC2!

The JSON:API team: 150/150/150

In the previous update, I included a retrospective, including a section about how it’s been to go from one maintainer to three. A month ago this happened:

11:08:15 <e0ipso>
11:08:27 <e0ipso> Let's get that to 150, 150 and 150
11:08:31 <WimLeers> :D
11:08:46 <WimLeers> that'd be pretty cool indeed
11:09:05 <e0ipso> COMEON gabesullice you're slacking!
11:09:08 <e0ipso> hehe

That image was the screenshot Mateu shared a month ago, showing Wim at 149 commits, Gabe at 147 and Mateu at 150.

Well, as of yesterday evening, the JSON:API committer statistics show that the three main maintainers each have 150 commits!

On top of that, mere hours after the previous update went live, Gabe became an official API-First Initiative coordinator, together with Mateu and me:

Congrats Gabe, and thanks! :)

  1. Note that usage statistics on are an underestimate! Any site can opt out of reporting back, and Composer-based installs don’t report back by default. ↩︎

  2. Since we’re in the RC phase, we’re limiting ourselves to only critical issues. ↩︎

  3. Just GET, just for Node entities. Those 2 limitations are essential because core doesn’t have a formal API for revision access control. :| Yet! ↩︎

  4. This was not a regression in 2.x; this also affects JSON:API 1.x! It’s just more apparent in 2.x that this is a problem because we did make 2.x comply with the spec completely (which also was the reason for the 2.x branch: we needed to break backward compatibility to comply). We chose to embrace the feedback and introduce this additional BC break in the 2.x branch. Some of your requests will now return 400 responses, but they all include helpful error messages, providing concrete suggestions on how to change it. ↩︎

November 20, 2018

I published the following diary on “Querying DShield from Cortex”:

Cortex is a tool that is part of the TheHive project. As stated on the website, it is a “Powerful Observable Analysis Engine”. Cortex can analyze observables like IP addresses, emails, hashes and filenames against a huge (and growing) list of online services. I like the naming convention used by Cortex: we have “observables” that can be promoted to “IOCs” later if they are really relevant for us. Keep in mind that an interesting IOC for you could be totally irrelevant in another environment… [Read more]

[The post [SANS ISC] Querying DShield from Cortex has been first published on /dev/random]

November 19, 2018

I published the following diary on “The Challenge of Managing Your Digital Library”:

How do you manage your digital library on a daily basis? If, like me, you are receiving a lot of emails, notifications, tweets, [name your best technology here], chances are that you’re flooded by tons of documents in multiple formats. This problem is so huge that, if I’m offline for a few days or too busy to handle the information in (almost) real time, it costs me a lot of extra time to process the waiting queue… [Read more]

[The post [SANS ISC] The Challenge of Managing Your Digital Library has been first published on /dev/random]

Fortieth birthday coffee mug

Today I turned forty. This morning, Axl and Stan surprised me with coffee in bed, served in a special birthday mug that they created.

When getting ready for a birthday party this weekend, I shaved not only my beard, but also my nose and ear hair. It sums up turning forty nicely.

Many of the things they say about turning forty are true: getting adequate sleep has become a priority, you no longer recognize celebrities in magazines, forgetfulness becomes a bigger issue, and when you bend down to pick something up, there is no guarantee that you'll make it back up.

I can't complain though. On days like today, when looking back at the previous decade, I'm reminded how lucky and privileged I am.

Much like I hoped I would when I turned thirty, I have accomplished a lot in my thirties — both personally and professionally.

Drupal, Acquia and Open Source have been at the center of my professional career for the last decade, and they have changed my life in every way. I'm lucky that my work gives me so much purpose. My desire to create, build, sustain and give back is as strong as before — there is not much more I'd want professionally.

Throughout the past ten years, I've also accomplished a lot of personal growth. I smile thinking how I have become more generous with my "I love you"s. I've gained not just weight, but also a kind of resiliency and perspective that enabled me to better deal with criticism, conflict or setbacks at work. I've learned not to sweat the small stuff and I take pride in acting with integrity on a consistent basis. From that point of view, I look forward to growing up more.

I've seen more of the world in the last ten years than in the first thirty combined. My wanderlust continues to grow and I look forward to exploring more of the world.

I've also been writing on this blog throughout the past decade. That might sound odd to call out on a day like today, but it's an accomplishment to me. My blog could have faded away like most blogs do, but it hasn't. It is one of my longest running projects and a highlight of my thirties. Blogging has made me a better communicator and a more critical thinker. It touches me that some people have been reading my blog for over a decade, and have followed me throughout my thirties. Thanks for sticking with me!

Entering my forties, part of me has zero desire to slow down, while another part of me wants to find more time to spend with Vanessa, Axl, Stan, family and friends. By the time I'm fifty, Axl and Stan will likely be in college and might have moved out. There is nothing more satisfying than spending time with loved ones, yet it is so hard to pull off.


A pretty big update to DNS Spy -- our DNS monitoring & security solution -- went live.

The short version: every plan unlocks every feature (including AXFR zone transfers for our smallest plans), yearly billing options, a new layout, a nearly complete backend rewrite and plenty of minor fixes.

Very happy to see this shipped!

We’ve launched a big update to DNS Spy. Mostly behind-the-scenes improvements, but certainly a few things everyone can enjoy!

Source: A big update to DNS Spy -- DNS Spy Blog

The post A big update to DNS Spy – DNS Spy Blog appeared first on

November 17, 2018

A heads-up to Autoptimize users who are using Divi (and potentially other Elegant Themes' themes); as discovered and documented by Chris, Divi purges Autoptimize’s cache every time a page/ post is published (or saved?).

To be clear; there is no reason for the AO cache being cleared at that point as:

  1. A new page/ post does not introduce new CSS/ JS
  2. Even if new CSS/ JS would be added somehow, AO would automatically pick up on that and create new optimized CSS/ JS.

Chris contacted Divi support to get this fixed, so this is in the ticketing queue, but if you’re using Divi and encounter slower saving of posts/ pages or Autoptimized files mysteriously disappearing, then his workaround can help you until ET fixes this.

I published the following diary on “Quickly Investigating Websites with Lookyloo”:

While we are enjoying our weekend, it’s always a good time to learn about new pieces of software that could be added to your toolbox. Security analysts often have to quickly investigate a website for malicious content, and it’s not always easy to keep a good balance between online and local services… [Read more]

[The post [SANS ISC] Quickly Investigating Websites with Lookyloo has been first published on /dev/random]

I am impressed by the feedback I have been getting on what I have called, in a fit of pretentious grandiloquence, my disconnection.

Though simplistic in its form (blocking all access to a dozen or so websites), it turns out to be surprisingly profound and touches a particularly sensitive nerve. I have received several testimonies from people who, after reading my posts, started measuring the time they spend online and discovered with horror the ravages of their addiction to Facebook, Instagram or Youtube. The diversity of your addictions is in itself a particularly interesting piece of data. I never even thought of blocking Youtube, because nothing bores me more than hearing a text recited too slowly, too poor in information, over a procession of light flashes filmed by a colour-blind cocaine addict and edited by a Parkinsonian advertising executive. Yet, if you like the genre, Youtube is particularly devious with its "next video suggestion".

But let's reassure ourselves: we are not alone in our dependence. Awareness has grown to the point that business is starting to worry about it! Thus I spotted, in a shop window, the cover of a TV guide: "Screens: switch them off or tame them?". The rhetoric is barely subtle and irresistibly recalls the infamous "Smoker or not, let's stay courteous" of the 1980s, a slogan that set the fight against tobacco back nearly four decades by suggesting that there was a happy medium, that the non-smoker was a kind of extremist.

In this particular case, it consists of blindly lumping "screens" together. The problem is not, and has never been, the screen, but what it displays. Apart from the rare and very futuristic cases of hypnotist toads, a screen stuck on a test pattern has never made anyone an addict. Social networks, endless content and likes do so every day.

One could object that it is not as serious as cigarettes. And, as Tonton Alias says, that my position is extremist.

But the more I detox, the more I think that the harm done by the pollution of social networks is at least as great as that of tobacco.

Not so long ago, smoking was allowed on planes, on trains, in offices, in hallways. Even non-smokers complained only sporadically, because the smell was everywhere, because everyone was at least a passive smoker.

For me, who has no memory of that era, a neighbour smoking in his garden forces me to close my windows. Let a cigarette be lit at the next table on a restaurant terrace and I change tables, or even leave, unable to eat in those conditions. Let a smoker sit down next to me in a public place after stubbing out his cigarette and I find myself forced to move. Never having been made to get used to this permanent irritation, I simply cannot stand it; I feel physically ill, assaulted. I get nauseous. Several ex-smokers have told me that they used to think non-smokers were exaggerating and overdoing it, until they realised that, only a few months after quitting, they simply could no longer stand the slightest whiff of the vile tobacco.

A few days ago, I decided to check that my social network publishing system was working properly. So I disabled my filters and, without logging in, checked my various accounts. The operation lasted only a few seconds, but I had time to see, without really reading them, several reactions.

I cannot even remember the content of those reactions. I only know that none of them was insulting or aggressive. Yet I felt my whole body go on the defensive, adrenaline flowing through my veins. Some reactions showed a misunderstanding (at least, that is how I perceived it) that demanded clarification at all costs, a fight. Only one reaction seemed mocking, ironic to me (but how can one be sure?). My blood boiled!

I immediately reactivated my blocking, breathing deeply. I told my wife about it, and she said: "Next time, ask me, I'll tell you whether everything posted correctly!"

No longer having been exposed for a month to the permanent irritation, the constant stress, the shared anger, I took the discharge of violence head on. A violence that cannot even be blamed on the comment authors, because my perception multiplies and amplifies whatever I want, or fear, to see in them. If I am afraid of being misunderstood, I will see misunderstanding everywhere; I will take head on a stupid remark written in 12 seconds by someone I do not know, who has probably read the first 5 lines of one of my articles while queuing at the supermarket.

Since that experience, the thought of going on a social network frightens me. My activated Adguard icon reassures me, relieves me.

The media, in general, are a source of anxiety. Social networks are amplifiers of it. Seen that way, the cigarette analogy seems perfectly appropriate to me. Like cigarettes, a pseudo private freedom is both morbid for the individual and a source of dangerous pollution for the community. Shouldn't collective well-being, like clean air, be considered a common good? Anxiety and fear are cancers gnawing at the individual cells that we are, forming a gigantic tumour of a society. And the latest experiments do indeed seem to confirm this intuition: social networks appear to be a cause of symptoms of depression.

Is it not paradoxical that we try to soothe our existential anxieties through tools that claim to help us by offering recognition and petty glory, while fanning the source of all our ills?

But then the question arises: am I not complicit by continuing to offer my posts on social networks, by continuing to feed my accounts and my pages? The answer is not easy and will be the subject of the next post.

Photo by Andre Hunter on Unsplash

I am @ploum, a disconnected speaker and electronic writer, remunerated on a pay-what-you-want basis via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

November 16, 2018

The lighthouse slices through the thickness of the night with its repeated, regular sword strokes, a luminous metronome in a silence of shadows. The sea stretches out like a smooth frontier between the darkness of the stars and the abyss of a transparent black.

A few waves lick the sand in a languorous, hypnotic backwash. From the bay window of my fully automated passive house I see, at the edge of my screen, the shadow of a carriage from another century, another time, pass by. Did I dream it? Yet the characteristic clatter of hooves on gravel still rings out, carried on the sea air.

Intrigued, I put down my computer and take a few steps outside. I push aside a thorn bush and discover, at the bottom of the garden, a path I had never seen. A few steps lead me to the main road. Or to what should have been the main road, for, inexplicably, the asphalt has been replaced by dust, earth and a few scattered cobblestones vainly trying to hold back the weeds.

I look up. The lighthouse continues its silent saraband of deafening flashes. A noise, a clack followed by a screech. Another carriage stops. In the half-light, I can only make out a vague black shape topped by a top hat.

— Get in! You are needed at the lighthouse!

The sentence echoes in my temple. Mechanically, I obey the order given to me, without knowing whether that harsh, gravelly accent was even speaking to me in French. Or was it Breton? A mixture? How did I understand so easily?

But I have no time to wonder before the vehicle sets off. The landscape rolls past my eyes, now accustomed to the dark. Under the light of a few stars, I glimpse the fishermen's boats, put away for the night, moaning softly in their woody sleep. We cross the village adjoining the harbour at full speed in a clatter of hooves.

Along the road, thousand-year-old stones try to trace a low wall through the thorns, sometimes sketching a cross, a calvary or the outline of an even more distant memory. A strange vapour floats in the sea air, a past trying to seep into a filigree present.

The carriage drops me at the foot of the enormous stone cylinder. An old door creaks half-open, revealing a face weathered by sea spray and the smoke of an old meerschaum pipe. Under bushy, snow-white eyebrows, two black slits pierce me with their intensity.

— So you are the one we were waiting for?

Interspersed with curses and spitting, the sentence rang out like a wave breaking on a breakwater, in a language I should not have understood. Timidly, I nod slightly.

— Well then? Come up! the man continues. Here, to pay the coachman! he says, handing me a large, flat, silvery coin.

Without bothering to look at it, I hand it to my driver, who bites it, pockets it and gives me three small coins in exchange, grumbling:

— Your change.

Without another word, he turns his carriage around and vanishes into the night. But I have no time to watch him disappear before a hand grabs me and hauls me up a metal staircase.

— Come! We don't have all night!

The climb seems endless, dizzying. Beneath my feet, the steps follow one another, infinitely identical. After passing through several sparsely furnished rooms, we finally emerge onto the gallery surrounding the optic. The glare is blinding, but bearable. I make out a shape lying on the floor. The second keeper.

— What happened to him?

My host does not answer and points at him. I approach the body. The face is white, the lips have turned blue, but under my fingers I can still feel a faint warmth.

The first keeper looks at me and, reluctantly, throws me a rough:

— He drowned.

— What do you mean, drowned? Thirty metres above sea level?

— Yes!

— And what do you want me to do?

— Save him!

— But in all the time I have been on my way here, he has died dozens of times!

— In Brittany, time does not always flow the same way. Save him!

Forbidding myself to think, I mechanically apply first aid. Dry drowning. Mouth-to-mouth. Chest compressions. Once more. Again. And suddenly a movement, a cough, eyes opening, and a terrifying cold that invades me and takes my breath away.

I am in cold-water shock. I am underwater. Instinctively, my brain imposes the freediver's survival routine. Calm down. Relax the muscles. Accept the cold. Don't breathe, don't open your mouth. Swim. One stroke. Another stroke.

The pressure crushes my chest, drives my glottis into my ribcage, drills into my eardrums. I am deep. Very deep. But a depth that is familiar to me, one I have dived to before. So I take a stroke. But how do I know which way is up? A small air bubble escapes from my shoe and runs up along my trouser leg. So I am vertical. All that remains is to swim. One stroke. And another stroke. I count about ten of them. My lungs are about to burst. But another ten and everything should be fine. Come on, courage, just ten more and… my head suddenly breaks the surface. Breathe! Breathe! Breathe!

The cold drills into my temples. I am out of the water and I can see the shore outlined in deep black against the dark line of the horizon. It is only a few hundred metres away. I swim on my back to catch my breath; the current carries me. The biting cold is not lethal. Not for an hour or two at this temperature. I turn over to take a few strokes. The little beach is close now; my feet scrape the sand. Panting, walking, swimming, I drag myself out of the sea and collapse onto foul-smelling seaweed.

The ground gives way beneath me and I fall into darkness. A hard impact! A cry! Ouch! A light blinds me.

— What are you doing? Keep it down, you'll wake the little one!

Dazed, I find that I am at the foot of my bed.

— I… I must have fallen out of bed. I had a dream!

— Tell me tomorrow, my wife says, switching off the light. Climb back in and go back to sleep!

But when I tried to tell my dream the next morning, the words would not come. The memories were fraying, the images blurring. At most, I could show the three small coins found in my pocket, and a dive to minus thirty metres, recorded that night by my depth-gauge watch, in water of about ten degrees.

The lighthouse, for its part, continues to sweep each night with its dagger of light, sending men its vital warning. And if sailors have learned to be humble before the waves of the sea, what lighthouse will guide presumptuous humanity through the raging waters of time in which we are doomed to be swallowed up?

Who will be its keepers?

Night of 9 to 10 September, watching the lighthouse from l'Aber Wrac'h.

Photo by William Bout on Unsplash

I am @ploum, a disconnected speaker and electronic writer, remunerated on a pay-what-you-want basis via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

Acquia Engage attendees that arrived at the Austin airport were greeted by an Acquia banner!

Last week, Acquia welcomed more than 600 attendees to the fifth annual Acquia Engage Conference in Austin, Texas. During my keynote, my team and I talked about Acquia's strategy, recent product developments, and our product roadmap. I also had the opportunity to invite three of our customers on stage — Paychex, NBC Sports, and Wendy's — to hear how each organization is leveraging the Acquia Platform.

All three organizations demonstrate incredible use cases, and I invite you to watch the recording of the Innovation Showcase (78 minutes) or download a copy of my slides (219 MB).

I also plan to share more in-depth blog posts on my conversations with Wendy’s, NBC Sports, and Paychex next week.

I published the following diary on “Basic Obfuscation With Permissive Languages”:

For attackers, obfuscation is key to keeping their malicious code below the radar. Code is obfuscated for two main reasons: to defeat automatic detection by AV solutions or tools like YARA (which still rely mainly on signatures) and to make the code difficult for a security analyst to read and understand… [Read more]

[The post [SANS ISC] Basic Obfuscation With Permissive Languages has been first published on /dev/random]

November 15, 2018

This week Acquia was named a leader in the Forrester Wave: Web Content Management Systems. Acquia was previously named a leader in WCM systems in Q1 2017.

The report highlights Acquia and Drupal's leadership on decoupled and headless architectures, in addition to Acquia Cloud's support for Node.js.

I'm especially proud of the fact that Acquia received the highest strategy score among all vendors.

Thank you to everyone who contributed to this result! It's another great milestone for Acquia and Drupal.

November 13, 2018

If you want to go quickly, go alone. If you want to go far, go together.

Drupal exists because of its community. What started from humble beginnings has grown into one of the largest Open Source communities in the world. This is due to the collective effort of thousands of community members.

What distinguishes Drupal from other open source projects is both the size and diversity of our community, and the many ways in which thousands of contributors and organizations give back. It's a community I'm very proud to be a part of.

Without the Drupal community, the Drupal project wouldn't be where it is today and perhaps would even cease to exist. That is why we are always investing in our community and why we constantly evolve how we work with one another.

The last time we made significant changes to Drupal's governance was over five years ago when we launched a variety of working groups. Five years is a long time. The time had come to take a step back and to look at Drupal's governance with fresh eyes.

Throughout 2017, we did a lot of listening. We organized both in-person and virtual roundtables to gather feedback on how we can improve our community governance. This led me to invest a lot of time and effort in documenting Drupal's Values and Principles.

In 2018, we transitioned from listening to planning. Earlier this year, I chartered the Drupal Governance Task Force. The goal of the task force was to draft a set of recommendations for how to evolve and strengthen Drupal's governance based on all of the feedback we received. Last week, after months of work and community collaboration, the task force shared thirteen recommendations (PDF).

Me reviewing the Drupal Governance Task Force's proposal on a recent trip.

Before any of us jump to action, the Drupal Governance Task Force recommended a thirty-day open commentary period to give community members time to read the proposal and to provide more feedback. During that period, you can get involved by collaborating on and responding to each of the individual recommendations. After the thirty-day commentary period, I will work with the community, various stakeholders, and the Drupal Association to see how we can move these recommendations forward.

I'm impressed by the thought and care that went into writing the recommendations, and I'm excited to help move them forward.

Some of the recommendations are not new and are ideas that either the Drupal Association, myself or others have been working on, but that none of us have been able to move forward without a significant amount of funding or collaboration.

I hope that 2019 will be a year of organizing and finding resources that allow us to take action and implement a number of the recommendations. I'm convinced we can make valuable progress.

I want to thank everyone who has participated in this process. This includes community members who shared information and insight, facilitated conversations around governance, were interviewed by the task force, and supported the task force's efforts. Special thanks to all the members of the task force who worked on this with great care and determination for six straight months: Adam Bergstein, Lyndsey Jackson, Ela Meier, Stella Power, Rachel Lawson, David Hernandez and Hussain Abbas.

Pourquoi l’immense majorité des contenus en ligne aujourd’hui est en fait du bruit, comment on peut s’en protéger et comment faire pour rendre le monde un peu moins bruyant

Mon expérience des blogs marketing

Blogueur actif et, dans une certaine mesure, reconnu depuis plus de 14 ans, il est normal que j’aie tenté de multiples fois de faire de ma plume bloguesque un arc de ma carrière professionnelle.

Soit en postulant par moi-même pour alimenter le blog d’une entreprise contre rémunération, soit suite à une demande de mon employeur, impressionné par les différentes métriques de mon blog et rêvant de reproduire la même chose, de transférer mon audience vers son entreprise.

Dans tous les cas, ces projets se vautrèrent aussi misérablement qu’un base jumpeur ivre accroché à une plaque de tôle ondulée.

Je n’étais pas satisfait de mon travail. Mon employeur non plus. Le plus souvent, mes billets étaient même refusés avant publication. Pour le reste, ils étaient retravaillés à outrance, je devais les réécrire plusieurs fois et aucun de ceux qui ont finalement été publiés n’ont jamais obtenu ne fût qu’une fraction du succès relatif dont je peux régulièrement m’enorgueillir sur mon blog personnel. D’ailleurs, j’avais l’impression que les lettres s’étalaient sur l’écran en une bouse fraîche et odorante de printemps. Mon commanditaire me faisait comprendre que ce n’était pas qu’une impression.

J’ai toujours attribué ces échecs au bike shedding, ce besoin des managers de surveiller, de modifier ce qui leur semble simple au lieu de me faire confiance. Je les croyais incapables de laisser publier un truc qu’ils n’avaient pas écrit eux-mêmes (alors que si on m’engage, c’est justement pour ne pas l’écrire soi-même). Bref, du micromanagement.

Les deux types de contenus

Avec le recul, je commence seulement à réaliser une raison bien plus profonde de ces échecs : il y’a deux types de contenu, deux types de texte.

Les premiers, comme celui que vous êtes en train de lire, sont le résultat d’une expression personnelle. Le but est de réfléchir, en public ou non, par clavier interposé, de partager des réflexions, de les lancer dans l’éther sans trop savoir ce qui va en découler.

Mais les entreprises n’ont, par définition, qu’un seul objectif : vendre leurs tapis, leurs chameaux de plastiques produits en chine par des enfants aveugles dans des mines, leurs services de voyance/consultance aussi ineptes que dispendieux, leur golfware et autre tupperware. Mais on ne vend pas directement par blog interposé, ça se saurait ! En conséquence, le but du blog corporate est uniquement d’augmenter la visibilité, le pagerank, le référencement du site de l’entreprise dans les moteurs de recherches et les réseaux sociaux. L’entreprise va tenter de produire du contenu, mais sans réelle motivation (car elle n’a rien à partager ou, si c’est le cas, elle ne souhaite justement pas le partager) et sans aucun objectif d’être lu. Un blog d’entreprise, c’est surtout ne rien dire (pour ne pas donner des idées aux concurrents), mais faire du vent en espérant que les suites aléatoires de mots à haute teneur de buzz bercent les oreilles des robots Google d’une douce musique de pagerank ou que le titre soit suffisamment accrocheur pour générer un clic chez une aléatoire probabilité de clientèle potentielle.

J’exagère ? Mais regardez, au hasard, le blog de Freedom, un logiciel qui a pour mission de filtrer les distractions, de permettre de se concentrer. Il est rempli d’articles creux, vides et, disons-le, nullissimes, sur le thème du focus. Le but est évident : attirer l’attention. Tenter de distraire les internautes pour leur vendre une solution pour éviter les distractions.

Ce qui est particulièrement dommage c’est que ce genre de blog peut avoir des choses à dire. Mais un contenu intéressant, car parlant du logiciel lui-même, est noyé au milieu d’un succédané de Flair l’hebdo. Freedom n’est pas une exception, c’est la règle générale. Les projets, même intéressants, se sentent obligés de produire régulièrement du contenu, de faire du marketing au lieu de se concentrer sur la technique.

Une fois clairement identifiée la différence entre un blog d’idées et un blog marketing, il semble absurde qu’on ait voulu, de nombreuses fois, me confier les rênes d’un blog marketing.

Un blog marketing, par essence, n’est que du vent, du bruit. Il a pour objectif d’attirer l’attention sans prendre le moindre risque. Un blog personnel, c’est le contraire. J’ai l’aspiration d’écrire pour moi, de prendre des risques, de me mettre à nu. Même si, trop souvent, je cède aux sirènes du bruit et du marketing de mon fragile ego.

Le bruit de l’email

Il semble évident que cette différentiation du contenu ne s’applique pas qu’aux blogs, mais absolument à tout type de média, depuis l’email à la lettre papier. Il y’a deux types de contenus : les contenus créés pour partager quelque chose et les contenus qui ne cherchent qu’à prendre de la place dans un océan de contenu, à exister, à accaparer votre attention. Bref, du bruit…

Toute la « science » enseignée dans les écoles de marketing peut se résumer à cela : faire du bruit. Faire plus de bruit que les autres. Et accessoirement mesurer combien de personnes ont été forcées d’entendre. Alors, forcément, quand des milliers de marketeux issus des mêmes écoles se retrouvent en situation de concurrence, cela produit un gigantesque tintamarre, un tohu-bohu, un charivari dans lequel nous vivons pour le moment.

Aujourd’hui, je commence à peine à prendre conscience que la grande partie de mon temps de consommation pré-déconnexion était en fait dédié à “trier” le bruit, sans réel repère autre que l’intuition. Je tentais de donner un sens à tout ce vacarme. Pour chaque contenu qui “avait l’air intéressant”, je passais du temps à soit le supprimer, soit à tenter de comprendre ce qui se cachait sous les parasites. Et, forcément, lorsqu’on est assourdi, tout parait bruyant. Même une conversation normale devient inintelligible.

À la lueur de cette dichotomie manichéenne, ma déconnexion s’illumine d’un nouveau sens : comment passer moins de temps dans la mer de contenu à trier ? Comment supprimer le bruit de mon existence ?

Et la solution se révèle, pour moi, incroyablement simple.

Ne plus accepter le bruit

Pour les emails, cela consiste à se désabonner d’absolument tous les mails qui comportent la mention “unsubscribe” et à demander à tous les auteurs de mails impersonnels de me supprimer de leur liste. Chaque mail de ce type est donc un léger effort (parfois il y’a des négociations), mais l’effet, au bout de quelques semaines, est absolument saisissant. L’immense majorité de notre usage du mail est en fait “filtrer le bruit”. Ma solution est tellement efficace que, parfois, j’ai l’impression que mon mail est planté. Cela demande une certaine rigueur, car recevoir un mail est source d’une légère décharge de dopamine. Nous sommes accros à recevoir des mails. Si vous êtes de ceux qui reçoivent plus de cinquante emails par jour, et je l’ai été bien longtemps, vous n’êtes pas une personne importante, vous êtes tout simplement une victime du bruit.

Il y’a probablement certaines newsletters que vous trouvez instructives, intéressantes. Si vous ne payez pas pour ces newsletters, alors c’est par définition du bruit. Des mails conçus pour vous rappeler que le projet ou la personne existe. Les grands services en ligne comme Facebook et Linkedin sont d’ailleurs particulièrement retors : de nouvelles catégories d’emails sont ajoutées régulièrement, permettant de toucher ceux qui s’étaient déjà désinscrits de tout le reste. Se désinscrire peut parfois révéler du véritable parcours du combattant, et c’est parfois pire pour les mailings papier !

Hors mail, comme vous le savez peut-être, je n’accède plus non plus aux réseaux sociaux. Au fond, ceux-ci sont l’archétype du bruit. Sur les réseaux sociaux, l’immense majorité des contenus ne sont que « Regardez-moi, j’existe ! ». Être sur les réseaux sociaux, c’est un peu comme rentrer dans une discothèque en espérant tomber par hasard sur quelqu’un avec qui discuter de l’influence kantienne dans l’œuvre d’Heidegger. Ça arrive, mais c’est tellement rare que mieux vaut utiliser d’autres moyens et se couper du bruit.

Par contre, je lis avec plaisir ce que mes amis prennent le temps de me recommander. Je suis également le flux RSS de quelques individus sélectionnés. Le fait de les lire non plus en vitesse, au milieu du bruit, en les scannant pour tester leur “intérêt potentiel”, m’apporte énormément. Je leur fais désormais confiance, je les lis en étant disponible à 100%. Cela me permet de m’imprégner de leurs idées. Je ne me contente plus de lister leurs idées pour les archiver dans un coin de ma tête ou d’Evernote, mais je prends le temps de les laisser grandir en moi, de les approprier, de partager leur vision.

Au lieu de scanner le bruit pour repérer ce qui m’intéresse, j’accepte de rater des infos et je n’accepte que des sources qui sont majoritairement personnelles, profondes, quel que soit le sujet. Je peux me passionner pour un billet d’un Alias même lorsqu’il traite de sujets abscons et complètement hors de mes intérêts. Par exemple la différence entre le néo-prog métal et le néo-métal tendance prog, un sujet fondamental. Si je pestais contre les articles trop longs qui n’allaient pas directement à l’essentiel, aujourd’hui je suis déçu par la brièveté de certains qui ne font que toucher, effleurer ce qui mériterait une bien plus grande profondeur.

Contributing to the silence

But reducing the world's noise is not a one-way street. We are all responsible for a share of the noise. How can we contribute?

It's simple! Whether it's a plain email, a post on social networks or a blog post, ask yourself the question: am I making the world better by publishing this content?

If the point is to seek recognition, to convince your audience (whether of the relevance of your ideas, the need to buy your product, or the importance of your life) or to vent your anger, then you are not making the world better.

Somewhat by chance, I bought an Antidote license when starting my disconnection (which is galling: they announced version 10 three weeks later). Antidote has a feature that displays every email you are about to send on screen, with all its spelling mistakes. It's a bit tedious because, Antidote being slow, it makes email less immediate.

Yet two or three times, on rereading, I purely and simply gave up on sending the email. I used to be hyper-impulsive at the keyboard; here is a tool that is curing me. If the world doesn't need my email, then I don't send it. Not only do I apply Inbox 0 for myself, but I now help others reach it.

A striking example: my standard unsubscription-request email citing the GDPR used to include a paragraph trying to argue that these marketing practices are immoral. I stopped. I try to answer every email with only the minimum information needed. I keep all my improvement suggestions to myself. It's hard, but it will bring relief to quite a few people.

Too much noise, not enough silence?

Since the advent of the Internet, we are all content producers. If we add the content generated automatically by computers, it seems obvious that there is now far too much content, and that finding an audience is difficult. It's something the author Neil Jomunsi deals with regularly.

Far from being frightened by this apparent abundance, I think there is actually not enough quality content. There are not enough writers, not enough artists. There is not enough content whose sole purpose is to enrich the common heritage of humanity.

What if, before anything else, we stopped producing and consuming noise, and crap?

In an age of information overload, of epileptically blinking ads on every street corner and of omniscient screens, publishing content should pass a strict filter: "Do I really want to publish this? Am I not adding another layer of manure on top of the world's crap?"

Am I, Ploum, a moralizer in search of ego, not facing a paradox by continuing to automatically feed my noisy social media accounts so that you come and read my pontificating sermons on silence? The thinking is ongoing.

Photo by @chairulfajar_ on Unsplash

I am @ploum, a disconnected lecturer and electronic writer, paid on a pay-what-you-want basis via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

November 12, 2018

Content authors want an easy-to-use page building experience; they want to create and design pages using drag-and-drop and WYSIWYG tools. For over a year the Drupal community has been working on a new Layout Builder, which is designed to bring this page building capability into Drupal core.

Drupal's upcoming Layout Builder is unique in offering a single, powerful visual design tool for the following three use cases:

  1. Layouts for templated content. The creation of "layout templates" that will be used to layout all instances of a specific content type (e.g. blog posts, product pages).
  2. Customizations to templated layouts. The ability to override these layout templates on a case-by-case basis (e.g. the ability to override the layout of a standardized product page).
  3. Custom pages. The creation of custom, one-off landing pages not tied to a content type or structured content (e.g. a single "About us" page).

Let's look at all three use cases in more detail to explain why we think this is extremely useful!

Use case 1: Layouts for templated content

For large sites with significant amounts of content it is important that the same types of content have a similar appearance.

A commerce site selling hundreds of different gift baskets with flower arrangements should have a similar layout for all gift baskets. For customers, this provides a consistent experience when browsing the gift baskets, making them easier to compare. For content authors, the templated approach means they don't have to worry about the appearance and layout of each new gift basket they enter on the site. They can be sure that once they have entered the price, description, and uploaded an image of the item, it will look good to the end user and similar to all other gift baskets on the site.

The Drupal 8 Layout Builder showing a templated gift basket

Drupal 8's new Layout Builder allows a site creator to visually create a layout template that will be used for each item of the same content type (e.g. a "gift basket layout" for the "gift basket" content type). This is possible because the Layout Builder benefits from Drupal's powerful "structured content" capabilities.

Many of Drupal's competitors don't allow such a templated approach to be designed in the browser. Their browser-based page builders only allow you to create a design for an individual page. When you want to create a layout that applies to all pages of a specific content type, it is usually not possible without a developer.

Use case 2: Customizations to templated layouts

While having a uniform look for all products of a particular type has many advantages, sometimes you may want to display one or more products in a slightly (or dramatically) different way.

Perhaps a customer recorded a video of giving their loved one one of the gift baskets, and that video has recently gone viral (because somehow it involved a puppy). If you only want to update one of the gift baskets with a video, it may not make sense to add an optional "highlighted video" field to all gift baskets.

Drupal 8's Layout Builder offers the ability to customize templated layouts on a case per case basis. In the "viral, puppy, gift basket" video example, this would allow a content creator to rearrange the layout for just that one gift basket, and put the viral video directly below the product image. In addition, the Layout Builder would allow the site to revert the layout to match all other gift baskets once the world has moved on to the next puppy video.

The Drupal 8 Layout Builder showing a templated gift basket with a puppy video

Since most content management systems don't allow you to visually design a layout pattern for certain types of structured content, they of course can't allow for this type of customization.

Use case 3: Custom pages (with unstructured content)

Of course, not everything is templated, and content authors often need to create one-off pages like an "About us" page or the website's homepage.

In addition to visually designing layout templates for different types of content, Drupal 8's Layout Builder can also be used to create these dynamic one-off custom pages. A content author can start with a blank page, design a layout, and start adding blocks. These blocks can contain videos, maps, text, a hero image, or custom-built widgets (e.g. a Drupal View showing a list of the ten most popular gift baskets). Blocks can expose configuration options to the content author. For instance, a hero block with an image and text may offer a setting to align the text left, right, or center. These settings can be configured directly from a sidebar.

The Drupal 8 Layout Builder showing how to configure a block

In many other systems content authors are able to use drag-and-drop WYSIWYG tools to design these one-off pages. This type of tool is used in many projects and services such as Squarespace and the new Gutenberg Editor for WordPress (now available for Drupal, too!).

On large sites, the free-form page creation is almost certainly going to be a scalability, maintenance and governance challenge.

For smaller sites where there may not be many pages or content authors, these dynamic free-form page builders may work well, and the unrestricted creative freedom they provide might be very compelling. However, on larger sites, when you have hundreds of pages or dozens of content creators, a templated approach is going to be preferred.

When will Drupal's new Layout Builder be ready?

Drupal 8's Layout Builder is still a beta level experimental module, with 25 known open issues to be addressed prior to becoming stable. We're on track to complete this in time for Drupal 8.7's release in May 2019. If you are interested in increasing the likelihood of that, you can find out how to help on the Layout Initiative homepage.

An important note on accessibility

Accessibility is one of Drupal's core tenets, and building software that everyone can use is part of our core values and principles. A key part of bringing Layout Builder functionality to a "stable" state for production use will be ensuring that it passes our accessibility gate (Level AA conformance with WCAG and ATAG). This holds for both the authoring tool itself, as well as the markup that it generates. We take our commitment to accessibility seriously.

Impact on contributed modules and existing sites

Currently there are a few methods in the Drupal module ecosystem for creating templated layouts and landing pages, including the Panels and Panelizer combination. We are currently working on a migration path from Panels/Panelizer to the Layout Builder.

The Paragraphs module currently can be used to solve several kinds of content authoring use-cases, including the creation of custom landing pages. It is still being determined how Paragraphs will work with the Layout Builder and/or if the Layout Builder will be used to control the layout of Paragraphs.


Drupal's upcoming Layout Builder is unique in that it supports multiple different use cases; from templated layouts that can be applied to dozens or hundreds of pieces of structured content, to designing custom one-off pages with unstructured content. The Layout Builder is even more powerful when used in conjunction with Drupal's other out-of-the-box features such as revisioning, content moderation, and translations, but that is a topic for a future blog post.

Special thanks to Ted Bowman (Acquia) for co-authoring this post. Also thanks to Wim Leers (Acquia), Angie Byron (Acquia), Alex Bronstein (Acquia), Jeff Beeman (Acquia) and Tim Plunkett (Acquia) for their feedback during the writing process.

Dear Kettle and Neo4j friends,

Since I joined the Neo4j team in April I haven’t given you any updates despite the fact that a lot of activity has been taking place in both the Neo4j and Kettle realms.

First and foremost, you can grab the cool Neo4j plugins from (the plugin in the marketplace is always out of date since it takes weeks to update the metadata).

Then, based on valuable feedback from community members, we've updated the DataSet plugin (including unit testing) to support relative paths for filenames (for easier git support), to avoid modifying transformation metadata, and to set custom variables or parameters.

Kettle unit testing ready for prime time

I’ve also created a plugin to make debugging transformations and jobs a bit easier. You can do things like set specific logging levels on steps (or only for a few rows) and work with zoom levels.

Right-clicking on a step, you can choose “Logging…” and set logging specifics.

Then, back on the subject of Neo4j, I’ve created a plugin to log the execution results of transformations and jobs (and a bit of their metadata) to Neo4j.

Graph of a transformation executing a bunch of steps. Metadata on the left, logging nodes on the right.

Those working with Azure might enjoy the Event Hubs plugins for a bit of data streaming action in Kettle.

The Kettle Needful Things plugin aims to fix bugs and solve silly problems in Kettle. For now it sets the correct local metastore on Carte servers AND… features a new launcher script called Maitre. Maitre supports transformations and jobs; local, remote and clustered execution.

The Kettle Environment plugin aims to take a stab at life-cycle management by allowing you to define a list of Environments:

The Environments dialog shown at the start of Spoon

In each Environment you can set all sorts of metadata but also the location of the Kettle and MetaStore home folders.

Finally, because downloading, patching, installing and configuring all this is a lot of work, I’ve created an automated process which does this for you on a daily basis (for testing), so you can download Kettle Community Edition, patched with all the extra plugins above, in its 1GB glory at:

To get it on your machine simply run:

wget -O

You can also give these plugins (except for Needful Things and Environment) a try live on my sandbox WebSpoon server. You can easily run your own WebSpoon from the docker container, which is also updated daily.

If you have suggestions, bugs, rants, please feel free to leave them here or in the respective GitHub projects. Any feedback is, as always, more than welcome. In fact, thank you all for the feedback given so far; it’s making all the difference. If you feel the need to contribute more opinions on the subject of Kettle, feel free to send me a mail (mattcasters at gmail dot com) to join our kettle-community Slack channel.



November 11, 2018

The post HTTP-over-QUIC will officially become HTTP/3 appeared first on

The protocol that's been called HTTP-over-QUIC for quite some time has now changed name and will officially become HTTP/3. This was triggered by this original suggestion by Mark Nottingham.

The QUIC Working Group in the IETF works on creating the QUIC transport protocol. QUIC is a TCP replacement done over UDP. Originally, QUIC was started as an effort by Google and was then more of an "HTTP/2-encrypted-over-UDP" protocol.

Source: HTTP/3

If you're interested in what QUIC is and does, I wrote a lengthy blogpost about it: Google’s QUIC protocol: moving the web from TCP to UDP.


November 10, 2018

WP YouTube Lyte users might have received the following mail from Google/ YouTube:

This is to inform you that we noticed your project(s) has not accessed or used the YouTube Data API Service in the past 60 days.

Please note, if your project(s) remains inactive for another 30 days from the date of this email (November 9, 2018), we will disable your project’s access to, or use of, the YouTube API Data Service. As per Section III(D)(4) of the YouTube API Services Developer Policies (link), YouTube has the right to disable your access to, and use of, the YouTube Data API Service if your project has been inactive for 90 consecutive days.

The reason for this is that LYTE caches the responses from YouTube to ensure optimal performance. You can check your API usage at In my case it was not inactive, although not very active either:

To make sure my API access would not get disabled, I ticked the “Empty WP YouTube Lyte’s cache” checkbox in LYTE’s settings and saved changes, forcing LYTE to re-request the data from the YouTube API when pages/posts with LYTE embeds are requested again. The result:

I do have a non-negligible number of videos on this little blog already, but here’s one more for a rainy Saturday afternoon:

YouTube Video
Watch this video on YouTube.


November 09, 2018

How marketers and other advertisers are on a mission to destroy humanity, one Tupperware lid at a time.

We all have a Tupperware drawer with 20 containers and 20 lids. And yet, try them all you like, nothing doing: none fits any other. Sometimes, by a stroke of luck, there's one pair that interlocks at the cost of great effort holding down all four corners, a precious pair you will forever watch for, without ever getting rid of the rest of the drawer.

It's a mathematically or physically complex problem, a true enigma of the universe, on the same order as Schrödinger's sock.

So yes, Elon Musk sends electric cars to Mars, but the important problems, like the Tupperware one, nobody solves.

All because of Marketing!

Imagine: maybe there's a young guy in the R&D department at Tupperware. For 12 years he has worked on this problem; he has written insane equations in 18-dimensional affine spaces and reinvented mathematics. One day: Eureka!

Without wasting a second, he runs to announce the good news. Not only has he understood the paradox, he has a solution to offer. The problem will be solved once and for all! His heart is racing, sweat drips through his hair. Glory for him, a better world for the rest of humanity! No more mismatched Tupperware! Less pollution!

That's when the marketing director walks in.

"Yes, but no. That will tank our sales. 65% of our revenue comes from people who buy new Tupperware because they can no longer find the lid (or, conversely, the container). So you understand, my boy… Your affine spaces are all very nice, but this here is real life!"

12 years of research and work down the drain. This great problem of humanity, which affects millions of people, will leave them forever in the suffering and torment of mismatched Tupperware. Worse, the marketing director will propose making subtly different collections so that the lid looks compatible but, in the end, isn't.

Marketing and sales are the antithesis of progress. Their sole objective is to make humanity miserable so that we keep buying, again and again. If your job consists of "increasing sales", whether through web marketing, advertising or anything else, you are not a solution: you are the problem.

I'm not making this up; you only have to look at your Tupperware drawer for proof.

Photo by QiYun Lu.


Passive DNS is not a new technique but, over the last few months, there has been more and more noise around it. Passive DNS is a technique used to record all resolution requests performed by DNS resolvers (the bigger they are, the more they collect) and to allow searching through that historical data later. This is very useful when you are working on a security incident, for example to track the different IP addresses used to host a C2 server.

But Passive DNS can also be very helpful on the other side: offensive security! A practical example is always better to demonstrate the value of passive DNS. During an engagement, I found a server with ports 80, 443 and 8080 publicly available. Port 8080 looked juicy because it could be a management interface, a Tomcat instance or… who knows? Alas, there was no reverse DNS defined for the IP address and, when accessing the server directly through the IP address, I just hit a default page:

Default Vhost

You probably recognised the good old default page of IIS 6.0, but that's not so important here. The problem was the same on port 8080: an error message. Now, let's search for historical data about this IP address; that is where Passive DNS is helpful. Bingo! There were three records associated with this IP. Let's retry accessing port 8080, but now through a vhost:

JBoss

This looks much more interesting! We can now go deeper and maybe find more juicy stuff to report… When you scan IP addresses, always query them against one or more Passive DNS databases to easily detect potential websites.
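In practice this just means forcing the Host header when you request the IP (with curl: `curl -H 'Host: hostname' http://IP:8080/`). Here is a self-contained sketch of the idea with hypothetical names ("oldapp.example.com" stands for a hostname recovered from a passive DNS lookup of the scanned IP); a tiny local server plays the role of the target and only serves the real application when the Host header matches the vhost:

```python
# Demo of name-based virtual hosting: the same IP:port answers differently
# depending on the Host header, which is why passive DNS hostnames matter.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class VhostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "").split(":")[0]
        # Route on the Host header, like a web server with name-based vhosts.
        body = b"JBoss console" if host == "oldapp.example.com" else b"Default page"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence request logging for the demo

server = HTTPServer(("127.0.0.1", 0), VhostHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def fetch(host_header):
    """GET / from the demo server, forcing an arbitrary Host header."""
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("GET", "/", headers={"Host": host_header})
    return conn.getresponse().read().decode()

hitting_ip = fetch("192.0.2.10")             # direct IP access: default page
hitting_vhost = fetch("oldapp.example.com")  # vhost from passive DNS data
print(hitting_ip, "/", hitting_vhost)
```

Requesting by bare IP lands on the default site, while the recovered hostname reaches the forgotten application, exactly the behaviour observed during the engagement.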

How to protect against this? You can't prevent your IP addresses and hostnames from being recorded in Passive DNS databases, but there is a simple way to avoid problems such as the one described above: keep your infrastructure up-to-date. Based on what I found and the Passive DNS data, this server is an old one. The application was moved a while ago, but the vhost and the application were still configured on the old server…

[The post Passive DNS for the Bad has been first published on /dev/random]

November 08, 2018

PHP, the Open Source scripting language, is used by nearly 80 percent of the world's websites.

According to W3Techs, around 61 percent of all websites on the internet still use PHP 5, a version of PHP that was first released fourteen years ago.

Now is the time to give PHP 5 some attention. In less than two months, on December 31st, security support for PHP 5 will officially cease. (Note: Some Linux distributions, such as Debian Long Term Support distributions, will still try to backport security fixes.)

If you haven't already, now is the time to make sure your site is running an updated and supported version of PHP.

Beyond security considerations, sites that are running on older versions of PHP are missing out on the significant performance improvements that come with the newer versions.

Drupal and PHP 5

Drupal 8

Drupal 8 will drop support for PHP 5 on March 6, 2019. We recommend updating to at least PHP 7.1 if possible, and ideally PHP 7.2, which is supported as of Drupal 8.5 (which was released March, 2018). Drupal 8.7 (to be released in May, 2019) will support PHP 7.3, and we may backport PHP 7.3 support to Drupal 8.6 in the coming months as well.

Drupal 7

Drupal 7 will drop support for older versions of PHP 5 on December 31st, but will continue to support PHP 5.6 as long as there are one or more third-party organizations providing reliable, extended security support for PHP 5.

Earlier today, we released Drupal 7.61 which now supports PHP 7.2. This should make upgrades from PHP 5 easier. Drupal 7's support for PHP 7.3 is being worked on but we don't know yet when it will be available.

Thank you!

It's a credit to the PHP community that they have maintained PHP 5 for fourteen years. But that can't go on forever. It's time to move on from PHP 5 and upgrade to a newer version so that we can all innovate faster.

I'd also like to thank the Drupal community — both those contributing to Drupal 7 and Drupal 8 — for keeping Drupal compatible with the newest versions of PHP. That certainly helps make PHP upgrades easier.

November 07, 2018

The post Automatic monitoring of Laravel Forge managed sites with Oh Dear! appeared first on

Did I mention there's a blog for Oh Dear! yet? :-)

This week we shipped a pretty cool feature: if you use Laravel Forge to manage your servers & sites, we can automatically detect newly added sites and start monitoring those instantly.

The cool thing is that, while Laravel Forge might've started out as a pure-Laravel management tool, it can be used for any PHP-based framework. You can use it to manage your servers for Zend Framework, Symfony, ... or any other kind of project.

Today we're launching a cool feature for our users on the Laravel Forge platform: automatic monitoring of any of your sites and servers managed through Laravel Forge!

Source: Automatic monitoring of Laravel Forge managed sites

Give it a try, the integration is pretty seamless!


November 06, 2018

While I have to admit that I've been using Zabbix since the 1.8.x era, I also have to admit that I'm not an expert, and that one can learn new things every day. I recently had to implement a new template for a custom service that is multi-instance aware: it can be started multiple times with various configurations, each with its own set of settings (such as the TCP port on which to listen), and the number of running instances can differ from one node to the next.

I was thinking about the best way to implement this in Zabbix. My initial idea was to have one template per possible instance type, which would use macros defined at the host level to know which port to check, and so on: in effect, backporting into Zabbix what configuration management (Ansible in our case) already knows in order to deploy such an app instance.

In parallel, I have always liked the fact that Zabbix itself has internal tools to auto-discover items, and thus triggers for those: that's called Low-Level Discovery (LLD for short).

By default, if you use (or have modified) some Zabbix templates, you can see those in action for the mounted filesystems or the present network interfaces on your Linux OS. That's the "magic": you added a new mount point or a new interface? Zabbix discovers it automatically, starts monitoring it, and also graphs values for it.

So, back to our monitoring problem and the need for multiple templates: what if we could use LLD too, and have Zabbix automatically check our deployed instances (multiple ones)? The good news is that we can: one can create custom LLD rules, so it works out of the box with only one template added for those nodes.

If you read the link above about custom LLD rules, you'll see some examples of a script being called at the agent level, from the Zabbix server, at a periodic interval, to "discover" those custom discovery checks. The interesting part to notice is that it's a JSON document that is returned to the Zabbix server, tied to a new key that is declared at the template level.

So it (usually) goes like this:

  • create a template
  • create a new discovery rule, give it a name and a key (and eventually add Filters)
  • deploy a new UserParameter at the agent level reporting to that key the JSON string it needs to declare to the Zabbix server
  • the Zabbix server receives/parses that JSON and, based on the checks/variables declared in it, creates from those returned macros some Item prototypes, Trigger prototypes and so on ...

Magic! ... except that in my specific case, for several reasons, I never allowed the zabbix user to launch commands, due to limited rights and also the SELinux context in which it's running (for interested people, it's running in the zabbix_agent_t context).

I didn't want to change that base rule for our deployments, but the good news is that you don't have to use UserParameter for LLD! It's true that if you look at the existing discovery rules for "Network interface discovery", you'll see the key net.if.discovery that is used for everything after, but the Type is "Zabbix agent". We can use something else from that list, as we already do for a "normal" check.

I'm already (ab)using the Trapper item type for a lot of hardware checks. The reason is simple: as the zabbix user is limited (and I don't want to grant it more rights), I have some scripts checking for hardware RAID controllers (if any), etc., and reporting back to Zabbix through zabbix_sender.

Let's use the same logic for the JSON string to be returned to the Zabbix server for LLD (yes, Trapper is in the list for the discovery rule Type).

It's even easier for us, as we'll control that through Ansible: it's what is already used to deploy/configure our RepoSpanner instances, so we have all the logic there.

Let's first start by creating the new template for repospanner, and create a discovery rule (detecting each instance and its settings):


You can then apply that template to host[s] and wait ... but first we need to report back, from the agent to the server, which instances are deployed/running. Let's see how to implement that through Ansible.

To keep it short, in Ansible we have the following variables (default values, not the real ones) in roles/repospanner/default.yml:

repospanner_instances:
  - name: default
    admin_cli: False
    rpc_port: 8443
    http_port: 8444
    tls_ca_cert: ca.crt
    tls_cert: nodea.regiona.crt
    tls_key: nodea.regiona.key
    my_cn: localhost.localdomain
    master_node:  # to know how to join a cluster for other nodes
    init_node: True  # to be declared only on the first node

That simple example has only one instance, but you can easily see how to have multiple ones. So here is the logic: let's have Ansible, when configuring the node, create the file that zabbix_sender (triggered by Ansible itself) will use to send the JSON to the Zabbix server. zabbix_sender can use an input file whose lines are formatted (see the man page) like this:

  • hostname (or '-' to use the name configured in zabbix_agentd.conf)
  • key
  • value

Those three fields have to be separated by a single space, and, importantly, you can't have extra empty lines (something you can easily run into when playing with this for the first time).
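As a runnable illustration of that file format (instance names and ports taken from the rendered example later in this post), here is how such a single-line entry for the LLD trapper item can be built and sanity-checked before handing it to zabbix_sender:

```python
# Build the one-line zabbix_sender input for the LLD trapper item:
# "<hostname> <key> <value>", where "-" means "use the hostname from
# zabbix_agentd.conf" and the value is the LLD JSON carrying the macros.
import json

instances = [
    {"name": "namespace_rpms", "rpc_port": 8443, "http_port": 8444},
    {"name": "namespace_centos", "rpc_port": 8445, "http_port": 8446},
]

data = [
    {"{#INSTANCE}": i["name"],
     "{#RPCPORT}": str(i["rpc_port"]),
     "{#HTTPPORT}": str(i["http_port"])}
    for i in instances
]

# Compact separators keep the JSON free of spaces, so the line cleanly
# splits into the three expected space-separated fields.
line = "- repospanner.lld.instances " + json.dumps({"data": data}, separators=(",", ":"))
print(line)
```

In the real deployment this line is of course produced by the Jinja2 template, but validating the JSON this way (json.loads on the third field) is a quick check when zabbix_sender reports a processing failure.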

How does our file (roles/repospanner/templates/zabbix-repospanner-lld.j2) look?

- repospanner.lld.instances { "data": [ {% for instance in repospanner_instances -%} { "{{ '{#INSTANCE}' }}": "{{ }}", "{{ '{#RPCPORT}' }}": "{{ instance.rpc_port }}", "{{ '{#HTTPPORT}' }}": "{{ instance.http_port }}" } {%- if not loop.last -%},{% endif %} {% endfor %} ] }

If you have already used Jinja2 templates with Ansible, it's quite easy to understand. But I have to admit that I had trouble with the {#INSTANCE} one: it isn't an Ansible variable, but rather a fixed name for the macro that we'll send to Zabbix (and reuse as a macro everywhere). Ansible, when rendering the Jinja2 template, was complaining about a missing "comment": indeed, {# ... #} is a comment in Jinja2. So the best way (thanks to the people in #ansible for that trick) is to include it in {{ }} brackets but escape it as a string, so that it is rendered as {#INSTANCE} (nice to know if you ever have to do that too ....)

The rest is trivial; excerpt from monitoring.yml (included in that repospanner role):

- name: Distributing zabbix repospanner check files
  template:
    src: "{{ item }}.j2"
    dest: "/usr/lib/zabbix/{{ item }}"
    mode: 0755
  with_items:
    - zabbix-repospanner-check
    - zabbix-repospanner-lld
  register: zabbix_templates
  tags:
    - templates

- name: Launching LLD to announce to zabbix
  shell: /bin/zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -i /usr/lib/zabbix/zabbix-repospanner-lld
  when: zabbix_templates is changed

And this is how it is rendered on one of my test nodes:

- repospanner.lld.instances { "data": [ { "{#INSTANCE}": "namespace_rpms", "{#RPCPORT}": "8443", "{#HTTPPORT}": "8444" }, { "{#INSTANCE}": "namespace_centos", "{#RPCPORT}": "8445", "{#HTTPPORT}": "8446" }  ] }

As Ansible auto-announces/pushes that back to Zabbix, the Zabbix server can automatically start creating (through LLD, based on the item prototypes) checks, triggers and graphs, and so start monitoring each newly deployed instance. You want to add a third one (we have two in our case)? Ansible pushes the config, the rendered .j2 template changes, and the Zabbix server gets notified, and so on.

The rest is just "normal" operation for Zabbix: you can create item/trigger prototypes and just use those special macros coming from LLD:


It was worth spending some time in the LLD documentation and in #zabbix discussing LLD: once you see the added value, and that you can configure it all automatically through Ansible, you can see how powerful it can be.

I’ve been running a lot lately, and so have been listening to lots of podcasts! Which is how I stumbled upon this great episode of the Lullabot podcast recently — embarrassingly one from over a year ago: “Talking Performance with Pantheon’s David Strauss and Josh Koenig”, with David and Josh from Pantheon and Nate Lampton from Lullabot.

(Also, I’ve been meaning to blog more, including simple responses to other blog posts!)

Interesting remarks about BigPipe

Around 49:00, they start talking about BigPipe. David made these observations around 50:22:

I have some mixed views on exactly whether that’s the perfect approach going forward, in the sense that it relies on PHP pumping cached data through its own system which basically requires handling a whole bunch of strings to send them out, as well as that it seems to be optimized around this sort of HTTP 1.1 behavior. Which, to compare against HTTP 2, there’s not really any cost to additional connections in HTTP 2. So I think it still remains to be seen how much benefit it provides in the real world with the ongoing evolution of some of these technologies.

David is right; BigPipe is written for an HTTP/1.1 world, because BigPipe is intended to benefit as many end users as possible.

And around 52:00, Josh then made these observations:

It’s really great that BigPipe is in Drupal core because it’s the kind of thing that if you’re building your application from scratch that you might have to do a six month refactor to even make possible. And the cache layer that supports it, can support lots other interesting things that we’ll be able to develop in the future on top of Drupal 8. […] I would also say that I think the number of cases where BigPipe or ESI are actually called for is very very small. I always whenever we talk about these really hot awesome bleeding-edge cache technologies, I kinda want to go back to what Nate said: start with your Page Cache, figure out when and how to use that, and figure out how to do all the fundamentals of performance before even entertaining doing any of these cutting-edge technologies, because they’re much trickier to implement, much more complex and people sometimes go after those things first and get in over their head, and miss out on a lot of the really big wins that are easier to get and will honestly matter a lot more to end users. “Stop thinking about ESI, turn on your block cache.”

Josh is right too, BigPipe is not a silver bullet for all performance problems; definitely ensure your images and JS are optimized first. But equating BigPipe with ESI is a bit much; ESI is indeed extremely tricky to set up. And … Drupal 8 has always cached blocks by default. :)

Finally, around 53:30, David cites another reason why more sites are not handling authenticated traffic:

[…] things like commenting often move to tools like Disqus and whether you want to use Facebook or the Google+ ones or any one of those kind of options; none of those require dynamic interaction with Drupal.

Also true, but we’re now seeing the inverse movement, with the increased skepticism of trusting social media giants, not to mention the privacy (GDPR) implications. Which means sites that have great performance for dynamic/personalized/uncacheable responses are becoming more important again.

BigPipe’s goal

David and Josh were being constructively critical; I would expect nothing less! :)

But in their description and subsequent questioning of BigPipe, I think they forget its two crucial strengths:

BigPipe works on any server, and is therefore available to everybody, and it works for many things out of the box, including e.g. every uncacheable Drupal block! In other words: no infrastructure (changes) required!

Bringing this optimization that sits at the intersection of front-end & back-end performance to the masses rather than having it only be available for web giants like Facebook and LinkedIn is a big step forward in making the entire web fast.

Using BigPipe does not require writing a single line of custom code; the module effectively progressively enhances Drupal’s HTML rendering, and it has been turned on by default since Drupal 8.5!


Like Josh and David say: don’t forget about performance fundamentals! BigPipe is no silver bullet. If you serve 100% anon traffic, BigPipe won’t make a difference. But for sites with auth traffic, personalized and uncacheable blocks on your Drupal site are streamed automatically by BigPipe, no code changes necessary:

(That’s with 2 slow blocks that take 3 s to render. Only one is cacheable. Hence the page load takes ~6 s with cold caches, ~3 s with warm caches.)

Today, Acquia announced a partnership with BigCommerce, a leading cloud commerce platform. BigCommerce recently launched a headless commerce solution called BigCommerce Commerce-as-a-Service to complement its "traditional" commerce solutions. Acquia's partnership with BigCommerce will center around this Commerce-as-a-Service solution to enable customers to take advantage of headless commerce architectures, while leveraging Drupal and Acquia to power content-rich shopping experiences.

With BigCommerce and Acquia, brands can use a commerce-as-a-service approach to quickly build an online store and oversee product management and transactional data. The front-end of the commerce experience will be powered by Drupal, built and managed using the Acquia Platform, and personalized with Acquia Lift.

This month, Acquia has been focused on expanding our partnerships with headless commerce vendors. This announcement comes on the heels of our partnership with Elastic Path. Our partnership with BigCommerce not only reinforces our belief in headless commerce, but also our commitment to a best-of-breed commerce strategy that puts the needs of our customers first.