Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

October 22, 2019

Hello Readers! The first day of the conference is already over, here is my wrap-up! The event started around 10:30, plenty of time to meet friends around a first coffee!

The conference was kicked off by Axelle Apvrille who is a security researcher focusing on mobile devices and apps. The title was “Smartphone apps: let’s talk about privacy“.

Axelle on Stage

Privacy and smartphones… a strange combination, right? Axelle's idea was to give an overview of what leaks from our beloved devices, how much, and how to spot privacy issues. The research was based on classic popular applications. They were analyzed from a privacy and security point of view with the tool Droidlysis, which disassembles the app and searches for patterns: attempts to run an executable, obfuscation, attempts to retrieve the phone number, etc. Many apps retrieve IMEI numbers or the GPS location (e.g. a cooking app!?). Guess what: many of them fetch data that is not relevant for normal operations. The package size can also be an indicator: more than 500 lines of Android manifest for some apps! Axelle asked not to disclose which ones. She also checked what is leaked from a network point of view: mitmproxy or, better, Frida to hook calls to HTTP(S) requests. One example: Firebase leaking data in a medical app for diabetic users. Good tips for developers: do not integrate dozens of third-party SDKs, and provide ways to disable tracking, logging and analytics.

Then, Desiree Sacher presented “Fingerpointing false positives: how to better integrate continuous improvement into security monitoring“. Desiree is working as a SOC architect and hates “false positives”.

Desiree on Stage

A SOC needs to rely on accurate company data to work efficiently. Her goal: build intelligent processes, efficient workflows and detection capabilities. How to prevent false positives? From a technical point of view (e.g. change the log management configuration or collection) or through admin/user action (update the process). She reviewed different situations that you could face within your SOC and how to address them. For example, for key business process artefacts, if you get bad IOCs or wrong rule patterns, do not hesitate to challenge your provider. Also, test alerts! They should be excluded from reporting. Desiree published an interesting paper on optimizing your SOC.

Harlo presented “Tiplines today“. She works for the Freedom of the Press Foundation, a non-profit organization which does, amongst other things, software development, but also advocacy (the US Press Freedom Tracker) and education (digital security training). The talk covered multiple techniques and tools to safely communicate.

Messaging Apps Privacy Behaviour

After the lunch break, the stage was for Chris Kubecka, who presented “The road to hell is paved with bad passwords“. It was the story of an incident that she handled. It started (as in many cases) with an email compromise, followed by some email MitM, the installation of a rootkit and some extortion. The initial compromise (the mailbox) was due to a very weak password (“123456”)! Chris reviewed all the aspects of the incident. In the end, no extortion was paid and the suspect was reassigned and neutralized. Lessons learned: geopolitics, real human consequences, never trust gifts, prepare and… don’t panic!

Eyal Itkin, from Check Point, came on stage with a talk called “Say cheese – how I ransomwared your DSLR camera“. Like every year, Check Point comes back with a new idea: what could be broken? 🙂 Conferences are good for this, as are friends who use the targets on a daily basis. Why cameras? Photography is a common hobby with many potential victims. People buy expensive cams and protect them (handled with care). They also contain memories from special events (birthdays, grandma’s pictures, …) with a huge emotional impact on victims. Once you have compromised the device, what to do? Brick it or set up an espionage tool? Why not ransomware? Some people have reversed the firmware and added features (see the Magic Lantern project, mentioned below). An open source project is great in this case; the documentation is key! The attack vector was PTP (Picture Transfer Protocol), not only over USB but also over TCP/IP. There was already a talk about PTP presented at HITB in 2013, but more from a spying-device angle. The protocol has no authentication nor encryption: just ask the camera to take a picture and transfer it. First step: analyze the firmware. It can be grabbed from the vendor website, but it was AES encrypted and Google did not know the key. Can we dump the firmware from the camera? ROM dumper! Magic Lantern had the key; let’s dump it and load it into IDA. The OS is DryOS, a real-time OS proprietary to Canon. Let’s break PTP. Each command has a code (open, close, create, etc). He found 3 RCEs, 2 of which were related to Bluetooth. CVE-2019-5998 was the most interesting: a classic buffer overflow, but not exploitable over WiFi, so the hunt restarted, looking for more exploits. The last step was to write the ransomware; why reinvent the wheel? Use the built-in AES functions. The idea was to deliver the ransomware via a fake update. The presentation ended with a recorded demo of a camera being infected:

Ransomwared Camera Demo

Eve Matringe was the next speaker, with a talk titled “The regulation 2019/796 concerning restrictive measures against cyber-attacks threatening the Union or its member states”, or… is Europe now NSA/Russia cyberbullet-proof? This talk was not technical at all and very oriented towards regulation (as the title suggested), not easy to follow when you’re not involved with such texts on a day-to-day basis. But for me it was a good discovery; I was not aware of its existence.

Matthias Deeg & Gerhard Klostermeier presented “New tales of wireless input devices“, research based on the HackRF One and the CrazyRadio PA. It’s not a new topic; many talks have already covered this technology. They started with a quick recap of what has been said about keyboard and mouse research (starting from 2016!). Interesting, but nothing brand new; guess what? Such devices have plenty of weaknesses and should definitely not be used in restricted/sensitive environments. Personally, I liked the picture of the three presentation pointers bought from Logitech: once opened, the three PCBs were totally different. Was there a rogue one amongst them? The question was asked to Logitech and no information was disclosed.

Solal Jacob presented “Memory forensics analysis of Cisco IOS XR 32 bits routers with Amnesic-Sherpa“. It was a short talk (20 mins) in which Solal explained the techniques and tools used to perform memory analysis on Cisco routers. Such devices are more and more involved in security incidents, and it’s good to know that software exists to deal with them.

Finally, Ange Albertini closed the first day with his famous “Kill MD5 – Demystifying hash collisions“. See my wrap-up from Pass The Salt 2019, where Ange presented the same content.

To conclude, a good first day with nice content. I appreciated the large presence of women among today's speakers! Good to see a better balance in security conferences! Stay tuned for the second day tomorrow!

[The post 2019 Day #1 Wrap-Up has been first published on /dev/random]

If you’ve ever used Go to decode the JSON response returned by a PHP API, you’ll probably have run into this error:

json: cannot unmarshal array into Go struct field Obj.field of type map[string]string

The problem here being that PHP, rather than returning the empty object you expected ({}), returns an empty array ([]). Not completely unexpected: in PHP there’s no difference between maps/objects and arrays.

Sometimes you can fix the server:

return (object)$mything;

This ensures that an empty $mything becomes {}.

But that’s not always possible; you might have to work around it on the client. With Go, that’s not all that hard.

Start from the struct you'd naturally write:

type MyObj struct {
    Field map[string]string `json:"field"`
}

Then give the field its own named type, so you can attach a custom unmarshaler to it:

type MyField map[string]string

type MyObj struct {
    Field MyField `json:"field"`
}

Then implement the Unmarshaler interface:

func (t *MyField) UnmarshalJSON(in []byte) error {
    // Treat PHP's empty array as an empty/absent map.
    if bytes.Equal(in, []byte("[]")) {
        return nil
    }

    m := (*map[string]string)(t)
    return json.Unmarshal(in, m)
}
And that’s it! JSON deserialization will now gracefully ignore empty arrays returned by PHP.

Some things of note:

  • The method is defined on a pointer receiver (*MyField). This is needed to correctly update the underlying map.
  • I’m casting the t pointer to *map[string]string. This avoids infinite recursion when we later call json.Unmarshal().


The post strace: operation not permitted, ptrace_scope incorrect appeared first on

When using strace on a server, you might get this error message when you try to attach to a running process.

$ strace -f -p 13239
strace: attach: ptrace(PTRACE_SEIZE, 13239): Operation not permitted
strace: Could not attach to process. If your uid matches the uid of the target process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf: Operation not permitted

Alas, it doesn't work!

Here's why: your current user doesn't have permissions to trace a running process. Here are some workarounds.

Strace a new process instead

If you have the ability, you can strace a new program instead. This might not always be an option, but it works like this.

$ strace -f ./binary

You'd start ./binary again and strace that process.

Get root access

Alternatively, get root-level privileges to strace running processes. Makes sense, but might not always be an option in your environment.

Allow users to strace other processes with the same uid

You can also change a setting to allow a user to strace processes that have the same uid. In other words: allow a user to strace its own processes.

This requires a root-level change (i.e. a root-level admin needs to make it).

Have a look at the file /etc/sysctl.d/10-ptrace.conf

$ cat /etc/sysctl.d/10-ptrace.conf
# The PTRACE system is used for debugging.  With it, a single user process
# can attach to any other dumpable process owned by the same user.  In the
# case of malicious software, it is possible to use PTRACE to access
# credentials that exist in memory (re-using existing SSH connections,
# extracting GPG agent information, etc).
# A PTRACE scope of "0" is the more permissive mode.  A scope of "1" limits
# PTRACE only to direct child processes (e.g. "gdb name-of-program" and
# "strace -f name-of-program" work, but gdb's "attach" and "strace -fp $PID"
# do not).  The PTRACE scope is ignored when a user has CAP_SYS_PTRACE, so
# "sudo strace -fp $PID" will work as before.  For more details see:
# For applications launching crash handlers that need PTRACE, exceptions can
# be registered by the debugee by declaring in the segfault handler
# specifically which process will be using PTRACE on the debugee:
#   prctl(PR_SET_PTRACER, debugger_pid, 0, 0, 0);
# In general, PTRACE is not needed for the average running Ubuntu system.
# To that end, the default is to set the PTRACE scope to "1".  This value
# may not be appropriate for developers or servers with only admin accounts.
kernel.yama.ptrace_scope = 1

If you change kernel.yama.ptrace_scope to 0 and reboot the system, you'll now be allowed to strace processes of your own uid.
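A reboot isn't strictly required, either: as root, the setting can also be applied at runtime with `sysctl` (it reverts at boot unless the file above is changed too):

```shell
# Apply the relaxed ptrace scope immediately, no reboot needed
sysctl -w kernel.yama.ptrace_scope=0

# Verify the current value
cat /proc/sys/kernel/yama/ptrace_scope
```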


October 21, 2019

I’m in Luxembourg for a full week of infosec events. It started today with the MISP summit. It was already the fifth edition and, based on the number of attendees, the tool is becoming more and more popular. The event started with a recap of what has happened since the last edition. It was a pretty busy year with many improvements and add-ons, and more and more people contribute to the project.

MISP is not only a tool; other parallel projects try to improve the sharing of information. There is a project for it to become the European sharing standard. More evidence that the product is really used and alive: nine CVEs were disclosed in 2019. On the Github repository, 331 people have contributed to the MISP project:

MISP Contributors

I liked the way it was organized: The schedule was based on small talks (20 minutes max) with straight to the point content: improvements, integration or specific use-cases.

The first one was a review of MISP users’ feelings about the platform. Statistics were generated from a survey; not many people replied, but the results look interesting. The conclusion is that MISP is used by skilled people, but they consider the tool complex to use. The survey was given to people who attended a MISP training organized in Luxembourg; it could be a good idea to collect data from more people.

There were two presentations about performance improvements. People dived into the code to find how to optimize the sharing process or to optimize the comparison of IOC’s (mainly malware samples). Other presentations covered ways to improve some features like Sightings.

New releases of MISP introduce new features. The decaying of IOC’s was introduced one month ago and looks very interesting. There was a presentation from the team about this feature but there was also another one from a CERT that implemented its own model to expire IOC’s.

MISP is a nice tool but it must be interconnected to your existing toolbox. One of the presentations covered the integration of MISP with Maltego to build powerful investigation graphs.

Another talk covered the integration with WHIDS (Windows Host IDS), which relies on Sysmon. It correlates events, detects and reacts (dump a file, a process or the registry). Gene is the detection engine behind WHIDS; it can be seen as a YARA-like tool for Windows events. MISP support was recently added, with some challenges like performance (a huge number of IOCs).

EclecticIQ presented how their platform can be used with MISP. They created a MISP extension to perform enrichment.

TICK – Threat Intelligence Contextualization Knowledgebase by the KPN SRT. The KPN guys explained how they enrich SOC incidents with some context. I liked the use of Certificate Transparency Monitoring. The goal is to use MISP as a central repository to distribute their IOC’s to 3rd parties.

Another presentation that got my attention was about deploying MISP in the cloud. At first, this looks tricky but the way it was implemented is nice. The presentation was based on AWS but any cloud provider could be used. Fully automatic, with load-balancers, self-provisioning. Great project!

Finally, the day ended with some changes that are in the pipe for upcoming MISP releases. Some old code will be dropped, revamping of the web UI, migration to most recent versions of PHP, Python & more!

Thanks to an event like this one, you quickly realize that the MISP project became mature and that many organizations are using it on a daily basis (like I do).

[Update] Talks have been recorded. I presume that they will be online soon with presentations as well. Check the website.

[The post MISP Summit 0x05 Wrap-Up has been first published on /dev/random]

The post warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory appeared first on

On a freshly installed CentOS 7 machine, I got the following notice when I SSH'd into the server.

warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory

The fix is pretty straight-forward. On the server (not your client), edit the file /etc/environment and add the following lines.

(You'll need root privileges to do this)

$ cat /etc/environment
LANG=en_US.UTF-8
LC_ALL=en_US.UTF-8

Log out & back in and you should notice the warning message is gone.


October 18, 2019

I published the following diary on “Quick Malicious VBS Analysis“:

Let’s have a look at a VBS sample found yesterday. It started as usual with a phishing email that contained a link to a malicious ZIP archive. Delivering the first stage via a URL is a more and more common technique because it reduces the risk of having the first file blocked by classic security controls. The link was:


The downloaded file is saved as (SHA256: 9bf040f912fce08fd5b05dcff9ee31d05ade531eb932c767a5a532cc2643ea61) and has a VT score of 1/56… [Read more]

[The post [SANS ISC] Quick Malicious VBS Analysis has been first published on /dev/random]

October 16, 2019

The post Mac pecl install: configure: error: Please reinstall the pkg-config distribution appeared first on

I ran into this error when doing a pecl install imagick on a Mac.

$ pecl install imagick
checking for pkg-config... no
pkg-config not found
configure: error: Please reinstall the pkg-config distribution
ERROR: `/private/tmp/pear/temp/imagick/configure --with-php-config=/usr/local/opt/php/bin/php-config --with-imagick' failed

By default, the needed pkg-config binary isn't installed. You can install it via Homebrew.

$ brew install pkg-config
==> Downloading
==> Downloading from
######################################################################## 100.0%
==> Pouring pkg-config-0.29.2.catalina.bottle.1.tar.gz
🍺  /usr/local/Cellar/pkg-config/0.29.2: 11 files, 623KB

And you're all set!


The post Install PHP’s imagick extension on Mac with Brew appeared first on

I was setting up a new Mac and ran into this problem again, where a default PHP installation with brew is missing a few important extensions. In this case, I wanted to get the imagick extension loaded.

This guide assumes you have Homebrew installed and you've installed PHP with brew install php.

Install the ImageMagick dependency

First, install imagemagick itself. This is needed to get the source files you'll use later to compile the PHP extension with.

$ brew install pkg-config imagemagick

This will also install the needed pkg-config dependency.

Compile Imagick PHP extension with pecl

Next up, use pecl to get the PHP extension compiled.

$ pecl install imagick
install ok: channel://
Extension imagick enabled in php.ini

It will also auto-register itself in your php.ini and should now be available.

$ php -m | grep -i magic

Note: if you run php-fpm, make sure to restart your daemon to load the new extension. Use brew services restart php.


I published the following diary on “Security Monitoring: At Network or Host Level?“:

Today, to reach a decent security maturity, the keyword remains “visibility”. There is nothing more frustrating than being blind about what’s happening on a network or starting an investigation without any data (logs, events) to process. The question is: how to efficiently keep an eye on what’s happening on your network? There are three key locations to collect data… [Read more]

[The post [SANS ISC] Security Monitoring: At Network or Host Level? has been first published on /dev/random]

October 15, 2019

I said it before, and I say it again: get those national asses out of your EU heads and start a European army.

How else are you going to tackle Turkey, Syria and the US retreating from it all?

The EU is utterly irrelevant in Syria right now. Because it has no own power projection.

When I said “A European Army”, I meant aircraft carriers. I meant nuclear weapons (yes, indeed). I meant European fighter jets that are superior to the Chinese, American and Russian ones. I meant a European version of DARPA. I meant huge, huge Euro investments. I meant ECB (yes, our central bank) involvement in it all. To print money. Insane amounts of ECB-backed Euro money creation to fund this army and the technology behind it.

I meant political EU courage. No small things. Super big, huge and totally insane amounts of investment: a statement to the world that the EU is going to defend itself in the coming centuries, and that it’s going to project military power.

I doubt it will happen in my lifetime.

October 14, 2019

This BruCON edition (also called “0x0B”) is already over! This year, we welcomed more than 500 hackers from many countries to follow wonderful speakers and learn new stuff with practical workshops. Like the previous editions, I played with the network deployed for our attendees. Here is a short debriefing of what we did and found during the conference.

Basically, from an infrastructure point of view, nothing really changed. The same hardware was deployed (mainly wireless access for the attendees and a wired network for specific purposes). We welcomed the same number of attendees, and every year we see more and more people trying to avoid wireless networks; furthermore, all European travellers now benefit from free data roaming across Europe. Also, the trend is to use encrypted traffic (SSL or VPNs), which means less “visible” traffic for us (no, we don’t play MitM 😉).

First of all, we launched our wall-of-sheep like every year. It remains a very nice security awareness tool. New attendees always ask me how it works or “what’s behind the magic”. Of course, some dumb people keep trying to flood it with p0rn pictures. We used the same technology to reduce the number of offending pictures, but it’s not bullet-proof. [Personal opinion: if you spend your time flooding the WoS with p0rn, you completely missed the point of a security conference… that says it all!]

If we don’t play MitM, it does not mean that we don’t inspect most of the flows passing through our network. This is clearly stated on the website and in the brochure. The first change that we implemented this year was an intercepting proxy to collect all URLs visited by our beloved attendees. Last year, we detected that many corporate laptops were trying to find WPAD files, but we needed more data. Here are some stats about the proxy:

  • 274K HTTP requests (GET, POST or CONNECT)
  • 39K unique URLs

What about the top-visited URLs? No Facebook etc. this year, but many, many (read: “way too many”) automatic update URLs! Many applications or systems still check for and download their updates via plain HTTP! When you see how easy it is to manipulate this kind of traffic, it scares me. Best practice: do NOT enable automatic updates while connected to wild environments.

What about DNS traffic? Here again, we slightly changed the configuration. Many people don’t trust the DNS servers provided by DHCP and use their own or a public one (Google’s resolver to the rescue!). Blocking all DNS traffic to “force” people to use our own was nice but too intrusive. This year, we allowed all traffic but intercepted everything on UDP/TCP port 53 and forwarded it to our firewall. This way, people did not see that their DNS traffic was redirected to our own resolver. It was also an easy way to block some malicious or unwanted websites.
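Such a transparent redirect can be sketched with NAT rules on the gateway; a minimal example, assuming iptables, an attendee-facing interface `eth1` and a local resolver at `` (interface name and address are illustrative):

```shell
# Silently rewrite all attendee DNS traffic (UDP and TCP port 53)
# so it lands on our own resolver instead of the one they configured.
iptables -t nat -A PREROUTING -i eth1 -p udp --dport 53 \
    -j DNAT --to-destination
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 53 \
    -j DNAT --to-destination
```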

Here is the top-10 of resolved domains (without noisy and local domains like the ones used by the CTF):


About DNS, we saw some DNS tunnelling and, for the first time, some DoH (DNS over HTTPS) traffic! OK, only two different clients, but it’s a sign!

DoH Traffic

Also, we collected interesting files transferred via HTTP:


About communications & chat, we detected IRC, Google Talk (both in clear text!), Skype & Teams.

We detected some juicy traffic which looked very suspicious but… it was just a guy testing a potential 0-day 🙂

Regarding logging capabilities, this year we collected all data in real time and indexed it into a Splunk instance. This is much more convenient when you urgently need to investigate an incident. We had to track a “bad guy” and he was discovered within a few minutes based on the MD5 of a downloaded picture! (I mean, we discovered his MAC address and the device name.)

To conclude, some extra numbers:

  • 3 Internet access lines (load-balanced)
  • 171GB downloaded
  • 847 devices connected to the network
    • 504 badges connected
  • 28,515 pictures collected by the WoS
  • 282,244 collected files
  • 898,638 DNS requests
  • 2,757,669 collected flows
  • 274K HTTP requests
  • 101,457 IDS events
  • 52 blocked p0rn sites
  • 4 network gurus 🙂

We are ready for the next editions and, based on what we learned, we already have nice ideas…

[The post BruCON 0x0B Network Post-Mortem Review has been first published on /dev/random]

The post Laravel’s Tinker quits immediately on PHP 7.3 & Mac appeared first on

I had a weird issue with Laravel's Tinker artisan command. Every time I would load it up and type a command, it would immediately quit.

$ php artisan tinker
Psy Shell v0.9.9 (PHP 7.3.8 — cli) by Justin Hileman
>>> Str::plural('url', 1);


It would just jump straight back to my terminal prompt.

The fix appears to be to add a custom config that instructs PsySH -- on which Tinker is built -- to not use PHP's pcntl extension for process control.

$ cat ~/.config/psysh/config.php
<?php
return [
  'usePcntl' => false,
];
That one did the trick for me.

$ php artisan tinker
Psy Shell v0.9.9 (PHP 7.3.8 — cli) by Justin Hileman
>>> Str::plural('url', 1);
=> "url"
>>> Str::plural('url', 2);
=> "urls"

Hope this saves you some headaches!


October 03, 2019

The post A github CI workflow tailored to modern PHP applications (Laravel, Symfony, …) appeared first on

Last year we wrote a blog post about the setup we use for Oh Dear! with Gitlab, and how we use their pipelines for running our CI tests. Since then, we've moved back to Github, since they introduced their free private repositories.

In this post I'll describe how we re-configured our CI environment using Github Actions.

If you have a Laravel application or package, you should find copy/paste-able examples here to get it up and running for yourself. This will only require small adaptations for Symfony, Zend, ...

It's all PHP, after all.

A custom Docker container with PHP 7.3 and extensions

I built a custom Docker container with PHP 7.3 and my necessary extensions. You're free to use it yourself, too: mattiasgeniar/php73.

I'm still pretty new to Docker, so if you spot any obvious errors, I'd love to improve the container. Here's the Dockerfile as is. It's up on Github too, so you're free to fork/modify as you see fit to create your own containers.

FROM php:7.3-cli

LABEL maintainer="Mattias Geniar <>"

# Install package dependencies
RUN apt update && apt install -y libmagickwand-dev git libzip-dev unzip

# Enable default PHP extensions
RUN docker-php-ext-install mysqli pdo_mysql pcntl bcmath zip soap intl gd exif

# Add imagick from pecl
RUN pecl install imagick && echo 'extension=imagick.so' >> /usr/local/etc/php/php.ini

# Install nodejs & yarn
RUN curl -sL | bash - \
    && DEBIAN_FRONTEND=noninteractive apt-get install nodejs -yqq \
    && npm i -g npm \
    && curl -o- -L | bash \
    && npm cache clean --force

# Install composer
ENV PATH=./vendor/bin:/composer/vendor/bin:/root/.yarn/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN curl -sS | php -- --install-dir=/usr/local/bin --filename=composer

WORKDIR /var/www

In short: this takes the official base image of PHP 7.3 and adds some custom extensions, the node & yarn binaries, together with composer.

It's a rather large container in total, because of the dependency on ImageMagick. By needing the source files, it pulls in more than 100MB of dependencies. Oh well ...

Using this container in your Github Action

You can now use Github Actions to automatically run a set of commands every time you push code. You do so by creating a file called .github/workflows/ci.yml. In that file, you determine what actions need to run in a container.

Here's our current CI config for Oh Dear.

on: push
name: Run phpunit testsuite
jobs:
  phpunit:
    runs-on: ubuntu-latest
    container:
      image: mattiasgeniar/php73

    steps:
    - uses: actions/checkout@v1
      with:
        fetch-depth: 1

    - name: Install composer dependencies
      run: |
        composer install --prefer-dist --no-scripts -q -o;
    - name: Prepare Laravel Application
      run: |
        cp .env.example .env
        php artisan key:generate
    - name: Compile assets
      run: |
        yarn install --pure-lockfile
        yarn run production --progress false
    - name: Set custom php.ini settings
      run: echo 'short_open_tag=off' >> /usr/local/etc/php/php.ini
    - name: Run Testsuite
      run: vendor/bin/phpunit tests/

You can find a simplified example in my Docker example repo. The one above also takes care of the javascript & CSS compilation using yarn/webpack. We need this in our unit tests, but you might be able to remove those run lines.

Setting this up in your own repository

Here's what was needed to get this up-and-running.

  1. Sign up for the "beta" program of Github Actions. You'll get immediate access, but it's opt-in for now.
  2. Create a file called .github/workflows/ci.yml in your repository and take the content from the example above
  3. git push upstream to Github; the action should kick in after a few seconds

You can find an example Action here: mattiasgeniar/docker-examples/actions.

After that, you should see the Actions tab pick up your jobs.


I published the following diary on ““Lost_Files” Ransomware“:

Is good old malware still used by attackers today? Probably not the original code, but malware developers are… developers! They don’t reinvent the wheel and re-use code published here and there. I spotted a ransomware sample which looked like an old one… [Read more]

[The post [SANS ISC] “Lost_Files” Ransomware has been first published on /dev/random]

October 02, 2019

Phil has taken his last dive. Like any good freediver, he has taken his last breath.

It would be so easy to talk about your ever-present smile. Everyone does. So I'll use this blog to tell a rare story: the only time I ever saw you not smile.

You were driving the van. We all told you that, with a rickety trailer in tow, 130 km/h was a bit fast. You answered "Boaf", laughing. We told you there was a police van behind us. You answered "Boaf", laughing. The police van signalled us to pull over. We burst out laughing. Except you. You climbed out sheepishly to get your telling-off. You had the look of a child being scolded; you were not laughing.

Inside the van, though, we were all in stitches!

When you took the wheel again, you still said "Boaf", and your smile came back.

The black water of our quarries will not taste the same without you churning it up, crawling like a seal because "it unblocks your Eustachian tubes". The posts on this blog will be a little lonelier now that you will no longer read them, now that you will no longer send me your comments full of tenderness and terrible puns.

Our only hope is that, over there, you manage to rope them into one of those shady schemes only you had the secret of, so that they send you back here, with us. Like that time with the boat lent by the university, which you dismantled to try to get it running without the ignition key, before we understood that if you didn't have the key, it was because the university was not aware of its generosity towards you.

I preferred you running late to you leaving early. Everything risks running too smoothly without you.

Damn it, my Philou, we miss you already…

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE licence.

October 01, 2019

Commander Data with a PADD

Here's a free tip for you weirdos like me who still use private torrent trackers. If you want to earn consistently good upload credit, you only need to seed one show, two at most. Namely, Star Trek: The Next Generation, and Deep Space Nine. Preferably the later seasons on Blu-ray. I'm not sure why this is, but I have a few theories. And what the internet definitely needs is more commentary on Star Trek.

The most obvious reason is that older shows come from a time when there just wasn't as much saturation in the market. Each show could capture a much larger chunk of the audience, both nationally and internationally in syndication. This is reflected in their long-term staying power by way of nostalgia. If you investigate the top seeded torrents on a decent tracker, for content from the late 80s and early 90s, you will find that the top spots are usually captured by season packs of The Simpsons, Seinfeld, and once it appeared in 1994, Friends, which is no surprise at all if you lived through those years.

In between these behemoth shows, you will also spot the other consistent trend, namely the Treks. It shouldn't be a surprise that torrent sites are frequented by nerds with nerdy interests, but what is noticeable is that in the years where they have to compete, Treks rank consistently at the top, higher than even cult classic The X-Files. This was a show which was far less embarrassing to be seen watching at the time, scheduled explicitly for adults, with its two iconic leads, its clever appropriation of deep state intrigue, and an at times fearless exploration of the grotesque and macabre.

If you look up the US viewership numbers, X-Files had about twice as many on average as TNG, ~20 million vs ~10 million. It was clearly the more popular show. Compared to today though, TNG consistently exceeded even the premiere of Star Trek: Discovery 30 years later, at 9 million viewers, despite the fact that the US population has grown by over 30% since, and that Trek is now mainstream.

You could argue that The X-Files sabotaged itself for re-watching, by blue-balling its audience, offering tantalizing Lost-style glimpses of a larger mythos that ultimately didn't go anywhere. It's also hard to ignore that the cultural interest in UFOs and aliens vanished almost entirely once everyone had a digital camera in their pocket. However, most of the X-Files didn't even involve aliens, and instead featured an unrelated mystery-of-the-week. The overall quality is definitely more consistent than TNG too.

Good Morning America visits Star Trek The Next Generation

It's easy to forget just how out of touch mainstream culture was with popular science fiction at the time, now that nerds are cool. Luckily there is a wonderful time capsule to remind us, in the form of an episode of Good Morning America from 1992. In this show, cringe-inducing for any Trek fan, the set of the USS Enterprise plays host to a cast of TV anchors out of their depth. They marvel at the decor, the wacky aliens and costumes, the futuristic props and consoles, the trekkie lingo, and so on. Throughout, it's pretty obvious that none of them has ever actually watched the show. This is also why Patrick Stewart doesn't appear: he refused, insulted by the disrespect they had for the material.

By then the show was in the middle of its fifth season, and had featured episodes with numerous serious, well-executed topics. The Measure of a Man debated the human rights of Data, the sentient android, in a court room setting. Who Watches the Watchers explored the policies of non-interference with primitive cultures, the immorality of false deities, and the power of the scientific method. Sins of the Father challenged the notion of inherited sin in an honor-based culture, as well as the fish-out-of-water aspects of an exile in his own culture. The High Ground was about terrorism and how people are driven to it by desperation, not broadcast in Ireland at the time, ostensibly due to a throwaway line about Irish unification. Darmok was a prescient linguistic take on the memes we now take for granted. There was also the legendary season-cliffhanger The Best of Both Worlds, in which Picard was abducted by his mortal enemy, and forced to do their bidding in conquering humanity.

None of this is even remotely featured in the few clips shown, and instead they show meaningless treknobabble, some phasers and torpedoes, Deanna Troi sensing nothing (as usual), and a dry explanation of asphyxiation in a vacuum. The interviews with the cast members mostly revolve around trivia, from how long it takes to do their makeup to whether they get recognized on the street. Gates McFadden, to her credit, manages to salvage every bit she's in. At one point, an anchor does say the show addresses more serious topics, but then she fails to mention any, stalling the conversation with some awkward glances. Granted, this was an early morning show mainly designed to fill air time, mixed with a presidential election (health care costs are rising!) and some disastrous weather, but even then this is painfully embarrassing.

Trouble with Tribbles

When she asks the cast whether they think they'll be as famous as the original Trek characters, and whether this will define their careers, Jonathan Frakes says he's just happy to have a job. Marina Sirtis says she thinks it's unlikely she'll still be wearing the uniform 25 years later. Well, technically it ended up being only 10 years, but she's still doing Trek conventions today. Clearly they didn't think what they were doing was remotely as iconic as the original, and the exact opposite is true.

The original Trek is too awkward, too out of touch to watch today, so instead it has been pilfered and repurposed, turning its characters into hot, juvenile blowhards in JJ-Trek. But TNG endures on its own terms, and what's more, its dorky PADDs and omnipresent touch screens turned out to be prophetic. It's fascinating to rewatch today, and realize that eating lunch while holding a tablet is an entirely normal thing now. This is only slightly marred by the fact that they still treat them like pieces of paper, handing over the device instead of just transferring a file. I guess the DRM is really bad in the future.

So why on earth do I think TNG ruined a generation? If it wasn't obvious, I love this show. I still have the collectible card game in a box right behind me.

It has to do with the way they staged their moral dilemmas, because there are two kinds of Trek episodes.

Commander Data on trial for his rights

In the first kind, Type 1, a moral dilemma happens to a character. Data's rights are threatened. Worf's honor is at risk. Picard's dignity has been stripped. Or as in The Drumhead, one of the best episodes, the Enterprise's own crew participates in an investigation that turns into a bona-fide witch hunt. These are the good ones, because they start with real characters and put them in credible jeopardy. Secondary characters are fleshed out as well, with conflicting views and interests, and serve to challenge the assumptions the show and its main characters embody.

In the second kind, Type 2, there is a planet or a species, with a unified culture and philosophy. This is usually centered around one particular extreme. The Ferengi are greedy chauvinists. The Sheliak are xenophobic supremacists. The J'naii are monogender puritans. The species is a trope, a stand-in for an Other who is Different, and the full implications of this conceit are never actually fleshed out. In pretty much every case, the crew is tasked with outsmarting them, and to demonstrate that their values, morals or tactics are better, even if it takes a while to figure out how and why. The drama is a result of fighting this good fight, and results in either a moral victory or defeat. But the matter of which side was right is never in question. The underlying assumption is pretty hard to miss: people who look a certain way act a certain way, and judging the book by its cover is correct. These episodes are actively and shamelessly racist.

Of course it's not a completely rigid binary. The goofy Ferengi of TNG ended up being fleshed out more in DS9, though they never managed to fully escape from the shadow of caricature. The DS9 Cardassians remained unrepentant imperialists, with only the odd exception presented as one of the few "good ones." On the other hand, some brief planets-of-the-week ended up hosting a variety of characters and factions, and let them speak fully for themselves. But the dichotomy still exists, and it's bizarre that the two kinds of stories live together under the same roof.

It's particularly noticeable given the emphasis the show placed on empathy, in the form of Deanna Troi. She was the left hand of the captain, a character whose main function was to sense and regulate people's feelings, and the perfect embodiment of the New Age beliefs popular at the time.

The Sheliak, non-humanoid aliens
N'Grath the insectoid Crime Boss

Empathy is actually a pretty difficult aspect of televised sci-fi. A classic complaint is that almost all the aliens are humans with bumpy foreheads, and this is true, but also entirely necessary. Whenever shows have featured distinctly non-humanoid characters, it weirds people out, because we can't read them at all. Like the aforementioned Sheliak, whose inhuman mime telegraphs no useful information, an aspect which was actually pertinent to the story. For the most part, Trek has wisely avoided this trap. Other shows tried and failed, like Babylon 5, which featured an insectoid crime boss whose animatronic chittering and spasms completely failed to intimidate. It was quietly shelved in subsequent seasons.

As a result, any show that is all about meeting strange new aliens on strange new worlds actually puts an incredibly high premium on human emotions and human behaviors, and can only explore them in two ways... first, by making the aliens just like humans, with the same moral failings, ideological disagreements and conflicting interests, which makes them not aliens... or two, by making the aliens inhuman in some way, and hence somewhat incomprehensible, which automatically otherizes them and makes their way of life undesirable. Unless the writers make an effort, they're not actually going to challenge the audience's preconceptions at all. Less ambitious sci-fi shows like Stargate SG-1 run almost entirely on this formula, contrasting the plucky always-right hero team with the backwards primitives they must liberate and the alien villains who hold them hostage.

When this becomes the lens that you use to relate to Others, you're not actually relating at all.

As tentative evidence that this is a real thing and not just some cultivation theory in disguise, I offer the closest thing we have to a TNG reboot, The Orville. Created by bona fide uber-trekkie Seth MacFarlane, you'd expect it to embody all the good parts of TNG. Yet the show had a few whopper Type 2 episodes in its first season, and no real Type 1s. This should be supremely remarkable, and yet it has mostly passed by unnoticed, as the people with opinions appear to have confused the trappings of the show it copied for its substance.

Orville - Krill

In Krill, the captain and his pilot infiltrate an enemy vessel in disguise. They blunder through, bluffing their way through conversations only the dumbest adversary could not suss out. They do this without taking any of the expected precautions, and they end up winning by doing the equivalent of setting the aliens' thermostat to 100ºC and boiling them alive. Yes, this show actually wants you to believe that aliens who are extremely light-sensitive would put lethal light bulbs on their space ships and just never turn the dimmer up past 5%. They're religious fundamentalists though, and we all know how stupid they are, right?

Orville - About a Girl

In About a Girl, the show seems to imitate both TNG's occasional court room proceedings and gender morality plays. It features a trial over whether a newborn girl should be allowed to remain female, in a species of male chauvinists who make the Ferengi seem downright tolerant. The amazing part is that the supposedly devastating argument, made by the female second-in-command who is counsel for the defendant, is based on easily dismissed false equivalences, assumed superiority and non-sequiturs. She ought to have them howling with laughter in minutes. Instead, this society of aliens who don't take women seriously takes this woman's poor excuse for reason and logic entirely seriously. Because sexism is bad, you guys.

The second season managed to bring a little bit more substance and new material, but couldn't drop the habit completely, in the form of All the World Is Birthday Cake. It's about a planet of astrologers, who rigidly organize their society according to horoscopes by birth sign, including a particular class of untouchables. The resolution comes from the main characters outsmarting them with a non-credible deception, which a first year physics student could poke numerous holes in. To add insult to injury, the writers apparently don't know the difference between a satellite and a satellite dish. Fuck yeah science! Astrology is dumb!

It feels more like Team Earth: Galaxy Police than Star Trek: The Funny Generation, only the satire isn't intentional, and isn't mocking who they think it is. Nevertheless its first season won a "Best Science Fiction Television Series" award.

If that's what TNG's legacy looks like in 2019, then I'm afraid some people have missed the point entirely.

There's a particular phrase you hear a lot these days. "We're on the right side of history." I'm not saying it's all TNG's fault, but it's hard to imagine how it didn't contribute, given how pervasive Trek references and memes are, even today. There is a similar dichotomy in handling moral dilemmas... whether we look for nuance to understand one of our own who is struggling, or whether we consider someone an outsider, representative of a foreign monoculture, and which needs to be outsmarted and defeated, ideally using their weaknesses against them. Both approaches live under the same roof of compassion, empathy, justice and progress, despite being polar opposites.

The real lesson was that principles are important, vital even. But they must be moderated, by checking ourselves before we chastise others.

September 30, 2019

We are pleased to announce the developer rooms that will be organised at FOSDEM 2020. Developer rooms are assigned to self-organising groups to work together on open source projects, to discuss topics relevant to a broader subset of the community, etc. The individual developer room organisers will issue their calls for participation in the next few days.

Saturday 1 February 2020:
- Ada (Call for Participation: TBA)
- Backup and Recovery (Call for Participation: TBA)
- Coding for Language Communities (CfP announced, deadline 2019-12-01)
- Collaborative Information and Content Management Applications (CfP announced, deadline 2019-12-09)
- Containers (CfP announced, deadline 2019-11-29)
- Dependency Management (CfP announced, deadline 2019-12-04)
- DNS (CfP announced, deadline 2019-12-01)

September 27, 2019

Readers regularly ask me why I don't cover ecology on this blog. After all, the planet is in danger, we must act. So why not write about it?

The answer is simple: I do talk about ecology. Often. Almost all the time. I campaign to save the planet; I tell stories to raise my readers' awareness.

But, unlike the millenarian media frenzy that has taken hold of humanity, I am not trying to frighten anyone. I want things to actually change, by treating the problem at its root.

By screaming, by protesting, by stoning the poor soul who still owns incandescent bulbs, we are only hastening our downfall. We are destroying our children, turning them into neurotics and zealots. We hold up as an example a young woman who skips school to cross the Atlantic on a billionaire's sailboat, serving as a prop or a bogeyman for politicians at election time. We make our children feel guilty, telling them to act, to agitate in the media, discouraging them from taking the time to learn and to think. Recently, a seven-year-old girl, to whom I was vainly trying to explain that cutting down a tree is not a crime and is sometimes beneficial and necessary, replied: "Anyway, I'd rather die than pollute."

This is extremely serious. We are imposing our guilt, our sense of powerlessness, on our children. We are turning them into ayatollahs of a blind, irrational, cruel, inhuman ideology. A religion.

The dangerous violence of the vegetable garden

Collapsologist discourse is becoming the norm. Everyone wants to learn to grow their own vegetables, to hunt to survive.

But nobody stops to think that there is a very simple reason why we industrialized in the first place. Man did not build factories for fun, but because they are more efficient, more productive. Because they allowed the majority of humanity to stop dying of hunger and misery.

The idealized pseudo-Middle Ages so many aspire to mean, above all, famine after a bad harvest or an accident, death by disease, lifelong disability from a simple fracture.

We forget that hunter-gatherers, and even medieval peasants, had several hectares per person, which let them barely subsist. To return to that state, we would first have to get rid of the vast majority of humanity. And that would happen quite naturally, through war and massacres to conquer fertile land.

If you believe in collapse, it is not permaculture you should learn but how to handle weapons. It is not canned food you should stockpile but ammunition. In the Middle Ages, villages were regularly raided, or placed under the protection of a lord who levied enormous taxes. In the collapsologists' world, today's market gardener is therefore tomorrow's serf. Campaigning for a return to the individual vegetable garden is literally campaigning for war, violence, and the struggle over scarce resources.

Global warming is an established, indisputable fact. It will probably be worse than predicted. Political inaction is indeed criminal. But by all becoming survivalists, we are creating a self-fulfilling prophecy. We are fixating on the very obstacle we should be steering around.

Treat the disease at its root, not its symptoms

Yet, despite the now inevitable changes to our environment, the collapse of society is not inexorable. On the contrary: we are society, and tomorrow's society will be what we want it to be. We can accept the situation as a fact and use our intelligence to plan ahead, to build the infrastructure that will make warming less tragic by reducing the number of deaths.

That infrastructure is as much technical (water, electricity, internet) as political and moral. By creating decentralized governance tools, we can increase society's resilience and establish the collaborative principles that will let us live rather than merely survive. By campaigning for the free movement of people, we can nip in the bud the conflicts along arbitrary borders. By fighting segregation, we can keep it from turning into violent communitarianism when resources grow scarce.

Above all, we must learn to treat the causes, to understand, instead of burying our heads in the sand and blaming arbitrarily vague concepts such as "the politicians", "industry", "the rich" or "capitalism".

Why do we consume so many resources? Because advertising pushes us to. Why are we pushed to? To keep the economy running and create jobs. Why do we want to create jobs? To consume what advertising makes us think we want. We must break out of this vicious circle.

Chronicle of a wished-for collapse

We must stop creating jobs. We must work as little as possible; we are already too productive. We must discredit advertising and marketing. We must learn to be satisfied, to have enough. But that is impossible in a world where the best-selling product is now the malleability of our brains. Through Facebook and Google, we are constantly scrutinized and shaped into emotionally reactive good consumers. We campaign against global warming on… Facebook pages! Which then expose us to ads for Kickstarter projects of disposable folding bikes made in Turkmenistan, and to over-"liked" posts that reinforce our beliefs, killing all perspective and critical thinking.

If our priority really were our children's future health, the simplest and most effective measure would be to immediately ban cigarettes in public spaces, including e-cigarettes, which are proving extremely harmful and encourage tobacco use among the young.

Cigarettes are a hyper-polluting market whose very purpose is to pollute the lungs of its customers and those around them, while also polluting the water table and our soil (with the butts). Yet how many climate marchers lit one up, often right next to children?

How can we imagine for a second that we are credible when we demand a rather abstract respect for the planet from an abstract entity called "the politicians", while we are concretely incapable of respecting our own bodies or those of our own children?

After cigarettes, we should go after the car. With a very simple solution: raise the price of petrol. Turn petrol into one gigantic car tax. The measured facts prove it: consumption depends only on price. A car that consumes less will be driven more if the price does not go up. But the gilets jaunes have shown us how fiercely we are capable of fighting for the right to pollute more, consume more, work more.

What we are telling our children is that they are guilty, that they must save the world we are knowingly destroying. We scare them with glyphosate, which might possibly be toxic, though this is not certain at low doses, while serving them an organic red-meat steak, which is a certain carcinogen.

We feed their fears with aluminium in vaccines, with wifi waves, with nuclear power, when a single day stuck in motorway traffic and an evening among smokers are probably more harmful to the brain than a whole life with a wifi antenna strapped to your head. All the side effects of vaccines combined could never do as much harm as a single measles epidemic.

The scientists working on nuclear power, who have concrete solutions to the drawbacks of the technology (the risk of explosion, the waste), are tearing their hair out because they can no longer get funding, with the result that old, dangerous plants are reactivated or, worse, coal plants that silently kill thousands of people every year by polluting our atmosphere.

Our hysterical fears are creating exactly what we fear. Collapsologist ecology is almost deliberately bringing about the very catastrophe it predicts. In the name of love for the Earth, the grand inquisitors torture us so that our psychological suffering may redeem the sins of the species.

The instagrammable apocalypse

Seeing millions of people march for the climate frightens me as much as seeing others warm themselves around braziers in their high-vis vests. I feel like I am watching a mindless herd, bleating in search of a leader. A herd that will only be satisfied by absurd, media-friendly, spectacular measures. A herd that has too much bread and demands greater circuses (because, yes, obesity now kills more than malnutrition).

Reality, unfortunately, is never spectacular. Thinking is never satisfying. That, as psychologists well know, is why conspiracy theories are so successful. We want the spectacular, the overwhelming. And all of it without changing our habits. We are happy to buy more expensive light bulbs and to march, but if climate change becomes too inconvenient, we will simply declare it a hoax by leftist scientists. Or blame the politicians.

Breaking out of the jobs-above-all mindset, rejecting consumerism, stepping back from our relationship with information, rethinking our modes of governance. Planning global drinking water and electricity infrastructure. Decentralizing the Internet. All of that, unfortunately, is not likable enough. Mélanie Laurent could not make a film out of it. Greta Thunberg could not use it to justify an Atlantic crossing.

It is so much easier to cry total destruction, to tremble with fear, to rejoice because a whole neighbourhood managed to grow five tomatoes, to pay for the honour of weeding the soil of Pierre Rabhi's field, or to take an Instagram selfie with a "meditation star who prays for the union of consciousnesses" (I am not making this up). "Dying together is less frightening," as Arno so rightly sings.

It is easier, and it gives us a clear conscience at minimal effort. Unfortunately, the price is our children's mental health. We are psychologically destroying a generation because we refuse to think further, to accept our mistakes, to evolve, to look beyond the grams of CO2 emitted by our company car.

A hereditary sin

My generation was barely born when a few Australians asked our parents how they could sleep while their beds were burning. Thirty-two years later, we have to admit that all we have done is turn scientific evidence into collective hysteria. That we have merely shifted the guilt, tenfold, onto our children's generation. With an almost zero positive effect.

The house is on fire, but instead of teaching them how to use a fire extinguisher or jump out of the window, we teach them to run in circles screaming as loudly as possible while taking selfies. We have them turn off the taps, and we teach them to lay out magic crystals whose "vibrational energy" is supposed to put out the fire.

Our only hope is that they realize it before they are permanently traumatized. That they send us packing faster than we did our own parents. That they throw back in our faces our climate marches, our organic supermarkets with SUV parking, our Facebook pages for managing shared vegetable gardens, and our green parties whose priorities are creating jobs and dismantling nuclear power. "What did you do against global warming, grandpa?" "We marched in the street so that others would do something."

Instead of putting pressure on the next generations and blaming the previous ones, couldn't we take up our intergenerational responsibilities and get to work right now? Together?

Talking about ecology? Perhaps it means, above all, letting go of the immediate pleasure of easy indignation and talking about our own consumption, about our responsibility for choosing what we feed our brains.

Photo by Siyan Ren on Unsplash

I'm @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE license.

September 26, 2019

Uwe finished soldering the remaining boards, and sent them to me before his vacation started. I added one connector that was still MIA earlier, and also ordered more lcds and tfp401 modules, and then tested the lot.

We now have the kit for 4 further test systems. 2 for the fosdem office, 2 for the openfest guys. Here's a picture of them all:

lime2 test boards ready to be shipped

Up top, two complete sets, with TFP401 module and LCD; those are for Bulgaria. The bottom boards and the TFP401 module are for the FOSDEM office, where there's already one TFP401 module and a ton of BPI LCDs. I will be shipping these out later today.

To add to that, I now have a second complete test setup myself.

Mind you, this is still our test setup, allowing us to work on all bits from the capture engine on downstream (video capture, kms display, h264 encoding). We will need our full setup with the ADV7611 to add HDMI audio to the mix. But 80-90% of the software and driver work can be done with the current test setup with tfp401 modules.


I have gone and flashed all the TFP401 modules with the tool I quickly threw together. I created a matching terminal cable to go from the Banana Pi M1 to the EDID connector on the board, and made one for Uwe and the FOSDEM office (as they both have a module present already).

It turns out that this ROM is always writable, and that you could even write it through the HDMI connector. My, my.
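For anyone poking at these EDID EEPROMs themselves, it is worth sanity-checking a block before writing it: per the VESA spec, an EDID base block is exactly 128 bytes, starts with a fixed 8-byte header, and all its bytes must sum to 0 modulo 256. A minimal sketch of that rule (this is an illustration, not the actual flashing tool):

```python
# EDID base blocks start with this fixed 8-byte header (per the VESA spec).
EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def edid_block_valid(block: bytes) -> bool:
    """A base block is 128 bytes, begins with the fixed header,
    and all 128 bytes sum to 0 mod 256 (byte 127 is the checksum)."""
    return (len(block) == 128
            and block[:8] == EDID_HEADER
            and sum(block) % 256 == 0)

def fix_checksum(block: bytearray) -> bytearray:
    """Recompute byte 127 so the whole block sums to 0 mod 256."""
    block[127] = (-sum(block[:127])) % 256
    return block
```

A check like this before writing, plus a read-back verify afterwards, catches most of the ways an always-writable ROM can bite you.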

Howto rework

Ever since our first successful capture, there has been a howto in the fosdem video hardware github wiki. I have now gone and walked through it again, and updated some bits and pieces.

Wait and see whether the others can make that work :)

Bulldozing the TFP401 backlight

The Adafruit tfp401 module gets quite warm, and especially the big diode in the backlight circuit seems to dissipate quite a lot of heat. It usually is too hot to touch.

A USB power meter told me that the module was drawing 615 mW when HDMI was active. So I went and removed most of the backlight circuit components from the board, and now we're drawing only 272 mW. Nice!
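For the record, the arithmetic behind that "Nice!", using the two measured values from the USB meter:

```python
# Power draw of the TFP401 module, as measured with the USB meter,
# before and after the backlight circuit rework.
before_mw = 615
after_mw = 272

saved_mw = before_mw - after_mw          # milliwatts saved
saved_pct = 100 * saved_mw / before_mw   # percentage reduction
print(f"{saved_mw} mW saved ({saved_pct:.1f}% less)")
```

So the rework saves 343 mW, roughly halving the module's draw (a ~56% reduction), which matters when many boxes run around the clock.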

A look into the box we will be replacing.

FOSDEM's Mark Vandenborre posted a picture of some video boxes attached to a 48 port switch.

FOSDEM slides boxes in action.

There are 58 of those today, so mentally multiply the pile under the switch by 10.

Marian Marinov from Openfest (who will be getting a test board soon) actually has a picture of the internals of one of the slides boxes on his site:

FOSDEM slides box internals.

This picture was probably taken during OpenFest 2018, so it is from just before we rebuilt all 29 slides boxes. One change is that this slides box lacks the IR LED to control the scaler by playing PCM files :)

On the left, you can see the scaler (large green board) and the splitter (small green board). In the middle, from top to bottom: the hardware H264 encoder, the Banana Pi, and the status LCD. Then there's an ATX power supply; then, hidden under the rat's nest of cables, a small SSD; and then an ethernet switch.

Our current goal is to turn that into:
- one 5 V power supply
- one Lime2 with ADV7611 daughterboard
- keep the LCD
- keep the 4-port switch
- perhaps keep the SSD, or just store the local copy of the dump on the SD card (they are large and cheap now)

We would basically shrink everything to a quarter the size, and massively drop the cost and complexity of the hardware. And we get a lot of extra control in return as well.


Thanks a lot to the guys who donated through paypal. Your support is much appreciated and was used on some soldering supplies.

If anyone wants to support this work: for larger donations, drop me an email if you have a VAT number so I can produce a proper invoice, or just use PayPal for small donations :)

Next up

Sunxi H.264 encoding is where it is at now; I will probably write up a quick and dirty cut-down encoder for now, as February is approaching fast and there's a lot to do still.

September 24, 2019

Acquia partners with Vista Equity Partners

Today, we announced that Acquia has agreed to receive a substantial majority investment from Vista Equity Partners. This means that Acquia has a new investor that owns more than 50 percent of the company, and who is invested in our future success. Attracting a well-known partner like Vista is a tremendous validation of what we have been able to achieve. I'm incredibly proud of that, as so many Acquians worked so hard to get to this milestone.

Our mission remains the same

Our mission at Acquia is to help our customers and partners build amazing digital experiences by offering them the best digital experience platform.

This mission to build a digital experience platform is a giant one. Vista specializes in growing software companies, for example, by providing capital to do acquisitions. The Vista ecosystem consists of more than 60 companies and more than 70,000 employees globally. By partnering with Vista and leveraging their scale, network and expertise, we can greatly accelerate our mission and our ability to compete in the market.

For years, people have speculated about Acquia going public. That still is a great option for Acquia, but I'm also happy that we stay a private and independent company for the foreseeable future.

We will continue to direct all of our energy to what we have done for so long: provide our customers and partners with leading solutions to build, operate and optimize digital experiences. We have a lot of work to do to help more businesses see and understand the power of Open Source, cloud delivery and data-driven customer experiences.

We'll keep giving back to Open Source

This investment should be great news for the Drupal and Mautic communities as we'll have the right resources to compete against other solutions, and our deep commitment to Drupal, Mautic and Open Source will be unchanged. In fact, we will continue to increase our current level of investment in Open Source as we grow our business.

In talking with Vista, who has a long history of promoting diversity and equality and giving back to its communities, we will jointly invest even more in Drupal and Mautic. We will:

  • Improve the "learnability of Drupal" to help us attract less technical and more diverse people to Drupal.
  • Sponsor more Drupal and Mautic community events and meetups.
  • Increase the amount of Open Source code we contribute.
  • Fund initiatives to improve diversity in Drupal and Mautic; to enable people from underrepresented groups to contribute, attend community events, and more.

We will provide more details soon.

I continue in my role

I've been at Acquia for 12 years, most of my professional career.

During that time, I've been focused on making Acquia a special company, with a unique innovation and delivery model, all optimized for a new world. A world where a lot of software is becoming Open Source, and where businesses are moving most applications into the cloud, where IT infrastructure is becoming a metered utility, and where data-driven customer experiences make or break business results.

It is why we invest in Open Source (e.g. Drupal, Mautic), cloud infrastructure (e.g. Acquia Cloud and Site Factory), and data-centric business tools (e.g. Acquia Lift, Mautic).

We have a lot of work left to do to help businesses see and understand the power of Open Source. I also believe Acquia is an example for how other Open Source companies can do Open Source right, in harmony with their communities.

The work we do at Acquia is interesting, impactful, and, in a positive way, challenging. Working at Acquia means I have a chance to change the world in a way that impacts hundreds of thousands of people. There is nowhere else I'd want to work.

Thank you to our early investors

As part of this transaction, Vista will buy out our initial investors. I want to give a special shoutout to Michael Skok (North Bridge Venture Partners + Underscore) and John Mandile (Sigma Prime Ventures). I fondly remember Jay Batson and I raising money from Michael and John in 2007. They made a big bet on me — at the time, a college student living in Belgium, when Open Source was anything but mainstream.

I'm grateful for the belief and trust they had in me and the support and mentorship they provided the past 12 years. The opportunity they gave me will forever define my professional career. I'm thankful for their support in building Acquia to what it is today, and I am thrilled about what is yet to come.

Stay tuned for great things ahead! It's a great time to be an Acquia customer and Drupal or Mautic user.

I published the following diary on “Huge Amount of Sites Found in Certificate Transparency Logs“:

I’m keeping an eye on the certificate transparency logs using automated scripts. The goal is to track domain names (and their variations) of my customers, sensitive services in Belgium, key Internet players and some interesting keywords. Yesterday I detected a peak of events related to the domain ‘’. This domain, owned by Microsoft, is used to provide temporary remote access to Windows computers. Microsoft allows you to use your own domain but also provides (for more convenience?) a list of available domains. Once configured, you are able to access the computer from a browser… [Read more]

[The post [SANS ISC] Huge Amount of Sites Found in Certificate Transparency Logs has been first published on /dev/random]

September 23, 2019

September 21, 2019

Over the last couple of months, Juanfran Granados from Mirai, a hotelier agency that uses WordPress extensively, worked hard to add multisite administration capabilities to Autoptimize:

  • If AO has been network-activated, there will be an entry point in the network settings screens where one can configure Autoptimize for the entire network.
  • On the subsites, the AO settings screen will show a message that settings are managed at the network level.
  • On the network AO settings, there is an option to allow AO to be configured per site.
  • If AO has not been network-activated, things work as they do now; all settings are done on a per-site level.

I have just merged his code into a separate branch on GitHub, and given the significant changes that went into this (almost all files changed), I need you and you and you and … to download and test that multisite-test branch before I merge the changes into the beta branch. Looking forward to your feedback!
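The settings-resolution logic those bullets describe boils down to a small decision tree. Here is a language-neutral sketch in Python (hypothetical names, not the plugin's actual PHP code):

```python
# Hypothetical model of the multisite settings resolution described above
# (not Autoptimize's actual implementation).

def effective_settings(network_activated: bool,
                       per_site_allowed: bool,
                       network_settings: dict,
                       site_settings: dict) -> dict:
    """Decide which settings apply to a given subsite."""
    if not network_activated:
        # Not network-activated: behave as before, per-site settings rule.
        return site_settings
    if per_site_allowed:
        # Network admin opted to let each site configure AO itself.
        return site_settings
    # Default under network activation: network-level settings win,
    # and the subsite settings screen just shows a notice.
    return network_settings

network = {"optimize_js": True, "optimize_css": True}
site = {"optimize_js": False, "optimize_css": True}

print(effective_settings(True, False, network, site))   # network wins
print(effective_settings(True, True, network, site))    # per-site override
```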

September 19, 2019

Immigration from the Inside


I still remember when I understood how immigration approvals actually work. I was in Vancouver airport, at the immigration office, a no-man's land, applying for a temporary Canadian work permit. For a software engineer like me there was a specific category to fast track approvals, with a few criteria. This meant I could gather the necessary paperwork and references, and "just" hop on a plane, applying on entry instead of through an embassy first. Hence what usually takes place behind the walls of administration was handled immediately and face-to-face; or at least, immediately after the obligatory 1-2 hours of waiting in line.

A funny moment was when the officer asked me which languages I knew, and I told him I spoke Dutch, French and English. Speaking both Canadian languages was a big plus. "No." he said, looking back at his monitor. "Uh... Computer languages." Ah. It was time to speak the magic incantation: "PHP, MySQL, Java." I knew exactly which table this man was looking at, from which a valid applicant needed to know 3 entries. But the words in it were gobbledygook to him, and had I said my experience as a web developer had made me an excellent Fortran or COBOL programmer, he would've taken my word for it.

You might think the lesson is "immigration has to judge people on things they have no ability to evaluate," and that's true, but that's baby's first disenchantment. Most of this interaction took place at a big long counter in an open waiting room, so you got to listen in on everyone ahead of you. I went through this gauntlet multiple times over the years. You see a lot.

Immigrants or visitors with a poor grasp of the language often struggled. Some arrived entirely unprepared, family in tow. Some were left to sit for hours in a corner upon rejection, or just waiting for an interpreter. Derpy Australian snowboard instructors were also a frequent sight, fast tracked as Anglos unless they messed something up. For the most part though, people who were there were pretty sure to be approved, they just had to show their work.

One time an officer did in fact excuse himself to call up my new employer and confirm a few things, which we'd pre-arranged just in case. Eventually they started noticing in their system that I was a repeat customer and eased up. I was also a western, educated person, who by then had gone native. The officer isn't just checking a list, they are trying to determine if what you are presenting is a coherent picture with multiple independent pieces of evidence that add up. Forging a degree, a job reference or a background story is not hard; the hard part is playing the part, which would require conning someone whose job it is to spot liars.

Their job is not to verify the information; rather, it is to compare the information and see if it all matches. They're not just reading the paper, they're reading the person too. If you're nervous and fidgeting, expect to be there for a while as they tease everything apart. If you're confident, you'll blaze through. Losing my accent took work, for example; you can't fake that, and it said more about my ability to be an integrated resident than a stack of papers ever could.

The entry criteria of degree, skill and means aren't actually evaluated. Only proxies, like a piece of paper that claims you received a degree from University X, or a person they can call who claims to be an employer at a company that exists and who believes in your value. If you can fake those convincingly enough, they'd never know.

The purpose of the gauntlet is mostly the gauntlet itself: it's a process that makes people jump through hoops. Ostensibly this is meant to catch the unqualified, but the thing is, nobody honest is going to try to apply to something they know they don't qualify for. Especially not when it comes to immigration, whose admission rules are signposted, and which requires a significant investment of time and resources. So what they're really looking for is people who are so underinformed they think they are qualified (Dunning-Krugers), or people who pretend to be qualified but are not (Liars).

This isn't even unique to immigration by the way: how many employers have actually called up their employees' colleges to verify they graduated, or even just asked them for a diploma or transcript? They'll call up a reference or two, for sure, but how deep do they really want to go? They'll only do that if the new hire turns out to be incompetent. Or, as in one company I worked at, you suspect someone has a gambling problem, and that their company laptop which was "stolen" was actually pawned off to pay debts. But I digress.

Types of candidates

Think of college admissions, political parties, or just dating. We're all trying to figure out if people are qualified or suitable, but all we can see is whether they can convincingly play the part. Whether the person is qualified is mostly irrelevant, and only comes into play if it makes them act visibly insecure and nervous. Only the bad liars get caught this way. The good liars must be revealed over time, as the difference between competence and confidence becomes apparent.

An important difference is between earning respect and demanding respect. Someone who is competent can earn respect through their work, and gain status among their peers as a result. Someone who is not competent, but confident, can project status or maneuver their way into it, and use that to demand respect. That is, honest people tend to derive status from respect, dishonest people tend to derive respect from status.

Badges, degrees and certificates have been around forever, so this is a pretty old dynamic of tension. Are they called a senior engineer because juniors come to them with questions, or do they want underlings so they'll be seen as senior? Is their degree a sign of genuine talent and interest, or a sign of someone who expertly used group exercises and copied notes to get that paper at all costs? Does that certificate of compliance represent verified principles, or the authority of someone bribed to look the other way?

This is also why this post is titled "Know your Bluecheck." These little blue checkmarks of Twitter and other similar platforms were originally intended as mere verifications of identity, to avoid impersonation. But the people who have one are usually notable in one way or another, so the association between status and the badge was inevitable. The effect is that having a bluecheck confers status rather than just communicating the pre-existence of it. Any time someone publishes a ruleset, those rules can be gamed, and we've gamified status. So now the causal arrow goes predominantly from status to respect, instead of from respect to status.

I don't mean to swerve into "who bluechecks the bluechecks," no, I just wanted to highlight this point because there is also an enormous gap between how the media talks about immigration and how it feels to go through immigration. Very few bluechecks seem to get it these days, and they're the ones supposed to be good at explaining it. There is something nobody talks about and which is nevertheless universal for fellow immigrants I've spoken to.

Anyone who lives on any kind of temporary visa has an expiration date on their life as they know it. Their house, their job, their social circle, their local assets, all are tied to a piece of paper that's valid for only a few years at most. It's like a permanent Sword of Damocles hanging over your head, only it's worse, because this sword is always slowly descending. The only way to crank it back up is to run another gauntlet, to file more things, to accept not being able to travel if they take too long, and risk rejection if the political climate veers particularly conservative for a few years, or some email gets routed to the wrong inbox.

Employers don't understand this either. To the person assigned to help support your case from their end, you're just another item on their to-do list, one they'd much rather procrastinate on, because they don't realize failure means nuking a person's entire life over an avoidable screw-up.

Converting this into permanent residency is often hard, and can take years even after having lived there for several, during which you're bound even more tightly to your job. Until you have it, you don't actually get to experience what it's truly like to live in a place. To feel free to put down roots, form long lasting bonds, pursue opportunities, and just accumulate life. To finally be sure you're never gonna have to pack it all up again unless you want to. It puts a damper on any long term plans and your willingness to invest yourself. The little indignities of having to deal with an at-times ridiculous bureaucracy are nothing compared to that sense of perma-dread, or the relief you feel when it's finally gone.

I can't say my foibles with immigration have been particularly tough, I had it easy all things considered, but I do know what it's like to become illegal and be told you have to leave through no fault of your own. Illegal immigrants who have no prospects of ever legitimizing, or ever returning, and who cannot enter the job market, have it far worse. They have to give up on a peace of mind so basic, most citizens don't even realize they have it, and cannot truly empathize with those who lack it.

The framing of either opening borders indiscriminately or tightening the border checks misses the point. You don't want people who are unqualified, and you don't want frauds, but you can't evaluate qualifications or spot good liars at the border anyway. They pretend they can, but anyone with the means can and does work around them. Meanwhile, the dehumanizing aspects of having to justify your worth over and over to someone who doesn't have the first clue about you or what you do remain. As do the seedy alternatives people pursue in desperation.

I can also tell you a story. About a contractor who's auditing a struggling startup and learns that the product they failed to build was mostly a pretense for the multi-millionaire founder to become a "legitimate businessman" again. Because he wanted to overturn a travel ban due to multiple sexual assault convictions. Who then pulled the plug to cut his losses, and didn't pay his last invoices, cos he already spent over $1m on this.

Visas and passports are some of the most desirable documents of our day. Some people get them because they deserve them, others because they want them. The system can't really tell them apart, it only serves as a bottleneck, to limit the flow and eliminate the obvious mistakes, by inflicting mild to severe inconvenience and indignity on everyone who tries.

Suppose something goes wrong with your immigration paperwork, out of your own control, e.g. because the person responsible let something sit for a few weeks past some crucial deadlines. As a result, you, an entirely qualified person, cannot re-apply without raising some red flags of issuing dates that don't match up, risking rejection. In that case, hypothetically, you may discover that a certain kind of immigration lawyer will tell you you need to be one of the good liars, just for a while. To run the in-person gauntlet in that mode, so that it is resolved with the least amount of conflict for all parties involved. You may also discover their reasoning is sound, and all alternatives worse.

That might be a moment at which you truly understand how immigration works.

After you go through with it I mean, and emerge unscathed and paper in hand, having worked on your mime and diction. Hypothetically.

The post Is bitcoin dead yet? appeared first on

Let's find out together, shall we? In this post I'll look at the current state of bitcoin, both in terms of technology and its usage.

Obligatory disclaimer: this post is not financial advice. You're responsible for any action you take based on this post.

Network hash rate

Also known as: mining.

The bitcoin network is protected by miners all around the world that provide computational power. The more compute power that is thrown at the bitcoin network, the safer it becomes and the harder it gets to undo transactions and double spend them (pay the same bitcoin to 2 different people, each thinking they have received valid bitcoin).
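How much safer more hash rate makes things can be made concrete with the attacker-success calculation from the original bitcoin whitepaper: the probability that an attacker controlling a fraction q of the hash rate ever catches up from z confirmations behind.

```python
from math import exp, factorial

def attacker_success(q: float, z: int) -> float:
    """Nakamoto's double-spend probability: chance that an attacker with
    fraction q of the network hash rate catches up from z blocks behind."""
    p = 1.0 - q                 # honest fraction of the hash rate
    lam = z * (q / p)           # expected attacker progress while we wait
    prob = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        prob -= poisson * (1 - (q / p) ** (z - k))
    return prob

# With 10% of the hash rate, waiting a few confirmations makes a
# double spend very unlikely:
for z in (0, 1, 2, 6):
    print(f"z={z}: {attacker_success(0.1, z):.7f}")
```

This is why exchanges typically wait for several confirmations before crediting a deposit, and why a larger honest hash rate raises the absolute cost of ever reaching a dangerous fraction q.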

The hash rate is expressed as "hashes per second". The more there are, the more calculations get done every second to protect the network.

Chart provided by

The bitcoin network reached a peak of 100 exahash per second (EH/s) a few days ago. If you consider that it was at ~11 EH/s at the peak of the 2017 price bubble and has been increasing ever since, it's showing a lot of strength there.

Contrarian view: Bitmain, one of the largest producers of ASIC mining hardware, is said to be releasing new -- more powerful -- bitcoin miners soon. One theory is that they might already be using those themselves before selling them. That could increase the hash rate, giving the illusion that the network is growing in power, to motivate new miners to buy more hardware from them.

Transactions per day

While there are discussions about what bitcoin really is (a store of value like gold or a medium of exchange like money), one valuable metric to look at is how the current network is being used.

Are people using it? Are transactions being sent between individuals? What kind of value transfers are there?

First, let's look at the transactions per day that got confirmed.

Chart provided by

While it doesn't tell the whole story, it again shows signs of increased traffic. With improvements like transaction batching and the rise of layer-2 protocols like Lightning, this should continue to rise slowly.

Smaller value transactions will start to move more and more off-chain though, to reduce transaction fees and speed up settlement between 2 parties.

So, is actual value being sent in those transactions? Or are these all transactions that transfer pennies just to simulate network activities? Well, let's have a look at the USD value of these transactions, per day.

Chart provided by

The peak of 2017 is clearly visible, but we also see that as of early 2019, the value being moved around on the bitcoin blockchain is increasing once again.

Just to put this into scale: on an average day in 2019, over $1,500,000,000 is being sent between parties.

That's one point five billion dollars. The exchange volume (people trading bitcoin on exchanges) is even multitudes higher, but isn't visible on the blockchain (it sits centralized at the exchanges). The graph above shows the real value of transactions between 2 or more addresses.

Contrarian view: A lot of these transactions per day can be attributed to VeriBlock's usage of the bitcoin blockchain. It uses this to secure other blockchains by storing timestamps on the bitcoin blockchain. These are real transactions with real transaction fees (someone is paying for them). While this may account for a lot of transactions/day, it cannot account for the USD equivalent of these transactions.

Transactions per bitcoin block

Every 10 minutes (give or take) a new bitcoin block is mined. In that block, transactions are confirmed and forever secured on the blockchain.

Because the open source technology that is bitcoin keeps improving, it's now possible to store more transactions per block. This in turn increases the throughput, i.e. the theoretical maximum number of transactions per second that can be sent.

Chart provided by

I'll cover the technical aspects of why this is possible later in this post.

Because more transactions can now fit in a single block, the size of each block has increased as well. This means it's taking up more space per block on your hard drive.

An average bitcoin block contains around 1MB worth of transaction details. Since a new block gets produced every 10 minutes, the size of the blockchain will only increase. You can enable pruning to delete older blocks, but then you wouldn't be able to fully verify all transactions.
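The storage implication is easy to estimate from those two numbers (a back-of-the-envelope figure, not an exact chain measurement):

```python
# Back-of-the-envelope blockchain growth: one ~1MB block every ~10 minutes.
block_size_mb = 1
blocks_per_day = 24 * 60 // 10          # 144 blocks/day
blocks_per_year = blocks_per_day * 365  # 52,560 blocks/year

growth_gb_per_year = blocks_per_year * block_size_mb / 1000
print(f"~{blocks_per_year} blocks/year, ~{growth_gb_per_year:.1f} GB/year")
```

So a full, unpruned node should expect the chain to grow by roughly 50GB per year at current block sizes.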

Chart provided by

Off-chain transactions on the Lightning Network

The Lightning Network is a "layer 2" scaling solution for bitcoin.

In short: bitcoin is limited in how many transactions per second it can do. The idea of the Lightning Network is to have one transaction on the bitcoin blockchain to start a series of off-chain payments. You can then exchange with other Lightning Network users in near real-time (< 1s confirmations).

Once in a while, you would commit or confirm your balance to the bitcoin blockchain. This might be on the order of 10 off-chain transactions for 1 on-chain transaction, or even higher: a 100/1, 1000/1 or 10000/1 ratio. It depends on the amount of risk you're willing to take by not anchoring your payments on the base layer.
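To see why that ratio matters, consider amortizing the on-chain cost of opening and closing a channel over the off-chain payments it carries. The fee figures below are made up for the example:

```python
# Illustrative amortization of on-chain channel costs over off-chain
# payments; the satoshi fee figures are invented for the example.

def cost_per_payment(open_fee_sat: int, close_fee_sat: int,
                     offchain_payments: int) -> float:
    """On-chain satoshis effectively paid per off-chain payment."""
    return (open_fee_sat + close_fee_sat) / offchain_payments

for ratio in (10, 100, 1000, 10000):
    print(f"{ratio:>5} payments/channel -> "
          f"{cost_per_payment(2000, 2000, ratio):.2f} sat each")
```

The higher the off-chain/on-chain ratio, the cheaper each payment becomes, at the cost of leaving more value unanchored for longer.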

There's a pretty good explanation on the Coin Telegraph site about how Lightning works.

Suffice it to say: we might see more transactions on the Lightning Network than on the bitcoin base layer, and that's a good thing. So, how are these transactions doing on the Lightning Network?

Well, this is where it gets tricky: it's hard to measure. There's no central blockchain anymore, so there's no way to tell how many transactions/s are actually happening. We can look at other metrics though, like the size of the Lightning Network in terms of available capacity and the number of active nodes.

Chart provided by

Considering the Lightning Network has only been operational since early 2018, I find having more than 4,000 online nodes to be a fairly good place to be at. It's far from stable and in active development.

Among those nodes, they hold over 30,000 channels open between them. A channel allows money to flow from one address to another.

Chart provided by

We see stagnation in the number of nodes. There was a large increase in early 2019, when there was a fair amount of media attention for Lightning, but we see that effect drying up now. There would need to be new incentives to increase both awareness and adoption of the Lightning Network.

Considering Twitter's Jack Dorsey is an investor in a company creating Lightning Network software, there's reason to believe this might one day be integrated in either the Cash App or Twitter itself, enabling micro-payments.

Technological improvements

The codebase of bitcoin itself is hosted on Everyone's free to see what is happening and follow the discussions around it.

Improvements to the bitcoin protocol get proposed in a "BIP": Bitcoin Improvement Proposal. Think of these like the RFCs for the internet.

Here's what's currently being developed & tested:

  • Schnorr signatures: a better & more compact signature algorithm to replace ECDSA
  • Scriptless Scripts: a means to execute smart contracts off-chain (uses Schnorr signatures)
  • MAST: a smarter way to encode scripts on the blockchain, without revealing the entire script
  • Taproot: this will allow a script (or "smart contract") to be indistinguishable from a regular transaction, increasing privacy
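To get a feel for what a Schnorr signature is, here is a toy sketch over a small prime-order group. This is purely educational: the parameters are tiny and insecure, the nonce is fixed for reproducibility, and the actual proposal for bitcoin uses the secp256k1 elliptic-curve group with a carefully specified hash.

```python
import hashlib

# Toy Schnorr signature over a small prime-order subgroup (educational
# only; insecure parameters, fixed nonce for reproducibility).
p = 2039   # field modulus (prime)
q = 1019   # prime order of the subgroup (q divides p - 1)
g = 4      # generator of the order-q subgroup mod p

def challenge(R: int, message: bytes) -> int:
    # e = H(R || m), reduced into Z_q.
    h = hashlib.sha256(str(R).encode() + message).digest()
    return int.from_bytes(h, "big") % q

def sign(x: int, k: int, message: bytes):
    # x: secret key; k: nonce (must be fresh and random in real use!).
    R = pow(g, k, p)          # nonce commitment
    e = challenge(R, message)
    s = (k + e * x) % q       # the "linear" Schnorr response
    return R, s

def verify(y: int, message: bytes, sig) -> bool:
    # Accept iff g^s == R * y^e, which holds exactly when s = k + e*x.
    R, s = sig
    e = challenge(R, message)
    return pow(g, s, p) == (R * pow(y, e, p)) % p

x = 123                  # secret key
y = pow(g, x, p)         # public key
sig = sign(x, 77, b"pay 1 BTC to Alice")
print(verify(y, b"pay 1 BTC to Alice", sig))                # True
R, s = sig
print(verify(y, b"pay 1 BTC to Alice", (R, (s + 1) % q)))   # False: forged s
```

The linearity of `s = k + e*x` is what makes Schnorr signatures compact and aggregatable, which is also what Scriptless Scripts build on.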

On the Lightning Network side, there's a lot of activity around these technologies:

  • Atomic Multipath Payments: the ability to pay $10 over 5 different channels, with $2 on each
  • Dual funded channels
  • Eltoo: a safer/better way to ensure no one cheats on the Lightning Network by broadcasting an outdated balance
  • Rendez-Vous routing: a solution to combine private & public routing on the Lightning Network
  • Splicing: adding capacity to an existing channel without closing it
  • Submarine Swaps: using on-chain bitcoin to pay a Lightning Network invoice without first opening a channel (essentially seamless Lightning payments)
  • Trampoline Payments: introducing a "router" to route the payments to the next hop, instead of putting the calculation-burden on the client
  • Watchtowers: intermediary nodes that will watch your Lightning Network channels and prevent someone from fraudulently closing them
  • Ant routing: a new way to route payments, with nodes only having to keep details about their close neighbours for sending payments

There's a ton of movement in this space, but it's mostly hidden from the outside world. And it's complicated. I mean really complicated.

So, is bitcoin dead?

From what I can see, it's far from dead. The technology has never been in better shape.

The improvements being discussed and implemented, the increase in network usage, the amount of organizations implementing and building services on top of the network, ... it's all growing.

I'm excited for the future of both bitcoin and the Lightning Network and I'm struggling to keep up with everything that's being developed. Especially on the Lightning Network side, things are moving fast. New tech gets introduced and deployed in a matter of months, whereas the bitcoin base layer is much more conservative and will favor stability and security over anything else.

Whether you like bitcoin or not is -- of course -- entirely up to you. But what I see being built (both the companies and the technology & protocols) is giving me a lot of confidence in the future of bitcoin. It might stay a niche sector, like many open source solutions, but it's solving real-world problems for a lot of users out there.

Staying up-to-date

These are some of the sources I use to stay informed on what's happening on the tech side of bitcoin & Lightning:

  • Mailing lists
  • Bitcoin merges: all changes making it to the bitcoin core codebase
  • BIPs: all improvement proposals for bitcoin

There are many dedicated websites covering bitcoin too, but too many are just focused on the price. Purely technical publications are rare, and I highly recommend Bitcoin Optech's weekly newsletter to stay informed.


A scale that is in balance

In many ways, Open Source has won. Most people know that Open Source provides better quality software, at a lower cost, without vendor lock-in. But despite Open Source being widely adopted and more than 30 years old, scaling and sustaining Open Source projects remains challenging.

Not a week goes by that I don't get asked a question about Open Source sustainability. How do you get others to contribute? How do you get funding for Open Source work? But also, how do you protect against others monetizing your Open Source work without contributing back? And what do you think of MongoDB, Cockroach Labs or Elastic changing their license away from Open Source?

This blog post talks about how we can make it easier to scale and sustain Open Source projects, Open Source companies and Open Source ecosystems. I will show that:

  • Small Open Source communities can rely on volunteers and self-governance, but as Open Source communities grow, their governance model most likely needs to be reformed so the project can be maintained more easily.
  • There are three models for scaling and sustaining Open Source projects: self-governance, privatization, and centralization. All three models aim to reduce coordination failures, but require Open Source communities to embrace forms of monitoring, rewards and sanctions. While this thinking is controversial, it is supported by decades of research in adjacent fields.
  • Open Source communities would benefit from experimenting with new governance models, coordination systems, license innovation, and incentive models.

Some personal background

Scaling and sustaining Open Source projects and Open Source businesses has been the focus of most of my professional career.

Drupal, the Open Source project I founded 18 years ago, is used by more than one million websites and reaches pretty much everyone on the internet.

With over 8,500 individuals and about 1,100 organizations contributing to Drupal annually, Drupal is one of the healthiest and most contributor-rich Open Source communities in the world.

For the past 12 years, I've also helped build Acquia, an Open Source company that heavily depends on Drupal. With almost 1,000 employees, Acquia is the largest contributor to Drupal, yet responsible for less than 5% of all contributions.

This article is not about Drupal or Acquia; it's about scaling Open Source projects more broadly.

I'm interested in how to make Open Source production more sustainable, more fair, more egalitarian, and more cooperative. I'm interested in doing so by redefining the relationship between end users, producers and monetizers of Open Source software through a combination of technology, market principles and behavioral science.

Why it must be easier to scale and sustain Open Source

We need to make it easier to scale and sustain both Open Source projects and Open Source businesses:

  1. Making it easier to scale and sustain Open Source projects might be the only way to solve some of the world's most important problems. For example, I believe Open Source to be the only way to build a pro-privacy, anti-monopoly, open web. It requires Open Source communities to be long-term sustainable — possibly for hundreds of years.
  2. Making it easier to grow and sustain Open Source businesses is the last hurdle that prevents Open Source from taking over the world. I'd like to see every technology company become an Open Source company. Today, Open Source companies are still extremely rare.

The alternative is that we are stuck in the world we live in today, where proprietary software dominates most facets of our lives.


This article is focused on Open Source governance models, but there is more to growing and sustaining Open Source projects. Top of mind is the need for Open Source projects to become more diverse and inclusive of underrepresented groups.

Second, I understand that the idea of systematizing Open Source contributions won't appeal to everyone. Some may argue that the suggestions I'm making go against the altruistic nature of Open Source. I agree. However, I'm also looking at Open Source sustainability challenges from the vantage point of running both an Open Source project (Drupal) and an Open Source business (Acquia). I'm not implying that every community needs to change their governance model, but simply offering suggestions for communities that operate with some level of commercial sponsorship, or communities that struggle with issues of long-term sustainability.

Lastly, this post is long and dense. I'm 700 words in, and I haven't started yet. Given that this is a complicated topic, there is an important role for more considered writing and deeper thinking.

Defining Open Source Makers and Takers


Some companies are born out of Open Source, and as a result believe deeply and invest significantly in their respective communities. With their help, Open Source has revolutionized software for the benefit of many. Let's call these types of companies Makers.

As the name implies, Makers help make Open Source projects; from investing in code, to helping with marketing, growing the community of contributors, and much more. There are usually one or more Makers behind the success of large Open Source projects. For example, MongoDB helps make MongoDB, Red Hat helps make Linux, and Acquia (along with many other companies) helps make Drupal.

Our definition of a Maker assumes intentional and meaningful contributions and excludes those whose only contributions are unintentional or sporadic. For example, a public cloud company like Amazon can provide a lot of credibility to an Open Source project by offering it as-a-service. The resulting value of this contribution can be substantial, however that doesn't make Amazon a Maker in our definition.

I use the term Makers to refer to anyone who purposely and meaningfully invests in the maintenance of Open Source software, i.e. by making engineering investments, writing documentation, fixing bugs, organizing events, and more.


Now that Open Source adoption is widespread, lots of companies, from technology startups to technology giants, monetize Open Source projects without contributing back to those projects. Let's call them Takers.

I understand and respect that some companies can give more than others, and that many might not be able to give back at all. Maybe one day, when they can, they'll contribute. We limit the label of Takers to companies that have the means to give back, but choose not to.

The difference between Makers and Takers is not always 100% clear, but as a rule of thumb, Makers directly invest in growing both their business and the Open Source project. Takers are solely focused on growing their business and let others take care of the Open Source project they rely on.

Organizations can be both Takers and Makers at the same time. For example, Acquia, my company, is a Maker of Drupal, but a Taker of Varnish Cache. We use Varnish Cache extensively but we don't contribute to its development.

A scale that is not in balance

Takers hurt Makers

To be financially successful, many Makers mix Open Source contributions with commercial offerings. Their commercial offerings usually take the form of proprietary or closed source IP, which may include a combination of premium features and hosted services that offer performance, scalability, availability, productivity, and security assurances. This is known as the Open Core business model. Some Makers offer professional services, including maintenance and support assurances.

When Makers start to grow and demonstrate financial success, the Open Source project that they are associated with begins to attract Takers. Takers will usually enter the ecosystem with a commercial offering comparable to the Makers', but without making a similar investment in Open Source contribution. Because Takers don't contribute back meaningfully to the Open Source project that they take from, they can focus disproportionately on their own commercial growth.

Let's look at a theoretical example.

When a Maker has $1 million to invest in R&D, they might choose to invest $500k in Open Source and $500k in the proprietary IP behind their commercial offering. The Maker intentionally balances growing the Open Source project they are connected to with making money. To be clear, the investment in Open Source is not charity; it helps make the Open Source project competitive in the market, and the Maker stands to benefit from that.

When a Taker has $1 million to invest in R&D, nearly all of their resources go to the development of proprietary IP behind their commercial offerings. They might invest $950k in their commercial offerings that compete with the Maker's, and $50k towards Open Source contribution. Furthermore, the $50k is usually focused on self-promotion rather than being directed at improving the Open Source project itself.

A visualization of the Maker and Taker math

Effectively, the Taker has put itself at a competitive advantage compared to the Maker:

  • The Taker takes advantage of the Maker's $500k investment in Open Source contribution while only investing $50k themselves. Important improvements happen "for free" without the Taker's involvement.
  • The Taker can out-innovate the Maker in building proprietary offerings. When a Taker invests $950k in closed-source products compared to the Maker's $500k, the Taker can innovate 90% faster. The Taker can also use the delta to disrupt the Maker on price.
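The arithmetic behind this comparison can be jotted down in a few lines. This is only a sketch; the dollar figures are the hypothetical ones from the example above:

```python
# Hypothetical R&D allocations from the example (in dollars).
maker_open_source, maker_proprietary = 500_000, 500_000
taker_open_source, taker_proprietary = 50_000, 950_000

# The Taker outspends the Maker on proprietary IP:
delta = taker_proprietary - maker_proprietary          # $450,000 extra
advantage = taker_proprietary / maker_proprietary - 1  # 0.9, i.e. 90% more

print(f"Taker proprietary advantage: {advantage:.0%} (${delta:,} extra)")
```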

In other words, Takers reap the benefits of the Makers' Open Source contribution while simultaneously having a more aggressive monetization strategy. The Taker is likely to disrupt the Maker. On an equal playing field, the only way the Maker can defend itself is by investing more in its proprietary offering and less in the Open Source project. To survive, it has to behave like the Taker to the detriment of the larger Open Source community.

Takers harm Open Source projects. An aggressive Taker can induce Makers to behave in a more selfish manner and reduce or stop their contributions to Open Source altogether. Takers can turn Makers into Takers.

Open Source contribution and the Prisoner's Dilemma

The example above can be described as a Prisoner's Dilemma. The Prisoner's Dilemma is a standard example of game theory, which allows the study of strategic interaction and decision-making using mathematical models. I won't go into detail here, but for the purpose of this article, it helps me simplify the above problem statement. I'll use this simplified example throughout the article.

Imagine an Open Source project with only two companies supporting it. The rules of the game are as follows:

  • If both companies contribute to the Open Source project (both are Makers), the total reward is $100. The reward is split evenly and each company makes $50.
  • If one company contributes while the other company doesn't (one Maker, one Taker), the Open Source project won't be as competitive in the market, and the total reward will only be $80. The Taker gets $60 as they have the more aggressive monetization strategy, while the Maker gets $20.
  • If both players choose not to contribute (both are Takers), the Open Source project will eventually become irrelevant. Both walk away with just $10.

This can be summarized in a pay-off matrix:

                               Company A contributes      Company A doesn't contribute
Company B contributes          A makes $50                A makes $60
                               B makes $50                B makes $20
Company B doesn't contribute   A makes $20                A makes $10
                               B makes $60                B makes $10

In the game, each company needs to decide whether to contribute or not, but Company A doesn't know what Company B decides, and vice versa.

The Prisoner's Dilemma states that each company will optimize its own profit and not contribute. Because both companies are rational, both will make that same decision. In other words, when both companies use their "best individual strategy" (be a Taker, not a Maker), they produce an equilibrium that yields the worst possible result for the group: the Open Source project will suffer and as a result they only make $10 each.
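The pay-offs above can be encoded directly. A small sketch, using the hypothetical dollar amounts from the matrix, confirms the temptation that drives each company to defect against a contributing partner:

```python
# Pay-offs from the matrix above: payoff[(a, b)] = (A's take, B's take),
# where "C" = contribute (Maker) and "D" = don't contribute (Taker).
payoff = {
    ("C", "C"): (50, 50),
    ("C", "D"): (20, 60),
    ("D", "C"): (60, 20),
    ("D", "D"): (10, 10),
}

# Against a contributing partner, each company earns more by defecting:
assert payoff[("D", "C")][0] > payoff[("C", "C")][0]  # A: $60 > $50
assert payoff[("C", "D")][1] > payoff[("C", "C")][1]  # B: $60 > $50

# ...yet mutual defection leaves both far worse off than mutual contribution:
assert payoff[("D", "D")] < payoff[("C", "C")]        # ($10, $10) < ($50, $50)
print("temptation to defect confirmed")
```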

A real-life example of the Prisoner's Dilemma that many people can relate to is washing the dishes in a shared house. By not washing dishes, an individual can save time (individually rational), but if that behavior is adopted by every person in the house, there will be no clean plates for anyone (collectively irrational). How many of us have tried to get away with not washing the dishes? I know I have.

Fortunately, the problem of individually rational actions leading to collectively adverse outcomes is not new or unique to Open Source. Before I look at potential models to better sustain Open Source projects, I will take a step back and look at how this problem has been solved elsewhere.

Open Source: a public good or a common good?

In economics, the concepts of public goods and common goods are decades old, and have similarities to Open Source.

Examples of common goods (fishing grounds, oceans, parks) and public goods (lighthouses, radio, street lighting)

Public goods and common goods are what economists call non-excludable, meaning it's hard to exclude people from using them. For example, everyone can benefit from fishing grounds, whether they contribute to their maintenance or not. Simply put, public goods and common goods have open access.

Common goods are rivalrous; if one individual catches a fish and eats it, the other individual can't. In contrast, public goods are non-rivalrous; someone listening to the radio doesn't prevent others from listening to the radio.

I've long believed that Open Source projects are public goods: everyone can use Open Source software (non-excludable) and someone using an Open Source project doesn't prevent someone else from using it (non-rivalrous).

However, through the lens of Open Source companies, Open Source projects are also common goods; everyone can use Open Source software (non-excludable), but when an Open Source end user becomes a customer of Company A, that same end user is unlikely to become a customer of Company B (rivalrous).

For end users, Open Source projects are public goods; the shared resource is the software. But for Open Source companies, Open Source projects are common goods; the shared resource is the (potential) customer.
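The two axes used above — excludability and rivalry — can be captured in a small classifier sketch; the labels follow the standard economics taxonomy:

```python
# Classifying goods by the two standard economic axes.
def classify(excludable: bool, rivalrous: bool) -> str:
    if not excludable and not rivalrous:
        return "public good"
    if not excludable and rivalrous:
        return "common good"
    if excludable and rivalrous:
        return "private good"
    return "club good"

# Open Source software, as seen by end users: anyone can use it, and one
# user's use doesn't diminish another's.
print(classify(excludable=False, rivalrous=False))  # public good

# The same project, as seen by competing Open Source companies: the shared
# resource is the (potential) customer, who can sign with only one vendor.
print(classify(excludable=False, rivalrous=True))   # common good
```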

Next, I'd like to extend the distinction between "Open Source software being a public good" and "Open Source customers being a common good" to the free-rider problem: we define software free-riders as those who use the software without ever contributing back, and customer free-riders (or Takers) as those who sign up customers without giving back.

All Open Source communities should encourage software free-riders. Because the software is a public good (non-rivalrous), a software free-rider doesn't exclude others from using the software. Hence, it's better to have a user for your Open Source project, than having that person use your competitor's software. Furthermore, a software free-rider makes it more likely that other people will use your Open Source project (by word of mouth or otherwise). When some portion of those other users contribute back, the Open Source project benefits. Software free-riders can have positive network effects on a project.

However, when the success of an Open Source project depends largely on one or more corporate sponsors, the Open Source community should not forget or ignore that customers are a common good. Because a customer can't be shared among companies, it matters a great deal for the Open Source project where that customer ends up. When the customer signs up with a Maker, we know that a certain percentage of the revenue associated with that customer will be invested back into the Open Source project. When a customer signs up with a customer free-rider or Taker, the project doesn't stand to benefit. In other words, Open Source communities should find ways to route customers to Makers.

Both volunteer-driven and sponsorship-driven Open Source communities should encourage software free-riders, but sponsorship-driven Open Source communities should discourage customer free-riders.

Lessons from decades of Common Goods management

Hundreds of research papers and books have been written on public good and common good governance. Over the years, I have read many of them to figure out what Open Source communities can learn from successfully managed public goods and common goods.

Some of the most instrumental research was Garrett Hardin's Tragedy of the Commons and Mancur Olson's work on Collective Action. Both Hardin and Olson concluded that groups don't self-organize to maintain the common goods they depend on.

As Olson writes in the beginning of his book, The Logic of Collective Action: "Unless the number of individuals is quite small, or unless there is coercion or some other special device to make individuals act in their common interest, rational, self-interested individuals will not act to achieve their common or group interests."

Consistent with the Prisoner's Dilemma, Hardin and Olson show that groups don't act on their shared interests. Members are disincentivized from contributing when other members can't be excluded from the benefits. It is individually rational for a group's members to free-ride on the contributions of others.

Dozens of academics, Hardin and Olson included, argued that an external agent is required to solve the free-rider problem. The two most common approaches are (1) centralization and (2) privatization:

  1. When a common good is centralized, the government takes over the maintenance of the common good. The government or state is the external agent.
  2. When a common good is privatized, one or more members of the group receive selective benefits or exclusive rights to harvest from the common good in exchange for its ongoing maintenance. In this case, one or more corporations act as the external agent.

The widespread advice to centralize and privatize common goods has been followed extensively in most countries; today, natural resources are typically managed either by governments or by commercial companies, and no longer directly by their users. Examples include public transport, water utilities, fishing grounds, parks, and much more.

Overall, the privatization and centralization of common goods has been very successful; in many countries, public transport, water utilities and parks are maintained better than volunteer contributors would have managed on their own. I certainly value that I don't have to help maintain the train tracks before my daily commute to work, or that I don't have to help mow the lawn in our public park before I can play soccer with my kids.

For years, it was a long-held belief that centralization and privatization were the only way to solve the free-rider problem. It was Elinor Ostrom who observed that a third solution existed.

Ostrom found hundreds of cases where common goods are successfully managed by their communities, without the oversight of an external agent. From the management of irrigation systems in Spain to the maintenance of mountain forests in Japan — all have been successfully self-managed and self-governed by their users. Many have been long-enduring as well; the youngest examples she studied were more than 100 years old, and the oldest exceed 1,000 years.

Ostrom studied why some efforts to self-govern commons have failed and why others have succeeded. She summarized the conditions for success in the form of core design principles. Her work led her to win the Nobel Prize in Economics in 2009.

Interestingly, all successfully managed commons studied by Ostrom switched at some point from open access to closed access. As Ostrom writes in her book, Governing the Commons: "For any appropriator to have a minimal interest in coordinating patterns of appropriation and provision, some set of appropriators must be able to exclude others from access and appropriation rights." Ostrom uses the term appropriator to refer to those who use or withdraw from a resource — fishers, irrigators, herders, etc., or companies trying to turn Open Source users into paying customers. In other words, the shared resource must be made exclusive (to some degree) in order to incentivize members to manage it. Put differently, Takers will be Takers until they have an incentive to become Makers.

Once access is closed, explicit rules need to be established to determine how resources are shared, who is responsible for maintenance, and how self-serving behaviors are suppressed. In all successfully managed commons, the regulations specify (1) who has access to the resource, (2) how the resource is shared, (3) how maintenance responsibilities are shared, (4) who inspects that rules are followed, (5) what fines are levied against anyone who breaks the rules, (6) how conflicts are resolved and (7) a process for collectively evolving these rules.

Three patterns for long-term sustainable Open Source

Studying the work of Garrett Hardin (Tragedy of the Commons), the Prisoner's Dilemma, Mancur Olson (Collective Action) and Elinor Ostrom's core design principles for self-governance, a number of shared patterns emerge. When applied to Open Source, I'd summarize them as follows:

  1. Common goods fail because of a failure to coordinate collective action. To scale and sustain an Open Source project, Open Source communities need to transition from individual, uncoordinated action to cooperative, coordinated action.
  2. Cooperative, coordinated action can be accomplished through privatization, centralization, or self-governance. All three work — and can even be mixed.
  3. Successful privatization, centralization, and self-governance all require clear rules around membership, appropriation rights, and contribution duties. In turn, this requires monitoring and enforcement, either by an external agent (centralization + privatization), a private agent (self-governance), or members of the group itself (self-governance).

Next, let's see how these three concepts — centralization, privatization and self-governance — could apply to Open Source.

Model 1: Self-governance in Open Source

For small Open Source communities, self-governance is very common; it's easy for its members to communicate, learn who they can trust, share norms, agree on how to collaborate, etc.

As an Open Source project grows, contribution becomes more complex and cooperation more difficult: it becomes harder to communicate, build trust, agree on how to cooperate, and suppress self-serving behaviors. The incentive to free-ride grows.

You can extend successful cooperation by establishing strong norms that encourage members to do their fair share and by holding face-to-face events, but eventually even that becomes hard to scale.

As Ostrom writes in Governing the Commons: "Even in repeated settings where reputation is important and where individuals share the norm of keeping agreements, reputation and shared norms are insufficient by themselves to produce stable cooperative behavior over the long run." And: "In all of the long-enduring cases, active investments in monitoring and sanctioning activities are quite apparent."

To the best of my knowledge, no Open Source project currently implements Ostrom's design principles for successful self-governance. To understand how Open Source communities might, let's go back to our running example.

Our two companies would negotiate rules for how to share the rewards of the Open Source project, and what level of contribution would be required in exchange. They would set up a contract where they both agree on how much each company can earn and how much each company has to invest. During the negotiations, various strategies can be proposed for how to cooperate. However, both parties need to agree on a strategy before they can proceed. Because they are negotiating this contract among themselves, no external agent is required.

These negotiations are non-trivial. As you can imagine, any proposal that does not involve splitting the $100 fifty-fifty is likely rejected. The most likely equilibrium is for both companies to contribute equally and to split the reward equally. Furthermore, to arrive at this equilibrium, one of the two companies would likely have to go backwards in revenue, which might not be agreeable.

Needless to say, this gets even more difficult in a scenario where there are more than two companies involved. Today, it's hard to fathom how such a self-governance system can successfully be established in an Open Source project. In the future, Blockchain-based coordination systems might offer technical solutions for this problem.

Large groups are less able to act in their common interest than small ones because (1) the complexity increases and (2) the benefits diminish. Until we have better community coordination systems, it's easier for large groups to transition from self-governance to privatization or centralization than to scale self-governance.

The concept of major projects growing out of self-governed volunteer communities is not new to the world. The first trade routes were ancient trackways which citizens later developed on their own into roads suited for wheeled vehicles. Privatization of roads improved transportation for all citizens. Today, we certainly appreciate that our governments maintain the roads.

The road system evolving from self-governance to privatization, and from privatization to centralization

Model 2: Privatization of Open Source governance

In this model, Makers are rewarded unique benefits not available to Takers. These exclusive rights provide Makers a commercial advantage over Takers, while simultaneously creating a positive social benefit for all the users of the Open Source project, Takers included.

For example, Mozilla has the exclusive right to use the Firefox trademark and to set up paid search deals with search engines like Google, Yandex and Baidu. In 2017 alone, Mozilla made $542 million from searches conducted using Firefox. As a result, Mozilla can make continued engineering investments in Firefox. Millions of people and organizations benefit from that every day.

Another example is Automattic, the company behind WordPress. Automattic is the only company that can use, and is in the unique position to make hundreds of millions of dollars from, WordPress' official SaaS service. In exchange, Automattic invests millions of dollars in the Open Source WordPress each year.

Recently, there have been examples of Open Source companies like MongoDB, Redis, Cockroach Labs and others adopting stricter licenses because of perceived (and sometimes real) threats from public cloud companies that behave as Takers. The ability to change the license of an Open Source project is a form of privatization.

Model 3: Centralization of Open Source governance

Let's assume a government-like central authority can monitor Open Source companies A and B, with the goal to reward and penalize them for contribution or lack thereof. When a company follows a cooperative strategy (being a Maker), they are rewarded $25 and when they follow a defect strategy (being a Taker), they are charged a $25 penalty. We can update the pay-off matrix introduced above as follows:

                               Company A contributes         Company A doesn't contribute
Company B contributes          A makes $75 ($50 + $25)       A makes $35 ($60 - $25)
                               B makes $75 ($50 + $25)       B makes $45 ($20 + $25)
Company B doesn't contribute   A makes $45 ($20 + $25)       A makes -$15 ($10 - $25)
                               B makes $35 ($60 - $25)       B makes -$15 ($10 - $25)

We took the values from the pay-off matrix above and applied the rewards and penalties. The result is that both companies are incentivized to contribute and the optimal equilibrium (both become Makers) can be achieved.
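A quick sketch, reusing the hypothetical pay-offs from the running example, verifies that once the central authority's $25 reward and $25 penalty are applied, contributing becomes each company's dominant strategy:

```python
REWARD, PENALTY = 25, 25

# Raw pay-offs from the original matrix: base[(a, b)] = (A's take, B's take),
# where "C" = contribute (Maker) and "D" = don't contribute (Taker).
base = {
    ("C", "C"): (50, 50),
    ("C", "D"): (20, 60),
    ("D", "C"): (60, 20),
    ("D", "D"): (10, 10),
}

def regulate(pay, a, b):
    """Apply the central authority's reward for contributing and
    penalty for not contributing to a raw pay-off pair."""
    adj = lambda p, move: p + REWARD if move == "C" else p - PENALTY
    return adj(pay[0], a), adj(pay[1], b)

regulated = {(a, b): regulate(p, a, b) for (a, b), p in base.items()}

# With regulation, contributing is each company's dominant strategy:
# whatever the other does, contributing pays more than defecting.
for b in ("C", "D"):
    assert regulated[("C", b)][0] > regulated[("D", b)][0]
for a in ("C", "D"):
    assert regulated[(a, "C")][1] > regulated[(a, "D")][1]

print(regulated[("C", "C")])  # mutual contribution pays (75, 75)
```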

The money for rewards could come from various fundraising efforts, including membership programs or advertising (just as a few examples). However, more likely is the use of indirect monetary rewards.

One way to implement this is Drupal's credit system. Drupal's non-profit organization, the Drupal Association, monitors who contributes what. Each contribution earns you credits, and the credits are used to provide visibility to Makers. The more you contribute, the more visibility you get on (visited by 2 million people each month) or at Drupal conferences (called DrupalCons, visited by thousands of people each year).

Example issue credit on a screenshot of an issue comment shows that jamadar worked on this patch as a volunteer, but also as part of his day job working for TATA Consultancy Services on behalf of their customer, Pfizer.

While there is a lot more the Drupal Association could and should do to balance its Makers and Takers and achieve a more optimal equilibrium for the Drupal project, it's an emerging example of how an Open Source non-profit organization can act as a regulator that monitors and maintains the balance of Makers and Takers.
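A minimal sketch of such a credit ranking might look as follows. Note that the organization names and per-contribution weights here are entirely hypothetical illustrations, not Drupal's actual algorithm:

```python
from collections import Counter

# Hypothetical contribution log: (organization, contribution type).
contributions = [
    ("org_a", "patch"), ("org_a", "patch"), ("org_a", "event"),
    ("org_b", "patch"),
]

# Assumed weights per contribution type; a real system tunes these carefully.
weights = {"patch": 10, "event": 5, "documentation": 3}

credits = Counter()
for org, kind in contributions:
    credits[org] += weights[kind]

# Rank organizations by credits earned — more contribution, more visibility.
ranking = [org for org, _ in credits.most_common()]
print(ranking)  # ['org_a', 'org_b']
```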

The big challenge with this approach is the accuracy of the monitoring and the reliability of the rewarding (and sanctioning). Because Open Source contribution comes in different forms, tracking and valuing Open Source contribution is a very difficult and expensive process, not to mention full of conflict. Running this centralized government-like organization also needs to be paid for, and that can be its own challenge.

Concrete suggestions for scaling and sustaining Open Source

Suggestion 1: Don't just appeal to organizations' self-interest, but also to their fairness principles

If, like most economic theorists, you believe that organizations act in their own self-interest, we should appeal to that self-interest and better explain the benefits of contributing to Open Source.

Despite the fact that hundreds of articles have been written about the benefits of contributing to Open Source — highlighting speed of innovation, recruiting advantages, market credibility, and more — many organizations still miss these larger points.

It's important to keep sharing Open Source success stories. One thing that we have not done enough is appeal to organizations' fairness principles.

While a lot of economic theories correctly assume that most organizations are self-interested, I believe some organizations are also driven by fairness considerations.

Despite the term "Takers" having a negative connotation, it does not assume malice. For many organizations, it is not apparent if an Open Source project needs help with maintenance, or how one's actions, or lack thereof, might negatively affect an Open Source project.

As mentioned, Acquia is a heavy user of Varnish Cache. But as Acquia's Chief Technology Officer, I don't know if Varnish needs maintenance help, or how our lack of contribution negatively affects Makers in the Varnish community.

It can be difficult to understand the consequences of our own actions within Open Source. Open Source communities should help others understand where contribution is needed, what the impact of not contributing is, and why certain behaviors are not fair. Some organizations will resist unfair outcomes and behave more cooperatively if they understand the impact of their behaviors and the fairness of certain outcomes.

Make no mistake though: most organizations won't care about fairness principles and will only contribute when they have to. For example, most people would not voluntarily redistribute 25-50% of their income to those who need it. However, most of us agree to redistribute money by paying taxes, but only so long as all others have to do so as well and the government enforces it.

Suggestion 2: Encourage end users to offer selective benefits to Makers

We talked about Open Source projects giving selective benefits to Makers (e.g. Automattic, Mozilla, etc.), but end users can give selective benefits as well. For example, end users can mandate Open Source contributions from their partners; there have been successful examples of this in the Drupal community.

If more end users of Open Source took this stance, it could have a very big impact on Open Source sustainability. For governments, in particular, this seems like a very logical thing to do. Why would a government not want to put every dollar of IT spending back in the public domain? For Drupal alone, the impact would be measured in tens of millions of dollars each year.

Suggestion 3: Experiment with new licenses

I believe we can create licenses that support the creation of Open Source projects with sustainable communities and sustainable businesses to support it.

For a directional example, look at what MariaDB did with their Business Source License (BSL). The BSL gives users complete access to the source code so users can modify, distribute and enhance it. Only when you use more than x of the software do you have to pay for a license. Furthermore, the BSL guarantees that the software becomes Open Source over time; after y years, the license automatically converts from BSL to General Public License (GPL), for example.

A second example is the Community Compact, a license proposed by Adam Jacob. It mixes together a modern understanding of social contracts, copyright licensing, software licensing, and distribution licensing to create a sustainable and harmonious Open Source project.

We can create licenses that better support the creation, growth and sustainability of Open Source projects and that are designed so that both users and the commercial ecosystem can co-exist and cooperate in harmony.

I'd love to see new licenses that encourage software free-riding (sharing and giving), but discourage customer free-riding (unfair competition). I'd also love to see these licenses support many Makers, with built-in equity and fairness principles for smaller Makers or those not able to give back.

If, like me, you believe there could be future licenses that are more "Open Source"-friendly, not less, it would be smart to implement a contributor license agreement for your Open Source project; it allows Open Source projects to relicense if/when better licenses arrive. At some point, current Open Source licenses will be at a disadvantage compared to future Open Source licenses.


As Open Source projects grow, volunteer-driven, self-organized communities become harder to scale. Large Open Source projects should find ways to balance Makers and Takers, or they risk stalling under the weight of Takers.

Fortunately, we don't have to accept that future. However, this means that Open Source communities potentially have to get comfortable experimenting with how to monitor, reward and penalize members in their communities, particularly if they rely on a commercial ecosystem for a large portion of their contributions. Today, that goes against the values of most Open Source communities, but I believe we need to keep an open mind about how we can grow and scale Open Source.

Making it easier to scale Open Source projects in a sustainable and fair way is one of the most important things we can work on. If we succeed, Open Source can truly take over the world — it will pave the path for every technology company to become an Open Source business, and also solve some of the world's most important problems in an open, transparent and cooperative way.

I published the following diary on “Agent Tesla Trojan Abusing Corporate Email Accounts“:

The trojan ‘Agent Tesla’ is not brand new; discovered in 2018, it is written in Visual Basic and has plenty of interesting features. Just have a look at the MITRE ATT&CK overview of its TTPs. I found a sample of Agent Tesla spread via a classic email campaign. The sample is delivered in an ACE archive called ‘Parcel Frieght Details.pdf.ace’ (SHA256:d990171e0227ea9458549037fdebe2f38668b1ccde0d02198eee00e6b20bf22a). You can spot the typo in the file name (‘frieght’ instead of ‘freight’). The archive has a VT score of 8/57. Inside the archive, there is a PE file with the same typo: ‘Parcel Frieght Details.pdf.exe’ (SHA256:5881f0f7dac664c84a5ce6ffbe0ea84427de6eb936e6d8cb7e251d9a430cd42a). The PE file was unknown on VT at the time of writing this diary… [Read more]

[The post [SANS ISC] Agent Tesla Trojan Abusing Corporate Email Accounts has been first published on /dev/random]

September 18, 2019

This is a picture of a very early test setup for the upcoming video boxes for FOSDEM.

Lime2 with test board, capturing 720p

The string of hardware in front of the display (my 4:3 projector analogue, attached over hdmi) is:
* A status LCD, showing the live capture in the background (with 40% opacity).
* An Olimex Lime2 (red board) with our test board on top (green board).
* An Adafruit TFP401 hdmi to parallel RGB decoder board (blue board).

You can see the white hdmi cable running from the lime2 hdmi out to the monitor. This old monitor is my test "projector"; the fact that it is 4:3 makes it a good test subject.

You can also see a black cable from the capture board to another blue board with a red LED. This is a Banana Pi M1, the SBC currently used in the FOSDEM video boxes, and I had one lying around anyway, doing nothing. It spews out a test image.

What you are seeing here is live captured data at 1280x720@60Hz, displayed on the monitor, and in the background of the status LCD, with a 1 to 2 frame delay.

Here is a close-up of the status LCD:

Video box status lcd, with live capture in the background.

The text and logos in front are just mockups; the text will of course be made dynamic, as otherwise it would not be much of a status display.

And here is a close-up of the 1280x1024 monitor:
Video box status lcd, with live capture in the background.

You will notice five 16x16 blocks: one in each corner, and one smack-dab in the middle. They encode the position on screen in the B and G components, and the frame count in the R component.

The utility that is running on the lime2 board (fosdem-video-juggler) displays the captured frames on both the LCD and the monitor. It currently tests all four corners of all 5 blocks.
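As a sketch of how such marker blocks could work: the exact channel mapping and block layout inside fosdem-video-juggler are not spelled out here, so the function names, the B=x/G=y assignment and the corner positions below are all illustrative assumptions, not the actual implementation.

```python
# Hypothetical sketch: 16x16 marker blocks whose B and G channels carry
# the block's screen position and whose R channel carries a frame count.
# Layout and channel assignment are assumptions for illustration only.

def make_marker(x, y, frame):
    """Return the (r, g, b) value painted into the 16x16 block at (x, y)."""
    return (frame & 0xFF, y & 0xFF, x & 0xFF)

def verify_marker(pixel, x, y, frame):
    """Check a captured pixel against the expected marker value."""
    return pixel == make_marker(x, y, frame)

def frame_ok(sample, width, height, frame):
    """A captured frame is pixel-perfect if all five blocks match.

    sample(x, y) -> (r, g, b); we check the four corners plus the centre.
    """
    positions = [(0, 0), (width - 16, 0), (0, height - 16),
                 (width - 16, height - 16),
                 ((width - 16) // 2, (height - 16) // 2)]
    return all(verify_marker(sample(x, y), x, y, frame)
               for x, y in positions)
```

Checking the blocks against a monotonically increasing frame counter is what lets a long soak run distinguish "no pixels lost" from "a frame was silently dropped or repeated".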

At 1920x1080, a 24h run showed about 450 pixels being off. Those were all at the start of the frame (pixels 0,0 and 1,0), which is where our temporary TFP401 board has known issues. This should be fixed when moving to an ADV7611 decoder.

The current setup, and the target resolution for FOSDEM, is 1280x720 (with a very specific modeline for the TFP401). Not a single pixel has been off in an almost 40h run. Not a single one in 8.6 million frames.

The starry-eyed plan

FOSDEM has 28 parallel tracks, all streamed out live. Almost 750 talks on two days, all streamed out live. Nobody does that, period.

What drives this, next to a lot of blood, sweat and tears, are 28 speaker/slides video boxes, and 28 camera video boxes, plus spares. They store the capture, and stream it over the network to be re-encoded on one refurbished laptop per track, which then streams out live. The videobox stream also hits servers, marshalled in for final encoding after devroom manager/speaker review. The livestreams are just a few minutes behind; when devrooms are full, people are watching the livestream just outside the closed off devroom. As a devroom manager, the review experience is pretty slick, with review emails being received minutes after talks are over and videos being done in the next hour or so after that.

The video boxes themselves have an ingenious mix of an hdmi scaler, an active hdmi splitter, a hardware h.264 encoder (don't ask), a Banana Pi M1 (don't ask), a small SSD, an ethernet switch, and an ATX PSU. All contained in a big, heavy, laser-cut wooden box. It's full of smart choices, and it does what it should do, but it is far from ideal.

There are actually quite a few brilliant bits going on in there. Like the IR LED that controls the hdmi scaler's OSD menu. That LED is connected to the headphone jack of the Banana Pi, and Gerry has hacked up PCM files to be able to set scaling resolutions. Playing audio to step through an OSD menu! Gerry even 3D printed special small brackets, that fit in a usb connector, to hold the IR LED in place. Insane! Brilliant! But a sign that this whole setup is far from ideal.
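The idea behind that hack is that an IR remote signal is just a carrier (commonly around 38 kHz) gated on and off, which a sound card can approximate with PCM samples. The sketch below illustrates the principle only: the carrier frequency, sample rate and mark/space timings are generic assumptions, not the actual values Gerry used for the scaler's OSD.

```python
# Hedged sketch: generating an IR carrier burst as 16-bit PCM samples,
# so that "playing audio" through an LED on the headphone jack emits an
# IR signal. All timing values here are illustrative assumptions.
import math

def ir_burst(duration_ms, carrier_hz=38000, sample_rate=192000):
    """Generate 16-bit PCM samples approximating an IR carrier burst."""
    n = int(sample_rate * duration_ms / 1000)
    return [int(32767 * math.sin(2 * math.pi * carrier_hz * i / sample_rate))
            for i in range(n)]

def ir_space(duration_ms, sample_rate=192000):
    """Silence between bursts."""
    return [0] * int(sample_rate * duration_ms / 1000)

# A hypothetical remote code as mark/space pairs (milliseconds):
pcm = ir_burst(9.0) + ir_space(4.5) + ir_burst(0.56) + ir_space(0.56)
```

Writing such a sample stream to a WAV file and playing it back is then enough to drive the LED; the receiver only cares about the envelope of the carrier, not about audio fidelity.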

So last November, during the video team meetup, we mused about a better solution: if we get HDMI and audio captured straight by the Allwinner SoC, then we can drive the projector directly from the HDMI out, and use the on-board h.264 encoder to convert the captured data into a stream that can be sent out over the network. This would give us full control over all aspects of video capture, display and encoding, and the whole thing would be done in OSHW, with full open source software (and mainline support). Plus, the whole thing would be a lot cheaper, a lot smaller, and thus easier to handle, and we would be able to have a ton of hot spares around.

Hashing it out

During the event, in the rare bits of free time, we were able to further discuss this plan. We also had a chat with Tsvetan from Olimex, another big fan of FOSDEM, as to what could be done for producing the resulting hardware, what restrictions there are, how much lead time would be needed etc.

Then, on Monday, Egbert Eich, Uwe Bonnes and I had our usual "we've been thrown out of our hotels, but still have a few hours to kill before the ICE drives us back"-brunch. When I spoke about the plan, Uwe's face lit up and he started to ask all sorts of questions. Designing and prototyping electronics is his day job at the University of Darmstadt, and he offered to design and even prototype the necessary daughterboard. We were in business!

We decided to stick with the known and trusted Allwinner A20, with its extensive mainline support thanks to the large linux-sunxi community.

For the SBC, we quickly settled on the Olimex Lime2, as it just happened to expose all the pins we needed. Namely, LCD0, CSI1 (Camera Sensor Interface, and most certainly not MIPI-CSI, thanks Allwinner), I2S input and some I2C and interrupt lines. Like most things Olimex, it's fully OSHW. A quick email to Tsvetan from Olimex, and we had ourselves a small pile of Lime2 boards and supporting materials.

After a bit of back and forth, we chose the Analog Devices ADV7611 for a decoder chip, an actual HDMI to parallel RGB + I2S decoder. This is a much more intelligent and powerful converter than the rather dumb (no runtime configuration), and much older, TFP401, which actually is a DVI decoder. As an added bonus, Analog Devices is also a big open source backer, and the ADV7611 has fully open documentation.

Next to FOSDEM, Uwe also makes a point of visiting Embedded World here in Nuernberg. So just 2 weeks after FOSDEM, Uwe and I were able to do our first bit of napkin engineering in a local pub here :)

Then, another stroke of luck... While googling for more information about the ADV7611, I stumbled over the VideoBrick project. Back in 2014/2015, when I was not paying too much attention to linux-sunxi, Georg Ottinger and Georg Lippitsch had the same idea, and reached the same conclusions (Olimex Lime1, ADV7611). We got in contact, and Georg (Ottinger ;)) even sent us his test board. They had a working test board, but could not get the timing right, and tried to work around it with a hw hack. Real life also kicked in, and the project ended up losing steam. We will re-use their schematic as the basis for our work, as they also made that OSHW, so their work will definitely live on :)

In late April, FOSDEM core organizers Gerry and Mark, and Uwe and I met up in Frankfurt, and we hashed out some further details. Now the plan included an intermediate board, which connects to the rudimentary TFP401 module, so we could get some software and driver work started, and verify the functionality of some hw blocks.

Back then, I was still worried about the bandwidth limitations of the Allwinner CSI1 engine and the parallel RGB bus. If we could not reliably get 1280x720@60Hz, with a reasonable colour space, then the A20, and probably the whole project, would not be viable.

720p only?

Mentioning 1280x720@60Hz probably got you worried now. Why not full HD? Well, 1280x720 is plenty for our use-case.

Speakers are liable to make very poor slide design choices. Sometimes slides are totally unsuited for display on projectors, and text often cannot be read across the room. FOSDEM also does not have the budget to rent top of the line projection equipment for every track; we have to use what is present at the university. So 720p already gives speakers far too many pixels to play with.

Then, the Sony cameras that FOSDEM rents (FOSDEM owns one itself), for filming the speaker from across the room, also do 1280x720, but at 30/25Hz. So 720p is what we need.

Having said that, we have in the meantime confirmed that the A20 can reliably handle full HD at 60Hz at full 24bit colour. With one gotcha... This starts to eat into our available bus and DRAM bandwidth, and the bursty nature of both capture and the scaler on each display pipe makes display glitchy with the current memory/bus/engine timing. So while slides will be 720p, we can easily support full HD for the camera, as there is no projector attached there, or in any case where we are not scaling.

Intermediate setup

Shortly after our Frankfurt meetup, Uwe designed and made the test boards. As said, this test board and the TFP401 module is just temporary hardware to enable us to get the capture and KMS drivers in order, and to get started on h.264 encoding. The h.264 encoding was reverse engineered as part of the cedrus project (Jens Kuske, Manuel Braga, Andreas Baierl) but no driver is present for this yet, so there's quite a bit of work there still.

The TFP401 is pretty picky when it comes to supported modes. It often messes up the start of a frame; a fact I first noticed on an Olimex 10" LCD that I connected to it to figure out signal routing. The fact that our capture engine does not know about Display Enable does not make things any better either. You can deduce from the heatsink in the picture that the TFP401 also runs warm. None of that matters for the final solution, as we will use the ADV7611 anyway.

Current status

We currently have the Banana Pi preloading a specific EDID block with a TFP401-specific modeline. And then we have our CSI1 driver preset its values to work with the TFP401 correctly (we will get that info from the ADV7611 in future). With those settings, we do have CSI capture running reliably, and we are showing it reliably on both the status LCD and the "projector". Not a single pixel has been lost in a ~40h run, with about 21.6TB of raw pixel data transferred, or 8.6 million frames (I then lost 3 frames when I logged into the Banana Pi and ran apt-get upgrade; wait and see how we work around load issues and KMS in future).

The Allwinner A20 capture engine has parallel RGB (full 24bit, mind you), a pixel clock, and two sync signals. It does not have a display enable line, and needs to be told how many clocks behind sync the data starts. This was the source of confusion for the VideoBrick guys, and it forces us to work around the TFP401's limitations.

Another quirk of the design of the capture engine is that it uses one FIFO per channel, and each FIFO outputs to a separate memory address, with no interleaving possible. This makes our full 24bit capture a very unique planar RGB888, or R8G8B8 as I have been calling it. This is not as big an issue as you would think. Any colour conversion engine (we have 2 per display pipe, and at least one in the 2d engine) can do a planar-to-packed conversion for all of the planar YUV formats out there. The trick is to use an identity matrix for the actual colour conversion, so that the subpixel values go through unchanged.
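The identity-matrix trick can be illustrated with a minimal software model of such a colour-space conversion engine. This is only a sketch of the principle; the real work happens in fixed-function hardware, and the function names here are invented for illustration.

```python
# Illustrative model of a colour-space conversion engine: it reads three
# planes, applies a 3x3 matrix per pixel, and writes packed pixels. With
# the identity matrix, planar R8G8B8 comes out as packed RGB with every
# subpixel value untouched -- the "conversion" is a pure repack.

IDENTITY = [[1, 0, 0],
            [0, 1, 0],
            [0, 0, 1]]

def csc_planar_to_packed(plane_a, plane_b, plane_c, matrix):
    """Apply a 3x3 matrix per pixel and interleave the planes."""
    packed = []
    for a, b, c in zip(plane_a, plane_b, plane_c):
        packed.append(tuple(
            min(255, max(0, int(row[0] * a + row[1] * b + row[2] * c)))
            for row in matrix))
    return packed

# Two-pixel example: planes interleave, values pass through unchanged.
r_plane, g_plane, b_plane = [10, 20], [30, 40], [50, 60]
print(csc_planar_to_packed(r_plane, g_plane, b_plane, IDENTITY))
# [(10, 30, 50), (20, 40, 60)]
```

The same engine would normally be loaded with, say, a YUV-to-RGB matrix; loading the identity matrix instead abuses it as a free planar-to-packed repacker.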

Unsurprisingly though, neither V4L2 nor DRM/KMS currently knows about R8G8B8 :)

What you see working above already took all sorts of fixes all over the sunxi mainline code. I have collected about 80 patches against the 5.2 mainline kernel. Some are very board-specific, some are new functionality (CSI1, sprites as KMS planes), but there are also some fixes to DRM/KMS, and some rather fundamental improvements to dual display support in the sun4i kms driver (like not driving our poor LCD at 154Hz). A handful of those are ready for immediate submission to mainline, and the rest will be sent upstream in time.

I've also learned that "atomic KMS" is not really atomic, and instead is "batched" (oh, if only I had been allowed to follow my Nokia team to Intel back in 2011). This forces our userspace application to be a highly threaded affair.

The fundamental premise of our userspace application is that we want to keep all buffers in flight. At no point should raw buffers be duplicated in memory, let alone hit storage. Only the h.264 encoded stream should be sent to disk and network. This explains the name of this userspace application; "juggler". We also never want to touch pixel data with a CPU core, or even the GPU, we want the existing specialized hardware blocks to take care of all that (display, 2d engine, h.264 encoder), as they can do so most efficiently.

This juggler currently captures a frame through V4L2, then displays that buffer on the "projector" and on the status LCD, as overlays or "planes", at the earliest possible time. It then also tests the captured image for those 5 marker blocks, which is how we know that we are pixel perfect.

There is a one to two frame delay between hdmi-input and hdmi-output. One frame as we need to wait for the capture engine to complete the capture of a frame, and up to one frame until the target display is done scanning out the previous frame. While we could start guessing where the capture engine is in a frame, and could tie that to the line counter in the display engine, it is just not worth it. 1-2 frames, or a 16.67-33.33ms delay is more than good enough for our purposes. The delay on the h.264 stream is only constrained by the amount of RAM we have, as we do want to release buffers back to the capture engine at one point ;)
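The delay bounds quoted above follow directly from the 60 Hz refresh rate:

```python
# Delay bounds for a 1-2 frame latency at 60 Hz: one full frame to
# finish the capture, plus up to one more frame until the output
# display starts a new scanout.
refresh_hz = 60
frame_ms = 1000 / refresh_hz   # 16.67 ms per frame period
min_delay = 1 * frame_ms       # capture of the frame must complete
max_delay = 2 * frame_ms       # plus up to one scanout period
print(round(min_delay, 2), round(max_delay, 2))  # 16.67 33.33
```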

Current TODO list

Since we have full R8G8B8 input, I will have to push said buffer through the 2d engine (2d engines are easy, just like overlays), so that we can feed NV12 to the h.264 encoder. This work will be started now.

I also still need to write up hdmi hotplug, both on the input side (there's an interrupt for that) and on the output side.

The status text on the status LCD is currently a static png, as it is just a mockup. This too will become dynamic. The logo will become configurable. This, and other non-driver work, can hopefully be done by some of the others involved with FOSDEM video.

Once Uwe is back from vacation, he will start work on the first version of the full ADV7611 daughterboard. Once we have that prototyped, we need to go tie the ADV7611 into the CSI1 driver, and provide a solid pair of EDID blocks, and we need to tie I2S captured audio into the h.264 stream as well.

Lots of highly important work to be done in the next 4 months.

Where is this going?

We should actually not get ahead of ourselves too much. We still need to get close to not losing a single pixel at FOSDEM 2020. That's already a pretty heady goal.

Then, for FOSDEM 2021, we will work towards XLR audio input and output, and on needing only a single ethernet cable routed across a room (syncing up audio between camera and speaker boxes, instead of centralizing audio at the camera). We will add a 2.54mm pitch connector on top of the current hdmi capture daughterboard to support that audio daughterboard.

Beyond that, we might want to rework this multi-board affair to make an all-in-one board with the gigabit ethernet phy replaced with a switch chip, for a complete, single board OSHW solution. If we then tie in a small li-ion battery, we might be able to live without power for an hour or so. One could even imagine overflow rooms, where the stream of one video box is shown on another video box, all using the same hw and infrastructure.

All this for just FOSDEM?

There seems to be a lot of interest from several open source and hacker conferences. The core FOSDEM organizers are in contact with CCC and debconf over voctomix, and they also went to FOSSAsia and Froscon. Two other crucial volunteers, Vasil and Marian, are core organizers of Sofia's OpenFest and Plovdiv's TuxCon, and they use the existing video boxes already.

So unless we hit a wall somewhere, this hardware, with the associated driver support, userspace application, and all the rest of the FOSDEM video infrastructure will likely hit an open source or hacker conference near you in the next few years.

Each conference is different though, and this is where complete openness, both on the hardware side, and on all of the software solutions needed for this, comes in. The hope is that the cheap hardware will just be present with the organization of each conference, and that just some configuration needs to happen for this specific conference to have reliable capture and streaming, with minimal effort.

Another possible use of this hardware, and especially the associated driver support, is of course a generic streaming solution which is open source, dependable and configurable. It is much smarter to work with this hardware than to try to reverse engineer the IT9919-style devices, for instance (usually known as HDMI extenders).

Then, with the openness of the ADV7611, and the more extensive debugfs support that I fear it is bound to grow soon, this hardware will also allow for automated display driver testing. Well, at least up to 1080p60Hz HDMI 1.4a. There actually was a talk in my devroom about this a while ago (which talked about one of those HDMI extenders).

So what's the budget for this?


FOSDEM is free to attend, and has a tiny budget. The biggest free/open source software conference on the planet, with their mad streaming infrastructure, spends less per visitor than said visitor will spend on public transportation in Brussels over the weekend. Yes, if you take a cab from the city center to the venue, once, you will already have spent way more than the budget that FOSDEM has allocated for you.

FOSDEM is a full volunteer organization, nobody is paid to work on FOSDEM, not even the core organizers. Some of their budget comes from corporate sponsors (with a limit on what they can spend), visitors donating at the event, and part of the revenue of the beer event. This is what makes it possible for FOSDEM to exist as it does, independent of large corporate sponsors who then would want a controlling stake. This is what makes FOSDEM as grass roots and real as it is. This is why FOSDEM is that great, and also, this is why FOSDEM has endured.

The flip side of that is that crucial work like this cannot be directly supported. And neither should it be, as such a thing would quickly ruin FOSDEM, as then politics takes hold and bickering over money starts. I do get to invoice the hardware I needed to buy for this, but between Olimex's support, and the limited bits that you see there, that's not exactly breaking the bank ;)

With the brutal timeline we have though, it is nearly impossible for me, a self-employed consultant, to take in paid work for the next few months. Even though recruiting season is upon us again, I cannot realistically accept work at this time, not without killing this project in its tracks. If I do not put in the time now, this whole thing is just not going to happen. We've now gone deep enough into this rabbit hole that there is no way back, but there is a lot of work to be done still.

Over the past few months I have had to field a few questions about this undertaking. Statements like "why don't you turn this into a product you can sell" are very common. And my answer is always the same: neither selling the hardware, nor selling the service, nor selling the whole solution is a viable business. Even if we did not do OSHW, the hardware is easily duplicated, so there is no chance for any real revenue, let alone margin, there. Selling a complete solution, or trying to run it as a service, quickly devolves into a team of people travelling the globe for two times 4 months a year (conference season). That means carting people and hardware around the globe, dealing with customs, fixing and working around endless network issues, adjusting to individual conferences' needs as to what the graphics on the streams should look like, then dealing with stream post-production for weeks after each event. That's not going to help anyone or anything either.

The only way for this to work is for the video capture hardware, the driver support, the "juggler", and the associated streaming infrastructure to be completely open.

We will only do a limited run of this hardware, to cover our own needs, but the hardware is and will remain OSHW, and anyone should be able to pay a company like Olimex to make another batch. The software is all GPLv2 (u-boot/kernel) and GPLv3 (juggler). This will simply become a tool that conference organizers can deploy easily and adjust as needed, but this also makes conferences responsible for their own captures.

This does mean that the time spent on writing and fixing drivers, writing the juggler tool, and designing the hw, all cannot be directly remunerated in a reasonable fashion.

Having said that, if you wish to support this work, I will happily accept PayPal donations (at libv at skynet dot be). If you have a VAT number, and wish to donate a larger amount, I can provide proper invoices for my time. Just throw me an email!

This would not be a direct donation to FOSDEM though, and I am not a core FOSDEM organizer, nor do I speak for the FOSDEM organization. I am just a volunteer who has been writing graphics drivers for the past 16 years, and who is, first and foremost, a devroom manager (since 2006). I am "simply" the guy who provides a solution to this problem and who makes this new hardware work. You will however be indirectly helping FOSDEM, and probably every other free/open source/hacker conference out there.

Lots of work ahead though. I will keep you posted when we hit our next milestones, hopefully with a less humongous blog entry :)

September 17, 2019

Even if not everybody realizes it consciously, our world is becoming incredibly more virtual, borderless and decentralized. Fighting the trend may only make the transition more violent. We may as well embrace it fully and ditch our old paradigms to prepare for a new kind of society.

How we built the virtual world

Virtual reality is always depicted by science fiction as something scary, something not so far away that allows us to spend our time connected to imaginary worlds instead of interacting with reality. An artificial substitute for good old-fashioned life, a drug, an addiction.

Is it a dystopian prediction? Nowadays, white-collar workers spend most of their waking time interacting through a screen. Answering emails for work, chatting with colleagues on Slack, attending online meetings on Skype, looking at their friends' Instagram during breaks and commutes, playing games and watching series in the evening.

The geographical position of the people with whom we interact is mostly irrelevant. That colleague might be just a few metres across the room or in the Beijing office of the company. That friend might be a neighbour or a university acquaintance currently on a trip to Thailand. It doesn’t matter. We all live, to different degrees, in that huge global connected world which is nothing but a virtual reality.

This can be observed in our vocabulary. While, only a few years ago, we were speaking of "online meetings", "remote working" and "chatting on the Internet", those have become the norm, the default. It has to be specified when it's not online. Job offers have to announce that "remote working is not possible for this position". There are "meetings" and "on-premise events". You would specify that you met someone "in person". Even the acronym "LOL" is now commonly used as a verbal interjection "in real life". That "real life" expression is often used as if our online life were not real, as if most of our waking time were imaginary. As an anecdote, the hacker culture coined the term AFK, "Away From Keyboard", to counter the negative connotation implied by "non-real life", but we are now connected without keyboards anyway.

Blinding ourselves to post-scarcity

Part of the appeal of our online lives might lie in their limitless capabilities. In that world, we are not bound by the finite resources of matter. We can be everywhere in the world at the same time, we can take part in many discussions, we can consume endless content, learn, entertain ourselves. In fact, we can even have multiple identities, be our different selves. At the same time!

While most of our economy is based on scarcity of goods, the online world offers us a post-scarcity society. As businesses move online, barriers and limitations are gradually removed. The only remaining scarce resource is our time, our attention. That's why the online economy is now dubbed the "economy of attention", even if a better name would probably be "economy of distraction". But even in the craziest science-fiction books, post-scarcity is rarely imagined; The Culture, by Iain M. Banks, is one of the famous exceptions.

In that new world where geographical location and passport identity don't really matter, we rely on some technical "tricks" to apply the old rules and pretend nothing has changed. Servers use IP addresses to guess the country of the client computer and follow the local legislation, not even considering that using a VPN is common practice. State officials use the geographical location of a physical hard drive to decide which regulation to follow, blinding themselves to the fact that most data are now mirrored around the world. They might also use the country of residence of the owner of those computers, company or individual, to claim taxes. Copyright enforcement and DRM are only legal and technical ways to introduce artificial scarcity paradigms into a post-scarcity environment.

But those are mainly gimmicks. The very concepts of country, local regulation, border and scarcity of information no longer make sense for the rich and educated part of the population. This has long been the case for the very rich and their tax-evasion schemes, but it is becoming more and more accessible to the middle class every day. History repeats itself: what starts as a luxury becomes more common and affordable, before becoming something self-evident, as if it had always been there.

One might even say that's one reason borders are becoming so violent and reckless: they are mainly trying to preserve their own existence, from invasive, annoying and meaningless controls at airports to literally going to war against poor people. Refugees are running away from violence and poverty while we try to prevent them from crossing a line which exists only in our imagination. Lines that were drawn at some point in history to protect scarce resources which are now abundant.

Is it going too far to dream about a borderless post-scarcity world?

The frogs in the kettle of innovation

Innovation and societal change rarely happen through a single invention. An invention only makes sense in a broader context, when the world is ready. The switch is often so subtle, so quick, that we immediately forget our old notions. Just like the frog in the slowly boiling kettle, we don't realize that a change is happening. We are still telling our children to finish their plates because people are dying of hunger. But what if we told them that more people now die from eating too much?

If you had invented the road bike during the Middle Ages, it would have been perceived as useless. Your first bike prototype could not have coped with the roads and paths of that time. And it would probably have cost a lot more than a horse, which could travel everywhere. After the era of the horse and the era of the car, we are witnessing how the bike might become the best individual transportation platform inside a city. In fact, it already is in cities like Amsterdam or Copenhagen.

Are Danish and Dutch bikes different? Absolutely not. The cities are. They were transformed to become bike-friendly just like we purposely transformed our cities to become car-friendly at the start of the twentieth century. Urban planners, car makers and economic interests worked together for decades to create a world where a city without cars is unthinkable. From luxury goods, cars became affordable then obvious. A city without cars? It would be like a country without borders, a citizen without citizenship…

As recently as 15 years ago, mobile Internet was seen as a useless toy by all but a few elites. You could only access WAP-specific websites and the connection was awfully slow. This didn't matter because most of our phones had black-and-white screens unable to display more than a few lines of text. Even laptops were heavier, slower and more expensive than their desktop counterparts. Plugging in an RJ45 cable was required to access the Internet. Such cables were available in most hotel rooms.

In 2007, Apple introduced the first "smartphone" with a touchscreen and without a keyboard (much to the laughter of BlackBerry owners). There were not even apps at the time but, suddenly, the infrastructure fell into place. 2G became 3G became 4G. The market demanded better coverage from mobile phone operators. Developers started to design "apps". Websites became "alternative mobile version available", then "mobile first", then "responsive". In less than a decade, we moved from "mobile Internet" being a useless geek dream to it being the default reality. Most Internet usage is mobile nowadays. If not on a mobile phone, people work on a very light laptop in a coffee shop, connecting through their phone network because the coffee shop's wifi is not fast enough for them. My own internet connection has half the speed of the 4G on my phone. The move was so quick, so efficient, that we immediately forgot what it was like not to have mobile Internet. We switched from "crazy geek dream" to "granted normality" without intermediary steps. From luxury to affordable to obvious in only a few years.

Blockchains are the first seed of true decentralization

We are witnessing the same process with blockchains and decentralization initiatives. Most people are currently dismissing it as “a geek dream”, “a bike in the Middle Ages”. But the infrastructure is moving. Some of today’s solutions will be dismissed, like WAP websites. Some are temporary measures. But the whole world is moving toward more decentralization, fewer borders, less material constraints, less scarcity.

Blockchains and decentralized technologies are only a thin layer of innovation applied on top of the whole telecommunication stack. They are the icing on the cake which may kill forever the whole idea of our world being a scattered set of countries randomly spread around the globe.

With the invention of Bitcoin and other cryptocurrencies, states become powerless when it comes to controlling citizens' wealth and collecting taxes. What is their added value in a world where decisions can be taken through new collective and decentralized governance mechanisms? Citizens are starting to choose their country of citizenship as a service, comparing offers and advantages. While places like Monaco, Panama and Switzerland have long been on this market as an "exclusive club for the rich elite", Estonia is pioneering the "country for digital nomads" niche with its e-residency program. Countries are mostly becoming identity providers. But this might be temporary, as a state-certified identity may become the next RJ45 cable: useful only in some circumstances for a given set of people, unknown to others. Identity will move from "a name on a passport issued by an arbitrary state" to something a lot more subtle, more related to your reputation amongst your peers. Being stateless may become common.

Fighting decentralization or helping to build it?

This evolution might be exciting for technologists facing the painfully slow heaviness of a centralized administration designed in the 19th century, or for activists fighting corruption. But it might also frighten the social-minded people who see the state as a tool of redistribution and protection of the minorities.

The danger would be to focus only on possible problems and to fight this globalization trend as a whole, opposing the decentralization technologies themselves. Some may try to preserve the nation-state paradigm at all costs with a simple argument: “We cannot let people decide by themselves”. One way or another, every single argument against decentralization is a variation of this authoritarian thinking.

But don’t worry. By its very nature, decentralization is resilient. It cannot be stopped. Fighting it can only add more violence to the transition. Fear of decentralization will probably give fuel to opposition forces with specific interests like authoritarian states and centralized monopolies but, on the historical level, this will be nothing more than a hiccough.

The question we are facing is straightforward: how to build a decentralized and borderless future respectful of our values?

The answer is, of course, a lot more complicated, as we have very different, sometimes conflicting sets of values. One thing seems clear: we cannot blindly trust one centralized power to do it for us. As shown by the Trump election or Brexit, the representative democracy paradigm itself is failing, as it is now merely a game of stealing your attention to gain your vote. Decentralization will be built, it goes without saying, in a decentralized way. In fact, it is happening right now.

Where the states have failed

Decentralization is not a “nice to have”. It is a mandatory requirement to address issues where states have demonstrated their incompetence. In the best scenarios, governments are making really slow progress while, in some cases, they are simply worsening the situation or opposing any form of resolution.

Global warming is one of the failures of our heavy and slow nation-state world. Despite a palpable sense of urgency, there’s a shared feeling that “nothing has been done”, that the states cannot handle the situation. Heads of state are proud to sign an “agreement” with the name of a city but is it enough?

Governments and states were designed to handle local communities and to go at war with each other, not to manage a global society. Most public administrations are still following a military-like chain-of-command design. What can we expect when nearly 8 billion humans are relying on a few hundred brains to solve the most important global problems?

Historically, every centralized regime has died under its own weight and has been overthrown by chaotic and decentralized collective intelligence.

By investing in and building more decentralized solutions, we are effectively building a new society where horizontal collaboration is the new norm. We are translating our values into code with the hope that this will preserve those values, as there might be no chiefs to impose them anymore. Today’s decentralized software projects are, right now, writing in their code what they think an identity should be, what the relations between two humans should be, what the minimal rights should be, what is allowed or not. Sometimes it is highly explicit and even the goal of the project, such as Duniter, a cryptocurrency with a built-in basic income mechanism; sometimes it is subtle and implicit. In any case, the source code we are writing today is the constitution of tomorrow.

Solving the unsolvable, inventing tomorrow

Our societies evolved because we were living in an infinite world with very scarce resources to survive. Today, we are transitioning toward a world where the planet is the only scarce resource while everything else is abundant.

What we choose to work on is telling a story about the future we want to build. This is a deep responsibility and may explain why so much effort goes into decentralization.

Because the goal of decentralization is not to overthrow centralized regimes but to collectively solve problems where our billions of brains are needed. The end of states, the evolution of identity and the post-scarcity society will only be consequences.

Photos by Matthieu Joannon, Alina Grubnyak, Alina Grubnyak again and Clarisse Croset on Unsplash

I am @ploum, lecturer and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE license.

September 16, 2019

The post Announcing Status Pages for the Oh Dear monitoring service appeared first on

We just shipped a major new feature for Oh Dear!: our status pages!

Super clean, intuitive & powerful -- just the way we like it. ;-)

Some of the features include:

We've got plenty more details in our announcement blogpost of the feature for you to read, too.

Perhaps the best part? It's a free feature for all existing Oh Dear! users! You can create as many status pages as you'd like, mix & match the sites you want and run them on any domain name you control.

We're proud to ship this one, it's a natural fit for our uptime monitoring service!

September 13, 2019

I published the following diary on “Rig Exploit Kit Delivering VBScript“:

I detected the following suspicious traffic on a corporate network. It was based on multiple infection stages and looked interesting enough to publish a diary about it. This is also a good reminder that, just by surfing the web, you can spot malicious scripts that will try to infect your computer (Exploit Kits). It started with a succession of HTTP redirects across multiple domains, all using the .xyz TLD… [Read more]

[The post [SANS ISC] Rig Exploit Kit Delivering VBScript has been first published on /dev/random]

September 12, 2019

I published the following diary on “Blocking Firefox DoH with Bind“:

For a few days, huge debates have started on forums and mailing lists regarding the announcement by Mozilla that it will enable DoH (DNS over HTTPS) by default in its Firefox browser. Since this announcement, Google also scheduled a move to this technology with the upcoming Chrome releases (this has been covered in today’s podcast episode). My goal here is not to start a new debate. DoH definitely has good points regarding privacy but the problem is always the way it is implemented. In corporate environments, security teams will for sure try to avoid the use of DoH for logging reasons (DNS logs are a gold mine in incident management and forensics)… [Read more]
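As a sketch of the general approach (my own illustration, not the exact configuration from the diary): Firefox queries the canary domain use-application-dns.net before enabling DoH, and any authoritative answer without A/AAAA records makes it fall back to the system resolver. With Bind 9, serving an empty zone for that name is enough; the zone file path below is the stock Debian one and is an assumption:

```
// named.conf fragment: answer authoritatively for Firefox's canary domain.
// An empty zone (SOA + NS only, no A/AAAA records) signals Firefox to
// keep using the classic system resolver instead of DoH.
zone "use-application-dns.net" {
    type master;
    file "/etc/bind/db.empty";  // stock empty zone file on Debian-based systems
};
```

Note that this only affects Firefox's default behavior; users who explicitly enabled DoH in the settings are not affected by the canary domain.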

[The post [SANS ISC] Blocking Firefox DoH with Bind has been first published on /dev/random]

September 11, 2019

For the past years, I've examined Drupal.org's contribution data to understand who develops Drupal, how diverse the Drupal community is, how much of Drupal's maintenance and innovation is sponsored, and where that sponsorship comes from.

You can look at the 2016 report, the 2017 report, and the 2018 report. Each report looks at data collected in the 12-month period between July 1st and June 30th.

This year's report shows that:

  • Both the recorded number of contributors and contributions have increased.
  • Most contributions are sponsored, but volunteer contributions remain very important to Drupal's success.
  • Drupal's maintenance and innovation depends mostly on smaller Drupal agencies and Acquia. Hosting companies, multi-platform digital marketing agencies, large system integrators and end users make fewer contributions to Drupal.
  • Drupal's contributors have become more diverse, but are still not diverse enough.


What are issues?

"Issues" are pages on Drupal.org. Each issue tracks an idea, feature request, bug report, task, or more. See the Drupal.org issue queues for the list of all issues.

For this report, we looked at all issues marked "closed" or "fixed" in the 12-month period from July 1, 2018 to June 30, 2019. The issues analyzed in this report span Drupal core and thousands of contributed projects, across all major versions of Drupal.

What are credits?

In the spring of 2015, after proposing initial ideas for giving credit, Drupal.org added the ability for people to attribute their work in the issues to an organization or customer, or mark it as the result of volunteer efforts.

A screenshot of an issue comment on Drupal.org. You can see that jamadar worked on this patch as a volunteer, but also as part of his day job working for TATA Consultancy Services on behalf of their customer, Pfizer. Drupal.org's credit system is truly unique and groundbreaking in Open Source and provides unprecedented insights into the inner workings of a large Open Source project. There are a few limitations with this approach, which we'll address at the end of this report.

What is the Drupal community working on?

In the 12-month period between July 1, 2018 and June 30, 2019, 27,522 issues were marked "closed" or "fixed", a 13% increase from the 24,447 issues in the 2017-2018 period.

In total, the Drupal community worked on 3,474 different projects this year compared to 3,229 projects in the 2017-2018 period — an 8% year over year increase.

The majority of the credits are the result of work on contributed modules:

A pie chart showing contributions by project type: most contributions are to contributed modules.

Compared to the previous period, contribution credits increased across all project types:

A graph showing the year over year growth of contributions per project type.

The most notable change is the large jump in "non-product credits": more and more members in the community started tracking credits for non-product activities such as organizing Drupal events (e.g. the DrupalCamp Delhi project, Drupal Developer Days, Drupal Europe and DrupalCon Europe), promoting Drupal (e.g. the Drupal pitch deck), or community working groups (e.g. the Drupal Diversity and Inclusion Working Group, the Governance Working Group).

While some of these increases reflect new contributions, others are existing contributions that are newly reported. All contributions are valuable, whether they're code contributions, or non-product and community-oriented contributions such as organizing events, giving talks, leading sprints, etc. The fact that the credit system is becoming more accurate in recognizing more types of Open Source contribution is both important and positive.

Who is working on Drupal?

For this report's time period, Drupal.org's credit system received contributions from 8,513 different individuals and 1,137 different organizations — a meaningful increase from last year's report.

A graph showing that the number of individual and organizational contributors increased year over year.

Consistent with previous years, approximately 51% of the individual contributors received just one credit. Meanwhile, the top 30 contributors (the top 0.4%) account for 19% of the total credits. In other words, a relatively small number of individuals do the majority of the work. These individuals put an incredible amount of time and effort into developing Drupal and its contributed projects:

A table listing the top 30 individual contributors by issue credits; for example, Wim Leers ranks 9th with 437 credits (the full top 50 is detailed later in this report).

Out of the top 30 contributors featured this year, 28 were active contributors in the 2017-2018 period as well. These Drupalists' dedication and continued contribution to the project has been crucial to Drupal's development.

It's also important to recognize that most of the top 30 contributors are sponsored by an organization. Their sponsorship details are provided later in this article. We value the organizations that sponsor these remarkable individuals, because without their support, it could be more challenging for these individuals to be in the top 30.

It's also nice to see two new contributors make the top 30 this year — Alona O'neill with sponsorship from Hook 42 and Thalles Ferreira with sponsorship from CI&T. Most of their credits were the result of smaller patches (e.g. removing deprecated code, fixing coding style issues, etc) or in some cases non-product credits rather than new feature development or fixing complex bugs. These types of contributions are valuable and often a stepping stone towards more in-depth contribution.

How much of the work is sponsored?

Issue credits can be marked as "volunteer" and "sponsored" simultaneously (shown in jamadar's screenshot near the top of this post). This could be the case when a contributor does the necessary work to satisfy the customer's need, in addition to using their spare time to add extra functionality.

For those credits with attribution details, 18% were "purely volunteer" credits (8,433 credits), in stark contrast to the 65% that were "purely sponsored" (29,802 credits). While there are almost four times as many "purely sponsored" credits as "purely volunteer" credits, volunteer contribution remains very important to Drupal.

Contributions by volunteer vs sponsored

Both "purely volunteer" and "purely sponsored" credits grew — "purely sponsored" credits grew faster in absolute numbers, but for the first time in four years "purely volunteer" credits grew faster in relative numbers.

The large jump in volunteer credits can be explained by the community capturing more non-product contributions. As can be seen on the graph below, these non-product contributions are more volunteer-centric.

A graph showing how much of the contributions are volunteered vs sponsored.

Who is sponsoring the work?

Now that we've established that the majority of contributions to Drupal are sponsored, let's study which organizations contribute to Drupal. While 1,137 different organizations contributed to Drupal, approximately 50% of them received four credits or less. The top 30 organizations (roughly the top 3%) account for approximately 25% of the total credits, which implies that the top 30 companies play a crucial role in the health of the Drupal project.

Top contributing organizations based on the number of issue credits.

While not immediately obvious from the graph above, a variety of different types of companies are active in Drupal's ecosystem:

Category Description
Traditional Drupal businesses Small-to-medium-sized professional services companies that primarily make money using Drupal. They typically employ fewer than 100 employees, and because they specialize in Drupal, many of these professional services companies contribute frequently and are a huge part of our community. Examples are Hook42, Centarro, The Big Blue House, Vardot, etc.
Digital marketing agencies Larger full-service agencies that have marketing-led practices using a variety of tools, typically including Drupal, Adobe Experience Manager, Sitecore, WordPress, etc. They tend to be larger, with many of the larger agencies employing thousands of people. Examples are Wunderman, Possible and Mirum.
System integrators Larger companies that specialize in bringing together different technologies into one solution. Example system integrators are Accenture, TATA Consultancy Services, Capgemini and CI&T.
Hosting companies Examples are Acquia, Rackspace, Pantheon and Platform.sh.
End users Examples are Pfizer or bio.logis Genetic Information Management GmbH.

A few observations:

  • Almost all of the sponsors in the top 30 are traditional Drupal businesses with fewer than 50 employees. Only five companies in the top 30 — Pfizer, Google, CI&T, bio.logis and Acquia — are not traditional Drupal businesses. The traditional Drupal businesses are responsible for almost 80% of all the credits in the top 30. This percentage goes up if you extend beyond the top 30. It's fair to say that Drupal's maintenance and innovation largely depends on these traditional Drupal businesses.
  • The larger, multi-platform digital marketing agencies are barely contributing to Drupal. While more and more large digital agencies are building out Drupal practices, no digital marketing agencies show up in the top 30, and hardly any appear in the entire list of contributing organizations. While they are not required to contribute, I'm frustrated that we have not yet found the right way to communicate the value of contribution to these companies. We need to incentivize each of these firms to contribute back with the same commitment that we see from traditional Drupal businesses.
  • The only system integrator in the top 30 is CI&T, which ranked 4th with 795 credits. As far as system integrators are concerned, CI&T is a smaller player with approximately 2,500 employees. However, we do see various system integrators outside of the top 30, including Globant, Capgemini, Sapient and TATA Consultancy Services. In the past year, Capgemini almost quadrupled their credits from 46 to 196, TATA doubled its credits from 85 to 194, Sapient doubled its credits from 28 to 65, and Globant kept more or less steady with 41 credits. Accenture and Wipro do not appear to contribute despite doing a fair amount of Drupal work in the field.
  • Hosting companies also play an important role in our community, yet only Acquia appears in the top 30. Rackspace has 68 credits, Pantheon has 43, and Platform.sh has 23. I looked for other hosting companies in the data, but couldn't find any. In general, there is a persistent problem with hosting companies that make a lot of money with Drupal not contributing back. The contribution gap between Acquia and other hosting companies has increased, not decreased.
  • We also saw three end users in the top 30 as corporate sponsors: Pfizer (453 credits), Thunder (659 credits, up from 432 credits the year before), and the German company, bio.logis (330 credits). A notable end user is Johnson & Johnson, who was just outside of the top 30, with 221 credits, up from 29 credits the year before. Other end users outside of the top 30 include the European Commission (189 credits), Workday (112 credits), Morris Animal Foundation (112 credits), Paypal (80 credits), NBCUniversal (48 credits), Wolters Kluwer (20 credits), and Burda Media (24 credits). We also saw contributions from many universities, including the University of British Columbia (148 credits), University of Waterloo (129 credits), Princeton University (73 credits), University of Texas at Austin (57 credits), Charles Darwin University (24 credits), University of Edinburgh (23 credits), University of Minnesota (19 credits) and many more.
A graph showing that Acquia is by far the number one contributing hosting company.
Contributions by system integrators

It would be interesting to see what would happen if more end users mandated contributions from their partners. Pfizer, for example, only works with agencies that contribute back to Drupal, and uses Drupal's credit system to verify their vendors' claims. The State of Georgia started doing the same; they also made Open Source contribution a vendor selection criteria. If more end users took this stance, it could have a big impact on the number of digital agencies, hosting companies and system integrators that contribute to Drupal.

While we should encourage more organizations to sponsor Drupal contributions, we should also understand and respect that some organizations can give more than others and that some might not be able to give back at all. Our goal is not to foster an environment that demands what and how others should give back. Instead, we need to help foster an environment worthy of contribution. This is clearly laid out in Drupal's Values and Principles.

How diverse is Drupal?

Supporting diversity and inclusion within Drupal is essential to the health and success of the project. The people who work on Drupal should reflect the diversity of people who use and work with the web.

I looked at both the gender and geographic diversity of contributors. While these are only two examples of diversity, these are the only diversity characteristics we currently have sufficient data for. Drupal.org recently rolled out support for the Big 8/Big 10, so next year we should have more demographic information.

Gender diversity

The data shows that only 8% of the recorded contributions were made by contributors who do not identify as male, which continues to indicate a wide gender gap. This is a one percent increase compared to last year. The gender imbalance in Drupal is profound and underscores the need to continue fostering diversity and inclusion in our community.

A graph showing contributions by gender: 75% of the contributions come from people who identify as male.

Last year I wrote a post about the privilege of free time in Open Source. It made the case that Open Source is not a meritocracy, because not everyone has equal amounts of free time to contribute. For example, research shows that women still spend more than double the time that men do on unpaid domestic work, such as housework or childcare. This makes it more difficult for women to contribute to Open Source on an unpaid, volunteer basis. It's one of the reasons why Open Source projects suffer from a lack of diversity, among others including hostile environments and unconscious biases. Drupal.org's credit data unfortunately still shows a big gender disparity in contributions:

A graph that shows that compared to males, female contributors do more sponsored work, and less volunteer work.

Ideally, over time, we can collect more data on non-binary gender designations, as well as segment some of the trends behind contributions by gender. We can also do better at collecting data on other systemic issues beyond gender alone. Knowing more about these trends can help us close existing gaps. In the meantime, organizations capable of giving back should consider financially sponsoring individuals from underrepresented groups to contribute to Open Source. Each of us needs to decide if and how we can help give time and opportunities to underrepresented groups and how we can create equity for everyone in Drupal.

Geographic diversity

When measuring geographic diversity, we saw individual contributors from six continents and 114 countries:

A graph that shows most contributions come from Europe and North America.
Contribution credits per capita, calculated as the number of contributions per continent divided by the population of each continent. 0.001% means that one in 100,000 people contribute to Drupal. In North America, 5 in 100,000 people contributed to Drupal in the last year.

Contributions from Europe and North America are both on the rise. In absolute terms, Europe contributes more than North America, but North America contributes more per capita.
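The per-capita figure used above is simple arithmetic; a small sketch of it (the input figures below are made up purely for illustration, not taken from the report's data):

```python
def per_capita(contributors: int, population: int) -> float:
    """Contribution rate expressed as contributors per 100,000 inhabitants."""
    return contributors / population * 100_000

# Hypothetical example: 5,000 contributors in a population of 500 million
# works out to 1 contributor per 100,000 people (i.e. 0.001%).
rate = per_capita(5_000, 500_000_000)
```

Expressing the rate per 100,000 people rather than as a raw percentage keeps the numbers readable, since the underlying fractions are tiny.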

Asia, South America and Africa remain big opportunities for Drupal, as their combined population accounts for 6.3 billion out of 7.5 billion people in the world. Unfortunately, the reported contributions from Asia are declining year over year. For example, compared to last year's report, there was a 17% drop in contribution from India. Despite that drop, India remains the second largest contributor behind the United States:

A graph showing the top 20 contributing countries.The top 20 countries from which contributions originate. The data is compiled by aggregating the countries of all individual contributors behind each issue. Note that the geographical location of contributors doesn't always correspond with the origin of their sponsorship. Wim Leers, for example, works from Belgium, but his funding comes from Acquia, which has the majority of its customers in North America.

Top contributor details

To create more awareness of which organizations are sponsoring the top individual contributors, I included a more detailed overview of the top 50 contributors and their sponsors. If you are a Drupal developer looking for work, these are some of the companies I'd apply to first. If you are an end user looking for a company to work with, these are some of the companies I'd consider working with first. Not only do they know Drupal well, they also help improve your investment in Drupal.

Rank Username Issues Volunteer Sponsored Not specified Sponsors
1 kiamlaluno 1610 99% 0% 1%
2 jrockowitz 756 98% 99% 0% The Big Blue House (750), Memorial Sloan Kettering Cancer Center (5), Rosewood Marketing (1)
3 alexpott 642 6% 80% 19% Thunder (336), Acro Media Inc (100), Chapter Three (77)
4 RajabNatshah 616 1% 100% 0% Vardot (730), Webship (2)
5 volkswagenchick 519 2% 99% 0% Hook 42 (341), Kanopi Studios (171)
6 bojanz 504 0% 98% 2% Centarro (492), Ny Media AS (28), Torchbox (5), Liip (2), Adapt (2)
7 alonaoneill 489 9% 99% 0% Hook 42 (484)
8 thalles 488 0% 100% 0% CI&T (488), Janrain (3), Johnson & Johnson (2)
9 Wim Leers 437 8% 97% 0% Acquia (421), Government of Flanders (3)
10 DamienMcKenna 431 0% 97% 3% Mediacurrent (420)
11 Berdir 424 0% 92% 8% MD Systems (390)
12 chipway 356 0% 100% 0% Chipway (356)
13 larowlan 324 16% 94% 2% PreviousNext (304), Charles Darwin University (22), University of Technology, Sydney (3), Service NSW (2), Department of Justice & Regulation, Victoria (1)
14 pifagor 320 52% 100% 0% GOLEMS GABB (618), EPAM Systems (16), Drupal Ukraine Community (6)
15 catch 313 1% 95% 4% Third & Grove (286), Tag1 Consulting (11), Drupal Association (6), Acquia (4)
16 mglaman 277 2% 98% 1% Centarro (271), Oomph, Inc. (16), E.C. Barton & Co (3),, Inc. (1), Bluespark (1), Thinkbean (1), LivePerson, Inc (1), Impactiv, Inc. (1), Rosewood Marketing (1), Acro Media Inc (1)
17 adci_contributor 274 0% 100% 0% ADCI Solutions (273)
18 quietone 266 41% 75% 1% Acro Media Inc (200)
19 tim.plunkett 265 3% 89% 9% Acquia (235)
20 gaurav.kapoor 253 0% 51% 49% OpenSense Labs (129), DrupalFit (111)
21 RenatoG 246 0% 100% 0% CI&T (246), Johnson & Johnson (85)
22 heddn 243 2% 98% 2% MTech, LLC (202), Tag1 Consulting (32), European Commission (22), North Studio (3), Acro Media Inc (2)
23 chr.fritsch 241 0% 99% 1% Thunder (239)
24 xjm 238 0% 85% 15% Acquia (202)
25 phenaproxima 238 0% 100% 0% Acquia (238)
26 mkalkbrenner 235 0% 100% 0% bio.logis Genetic Information Management GmbH (234), OSCE: Organization for Security and Co-operation in Europe (41), Welsh Government (4)
27 gvso 232 0% 100% 0% Google Summer of Code (214), Google Code-In (16), Zivtech (1)
28 dawehner 219 39% 84% 8% Chapter Three (176), Drupal Association (5), Tag1 Consulting (3), TES Global (1)
29 e0ipso 218 99% 100% 0% Lullabot (217), IBM (23)
30 drumm 205 0% 98% 1% Drupal Association (201)
31 gabesullice 199 0% 100% 0% Acquia (198), Aten Design Group (1)
32 amateescu 194 0% 97% 3% Pfizer, Inc. (186), Drupal Association (1), Chapter Three (1)
33 klausi 193 2% 59% 40% jobiqo - job board technology (113)
34 samuel.mortenson 187 42% 42% 17% Acquia (79)
35 joelpittet 187 28% 78% 14% The University of British Columbia (146)
36 borisson_ 185 83% 50% 3% Calibrate (79), Dazzle (13), Intracto digital agency (1)
37 Gábor Hojtsy 184 0% 97% 3% Acquia (178)
38 adriancid 182 91% 22% 2% Drupiter (40)
39 eiriksm 182 0% 100% 0% Violinist (178), Ny Media AS (4)
40 yas 179 12% 80% 10% DOCOMO Innovations, Inc. (143)
41 TR 177 0% 0% 100%
42 hass 173 1% 0% 99%
43 Joachim Namyslo 172 69% 0% 31%
44 alex_optim 171 0% 99% 1% GOLEMS GABB (338)
45 flocondetoile 168 0% 99% 1% Flocon de toile (167)
46 Lendude 168 52% 99% 0% Dx Experts (91), ezCompany (67), Noctilaris (9)
47 paulvandenburg 167 11% 72% 21% ezCompany (120)
48 voleger 165 98% 98% 2% GOLEMS GABB (286), Lemberg Solutions Limited (36), Drupal Ukraine Community (1)
49 lauriii 164 3% 98% 1% Acquia (153), Druid (8), Lääkärikeskus Aava Oy (2)
50 idebr 162 0% 99% 1% ezCompany (156), One Shoe (5)

Limitations of the credit system

It is important to note a few of the current limitations of's credit system:

  • The credit system doesn't capture all code contributions. Parts of Drupal are developed on GitHub rather than Drupal.org, and often aren't fully credited on Drupal.org. For example, Drush is maintained on GitHub instead of Drupal.org, and companies like Pantheon don't get credit for that work. The Drupal Association is working to integrate GitLab with Drupal.org. GitLab will provide support for "merge requests", which means contributing to Drupal will feel more familiar to the broader audience of Open Source contributors who learned their skills in the post-patch era. Some of GitLab's tools, such as in-line editing and web-based code review, will also lower the barrier to contribution, and should help us grow both the number of contributions and contributors on Drupal.org.
  • The credit system is not used by everyone. There are many ways to contribute to Drupal that are still not captured in the credit system, including things like event organizing or providing support. Technically, that work could be captured, as demonstrated by the various non-product initiatives highlighted in this post. Because using the credit system is optional, many contributors don't use it. As a result, contributions often have incomplete or no contribution credits. We need to encourage all Drupal contributors to use the credit system, and raise awareness of its benefits to both individuals and organizations. Where possible, we should automatically capture credits. For example, translation efforts on localize.drupal.org are not currently captured in the credit system but could be automatically.
  • The credit system disincentivizes work on complex issues. We currently don't have a way to account for the complexity and quality of contributions; one person might have worked several weeks for just one credit, while another person might receive a credit for 10 minutes of work. We certainly see a few individuals and organizations trying to game the credit system. In the future, we should consider issuing credit data in conjunction with issue priority, patch size, number of reviews, etc. This could help incentivize people to work on larger and more important problems and save smaller issues such as coding standards improvements for new contributor sprints. Implementing a scoring system that ranks the complexity of an issue would also allow us to develop more accurate reports of contributed work.

All of this means that the actual number of contributions and contributors could be significantly higher than what we report.

Like Drupal itself, the credit system needs to continue to evolve. Ultimately, the credit system will only be useful when the community uses it, understands its shortcomings, and suggests constructive improvements.

A first experiment with weighing credits

As a simple experiment, I decided to weigh each credit based on the adoption of the project the credit is attributed to. For example, each contribution credit to Drupal core is given a weight of 11 because Drupal core has about 1.1 million active installations. Credits to the Webform module, which has over 400,000 installations, get a weight of 4. And credits to Drupal's Commerce project get just 1 point as it is installed on fewer than 100,000 sites.

The idea is that these weights capture the end-user impact of each contribution, but also act as a proxy for the effort required to get a change committed. Getting a change accepted in Drupal core is both more difficult and more impactful than getting a change accepted in the Commerce project.

This weighting is far from perfect as it undervalues non-product contributions, and it still doesn't recognize all types of product contributions (e.g. product strategy work, product management work, release management work, etc). That said, for code contributions, it may be more accurate than a purely unweighted approach.
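The weighting rule can be sketched in a few lines; this is my reading of the examples given above (one point per full block of 100,000 active installations, with a floor of one point), not the exact formula used for the report. The 90,000-install figure for Commerce is an assumption, since the text only says "fewer than 100,000":

```python
def credit_weight(active_installs: int) -> int:
    """Weight of one issue credit: one point per full block of 100,000
    active installations of the credited project, with a minimum of one."""
    return max(1, active_installs // 100_000)

# Figures quoted in the text:
weights = {
    "drupal_core": credit_weight(1_100_000),  # ~1.1M installs -> 11
    "webform": credit_weight(400_000),        # 400k+ installs -> 4
    "commerce": credit_weight(90_000),        # <100k installs -> 1 (90k assumed)
}
```

An organization's weighted score is then simply the sum of the weights of all its credits, which is what the two charts below rank.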

Top contributing individuals based on weighted credits.The top 30 contributing individuals based on weighted issue credits.
Top contributing organizations based on weighted credits.The top 30 contributing organizations based on weighted issue credits.


Our data confirms that Drupal is a vibrant community full of contributors who are constantly evolving and improving the software. It's amazing to see that just in the last year, Drupal welcomed more than 8,000 individual contributors and over 1,100 corporate contributors. It's especially nice to see the number of reported contributions, individual contributors and organizational contributors increase year over year.

To grow and sustain Drupal, we should support those that contribute to Drupal and find ways to get those that are not contributing involved in our community. Improving diversity within Drupal is critical, and we should welcome any suggestions that encourage participation from a broader range of individuals and organizations.

September 09, 2019

After the 2018 DeepSec edition in November and the BruCON Spring Training in April, I'm happy to be back on the DeepSec 2019 schedule!

OSSEC is sometimes described as a low-cost log management solution but it has many interesting features which, when combined with external sources of information, may help in hunting for suspicious activity occurring on your servers and endpoints. During this training, you will learn the basics of OSSEC and its components, how to deploy it and quickly get results. Then we will learn how to deploy specific rules to catch suspicious activities. From an input point of view, we will see how easy it is to learn new log formats to increase the detection scope and, from an output point of view, how we can generate alerts by interconnecting OSSEC with other tools like MISP, TheHive, or an ELK stack / Splunk, etc.
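As a taste of the kind of glue code this covers, here is a minimal Python sketch that filters OSSEC alerts by severity before forwarding them to a tool such as TheHive, MISP or an ELK stack. It assumes OSSEC's JSON alert output (one JSON object per line, as written to alerts.json when jsonout_output is enabled); the exact field names may differ in your version.

```python
import json

def high_severity_alerts(lines, min_level=10):
    """Keep only alerts whose rule level is at least min_level.

    The "rule"/"level" field names are assumed from OSSEC's JSON
    alert output; adapt them to what your deployment actually emits.
    """
    alerts = []
    for line in lines:
        try:
            alert = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip partial or malformed lines
        if alert.get("rule", {}).get("level", 0) >= min_level:
            alerts.append(alert)
    return alerts

# Each selected alert could then be pushed to TheHive, MISP, etc.
```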

A quick overview of the training content:

  • Day 1
    • Hunting & OSINT
    • OSSEC 101
    • Decoder & Rules
    • Fine-tuning alerts
    • Enrichment
    • Hunting with OSSEC
  • Day 2
    • Hunting on Windows
    • Active-Response
    • Logging & Visualization
    • Extra examples

The content has been improved since the previous editions. The targeted audience is blue team members, CSIRTs and anyone interested in defensive security. The DeepSec schedule is already online and the registration page is here. Please spread the word!

[The post Training Announce: “Hunting with OSSEC” has been first published on /dev/random]

Often overused, and essentially turned into a marketing argument, the word “minimalism” is hard to define. It evokes both clean design and voluntary frugality.

But it is Cal Newport, in his book “Digital Minimalism”, who gives a definition that suits and inspires me. Minimalism means taking into account the total cost of ownership of a good or of a subscription to a service.

If someone gives you an object, you intuitively feel you have come out ahead. Without paying anything, you now own that object. But the purchase is only a fraction of the total cost of ownership. You will have to store the object, which takes time and space. You will have to manage it, which is a mental load. Maintain it, clean it. Then, inevitably, you will have to get rid of it, which often requires effort, management and time. Sometimes you even have to pay, although other times you can recover a little money by reselling it. But getting rid of it also carries an emotional cost if the object was a gift or if you have built a sentimental attachment to it. A sentimental load that can become a burden.

All in all, every object we own therefore has an enormous cost, even if we did not pay for it. But it can also have a benefit; that is, after all, the whole point of the object.

Minimalism therefore consists of evaluating the total-cost/benefit ratio of each of our possessions and getting rid of anything whose benefit is not large enough. Minimalism thus means fighting the intuition that “owning is better than having nothing”, a consumerist logic drilled into our malleable neurons by massive doses of marketing.

In her best-seller, the high priestess of tidying, Marie Kondo, says nothing different. Under the pretext of “tidying up”, she spends 200 pages convincing us to throw away, throw away and throw away some more (in the sense of “getting rid of”; giving is acceptable, but these days even clothing-recycling charities are drowning under tonnes of rags they have no idea what to do with).

The subtlety of Cal Newport's definition of minimalism is that the notion of cost and benefit is intimately personal. It depends on you and your life. One person's minimalism will be very different from another's. It is therefore not about reducing or unifying, but about becoming conscious of our usage. In that sense, minimalism becomes the opposite of extremism. It is individualistic, a kind of quest for simplicity unique to each of us.

My bikepacking experience is ultimately nothing but an exacerbated minimalist quest. In bikepacking, every superfluous gram is paid for in full. Besides weight, bulk is also an important factor. It is only natural that I try to apply the same precepts to the digital world.

In the digital world, the cost is harder to quantify. For me, it could be summed up as fitness for my needs, efficiency, and respect for my ethical values.

Back to Linux

For the past 5 years, I have mainly used a Mac, a leftover from my last employer. While the experience was interesting, I feel a visceral need to return to Linux. First of all because I find macOS a dreadfully ill-conceived system (apt-get, where are you?), with sometimes questionable ergonomic choices (the cross that minimizes the app instead of closing it), without being any more stable or less buggy than Linux.

But the first reason is that I am a free-software advocate at heart, and the Apple universe with its consumerism of proprietary applications goes against my values.

Rather than trying to find Linux equivalents for all the apps I have used over the last few years, I decided to simplify the way I work, to adapt.

So I replaced Ulysses, Evernote, DayOne and Things with a single application: Zettlr. Sure, I lose a lot of features, but the benefit of a single application is enormous. To be honest, this migration is not yet complete. I still occasionally use Evernote to take a note on my phone (Zettlr has no mobile version), and I have not yet completely mourned certain DayOne features enough to migrate my journal (I did, however, write a DayOne-to-Markdown script).

The reason for this procrastination? I simply have not yet found a computer that suits me for installing Linux. Because while I do not like macOS, I have to admit that Apple hardware is extraordinary. My MacBook weighs 900 g, with a magnificent screen. It fits within the footprint of an A4 sheet and lasts a full working day on one charge, or even a whole week when I am on holiday and only use it an hour or two a day.

And no, Linux does not install on this model (unless I am willing to do without the keyboard and the mouse, and to lose suspend).

So I looked at Purism, whose philosophy I really like, but their laptops remain far too big and heavy. Not to mention that the charger is not USB-C, and I am not ready to give up the comfort of a single charger in my backpack.

The Starlabs MK II Lite matches all my criteria. Unfortunately, it is not available. I had pre-ordered it, but, faced with repeated delays, I cancelled my order (I do, however, appreciate the transparency and responsiveness of their support).

I expected a lot from the Slimbook Pro X, which turned out to be much too big for my taste, rather ugly and potentially noisy (the MacBook is fanless, a comfort I will find hard to give up).

The “classic” brands are not much help. The Dell XPS 13 seems to match my wishes (despite having a fan), but I cannot manage to order it in its Ubuntu version. Because, yes, while I am at it, I would like to at least favour a brand that ships Linux natively. Maybe I am asking too much…

In the meantime, I keep my MacBook, whose biggest flaw, apart from macOS, remains its uncomfortable keyboard.

A keyboard on the go

For the writer I try to be every day, the keyboard is the most important device. That is why I sometimes say that switching to Bépo was one of my most fruitful investments. In my minimalist quest, I have also stopped taking notes with a stylus or a voice recorder in Evernote: notes that rotted away and that I had to convert into written notes months later. Emptying my Evernote of its 3,000 notes made me realize the futility of the exercise.

Either I take notes directly with a keyboard to start a text, or I trust my brain to let the idea evolve. In my 3,000 Evernote notes, I found up to five different versions of the same idea, sometimes years apart. Taking quick notes is therefore not a help for me, but a way of easing my guilt. Becoming a minimalist is thus also a matter of letting go of certain deceptive impressions of control.

Being able to write anywhere and stay mobile is my main motivation for having a small, light laptop. But I miss the comfort of a real keyboard. I loved my years on a Typematrix. When I want to rediscover the pleasure of writing, I turn to my Freewrite, but it is heavy, bulky and particularly buggy.

So I dream of a Bluetooth keyboard that would be orthogonal, ergonomic, adapted to Bépo and portable. I discover lots of new things on the Bépo users' forum, but I have not yet found the rare gem. On my bike trips, I use a simple Moko keyboard which, for its €25, does its job very well and is arguably more pleasant than the MacBook's built-in keyboard.

All this makes me think. Perhaps it is not a laptop I should buy to run Linux, but a tablet connected to a Bluetooth keyboard? As long as I can use it on my lap in a hammock, that seems like an acceptable solution. With this in mind, I tested Ubuntu Touch on an old Nexus 7 tablet. Unfortunately, the system remains too limited. I regret that Ubuntu Touch is no longer so actively developed, because I would love to have a “convergent” phone (one that can be plugged into a big screen to become a real desktop computer).

The phone

And speaking of phones: my OnePlus 5 is starting to give up the ghost (it only charges sporadically and its screen is cracked). How do I replace it?

I love Purism's Librem 5 concept. But I have to admit that abandoning Android is not possible for me, for two major reasons: banking applications and Bluetooth gadgets. No, I do not want to give up my Garmin freediving watch or my Wahoo bike GPS. These two devices contribute greatly to my pleasure and well-being in life; the cost of keeping Android seems small in comparison. That is also one reason I have abandoned the idea of a LightPhone (besides its proprietary cloud).

If I am keeping Android anyway, why not get the lightest and smallest phone possible? Well, quite simply because I cannot find one. I walked into a Fnac store and discovered with amusement that it was impossible to tell the phones on display apart. A long row of black rectangles (they were switched off) of exactly the same size! I felt like I was in a parody. The Palm Phone, a notable exception to this sad conformity, is only available in the US, as a companion phone. Too bad…

So perhaps opting for a FairPhone 3 would make sense. I admit I am not 100% convinced, not really knowing what is genuinely ethical in their approach and what is marketing, a form of green-fair-washing.

One thing is certain: I do not intend to keep Google's Android. I am waiting to see what /e/ will offer, but, at worst, I will turn to LineageOS.

The tablet

Then again, Android is potentially not that bad. An Onyx Boox e-ink tablet has even been announced running Android 9.

As an e-ink tablet, I currently use a Remarkable. The Remarkable uses proprietary software, a proprietary cloud and a proprietary synchronization app whose Linux version is no longer updated.

To be honest, I use the Remarkable little, but effectively. I use it for sketching, taking notes in meetings and, its main use for me, reading and annotating scientific papers and theses. It has replaced the printer.

Switching to a competitor running Android would let me stop using their proprietary cloud and app. If, on top of that, I could connect a Bluetooth keyboard, I would have a dream typewriter.

On the other hand, I refuse to get involved in Kickstarter or Indiegogo projects that have not yet been rigorously tested. In fact, my minimalist quest led me to delete my accounts on those platforms, to avoid the temptation of spending money on projects that are bound to be disappointing because they only sell dreams.

The e-reader

Of course, it would be even better to be able to connect a Bluetooth keyboard to my e-reader, which I always have with me, whatever the situation.

It must be said that after reviewing piles of e-readers, I have finally found the rare gem: the Vivlio Touch HD (Vivlio = Pocketbook = Tea as far as the hardware is concerned).

Thin, light, equipped with page-turn buttons, anti-blue-light backlighting and nearly waterproof, this e-reader lets you, with a bit of fiddling, use the CoolReader app, which allows me to read in inverted mode (white text on a black background). Only note-taking and highlighting leave a lot to be desired.

But an e-reader on which I can easily take notes on book passages and to which I can connect a keyboard: that is my ultimate dream. I have not given up hope.


Minimalism also shows in software. I have already told you about Zettlr, which now replaces 4 paid applications all by itself.

But how can I try to favour open source, simplicity and cross-platform compatibility? How can I protect my privacy and my data?

The no-brainers

Some solutions impose themselves. Bitwarden, for example, very advantageously replaces 1Password, Dashlane or LastPass (solutions I each used for more than a year). In practice, Bitwarden is simple and perfect. Admittedly less pretty, but so effective. I even migrated some secured Evernote notes into Bitwarden. Of course, I took the paid version to support the developers.

Besides open source, one very important aspect for me is the protection of my data.

That is why I mainly use Signal for chatting. I am trying to convert all my contacts (do me a favour: install Signal on your phone, even if you do not think you will use it; it will please those who want to protect their privacy). For the holdouts, I unfortunately still have to keep a WhatsApp account. I also keep a Facebook account for one simple reason: to participate in the Belgian freedivers' discussion group. Without it, I would not hear about the dives! Fortunately, my friends at Universal Freedivers post the information on their blog more and more systematically, and I follow it via RSS. Once I am convinced I will no longer miss any activities, I will delete my Facebook account for good (as I deleted my Instagram account, and as I intend to soon delete my LinkedIn account).

But I will tell you another time about my account-deletion quest, which led me to erase, one by one, nearly 300 accounts scattered across the net.

Sometimes an account can prove useful, but only rarely. That is the case with Airbnb or Uber. My solution is to uninstall the app and only install it when needed. This means I am not notified of updates, not spied on by the app, and so on.

The big job

Up to this point, it is relatively simple. The big job remains my Google account. I have already migrated a good part of my mail to Protonmail. And I am keeping an eye on its most active competitor: Tutanota.

The big problem with Protonmail and Tutanota remains the lack of a calendar. Protonmail has claimed to be working on one for years. Tutanota already has a first, (too) simplistic calendar. That is the last thing that really keeps me tied to Google.

It has to be said that a good calendar is not easy. On macOS I use Fantastical, and I have not yet found an equivalent on Linux (in particular for adding events in natural language). Maybe there is one? But in any case, I will have to make do with whatever calendar Tutanota or Protonmail ends up offering.

The last link with Google? That is not quite true. Google Music is a service I find very capable. I have uploaded all my MP3s to it over the years, and I use it for free. It makes random mixes of my favourite songs very convincingly. I did try playing with Funkwhale, but it is nowhere near that level (for a start, most of my music will not upload because the files are too big…).

Google Maps also remains the most practical and capable application for plotting routes, even with public transport. That said, I am keeping an eye on Qwant Maps, because I prefer the quality of OpenStreetMap data (and no, OsmAnd is not usable on a daily basis).

I also use Google Photos, which is incredibly convenient for backing up all my photos. That said, I could do without it, because my photos are now also automatically backed up to Tresorit, an encrypted equivalent of Dropbox.

A moving target

To be honest, I hoped one day to arrive at a “perfect solution” and describe to you the solutions I had found. I realize the road is long, but, as my Framasoft friends say, the way is free.

The way is free…

My ideal, my goal, is ultimately fairly fluid. Minimalism is not a state you reach. It is a way of thinking, of reflecting, of becoming conscious in order to improve.

I am a free-software advocate full of contradictions and, rather than hiding it, I have decided to be open, to share my quest with you in order to gather your opinions and advice and, who knows, maybe give you some ideas too. This post is, in the end, only an introduction to a journey I hope to share with you.

Photo by Ploum on Unsplash

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE license.

September 06, 2019

Last Tuesday, I launched a small challenge to win a ticket for the BruCON conference. The challenge was solved at approximately 20:30 and I received the first correct submission of the hash at 23:30 (congratulations to Quentin Kaiser!). It's time to give you the solution to this small challenge. It was web-based and did not require any specific tools, only some creativity!

The challenge started with a simple URL. If you visit this URL, you get a simple message:

Nice try but nothing here...
Looking for a PHP page?

Any URI redirects to the same page. There is a robots.txt file that should also give you a hint: we need to find a PHP page:

User-agent: *
Disallow: /*.php$

From there, there are multiple techniques to find the expected entry page. The FQDN could also be a hint (“bot”), but most of the players found the page just by brute-forcing the website. The page was easy to find, its name being only 3 characters: “/bot.php”. Just browse to this page:

Unknown command: "TW96aWxsYS81LjA=". Need some help?

Your bot is expecting a command, but which one? The string reported by the bot should look familiar to you: it's a Base64-encoded string (note the trailing “=” padding). Let's decode it:

$ echo TW96aWxsYS81LjA= | base64 -D
Mozilla/5.0

This is the command received by the bot (note that “-D” is the macOS flag; GNU base64 uses “-d”). But you did not submit any command yet. Where is this command coming from? What does your browser send to a server with each HTTP request? A User-Agent, of course! It seems that your bot is expecting commands passed via the User-Agent header:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36

Let’s confirm this by sending another user-agent:

$ curl -A $(echo test|base64)
Unknown command: "ZEdWemRBbz0=". Need some help?

We indeed see a new Base64-encoded string. OK, but now, which commands are accepted by your bot? There is a hint: “Need some help?”. Let's submit “help” in Base64 (it's not case sensitive):

$ curl -A $(echo -n help|base64)
Please submit a command: PING, ECHO, TOKEN, KEY, HELP

“PING” and “ECHO” are just there for the fun. Let’s issue the other commands:

$ curl -A $(echo -n token|base64)
$ curl -A $(echo -n key|base64)

The token looks Base64-encoded (again), so let's decode it. What could you try with a token and a key? Let's try to XOR them. You can use any tool to achieve this, but CyberChef is very good at performing such tasks:
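In code, the XOR step looks like this. The challenge's real token and key are not reproduced here, so the sketch below uses hypothetical placeholder values just to demonstrate the round trip:

```python
import base64

def xor_decode(token_b64: str, key: bytes) -> bytes:
    """Base64-decode the token, then XOR it with the repeating key."""
    data = base64.b64decode(token_b64)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Hypothetical values; the real ones came from the TOKEN and KEY commands.
key = b"secret"
flag = b"BruCON_xxxxxxxxx"
token = base64.b64encode(
    bytes(b ^ key[i % len(key)] for i, b in enumerate(flag))
).decode()

print(xor_decode(token, key))  # -> b'BruCON_xxxxxxxxx'
```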

What about the players now? Two people solved the challenge (well, they contacted me and provided the correct answer), the second one only a few minutes after the winner! Here are some stats about the server:

HTTP Traffic to the webserver

218 unique IP addresses connected to the challenge:

I think that many people tried to use automated tools (scanners) to solve the challenge. Among 1,560,017 HTTP requests, only 3,091 were performed against /bot.php (only 0.19%!).

[The post BruCON Challenge: The Solution has been first published on /dev/random]

I published the following diary on “PowerShell Script with a builtin DLL“:

Attackers are always trying to bypass antivirus detection by using new techniques to obfuscate their code. I recently found a bunch of scripts that encode part of their code in Base64. The code is decoded at execution time and processed via the ‘IEX’ command… [Read more]
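To hunt for this pattern yourself, one simple approach is to extract long Base64-looking runs from a script and check whether the decoded bytes start with “MZ”, the DOS header of a PE file (EXE/DLL). This is a generic sketch, not the actual tooling used in the diary:

```python
import base64
import re

# Long runs of Base64 characters, optionally padded.
B64_RUN = re.compile(r"[A-Za-z0-9+/]{200,}={0,2}")

def embedded_pe_payloads(script_text: str):
    """Decode long Base64 blobs and keep those that look like PE files."""
    payloads = []
    for match in B64_RUN.finditer(script_text):
        try:
            blob = base64.b64decode(match.group(0))
        except ValueError:
            continue  # not valid Base64 after all
        if blob.startswith(b"MZ"):  # DOS/PE header
            payloads.append(blob)
    return payloads
```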

[The post [SANS ISC] PowerShell Script with a builtin DLL has been first published on /dev/random]

I'm excited to share that when Drupal 8.8 drops in December, Drupal's WYSIWYG editor will allow media embedding.

You may wonder: why is that worth announcing on your blog? It's just one new button in my WYSIWYG editor…

It's a big deal because Drupal's media management has been going through a decade-long transformation. The addition of WYSIWYG integration completes the final milestone. You can read more about it on Wim's blog post.

Drupal 8.8 should ship with complete media management, which is fantastic news for site builders and content authors who have long wanted a simpler way to embed media in Drupal.

Congratulations to the Media Initiative team for this significant achievement!

September 05, 2019

I published the following diary on “Private IP Addresses in Malware Samples?“:

I’m looking for some samples on VT that contains URLs with private or non-routable IP addresses (RFC1918). I found one recently and it made me curious. Why would a malware try to connect to a non-routable IP address?

Here is an example of a macro found in a suspicious Word document (SHA256: c5226e407403b37d36e306f644c3b8fde50c085e273c897ff3f36a23ca0f1c6a)… [Read more]
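Checking whether an address is private is straightforward with Python's standard library. Note that is_private flags more than just the RFC1918 blocks (loopback and link-local addresses too):

```python
import ipaddress

def is_private(ip: str) -> bool:
    """True for RFC1918 and other non-routable addresses."""
    return ipaddress.ip_address(ip).is_private

print(is_private("10.1.2.3"))      # True  (RFC1918)
print(is_private("192.168.0.10"))  # True  (RFC1918)
print(is_private("8.8.8.8"))       # False (routable)
```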

[The post [SANS ISC] Private IP Addresses in Malware Samples? has been first published on /dev/random]

September 04, 2019

I'm excited to announce that Acquia has acquired Cohesion, the creator of DX8, a software-as-a-service (SaaS) visual Drupal website builder made for marketers and designers. With Cohesion DX8, users can create and design Drupal websites without having to write PHP, HTML or CSS, or know how a Drupal theme works. Instead, you can create designs, layouts and pages using a drag-and-drop user interface.

Amazon founder and CEO Jeff Bezos is often asked to predict what the future will be like in 10 years. One time, he famously answered that predictions are the wrong way to go about business strategy. Bezos said that the secret to business success is to focus on the things that will not change. By focusing on those things that won't change, you know that all the time, effort and money you invest today is still going to be paying you dividends 10 years from now. For Amazon's e-commerce business, he knows that in the next decade people will still want faster shipping and lower shipping costs.

As I wrote in a recent blog post, no-code and low-code website building solutions have had an increasing impact on the web since the early 1990s. While the no-code and low-code trend has been a 25-year long trend, I believe we're only at the beginning. There is no doubt in my mind that 10 years from today, we'll still be working on making website building faster and easier.

Acquia's acquisition of Cohesion is a direct response to this trend, empowering marketers, content authors and designers to build Drupal websites faster and cheaper than ever. This is big news for Drupal as it will lower the cost of ownership and accelerate the pace of website development. For example, if you are still on Drupal 7, and are looking to migrate to Drupal 8, I'd take a close look at Cohesion DX8. It could accelerate your Drupal 8 migration and reduce its cost.

Here is a quick look at some of my favorite features:

An easy-to-use “style builder” enables designers to create templates from within the browser. The animated GIF illustrates how easy it is to modify styles, in this case a button design.
In-context editing makes it really easy to modify content on the page and even change the layout from one column to two columns and see the results immediately.

I'm personally excited to work with the Cohesion team on unlocking the power of Drupal for more organizations worldwide. I'll share more about Cohesion DX8's progress in the coming months. In the meantime, welcome to the team, Cohesion!

September 03, 2019

2007 is the year of my first DrupalCon, and the year the #1 most wanted end-user feature was Better media handling. 2019 is the year that Drupal will finally have it. Doing things right takes time!

Back then I never would’ve believed I would someday play a small role in making it happen :)

Without further ado, and without using a mouse:

Reusing and embedding media, using only the keyboard.

The text editor assisted in producing this HTML:

<p>Let's talk about llamas!</p>

<drupal-media alt="A beautiful llama!" data-align="center" data-entity-type="media" data-entity-uuid="84911dc4-c086-4781-afc3-eb49b7380ff5"></drupal-media>

<p>(I like llamas, okay?)</p>

If you’re wondering why something seemingly so simple could have taken such a long time, read on for a little bit of Drupal history! (By no means a complete history.)

2007 and Drupal five

Twelve years ago, in Dries’ State of Drupal talk 1, Better media handling was deemed super important. I attended a session about it — this is the (verbatim) session description:

  • Drupal’s core features for file management and media handling
  • common problems and requirements (restrictions, performance issues, multi-lingual content, dependencies between nodes and files)
  • first approaches: own node types for managing, improved filemananger.module (example: Bloomstreet,European Resistance Archive, Director’s Cut Commercials)
  • next step: generic media management module with pluggable media types, mutli server infrastructure, different protocols, file systems, file encoding/transcoding

It’s surprisingly relevant today.

By the way, you can still look at the session’s slides or even watch it!

2007–2013 (?)

The era of the venerable Media module, from which many lessons were learned, but which never quite reached the required level of usability for inclusion in Drupal core.

2013 (?) – 2019

The Media initiative started around 2013 (I think?), with the Media entity module as the first area of focus. After years of monumental work by many hundreds of Drupal contributors (yes, really!), only one missing puzzle piece was left: WYSIWYG embedding of media. The first thing I worked on after joining Acquia was shipping a WYSIWYG editor with Drupal 8.0, so I was asked to take this on.

To help you understand the massive scale of the Media Initiative: this last puzzle piece represents only the last few percent of work!

Drupal has always focused on content modeling and structured content. WYSIWYG embedding of media should not result in blobs of HTML being embedded. So we’re using domain-specific markup (<drupal-media>) to continue to respect structured content principles. The result is document transclusion combined with an assistive “WYSIWYG” editing UX — which we wished for in 2013.

A little less than two months ago, we added the MediaEmbed text filter to Drupal 8.8 (domain-specific markup), then made those embeds render previews using CKEditor Widgets for assistive “WYSIWYG” editing, followed by media library integration and per-embed metadata overriding (for example overriding alt, as shown in the screencast).

I was responsible for coming up with an architecture that addressed all needs. To maximally learn from past lessons, I first worked on stabilizing the contributed Entity Embed module. So this builds on many years worth of work on the Entity Embed module from Dave Reid, cs_shadow and slashrsm! Thanks to the historical backlog of reported issues there, we are able to avoid those same problems in Drupal core. The architecture of the CKEditor integration was designed in close collaboration with Krzysztof Krzton of the CKEditor team (thanks CKSource!), and was also informed by the problems reported against that module. So this stabilization work benefited both Drupal core as well as tens of thousands of existing sites using Entity Embed.

I then was able to “just” lift Drupal core patches out of the Entity Embed module (omitting some features that do not make sense in Drupal core). But it’s phenaproxima (thanks Acquia!), oknate and rainbreaw who got this actually committed to Drupal core!

Complete media management shipped in increments

Fortunately, for many (most?) Drupal 8 sites, this will not require significant rework, only gradual change. Drupal 8.8 will ship with complete media management, but it’ll be the fifth Drupal core release in a little over two years that adds layers of functionality in order to arrive at that complete solution:

  • Drupal 8.4 added foundational Media API support, but still required contributed modules to be able to use it
  • Drupal 8.5 made Media work out-of-the-box
  • Drupal 8.6 added oEmbed support (enabling YouTube videos for example) and added an experimental Media Library
  • Drupal 8.7 made the Media Library UI-complete: bulk uploads, massively improved UX
  • Drupal 8.8 will contain the key thing that was blocking Media Library from being marked stable (non-experimental): WYSIWYG integration

Today is the perfect moment to start looking into adopting it shortly after Drupal 8.8 ships in December!

Thanks to phenaproxima for feedback on this post!

  1. See slide 40! ↩︎

*** The challenge has been solved and the ticket is gone! ***

The Belgian security conference BruCON 0x0B is already scheduled in a few weeks! The event is becoming more and more popular and we sold out very quickly. If you don't have a ticket, it's too late! Well, not really, we have alternatives: buy a training ticket and access to the 2-day conference is included! We have a nice schedule with top instructors. Another alternative: if you want to participate and give some of your free time, we are still looking for a few volunteers, and you'll also have access to the conference.

Otherwise, I have more good news for you: I still have a ticket to give away to the winner of a small contest. How do you play? As in previous years, just solve a challenge. This time, everything is web-based. Here are the rules of the game:

  • The ticket goes to the first person who submits the flag (first come, first served), by email ONLY (I’m easy to reach!). Tip: the flag is a hash like ‘BruCON_xxxxxxxxx’.
  • No malicious activity is required to solve the challenge; there is nothing relevant on this blog or any BruCON website.
  • Be fair and don’t break stuff, don’t DDoS, etc. (or you’ll be blacklisted).
  • You’re free to play and solve the challenge, but only claim the ticket if you are certain you will attend the conference! (Play fair)
  • All costs besides the free ticket are on you! (hotel, travel, …)
  • Don’t just submit the hash; give some details about your solution. It will help me select a winner if I have doubts about your submission.
  • By trying to solve the challenge, you accept the rules of the game.

Interesting? It’s starting here.

[The post BruCON Challenge: Solve & Win Your Ticket! has been first published on /dev/random]

September 01, 2019

The world of work has changed. Companies have transitioned from highly structured 9-to-5 clockworks, to always-on controlled chaos engines, partially remote or wholly distributed. Workers are affected too, expected to keep up with the 24/7 schedule of their directors and customers. This is only possible with the many communication and collaboration tools we have at our disposal. I work remotely myself, often across an ocean, and after years of this, I'd like to share some observations and advice.

Mainly, that the use of these tools is often severely flawed. I think it stems from a misconception my generation was brought up on: that technology is an admirable end in itself, rather than merely a means to an end. This attitude was pervasive during the 80s and 90s, when a dash of neon green cyberpunk was enough to be too cool for school. It laid the groundwork for the tireless technological optimism that is now associated with Silicon Valley and its colonies, but which is actually just part of the global zeitgeist.

In this contemporary view, when you have a problem, you get some software, and it fixes it. If it's not yet fixed, you add some more. Need to share documents? Just use Dropbox. Need to collaborate? Just use Google Docs. Need to communicate? Get your own Slack, they're a dime a dozen. But there is a huge cost attached: it doesn't just fragment the work across multiple disconnected spaces, it also severely limits our expressive abilities, shoehorning them into each product and platform's particular workflows and interfaces.

Brazil Movie Poster

The Missing Workplace

The first and most prominent casualty of this is the office itself: we have carelessly dismissed its invisible benefits for the dubious luxury of going to work in our pyjamas as remote workers. This is accelerated by the plague of open plan offices, which resemble cafeterias more than workshops or labs. The result in both cases is the same: employees sequester themselves, behind headphones or physical distance, shut off from the everyday cues that provide ambient legibility to the workplace.

It's not just the water cooler that's missing. Did that meeting go well, or are people leaving with their hands in their hair? Is someone usually the last one to turn off the lights, and do they need help? Is now a good time to talk about that thing, or are they busy putting out 4 fires at once? Did they even get a decent night's sleep? Good luck reading any of that off a flaky online status indicator that is multiple timezones away.

Slack status

There are tools to fix this, of course. Just set a custom status! With emoji! Now, instead of just going about your work day like a human, you have to constantly self-monitor and provide timely updates on your activities and mental state. But there's an app for that, don't worry. Everyone turns into their own public relations agent, while expected to actively monitor everyone else's feeds. The solution is more of the problem, and the simple medium of body language is replaced by a somewhat trite and trivially spoofable bark. The only way you will get the real information at a distance is by having a serious conversation about it, which takes time and energy.

Even if you do though, you won't be privy to who else is talking to who, unless you explicitly ask. Innocently peeking in through the meeting room glass makes way for a complete lack of transparency. More so, clients don't even visit, lunches are often eaten alone, and occasional beers on Friday are usually off the table. They're not coming back when your workforce is spread across multiple timezones. This is a fundamentally different workplace, which needs a different approach.

The environment is asynchronous by default, yet people often still try to work in a synchronous way. We continue to try and maintain the personal and professional protocols of face to face interaction, even if they're a terrible fit. If you've ever been pinged with a context-less "hey," waiting for your acknowledgement before telling you what's up, you have experienced this. Your conversation partner has failed to realize they have all the time in the world to converse slowly, glacially even, with care and thought put into every message, which is the opposite of rude in that situation. The bare "hey," on the other hand, means you can't decide whether a response is actually necessary when the timing is inconvenient.

A related example is the in-person "hey, I just sent you an email": they know they'll get a response eventually, but they want one now. By first sending the email, they are able to launder their interruption, passing the bulk of the message asynchronously, while keeping their synchronous message a seemingly trivial nothing. This isn't always bad, if you e.g. summarize some urgent notes immediately and let the email fill out the details, but this is rarely the case.

Write-Only Media

The notifications themselves are also a problem. They feature so prominently, they turn every issue into a priority 1 crisis. If left to accumulate for later they just get in the way, like a desk you can't even clear. The expectation is that you'll immediately want to look at it, and this is why they are so enticing for the sender: a response is practically guaranteed. But any medium that caters more to the writer than the reader should be treated with extreme skepticism [Twitter, 2006].

Instant notifications are an example of a mechanism that produces negative work. Whatever task is being interrupted is not just on pause, you've added an additional cost of context switching away and back that wasn't there before. A more destructive version is the careless Reply to All and its close sibling, the lazy Forward to Y'all. Whatever was said, instead of now 1 person reading it, there will be many. Everyone will now spend time digesting it independently, offering a multitude of uncoordinated replies, each of which will then need to be read, and so on. It can even become iterated negative work, and it scales up quickly.

Any time a manager forwards mails wholesale from the level above, or a rep forwards requests from a 3rd party to the entire team, this is what they are doing, and they should really stop that. Instead, you should make sure everyone mainly mass-sends answers, rather than questions. The purpose of a manager and a rep is to shield one side of a process from the details of the other after all. You do not want unfiltered, unvetted assignments to be mixed in with the highly focused, day to day communication of a well-oiled team. Any such attempt at inter-departmental buck passing should be resisted vigorously as the write-only pollution that it is. That said, specialty tools like issue trackers and revision control can be extremely useful even for non-specialist workers. You just need to make sure each group has their own space to work in, and is taught how to use it well.

Each person in a chain, even within a group, should act like an information optimizer, investigating and summarizing the matter at hand so the next ones don’t have to. Conversational style should be minimized, in favor of bullet points, diagrams and analysis. If you don't do this, you will end up with a company where everyone is constantly overloaded by communication, and yet very little gets resolved.

Ping Me Twice, Shame On You

If you do need to get a bunch of people into a synchronous room, virtual or otherwise, there needs to be a clear agenda and goal ahead of time. There should be concrete takeaways at the end, in the form of notes or assigned tasks. Otherwise, you will have nothing to constrain the discussion, and then several people will have to decide for themselves what to do next with the resulting tangle of ideas. Sometimes you will just have the same meeting again a few weeks later, especially if not everyone attends both. Instead you should aim to differentiate between those who need to attend a meeting versus those who just need to hear the conclusion. Particularly naive is the notion that mere recordings or logs are a sufficient substitute for due diligence here, as it takes a special kind of stupid to think that someone would voluntarily subject themselves to an aimless meeting they can't even participate in, after the fact.

This means optimizing for people-space, ensuring that the minimum number of people are directly involved, as well as people-time, ensuring the fewest man-hours are spent. This also works on the long scale. If a question gets asked multiple times, it signifies a missed opportunity to capture past insight. It is essential to do this in a highly accessible place like a wiki, known and understood by all. It should be structured to match the immediate needs of those who need to read it. Dumping valuable information into chat is therefore an anti-pattern, requiring everyone to filter out the past nuggets of information based on the vague memory of them existing. A permanently updated record is a much better choice, and can serve as the central jumping off point to link to other, more ephemeral tools and resources. It should have every possible convenience for images, markup and app integration.

Unfortunately, few people will take the initiative on a blank canvas. There are two important reasons for this. The first is simply the bystander effect. If someone doesn't fill it out with placeholder outlines, clear instructions and pre-made templates, expect very little to happen organically. Make a place for project bibles, practical operations, one-time event organizing, etc. Also make sure you have a standard tool for diagramming, and some stencils for everything you draw frequently. It's invaluable, a picture says a thousand words. Encourage white board and paper sketching too, and editing them into other notes.

Second, and more important, you need to get buy-in on the intent and expected benefits. This is hard. The environment in some companies is so dysfunctional, some people have learned that meetings exist to waste time, and ticket queues exist to grow long and stale. They will pattern match sincere requests for participation to a request to waste their time. Or maybe they do appreciate those tools, but they've never been part of a development process where, by the time a ticket reaches a developer, the feature has been fully specced out and validated, and the bug is sufficiently analyzed and reproducible. To achieve this requires the design and QA team to have their own separate queues and tasks, as disciplined as the devs themselves.

Participants need to internalize that they can actually save everyone time, a tide that lifts all boats. It also translates into such luxuries as actually being able to take 2 weeks off without having to check your email. Fear of stepping on toes can prevent contributions from being attempted at all, so you should encourage the notion that the best critique comes in the form of additional proposed edits. Often, bad attempts at collaboration lead to a vicious cycle, where the few initiators burn out while reluctant non-participants feel helpless, until it gets abandoned.

In practice, swarm intelligence is a fickle thing. It can seem magical when things spontaneously come together, but often it's actually the result of some well spotted cow paths being paved, and a few helpful individuals picking up the slack to guide the group. You don't actually want an aimless mob, you want to have one or two captains per group, respected enough to resolve disputes and break ties. When done right, truly collaborative creation can be a wonderful thing, but most group dances require some choreography and practice. If your organization seems to magically run by itself regardless, consider you merely have no idea who's actually running it.

Legibility on Sale

In addition to day-to-day legibility of the workplace, there is a big need for accumulated legibility too. With so much communication now needing to be explicit rather than implicit, you run the risk of becoming incomprehensible to anyone who wasn't there from the start. If this becomes the norm, an unbridgeable divide forms between the old and the new guard, and the former group will only shrink, not grow.

A good antidote for this is to leverage the perspective of the newcomer. Any time someone new joins, they need to be onboarded, which means you are getting a free 3rd party audit of your processes. They will run into the stumbling blocks and pitfalls you step over without thinking. They will extract the information that nobody realizes only exists in everyone's heads. They will ask the obvious questions that haven't actually been written down yet, or even asked.

They should be encouraged to document their own learning process and the answers they obtain. This is a good way to make someone feel immediately valued, and the perfect way to teach them early the right habits of your information ecosystem. You get to see what you look like from the outside, so pay attention, and you will learn all your blind spots.

Who are the staff and their roles and competences? How can I reach someone for this thing, and when are they available? What are our current ongoing projects and when are they due? What's our strategic timeline, and what's our budget? What's the process for vacations, or expenses? Remote work takes away a thousand tiny opportunities to learn all this by osmosis, and you need to actively compensate.

The resulting need for transparency may seem daunting, particularly if you need to document financial and legal matters. It can feel like dropping your pants for all to see, opening the floodgates to envy and drama to boot. It's a mistake however to consider it superfluous, because that gate is always open, whether you want it or not. If left unaddressed, it will be found out through gossip regardless, only you won't hear about any accumulated resentment until it's likely too late to resolve amicably.

It's also a red flag if someone doesn't want to document important discussions and negotiations. Like a boss who prefers to talk about performance or a raise entirely verbally and off-the-record, out of anyone else's earshot. Or a worker who can't account for their own hours or tasks, and pretends what they do is simply too complicated to explain. Such tight control of who hears what is never good, and means someone is positioning themselves to control information going up and down an organization entirely for their own benefit. However, as the cost of record keeping has been reduced to practically nothing, employees have a fair amount of power to push back. Everyone should be encouraged to ask for written terms for deals and promises, and keep their own copies of their history, including key negotiations and discussions. They should store this outside of accounts that can be locked out upon dismissal, or tampered with by a malicious inside actor.

I leave you with a trope, the beast that is the Big Vision Meeting. Usually something has gone wrong which casts doubt on the company's future, or which puts management in a bad light, or both. Likely people are being "let go". Before this news can be delivered, the bosses must save face. So they give a 1-3 hour PowerPoint which projects the company into the future for a year or two, and lays out how successful they will be. Crucially absent will be the specifics of how they will get there, and instead you will get abstract playbooks, colorful diagrams and "market research" or "financial analyses" that don't have any real numbers in them.

It's important to consider the perspective of the worker here: the minute the Big Meeting starts, they already know something is up, because it is always called without notice. Everything that is not critically urgent is immediately put on hold. So they have to sit through this possibly hours-long spiel, wondering the entire time how bad it actually is, while the bosses think they are elevating spirits, in a stunning failure of self-awareness. Finally they tell them, and then the meeting ends soon after, and the question they had the entire time was not answered: how are we going to get through the next 2 weeks, what's our plan here?

The worst of the worst will do this by asking the non-fired employees to come in an hour late, so they can fire the unlucky ones by themselves, without having to own up in front of everyone at the same time why they had to let them go. Certain types abhor this lack of image control. You'll learn to spot them quickly enough. My real point though is what this Big Vision Meeting looks like when everyone's remote: they can just break the news individually, selling it as a personal touch, and don't even have to tell the same story to everyone all at once. Sometimes learning to deal with a fully remote environment means taking on the role of an investigator and archivist. Keep that in mind.

The best way to capture the necessary mindset is that of Minimum Viable Bureaucracy: we need to make our tools and processes work for us, with a minimum amount of fuss for the maximum amount of benefit, without any illusions that the technology will simply do it for us. It can even save your bacon when the shit hits the fan.

That means engaging in things many workers are often averse to, like creating meeting agendas, writing concise and comprehensive documentation, taking notes, making archives, and much more. But once people clue in that this actually saves time and effort in the long run, they'll wonder how they ever got things done without it.

Or at least I do.

Edit: Apparently I'm not the first to come up with the term!

August 30, 2019

I published the following diary on “Malware Dropping a Local Node.js Instance“:

Yesterday, I wrote a diary about misused Microsoft tools[1]. I just found another interesting piece of code. This time the malware is using Node.js[2]. The malware is a JScript (SHA256:1007e49218a4c2b6f502e5255535a9efedda9c03a1016bc3ea93e3a7a9cf739c)… [Read more]

[The post [SANS ISC] Malware Dropping a Local Node.js Instance has been first published on /dev/random]

August 29, 2019

I published the following diary on “Malware Samples Compiling Their Next Stage on Premise“:

Today I would like to cover two different malware samples that I spotted two days ago. They have one interesting behaviour in common: they compile their next stage on the fly, directly on the victim’s computer. At first, it seems weird but, after all, it’s an interesting approach to bypass low-level detection mechanisms that look for PE files.

Reading this, many people will argue: “That’s fine, but I don’t have development tools to compile source code on my Windows system”. Indeed, but Microsoft provides tons of useful tools that can be used outside their original context. Think about tools like certutil.exe or bitsadmin.exe. I already wrote diaries about them. The new tools that I found “misused” in malware samples are “jsc.exe” and “msbuild.exe”. Chances are that you have them installed on your computer because they are part of the Microsoft .NET runtime environment. This package is installed on 99.99% of Windows systems; otherwise, many applications would simply not run. Out of curiosity, I checked different corporate environments running hardened endpoints and both tools were always available… [Read more]
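As an aside (my own illustration, not part of the diary): a simple hunting idea for such samples is to flag scripts that reference these living-off-the-land binaries. A minimal sketch, where the tool list and the `flag_lolbins` helper name are assumptions:

```python
# Living-off-the-land binaries often abused to build or fetch a next stage.
# Illustrative list only, not exhaustive.
SUSPICIOUS_TOOLS = ["jsc.exe", "msbuild.exe", "certutil.exe", "bitsadmin.exe"]

def flag_lolbins(script_text):
    """Return which known LOLBins a script references, case-insensitively."""
    lowered = script_text.lower()
    return [tool for tool in SUSPICIOUS_TOOLS if tool in lowered]

# A JScript dropper invoking the .NET JScript compiler would be flagged:
sample = r'cmd = "C:\Windows\Microsoft.NET\Framework\v4.0.30319\jsc.exe /out:s.exe p.js"'
print(flag_lolbins(sample))  # ['jsc.exe']
```

In practice you would match on process-creation events or sandbox logs rather than raw script text, and anchor on full paths to reduce false positives.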

[The post [SANS ISC] Malware Samples Compiling Their Next Stage on Premise has been first published on /dev/random]

August 26, 2019

A thrilling adventure of Thierrix and Ploumix, indomitable cyclists who still hold out, now as ever, against the empire of Automobulus.

Wedged into my cycling shoe with its ultra-stiff carbon sole, my foot slips on a sharp rock. My big toe screams in pain as it gets crushed into a crack. The pedal of my overloaded bike ploughs into my right calf as my machine throws me off balance and sends me to the ground for the umpteenth time. I close my eyes for a moment; I dream of falling asleep right there, at the side of the trail. I'm hungry. I'm sleepy. Every joint and every inch of skin hurts. My sock has been shredded by a branch that gashed my ankle. The brambles have raked my shins. I no longer know what day it is, or how long we have been pedalling. I couldn't place myself on a map. I vaguely remember the names of hamlets we passed through this morning, or yesterday, or the day before, or that we hope to reach tonight. I mix them all up. My stomach turns at the thought of swallowing yet another sugary energy bar. When was my last hot meal? Yesterday? The day before?

Thierry told me we were here to suffer, before vanishing at full speed into the rocks, nimble as a chamois on the stony slopes that remind him of his native garrigue. He will have to wait for me. Far from his agility, I drag myself along, a clumsy, ill-suited animal. I suffer. I have pushed my bike up kilometres of climbs that were too steep. I clumsily hold it back down kilometres of descents that are too sheer. It was well worth bringing a bike.

My despair has evolved. First I hoped to reach a town worthy of the name, to find a real restaurant. Then I hoped to reach any town at all, to fill my bottle with fresh water not polluted by the electrolytes that are supposed to hydrate me but instead burn holes in my stomach. Then I moved on to hoping for a road, a real one. Then for a path I could actually pedal on. Or even just any path at all, where every metre would not be an ordeal, where outcrops of sharp rock would not give way to oceans of brambles strewn with fallen trees.

We are here to suffer.

I am lost in the bush with a guy I had never met a week earlier. The son of a killer, with violence in his genes, as he claims in the book he gave me the day before our departure and which, fortunately, I have not yet read. Maybe he was trying to warn me. But what was I doing aboard that xebec?

I'm suffering. And, to be honest, I love it.

This adventure, we have decided to tell you about. Without consulting each other. Each of us his own personal version. It's up to you to spot the inconsistencies, a giant literary game of spot-the-seven-differences. You have already been able to read Thierry's version. Here is mine.


It all started years earlier. Neither of us remembers exactly when. Thierry and I read each other; we have sporadic epistolary exchanges about literature, self-publishing, basic income, science fiction. I admire him because, on a subject I dispatch in a few blog scribbles, he is capable of producing a whole book and getting it published. I read him avidly and am flattered to discover, to my surprise, that he quotes me in « La mécanique du texte ». We share the same SF culture, the same way of operating. During a stay in Florida, Thierry discovers bikepacking, self-supported touring by bike. A discipline I have been dreaming about for several years without ever daring to commit to it. Thierry, for his part, throws himself into it headlong and shares his experiences on his blog.

My wife pushes me to contact him to organize a trip for two. She senses my longing. Thierry doesn't need to be asked twice. In a few emails, the basic idea is settled: we will ride mountain bikes from his native Mediterranean to the Atlantic. He advises me on gear and embarks on a monk's labour of plotting a track, an itinerary built from hundreds of mountain-bike rides published online by cyclists from all over France, which he patiently strings together end to end.

For my part, I only have to take care of my gear and my training. I am afraid of not measuring up.

Last training ride

27 km, 633 m of climbing

A nasty sunstroke and a touch of overtraining have had me flattened since late July. We are spending a few days at my cousins' place in the Cévennes. I haven't ridden in 10 days and a nasty worry gnaws at me: won't I look ridiculous next to Thierry? Me, who has never climbed a single pass; me, whose longest climb ever on a bike is the Mur de Huy.

My cousin Adrien suggests a ride. He is going to launch me onto the Col de la Pierre Levée, near Sumène. We set off, and I am happy to feel the pedals under my feet. The pass looms very quickly. After a few dozen metres I find my rhythm and, as agreed, I drop Adrien. I climb alone. The pleasure is intense. I feel so good in the effort that I am a little disappointed to see the summit arrive so soon. I have climbed a pass. A tiny one, admittedly, but I loved it. My legs are asking for more.

My first pass


54 km, 404 m of climbing

I am a little stressed at the idea of meeting Thierry in the flesh. Well, like all good cyclists, we are more bone than flesh.

Meeting epistolary acquaintances is always double or nothing. Either the person turns out to be far nicer in real life than online, or there is no chemistry at all and the meeting sounds the death knell of all our exchanges.

I am all the more nervous because I will be staying at Thierry's with my wife and children for three days. I have guaranteed my wife that he is a good guy. In truth, I have no idea. For her part, she wants to size up the man to whom she is about to entrust her husband for 10 days.

As I ring at the gate of Candice Renoir's house (which I had never heard of, but that is what Google Maps says), my anxiety is at its peak. A permanent social anxiety that, for more than 30 years, I have camouflaged under a sincere but energy-draining joviality and cheerfulness.

Within the first few seconds, I am reassured. Thierry is in the first category. He may rant online (and offline), but he is affable, witty, interesting, welcoming. He makes me feel at ease right away. On the other hand, the bike comes first. I barely have time to unload my bag and kiss my children before he has me jumping on my machine to go discover the garrigue with Fred and Lionel, two of his companions.

The route quickly climbs onto gravel tracks. My domain. I love it when the road rises, when fine, fast-rolling gravel crunches under the tyres. On the way back down, my spirits sink. The rockier, more technical sections force me to put a foot down.

– Lucky we chose a fast, non-technical route.
– Sometimes we go through there, they reply, pointing at narrow, ultra-steep trails I can barely make out in the prickly vegetation.

My bike is fully rigid. I am not a mountain biker. I stack up the handicaps. But, fortunately, I compensate: I can climb, and I know how to ride into the wind. So I tow our mini-peloton along a very long straight beside the Canal du Midi.

I am reassured about my form and my legs. A little less about my technique. But I am as happy as a prince to share a ride on Strava with Thierry and his friends, and to have discovered the garrigue.

Now, 48 hours of rest, Thierry orders. Or rather 48 hours of preparing the bikes, the gear, the last errands. 48 emotionally difficult hours for the husband and father that I am, because I do not know when I will see my family again. On the last evening, the children struggle to fall asleep. They can feel my nervousness. I am up at 6 am, the middle of the night for a night owl like me. I slept a few hours. Far too few. I kiss my eldest, who is fast asleep. My wife and son wave me goodbye. I try to engrave that image in my memory, like a soldier leaving for the front.

Day 1: first pass

116 km, 2,150 m of climbing

Goodbye!

We turn the corner of the street. I am not of the breed of sailors who leave for months at a time. Leaving my family for about ten days is harder than I expected. But the bike quickly takes over. We are riding through Thierry's territory. He knows the trails by heart. It feels like a simple outing, as if we will be home by noon.

We soon reach Pezenas for breakfast. We leave Thierry's well-trodden trails, but he is still at ease, close to his own world. The path sometimes turns very technical, even unrideable. Fortunately, it is never for more than a few hundred metres, so I do not worry unduly as the kilometres tick by.

We take a break in Olargues. I realize we have eaten nothing hot since the previous evening. In a blind alley, a shabby-looking little hole-in-the-wall is the only establishment open. The owner, an energetic young woman, welcomes us with a huge smile, bending over backwards to please us. She offers to make us savoury crêpes before going back to lecture her husband who, though very kind, seems a bit hapless.

For several kilometres now, a wall of mountains has been looming on the horizon. Thierry keeps telling me that tonight we will sleep at the top.

Tonight, we will sleep up there!

I am scared.

I ask Thierry to stop in a nondescript park so I can sleep for a quarter of an hour. I prepare myself mentally. I do breathing exercises.

Then we set off again. At the foot of the mountains, Thierry's track reveals its first major error. It cuts across what is quite clearly an orchard, then a field. Of a path, there is no sign. Fortunately, it is only a few hundred metres, during which I push my bike through fairly sparse scrub.

Into the scrub…

The field opens onto the hamlet of Cailho, a few houses built into the mountainside. I do not know it yet, but we are already on the pass. A few hairpins of tarmac later, the track plunges into a gravel path. I have gained a few metres on Thierry. At the first bend, I stop to check that we are on the same route. As soon as I spot him behind me, I start pedalling again, at my own pace.

Je pédale sans relâche. Dans les tournants caillouteux, mon vélo surchargé à parfois du mal à tourner assez sec mais je grimpe, les yeux rivés sur mon altimètre. Je sais que nous dormirons à 1000m ce soir. Nous ne sommes pas encore à 400m.

Alors, je pédale, je pédale. Je me mets au défi de ne pas m’arrêter. Défi que je rompt à 800m d’altitude pour ouvrir un paquet de bonbons powerbar et m’injecter une dose de glucose concentrée. Je repars immédiatement. Je souffre mais la dopamine afflue à torrent dans mon cerveau obnubilé par mon compteur et ma roue avant.

994m. 993. 992. 990. J’ai franchi le sommet. Je m’écroule dans l’herbe, heureux. J’ai grimpé un col de près de 1000m avec un vélo surchargé après 116km. J’ai adoré ça. Thierry me rejoint. Nous apercevons un magnifique lac entre les arbres. Vézoles. Notre étape.

We are not alone. The site is popular with campers and hikers. We find a deserted spot and pitch the tents, then I treat myself to a dip in 23°C water.

We exchange no more than a few sentences before retreating to our cocoons. It isn't necessary. We are both happy with the day. Sliding into my sleeping bag, I feel proud to now be a bikepacker. I am convinced we have been through the hardest part. The rest will be a piece of cake. I write in my journal that I have just had my hardest day ever on a bike. I feel like I have arrived.

There is a wild, out-of-time side to bikepacking. There are no more conventions, no more civilization. You eat when you can and when the opportunity arises. You sleep when you can sleep. You suffer without knowing when it will stop. You pass people and towns that are mere snapshots in a journey that seems endless. You are completely alone in your pain, in your effort, in your head. And you have the satisfaction of carrying everything you need to live. You move forward and you no longer need anything, or anyone.

What an adventure!

As this is the first time I have pitched this tent, I have tensioned some parts badly. The fabric flaps in the wind all night. I have the feeling someone is prowling around our bikes. I sleep with one eye open. I am also too excited by our performance. I wake up every hour. Thierry gives the wake-up signal a little before 7am. I feel as though I haven't slept, for the second night in a row.

Day 2: lost in translation

67km, 1630m of climbing

We leave Lac de Vézoles. Breakfast is planned 13km away, in La Salvetat-sur-Agout. A doddle, especially since it's all downhill. I set off on an empty stomach. Big mistake. The track loses itself in black-rated mountain-bike trails: endless rocky climbs and descents, the kind you see in YouTube downhill videos while wondering "how do they even do that?". On a fully rigid bike loaded with bags, the trail is an ordeal.

Still looking fresh at the start!

One of the straps of my Apidura handlebar bag snaps. I am disappointed by how fragile the whole thing is. Thierry confides that they have probably never done any mountain biking at Apidura. He himself had to tinker quite a bit to get his own bag attached. With a bag flopping around, the ordeal threatens to turn into hell. Fortunately I have a flash of inspiration: I open the bag and grab the belt from my civilian shorts, out of which I fashion an attachment that will prove far more stable and solid than the original strap. We set off again.

Often, the track seems to plunge into the woods. It no longer matches anything. On Google Maps or OpenStreetMap, we are in an empty zone. After my second fall, I conclude that fasting was not a good idea. I wolf down an energy bar. It will keep me going for the three hours it takes us to get out of this hell and reach La Salvetat-sur-Agout.

It is nearly 11am; a crowd has taken over the village, it's market day. We sit down on the terrace of a bakery to stuff ourselves with pains au chocolat and some sort of square pizza slices. We have barely covered 13km, but I tell myself that from here on the riding will be smooth.

And it's true, for a few kilometres. We cross the Lac de la Raviège. I want to swim, but we have to ride.

Thierry assures me that the rocks are over. I don't know whether he believes it himself or is trying to protect my morale.

Very quickly, the track goes mad again. It seems to cut in straight lines across areas that are blank on every mapping application. But it leaves us no choice: no road goes in the right direction.

We take paths that seem forgotten since the Middle Ages. Loose rock alternates with dense vegetation. No town, no settlement. The villages are just names on the map before revealing themselves as mirages, a pair of windowless houses facing off in a duel, having raised the false hope of a café terrace.

It is 2pm when, after a short stretch of departmental road, we reach a restaurant Thierry had marked on the route. The only restaurant for 20km around. A sign says "closes at 2:30pm". We sit down, ready to order. The waiter comes to tell us he is no longer taking orders. His tone is final; I try in vain to negotiate.

Over the course of this trip I will learn that the crêpe of the first day was an exception. In France, the trick is not merely to find a restaurant. The restaurant also has to be open, and you have to land within the narrow window during which it is acceptable to place an order. Some of our experiences will verge on farce, even tragicomedy.

It is 2pm and we have to make do with three meagre pieces of cheese.

We set off again. It was written that this day would stand under the sign of the green hell, of the bush. The afternoon is no exception.

For several kilometres now, the track has been taking us along a road whose tarmac is pierced by weeds. No crossroads, no junction. But, naively, I am convinced that a road necessarily leads somewhere.

We learn at our own expense that this is not always the case. After several hundred metres of descent, the road brings us face to face with a house cobbled together from odds and ends. A dog stops us from going on. A bearded man comes out, unfriendly.

"You're on my property!" he splutters.
"We're following the road," Thierry explains. He adds that we haven't seen a junction for kilometres, that our track just passes through.

The man sizes us up.

"Just go back up to the beehives. There's a path."

Indeed, I remember passing some hives. We laboriously climb back up the slope. Just past the hives, a semblance of a path seems to take shape, provided you make a real effort of imagination. This path exists on no map, no track. In fact, it seems to exist only as a slight thinning of the brambles.

Sure enough, around a bend, it ends abruptly in masses of felled trees. No way through. Below, I spot what looks like the continuation of the path. We cross a few dozen metres of vegetation to reach it and carry on. After several hundred metres, whipped by brambles and low branches, we are certain of one thing: it has taken us more than an hour to get around the house of that unfriendly hermit. Now we have to climb out of the hollow, up the opposite slope.

In the middle of the forest, the track takes us across what is unmistakably a garden surrounded by electric wires. From a stone house blares the music of a radio.

"Don't stop!" Thierry whispers. No way are we getting turned back again.

We step over the cables, cross the space halfway between garden and clearing, slip under the next fence. The path ends abruptly and has drifted away from our track. I go ahead on foot to scout. After a few hundred metres of brambles and saplings with lacerating branches, I find a path several metres below. I call Thierry. We lower the bikes down. The path is cluttered with felled trees.

Not exactly smooth going…

I am exhausted. All we do is get on and off our bikes, climb and descend in altitude, climb in and out of sunken lanes. It is already late when we come across a departmental road flanked by three houses that the map pompously calls "Sénégats". Here, every cluster of dwellings gets its own name. It must be said they are so rare.

The track continues straight ahead up what looks like a steep slope. I suggest we allow ourselves a detour and climb the whole thing via the zigzagging departmental road. Thierry questions a passer-by with a strong Irish accent. She confirms that it is steep and that she has never followed that path to the end, even on foot.

We take the departmental road, wondering what could possibly have brought an Irishwoman here, to a corner where civilization boils down to a narrow strip of tarmac clumsily criss-crossing a world of hollows and holes where even mobile reception is patchy.

It is late. We have only done 60km, but the question of resupply is becoming pressing.

Thierry has marked Saint-Pierre-de-Trivisy, a small town which, according to the map, has a petrol station, a restaurant, a bakery and a campsite. The big city!

It is past 6pm when we arrive. The petrol station turns out to be a body shop with an ancient pump rusting in front of it. Convenience stores have not yet reached this part of the country. The restaurant won't open for another 48 hours. The bakery is, logically, closed. We can't find the campsite; despair washes over me; the town is deserted, bleak.

Suddenly a young couple appears, all smiles, a child in a pushchair. They look like holidaymakers. We ask them about somewhere to eat. They suggest the campsite snack bar, just behind the church.

The campsite turns out to be a full-blown leisure centre with a pool and a playground. The snack bar, however, only takes orders from 7pm. We are too early, except for desserts. Never mind: we each order two crêpes as a starter and, at 7pm sharp, steak and chips. All rounded off with a dessert.

Vegetarians? Not today!

Never has food seemed so delicious to me.

Bikepacking also means fleeing civilization, returning to the wild. Riding through villages feels incongruous. To every person we pass, we are an anomaly, an adventurer. We are alone, different. And yet, after only 48 hours, we miss civilization. A proper hot meal, a shower, a toilet. Everything we take for granted becomes a luxury. Even food and fresh water become scarce. When a chance to eat appears, you don't choose. You take whatever comes, because you don't know when the next one will turn up.

Fleeing civilization, and realizing its benefits. Despite the anger and disappointment of having covered so few kilometres, bikepacking is transforming me!

In exchange for €22, the campsite grants us the right to pitch our tents on a thin strip of grass that serves as a car park next to the toilet block.

Seeing us arrive with all our gear, a camper comes over spontaneously.

"You surely don't feel like starting to cook. As it happens, I've made far too much pasta. And I have melon."

I am touched by this simple act of humanity. But I have to decline reluctantly, explaining that we have just treated ourselves to steak.

I am starting to get the hang of pitching my tent, and I fall asleep almost instantly. Before slipping into the arms of Morpheus, I register that our friendly neighbour doesn't just share his meals. He also graces the campsite with gargantuan snoring. But I tune out the noise and let it rock me to sleep.

Thierry won't manage the same. He spends the night whistling, clapping his hands and dipping into his southern stock of swear words to shut the man up, all to the great amusement of the sleeper's two daughters, who spend the night giggling. A violent storm tears the sky apart. Our thunderous neighbour had just told me the campsite was flooded two days earlier. I keep watch, check my bags. But the tent holds up perfectly. By morning it will already be almost dry, and at most I'll have to add a drop of oil to my bike chain.

We set off again and, for the first time since the start, I sense that fatigue is getting to Thierry too. It reassures me: I was beginning to think I was dealing with a superman. But he handles rocky trails better than a sleeper's snoring.

Day 3: rivers are not long quiet streams

102km, 1700m of climbing

Every day turns out to be fundamentally different. Where the first day gave us garrigue, dry little hills and loose stone, and the second day hollows and rolling bumps full of vegetation, now the trails turn into narrow roads and the climbs get steeper but smoother. I am in my element; I ride, I take pleasure in climbing all these slopes, which now seem short to me but are longer than anything I have ever done around home. We race down towards Albi, the kilometres fly by. Thierry is less comfortable: his front-suspension mountain bike, which floated over the rocks, now gives him the feeling of sticking to the tarmac. But there are still plenty of trails. The landscapes are sublime; civilization is now omnipresent. We merely skirt around it, but its reassuring presence hovers over us, a spectre sneering at our naive attempt to escape it.

Bone-shakers!
The writer-philosopher, inspired by the landscape

We blast through Albi, with just enough time for a drink at the foot of the cathedral. Back in the fields, the track seems to lead us into the middle of tall grass, facing a sign proclaiming "No right of way" with hollow authority.

The previous day taught us a lesson. If there doesn't seem to be a path, and OpenStreetMap shows no path, then let's not push it; let's go around the obstacle. This strategy will let us string the kilometres together.

In the "waides", as we say back home…

We stop for lunch in Monestiés, a charming village full of character. The hamlets of isolated houses have indeed given way to these small semi-touristy towns exuding a pseudo-medieval atmosphere, the better to attract the writers of travel guides.

The restaurant terrace is pleasant but, of course, we are outside kitchen hours. We have to settle for a barely edible charcuterie platter.

But I have picked up the habit of the true bikepacker: every calorie is worth taking, since you don't know when the next one will come. Quantity trumps quality, whatever the time and place.

We carry on to Laguépie, another picturesque village straddling a fork of the Aveyron.

A stretch of riverbank has even been fitted out as a swimming area, with inflatable games and lifeguards. As Thierry settles onto the local terrace, I say to him:
"Give me five minutes?"

Without waiting for an answer, I swiftly pull on my swimsuit and dive into the Aveyron under his flabbergasted gaze. Exactly 4 minutes and 49 seconds later, I climb out and join him. He doesn't feel like diving in. He is in bike mode, not swimming mode. But unlike him, who lives beside his own pond, I never let a single chance to get in the water pass me by. We set off again and Thierry suggests following the Aveyron to avoid climbing onto the plateau. Our next stop, Najac, does indeed sit on the river.

In the drink…

Following the river, we lose sight of each other for a moment. I carried straight on and missed a fork. I hear Thierry's voice, off to the left, on a path veering sharply away.
"Ugh, it's rough over here, you have to push, it's all rock."
"I'm on a waymarked orange MTB trail, it rolls really well," I reply.

He turns around to follow me. I don't know it yet, but I have just made the worst mistake of the day.

Far from stopping cleanly, the orange trail simply becomes less and less distinct. Obstacles appear: the trail has collapsed, and we have to scramble down through the rocks to river level, then climb back up to regain a trail which, though waymarked, is clearly less and less rideable. It eventually all but disappears. We struggle through a hell of rock and vegetation. To our left, a sheer cliff. To our right, the river. Between the two, a vague hope. Turn back? That would mean repeating every difficulty we have already cleared. The Aveyron twists and turns. I point to a bridge on the map. Our only chance.

This is starting to smell of trouble…
After this section I won't even have the strength to draw my camera; it will get worse…

Somehow we reach the famous bridge. It is a railway line crossing about ten metres above our heads. Thierry can't see what we can do. I claim to have made out a path climbing towards the bridge. We turn back and, this time, my intuition proves right. After a few metres of nettles and tall weeds, we come out onto the railway. We cross quickly and take the bridge, keeping a respectful distance from the rails. Just after the bridge, a trail leads us to a white-gravel towpath. A motorway for our bikes. Najac is getting closer; we are out of hell. It has taken us hours to cover the last few kilometres. I am exhausted.

Suddenly, around a bend of the Aveyron, Najac appears. I jump.
"You're going to make me climb that?"


Because Najac is a true eagle's nest perched on a rocky spur. The track takes us up to the village via a medieval path whose rocks are as sharp as the slope is steep. I laboriously push my bike over a small grass-covered bridge that no doubt saw Ramiro and Vasco pass before me.

The village itself is all slopes and steep drops. But on a rideable surface the gradient doesn't scare me, and we ride around looking for an open restaurant. A local recommends one. The terrace is narrow but several tables are free. We sit down. It is 7:50pm, and the waiter comes to inform us that he is no longer taking orders.

It's absurd, like a running gag. Fortunately, we passed another restaurant on the way. The staff are friendlier, but the frankly frugal hamburger takes a very long time to arrive. At the next table, a Parisian woman is fascinated by our adventures. She asks lots of questions and will thank us for a delightful evening.

I realize how much bikepacking makes us travellers, permanent strangers. While motorists teleport themselves without attracting attention, we have seen every metre of landscape since the Mediterranean. We are passing through. We may frighten or fascinate, but we leave no one indifferent.

We order dessert. It takes even longer to arrive than the burger. It is pitch dark when we reach the campsite beside the Aveyron at the foot of Najac. All that climbing, just to eat a lousy hamburger, I grumble.

The campsite reception is closed. Near the toilet block, music blares at full volume, interspersed with a clumsy rendition of the Connemara song on a cheap synth. Kids scream and chase each other through the showers and toilets, slamming doors. Unable to find a free pitch, we settle on a meagre square of grass in front of a seemingly abandoned caravan. I am so tired that, all night long, I stress about Thierry announcing that it is 7am. At 5am, lorries noisily unload rubble for half an hour. At 7am, we emerge into a campsite soaked by the river's damp. Our bikes, our bags, our tents look as though they came out of the river itself.

As we slip away quietly, I watch last night's revellers heading to the toilet block. I wonder whether they enjoy this kind of holiday, or simply can't afford any other.

Day 4: between brambles and humans

100km, 1600m of climbing

Barely out of the campsite, we attack, cold and on an empty stomach, 5km of climbing at an average of 6%. I feel full of energy, but I have learned to know myself. I get up too early, I sleep too little. My energy won't last. Once over the pass, drowsiness takes hold of me. As every morning, I will have to fight an irresistible urge to sleep until 11 or noon. The only cure? Sleeping until 9. Unfortunately, that is not on the programme.

We stop at a dingy bakery in a small village. I swallow two mediocre pains au chocolat. My stomach is starting to complain about this diet of energy bars, charcuterie-and-cheese platters and pains au chocolat. All morning, I have particularly unpleasant acid reflux. I hope to get a cup of tea in Villefranche-de-Rouergue, the big town of the area.

But the outskirts of Villefranche do not inspire confidence. We reach a height overlooking the town: grey, industrial, dreary. If we drop down into the centre, with no guarantee of finding an open terrace this early, we will have to climb all the way back up. Thierry suggests carrying on. I follow his wheel. Villefranche doesn't appeal to me. I am sleepy, there is acid in my oesophagus, the pains au chocolat are threatening to come back up, and I say to Thierry:
"I'm dreaming of a hot tea. An Earl Grey."
"Given the one-horse towns ahead of us, fat chance."

And then occurs what, in the Boutchik tradition, deserves to be called a miracle. As we ride through Laramière, yet another hamlet of a dozen houses with more goats than inhabitants, I stop next to a sign. A bell hangs from it, with the words: "For the bar, ring the bell."

Go ahead, ring it!

I don't even have time to try before a man approaches us, hesitant.
"Are you here for the bar?"
"Do you have tea?"
"Er, I'll have a look. A friend of mine opened the bar; she's away, I'll see what I can do."

Miracle: he brings us back a tea that tastes delicious to me and completely calms my heartburn. Thoroughly offbeat and hippie-ish, the Ding-Dong Bar, as it is called, normally requires a membership card, but hey, it's all for fun. Two teas and two slices of cake cost us the grand sum of €2. Not forgetting a visit to the dry toilets hidden behind a rickety board. Probably the hardest part for me. Defecating in a hole I dig in the forest, I can still enjoy. In campsites or restaurants, I disinfect the seat and get through it by trying not to think too much. But dry toilets, I really struggle with. And since I am the complete opposite of constipated, on this kind of raid I can't afford to be squeamish.

It is never talked about, but pooping is one of the most unavoidable parts of the trip. There are those who can hold it for several days and, at the other end of the spectrum, me, who needs to go at minimum before sleeping, on waking, and two or three times over the rest of the route. Between public toilets, bar toilets and wild stretches of forest, you have to calibrate your needs carefully. Just as with food, I never pass up a chance, because I don't know when the next one will come.

Relieved, and rested by a few minutes' nap, I leave the Ding-Dong Bar reinvigorated, then fill up on a passable fish and chips in the next village. A stop at a dolmen, and then we are back to the climbs and descents, with proper little passes on rocky trails and sections at nearly 20%. I hang on; these are definitely my favourite parts, especially once at the top. Like a muleteer, I notice that my bike moves better in this kind of situation when I swear and shout. But it's mostly theatre, because I love it.

Rounding a summit, a splendid medieval village appears between the trees. Saint-Cirq-Lapopie. Thierry explains that it's well known, a pretty tourist village. I had never heard of it and think nothing of it. We descend by a small GR-type footpath, barely rideable, and meet only one walker. The path ends abruptly. Hell suddenly breaks loose.

Seen from above, it looks magnificent!

Saint-Cirq-Lapopie is not well known and touristy. It is extremely well known and extremely touristy. From our deserted footpath, we emerge into a compact mass of humans sweating, oozing, talking loudly, smoking, buying overpriced trinkets and taking pictures of themselves. Threading our bikes through to a terrace is an obstacle course. Halfway through our ice cream, Thierry gets up, quickly imitated by me. A couple of smokers has sat down next to us and the air has instantly become unbreathable.
"This is hell," Thierry murmurs.
"I preferred the bush of day two," I add.

In our shared misanthropy, we understand each other without needing to say more. Time to flee. But the path down from Saint-Cirq-Lapopie is a steep GR cluttered with tourists of sometimes shaky fitness. We have to descend very carefully. Down on the banks of the Lot, the same saintly circus (Lapopie) goes on for kilometres. We row against a flood of tourists desperate to take the same selfie before heading back down.

The flood stops abruptly at the crossing of a giant car park. Unfortunately, so does the path. The brambles and rocks painfully remind me of the Najac episode.

"Ugh, as soon as there are fewer people, we get overrun by brambles."
"People are a kind of bramble."

Two philosophers on their bikes, as beautiful as the back cover of a Musso novel. Somehow we follow the course of the Lot. More badly than well. After a completely wild traverse, we pick up a much smoother path. At the edge of a field, sprinklers splash in our direction. Thierry is afraid of getting wet. I can't hold back a mocking exclamation.

"The southerner is afraid of a few drops falling from the sky! Back home, you wouldn't ride much if you were afraid of getting wet."

I tell myself the path will surely end up leading somewhere.

Well, no. After a few kilometres, two parked cars announce that it ends in a cul-de-sac. On the doorstep of a lonely house, two people stare at us in amazement and inform us that we have to turn back. And that it's a long way. But Thierry doesn't want to go back that far. He has spotted a climbing trail waymarked as a black MTB route. I had hoped he wouldn't see it.

"From now on, let's avoid rivers," he suggests.

Here we go again with loose rock: 200m of ascent over one kilometre. I am not walking beside my bike; I am vainly trying to drag it while I scramble up what must have been a path before a landslide. When it becomes rideable again, there are still a good hundred metres of climbing left. Before, no surprise, descending straight back down towards Cahors.

Cahors, where we have decided to eat. Thierry fancies a pizza and, barely into the town, we come across a small pizzeria that fits the bill. Before sitting down, I brace myself for the announcement that we are outside serving hours, that the moon is not in the right quarter, delivered with that typically French air of astonishment that you would dare ask for something as outrageous as a meal in a restaurant.

Against all expectations, we are served quickly and very kindly. The explanation soon comes from the waiter: his brother-in-law, a visually impaired physiotherapist, decided to quit his practice to open a pizzeria. And today is the first day.

The pizzas were very good, but Thierry doesn't want to linger. He wants to leave the town as quickly as possible to find somewhere to sleep.

We are still among the houses of Cahors when the track forks onto a rocky, vertiginous GR that takes us under a motorway bridge. The surroundings are sinister, strewn with wrecks and rubbish. In the middle of a field of scrap metal, a man is sitting. Ahead of us, a pack of border collies blocks the path, barking. Some growl and bare their teeth. I ask the man to call off his dogs.

Totally worth taking the bike, episode 118…

He makes a mocking gesture and laughs. Thierry then takes charge of opening the way by barking louder. The place is grim, and I suggest not pitching our tent too close.

After a descent and a short bump, we emerge onto a plateau from which, at that very moment, a paraglider launches.

It takes our breath away. The view is magnificent, almost 360°. We overlook the whole Lot valley. Thierry suggests pitching our tent right there. I suggest a few metres back, in a hollow sheltered from the wind by bushes. My intuition is that a plateau used as a paragliding launch site is likely to be a little windy.

I also go off to check the next part of our track, both to avoid a debate tomorrow morning and to pass the 100km mark for the day. As I suspected, the track drops off the plateau along a nearly vertical GR. The kind where, from the edge, you only see the path by leaning over. Thierry reassures me: we will retrace our steps and take a different descent.

We enjoy the evening facing this grandiose landscape. The villages light up in the valley; the night is magnificent.

King of the world, the next morning…

I have no idea what day it is, when we left, or where we are on the map. Our adventures blur together. I no longer know whether a memory belongs to this afternoon or is already three days old. On my phone, the holiday photos with my family seem to belong to another era, another life. Everything is so distant. The disconnection is total. My brain thinks only of pedalling. Pedal, find food, pedal. Pitch the tent, pedal. An intoxicating routine.

Despite a few comings and goings of lovers, and of paragliders keen, like us, to spend the night on the plateau, I will spend one of my most peaceful nights there.

Day 5: highway to siesta

62km, 900m of climbing

After a night of absolute calm at the top of our hillock, appreciating my tent and sleeping bag, my little cocoon, we discover that the valley has become a sea of clouds from which we emerge. The view is magnificent.

Not too keen on descending into that…

As I had predicted, the descent is difficult and is done mostly on foot. Then we meet a road and ride on through the mist.

Besides the traditional pains au chocolat, the shop has a few pieces of fruit. I take two apricots and a banana. Flavourless fruit that will nonetheless taste delicious, before we cross a narrow bridge, magnificent in the mist, and keep pedalling in the cold.

The view from my handlebars. You'd better fall in love with it, because your nose is in it all the time!

Today we are stopping over at Thierry's parents-in-law, where his wife and children are staying. He knows the region well from riding it on his mountain bike. With fatigue setting in, he doesn't much feel like wrecking himself on difficulties he knows well. And he wants to arrive in time for lunch. Instead of mountain-bike trails, we take the roads, where Thierry's bike is much less efficient. I try to help him by taking long pulls at the front. I love feeling the kilometres tick by. I love the short, punchy climbs on the little roads. I pedal with pleasure, I climb. It's hard, I suffer, but the shortness of the stage makes everything psychologically easier. We finally arrive before 1pm, after 62km and nearly 900m of climbing. It felt so easy compared to the other days!

L’adrénaline tombe chez Thierry qui s’écroule à la sieste. Chez moi, elle est remplacée par l’adrénaline sociale. Peur de commettre un impair, peur d’être grossier chez des gens qui ne me connaissent pas et qui m’accueillent à bras ouverts.

Isa, la femme de Thierry, ne semble pas trop m’en vouloir de lui avoir piqué son mari pendant une semaine. Je suis très heureux de rencontrer ce personnage central du livre « J’ai débranché ».

Je prends une douche, fais une machine, nettoie mon vélo. Je vais dormir dans un vrai lit après un vrai repas. Des pâtes, un fruit ! C’est délicieux, j’en rêvais. Je vais faire une grasse mat. Tout cela me semble irréel. Dans mes souvenirs et les photos, les journées de notre périple se mélangent, se confondent. Tout n’est qu’un gigantesque coup de pédale. La seule chose qui me préoccupe, c’est le dénivelé qui reste. C’est de savoir si le chemin existe, si je vais passer. Si j’aurai assez d’eau pour la nuit. Si on va trouver à manger. Si j’ai de la batterie pour mon GPS.

Finalement, le camping sauvage est encore mieux que le camping. J’apprends même à apprécier de chier dans un trou que j’ai creusé.

Cela ne fait que 4 nuits que nous sommes partis… Je me sens tellement différent. Tellement déconnecté de tout le reste de l’univers.

Pourtant, ce n’est pas comme si j’avais envie de continuer ça pendant des mois. Nous sommes en mode extrême. La fatigue est partout. Je suis épuisé, mes fesses sont douloureuses, mon genou se réveille parfois, mes gros orteils sont en permanence engourdis, je rêve d’arriver à Biscarosse, de crier victoire.

Je rêve d’arriver. Mais je ne veux pas que cette aventure s’arrête…

Day 6: Mad Max Marmande

99km, 1150m of climbing

What a pleasure to sleep until 9:30. To take it easy, to have a quiet breakfast. I feel a bit awkward intruding on Thierry's family life, but I fully enjoy the warm welcome.

Departure at noon. That suits me perfectly. No big slump. Lots of short steep climbs, landscapes still beautiful even if less spectacular. And, little by little, the world changes. Everything is fields. The trails are nothing but grass between two crops. The roads are busy with fast-moving cars. Everything seems a bit abandoned, a bit dirty. Dogs bark as we pass, or even chase us. The locals eye us suspiciously from their doorsteps. No real trouble, apart from blocked paths to go around and a few hundred meters through plowed fields. I regret having a frame bag that prevents me from shouldering my bike and putting my cyclocross training to use.

I take mischievous pleasure in pilfering a few plums without getting off my bike, without stopping. If I can grab the plum from the path, then it's mine.

We weren't in race mode, and that suited me just fine!

Marmande is a dead town on August 15th. Everything seems closed. Everything is dirty, sketchy. We pass a camper van and a car of the Belgian national cycling team. I recognize Rick Verbrugghe at the wheel. I discover that the Tour de l'Avenir was starting from Marmande today. It doesn't seem to have left the slightest cheer behind. We spot a sort of bakery, then a grimy little snack bar. We hope to find better. Everything is closed. A brasserie displays an appealing menu. But no surprise: we are outside the hours for ordering food. We go back to the bakery, which turns out to be almost empty.

In desperation, we fall back on the infamous durum. All's fair in war, and Marmande looks like a city that has been bombed and destroyed.

The finesse of the local gastronomy…

My backside is more and more painful. My legs are turning well, but my rear end will be glad to see the end of the trip. As soon as the road flattens and my lower back settles onto the saddle, I start crying out in pain. The applications of chamois cream have become more and more frequent. Thierry is not faring any better; he grits his teeth.

Where to sleep? Everything is sketchy, the plots are occupied by farms and their poorly maintained equipment. The smells around the canal are pestilential.

Thierry points to a wood on the map. We plunge into it at random. In the middle, an apparently unoccupied field. We pitch our tents there between two clouds of mosquitoes. We'll see tomorrow…

Last camp

I can see Thierry has had enough. This kind of terrain doesn't amuse him anymore. He wants mountain biking, rocky trails. The family break certainly did him more harm than good. Unlike me, he has reconnected with his daily worries. He has broken his rhythm. His backside has had just enough time to become really painful without getting any rest. For my part, even though my backside hurts too, I am back on the kind of paths I love, sunken trails between fields like the ones that criss-cross my native Brabant-Wallon.

I tell him we're here to suffer.

He doesn't answer.

To each his turn.

Day 7: an ocean of people

138km, 740m of climbing

The humidity is dreadful, penetrating. Waking up in the middle of the field, we find our tents, our bags and our bikes looking as if they'd been hosed down by a fire crew.

I realize we have crossed the famous pain au chocolat/chocolatine border. It had quietly shown itself through "chocolatine" labels in the shops but today, for the first time, the shopkeeper corrected me when I asked for pains au chocolat. "You mean chocolatines?" I almost replied, "Well yes, couques au chocolat!"

The initial plan for our tour was to push on to Arcachon, sleep in the area, and then ride back down to Biscarosse along the Atlantic. But Thierry's buttocks are of another opinion. Get there as fast as possible, in a straight line. Especially since the trails are hardly fun. Flat, always flat. If we were real writers, we would speak of a dreary plain, of a wave boiling in an overfilled urn, of us, heroes whose hopes Toutatis betrays. We settle for grumbling and swearing.

The terrace break is sacred!

Lunch break in Saint-Symphorien. It is 11:35. We would like to order.
— Not before noon! scolds an unfriendly host.
We actually have to wait until a quarter past twelve before he deigns to pull out his order pad. The landscapes have changed, but the customs of French hospitality seem immutable. As we are leaving, an elderly cycling couple stops. I admire them. Husband and wife, closer to 80 than to 70, yet still dressed in lycra on sporty bikes. The restaurant manager doesn't share my sympathy. They have barely parked their bikes when he announces that there are no more tables, or no more meals. Contemplating their disappointment, I simply climb back onto my machine and set off. Each break gives me a few kilometers of respite before my backside reminds me of its existence.

On my GPS, whatever the zoom level, we are a small arrow on a long straight line with nothing to the right or left. A straight line vanishing into the horizon is soul-destroying. Nothing to distract us from the pain lacerating our backsides.

Even the GPS shows one long straight line!

— At the next patch of shade on the road, I'm stopping to put on more cream, I say.

Several kilometers later, nothing has changed. We haven't been in the shade once. I end up stopping in full sun, the pain is so intense. I invent a game of staying out of the saddle for the whole length of those menacing irrigation booms, contraptions like high-voltage pylons sprawled across arid fields.

A dead-flat straight line: the most psychologically grueling thing there is!

To liven things up, the asphalt turns to sand. While it's a bit technical at first, I'm soon reduced to pushing my machine, then setting off again pedaling through the heather on the roadside.

Twice, a doe watches me pass, curious, not frightened at all. I tell myself the hunters must have a field day.

Welcome to the Landes!

The long straight line ends abruptly with a few hundred meters of twisty singletrack, and we pop out into the town of Biscarosse.

We didn't choose Biscarosse at random, but because it's where my cousins Brigitte and Vincent live, and they have two qualities: first, they are willing to put us up, Thierry and me; second, I'm extremely fond of them, and this is an excellent excuse to see them again. What I had forgotten, despite several stays at their place, is that Biscarosse is very spread out and that their house is nearly 15km from the beach.

Thierry had floated the idea of stopping at my cousins' and riding the last few symbolic kilometers to the Atlantic the next day. Maybe even pushing on to Arcachon. His backside no longer agrees. We have an iced tea on a terrace in the town center and decide to push on to the sea before coming back, closing out our raid today.

This last terrace break sums up the kind of orders you place when bikepacking:
— 2 large iced teas, 2 bottles of water, 3 chocolate muffins, 1 cookie and 2 large smoothies.
— How many of you are there?
— Two, why?

I admit to taking total pleasure in no longer respecting any culinary convention. Eating at any time, any way, in large quantities, following nothing but my cravings. Bliss.

We set off again on the bike path, which I know from previous visits. Unfortunately, I had never come in August, and I am stunned by the crowd of bikes loaded with parasols and air mattresses clogging it. It also turns out to be longer than I remembered. Fortunately, it is also rolling, which gives me some pleasure. On the steepest riser, a young mountain biker in the Arcachon club kit and his father try to overtake us. I may have 120km on the clock and 10kg of bags, but I refuse to let them go. I deal with the son while Thierry deals with the father. About ten meters from the top, I hear the kid groan and crack behind me. I feel a perfectly childish surge of pride.

Arriving in Biscarosse-plage, I realize that what we experienced at Saint-Cirq-Lapopie was only one of the first circles of hell. The streets are packed. The beach is packed. The sea is packed out to about ten meters. The horror.

We did it!

— You're not actually going to swim, are you? Thierry asks, already knowing the answer.
— Just watch me. I rode 650 kilometers for this.

I put on my swimsuit and elbow my way into the waves. And come out almost immediately. More a crowd bath than a sea bath.

Between two bathers, you can even catch a glimpse of water…

On my phone, Brigitte advises us to ride back by the road rather than the bike path, because it's shorter. I remember it as a very dangerous road, but we follow her advice. A convertible overtakes us at full speed, brushing past us. I tell myself this is not going to be fun: ten kilometers on a main road like this.

But a few kilometers later, the cars grind to a halt. The back-from-the-beach traffic jam. Riding past the stationary cars becomes delightful. We arrive in the center of Biscarosse with big grins. A cyclist comes up behind me and seems to want to pass. I stop. It's Vincent, my cousin! I'm happy to see him. We latch onto his wheel, but he charges at full speed through the Biscarosse traffic, dragging us into an infernal gymkhana. At a junction where traffic flows freely again, Thierry recognizes the convertible that had overtaken us. We were faster than it!

Once out of the traffic, Vincent keeps charging ahead. I pull up alongside him:
— Did you decide to finish us off at the very end?
— Well, I don't know how fast you ride, he says, barely out of breath.
— Not 140 kilometers at that pace, in any case!

At last we arrive at their house, and unclip from the pedals one final time. I congratulate Thierry and look at my bike, resting beside me. A route is a musical score. Our bikes are our instruments. I feel for mine what a violinist must feel for a Stradivarius. It's a companion, an extension of myself. I touch it, stroke it, thank it for the ride.

I already want to climb back on for new adventures.

Between the bikes, a true friendship!

Last turns of the wheel, and the trip home

25km, flat.

Thierry takes the train home the next day, after wrapping his bike in cling film. The SNCF loudly announces on its website that some TGVs can carry an assembled bike. I can't find a single one. With a heavy heart, I resign myself to inflicting on my faithful companion a treatment that pains me. Unlike Thierry, I don't remove the handlebars but simply turn them 90°. An idea of Vincent's, so that I can easily wheel my "package" around on its one still-usable wheel.

Everything's in there!
Wrapped up and ready to go!

With my own bike already dismantled, Vincent lends me a mountain bike for one last spin around the area. I even try his fatbike for a few hundred meters. My legs want to turn, but my backside is still suffering far too much. Vincent already has some bikepacking experience; I try to motivate him to give it another go. I sense he's not far from cracking.

Vincent doesn't seem to be sweating? He had a motor on his fatbike!

On Monday, Vincent drops me off at the Ychoux station. A local train to reach Bordeaux. A TGV to Paris in the company of another cyclist who boarded at Ychoux with me. Our bikes pile up on a mountain of luggage. On arrival in Paris, we learn that the luggage belongs to a mother of two young children who made the whole journey sitting on the floor, her children in her arms. Without meaning to, I glimpse a paper she shows the conductor, which reads "Asylum application procedure". I don't dare imagine this woman's life. Even if I have my share of worries and problems, I feel so lucky to be where I am. In Paris, she has no idea how to reach her luggage, buried under our bikes, with her two children in her arms. With the help of the other cyclist, I bring down all her belongings and set up her stroller. No time to dawdle: I have exactly one hour to get from Gare Montparnasse to Gare du Nord. With a dismantled bike. Crossing Gare Montparnasse is already an adventure in itself. But all along my route, I receive countless kindnesses and offers of help from tough-looking young people. In the metro car, a young man of North African descent in a tracksuit spontaneously stands up to give his seat to a large woman in a business suit. She declines; he insists, adding that he's getting off at the next stop.

On the TGV…
The Paris metro…

I suspect the real Parisians are all on holiday.

Getting off the metro, I hear screaming. On the opposite platform, some forty Black men surround a white man, shouting and gesticulating. I see little black plastic pellets falling from the white man's shirt. His shirt has been torn off, or he took it off, revealing tattoos that look far-right-inspired to me (a Celtic cross and Gothic lettering). He is sobbing, not from pain — he is clearly not hurt — but from humiliation. He cries like a child, without restraint. Two jaded security guards order him to move along. I make up a story of dealers and gang wars to explain the images I glimpsed, then rush to the Thalys platform.

A cold sweat comes over me when I see there is a security gate. If I have to unwrap my bike, this is not going to be fun. Fortunately, a guard armed with a submachine gun barely glances at me. I walk through with my bike. Having lived for a week with the bare minimum, I am appalled by the amount of luggage the other passengers carry. I didn't know suitcases that big existed. Suitcases so heavy their owners can't even lift them into the train.

On the Thalys…

Then it's the return to Brussels and on to Ottignies. On the last train, I reassemble my bike, ready to pedal straight to my family. But I was expected; it was decided that my bike's work was done. Despite the many connections and the complexity of crossing Paris, barely six hours have passed since I left Brigitte and Vincent. I feel rested. The complete opposite of a car journey.

The welcome!

Some impressions

I had dreamed of bikepacking, and the reality lived up to the dream. I love this discipline. A large part of that success is surely owed to Thierry and his work on the route. We also turned out to be similar and complementary. I loved riding with him; I dream of doing it again. I would happily have pedaled a day or two longer. I don't know if he feels the same, but I thank him for this experience.

While I would happily have pedaled a day or two longer, I don't feel ready yet for 3,000-kilometer expeditions…

Part of the joy was also suffering no mechanical problems whatsoever. My bike, tuned by Pat at Moving Store before departure, was perfect. At most I can complain of some mushiness in the brakes over the last few days. Perhaps the brake fluid should have been bled before departure. My Salsa Cutthroat struggled in loose rock but was, in return, perfectly at ease everywhere else. Admittedly, the rider's mindset counts for a lot too. A few times I found myself mechanically following Thierry, numb with fatigue, descending at full speed without realizing it. As soon as I noticed what I was doing, I braked and had to finish the descent on foot. My bike's only real flaw, besides its fragile paint, is the external cable routing. That's a shame in 2019, and it caused small problems by snagging in the brambles along the trails. One cable clip even broke.

The machine, in the hard moments…

Physically, my condition turned out to be perfect. After the first day's col, I rarely went above 140 bpm; I was never deep in the red. At most I should note a numbness in my big toes, which persists to this day, and a slight pain in my palms. I should change gloves.

And saddle. Because the worst part was, without question, my backside. Charles, from Training Plus, dialed in my bike fit to perfection. No lower-back pain, no knee pain (my weak points). But I need to talk to him about saddle choice, that arcane branch of cycling voodoo.

The hardest part of bikepacking is certainly the return. The moment I got home, an email from Thierry was waiting in my inbox. One sentence: "The hardest part is finding new projects."

Yes, I want to set off again. In its shed, my steed is champing at the bit. But I am happy to be back with my family after all this time, to hug my wife, who pushed me to pursue this dream. And, I must admit, to be back at the keys of my keyboard.

I do keep one lasting effect of those nights in the tent, out in the wild: I can no longer sleep with the windows closed… I am already dreaming of riding off again and pitching my tent somewhere in nature.

I am @ploum, a speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE license.

August 25, 2019

A good week ago I complained that the NV-A has invested exactly zero in the European level.

And then suddenly today Peter De Roover shows up in the media to whine that Reynders shouldn't become EU commissioner, that it should be a Fleming?

Come on, NV-A people. You are making yourselves ridiculous this way.

You didn't take the EU seriously. That much is clear. So why should that level take you seriously?

Well, it doesn't. You are not taken seriously. And that is your own idiotic fault.

Do something about it. And be concrete.


August 23, 2019


“Sorry but if sincerely defending your beliefs is clearly and easily distinguishable from trolling, your beliefs are basic.”


Education is the art of conveying a sense of truth by telling a series of decreasing lies.

Stories are templates for ideas our brains most easily absorb. Shape your idea into a classic story, and it will go right in, even if it's made up.

Performance is a natural part of communication. Embrace it and you can become a superhero to others, as long as you don't actually believe it.

Truly understanding people means understanding all their little vanities, self-deceptions and flaws. Start with your own.

Only you can truly motivate yourself. Find your goal and never stop working towards it. It may take 20 years.

Dare to dream and push on, even through personal hell. Sometimes it takes a friend or a crowd. Sometimes the only way out is through.

Don't just think outside the box. Learn to exit the world around it, and observe it from all angles simultaneously. Choose the most optimal move for all possible games you find yourself in.

Always do it with style, even if you have to copy someone else first. Practice.

Social graphs pass along proof-of-work. You can win by producing it and racing the crowd. The people who validate it have competing and often low or wrong standards.

Social harmony depends on an accurate simulation of the other party. People who see a gift of information will continue to grow. People who see threats in every message will never feel safe.

Rituals and tradition are the deep protocols of society, supporting its glacial macro-evolution. We retain and perform them in obfuscated form, often mistaking our lack of understanding for obsolescence.

Foundational myths tell us to learn from the past and avoid it, that we know who's good or evil, what's virtuous or sinful. Yet every generation creates demons from its own ranks, rarely the same as the last.

Don't teach them to look for swastikas, only the jackboot stomping on a face, and mind your own step.

Ghosts are real, they are called egregores and live inside groups of people. We can reverse engineer them to figure out how they work.

Civilization is the art of compressing lessons and erasing mistakes, to fit increasing knowledge into a fixed biological substrate. Decay is the opposite, when mistakes are compressed and lessons are erased.


I published the following diary on “Simple Mimikatz & RDPWrapper Dropper“:

Let’s review a malware sample that I spotted a few days ago. I found it interesting because it’s not using deep techniques to infect its victims. The initial sample is a malicious VBScript. For a few weeks, I have been hunting for more PowerShell scripts based on encoded directives. The following regular expression matched on the file… [Read more]

[The post [SANS ISC] Simple Mimikatz & RDPWrapper Dropper has been first published on /dev/random]

August 19, 2019

An Easy Tutorial

Graphics programming can be intimidating. It involves a fair amount of math, some low-level code, and it's often hard to debug. Nevertheless I'd like to show you how to do a simple "Hello World" on the GPU. You will see that there is in fact nothing to be afraid of.

Most environments offer you a printf-equivalent and a string type, but that's not how we do things in GPU land. We like the raw stuff, and we work with pixels themselves. So we're going to draw our text to the console directly. I'll show you the general high level flow, and then wrap up some of the details.

a window saying hello world

First, we're going to define our alphabet.

let alphabet = allocate_values(
  &['H', 'E', 'L', 'O', 'W', 'R', 'D', '🌎'],
);

Next we define our message by encoding it from this alphabet.

let message = allocate_indices(
  //  H  E  L  L  O  W  O  R  L  D  🌎
  &[0, 1, 2, 2, 3, 4, 3, 5, 2, 6, 7],
);

We'll also need to assemble this alphabet soup into positioned text. Don't worry, I precalculated the horizontal X offsets:

let xs = allocate_values(
  &[0.0, 49.0, 130.0, 195.0, 216.0, 238.0, 328.0, 433.0, 496.0, 537.0, 561.0, 667.0],
);
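
Those offsets aren't magic, by the way: for a proportional font they are just the running sum of each glyph's advance width. A small sketch of how one might "precalculate" them — the function name and the advance values are mine, not part of the post's API, and the advances are chosen purely so the first offsets match the ones above:

```rust
// Compute per-letter x offsets as the cumulative sum of glyph advance widths.
// Each letter starts where the previous letter's advance ended.
fn x_offsets(advances: &[f32]) -> Vec<f32> {
    let mut xs = Vec::with_capacity(advances.len());
    let mut pen = 0.0; // the "pen position", in pixels
    for &advance in advances {
        xs.push(pen);
        pen += advance;
    }
    xs
}

fn main() {
    // Illustrative advances for 'H', 'E', 'L': 49 and 81 px reproduce
    // the 0.0, 49.0, 130.0 offsets used in the message above.
    println!("{:?}", x_offsets(&[49.0, 81.0, 65.0]));
}
```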

The font is loaded as glyphs, a map of glyph images:

let glyphs = console.load_glyphs("helvetica.ttf");

We now have everything we need to print it pixel-by-pixel to the top of the console, which we call 'rasterizing':

fn hello_world(
  line: Line,
  message: Vec<Index>,
  alphabet: Vec<Letter>,
  xs: Vec<Float>,
  args: Vec<Argument>,
) {
  // Get glyph library
  let glyphs = args[0];

  // Loop over all the indices in the message
  for i in 0..message.len() {

    // Retrieve the x position for this index.
    let x = xs[i];

    // Retrieve the letter in the alphabet
    let letter = alphabet[message[i]];
    // Retrieve the glyph image for this letter
    let glyph  = glyphs[letter];

    // Rasterize it to the line
    rasterize(line, x, glyph.image, glyph.width, glyph.height);
  }
}

rasterize() is provided for you, but if you're curious, this is what it looks like on the inside:

fn rasterize(
  line: Line,
  offset: Float,
  image: Frame,
  width: Int,
  height: Int
) {

  // Iterate over rows and columns
  for y in 0..height {
    for x in 0..width {

      // Get target position
      let tx = x + offset;
      let ty = y;

      // Get image pixel color
      let source = get(image, x, y);

      // Get current target pixel color
      let destination = get(line, tx, ty);

      // Blend source color with destination color
      let blended = blend(source, destination);

      // Save new color to target
      set(line, tx, ty, blended);
    }
  }
}

It's just like blending pixels in Photoshop, with a simple nested rows-and-columns loop.
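
The blend() call above is left abstract, but the math behind "blending like Photoshop" is real: the default mode is Porter–Duff source-over compositing. Here is a minimal sketch in plain Rust — the function name and the (r, g, b, a) tuple representation are my own, with straight (non-premultiplied) alpha in the 0.0..=1.0 range:

```rust
// "Source-over" alpha blending: draw src on top of dst.
// Colors are (r, g, b, a) tuples with straight alpha.
fn blend(src: (f32, f32, f32, f32), dst: (f32, f32, f32, f32)) -> (f32, f32, f32, f32) {
    let (sr, sg, sb, sa) = src;
    let (dr, dg, db, da) = dst;
    // Output alpha: the source plus whatever the destination still contributes.
    let oa = sa + da * (1.0 - sa);
    if oa == 0.0 {
        // Both layers fully transparent: nothing to show.
        return (0.0, 0.0, 0.0, 0.0);
    }
    // Each channel is an alpha-weighted average, un-premultiplied by oa.
    let channel = |s: f32, d: f32| (s * sa + d * da * (1.0 - sa)) / oa;
    (channel(sr, dr), channel(sg, dg), channel(sb, db), oa)
}

fn main() {
    // An opaque red source completely covers an opaque blue destination.
    println!("{:?}", blend((1.0, 0.0, 0.0, 1.0), (0.0, 0.0, 1.0, 1.0)));
}
```

With a half-transparent source you get the familiar 50/50 mix; with an opaque source the destination disappears entirely, which is exactly what drawing opaque glyphs does.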

Okay so I did gloss over an important detail.

The thing is, you can't just call hello_world(...) to run your code. I know it looks like a regular function, just like rasterize(), but it turns out you can only call built-in functions directly. If you want to call one of your own functions, you need to do a little bit extra, because the calling convention is slightly different. I'll just go over the required steps so you can follow along.

First you need to actually access the console you want to print to.

So you create a console instance:

let instance = Console::Instance::new();

and get an adapter from it:

let adapter =
    &AdapterDescriptor {
      font_preference: FontPreference::Smooth,
    };

so you can get an actual console:

let console =
    &ConsoleDescriptor {
      extensions: Extensions {
        subpixel_antialiasing: true,
      },
    };

But this console doesn't actually do anything yet. You need to create an interactive window to put it in:

let events_loop = EventsLoop::new();

let window = WindowBuilder::new()
  .with_dimensions(LogicalSize {
    width: 1280.0, height: 720.0
  });

and then make a surface to draw to:

let surface = instance.create_surface();

Now if you want to print more than one line of text, you need to set up a line feed:

let descriptor =
  LineFeedDescriptor {
    usage: LineUsageFlags::OUTPUT_ATTACHMENT,
    format: TextFormat::UTF8,
    width: 120,
    height: 50,
  };

let line_feed = console.create_line_feed(&surface, &descriptor);

let next_line = line_feed.get_next_line();

And if you want emoji, which we do, you need a separate emoji buffer too:

let images =
    EmojiDescriptor {
      size: Extent2d {
        width: 256,
        height: 256,
      },
      array_size: 1024,
      dimension: ImageDimension::D2,
      format: ImageFormat::RGBA8,
      usage: ImageUsageFlags::OUTPUT_ATTACHMENT,
    };

let emoji_buffer = images.create_default_view();

Okay, we're all set!

Now we just need to encode the call, using a call encoder:

let encoder = console.create_call_encoder();

We begin by describing the special first argument (line), a combo of next_line and the emoji_buffer. We also have to provide some additional flags and parameters:

let call =
    FunctionCallDescriptor {
      console_attachments: &[
        ConsoleAttachmentDescriptor {
          attachment: &next_line,
          load_op: LoadOp::Clear,
          store_op: StoreOp::Store,
          clear_letter: ' ',
        },
      ],
      emoji_attachment: Some(
        ConsoleEmojiAttachmentDescriptor {
          attachment: &emoji_buffer,
          load_op: LoadOp::Clear,
          store_op: StoreOp::Store,
          clear_color: "rgba(0, 0, 0, 0)",
        },
      ),
    };

The message of type Vec<Index> is added using a built-in convention for indices:


The alphabet: Vec<Letter> and the xs: Vec<Float> can also be directly passed in, because they are accessed 1-to-1 using our indices, as numbered arguments:

  (&alphabet, 0), (&xs, 1)

However, the glyph images are a bit trickier, as they are a custom keyword argument.

To make this work, we need to create an argument group layout, which describes how we'll pass the arguments to sample our glyph images:

let argument_group_layout =
    &ArgumentGroupLayoutDescriptor {
      bindings: &[
        ArgumentGroupLayoutBinding {
            binding: 0,
            visibility: Visibility::PIXEL,
            ty: BindingType::SampledText,
        },
        ArgumentGroupLayoutBinding {
            binding: 1,
            visibility: Visibility::PIXEL,
            ty: BindingType::Sampler,
        },
      ],
    };

We then put it into a larger function call layout, in case we have multiple groups of keyword arguments:

let function_call_layout =
    FunctionCallLayoutDescriptor {
      argument_group_layouts: &[argument_group_layout],
    };

We also need to create bindings to match this layout, to actually bind our argument values:

let glyph_view = glyphs.create_default_view();

let sampler = console.create_sampler(
  &TextSamplerDescriptor {
    address_mode: AddressMode::ClampToEdge,
    text_filter: FilterMode::TypeHinted,
    hint_clamp: 100.0,
    max_anisotropy: 4,
    compare_function: CompareFunction::Always,
    border_color: BorderColor::TransparentBlack,
  });

let argument_group =
    &BindGroupDescriptor {
      layout: argument_group_layout,
      bindings: &[
        Binding {
          binding: 0,
          resource: BindingResource::ImageView(&glyph_view),
        },
        Binding {
          binding: 1,
          resource: BindingResource::Sampler(&sampler),
        },
      ],
    };

And add it to our call:

call.set_argument_group(0, argument_group);

Alright! We're pretty much ready to make the call now. Just one more thing. The function call descriptor.

We need to pass the raw code for hello_world as a string to console.create_code_module, and annotate it with a few extra bits of information:

let function_call =
    &FunctionCallDescriptor {
      layout: &function_call_layout,
      call_stage: CallStageDescriptor {
        module: console.create_code_module(&hello_world),
        entry_point: "hello_world",
      },
      rasterization_state: RasterizationStateDescriptor {
        emoji_alignment: Alignment::Middle,
        emoji_bias: 0,
        emoji_scale: 1.5,
      },
      text_topology: Topology::Letters,
      console_states: &[
        ConsoleStateDescriptor {
          format: TextFormat::UTF8,
          color: BlendDescriptor {
            src_factor: BlendFactor::SrcAlpha,
            dst_factor: BlendFactor::OneMinusSrcAlpha,
            operation: BlendOperation::Add,
          },
          alpha: BlendDescriptor {
            src_factor: BlendFactor::OneMinusDstAlpha,
            dst_factor: BlendFactor::One,
            operation: BlendOperation::Add,
          },
          write_mask: ColorWriteFlags::ALL,
        },
      ],
      emoji_state: Some(EmojiStateDescriptor {
        format: ImageFormat::RGBA8,
        emoji_enabled: true,
        emoji_variant: CompareFunction::LessEqual,
      }),
      index_format: IndexFormat::Uint8,
      alphabet_buffers: &[
        AlphabetBufferDescriptor {
          stride: 1,
          step_mode: InputStepMode::Letter,
          attributes: AlphabetAttributeDescriptor {
            attribute_index: 0,
            format: AlphabetFormat::Letter,
            offset: 0,
          },
        },
        AlphabetBufferDescriptor {
          stride: 1,
          step_mode: InputStepMode::Letter,
          attributes: AlphabetAttributeDescriptor {
            attribute_index: 1,
            format: AlphabetFormat::Number,
            offset: 0,
          },
        },
      ],
      sample_count: 1,
    };

Which we add to the call:


Well, you actually have to do this first, but it was easier to explain it last.

Now all that's left is to submit the encoded command to the console queue, and we're already done:


a black window


Damn, and I was going to show you how to make a matrix letter effect as an encore. You can pass a letter_shader to rasterizeWithLetterFX(...). It's easy, takes a couple hundred lines tops, all you have to do is call a function on a GPU.

(All code in this post is real, but certain names and places have been changed to protect the innocent. If you'd like to avoid tedious bureaucracy in your code, why not read about how the web people are trying to tame similar lions?)

Objects created
Low code no code

A version of this article was originally published on

Twelve years ago, I wrote a post called Drupal and Eliminating Middlemen. For years, it was one of the most-read pieces on my blog. Later, I followed that up with a blog post called The Assembled Web, which remains one of the most read posts to date.

The point of both blog posts was the same: I believed that the web would move toward a model where non-technical users could assemble their own sites with little to no coding experience of their own.

This idea isn't new; no-code and low-code tools on the web have been on a 25-year long rise, starting with the first web content management systems in the early 1990s. Since then no-code and low-code solutions have had an increasing impact on the web. Examples include:

While this has been a long-run trend, I believe we're only at the beginning.

Trends driving the low-code and no-code movements

According to the Forrester Wave: Low-Code Development Platforms for AD&D Professionals, Q1 2019: "In our survey of global developers, 23% reported using low-code platforms in 2018, and another 22% planned to do so within a year."

Major market forces driving this trend include a talent shortage among developers, with an estimated one million computer programming jobs expected to remain unfilled by 2020 in the United States alone.

What is more, the developers who are employed are often overloaded with work and struggle with how to prioritize it all. Some of this burden could be removed by low-code and no-code tools.

In addition, the fact that technology has permeated every aspect of our lives — from our smartphones to our smart homes — has driven a desire for more people to become creators. As Product Hunt founder Ryan Hoover said in a blog post: "As creating things on the internet becomes more accessible, more people will become makers."

But this does not only apply to individuals. Consider this: the typical large organization has to build and maintain hundreds of websites. They need to build, launch and customize these sites in days or weeks, not months. Today and in the future, marketers can embrace no-code and low-code tools to rapidly develop websites.

Abstraction drives innovation

As discussed in my middleman blog post, developers won't go away. Just as the role of the original webmaster (FTP'ing hand-written HTML files, anyone?) has evolved with the advent of web content management systems, the role of web developers is changing with the rise of low-code and no-code tools.

Successful no-code approaches abstract away complexity for web development. This enables less technical people to do things that previously could only be done by developers. And when those abstractions happen, developers often move on to the next area of innovation.

When everyone is a builder, more good things will happen on the web. I was excited about this trend more than 12 years ago, and remain excited today. I'm eager to see the progress no-code and low-code solutions will bring to the web in the next decade.

August 14, 2019

Though I do wonder what the permanent Flemish representation to the European Commission and Union keeps itself busy with, and why we hear so little from them. That list of priority dossiers is remarkably thin. There were trade agreements in 2019, weren't there? Was our permanent representation not involved in those? I can find little about it.

And are there no trade agreements in the making for 2020? I thought there were. Is Flanders not working on those? Does our permanent representation not consider them worth mentioning on their website? And if it does, where?

As I understand it, all our diplomats in Brussels speak French and, far more importantly (for all I care they could speak Afrikaans), they represent the Belgian (and really also the French) position. Rarely that of Flanders, and even more rarely does this get the necessary political and media attention.

The N-VA likes to act as if it considers the European Union level important. But in practice the party offers little and does little with it. As a result, Flanders is barely represented, if at all.

I would have liked a bit more clarity in Bart De Wever's note instead of just a Latin proverb. Proverbs and tweets are not governing, Bart. It is time your party showed it can actually govern, instead of just making grand statements and stoking populism.

Yes, yes. We are going to become independent within Europe and the European Union. That much is clear. But how will the N-VA strengthen the permanent representation of Flanders to the EU?

An independent Flanders should not expect Belgian diplomats to pull Flemish chestnuts out of the fire.

Be concrete.

August 13, 2019

The post Time for Change: Going Independent appeared first on

After 12 intense years at Nucleus, it's time for something new: as of September 2019 I'll stop my activities at Nucleus and continue to work as an independent, focussing on Oh Dear!, DNS Spy & Syscast.

The road to change

Why change? Why give up a steady income, health- & hospital insurance, a company car, paid holidays, fun colleagues, exciting tech challenges, ... ?

I think it's best explained by showing what an average day looked like in 2016-2017, at the peak of building DNS Spy.



Back when I had the idea to create a DNS monitoring service, the only way I could make it work was to code on it at crazy hours. Before the kids woke up and after they went to bed. Before and after the more-than-full-time-job.

This worked for a surprisingly long time, but eventually I had to drop the morning hours and get some more sleep in.

Because of my responsibilities at Nucleus (for context: a 24/7 managed hosting provider), I was often woken during the night for troubleshooting/interventions. This, on top of the early hours, made it impossible to keep up.

After a while, the new rhythm became similar, but without the morning routine.



Notice anything missing in that schedule? Household chores? Some quality family time? Some personal me-time to relax? Yeah, that wasn't really there.

There comes a point where you have to make a choice: either continue on this path and end up wealthy (probably) but without a family, or choose to prioritize the family first.

As of September 2019, I'll focus on a whole new time schedule instead.



A radical (at least for me) change of plans, where less time is spent working, more time is spent with the kids, my wife, the cats, the garden, ...

I'm even introducing a bit of whatever-the-fuck-i-want-time in there!

What I'll be working on

In a way I'm lucky.

I'm lucky that I spent the previous 10+ years working like a madman, building profitable side businesses and making a name for myself in both the open source/linux and PHP development world. It allows me to enter September 2019 without a job, but with a reasonable assurance that I'll make enough money to support my family.


For starters, I'll have more time & energy to further build on DNS Spy & Oh Dear!. These 2 side businesses will from now on be called "businesses", as they'll be my main source of income. It isn't enough to live on, mind you, so there's work to be done. But at least there's something there to build on.

Next to that, my current plan is to revive and start building on Syscast. The idea formed in 2016 (the "workaholic" phase, pre-DNS Spy) and was actually pretty fleshed out already. Making online courses, building upon the 10+ years of sysadmin & developer knowledge.

Syscast didn't happen in 2016 and pivoted to a podcast that featured impressive names like Daniel Stenberg (curl & libcurl), Seth Vargo (Hashicorp Vault), Matt Holt (Caddy) and many others instead.

I've always enjoyed giving presentations, explaining complicated technologies in easy terms and guiding people to learn new things. Syscast fits that bill and would make for a logical project to work on.

Looking back at an amazing time

A change like this isn't taken lightly. Believe me when I say I've been debating this for some time.

I'm grateful to both founders of Nucleus, Wouter & David, that they've given me a chance in 2007. I dropped out of college, no degree, just a positive attitude and some rookie PHP knowledge. I stumbled upon the job by accident, just googling for a PHP job. Back then, there weren't that many. It was either Nucleus or a career writing PHP for a bank. I think this is where I got lucky.

I've learned to write PHP, manage Linux & Windows servers, do customer support, how to do marketing, keep basic accounting and the value of underpromise and overdeliver. I'll be forever grateful to both of them for the opportunity and the lessons learned.

It was also an opportunity to work with my best friend, Jan, for the last 9 years. Next to existing friends, I'm proud to call many of my colleagues friends too and I hope we can stay in touch over the years. I find relationships form especially tight in intense jobs, when you heavily rely on each other to get the job done.

Open to new challenges

In true LinkedIn parlance: I'm open to new challenges. That might be a couple of days of consultancy on Linux, software architecture, PHP troubleshooting, scalability advice, a Varnish training, ...

I'm not looking for a full-time role anywhere (see the time tables above), but if there's an interesting challenge to work on, I'll definitely consider it. After all, there are mouths to feed at home. ;-)

If you want to chat, have a coffee, exchange ideas, brainstorm or revolutionize the next set of electric cars, feel free to reach out (my contact page has all the details).

But first, a break

However, before I can start doing any of that, I need a time-out.

In September, my kids will go to school and things will be a bit more quiet around the house. After living in a 24/7 work-phase for the last 10 years, I need to cool down first. Maybe I'll work on the businesses, maybe I won't. I have no idea how hard that hammer will hit come September when I suddenly have my hands free.

Maybe I'll even do something entirely different. Either way, I'll have more time to think about it.


August 12, 2019

A special bird flying in space has the spotlight while lots of identical birds sit on the ground (lack of diversity)

At Drupalcon Seattle, I spoke about some of the challenges Open Source communities like Drupal often have with increasing contributor diversity. We want our contributor base to look like everyone in the world who uses Drupal's technology on the internet, and unfortunately, that is not quite the reality today.

One way to step up is to help more people from underrepresented groups speak at Drupal conferences and workshops. Seeing and hearing from a more diverse group of people can inspire new contributors from all races, ethnicities, gender identities, geographies, religious groups, and more.

To help with this effort, the Drupal Diversity and Inclusion group is hosting a speaker diversity training workshop on September 21 and 28 with Jill Binder, whose expertise has also driven major speaker diversity improvements within the WordPress community.

I'd encourage you to either sign up for this session yourself or send the information to someone in a marginalized group who has knowledge to share, but may be hesitant to speak up. Helping someone see that their expertise is valuable is the kind of support we need to drive meaningful change.

We now invite proposals for main track presentations, developer rooms, stands and lightning talks. FOSDEM offers open source and free software developers a place to meet, share ideas and collaborate. Renowned for being highly developer-oriented, the event brings together some 8000+ geeks from all over the world. The twentieth edition will take place on Saturday 1st and Sunday 2nd February 2020 at the usual location: ULB Campus Solbosch in Brussels. We will record and stream all main tracks, devrooms and lightning talks live. The recordings will be published under the same licence as all FOSDEM content (CC-BY).

August 10, 2019

FOSDEM 2020 will take place at ULB Campus Solbosch on Saturday 1 and Sunday 2 February 2020. Further details and calls for participation will be announced in the coming days and weeks.

August 09, 2019

MVC was a mistake

Infinite omnipotent hyperbeing lovingly screaming "YOU'RE VALID", its song echoed across every structural detail of the universe—you, an insignificant mote adrift in some galactic tide shouting back "YES, OKAY, WHAT ELSE?"
Fragnemt by ctrlcreep
(a linguistic anti-depressant in book form)

A reader sent in the following letter:

Dear Mr. Wittens,

I have read with great interest your recent publications in Computers Illustrated. However, while I managed to follow your general exposition, I had trouble with a few of the details.

Specifically, Googling the phrase "topping from the bottom" revealed results that I can only describe as being of an unchristian nature, and I am unable to further study the specific engineering practice you indicated. I am at a loss in how to apply this principle in my own work, where I am currently building web forms for a REST API using React. Should I have a conversation about this with my wife?

With respect,
Gilbert C.

Dear Gilbert,

In any marriage, successful copulation requires commitment and a mutual respect of boundaries. Unlike spouses however, components are generally not monogamous, and indeed, best results are obtained when they are allowed to mate freely and constantly, even with themselves. They are quite flexible.

The specifics of this style are less important than the general practice, which is about the entity ostensibly in control not actually being in control. This is something many form libraries get wrong, React or not, missing the true holy trinity. I will attempt to explain. This may take a while. Grab some hot chocolate.

Simon Stålenhag Painting

That's Numberwang

In your web forms, you're building a collection of fields, and linking them to a data model. Either something to be displayed, or something to be posted, possibly both. Or even if it's a single-page app and it's all for local consumption only, doesn't really matter.

Well, it does, but that's boundary number 1. A form field shouldn't know or care about the logistics of where its data lives or what it's for. Its only task is the specific job of instantiating the value and manipulating it.

There's another boundary that we're also all used to: widget type. A text field is not a checkbox is not a multi-select. That's boundary number 3. It's the front of the VCR, the stuff that's there so that we can paw at it.

Would you like to buy a vowel for door number 2?

P_L_C_: What APIs are about.
"Previously, on Acko."

You see, when God divided the universe into controlled (value = x) and uncontrolled (initialValue = defaultValue = x) components, He was Mistaken. A true uncontrolled component would have zero inputs.

What you actually have is a component spooning with an invisible policy. Well, if you can't pull them apart, I guess they're not just spooning. But these two are:

<ValidatingInput
  parse={parseNumber} format={formatNumber}
  value={number} setValue={setNumber}>{
  (value, isError, onChange, onBlur) =>
    <TextField
      value={value} isError={isError}
      onChange={onChange} onBlur={onBlur} />
}</ValidatingInput>

This ValidatingInput ensures that a value can be edited in text form, using your parser and serializer. It provides Commitment, not Control. The C in MVC was misread in the prophecies I'm afraid. Well actually, someone spilled soda on the scrolls, and the result was a series of schisms over what they actually said. Including one ill-fated attempt to replace the Controller with a crazy person.

Point is, as long as the contents of the textfield are parseable, the value is plumbed through to the outside. When it's not, only the text inside the field changes, the inner value. The outside value doesn't, remaining safe from NaNful temptations. But the TextField remains in control, only moderated by the two policies you passed in. The user is free to type in unparseable nonsense while attempting to produce a valid value, rather than having their input messed around with as they type. How does it know it's invalid? If the parser throws an Error. A rare case of exceptions being a good fit, even if it's the one trick VM developers hate.

An "uncontrolled component" as React calls it has a policy to never write back any changes, requiring you to fish them out manually. Its setValue={} prop is bolted shut.

Note our perspective as third party: while we cannot see or touch ValidatingInput's inner workings, it still lets us have complete control over how it actually gets used, both inside and out. You wire it up to an actual widget yourself, and direct its behavior completely. It's definitely not a controller.

This is also why, if you pass in JSON.parse and JSON.stringify as the parser/formatter, with the pretty printer turned on, you get a basic but functional JSON editor for free.

You can also have a DirectInput, to edit text directly in its raw form, without any validation or parsing. Because you wrote that one first, to handle the truly trivial case. But then later you realized it was just ValidatingInput with identity functions glued into the parse/format props.
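The commitment policy itself doesn't need React at all. Here is a minimal plain-JS sketch of the behavior described above (the function and its shape are invented for this sketch, not the post's actual library): the inner text always tracks keystrokes, the outer value is only committed when the parser succeeds, and a failed parse flags an error instead of clobbering anything.

```javascript
// Sketch of the ValidatingInput policy: commit-on-valid, flag-on-error.
function makeValidatingState(parse, format, setValue) {
  let text = "";
  let isError = false;
  return {
    onChange(newText) {            // every keystroke: keep the raw text
      text = newText;
      try {
        setValue(parse(newText));  // commit only if parseable
        isError = false;
      } catch (e) {
        isError = true;            // unparseable: keep typing, flag it
      }
    },
    onBlur(value) {                // focus lost: reformat to canonical form
      if (!isError) text = format(value);
    },
    get text() { return text; },
    get isError() { return isError; },
  };
}
```

Pass `JSON.parse` and a pretty-printing `JSON.stringify` as the pair and you get the basic JSON editor mentioned above; pass two identity functions and you get DirectInput.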

So let's talk about MVC and see how much is left once you realize the above looks suspiciously like a better, simpler recipe for orthogonalized UI.

Wikipedia - Model-View-Controller

Look at this. It's from Wikipedia so it must be true. No really, look at it closely. There is a View, which only knows how to tell you something. There is a Controller, which only knows how to manipulate something.

This is a crack team of a quadriplegic and a blind man, each in separate rooms, and you have to delegate your UI to them? How, in the name of all that is holy, are you supposed to do that? Prisoner dilemmas and trolleys? No really, if the View is a TextField, with a keyboard cursor, and mouse selection, and scrolling, and Japanese, how is it possible to edit the model backing it unless you are privy to all that same information? What if it's a map or a spreadsheet? ばか!

This implies either one of two things. First, that the Controller has to be specialized for the target widget. It's a TextController, with its own house key to the TextView, not pictured. Or second, that the Controller doesn't do anything useful at all, it's the View that's doing all the heavy lifting, the Controller is just a fancy setter. The TextModel certainly isn't supposed to contain anything View-specific, or it wouldn't be MVC.

Wikipedia does helpfully tell us that "Particular MVC architectures can vary significantly from the traditional description here." but I would argue the description is complete and utter nonsense. Image-googling MVC is however a fun exercise for all the zany diagrams you get back. It's a puzzle to figure out if any two actually describe the same thing.

Model-View-Controller 1 Model-View-Controller 6
Model-View-Controller 2 Model-View-Controller 5
Model-View-Controller 4 Model-View-Controller 3

Nobody can agree on who talks to who, in what order, and who's in charge. For some, the Controller is the one who oversees the entire affair and talks to the user, maybe even creates the Model and View. For others, it's the View that talks to the user, possibly creating a Controller and fetching a Model. For some, the Model is just a data structure, mutable or immutable. For others it's an interactive service to both Controller and View. There may be an implicit 4th observer or dispatcher for each trio. What everyone does agree on, at least, is that the program does things, and those things go in boxes that have either an M, a V or a C stamped on them.

One dimension almost never illustrated, but the most important one, is the scope of the MVC in question.


Sometimes MVC is applied at the individual widget level, indeed creating a TextModel and a TextView and a TextController, which sits inside a FormModel/View/Controller, nested inside an ActivityModel/View/Controller. This widget-MVC approach leads to a massively complected tree, where everything comes in triplicate.

The models each contain partial fragments of an original source repeated. Changes are sometimes synced back manually, easily leading to bugs. Sometimes they are bound for you, but must still travel from a deeply nested child all the way back through its parents. If a View can't simply modify its Model without explicitly tripping a bunch of other unrelated Model/Controller/View combos, to and fro, this is probably an architectural mistake. It's a built-in game of telephone played by robots.

I didn't try to draw all this, because then I'd also have to define which of the three has custody of the children, which is a very good question.


But sometimes people mean the exact opposite, like client/server MVC. There the Model is a whole database, the Controller is your web application server and the View is a templated client-side app. Maybe your REST API is made up of orthogonal Controllers, and your Models are just the rows in the tables pushed through an ORM, with bells on and the naughty bits censored. While this explanation ostensibly fits, it's not accurate. A basic requirement for MVC is that the View always represents the current state of the Model, even when there are multiple parallel Views of the same thing. REST APIs simply don't do that, their views are not live. Sorry Rails.

It is in fact this last job that is difficult to do right, and which is the problem we should not lose sight of. As ValidatingInput shows, there is definitely a requirement to have some duplication of models (text value vs normalized value). Widgets definitely need to retain some local state and handlers to manage themselves (a controller I guess). But it was never about the boxes, it was always about the arrows. How can we fix the arrows?

(An audience member built like a greek statue stands up and shouts: "Kleisli!" A klaxon sounds and Stephen Fry explains why that's not the right answer.)

The Shape of Things To Come

In my experience, the key to making MVC work well is to focus on one thing only: decoupling the shape of the UI from the shape of the data. I don't just mean templating. Because if there's one thing that's clear in modern apps, it's that widgets want to be free. They can live in overlays, side drawers, pop-overs and accordions. There is a clear trend towards canvas-like apps, most commonly seen with maps, where the main artifact fills the entire screen. The controls are entirely contextual and overlaid, blocking only minimal space. Widgets also move and transition between containers, like the search field in the corner, which jumps between the map and the collapsible sidebar. The design is and should be user-first. Whether data is local or remote, or belongs to this or that object, is rarely a meaningful distinction for how you display and interact with it.

Google Maps

So you don't want to structure your data like your UI, that would be terrible. It would mean every time you want to move a widget from one panel to another, you have to change your entire data model end-to-end. Btw, if an engineer is unusually resistant to a proposed UI change, this may be why. Oops. But what shape should it have then?

You should structure your data so that GETs/PUTs that follow alike rules can be realized with the same machinery through the same interface. Your policies aren't going to change much, at least not as much as your minimum viable UI. This is the true meaning of full stack, of impedance matching between front-end and back-end. You want to minimize the gap, not build giant Stålenhag transformers on both sides.

Just because structuring your data by UI is bad, doesn't mean every object should be flat. Consider a user object. What structure would reflect the various ways in which information is accessed and modified? Maybe this:

user {
  profile {
    name, location, birthday,
    // ...
  },
  // ...
}

There is a nice profile wrapper around the parts a user can edit freely themselves. Those could go into a separately grafted profile table, or not. You have to deal with file uploads and email changes separately anyway, as special cases. Editing a user as a whole is out of the question for non-admins. But when only the profile can be edited, there's no risk of improper change of an email address or URL, and that's simpler to enforce when there's separation. Also remember the password hash example, because you never give it out, and it's silly to remove it every time you fetch a user. Proper separation of your domain models will save you a lot of busywork later, of not having to check the same thing in a dozen different places. There's no one Right Way to do it, rather, this is about cause and effect of your own choices down the line, whatever they are.

There's also an entirely different split to respect. Usually in additional to your domain models, you also have some local state that is never persisted, like UI focus, active tab, playback state, pending edits, ... Some of this information is coupled to domain models, like your users, which makes it tempting to put it all on the same objects. But you should never cross the streams, because they route to different places. Again, the same motivation of not having to strip out parts of your data before you can use it for its intended purpose. You can always merge later.
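As a tiny illustration of keeping the two streams apart (plain JS, all names made up for this sketch): the domain bucket is shaped by the data and persisted, the UI bucket never is, and they are merged only at the edge where a view needs both.

```javascript
// Domain state: persisted, shaped by the domain, not by the UI.
const domain = {
  user: { profile: { name: "Ada", location: "London" } },
};

// Ephemeral UI state: focus, pending edits; never persisted.
const uiState = {
  focusedField: "name",
  pendingEdits: {},
};

// Merge only where a particular view needs both streams at once.
function profileFieldProps(field) {
  return {
    value: domain.user.profile[field],
    focused: uiState.focusedField === field,
  };
}
```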

You can also have an in-between bucket of locally persisted state, e.g. to rescue data and state from an accidentally closed tab. Figuring out how it interacts with the other two requires some thought, but with the boundary in the right place, it's actually feasible to solve it right. Also important to point out: you don't need to have only one global bucket of each type. You can have many.

So if you want to do MVC properly, you should separate your models accordingly and decouple them from the shape of the UI. It's all about letting the data pave the cow paths of the specific domain as naturally as possible. This allows you to manipulate it as uniformly as possible too. You want your Controller to be just a fancy getter/setter, only mediating between Model and View, along the boundaries that exist in the data and not the UI. Don't try to separate reading and writing, that's completely counterproductive. Separate the read-only things from the read-write things.

A mediating Controller is sometimes called a ViewModel in the Model-View-ViewModel model, or more sanely Model-View-Presenter, but the name doesn't matter. The difficult part in the end is still the state transitions on the inside, like when ValidatingInput forcefully reformats the user's text vs when it doesn't. That's what the onBlur was for, if you missed it. In fact, if the View should always reflect the Model, should an outside change overwrite an internal error state? What if it's a multi-user environment? What if you didn't unfocus the textfield before leaving your desk? Falsehoods programmers believe about interaction. Maybe simultaneous props and derived state changes are just unavoidably necessary for properly resolving UI etiquette.

I have another subtle but important tweak. React prefab widgets like TextField usually pass around DOM events. Their onChange and onBlur calls carry a whole Event, but 99% of the time you're just reading out the ``. I imagine the reasoning is that you will want to use the full power of DOM to reach into whatever library you're using and perform sophisticated tricks on top of their widgets by hand. No offense, but they are nuts. Get rid of it, make a TextField that grabs the value from its own events and calls onChange(value) directly. Never write another trivial onChange handler just to unbox an event again.


<TextField ...
  onChange={e => setValue(} />

could be:

<TextField ...
  onChange={setValue} />

If you do need to perform the rare dark magic of simultaneous hand-to-hand combat against every platform and browser's unique native textfield implementation (i.e. progressive enhancement of <input />), please do it yourself and box in the component (or hook) to keep it away from others. Don't try to layer it on top of an existing enhanced widget in a library, or allow others to do the same, it will be a nightmare. Usually it's better to just copy/paste what you can salvage and make a new one, which can safely be all fucky on the inside. Repetition is better than the wrong abstraction, and so is sanity.

Really, have you ever tried to exactly replicate the keyboard/mouse behavior of, say, an Excel table in React-DOM, down to the little details? It's an "interesting" exercise, one which shows just how poorly conceived some of HTML's interactivity truly is. React was victorious over the DOM, but its battle scars are still there. They should be treated and allowed to fade, not wrapped in Synthetic and passed around like precious jewels of old.


What you also shouldn't do is create a special data structure for scaffolding out forms, like this:

[
  {
    type: "textfield",
    label: "Length",
    format: {
      type: "number",
      precision: 2,
      units: "Meters",
    },
    // ...
  },
  // ...
]

(If your form arrays are further nested and bloated, you may have a case of the Drupals and you need to have that looked at, and possibly, penicillin).

You already have a structure like that, it's your React component tree, and that one is incremental, while this one is not. It's also inside out, with the policy as the little spoon. This may seem arbitrary, but it's not.

Now might be a good time to explain that we are in fact undergoing the Hegelian synthesis of Immediate and Retained mode. Don't worry, you already know the second half. This also has a better ending than Mass Effect, because the relays you're going to blow up are no longer necessary.

Immediate Mode is UI coding you have likely never encountered before. It's where you use functions to make widgets (or other things) happen immediately, just by calling them. No objects are retained per widget. This clearly can't work, because how can there be interactivity without some sort of feedback loop? It sounds like the only thing it can make is a dead UI, a read-only mockup. Like this dear-imgui code in Rust:

ui.with_style_var(StyleVar::WindowPadding((0.0, 0.0).into()), || {
  ui.window(im_str!("Offscreen Render"))
    .size((512.0 + 10.0, 512.0 + 40.0), ImGuiCond::Always)
    .build(|| {

      ui.columns(2, im_str!(""), false);
      for i in 0..4 {
        ui.image(self.offscreen_textures[i], (256.0, 256.0));
        // ...
      }
    });
});

imgui example

We draw a window, using methods on the ui object, with some ui.image()s inside, in two columns. There are some additional properties set via methods, like the initial size and no scrollbars. The inside of the window is defined using .build(...) with a closure that is evaluated immediately, containing similar draw commands. Unlike React, there is no magic, nothing up our sleeve, this is plain Rust code. Clearly nothing can change, this code runs blind.

But actually, this is a fully interactive, draggable window showing a grid of live image tiles. It'll even remember its position and open/collapsed state across program restarts out of the box. You see, when ui.window is called, its bounding box is determined so that it can be drawn. This same information is then immediately used to see if the mouse cursor is pointing at it and where. If the left button is also down, and you're pointing at the title bar, then you're in the middle of a drag gesture, so we should move the window by whatever the current motion is. This all happens in the code above. If you call it every frame, with correct input to ui, you get a live UI.

imgui demos

So where does the mouse state live? At the top, inside ui. Because you're only ever interacting with one widget at a time, you don't actually need a separate isActive state on every single widget, and no onMouseDown either. You only need a single, global activeWidget of type ID, and a single mouseButtons. At least without multi-touch. If a widget discovers a gesture is happening inside it, and no other widget is active, it makes itself the active one, until you're done. The global state tracks mouse focus, button state, key presses, the works, in one data structure. Widget IDs are generally derived from their position in the stack, in relation to their parent functions.
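To make that concrete, here's a toy sketch of the pattern (hypothetical names, nothing like dear-imgui's real internals): one global context holds the mouse state and the single active widget ID, and a button is just a function you call every frame.

```javascript
// All interaction state lives in one global context: mouse state plus a
// single activeWidget slot, instead of per-widget isActive flags.
const ui = {
  mouse: { x: 0, y: 0, down: false },
  activeWidget: null,
};

// An immediate-mode button: call it every frame; it returns true on the
// frame the click completes.
function button(id, x, y, w, h) {
  const { mouse } = ui;
  const hovered =
    mouse.x >= x && mouse.x < x + w &&
    mouse.y >= y && mouse.y < y + h;

  // If a gesture starts inside us and no other widget is active, claim it.
  if (hovered && mouse.down && ui.activeWidget === null) {
    ui.activeWidget = id;
  }

  // The click completes when the mouse is released while we're active.
  let clicked = false;
  if (ui.activeWidget === id && !mouse.down) {
    clicked = hovered;
    ui.activeWidget = null;
  }

  // (drawing the rect, hover highlight, etc. would happen here)
  return clicked;
}
```

Every widget shares those same two globals; the IDs are what keep one widget's gesture from leaking into another's.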

Every time the UI is re-rendered, any interaction since the last time is immediately interpreted and applied. With some imagination, you can make all the usual widgets work this way. You just have to think a bit differently about the problem, sharing state responsibly with every other actor in the entire UI. If a widget needs to edit a user-supplied value, there's also "data binding," in the form of passing a raw memory pointer, rather than the value directly. The widget can read/write the value any way it likes, without having to know who owns it and where it came from.

imgui function tree

Immediate mode UI is so convenient and efficient because it reuses the raw building blocks of code as its invisible form structure: compiled instructions, the stack, pointers, nested function calls, and closures. The effect is that your UI becomes like Deepak Chopra explaining quantum physics: it does not exist unless you want to look at it, which is the same as running it. I realize this may sound like moon language, but there is a whole lecture explaining it if you'd like to know more. Plus, 2005 Casey Muratori was a dreamboat.

The actual output of dear-imgui, not visible in this code at all, is raw geometry, produced from scratch every frame and passed out through the back. It's pure unretained data. This is pure vector graphics, directly sampled. It can be rendered using a single GPU shader with a single image for the font. All the shapes are triangle meshes with RGBA colors at their vertices, including the anti-aliasing, which is a 1px wide gradient edge.

Its polar opposite is Retained mode, what you likely have always used, where you instead materialize a complete view tree. Every widget, label and shape is placed and annotated with individual styling and layout. You don't recompute this tree every frame, you only graft and prune it, calling .add(...) and .set(...) and .remove(...). It would seem more efficient and frugal, but in fact you pay for it tenfold in development overhead.

By materializing a mutable tree, you have made it so that now the evolution of your tree state must be done in a fully differential fashion, not just computed in batch. Every change must be expressed in terms of how you get there from a previous state. Given n states, there are potentially O(n²) valid pairwise transitions. Coding them all by hand, for all your individual Ms, Vs and Cs, however they work, is both tedious and error-prone. This is called Object-Oriented Programming.

What you generally want to do instead is evolve your source of truth differentially, and to produce the derived artifact—the UI tree—declaratively. IM UI achieves this, because every function call acts as a declarative statement of the desired result, not a description of how to change the previous state to another. The cost is that your UI is much more tightly coupled with itself, and difficult to extend.

react tree

The React model bridges the two modes. On the one hand, its render model provides the same basic experience as immediate UI, except with async effects and state allocation integrated, allowing for true decoupling. But it does this by automating the tedium, not eliminating it, and still materializes an entire DOM tree in the end, whose changes the browser then has to track itself anyway.

I never said it was a good Hegelian synthesis.

If you've ever tried to interface React with something that isn't reactive, like Three.js, you know how silly it feels to hook up the automatic React lifecycle to whatever manual native add/set/remove methods exist. You're just making a crummier version of React inside itself, lacking the proper boundaries and separation.

deferred tree

But we can make it right. We don't actually need to produce the full manifest and blueprint down to every little nut and bolt, we just need to have a clear enough project plan of what we want. What if the target library was immediate instead of retained, or as close as can be and still be performant, and the only thing you kept materialized inside React was the orchestration of the instructions? That is, instead of materializing a DOM, you materialize an immediate mode program at run-time. This way, you don't need to hard-wrap what's at the back, you can plumb the same interface through to the front.

We don't need to expand React functions to things that aren't functions, we just need to let them stop expanding, into a function in which an immediate mode API is called directly. The semantics of when this function is called will need to be clearly defined with suitable policies, but they exist to empower you, not to limit you. I call this Deferred Mode Rendering (nothing like deferred shading). It may be a solution to the lasagna from hell in which Retained and Immediate mode are stacked on top of each other recursively, each layer more expired than the next.

What this alternate <Component /> tree expands into, in the React model, is placeholder <div />s with render props. The deferred mode layers could still bunch up at the front, but they wouldn't need to fence anything off, they could continue to expose it. Because the context => draw(context) closures you expand to can be assembled from smaller ones, injected into the tree as props by parents towards their children. Somewhat like an algebraically closed reducer.

To do this today requires you to get familiar with the React reconciler and its API, which is sadly underdocumented and a somewhat shaky house of cards. There is a mitigating factor though, just a small one, namely that the entirety of React-DOM and React-Native depend on it. For interactivity you can usually wing it, until you hit the point where you need to take ownership of the event dispatcher. But on the plus side, imagine what your universe would be like if you weren't limited by the irreversible mistakes of the past, like not having to have a 1-to-1 tight coupling between things that are drawn and things that can be interacted with. This need not mean starting from scratch. You can start to explore these questions inside little sandboxes you carve out entirely for yourself, using your existing tools inside your existing environment.

If you'd rather wait for the greybeards to get their act together, there is something else you can do. You can stop breaking your tools with your tools and start fixing them.

The Stateless State

I glossed over boundary 1, where the data comes from and where it goes, but in fact, this is where the real fun stuff happens, which is why I had to save it for last.

The way most apps will do this is to fetch data from a REST-like API. This is stored in local component state, or if they're really being fancy, a client-side document store organized by URL or document ID. When mutations need to be made, usually the object in question is rendered into a form, then parsed on submit back into a POST or PUT request. All of this is likely orchestrated separately for every mutation, with two separate calls to fetch() each.

If you use the API you already have:

let [state, setState] = useState(initialValue);

Then as we've seen, this useState allocates persistent data that can nevertheless vanish at any time, as can we. That's not good for remote data. But this one would be:

let [state, setState] = useRESTState(cachedValue, url);

It starts from a cached value, if any, fetches the latest version from a URL, and PUTs the result back if you change it.
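Stripped of React, the mechanics are just a state cell with a pull on startup and a push on write. A rough framework-free sketch (restState, fetchJSON, and putJSON are all hypothetical names; in React you'd assemble this from useState and useEffect instead):

```javascript
// A state cell that starts from a cached value, pulls the latest copy
// from a URL, and pushes changes back with PUT.
function restState(cachedValue, url, { fetchJSON, putJSON }) {
  let value = cachedValue;
  const listeners = [];
  const notify = () => listeners.forEach(fn => fn(value));

  // Pull the authoritative copy; keep the cached value on failure.
  fetchJSON(url).then(
    latest => { value = latest; notify(); },
    () => {}
  );

  return {
    get: () => value,
    set: next => {
      value = next;
      notify();
      putJSON(url, next); // push the change back out
    },
    subscribe: fn => listeners.push(fn),
  };
}
```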

Now, I know what you're thinking, surely we don't want to synchronize every single keystroke with a server, right? Surely we must wait until an entire form has been filled out with 100% valid content, before it may be saved? After all, MS Word prevented you from saving your homework at all unless you were completely done and had fixed all the typos, right? No wait, several people are typing, and no. Luckily it's not an either/or thing. It's perfectly possible to create a policy boundary between what should be saved elsewhere on submission and what should be manipulated locally:

<Form state={state} setState={setState} validate={validate}>{
  (state, isError, onSubmit) => {
    // ...
  }
}</Form>

It may surprise you that this is just our previous component in drag:

<ValidatingInput value={state} setValue={setState} validate={validate}>{
  (state, isError, onSubmit) => {
    // ...
  }
}</ValidatingInput>

If this doesn't make any sense, remember that it's the widget on the inside that decides when to call the policy's onChange handler. You could wire it up to a Submit button's onClick event instead. Though I'll admit you probably want a Form specialized for this role, with a few extra conveniences for readability's sake. But it would just be a different flavor of the same thing. Notice that if onSubmit/onChange took an Event instead of a direct value, it would totally ruin this, q.e.d.

In fact, if you want to only update a value when the user unfocuses a field, you could hook up the TextField's onBlur to the policy's onChange, and use it in "uncontrolled" mode, but you probably want to make a BufferedInput instead. Repetition better than the wrong abstraction strikes again.

You might also find these useful, although the second is definitely an extended summer holiday assignment.

// Store in window.localStorage
let [state, setState] = useLocalState(initialValue, url);

// Sync to a magical websocket API
let [state, setState] = useLiveState(initialValue, url);

But wait, there's something missing. If these useState() variants apply at the whole document level, how do you get setters for your individual form fields? What goes between the outer <Shunt /> and the inner <Shunt />?

Well, some cabling:

let [state, setState] = useState({
  metrics: {
    length: 2,
    mass: 10,
  },
  // ...
});

let useCursor = useRefineState(state, setState);
let [length, setLength] = useCursor('metrics', 'length');
let [mass,   setMass]   = useCursor('metrics', 'mass');

What useCursor does is produce an automatic reducer that will overwrite e.g. the state.metrics.length field immutably when you call setLength. A cursor is basically just a specialized read/write pointer. But it's still bound to the root of what it points to and can modify it immutably, even if it's buried inside something else. In React it makes sense to use [value, setter] tuples. That is to say, you don't play a new game of telephone with robots, you just ask the robots to pass you the phone. With a PBX so you only ever dial local numbers.
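A minimal framework-free sketch of that idea (makeCursor is a hypothetical name; the real useCursor would also memoize the setters):

```javascript
// Refine a [state, setState] pair into [value, setter] tuples for nested
// paths, writing back immutably by copying each level along the path.
function makeCursor(state, setState) {
  return (...path) => {
    // Read the nested value
    const value = path.reduce((obj, key) => obj[key], state);

    // Write it back immutably
    const setValue = newValue => {
      const write = (obj, keys) => {
        if (keys.length === 0) return newValue;
        const [key, ...rest] = keys;
        return { ...obj, [key]: write(obj[key], rest) };
      };
      setState(write(state, path));
    };

    return [value, setValue];
  };
}
```

Usage mirrors the example above: `const useCursor = makeCursor(state, setState); let [length, setLength] = useCursor('metrics', 'length');` — calling setLength(3) replaces state.metrics.length immutably while leaving its siblings untouched.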

Full marks are awarded only when complete memoization of the refined setter is achieved. Because you want to pass it directly to some policy+input combo, or a more complex widget, as an immutable prop value on a memoized component.

A Beautiful Bikeshed

Having now thrown all my cards on the table, I imagine the urge to nitpick or outright reject it has reached critical levels for some. Let's play ball.

I'm aware that the presence of string keys for useCursor lookups is an issue, especially for type safety. You are welcome to try and replace them with a compile-time macro that generates the reader and writer for you. The point is to write the lookup only once, in whatever form, instead of separately when you first read and later write. Possibly JS proxies could help out, but really, this is all trying to paper over language defects anyway.

Unlike most Model-View-Catastrophes, the state you manage is all kept at the top, separating the shape of the data from the shape of the UI completely. The 'routes' are only defined in a single place. Unlike Redux, you also don't need to learn a whole new saga, you just need to make your own better versions of the tools you already have. You don't need to centralize religiously. In fact, you will likely want to use both useRESTState and useLocalState in the same component sooner than later, for data and UI state respectively. It's a natural fit. You will want to fetch the remote state at the point in the tree where it makes the most sense, which is likely near but not at its root. This is something Apollo does get right.

In fact, now replace useState(...) with [state, updateState] = useUpdateState(...), which implements a sparse update language, using a built-in universal reducer, and merges it into a root state automatically. If you want to stream your updates as OT/CRDT, this is your chance to make a useCRDTState. Or maybe you just want to pass sparse lambda updates directly to your reducer, because you don't have a client/server gap to worry about, which means you're allowed to do:

updateState({foo: {thing: {$apply: old => new}}})

Though that last update should probably be written as:

let [thing, updateThing] = useCursor('foo', 'thing');
// ...
updateThing($apply(old => new));

useCursor() actually becomes simpler, because now its only real job is to turn a path like ['foo', 'bar'] into the function:

value => ({foo: {bar: value}})

...with all the reduction logic part of the original useUpdateState().
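That simpler job is nearly a one-liner (a sketch; the name pathToUpdate is assumed):

```javascript
// Turn a path like ['foo', 'bar'] into value => ({foo: {bar: value}})
// by folding the path from the right.
const pathToUpdate = path => value =>
  path.reduceRight((acc, key) => ({ [key]: acc }), value);
```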

Of course, now it's starting to look like you should be able to pass a customized useState to any other hook that calls useState, so you can reuse it with different backing stores, creating higher-order state hooks:

let useRemoteState = useRESTState(url);
let useRemoteUpdatedState = useUpdateState(initialValue, useRemoteState);

Worth exploring, for sure. Maybe undo/redo and time travel debugging suddenly became simpler as well.

Moving on, the whole reason you had centralized Redux reducers was because you didn't want to put the update logic inside each individual component. I'm telling you to do just that. Yes, but this is easily fixed:

updateThing(manipulateThing(thing, ...));

manipulateThing returns an update representing the specific change you requested, in some update schema or language, which updateThing can apply without understanding the semantics of the update. Only the direct effects. You can also build a manipulator with multiple specialized methods if you need more than one kind of update:


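For instance, something like this (a sketch; the $merge operator is assumed to be part of the same update language as $apply above):

```javascript
// A manipulator computes updates describing a change without applying it;
// the cursor's setter routes them into the root state via the reducer.
const manipulateTodo = {
  rename: (todo, title) => ({ $merge: { title } }),
  toggle: todo => ({ $merge: { done: !todo.done } }),
};
```

Then `updateTodo(manipulateTodo.toggle(todo))` applies the change without updateTodo ever knowing what "toggling" means.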
Instead of dispatching bespoke actions just so you can interpret them on the other side, why not refactor your manipulations into reusable pieces that take and return modified data structures or diffs thereof. Use code as your parts, just like dear-imgui. You compute updates on the spot that pass only the difference on, letting the cursor's setter map it into the root state, and the automatic reducer handle the merge.

In fact, while you could conscientiously implement every single state change as a minimal delta, you don't have to. That is, if you want to reorder some elements in a list, you don't have to frugally encode that as e.g. a $reorder operation which maps old and new array indices. You could have a $splice operation to express it as individual insertions and removals. Or if you don't care at all, the bulkiest encoding would be to replace the entire list with $set.

But if your data is immutable, you can efficiently use element-wise diffing to automatically compress any $set operation into a more minimal list of $splices, or other more generic $ops or $nops. This provides a way to add specialized live sync without having to rewrite every single data structure and state change in your app.
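A rough sketch of that compression (assuming immutable elements, so identity comparison is enough, and using the [index, removeCount, ...items] splice convention):

```javascript
// Compress "replace the whole list" into a single $splice by trimming the
// unchanged head and tail, then splicing out only the changed middle.
function diffToSplice(oldList, newList) {
  let start = 0;
  while (start < oldList.length && start < newList.length &&
         oldList[start] === newList[start]) start++;

  let oldEnd = oldList.length, newEnd = newList.length;
  while (oldEnd > start && newEnd > start &&
         oldList[oldEnd - 1] === newList[newEnd - 1]) {
    oldEnd--; newEnd--;
  }

  // One splice: remove the changed middle, insert its replacement
  return { $splice: [[start, oldEnd - start, ...newList.slice(start, newEnd)]] };
}
```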

If diffing feels icky, consider that the primary tool you use for development, git, relies on it completely to understand everything you get paid for. If it works there, it can work here. For completeness I should also mention immer, but it lacks cursor-like constructs and does not let you prepare updates without acting on them right away. However, immer.produce could probably be orthogonalized as well.

Maybe you're thinking that the whole reason you had Sagas was because you didn't want to put async logic inside your UI. I hear you.

This is a tougher one. Hopefully it should be clear by now that putting things inside React often has more benefits than not doing so, even if that code is not strictly part of any View. You can offload practically all your bookkeeping to React via incremental evaluation mechanisms. But we have to be honest and acknowledge it does not always belong there. You shouldn't be making REST calls in your UI, you should be asking your Store to give you a Model. But doing this properly requires a reactive approach, so what you probably want is a headless React to build your store with.

Until that exists, you will have to accept some misappropriation of resources, because the trade-off is worth it more often than not. Particularly because you can still refactor it so that at least your store comes in reactive and non-reactive variants, which share significant chunks of code between them. The final form this takes depends on the specific requirements, but I think it would look a lot like a reactive PouchDB tbh, with the JSON swapped out for something else if needed. (Edit: oh, and graph knitting)

To wrap it all up with a bow: one final trick, stupid, but you'll thank me. Often, every field needs a unique <label> with a unique id for accessibility reasons. Or so people think. Actually, you may not know that you have been able, this entire time, to wrap the <label> around your <input> without naming either. Because <label> is an interaction policy, not a widget.

<label for="name">Name</label>
<input id="name" type="text" value="" />

Works the same as:

<label>
  Name
  <input type="text" value="" />
</label>

You haven't needed the name attribute (or id) on form fields for a long time now in your SPAs. But if your dependencies still need one, how about you make this work:

let [mass, setMass] = useCursor('metrics', 'mass');
// The setter has a name
// setMass.name == 'state-0-metrics-mass'

<NumberInput
  parse={parseNumber} format={formatNumber}
  value={mass}      setValue={setMass}
>{
  // The name is extracted for you
  (name, value, isError, onChange, onBlur) =>
    <TextField
      name={name} value={value} isError={isError}
      onChange={onChange} onBlur={onBlur} />
}</NumberInput>

The setter is the part that is bound to the root. If you need a name for that relationship, that's where you can put it.

As an aside "<label> is an interaction policy" is also a hint on how to orthogonalize interactions in a post-HTML universe, but that's a whole 'nother post.

When you've done all this, you can wire up "any" data model to "any" form, while all the logistics are pro forma, but nevertheless immediate, across numerous components.

You define some state by its persistence mechanism. You refine the state into granular values and their associated setters, i.e. cursor tuples. You can pass them to other components, and let change policy wrappers adopt those cursors, separately from the prefab widgets they wrap. You put the events away where you don't have to think about them. Once the reusable pieces are in place, you only write what is actually unique about the situation. Your hooks and your components declare intent, not change. Actual coding of differential state transitions is limited only to opportunities where this has a real pay-off.

It's particularly neat once you realize that cursors don't have to be literal object property lookups: you can also make cursor helpers for e.g. finding an object in some structured collection based on a dynamic path. This can be used e.g. to make a declarative hierarchical editor, where you want to isolate a specific node and its children for closer inspection, like Adobe Illustrator. Maybe make a hook for a dynamically resolved cursor lookup. This is the actual new hotness: the nails you smashed with your <Component /> hammer are now hooks to hang declarative code off of.

Just keep in mind the price you pay for full memoization, probably indefinitely, is that all your hooks must be executed unconditionally. If you want to apply useCursor to a loop, that won't work. But you don't need to invent anything new, think about it. A dynamically invoked hook is simply a hook inside a dynamically rendered component. Your rows might want to individually update live anyway, it's probably the right place to put an incremental boundary.

I'm not saying the above is the holy grail, far from it, what I am saying is that it's unquestionably both simpler and easier, today, for these sorts of problems. And I've tried a few things. It gets out of the way to let me focus on building whatever I want to build in the first place, empowering my code rather than imprisoning it in tedious bureaucracy, especially if there's a client/server gap. It means I actually have a shot at making a real big boy web app, where all the decades-old conveniences work and the latency is what it should be.

It makes a ton more sense than any explanation of MVC I've ever heard, even the ones whose implementation matches their claims. The closest you can describe it in the traditional lingo is Model-Commitments-View, because the controllers don't control. They are policy components mediating the interaction, nested by scope, according to rules you define. The hyphens are cursors, a crucial part usually overlooked, rarely done right.

Good luck,
Steven Wittens

Previous: The Incremental Machine

August 02, 2019

Let's try and work our way through the oldest problem in computer science, or close to it: cache invalidation and naming things. Starting with the fact that we misnamed it.

In my view, referring to it as "cache invalidation" is somewhat like referring to crime prevention as "death minimization." While the absence of preventable death is indeed a desirable aspect of quality policing, it would suggest a worrying lack of both concern and ambition if that were the extent of a police force's mission statement.

So too should you run away, not walk, from anyone who treats some stale data and a necessary hard-refresh as a minor but unavoidable inconvenience. You see, this person is telling you that expecting to trust your eyes is too much to ask for when it comes to computers. I don't mean correctness or truth value, no, I just mean being able to see what is there so you can make sense of it, right or wrong. They expect you to sanity check every piece of information you might find, and be ready to Shift-Refresh or Alt-F4 if you suspect a glitch in the matrix. It should be pretty obvious this seriously harms the utility of such an environment, for work or pleasure. Every document becomes a potential sticky indicator gauge, requiring you to give it a good whack to make sure it's unstuck.

This should also puzzle you. A discipline whose entire job it is to turn pieces of data into other pieces of data, using precise instructions, is unable to figure out when its own results are no longer valid? This despite having the complete paper trail at our disposal for how each result is produced, in machine readable form.

Why hasn't this been solved by tooling already? We love tools, right? Is it possible? Is it feasible? Which parts?

Adders in the Grass

I'm going to start at the bottom (or at least a bottom) and work my way up and you'll see why. Let's start with a trivial case of a side-effect-free function, integer addition, and anally dissect it:

(a, b) => a + b

The result changes when either a or b do. However, there is a special case: if a and b each change in opposite amounts, the output is unchanged. Here we have a little microcosm of larger issues.

First, it would be perfectly possible to cache this result, and to check whether a or b have changed since the last time. But just computing the sum is faster than two comparisons. You also need permanent extra storage for at least one extra a and b each, and much more if you want a multi-valued cache rather than just a safety valve. Then you need a pruning policy too to keep it from growing.

Second, if you wish to know whether and how the output will actually change, then you must double-check: diff the old and the new values, and track the resulting deltas through the same computation as the original, so you can compare to the previous result. The requirement that a + b != (a + Δa) + (b + Δb) can then be reduced to Δa != -Δb. Though this is still more actual work.

Addition operator

If this were multiplication instead of a sum, then:

(a, b) => a * b
a * b != (a + Δa) * (b + Δb)

which reduces to:

Δa * b  +  Δb * a  +  Δa * Δb != 0

Here there is a non-linear relationship which involves both values and deltas together. The first two terms depend on one delta and value each, but the last term only kicks in if both inputs change at the same time. This shows how deltas can interfere both constructively and destructively, either triggering or defusing other effects on other inputs. It also implies there are no easy shortcuts to be found in delta-land, because there are many more ways for values and deltas to combine, than just values by themselves.

Multiply operator
Multiply operator deltas
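In code, the change test for a product carries all three terms of the algebra above (a sketch):

```javascript
// The product changes iff Δa·b + Δb·a + Δa·Δb ≠ 0.
const productChanged = (a, b, da, db) =>
  da * b + db * a + da * db !== 0;
```

Note how deltas can interfere destructively: (2 + 2) * (3 - 1.5) lands right back on 2 * 3.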

In fact you already knew this. Because if you could provide a concise summary of the delta-behavior of a certain class of functions, you'd break a good chunk of the internet and win a few awards:

m => SHA256(m)

The deltas in a hash function don't just get cozy, they have an all-out orgy of computation, the kind they invented 55 gallon drums of lube for.

This is also your first hint of an answer as to y u no tools. What looks well-behaved and cacheable at a small scale may in fact be part of a larger, unpredictable algorithm, which is why trying to automate caching everywhere is generally a non-starter. That is, it is perfectly possible to imagine a sort of God's eye view of a fully incremental piece of code at the opcode level, and to watch as a change in inputs begins the siege of a thousand memoized gates. But it wouldn't be useful because it's far too granular.

This also means caching is not a computer science problem, it is a computer engineering problem. We are fighting against entropy, against the computational chaos we ourselves create. It is not a science, it is damage control.

4-bit adder

On the other hand, we should not give up, because neither + nor * are elementary operations in reality. They are abstractions realized as digital circuits. Given a diagram of a binary adder, we could trace the data flow from each input bit, following the potential effects of each delta. But if you know your arithmetic you already know each bit in a sum can only affect itself and the ones to its left.

What's interesting though is that this complicated dance can be entirely ignored, because it serves to realize an abstraction, that of integers. Given integers, we can reason about changes at a different level. By looking at the level of arithmetic, we were able to discover that a specific pattern of matching differences, Δa == -Δb, cancels out, regardless of the specific values of a and b.

In this case, that only gave us counterproductive "optimizations", but that's because we aren't looking at the right level of abstraction yet. The point is abstraction boundaries don't necessarily have to be black boxes like the hash function, they can also be force fields that allow you to contain deltas, or at least, transmute them into more well-behaved ones. So let's climb up, like Bret wants us to.

I Spy With My Little API

For instance, if we look at the maximum operator applied to an entire list of numbers, again a pure function:

xs => xs.reduce(max, -Infinity)

A simple reduce creates a linear dependency between successive elements, with every delta potentially affecting all max() calls after it. However, the output changes more predictably.

If all elements are unique, the result will only change if a new value x + Δx exceeds the previous result (increasing it), or if an old value x was equal to the previous result and its Δx < 0 (decreasing it). Note we don't need to remember which element index it actually was, and we don't need to care about the elements that didn't change either (at least to merely detect a change).

Max operator deltas
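As a sketch (unique elements assumed, per the above; changes is a hypothetical list of (value, delta) pairs for just the elements that moved):

```javascript
// The max changes iff some new value exceeds the old max, or the old max
// itself decreased. Untouched elements never need to be consulted.
const maxChanged = (oldMax, changes) =>
  changes.some(({ x, dx }) =>
    (x + dx > oldMax) || (x === oldMax && dx < 0));
```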

If there are duplicates, things are a bit more complicated. Now there is a multi-delta-term Δa * Δb * ... between each set, which won't trigger unless all of them decrease at the same time. Writing out the full delta equation for the max of a list is more fun than I can handle, but you get the idea, and actually, it doesn't really matter much. If we pretend all elements are unique regardless, we simply trigger the occasional false positive (change falsely detected), but crucially no false negatives (change falsely ignored).

Either way, the sequential nature of the computation is no longer relevant at this level, because max() is associative (and commutative too), and reduce is a higher-order function whose deltas cancel out in convenient ways when you give it that sort of operator.

map reduce

Which means we're almost there. Actually dissecting the max operator was still too tedious, too special-cased. But it gives us hints of what to look for.

One such winning combo is Map-Reduce, using the same properties. By mapping each element in isolation, the effects of any change in any input is initially contained, in a way that is independent of the position of an element in a collection. Second, by using an associative reduction operator, this reduction can be done incrementally, as a tree instead of as a flat list. You reduce the list in chunks, and then re-reduce the list of reductions, recursively until you get one result. When some of the items in the list change, only a few chunks are affected, and the overall recomputation along the tree is minimized. The price you pay is to retain all the intermediate reductions in the tree each time.
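A sketch of the chunked version (one level of the tree; recurse on the partials list for very large inputs):

```javascript
// Reduce a list in fixed-size chunks, caching each chunk's reduction so
// that only chunks containing changed elements are recomputed.
function makeTreeReducer(op, identity, chunkSize = 4) {
  const cache = new Map(); // chunk start index -> { items, result }
  return items => {
    const partials = [];
    for (let i = 0; i < items.length; i += chunkSize) {
      const chunk = items.slice(i, i + chunkSize);
      const hit = cache.get(i);
      if (hit && hit.items.length === chunk.length &&
          hit.items.every((x, j) => x === chunk[j])) {
        partials.push(hit.result); // chunk unchanged: reuse
      } else {
        const result = chunk.reduce(op, identity);
        cache.set(i, { items: chunk, result });
        partials.push(result);
      }
    }
    // Associativity is what makes re-reducing the partials valid.
    return partials.reduce(op, identity);
  };
}
```

For example, `const max = makeTreeReducer((a, b) => Math.max(a, b), -Infinity);` — change one element and only its chunk plus the top level get recomputed.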

Map-Reduce is a universal incremental evaluation strategy, which can schedule and execute any pure function of the individual inputs, provided you reduce the result in an algebraically closed fashion. So that exists. Any others?

Well, many sequential/batch processes are universally incrementalizable too. Take for example a lexer, which processes a stream of text and produces tokens. In this case, the input cannot be chunked, it must be traversed start-to-end.


The lexer tracks its syntax in an internal state machine, while consuming one or more characters to produce zero or more tokens.

Conveniently, the lexer tells you everything you need to know through its behavior in consuming and producing. Roughly speaking, as long as you remember the tuple (lexer state, input position, output position) at every step, you can resume lexing at any point, reusing partial output for partially unchanged input. You can also know when to stop re-lexing, namely when the inputs match again and the internal state does too, because the lexer has no other dependencies.
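As a toy illustration of that resumability, here is a lexer that splits runs of digits from runs of letters, recording a checkpoint at every token boundary. Because this toy lexer's internal state machine is trivially "at a token start" at each boundary, the tuple reduces to just (input position, output position); a real lexer would store its state too, and would also need to detect where old and new output reconverge to stop early:

```javascript
// Sketch: checkpointed lexing. Resume from the last checkpoint before an edit,
// reusing every token produced before it. All names here are illustrative.
const lex = (text, from = { inPos: 0, outPos: 0 }, tokens = []) => {
  const checkpoints = [{ ...from }];
  let i = from.inPos;
  tokens = tokens.slice(0, from.outPos);   // keep the reusable prefix
  while (i < text.length) {
    const isDigit = /\d/.test(text[i]);
    let j = i;
    while (j < text.length && /\d/.test(text[j]) === isDigit) j++;
    tokens.push({ type: isDigit ? 'num' : 'word', value: text.slice(i, j) });
    i = j;
    checkpoints.push({ inPos: i, outPos: tokens.length });
  }
  return { tokens, checkpoints };
};

const relex = (text, editPos, prev) => {
  // Strictly before the edit, in case the edit merges into the previous token.
  const cp = prev.checkpoints.filter(c => c.inPos < editPos).pop();
  return lex(text, cp, prev.tokens);
};
```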

Lining up the two is left as an exercise for the reader, but there's a whole thesis if you like. With some minor separation of concerns, the same lexer can be used in batch or incremental mode. They also talk about Self-Versioned Documents and manage to apply the same trick to incremental parse trees, where the dependency inference is a bit trickier, but fundamentally still the same principle.

What's cool here is that while a lexer is still a pure function in terms of its state and its input, there crucially is inversion of control: it decides to call getNextCharacter() and emitToken(...) itself, whenever it wants, and the incremental mechanism is subservient to it on the small scale. Which is another clue, imo. It seems that pure functional programming is in fact neither necessary nor sufficient for successful incrementalism. That's just a very convenient straightjacket in which it's hard to hurt yourself. Rather you need the application of consistent evaluation strategies. Blind incrementalization is exceedingly difficult, because you don't know anything actionable about what a piece of code does with its data a priori, especially when you're trying to remain ignorant of its specific ruleset and state.

As an aside, the resumable-sequential approach also works for map-reduce, where instead of chunking your inputs, you reduce them in-order, but keep track of reduction state at every index. It only makes sense if your reducer is likely to reconverge on the same result despite changes though. It also works for resumable flatmapping of a list (that is, .map(...).flatten()), where you write out a new contiguous array on every change, but copy over any unchanged sections from the last one.
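The resumable flatmap can be sketched in a few lines. This version matches items up by index, which only survives in-place changes; matching by a stable key (as React does) is what makes insertions and deletions cheap too. `incrementalFlatMap` is a made-up name, and `f` is assumed pure:

```javascript
// Sketch: incremental .map(f).flatten() that copies unchanged spans from the
// previous output instead of re-running f on unchanged inputs.
const incrementalFlatMap = (items, f, prev = { items: [], spans: [] }) => {
  const out = [], spans = [];
  items.forEach((item, i) => {
    const reuse = prev.items[i] === item ? prev.spans[i] : null;
    const span = reuse || f(item);     // skip f when the input is unchanged
    spans.push(span);
    out.push(...span);                 // write out a new contiguous array
  });
  return { items, spans, out };
};
```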

Each is a good example of how you can separate logistics from policy, by reducing the scope and/or providing inversion of control. The effect is not unlike building a personal assistant for your code, who can take notes and fix stuff while you go about your business.

Don't Blink

This has all been about caching, and yet we haven't actually seen a true cache invalidation problem. You see, a cache invalidation problem is when you have a problem knowing when to invalidate a cache. In all the above, this is not a problem. With a pure function, you simply compare the inputs, which are literal values or immutable pointers. The program running the lexer also knows exactly which part of the output is in question, it's everything after the edit, same for the flatmap. There was never a time when a cache became invalid without us having everything right there to trivially verify and refresh it with.

No, these are cache conservation problems. We were actually trying to reduce unnecessary misses in an environment where the baseline default is an easily achievable zero false hits. We tinkered with that at our own expense, hoping to squeeze out some performance.

There is one bit of major handwavium in there: a != b in the real world. When a and b are composite and enormous, i.e. mutable like your mom, or for extra points, async, making that determination gets somewhat trickier. Async means a gap of a client/server nature and you know what that means.

Implied in the statement (a, b) => 🦆 is the fact that you have an a and a b in your hands and we're having duck tonight. If instead you have the name of a store where you can buy some a, or the promise of b to come, then now your computation is bringing a busload of extra deltas to dinner, and btw they'll be late. If a and b have large dependent computations hanging off them, it's your job to take this additional cloud of uncertainty and somehow divine it into a precise, granular determination of what to keep and what to toss out, now, not later.

1) You don't have an image, you have the URL of an image, and now you need to decide whether the URL will resolve to the same binary blob that's in your local cache. Do they still represent the same piece of data? The cache invalidation problem is that you weren't notified when the source of truth changed. Instead you have to make the call based on the metadata you originally got with the data and hope for the best.

Obviously it's not possible for every browser to maintain long-lived subscriptions to every meme and tiddy it downloaded. But we can brainstorm. The problem is that you took a question that has a mutable answer but you asked it to be immutable. The right answer is "here's the local cache and some refreshments while you wait, ... ah, there's a newer version, here". Protocol. Maybe a short-lived subscription inside the program itself, from the part that wants to show things, subscribing to the part that knows what's in them, until the latter is 100% sure. You just have to make sure the part that wants to show things is re-entrant.

2) You want to turn your scene graph into a flattened list of drawing commands, but the scene graph is fighting back. The matrices are cursed, they change when you blink, like the statues from Doctor Who. Because you don't want to remove the curse, you ask everyone to write IS DIRTY in chalk on the side of any box they touch, and you clean the marks back off 60 times per second when you iterate over the entire tree and put everything in order.

I joke, but what's actually going on here is subtle enough to be worth teasing apart. The reason you use dirty flags on mutable scene graphs has nothing to do with not knowing when the data changes. You know exactly when the data changes, it's when you set the dirty flag to true. So what gives?

The reason is that when children depend on their parents, changes cascade down. If you react to a change on a node by immediately updating all its children, this means that further updates of those children will trigger redundant refreshes. It's better to wait and gather all the changes, and then apply and refresh from the top down. Mutable or immutable matrix actually has nothing to do with it, it's just that in the latter case, the dirty flag is implicit on the matrix itself, and likely on each scene node too.
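Here's that pattern in miniature, with string concatenation standing in for the matrix multiply. Marking is cheap and immediate; the actual refresh happens once, top down, so a node is never recomputed twice no matter how many changes touched it between frames. All names are illustrative:

```javascript
// Sketch: dirty flags with a deferred top-down refresh of a scene graph.
let recomputes = 0;

const node = (local, children = []) =>
  ({ local, children, world: null, dirty: true });

const setLocal = (n, local) => { n.local = local; n.dirty = true; };

const refresh = (n, parentWorld = '', parentDirty = false) => {
  const needsUpdate = n.dirty || parentDirty;
  if (needsUpdate) {
    n.world = parentWorld + n.local;   // stand-in for a matrix multiply
    n.dirty = false;
    recomputes++;
  }
  // Children refresh after parents, so cascades never double-compute.
  n.children.forEach(c => refresh(c, n.world, needsUpdate));
};
```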

Push vs pull is also not really a useful distinction, because in order to cleanly pull from the outputs of a partially dirty graph, you have to cascade towards the inputs, and then return (i.e. push) the results back towards the end. The main question is whether you have the necessary information to avoid redundant recomputation in either direction and can manage to hold onto it for the duration.

The dirty flag is really a deferred and debounced line of code. It is read and cleared at the same time in the same way every frame, within the parent/child context of the node that it is set on. It's not data, it's a covert pre-agreed channel for a static continuation. That is to say, you are signaling the janitor who comes by 60 times a second to please clean up after you.

What's interesting about this is that there is nothing particularly unique about scene graphs here. Trees are ubiquitous, as are parent/child dependencies in both directions (inheritance and aggregation). If we reimagine this into its most generic form, then it might be a tree on which dependent changes can be applied in deferred/transactional form, whose derived triggers are re-ordered by dependency, and which are deduplicated or merged to eliminate any redundant refreshes before calling them.

In Case It Wasn't Obvious

So yes, exactly like the way the React runtime can gather multiple setState() calls and re-render affected subtrees. And exactly like how you can pass a reducer function instead of a new state value to it, i.e. a deferred update to be executed at a more opportune and coordinated time.

In fact, remember how in order to properly cache things you have to keep a copy of the old input around, so you can compare it to the new? That's what props and state are, they are the a and the b of a React component.

Δcomponent = Δ(props * state)
           = Δprops * state + Δstate * props + Δprops * Δstate

           = Props changed, state the same (parent changed)
           + State changed, props the same (self/child changed)
           + Props and state both changed  (parent's props/state change
               triggered a child's props/state change)

The third term is rare though, and the React team has been trying to deprecate it for years now.
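In code, checking those delta terms is just a comparison against the previous (props, state) pair — roughly what React.memo does with a shallow compare. `memoizeRender` is a hypothetical name:

```javascript
// Sketch: skip re-rendering when neither props nor state changed, by keeping
// the old pair around as the "a" to compare the new "b" against.
const memoizeRender = (render) => {
  let last = null;
  return (props, state) => {
    if (last && last.props === props && last.state === state) return last.output;
    last = { props, state, output: render(props, state) };
    return last.output;
  };
};
```

This only works because props and state are treated as immutable values, so identity comparison is a sound (and O(1)) change check.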

I prefer to call Components a re-entrant function call in an incremental deferred data flow. I'm going to recap React 101 quickly, because there is a thing that hooks do that needs to be pointed out.

The way you use React nowadays is, you render some component to some native context like an HTML document:

ReactDOM.render(<Component />, context);

The <Component /> in question is just syntactic sugar for a regular function:

let Component = (props) => {
  // Allocate some state and a setter for it
  let [state, setState] = useState(initialValue);

  // Render a child component
  return <OtherComponent foo={...} onChange={e => setState(...)} />;
  // aka
  return React.createElement(OtherComponent, {foo: ..., onChange: e => setState(...)}, null);
};
This function gets called because we passed a <Component /> to React.render(). That's the inversion of control again. In good components, props and state will both be some immutable data. props is feedforward from parents, state is feedback from ourselves and our children, i.e. respectively the exterior input and the interior state.

If we call setState(...), we cause the Component() function to be run again with the same exterior input as before, but with the new interior state available.

The effect of returning <OtherComponent .. /> is to schedule a deferred call to OtherComponent(...). It will get called shortly after. It too can have the same pattern of allocating state and triggering self-refreshes. It can also trigger a refresh of its parent, through the onChange handler we gave it. As the HTML-like syntax suggests, you can also nest these <Elements>, passing a tree of deferred children to a child. Eventually this process stops when components have all been called and expanded into native elements like <div /> instead of React elements.

Either way, we know that OtherComponent(...) will not get called unless we have had a chance to respond to changes first. However if the changes don't concern us, we don't need to be rerun, because the exact same rendered output would be generated, as none of our props or state changed.

This incidentally also provides the answer to the question you may not have realized you had: if everything is eventually a function of some Ur-input at the very start, why would anything ever need to be resumed from the middle? Answer: because some of your components want to semi-declaratively self-modify. The outside world shouldn't care. If we do look inside, you are sometimes treated to topping-from-the-bottom, as a render function is passed down to other components, subverting the inversion of control ad-hoc by extending it inward.

So what is it, exactly, that useState() does then that makes these side-effectful functions work? Well it's a just-in-time allocation of persistent storage for a temporary stack frame. That's a mouthful. What I mean is, forget React.

Think of Component as just a function in an execution flow, whose arguments are placed on the stack when called. This stack frame is temporary, created on a call and destroyed as soon as you return. However, this particular invocation of Component is not equally ephemeral, because it represents a specific component that was mounted by React in a particular place in the tree. It has a persistent lifetime for as long as its parent decides to render/call it.

So useState lets it anonymously allocate some permanent memory, keyed off its completely ephemeral, unnamed, unreified execution context. This only works because React is always the one who calls these magic reentrant functions. As long as this is true, multiple re-runs of the same code will retain the same local state in each stack frame, provided the code paths did not diverge. If they did, it's just as if you ran those parts from scratch.
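A toy runtime makes the trick concrete: storage is keyed off call order within a persistent slot array that belongs to the mounted instance, not to any one invocation. Everything below is a made-up miniature, not React's actual implementation:

```javascript
// Sketch: useState as just-in-time allocation of persistent storage for a
// temporary stack frame. Works only because the runtime makes the call.
const runtime = { slots: null, cursor: 0 };

const useState = (initial) => {
  const i = runtime.cursor++;          // keyed off call order
  const slots = runtime.slots;
  if (!(i in slots)) slots[i] = initial;
  const set = (v) => { slots[i] = v; };  // persists across re-runs
  return [slots[i], set];
};

// "Mounting" a component gives it a permanent slot array; every render
// re-runs the function against that same storage with the cursor reset.
const mount = (component) => {
  const slots = [];
  return (props) => {
    runtime.slots = slots;
    runtime.cursor = 0;
    return component(props);
  };
};
```

Re-running the same code paths hits the same slots; diverging code paths would misalign the cursor, which is exactly why hooks may not be called conditionally.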

What's also interesting is that hooks were first created to reduce the overuse of <Component /> nesting as a universal hammer for the nail of code composition, because much of the components had nothing to do with UI directly. In fact, it may be that UI just provided us with convenient boundaries around things in the form of widgets, which suggestively taught us how to incrementalize them.

This to me signals that React.render() is somewhat misnamed, but its only mistake is a lack of ambition. It should perhaps be React.apply() or React.incremental(). It's a way of calling a deferred piece of code so that it can be re-applied later with different inputs, including from the inside. It computes minimum updates down a dependency tree of other deferred pieces of code with the same power.

Right now it's still kind of optimized for handling UI trees, but the general strategy is so successful that variants incorporating other reconciliation topologies will probably work too. Sure, code doesn't look like React UI components, it's a DAG, but we all write code in a sequential form, explicitly ordering statements even when there is no explicit dependency between them, using variable names as the glue.

The incremental strategy that React uses includes something like the resumable-sequential flatmap algorithm, that's what the key attribute for array elements is for, but instead of .map(...).flatten() it's more like an incremental version of let render = (el, props) => recurse(el.render(props)) where recurse is actually a job scheduler.

The tech under the hood that makes this work is the React reconciler. It provides you with the illusion of a persistent, nicely nested stack of props and state, even though it never runs more than small parts of it at a time after the initial render. It even provides a solution for that old bugbear: resource allocation, in the form of the useEffect() hook. It acts like a constructor/destructor pair for one of these persistent stack frames. You initialize any way you like, and you return to React the matching destructor as a closure on the spot, which will be called when you're unmounted. You can also pass along dependencies so it'll be un/remounted when certain props like, I dunno, a texture size and every associated resource descriptor binding need to change.

There's even a neat trick you can do where you use one reconciler as a useEffect() inside another, bridging from one target context (e.g. HTML) into a different one that lives inside (e.g. WebGL). The transition from one to the other is then little more than a footnote in the resulting component tree, despite the fact that execution- and implementation-wise, there is a complete disconnect as only fragments of code are being re-executed sparsely left and right.

You can make it sing with judicious use of the useMemo and useCallback hooks, two necessary evils whose main purpose is to let you manually pass in a list of dependencies and save yourself the trouble of doing an equality check. When you want to go mutable, it's also easy to box in a changing value in an unchanging useRef once it's cascaded as much as it's going to. What do you eventually <expand> to? Forget DOMs, why not emit a view tree of render props, i.e. deferred function calls, interfacing natively with whatever you wanted to talk to in the first place, providing the benefits of incremental evaluation while retaining full control.
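What the dependency list buys you is roughly this: an equality check you declare instead of compute. `memoWithDeps` is a made-up standalone stand-in, outside any hook runtime:

```javascript
// Sketch: a useMemo-style cache keyed on a manually supplied dependency list,
// compared element-wise by identity.
const memoWithDeps = () => {
  let prevDeps = null, prevValue;
  return (compute, deps) => {
    const same = prevDeps && deps.length === prevDeps.length &&
                 deps.every((d, i) => d === prevDeps[i]);
    if (!same) { prevValue = compute(); prevDeps = deps; }
    return prevValue;
  };
};
```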

It's not a huge leap from here to being able to tag any closure as re-entrant and incremental, letting a compiler or runtime handle the busywork, and forget this was ever meant to beat an aging DOM into submission. Maybe that was just the montage where the protagonist trains martial arts in the mountain-top retreat. Know just how cheap O(1) equality checks can be, and how affordable incremental convenience for all but the hottest paths. However, no tool is going to reorganize your data and your code for you, so putting the boundaries in the right place is still up to you.

I have a hunch we could fix a good chunk of GPU programming on the ground with this stuff. Open up composability without manual bureaucracy. You know, like React VR, except with LISP instead of tears when you look inside. Unless you prefer being a sub to Vulkan's dom forever?

Previous: APIs are about Policy
Next: Model-View-Catharsis

August 01, 2019

For the sixth year in a row, Acquia has been recognized as a Leader in the Gartner Magic Quadrant for Web Content Management.

Gartner magic quadrant for web content management

As I've written before, I believe analyst reports like the Gartner Magic Quadrant are important because they introduce organizations to Acquia and Drupal. As I've put it before: if you want to find a good coffee place, you use Yelp. If you want to find a nice hotel in New York, you use TripAdvisor. Similarly, if a CIO or CMO wants to spend $250,000 or more on enterprise software, they often consult an analyst firm like Gartner.

You can read the complete report on Gartner's website. Thank you to everyone who contributed to this result!

Update: Gartner asked me to take down this post, or to update it to follow their citation guidelines. Specifically, Gartner didn't want my commentary clouding their work. I updated this post to remove any personal commentary so my opinion is not blended with their report.

July 25, 2019

A pox on both houses

“The Web is a system, Neo. That system is our enemy. But when you're inside, you look around, what do you see? Full-stack engineers, web developers, JavaScript ninjas. The very minds of the people we are trying to save.

But until we do, these people are still a part of that system and that makes them our enemy. You have to understand, most of these people are not ready to be unplugged. And many of them are so inured, so hopelessly dependent on the system, that they will fight to protect it.

Were you listening to me, Neo? Or were you looking at the widget library in the red dress?”


"What are you trying to tell me, that I can dodge unnecessary re-renders?"

"No Neo. I'm trying to tell you that when you're ready, you won't have to."


The web is always moving and shaking, or more precisely, shaking off whatever latest fad has turned out to be a mixed blessing after all. Specifically, the latest hotness for many is GraphQL, slowly but surely dethroning King REST. This means changing the way we shove certain data into certain packets. This then requires changing the code responsible for packing and unpacking that data, as well as replacing the entire digital last mile of routing it at both source and destination, despite the fact that all the actual infrastructure in between is unchanged. This is called full stack engineering. Available for hire now.

The expected custom and indeed, regular pastime, is of course to argue for or against, the old or the new. But instead I'd like to tell you why both are completely wrong, for small values of complete. You see, APIs are about policy.


Take your typical RESTful API. I say typical, because an actual Representationally State Transferred API is as common as a unicorn. A client talks to a server by invoking certain methods on URLs over HTTP, let's go with that.

Optimists will take a constructive view. The API is a tool of empowerment. It enables you to do certain things in your program you couldn't do before, and that's why you are importing it as a dependency to maintain. The more methods in the swagger file, the better, that's why it's called swagger.

But instead I propose a subtractive view. The API is a tool of imprisonment. Its purpose is to take tasks that you are perfectly capable of doing yourself, and to separate them from you with bulletproof glass and a shitty telephone handset. One that is usually either too noisy or too quiet, but never just right. Granted, sometimes this is self-inflicted or benign, but rarely both.

This is also why there are almost no real REST APIs. If we consult the book of difficult-to-spot lies, we learn that the primary features of a REST API are Statelessness, Cacheability, Layeredness, Client-Side Injection and a Uniform Interface. Let's check them.

Statelessness means a simple generality. URLs point to blobs, which are GET and PUT atomically. All the necessary information is supplied with the request, and no state is retained other than the contents saved and loaded. Multiple identical GETs and PUTs are idempotent. The DELETE verb is perhaps a PUT of a null value. So far mostly good. The PATCH verb is arguably a stateless partial PUT, and might be idempotent in some implementations, but only if you don't think too much about it. Which means a huge part of what remains are POST requests, the bread and butter of REST APIs, and those aren't stateless or idempotent at all.

Cacheability and layeredness (i.e. HTTP proxies) in turn have both been made mostly irrelevant. The move to HTTPS everywhere means the layering of proxies is more accurately termed a man-in-the-middle attack. That leaves mainly reverse proxying on the server or CDN side. The HTTP Cache-Control headers are also completely backwards in practice. For anything that isn't immutable, the official mechanism for cache invalidation is for a server to make an educated guess when its own data is going to become stale, which it can almost never know. If they guess too late, the client will see stale data. If they guess too soon, the client has to make a remote request before using their local cache, defeating the point. This was designed for a time when transfer time dominated over latency, whereas now we have the opposite problem. Common practice now is actually for the server to tag cacheable URLs with a revision ID, turning a mutable resource at an immutable URL into an immutable resource at a mutable URL.
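That last trick — an immutable resource at a mutable URL — is a one-liner. The checksum below is a toy stand-in for a real content hash, and the names are illustrative:

```javascript
// Sketch: revision-tagged asset URLs. The URL changes when the content does,
// so the asset itself can be cached forever with no invalidation guesswork.
const hash = (s) =>
  [...s].reduce((h, c) => (h * 31 + c.charCodeAt(0)) >>> 0, 0).toString(16);

const assetUrl = (path, contents) => `${path}?v=${hash(contents)}`;
```

Only the small, uncacheable document that names the URL ever needs to be refetched; everything it points at is immutable by construction.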

Client-Side Injection on the other hand, i.e. giving a browser JavaScript to run, is obviously here to stay, but still, no sane REST API makes you interpret JavaScript code to interact with it. That was mostly a thing Rubyists did in their astronautical pursuits to minimize the client/server gap from their point of view. In fact, we have entirely the opposite problem: we all want to pass bits of code to a server, but that's unsafe, so we find various ways of encoding lobotomized chunks of not-code and pretend that's sufficient.

Which leaves us with the need for a uniform interface, a point best addressed with a big belly laugh and more swagger definition file.

Take the most common REST API of them all, and the one nearly everyone gets wrong, /user. User accounts are some of the most locked up objects around, and as a result, this is a prime candidate for breaking all the rules.

The source of truth is usually something like:

ID | Email | Handle | Real Name | Password Hash  | Picture           | Karma | Admin
---|-------|--------|-----------|----------------|-------------------|-------|------
 1 |       | admin  | John Doe  | sd8ByTq86ED... | s3://bucket/1.jpg |     5 | true
 2 |       | jane   | Jane Doe  | j7gREnf63pO... | s3://bucket/2.jpg |    -3 | false

But if you GET /user/2, you likely see:

  "id": 2,
  "handle": "jane",
  "picture": "s3://bucket/2.jpg"

Unless you are Jane Doe, receiving:

  "id": 2,
  "email": "",
  "handle": "jane",
  "name": "Jane Doe",
  "picture": "s3://bucket/2.jpg"

Unless you are John Doe, the admin, who'll get:

  "id": 2,
  "email": "",
  "handle": "jane",
  "name": "Jane Doe",
  "picture": "s3://bucket/2.jpg",
  "karma": -3,
  "admin": false

What is supposedly a stateless, idempotent, cacheable, proxiable and uniform operation turns out to be a sparse GET of a database row, differentiated by both the subject and the specific objects being queried, which opaquely determines the specific variant we get back. People say horizontal scaling means treating a million users as if they were one, but did they ever check how true that actually was?
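The sparse, subject-dependent GET can be expressed as a declarative policy rather than bespoke endpoint code. The field lists below mirror the three responses above; the `policy`/`view` names are made up for illustration:

```javascript
// Sketch: one row, three views, selected by who is asking.
const policy = {
  public: ['id', 'handle', 'picture'],
  self:   ['id', 'email', 'handle', 'name', 'picture'],
  admin:  ['id', 'email', 'handle', 'name', 'picture', 'karma', 'admin'],
};

const view = (row, viewer) => {
  const role = viewer.admin ? 'admin'
             : viewer.id === row.id ? 'self'
             : 'public';
  return Object.fromEntries(policy[role].map(k => [k, row[k]]));
};
```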

I'm not done yet. These GETs won't even have matching PUTs, because likely the only thing Jane was allowed to do initially was:

POST /user/create
  "name": "Jane Doe",
  "email": "",
  "password": "hunter2"

Note the subtle differences with the above.

  • She couldn't supply her own picture URL directly, she will have to upload the actual file to S3 through another method. This involves asking the API for one-time permission and details to do so, after which her user record will be updated behind the scenes. Really, the type of picture is not string, it is a bespoke read-only boolean wearing a foreign key outfit.
  • She didn't get to pick her own id either. Its appearance in the GET body is actually entirely redundant, because it's merely humoring you by echoing back the number you gave it in the URL. Which it assigned to you in the first place. It's not part of the data, it's metadata... or rather the URL is. See, unless you put the string /user/ before the id you can't actually do anything with it. id is not even metadata, it's truncated metadata; unless you're crazy enough to have a REST API where IDs are mutable, in which case, stop that.
  • One piece of truth "data," the password hash, actually never appears in either GETs or POSTs. Only the unencoded password, which is shredded as soon as it's received, and never given out. Is the hash also metadata? Or is it the result of policy?

PATCH /user/:id/edit is left as an exercise for the reader, but consider what happens when Jane tries to change her own email address? What about when John tries to change Jane's? Luckily nobody has ever accidentally mass emailed all their customers by running some shell scripts against their own API.

Neither from the perspective of the client, nor that of the server, do we have a /user API that saves and loads user objects. There is no consistent JSON schema for the client—not even among a given single type during a single "stateless" session—nor idempotent whole row updates in the database.

Rather, there is an endpoint which allows you to read/write one or more columns in a row in the user table, according to certain very specific rules per column. This is dependent not just on the field types and values (i.e. data integrity), but on the authentication state (i.e. identity and permission), which comes via an HTTP header and requires extra data and lookups to validate.

If there was no client/server gap, you'd just have data you owned fully and which you could manipulate freely. The effect and purpose of the API is to prevent that from happening, which is why REST is a lie in the real world. The only true REST API is a freeform key/value store. So I guess S3 and CouchDB qualify, but neither's access control or query models are going to win any awards for elegance. When "correctly" locked down, CouchDB too will be a document store that doesn't return the same kind of document contents for different subjects and objects, but it will at least give you a single ID for the true underlying data and its revision. It will even tell you in real-time when it changes, a superb feature, but one that probably should have been built into the application-session-transport-whatever-this-is layer as the SUBSCRIBE HTTP verb.

Couch is the exception though. In the usual case, if you try to cache any of your responses, you usually have too much or too little data, no way of knowing when and how it changed without frequent polling, and no way of reliably identifying let alone ordering particular snapshots. If you try to PUT it back, you may erase missing fields or get an error. Plus, I know your Express server spits out some kind of ETag for you with every response, but, without looking it up, can you tell me specifically what that's derived from and how? Yeah I didn't think so. If that field meant anything to you, you'd be the one supplying it.

If you're still not convinced, you can go through this exercise again but with a fully normalized SQL database. In that case, the /user API implementation reads/writes several tables, and what you have is a facade that allows you to access and modify one or more columns in specific rows in these particular tables, cross referenced by meaningless internal IDs you probably don't see. The rules that govern these changes are fickle and unknowable, because you trigger a specific set of rules through a combination of URL, HTTP headers, POST body, and internal database state. If you're lucky your failed attempts will come back with some notes about how you might try to fix them individually, if not, too bad, computer says no.

For real world apps, it is generally impossible by construction for a client to create and maintain an accurate replica of the data they are supposed to be able to query and share ownership of.

Regressive Web Apps

I can already hear someone say: my REST API is clean! My data models are well-designed! All my endpoints follow the same consistent pattern, all the verbs are used correctly, there is a single source of truth for every piece of data, and all the schemas are always consistent!

So what you're saying is that you wrote or scaffolded the exact same code to handle the exact same handful of verbs for all your different data types, each likely with their own Model(s) and Controller(s), and their own URL namespace, without any substantial behavioral differences between them? And you think this is good?

As an aside, consider how long ago people figured out that password hashes should go in the /etc/shadow file instead of the now misnamed /etc/passwd. This is a one-to-one mapping, the kind of thing database normalization explicitly doesn't want you to split up, with the same repeated "primary keys" in both "tables". This duplication is actually good though, because the OS' user API implements Policy™, and the rules and access patterns for shell information are entirely different from the ones for password hashes.

You see, if APIs are about policy and not empowerment, then it absolutely makes sense to store and access that data in a way that is streamlined to enforce those policies. Because you know exactly what people are and aren't going to be doing with it—if you don't, that's undefined behavior and/or a security hole. This is something most NoSQLers also got wrong, organizing their data not by policy but rather by how it would be indexed or queried, which is not the same thing.

This is also why people continue to write REST APIs, as flawed as they are. The busywork of creating unique, bespoke endpoints incidentally creates a time and place for defining and implementing some kind of rules. It also means you never have to tackle them all at once, consistently, which would be more difficult to pull off (but easier to maintain). The stunted vocabulary of ad-hoc schemas and their ill-defined nouns forces you to harmonize it all by hand before you can shove it into your meticulously typed and normalized database. The superfluous exercise of individually shaving down the square pegs you ordered, to fit the round holes you carved yourself, has incidentally allowed you to systematically check for woodworms.

It has nothing to do with REST or even HTTP verbs. There is no semantic difference between:

PATCH /user/1/edit
{"name": "Jonathan Doe"}


UPDATE TABLE users SET name = "Jonathan Doe" WHERE id = 1

The main reason you don't pass SQL to your Rails app is because deciding on a policy for which SQL statements are allowed and which are not is practically impossible. At most you could pattern match on a fixed set of query templates. Which, if you do, would mean effectively using the contents of arbitrary SQL statements as enum values, using the power of SQL to express the absence of SQL. The Aristocrats.

But there is an entirely more practical encoding of sparse updates in {whatever} <flavor /> (of (tree you) prefer).

POST /api/update
{
  "user": {
    "1": {
      "name": {"$set": "Jonathan Doe"}
    }
  }
}
It even comes with free bulk operations.
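A minimal sketch of applying such an update shows why: the `$set` marker is a made-up convention here (MongoDB and immutability-helper use similar ones), and the recursion simply doesn't care how many keys each level carries, so one request can touch many objects at once.

```python
def merge(state, update):
    """Recursively apply a sparse update; {"$set": v} replaces a node outright."""
    if isinstance(update, dict) and "$set" in update:
        return update["$set"]
    if isinstance(update, dict):
        result = dict(state)  # copy-on-write: untouched siblings are kept
        for key, sub in update.items():
            result[key] = merge(state.get(key, {}), sub)
        return result
    return update

users = {"1": {"name": "Jon Doe", "age": 30}, "2": {"name": "Jane Doe"}}
patched = merge(users, {
    "1": {"name": {"$set": "Jonathan Doe"}},
    "2": {"name": {"$set": "Janet Doe"}},
})
# patched == {"1": {"name": "Jonathan Doe", "age": 30}, "2": {"name": "Janet Doe"}}
```

One reducer, any shape of tree, any number of targets per request.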

Validating an operation encoded like this is actually entirely feasible. First you validate the access policy of the individual objects and properties being modified, according to a defined policy schema. Then you check if any new values are references to other protected objects or forbidden values. Finally you opportunistically merge the update, and check the result for any data integrity violations, before committing it.
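Sketched concretely — with hypothetical `writable` and `integrity` hooks standing in for your actual policy schema, and the value checks of step two elided — the three steps chain naturally:

```python
def merge(state, update):
    """Recursively apply a sparse update; {"$set": v} replaces a node."""
    if isinstance(update, dict) and "$set" in update:
        return update["$set"]
    if isinstance(update, dict):
        return {**state, **{k: merge(state.get(k, {}), v) for k, v in update.items()}}
    return update

def paths_of(update, prefix=()):
    """Enumerate the leaf paths a sparse update touches."""
    if isinstance(update, dict) and "$set" not in update:
        for key, sub in update.items():
            yield from paths_of(sub, prefix + (key,))
    else:
        yield prefix

def validate_and_merge(state, update, *, writable, integrity):
    # 1. Access policy: every touched path must be writable by this caller.
    for path in paths_of(update):
        if not writable(path):
            raise PermissionError(f"write denied at {path}")
    # 2. Value checks (forbidden values, references to protected objects)
    #    would walk the same paths here.
    # 3. Opportunistically merge, then verify integrity before committing.
    merged = merge(state, update)
    if not integrity(merged):
        raise ValueError("integrity violation")
    return merged

state = {"user": {"1": {"name": "Jon Doe", "role": "admin"}}}
new_state = validate_and_merge(
    state,
    {"user": {"1": {"name": {"$set": "Jonathan Doe"}}}},
    writable=lambda path: path[-1] != "role",              # role is read-only
    integrity=lambda s: all(u["name"] for u in s["user"].values()),
)
```

The policy lives in two declarative predicates instead of being smeared across a hundred endpoints.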

You've been doing this all along in your REST API endpoints, you just did it with bespoke code instead of declarative functional schemas and lambdas, like a chump.

If the acronyms CRDT and OT don't mean anything to you, this is also your cue to google them so you can start to imagine a very different world. One where your sparse updates can be undone or rebased like git commits in realtime, letting users resolve any conflicts among themselves as they occur, despite latency. It's one where the idea of remote web apps being comparable to native local apps is actually true instead of a lie an entire industry has shamelessly agreed to tell itself.
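To make the git analogy concrete: a `$set` is trivially invertible if you record the value it overwrote, and that inverse is all an undo stack needs. A toy sketch — nowhere near a real CRDT or OT implementation, which must also handle concurrent edits:

```python
def set_at(state, path, value):
    """Immutably set `value` at `path` in nested dicts."""
    if not path:
        return value
    head, *rest = path
    result = dict(state)
    result[head] = set_at(state[head], rest, value)
    return result

def apply_with_undo(state, path, value):
    """Apply one $set; return (new_state, inverse_op) for the undo stack."""
    node = state
    for key in path:
        node = node[key]          # read the value being overwritten
    inverse = (path, node)        # replaying this $set undoes the edit
    return set_at(state, path, value), inverse

doc = {"user": {"1": {"name": "Jon Doe"}}}
doc2, undo = apply_with_undo(doc, ["user", "1", "name"], "Jonathan Doe")
doc3, _ = apply_with_undo(doc2, *undo)   # replay the inverse: back where we started
```

Because each state is a fresh tree, "time travel" is just holding on to old references.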

You might also want to think about how easy it would be to make a universal reducer for said updates on the client side too, obviating all those Redux actions you typed out. How you could use the composition of closures during the UI rendering process to make memoized update handlers, which produce sparse updates automatically to match your arbitrary descent into your data structures. That is, react-cursor and its ancestors except maybe reduced to two and a half hooks and some change, with all the same time travel. Have you ever built a non-trivial web app that had undo/redo functionality that actually worked? Have you ever used a native app that didn't have this basic affordance?
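The cursor idea fits in a dozen lines of any language — a closure remembers its path into the tree, and "setting" through it emits a sparse update rather than mutating anything. A hypothetical shape, not react-cursor's actual API, with the memo cache left as a comment:

```python
class Cursor:
    """A remembered path into a tree. Descending returns a narrower cursor;
    setting through it emits a sparse update instead of mutating anything."""
    def __init__(self, path=()):
        self.path = path

    def __getitem__(self, key):
        # In a UI this is what descending into a child component would do.
        # (A memo cache keyed on `path` would make these handlers stable
        # across renders, which is what makes memoization work.)
        return Cursor(self.path + (key,))

    def set(self, value):
        # Wrap the value in {"$set": ...} nested along the remembered path.
        update = {"$set": value}
        for key in reversed(self.path):
            update = {key: update}
        return update

patch = Cursor()["user"]["1"]["name"].set("Jonathan Doe")
# patch == {"user": {"1": {"name": {"$set": "Jonathan Doe"}}}}
```

Feed `patch` to the universal reducer and you have two-way data flow without a single hand-written action type.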

It's entirely within your reach.


If you haven't been paying attention, you might think GraphQL answers a lot of these troubles. Isn't GraphQL just like passing an arbitrary SELECT query to the server? Except in a query language that is recursive, typed, composable, and all that? And doesn't GraphQL have typed mutations too, allowing for better write operations?

Well, no.

Let's start with the elephant in the room. GraphQL was made by Facebook. That Facebook. They're the same people who made the wildly successful React, but here's the key difference: you probably have the same front-end concerns as Facebook, but you do not have the same back-end concerns.

The value proposition here is of using a query language designed for a platform that boxes its 2+ billion users in, feeds them extremely precise selections from an astronomical trove of continuously harvested data, and only allows them to interact by throwing small pebbles into the relentless stream in the hope they make some ripples.

That is, it's a query language that is very good at letting you traverse an enormous graph while verifying all traversals, but it was mainly a tool of necessity. It lets them pick and choose what to query, because letting Facebook's servers tell you everything they know about the people you're looking at would saturate your line. Not to mention they don't want you to keep any of this data, you're not allowed to take it home. All that redundant querying over time has to be minimized and overseen somehow.

One problem Facebook didn't have though was to avoid busywork, that's what junior hires are for, and hence GraphQL mutations are just POST requests with a superficial layer of typed paint. The Graph part of the QL is only for reading, which few people actually had real issues with, seeing as GET was the one verb of REST that worked the most as advertised.

Retaining a local copy of all visible data is impractical and undesirable for Facebook's purposes, but should it be impractical for your app? Or could it actually be extremely convenient, provided you got there via technical choices and a data model adapted to your situation? In order to do that, you cannot be fetching arbitrary sparse views of unlabelled data, you need to sync subgraphs reliably both ways. If the policy boundaries don't match the data's own, that becomes a herculean task.

What's particularly poignant is that the actual realization of a GraphQL back-end in the wild is typically done by... hooking it up to an SQL database and grafting all the records together. You recursively query this decidedly non-graph relational database, which has now grown JSON columns and other mutations. Different peg, same hole, but the peg shaving machine is now a Boston Dynamics robot with a cute little dog called Apollo and they do some neat tricks together. It's just an act though, you're not supposed to participate.

Don't get me wrong, I know there are real benefits around GraphQL typing and tooling, but you do have to realize that most of this serves to scaffold out busywork, not eliminate it fully, while leaving the INSERT/UPDATE/DELETE side of things mostly unaddressed. You're expected to keep treating your users like robots that should only bleep the words GET and POST, instead of just looking at the thing and touching the thing directly, preferably in group, tolerant to both error and lag.


This is IMO the real development and innovation bottleneck in practical client/server application architecture, the thing that makes so many web apps still feel like web apps instead of native apps, even if it's Electron. It makes any requirement of an offline mode a non-trivial investment rather than a sane default for any developer. The effect is also felt by the user, as an inability to freely engage with the data. You are only allowed to siphon it out at low volume, applying changes only after submitting a permission slip in triplicate and getting a stamped receipt. Bureaucracy is a necessary evil, but it should only ever be applied at minimum viable levels, not turned into an industry tradition.

The exceptions are rare, always significant feats of smart engineering, and unmistakeable on sight. It's whenever someone has successfully managed to separate the logistics of the API from its policies, without falling into the trap of making a one-size-fits-all tool that doesn't fit by design.

Can we start trying to democratize that? It would be a good policy.

Next: The Incremental Machine

[Image: A swagger definition file. Height: 108,409px]