Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

January 28, 2023

Today I wanted to program an ESP32 development board, the ESP-Pico-Kit v4, but when I connected it to my computer's USB port, the serial connection didn't appear in Linux. Suspecting a hardware issue, I tried another ESP32 board, the ESP32-DevKitC v4, but this didn't appear either, so then I tried another one, a NodeMCU ESP8266 board, which had the same problem. Time to investigate...

The dmesg output looked suspicious:

[14965.786079] usb 1-1: new full-speed USB device number 5 using xhci_hcd
[14965.939902] usb 1-1: New USB device found, idVendor=10c4, idProduct=ea60, bcdDevice= 1.00
[14965.939915] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[14965.939920] usb 1-1: Product: CP2102 USB to UART Bridge Controller
[14965.939925] usb 1-1: Manufacturer: Silicon Labs
[14965.939929] usb 1-1: SerialNumber: 0001
[14966.023629] usbcore: registered new interface driver usbserial_generic
[14966.023646] usbserial: USB Serial support registered for generic
[14966.026835] usbcore: registered new interface driver cp210x
[14966.026849] usbserial: USB Serial support registered for cp210x
[14966.026881] cp210x 1-1:1.0: cp210x converter detected
[14966.031460] usb 1-1: cp210x converter now attached to ttyUSB0
[14966.090714] input: PC Speaker as /devices/platform/pcspkr/input/input18
[14966.613388] input: BRLTTY 6.4 Linux Screen Driver Keyboard as /devices/virtual/input/input19
[14966.752131] usb 1-1: usbfs: interface 0 claimed by cp210x while 'brltty' sets config #1
[14966.753382] cp210x ttyUSB0: cp210x converter now disconnected from ttyUSB0
[14966.754671] cp210x 1-1:1.0: device disconnected

So the ESP32 board, with its Silicon Labs CP2102 USB-to-UART controller chip, was recognized and attached to the /dev/ttyUSB0 device, as it normally should be. But then the brltty command suddenly intervened and disconnected the serial device.

I looked up what brltty does: apparently it is a system daemon that provides console access for blind people using a braille display. Looking into the contents of the package on my Ubuntu 22.04 system (with dpkg -L brltty), I saw a udev rules file, so I grepped it for the product ID of my USB device:

$ grep ea60 /lib/udev/rules.d/85-brltty.rules
ENV{PRODUCT}=="10c4/ea60/*", ATTRS{manufacturer}=="Silicon Labs", ENV{BRLTTY_BRAILLE_DRIVER}="sk", GOTO="brltty_usb_run"

Looking at the context, this file shows:

# Device: 10C4:EA60
# Generic Identifier
# Vendor: Cygnal Integrated Products, Inc.
# Product: CP210x UART Bridge / myAVR mySmartUSB light
# BrailleMemo [Pocket]
# Seika [Braille Display]
ENV{PRODUCT}=="10c4/ea60/*", ATTRS{manufacturer}=="Silicon Labs", ENV{BRLTTY_BRAILLE_DRIVER}="sk", GOTO="brltty_usb_run"

So apparently there's a braille display that uses the same CP210x USB-to-UART controller as a lot of microcontroller development boards. And because this udev rule claims the interface for the brltty daemon, UART communication with all these development boards is no longer possible.

As I'm not using these braille displays, the fix for me was easy: find the systemd unit that loads these rules, then mask and stop it.

$ systemctl list-units | grep brltty
brltty-udev.service loaded active running Braille Device Support
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
$ sudo systemctl stop brltty-udev.service

After this, I was able to use the serial interface again on all my development boards.
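Masking the service disables brltty entirely. A gentler alternative (not from the original post, and untested here; the file paths are the ones shown above) would be to override just the offending rule: udev gives files in /etc/udev/rules.d precedence over identically named files in /lib/udev/rules.d, so you can copy the rules file and comment out the lines matching the CP210x product ID:

$ sudo cp /lib/udev/rules.d/85-brltty.rules /etc/udev/rules.d/
$ sudo sed -i '/10c4\/ea60/s/^/# /' /etc/udev/rules.d/85-brltty.rules
$ sudo udevadm control --reload-rules

This keeps braille support working for other devices while freeing the serial port for development boards.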

January 26, 2023

The latest MySQL release was published on January 17, 2023. MySQL 8.0.32 contains some new features and bug fixes. As usual, it also contains contributions from our great MySQL Community.

I would like to thank all contributors on behalf of the entire Oracle MySQL team!

MySQL 8.0.32 contains patches from Facebook/Meta, Alexander Reinert, Luke Weber, Vilnis Termanis, Naoki Someya, Maxim Masiutin, Casa Zhang from Tencent, Jared Lundell, Zhe Huang, Rahul Malik from Percona, Andrey Turbanov, Dimitry Kudryavtsev, Marcelo Altmann from Percona, Sander van de Graaf, Kamil Holubicki from Percona, Laurynas Biveinis, Seongman Yang, Yamasaki Tadashi, Octavio Valle, Zhao Rong, Henning Pöttker, Gabrielle Gervasi and Nico Pay.

Here is the list of the above contributions and related bugs. We can see that for this release our connectors received several contributions, always a good sign of their increasing popularity.

We can also notice the return of a major contributor: Laurynas Biveinis!


Connector / NET

  • #74392 – Support to use (Memory-)Stream for bulk loading data – Alexander Reinert
  • #108837 – Fix unloading issues – Gabriele Gervasi

Connector / Python

  • #81572 – Allow MySQLCursorPrepared.execute() to accept %(key_name)s in operations and dic – Luke Weber
  • #81573 – Add MySQLCursorPreparedDict option – Luke Weber
  • #82366 – Warning behaviour improvements – Vilnis Termanis
  • #89345 – Reduce callproc roundtrip time – Vilnis Termanis
  • #90862 – C extension – Fix multiple reference leaks – Vilnis Termanis
  • #96280 – prepared cursor failed to fetch/decode result of varbinary columns – Naoki Someya
  • #103488 – Stubs (.pyi) for type definition for connection and cursor objects – Maxim Masiutin
  • #108076 – If extra init_command options are given for the Django connector, load them – Sander van de Graaf
  • #108733 – python connector will return a date object when time is 00:00:00 – Zhao Rong

Connector / J

  • #104954 – MysqlDataSource fails to URL encode database name when constructing JDBC URL – Jared Lundell
  • #106252 – Connector/J client hangs after prepare & execute process with old version server – Zhe Huang
  • #106981 – Remove superfluous use of boxing – Andrey Turbanov
  • #108414 – Malformed packet generation for COM_STMT_EXECUTE – Seongman Yang
  • #108419 – Recognize “ON DUPLICATE KEY UPDATE” in “INSERT SET” Statement – Yamasaki Tadashi
  • #108814 – Fix name of relocation POM file – Henning Pöttker

Connector / C++

  • #108652 – Moving a local object in a return statement prevents copy elision – Octavio Valle

Clients & API

  • C API (client library) – Fix sha256_password_auth_client_nonblocking – Facebook
  • #105761 – mysqldump make a non-consistent backup with --single-transaction option – Marcelo Altmann
  • #108861 – Fix typo in dba.upgradeMetadata() error message (Shell AdminAPI) – Nico Pay


Replication

  • Fix race between binlog sender heartbeat timeout – Facebook

InnoDB and Clone

  • #106616 (private) – 8.0 upgrade (from 5.6) crashes with Assertion failure – Rahul Malik
  • #107854 (private) – Assertion failure: dict0mem.h – Marcelo Altmann
  • #108111 – Garbled UTF characters in SHOW ENGINE INNODB STATUS – Kamil Holubicki
  • #108317 – clone_os_copy_file_to_buf partial read handling completely broken – Laurynas Biveinis


  • #104934 (private) – Statement crash – Casa Zhang
  • #107633 – Fixing a type-o, should be “truncate” – Dimitry Kudryavtsev

If you have patches and you also want to become a MySQL contributor, it's easy: you can send pull requests to MySQL's GitHub repositories or submit your patches on bugs.mysql.com (signing the Oracle Contributor Agreement is required).

Thank you again to all our contributors!

January 25, 2023

Twenty years ago… I decided to start a blog to share my thoughts! That’s why I called it “/dev/random”. What was the Internet like twenty years ago? Well, there were good things and bad ones…

Over the years, the blog content evolved, and I wrote a lot of technical stuff related to my job, experiences, tools, etc. Then I had the opportunity to attend a lot of security conferences and started to write wrap-ups. With COVID came fewer conferences and no more reviews. For the last few months, I have mainly been writing diaries for the Internet Storm Center; therefore, I publish less private stuff here and just relay the content published on the ISC website. If you have read my stuff for a long time (or even if you are a newcomer), thank you very much!

A few stats about the site:

  • 2056 articles
  • 20593 pictures
  • 5538 unique visitors to the RSS feed in the last 30 days
  • 85000 hits/day on average (bots & attacks included?)

I know that these numbers might seem low to many of you, but I’m proud of them!

The post This Blog Has 20 Years! appeared first on /dev/random.

I published the following diary on the Internet Storm Center: “A First Malicious OneNote Document“:

Attackers are always trying to find new ways to deliver malware to victims. They recently started sending Microsoft OneNote files in massive phishing campaigns. OneNote files (with the extension “.one”) are handled automatically by computers that have the Microsoft Office suite installed. Yesterday, my honeypot caught a first sample. This is a good opportunity to have a look at these files. The file, called “”, was delivered as an attachment to a classic phishing email… [Read more]

The post [SANS ISC] A First Malicious OneNote Document appeared first on /dev/random.

January 24, 2023

Phew, getting PHP 8.2 working in my #wordpress plugin’s Travis tests required quite a bit of trial & error in travis.yaml, but 8 commits later I won. The solution was in these extra lines to ensure libonig5 was installed...


January 23, 2023

Let’s free culture to cultivate freedom

This talk was given on November 19, 2022 in Toulouse as part of the Capitole du Libre.
The text is my working script and does not include the many improvisations and digressions inherent to every One Ploum Show.
Warning! This conference is not a conference about cyclimse. Thank you for your understanding.

Who among you got that reference to « La classe américaine »? It makes me happy to be here. I’m glad to see you. We’re going to eat chips. What? That’s all the reaction I get when I tell you we’re going to eat chips?

Seriously, I am very happy to be here among you. I feel in my element. I have moved through the worlds of industry, startups, academia and even a bit of finance. But it is only among free-software people that I feel at home. Because we share the same culture. Because we agree that Vim is far better than Emacs. (no, not the tomatoes!)

That is what culture is: references that let us understand each other, that express a certain complicity. One of the highlights of my marriage was showing « La cité de la peur » to my wife. She didn’t love the film. Meh. But we extended our shared vocabulary.

— I’m hungry! I’m hungry! I’m hungry!

— Can we drop the formalities? You’re a pain!

(“yes, but I’m still hungry,” someone in the audience replies)

Culture is that: an extension of vocabulary. Any programmers in the room? Well, language, like the French we are speaking right now, corresponds to the programming language. Culture corresponds to the libraries. Language and library. The library is culture. Words are magnificent!

To express ourselves, to communicate, to be in relation with others, in short to be human, culture is indispensable. When two cultures are too different, it is easy to see the other as inhuman, as an enemy. Culture, and the sharing of it, is what makes us human.

Yet culture is in danger. It is threatened, hunted down, forbidden. Replaced by a standardized substitute.

To extend your culture is to enlarge your vocabulary, to refine your understanding of the world. Culture is the scaffolding for how we see the world. Lending a book you love is an act of love, of intimacy. It is literally baring yourself and saying: “I would like us to have a deeper mutual understanding.” It is magnificent!

But for how long will that remain legal? Or even technically possible? Once an author dies, their work disappears for 70 years because, for the vast majority of works, it is not profitable to reprint them and pay royalties to the descendants. So we kill culture along with the author.

Transmission is nevertheless indispensable. Culture feeds itself, evolves and transforms through interactions, through exchanges. But interactions are now surveilled, monetized, spied upon. As a result, they are fake, rigged, inhuman. Twitter and LinkedIn accounts are mostly fake. Facebook likes are bought by the shovelful. The visits to your website are bots. TikTok and YouTube content is increasingly generated automatically. The news in the major media? Underpaid journalists (no, even less than that) competing with algorithms to see which content will bring in the most clicks. Newsrooms are now equipped with screens showing, in real time, the clicks on each piece of content. The journalists’ job? Optimize that. Even open-source code is now generated with GitHub Copilot. These algorithms feed on content to generate new content. Do you see the loop? The “while True”?

For millennia, our brains were faster than our means of communication. We learned, we reflected. For the first time in the history of information, our brain is now the bottleneck. It is the slowest element in the chain! It can no longer absorb everything. It gorges itself and chokes!

When we are online, we feed this enormous monster that lives off our data, our attention, our time, our clicks. We are literally the exploited flesh of the film The Matrix. Except that in The Matrix, the bodies are fed and housed in their cocoons, whereas we work and pay for the right to be exploited by this gigantic paperclip factory.

Do you know the story of the paperclip factory? It is a concept invented by the researcher Nick Bostrom in a paper titled “Ethical Issues in Advanced Artificial Intelligence”. The idea is that if you create an artificial intelligence and ask it to make as many paperclips as possible, as fast as possible, that AI will quickly arrange to eliminate the humans who might slow it down, before turning the entire planet into a mountain of paperclips, keeping only enough resources to colonize other planets in order to turn them into paperclips too.

In a 2018 talk, the science-fiction author Charlie Stross showed that there is no need to wait for very advanced artificial intelligences for the problem to arise. That a corporation is, in essence, a paperclip factory: an entity whose one and only goal is to generate money, even if it destroys its creators, humanity and the planet along the way.

The concept is perfectly illustrated by that magnificent scene in John Steinbeck’s “The Grapes of Wrath” where a farmer confronts a representative of the bank that is expropriating him from his land. He wants to go and kill whoever is responsible for his expropriation. The banker then tells him: “The bank has a will we must obey, even if all of us are opposed to its actions.” In short, a paperclip factory.

The paperclip factory makes us spend, turns us into zombies. Have you ever seen a zombie? I have. Whenever I ring my bicycle bell at people holding a phone out at arm’s length. They are in a virtual world. They have even delegated their sense of hearing to Apple, with those earbuds that never come out and that can pipe real-world sound into the ear. With Apple as the intermediary. As in The Matrix, people live in a virtual world. It just started with hearing, instead of the big headsets over the eyes we used to imagine.

To escape the factory, to avoid being turned into paperclips, we must create, maintain and occupy spaces reserved for humans. Not algorithms. Not corporations. Humans. And put down that smartphone, which literally costs you 20 IQ points. This is not a joke: when we say these companies feed on our brain time, it is literal. We literally lose the equivalent of 20 IQ points simply by having a phone nearby. The mere sound of a notification distracts a driver as much as not watching the road for about ten seconds. These devices make us stupid and they kill us! That is not a figure of speech.

Have you noticed how the dehumanization of work increasingly forces us to act like automatons, like algorithms? Fritz Lang’s Metropolis and Charlie Chaplin’s Modern Times denounced an industrialization that turned our bodies into tools in the service of the machine. A hundred years later, it is exactly the same with brains. We reshape them to serve algorithms. Algorithms which, for their part, pretend to pass as humans. We are merging man and machine in a way that is not pretty to watch.

What makes us human is our diversity, our difference from one individual to another, but also from one moment to the next. What kind of jerk seriously thinks that, because you once sent an email to a company, five years later you want to be spammed every day with their newsletter? I am not making this up; it happened to me recently. Humans evolve, and human culture must be diverse. Like food. Who thinks that eating at McDonald’s every day to the point of throwing up is a good idea? So why do we accept doing it to our brains?

For me, the archetype of the industrialization and standardization of culture is the superhero. Culture is reduced to a fight between Marvel and DC exegetes. That is not innocuous. Have you ever thought about what a superhero represents? He is literally a billionaire with innate superpowers. He is superior to the people. He is also their only hope. He is sometimes unjustly misunderstood, because he is good, even when he wipes out an entire city and its inhabitants. That is just collateral damage. The people’s only right is to shut up. It is literally the image billionaires have of themselves. By comparison, in the 1990s the fashion was disaster movies. The Earth was in danger, and normal humans (the normality was emphasized: their marriage was failing, they were white, or else Will Smith) joined forces to perform heroic deeds and save the Earth from an enemy that stood for pollution. The heroes of Jurassic Park? Normal kids and somewhat overwhelmed scientists. Today, the normal human’s only right is to shut up and wait for a billionaire to come and protect him. Without a billionaire, the normal human is forced to fight the other normals, because over the last 20 years the billionaires have taught us that collaboration is dead. They have taught us to see every human as an enemy, a potential competitor, and to try to grab what we can before the final destruction. That is what we call survivalism.

This worldview we owe to the monopolization of culture. To monoculture. But there is worse! Independent culture has become illegal, immoral. People apologize for pirating, for sharing. All because of one of the biggest intellectual scams: intellectual property. A fairly recent catch-all concept into which we toss patents, trade secrets, copyrights, trademarks…

The intellect is a non-rival good. If I share an idea, that makes two ideas. Or 300. The more we share it, the more culture grows. To prevent sharing is to kill culture. The paperclip factories have even managed to convince some artists that their fans were their enemies! That preventing the spread of culture was a good thing. That their dying in poverty was due not to the monopolies but to fans sharing their works. Spotify pays artists a tenth of a cent per stream, yet pirates are supposedly responsible for the impoverishment of artists. To earn the equivalent of one CD sale, you would have to listen to every song on the album a thousand times on Spotify!

The free-culture movement tried to respond with licenses. GPL, Creative Commons. But we are too nice. Fuck licenses! Share culture! Spread it! If you do it wholeheartedly, share it between human beings. Boycott Amazon and try to discover local, independent artists around you. Share them. Spread them. Write reviews, film parodies. Do you know JCFrog and his videos? Well, that is exactly what human culture is. It is magnificent. It is brilliant.

Stop saying “I just want to empty my head with some dumb series.” We don’t empty our heads. We fill them. With industrial junk or with local artisanal organic, your choice. Make references. The other day on Mastodon I saw someone describe their ride on the Paris metro: “I feel like I’m in Printeurs!” That is the finest compliment you can pay an author. Thank you to that person!

In Printeurs, everything is advertising. That is no accident. Have you noticed how everything looks like an ad nowadays? How every film, every music video adopts its codes? How every YouTube video has only one goal left: getting you to subscribe. Making paperclips.

Organic, free culture is not second-rate culture. It is simply not standard. And that is its whole point.

To exist, free culture needs free platforms. Proprietary platforms were designed by marketing, for marketing. To sell cigarettes and alcohol to 10-year-olds (that is the definition of marketing; it is simply their job to pretend they do something else. As Bill Hicks said, if you work in marketing, “please kill yourself”). Once we smoke, marketing tries to tell us it is our freedom, and to make us forget that we pollute, so that we lose even more freedom and pollute even more. As the alcoholic drinks to forget he is an alcoholic, we consume to forget that we consume. The mere fact of being on a marketing platform therefore forces us to do marketing. Personal branding. Engagement. KPIs. Promoting free culture on Facebook is like driving an SUV to a climate march. Yes, but I have an electric bike in the trunk, I’m green! Yes, but Facebook and Insta are where everyone is! No, they are where some people are. On Facebook you will certainly only find people who are… on Facebook. Billions of people are not there, for very diverse reasons. The simplest and most convincing way to fight these platforms is quite simply not to be on them.

Free platforms exist. Like a simple blog. But they need things to say, stories. The word “free” all by itself tells a story. A story that can be frightening, uncomfortable. So we tried to depoliticize free software, to call it “open source”, to strip it of its history. The result is in your pocket. An Android phone runs on an open-source Linux. Yet it is the worst instrument for depriving you of freedom. It spies on you, floods you with ads, denies you any control. RMS was right: by renaming free software “open source”, we gave up on freedom.

The lesson is that technology cannot be neutral. It is political par excellence. To refrain from telling stories so as not to be political is to leave the floor to the other stories, to advertising. It is to pretend, as Thatcher and Reagan put it, that there is no alternative. I used to say all this while keeping my own Facebook account. It seemed indispensable to me. I struggled to delete it, to deprive myself of what I believed was an essential tool. The second the account was deleted, the veil lifted. It became obvious to me that the opposite was true. That to exist as a creator, deleting my account was indispensable.

I kept saying I didn’t use it, but simply knowing that several thousand followers were attached to my name gave me an illusion of success. My posts may have had no impact (or very rarely), but I wrote them for Facebook or for Twitter. One day I caught myself in the shower thinking in tweets. I dried off and deleted my Twitter account, frightened. I was just producing paperclips while encouraging you to do the same. My mere presence on a network allowed others to justify theirs. Their presence justifying mine… I was deep in the writings of Jaron Lanier and Cal Newport when I realized that neither of them had any presence on a proprietary social network. I read them, I admire their thinking. They exist. They are not on social networks. That was a great inspiration to me…

We have to break the “no choice”, the TINA (“There Is No Alternative”). There are 8 billion alternatives. We create them every day, together. Our role is not to convince the whole world to switch to something else, but to create multitudes of cocoons of human culture, to be ready to welcome those who are sick of their daily McDonald’s, those who, at their own pace, tire of being exploited and subjected to advertising algorithms. Just look at what is happening between Twitter and Mastodon.

These free platforms, this free culture: only we can prepare them, develop them, bring them to life, share them.

To those who say the priority is the fight against climate change, I reply that the priority is creating the platforms, technical and intellectual, that make the fight against climate change possible. You cannot be an environmentalist in a world financed by advertising. We must think up alternatives, invent them. Create stories to save the planet. A new form of culture. A permaculture!

My own tool is my typewriter. It frees me. I call it my “thinking machine”. It is up to you to invent your own tools. (yes, even Emacs…) Tools you will need to invent and share your new culture, that blend of code and stories to tell which can save humanity before we are all turned into paperclips!

Thank you!

And don’t forget to subscribe to my channel.

By the way, let me take advantage of this talk against advertising to advertise my new book. Is it free culture? It is already freely available. And not only that! My publisher has announced that the entire Ludomire collection (in which my books are published) will move to a CC BY-SA license in 2023.

Photo: David Revoy, Ploum, Pouhiou and Gee signing books at the Capitole du Libre in Toulouse on November 19, 2022.

An engineer and writer, I explore the impact of technology on humans. Subscribe to my French writings by email or RSS. For my English writings, subscribe to the English newsletter or the full RSS feed. Your address is never shared, and it is deleted when you unsubscribe.

To support me, buy my books (if possible from your local bookseller)! I have just published a collection of short stories that should make you laugh and think.

January 17, 2023

As you know, before FOSDEM, the MySQL Community Team organizes a two-day conference in Brussels.

During these preFOSDEM MySQL Days, attendees will have the opportunity to learn more about MySQL from Oracle’s MySQL teams and selected professionals from the MySQL community.

Here is the agenda:

Thursday 2nd February

Registration starts at 09:00 AM.

09:30 AM – 10:00 AM: Welcome to preFOSDEM MySQL Days – lefred (Oracle)
10:00 AM – 11:00 AM: What’s new in MySQL until 8.0.32 – Kenny Gryp (Oracle)
11:00 AM – 11:30 AM: Coffee Break
11:30 AM – 12:00 PM: Ideal Techniques for making Major MySQL Upgrades easier! – Arunjith Aravindan (Percona)
12:10 PM – 12:40 PM: Introduction to MySQL Shell for VS Code – Kenny Gryp (Oracle)
12:40 PM – 01:40 PM: Lunch Break
01:40 PM – 02:10 PM: All about MySQL Table Cache – Dmitry Lenev (Percona)
02:20 PM – 02:50 PM: MySQL 8.0 – Dynamic InnoDB Redo Log – lefred (Oracle)
03:00 PM – 03:30 PM: MySQL HeatWave for OLTP – Overview and what’s new – Luis Soares (Oracle)
03:30 PM – 04:00 PM: Coffee Break
04:00 PM – 04:30 PM: MySQL HeatWave for OLAP – What is possible today (and what is not) – Carsten Thalheimer (Oracle)
04:40 PM – 05:10 PM: How Vitess Achieves consensus with replicated MySQL – Deepthi Sigireddi (Vitess)
05:10 PM – 05:30 PM: End of the day

Friday 3rd February

Registration starts at 09:00 AM.

09:30 AM – 10:20 AM: MySQL High Availability and Disaster Recovery – Miguel Araújo & Kenny Gryp (Oracle)
10:30 AM – 11:00 AM: Migrate from MySQL Galera Base Solution to MySQL Group Replication – Marco Tusa (Percona)
11:00 AM – 11:30 AM: Coffee Break
11:30 AM – 12:00 PM: What’s new in ProxySQL 2.5 – René Cannaò (ProxySQL)
12:10 PM – 12:40 PM: MySQL HeatWave Lakehouse Demo – Carsten Thalheimer (Oracle)
12:40 PM – 01:40 PM: Lunch Break
01:40 PM – 02:10 PM: What we can know with Performance Schema in MySQL 8.0 – Vinicius Grippa (Percona)
02:20 PM – 03:20 PM: Migration to MySQL Group Replication at …: thoughts & experiences – Simon J
03:30 PM – 04:00 PM: Coffee Break
04:00 PM – 04:30 PM: 11 Reasons to Migrate to MySQL 8 – Peter Zaitsev (Percona)
04:40 PM – 05:10 PM: Observability and operability of replication in MySQL 8.0 – Luis Soares (Oracle)
05:30 PM – 06:00 PM: End of the day++ (awards??)
06:00 PM – 07:00 PM: Free Time and Q&A
07:00 PM – late: MySQL Community Dinner – Party-Squad

Please register for this free, in-person conference; only people with a ticket will be allowed to attend.

After the conference, the Belgian Party-Squad will host the traditional MySQL Community Dinner. Don’t hesitate to register for this amazing event, a unique opportunity to chat with MySQL developers and MySQL professionals over Belgian food and beers.

The Community Dinner must also be ordered separately here:

And finally, on Sunday afternoon, don’t miss the FOSDEM MySQL & Friends Devroom!

See you soon in Belgium!

January 15, 2023

With FOSDEM just around the corner, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers makes FOSDEM happen and makes it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, with the buildup (starting Friday at noon), heralding during the conference, and cleanup (on Sunday evening). No need to worry about missing lunch: food will be provided. Would you like to be part of the team that makes FOSDEM tick? Sign up here! You could…

Over the past few years I have developed quite a few Python projects. Every time you start a new Python project, you have to set up a whole machinery if you want to do it robustly: configure tests and linting, write documentation, handle packaging, and glue it all together with pre-commit hooks and continuous integration.

Until recently I did all of that manually each time, but a few months ago I went looking for a more scalable approach. There are various project generators and project templates for Python. After an extensive comparison I settled on PyScaffold. It lets you easily generate a template Git repository for a Python project, one that already applies all sorts of best practices around documentation, tests, packaging, dependencies, code style and so on.

I have since created two Python projects with PyScaffold and am working on a third. I described the lessons I learned, along with an overview of everything PyScaffold can set up for you, in a background article for Tweakers: Gelikte code met PyScaffold: Python-projecten ontwikkelen als een pro.

Other developers may well arrive at a different choice. For instance, PyScaffold does not use Poetry, a modern tool for managing dependencies and publishing Python packages. But I was not using Poetry myself yet, and PyScaffold turned out to match my style of Python development quite well. Perhaps next year I would choose differently.

In the end it does not matter that much which one you use. Every project generator or project template makes its own choices for the underlying tools, but they all achieve the same thing: time saved when setting up a new project, and a project structure that applies all kinds of best practices and gives you a solid foundation for developing robust software. In any case, I will not be starting a new Python project without a project generator or project template any time soon.

January 12, 2023


Charlie Mackesy - The boy, the mole, the fox and the horse (2019)

Insofar as you can read this book, I read this book. It allows you to read one page and then ponder it for a week. It's a story, but also not quite a story; it's more a psychological insight into humans. I will probably open this book again several times this year to discover even more about the meaning of a single page.

Brian W. Kernighan - Understanding the digital world (2021)

Kernighan is famous for co-authoring "The C Programming Language" with Dennis Ritchie in 1978, and I have always looked up to him.

In this book he gives an excellent overview of computers and networks, including an easy-to-read introduction to programming, cryptography, and (digital) privacy. I would not advise this book for IT nerds, since it is way too simple. It is, though, as good an overview as is possible in 260 pages.

David Kushner - Masters of Doom (2003)

Well, this was an excellent read! Enjoyable, intriguing, and educational, though probably only for fans of Doom or of John Carmack.

The book tells the story of the two Johns who created the DooM game in the early nineties. David Kushner interviewed a lot of people to get a complete picture of their youth, their first meeting, and the development of Commander Keen, Wolfenstein 3D, and of course DooM, as well as many games after that, like Quake and Heretic.

This book is a nice combo with Sid Meier's Memoir.

Sid Meier's Memoir (2020)

I wanted to link to my tiny review of this book, which I read in 2021, but it turns out I have not written anything about it yet.

The story of the creation of the Civilization series of games is a really good read, though probably only if you lived through that era and played some of the early Civilization games, or earlier 'versions' like Empire or Empire Deluxe, which are mentioned in the book as inspiration for the first Civilization game.

I like the 4X turn-based system of gaming; too bad there are almost no other good games using it (chess comes close, though).

January 10, 2023

At the beginning of each year, I publish a retrospective that looks back on the prior year at Acquia. I write these both to reflect on the year, and to keep a record of things that happen at the company. I've been doing this for the past 14 years.

If you'd like to read my previous retrospectives, you can find them here: 2021, 2020, 2019, 2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009.

Reflecting on my retrospectives

As Acquia has grown, it has become increasingly challenging for me to share updates on the company's progress in a way that feels authentic and meaningful.

First, I sometimes feel there are fewer exciting milestones to report on. When a company is small and in startup mode, each year tends to bring significant changes and important milestones. As a company transitions to steady growth and stability, the rate of change tends to slow down, or at least, feel less newsworthy. I believe this to be a natural evolution in the lifecycle of a company and not a negative development.

Second, while it was once exciting to share revenue and headcount milestones, it now feels somewhat self-indulgent and potentially off-putting. In light of the current global challenges, I want to be more mindful of the difficulties that others may be experiencing.

I've also been surprised and humbled by the thorough reading of my retrospectives by a wide range of audiences, including customers, partners, investors, bankers, competitors, former Acquians, and industry analysts.

Because of that I have come to realize that my retrospectives have gradually transformed from a celebration of rapid business milestones to something more akin to a shareholder letter — a way for a company to provide insight into its operations, progress, and future plans. As a business student, I have always enjoyed reading the annual shareholder letters of great companies. Some of my favorite shareholder letters offer a glimpse into the leadership's thinking and priorities.

As I continue to share my thoughts and progress through these annual retrospectives, I aim to strike a balance between transparency, enthusiasm, and sensitivity to the current environment. It is my hope that my updates provide valuable insights and serve as historical markers on the state of Acquia each year. Thank you for taking the time to read them.

Acquia business momentum

When writing my 2022 update, it's impossible to ignore the macroeconomic situation.

The year 2022 was marked by inflation, tighter monetary conditions, and conflict in Europe. This caused asset price bubbles to deflate. While this led to lower valuations and earnings multiples for technology companies, corporate earnings generally held up relatively well in comparison.

Looking ahead to 2023, Jerome Powell and the Federal Reserve are expected to keep their foot on the brake. They will keep interest rates high to tame inflation. Higher interest rates translate to slower growth and margin compression.

In anticipation of lower growth and margin compression, many companies reduced their forecasts for 2023 and adjusted their spending. This has resulted in layoffs across the technology sector.

I have great empathy for those who have lost their jobs or businesses as a result. It is a difficult and unsettling time, and my thoughts are with those affected.

I share these basic macroeconomic trends because they played out at Acquia as well. Like many companies, we adjusted our spending throughout the year. This allowed us to exit 2022 with the highest gross margins and EBITDA margins in our history. If the economy slows down in 2023, Acquia is well-positioned to weather the storm.

That said, we did continue our 16-year, unbroken revenue growth streak in 2022. We saw a healthy increase in our annual recurring revenue (ARR). We signed up many new customers, but also renewed existing customers at record-high rates.

I attribute Acquia's growth to the fact that demand for our products hasn't slowed. Digital is an essential part of most organizations' business strategies. Acquia's products are a "must have" investment even in a tough economy, rather than a "nice to have".

In short, we ended the year in our strongest financial position yet – both at the top line and the bottom line. It feels good to be a growing, profitable company in the current economic climate. I'm grateful that Acquia is thriving.

Can you clarify exactly what Acquia does?

A screenshot of a slide with Acquia's vision and mission statements. Acquia's vision statement reads: "To make it possible for dreamers and doers to craft the digital world." Acquia's mission statement says: "To deliver the universal platform for the world's greatest digital experiences." Acquia's vision and mission have been unchanged for the last 8+ years.

Frequently, I am asked: "Can you clarify exactly what Acquia does?" Often, people only have a general understanding of our business. They vaguely know it has something to do with Drupal.

In this retrospective, I thought it would be helpful to illustrate what we do through examples of our work with customers in 2022.

Our core business is assisting organizations in building, designing, and maintaining their online presence. We specialize in serving customers with high-traffic websites, many websites, or demanding security and compliance requirements. Our platform scales to hundreds of millions of page views and has the best security of any Drupal hosting platform in the world (e.g. ISO-27001, PCI-DSS, SOC1, SOC2, IRAP, CSA STAR, FedRAMP, etc).

For example, we worked closely with Bayer throughout 2022. Bayer Consumer Health had 400+ sites on a proprietary platform coming to the end of its life, and needed to replatform those sites in a very tight timeframe. Bayer rebuilt these sites in Drupal and migrated its operations to Acquia Cloud in the span of 1.5 years. At the peak of the project, Bayer migrated 100 sites a month. Bayer reduced time to market by 40%, with new sites taking two to three weeks to launch. Organic traffic increased from 4% to 57%. Visit durations have increased 13% and the bounce rate has dropped 19%. As a result of the replatform, Bayer has realized a $15 million efficiency in IT and third-party costs over three years.

For many organizations, the true challenge in 2023 is not redesigning or replatforming their websites (using the latest JavaScript framework), or even maintaining their websites. The more impactful business challenge is to create better, more personalized customer experiences. Organizations want to deliver highly relevant customer experiences not only on their websites, but across other channels such as mobile, email, and social.

The issue that organizations face is that they don't have a deep enough understanding of their customers to make experiences highly relevant. While they may collect a large amount of data, they struggle to use it. Customer data is often scattered across various systems: websites, commerce platforms, email marketing tools, in-store point of sale systems, call center software, and more. These systems are often siloed and do not allow for a comprehensive understanding of the customer.

This is a second area of focus for Acquia, and one that has been growing in importance. Acquia helps these organizations integrate all of these systems and create a unified profile for each customer. By breaking down the data silos and establishing a single, reliable source of information about each customer, we can better understand a customer's unique needs and preferences, leading to significantly improved customer experiences.

In 2022, outdoor retailer Sun & Ski Sports used Acquia's Customer Data Platform (CDP) to drive personalized customer experiences across a variety of digital channels. Acquia CDP unified their customer data into a single source of truth, and helped Sun & Ski Sports recognize subtle but impactful trends in their customer data that they couldn't identify before. The result is a 1,500% increase in response rate for direct mail efforts, a 1,100% improvement in incremental net profit on direct mail pieces, a 200% increase in paid social clickthrough rates, and a 25% reduction in cost per click for paid social efforts.

A third area of focus for Acquia is helping our customers manage their digital content more broadly, beyond just their website content. For example, Autodesk undertook a global rebrand. Their goal was to go from a house of brands to a branded house. This meant Autodesk had to update many digital assets such as images and videos across websites, newsletters, presentations, and even printed materials. Using Acquia DAM, Autodesk redesigned the process of packaging, uploading, and releasing their digital assets to their 14,000 global employees, partners, and the public.

A diagram that shows how content, data, experience orchestration, and experience delivery needs to be unified across different systems such as content management systems, email marketing platforms, commerce solutions and more. Experiences need to be personalized and orchestrated across all digital channels. To deliver the best digital experiences, user profile data needs to be unified across all systems. To make experience delivery easy, all content and digital assets need to be reusable across channels. Last but not least, digital experiences need to be delivered fast, securely, and reliably across all digital touchpoints.

When people ask me what Acquia does, they usually know we do something with Drupal. But they often don't realize how much more we do. Initially known exclusively for our Drupal-in-the-cloud offering, we have since expanded to provide a comprehensive Open Digital Experience Platform. This shift has allowed us to offer a more comprehensive solution to help our customers be digital winners.

Customer validation

At our customer and partner conference, Acquia Engage, we shared the stage with some of our customers, including ABInBev, Academy Mortgage, the American Medical Association, Autodesk, Bayer, Burn Boot Camp, Japan Airlines, Johnson Outdoors, MARS, McCormick, Novavax, Omron, Pearl Musical Instruments, Phillips 66, Sony Pictures, Stanley Black & Decker, and Sun & Ski Sports. Listening to their success stories was a highlight of the year for me.

Acquia customer logos, including Cigna, Novartis, Lowe's, Barnes & Noble, Best Buy, Panasonic and more. Some of Acquia's customers in 2022: Cigna, Novartis, Lowe's, Barnes & Noble, Best Buy, Panasonic and more.

I'm also particularly proud of the fact that in 2022, our customers recognized us across two of the most popular B2B software review sites, TrustRadius and G2. Acquia earned Top Rated designation on TrustRadius four times over, and was named a Leader by G2 users in 20 areas.

G2 and TrustRadius badges recognize Acquia products based on functionality and market presence. Customers and partners rated our products very highly on G2 and TrustRadius.
The results prove that customers want what we're building, which is a great feeling. I'm proud of what our team has accomplished.

Analyst validation

We also received validation on our product strategy from some of the industry's top analyst firms, including "Leader" placements in The Forrester Wave: Digital Asset Management For Customer Experience, Q1 2022 Report, as well as in the 2022 Gartner Magic Quadrant for Digital Experience Platforms.

In addition, Acquia advanced to become a "Leader" in the Omdia Digital Experience Management Universe Report, was named an "Outperformer" in the GigaOm Radar Report for DXPs, and was included on Constellation Shortlists across seven categories, including DXP, CMS, CDP, and DAM.

Analyst reports are important to our customers and prospects because these firms take into account customer feedback, a deep understanding of the market, and each vendor's product strategy and roadmap.

Executing on our product vision

In October, I published a Composable Digital Experience Manifesto, a comprehensive 3,500-word update on Acquia's product strategy.

In this manifesto, I outline the growing demand for agility, flexibility, and speed in the enterprise sector and propose a solution in the form of Composable Digital Experience Platforms (DXPs).

Composable DXPs allow organizations to quickly assemble solutions from pre-existing building blocks, often using low-code or no-code interfaces. This enables organizations to adapt to changing business needs.

In my manifesto, I outline six principles that are essential for a DXP to be considered "composable", and I explain how Acquia is actively investing in research and development in alignment with these principles.

A summary of Acquia's current R&D investments in relation to these principles includes:

  • Component discovery and management tools. This includes tools to bundle components in pre-packaged business capabilities, CI/CD pipelines for continuous integration of components, and more.
  • Low-code / no-code tooling to speed up experience building. No-code tools also enable all business stakeholders to participate in experience creation and delivery.
  • Headless and hybrid content management so content can be delivered across many channels.
  • A unified data layer on which content personalization and journey orchestration tools operate.
  • Machine learning-based automation tools to tailor and orchestrate experiences to people's preferences.
  • Services that streamline the management and sharing of content across an organization.
  • Services to manage a global portfolio of diverse sites and deliver experiences at scale.

For a more detailed explanation of these concepts, I encourage you to read my manifesto at

While there are six principles, there are four fundamental blocks to creating and delivering exceptional digital customer experiences: (1) managing content and digital assets, (2) managing user data, (3) personalized experience orchestration using machine learning, and (4) multi-experience composition and delivery across digital touchpoints. In the next sections, I'll talk a bit more about each of these four.

A diagram that shows how user data and content can be used to deliver the best next experience.

Managing content and digital assets

Given the central role of content in any digital experience, content management is a core capability of any DXP. Our content management capabilities are centered around Drupal and Acquia DAM.

Drupal has long been known for its ability to effectively manage and deliver content across various channels and customer touchpoints. In 2022, Drupal made significant strides with the release of Drupal 10, a major milestone that has taken 22 years to achieve.

The Drupal 10 release was covered by numerous articles on the web so I won't repeat the main Drupal 10 advancements or features in my retrospective. I do want to highlight Acquia's contributions to its success. Acquia was founded with the goal of supporting Drupal and, 16 years later, we continue to fulfill that mission as the largest corporate contributor to Drupal 10. I am proud of the work we have done to support the Drupal community and the release of Drupal 10.

I also wanted to provide an update on Acquia DAM. In 2021, Acquia acquired Widen, a digital asset management platform (DAM), and rebranded it as Acquia DAM. In 2022, we integrated it into both Drupal and Acquia's products.

Providing our customers with strong content management capabilities is essential to help them create great digital experiences. The integration of Widen marks an important step towards achieving our vision and implementing our strategy.

The combination of Drupal and Acquia DAM allows us to assist our customers in managing a wide range of digital content, including website content, images, videos, PDFs, and product information.

Managing user data

We have been investing in experience personalization since 2013, and over the past eight years, the market has continued to move toward data-driven, personalized experiences. IDC predicts that the Customer Data Platform (CDP) market will grow about 20% per year, reaching $3.2 billion by 2025.

This growth is driven by significant market changes, such as the end of browser support for third-party cookies and the necessity for marketers to create personal customer experiences that comply with privacy regulations. The use of first-party data has become increasingly crucial. Acquia CDP is helping our customers with exactly this, and it's the perfect tool for this moment in time.

In order to deliver great customer experiences, it is essential that our customers have a single, accurate source of truth for customer data and complete user profiles. To achieve this, we made the decision to replatform our marketing products on Snowflake throughout 2022, a major architectural undertaking.

With Snowflake serving as our shared data layer, our customers now have easy and direct access to all of their customer data. This allows them to leverage that data across multiple applications within the Acquia product portfolio and connect to other relevant applications as part of their complete digital experience platform (DXP) solutions. I believe that our shared data layer sets us apart in the market and gives us a competitive advantage.

Personalized experience orchestration using machine learning

In 2022, we also focused on establishing integrations between Acquia CDP and other systems, and we added to the extensive set of machine learning models available for use with Acquia CDP. These improvements provide non-technical marketers, or "citizen data scientists", with more options for understanding and leveraging their customer data.

Our machine learning models allow our customers to use predictive marketing to improve and automate their marketing and customer engagement efforts. In 2022, we delivered over 1 trillion machine learning predictions. These predictions help our customers identify the best time and channel for engagement, as well as the most effective content to use, and more.

One of the primary reasons that companies purchase CDPs is to respect their customers' privacy preferences. A major benefit of a CDP is that it helps customers comply with regulations such as GDPR and CCPA. In June, we enhanced Acquia CDP to make it even easier for our customers to comply with subject data requests and privacy laws.

Multi-experience composition and delivery across digital touchpoints

According to a Gartner report from Q4 2022, global end-user spending on public cloud services is expected to increase by 20.7% in 2023, reaching a total of $591.8 billion.

The growth of Open Source software has paralleled the adoption of cloud technology. In 2022, developers initiated 52 million new Open Source projects on GitHub and made over 413 million contributions to existing Open Source projects. 90% of companies report using Open Source code in some manner.

Drupal is one of the most widely used Open Source projects globally and Acquia operates a large cloud platform. Acquia has been a pioneer in promoting both Open Source software and cloud technology. The combination of Open Source and cloud has proven to be particularly powerful in enabling innovation and digital transformation.

At Acquia, we strongly believe that experience creation and delivery will continue to move to Open Source and the cloud for many more years. The three largest public cloud vendors — Amazon, Microsoft, and Google — all reported annual growth between 25% and 40% in Q3 of 2022 (Q4 results are not public yet).

The past few years, our cloud platform has undergone a major technical upgrade called Acquia Cloud Next, modernizing Acquia Cloud using a cloud-native, Kubernetes-based architecture. In 2022, we made significant progress on Acquia Cloud Next, with many customers transitioning to the platform. They have experienced exceptional levels of performance, self-healing, and dynamic scaling for their large websites thanks to Acquia Cloud Next.

In 2022, we also introduced a new product called Acquia Code Studio. In partnership with GitLab, we offer a fully managed continuous integration/continuous delivery (CI/CD) pipeline optimized for Drupal. I have been using Acquia Code Studio myself and have documented my experience in deploying Drupal sites with Acquia Code Studio. In my more than 20 years of working with Drupal, I believe that Acquia Code Studio offers the best Drupal development and deployment workflow I have encountered.


A key part of delivering on our vision is acquisitions. One disappointment for me was that we were unable to complete any acquisitions in 2022, despite our eagerness to do so. We looked at various potential acquisition targets, but ultimately didn't move forward with any. We remain open to and enthusiastic about acquiring other companies to become part of the Acquia family. This will continue to be a key focus for me in 2023.

The number one trend to watch: AI tools

The ever-growing amount of content on the internet shows no signs of slowing down. With the advent of "AI creation tools" like ChatGPT (for text creation) and DALL·E 2 (for image creation), the volume of content will only increase at an accelerated rate.

It is clear that generative AI will be increasingly adopted by marketers, developers, and other creative professionals. It will transform the way we work, conduct research, and create content and code. The impact of this technology on various industries and fields is likely to be significant.

Initially, AI tools like ChatGPT and DALL·E may present opportunities for Drupal and Acquia DAM. As the volume of content continues to increase, the ability to effectively manage and leverage that content becomes even more important. Drupal and Acquia DAM specialize in the management of content after it has been created, rather than in the creation process itself. As such, these tools may complement our offerings and help our customers better handle the growing volume of content. Those in the business of creating content, rather than managing content, are likely to face some disruption in the years ahead.

In the future, ChatGPT and similar AI tools may pose a threat to traditional websites as they are able to summarize information from various websites, rather than directing users to those websites. This shift could alter the relative importance of search engines, websites, SEO optimization, and more.

At Acquia, we will need to consider how these AI tools fit into our product strategy and be mindful of the ethical implications of their use. I will be closely monitoring this trend and plan to write more about it in the future.

Give back more

In the spirit of our "Give Back More" values, the Acquia team sponsored 200 kids in the annual Wonderfund Gift Drive, contributing gifts both in person and online. We also donated to The Pedigree Foundation and UNICEF. Acquians also participated in collective volunteerism such as the Urmi Project (India), Camp Mohawk (UK), and at Cradles to Crayons (Boston).

Acquia also launched its own Environmental, Social, and Governance (ESG) stewardship as a company and joined the Vista Climate Pledge. This is important, not just for me personally, but also for many of our customers, as it aligns with their values and expectations for socially responsible companies.


In some respects, 2022 was more "normal" than the previous few years, as I had the opportunity to reconnect with colleagues, Open Source contributors, and customers at various in-person events and meetings.

However, in other ways, 2022 was difficult due to the state of the economy and global conflict.

While 2022 was not without its challenges, Acquia had a very successful year. I'm grateful to be in the position that we are in.

Of course, none of our results would be possible without the hard work of the Acquia team, our customers, our partners, the Drupal and Mautic communities, and more. Thank you!

I am not sure what 2023 will bring but I wish you success and happiness in the new year.

January 02, 2023

Due to its decentralized nature, Mastodon can put a significant extra load on your site if someone posts a link to it. Every Mastodon instance where the post is seen (which can be 1, but also 100 or 1000 or …) will request not only the page but subsequently also the oEmbed JSON object, in order to show a preview in Mastodon. The page requests should not be an issue, as you surely have page caching...


December 26, 2022

For me, 2022 was the year of Bluetooth. [1] With all the talk about Matter, the one protocol to connect all home automation devices, this can sound strange. However, it will take years for Matter to become adopted, and in the meantime Bluetooth devices are everywhere.

I wrote a book about Bluetooth

Elektor International Media published my book Develop your own Bluetooth Low Energy Applications for Raspberry Pi, ESP32 and nRF52 with Python, Arduino and Zephyr this year. Why did I decide to write a book about Bluetooth? It comes down to a unique combination of accessibility, ubiquity and easy basics of the technology.

Bluetooth Low Energy (BLE) is one of the most accessible wireless communication standards. There's no cost to access the official BLE specifications. Moreover, BLE chips are cheap, and the available development boards (based on an nRF5 or ESP32) and Raspberry Pis are quite affordable. [2] This means you can just start with BLE programming at minimal cost.

On the software side, BLE is similarly accessible. Many development platforms, most of them open source, offer an API (application programming interface) to assist you in developing your own BLE applications. The real-time operating system Zephyr is a powerful platform for developing BLE applications for Nordic Semiconductor nRF5 or equivalent SoCs, with Python and Bleak it's easy to decode BLE advertisements, and NimBLE-Arduino makes it possible to create powerful firmware such as OpenMQTTGateway for the ESP32.
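To give a concrete idea of what "decoding BLE advertisements" involves, here is a simplified, standard-library-only sketch that decodes the first few fields of the RuuviTag RAWv2 (data format 5) manufacturer-data payload. This is not the Theengs Decoder implementation; it only handles temperature, humidity, and pressure, using the scale factors from Ruuvi's public format specification:

```python
import struct

def decode_ruuvi_rawv2(payload: bytes) -> dict:
    """Decode the first fields of a RuuviTag RAWv2 (data format 5) payload.

    `payload` is the manufacturer-specific data that follows the Ruuvi
    company ID (0x0499) in the BLE advertisement.
    """
    if payload[0] != 5:
        raise ValueError("not a RAWv2 (data format 5) payload")
    # temperature: signed 16-bit, 0.005 degrees Celsius per bit
    # humidity:    unsigned 16-bit, 0.0025 % per bit
    # pressure:    unsigned 16-bit, in Pa with a -50000 Pa offset
    temp_raw, hum_raw, pres_raw = struct.unpack_from(">hHH", payload, 1)
    return {
        "temperature_c": temp_raw * 0.005,
        "humidity_pct": hum_raw * 0.0025,
        "pressure_hpa": (pres_raw + 50000) / 100,
    }

# Example: a synthetic payload for 24.3 degrees C, 50 % humidity, 1000 hPa
example = bytes([5]) + struct.pack(">hHH", 4860, 20000, 50000)
print(decode_ruuvi_rawv2(example))
```

A full decoder would also handle the remaining fields (acceleration, battery voltage, TX power, movement counter, sequence number, and MAC address), which is exactly the kind of bookkeeping a shared decoder library saves you from.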

Another important factor is that BLE radio chips are ubiquitous. You can find them in smartphones, tablets, and laptops. This means that all those devices can talk to your BLE sensors or lightbulbs. Most manufacturers create mobile apps to control their BLE devices, which you can reverse engineer (as I explain in one of the chapters of my book).

You can also find BLE radios in many single-board computers, such as the Raspberry Pi, and in popular microcontroller platforms such as the ESP32. This makes it quite easy for you to create your own gateways for BLE devices. And platforms such as the Nordic Semiconductor nRF5 series of microcontrollers with BLE radio even make it possible to create your own battery-powered BLE devices.

Last but not least, while Bluetooth Low Energy is a complex technology with a comprehensive specification, getting started with the basics is relatively easy. I hope my book contributes to this by explaining the necessary groundwork and showing the right examples to create your own BLE applications.


I contributed to the Theengs project

I have been using OpenMQTTGateway at home for some time, which is a gateway for various wireless protocols that you can install on an ESP32 or other devices. This year OpenMQTTGateway spun out their BLE decoder to a separate project, Theengs Decoder. This is an efficient, portable and lightweight C++ library for BLE payload decoding.

I contributed to the Theengs Decoder project with some decoders for the following devices:

The Theengs project also created a gateway that you can run on a Linux machine such as a Raspberry Pi, Theengs Gateway. This leverages the same Theengs Decoder as OpenMQTTGateway. I quickly adopted this solution as an alternative to bt-mqtt-gateway (which I contributed to with RuuviTag support earlier), and I started contributing to the project. Amongst others, I:

I also started Theengs Explorer under the Theengs umbrella. This is a text user interface to discover BLE devices and show their raw advertisement data alongside the data as decoded by Theengs Decoder. This project is still in early development: I wrote it against a pre-0.2 release of Textual and still have to rewrite it.


I wrote some Python packages for Bluetooth

Outside the Theengs project, I created two Python packages related to Bluetooth this year. After I struggled with updating the Theengs Explorer code base to the new Textual 0.2 release, I decided to start from scratch with a 'simple' Bluetooth Low Energy scanner. This became HumBLE Explorer, which is a cross-platform, command-line and human-friendly Bluetooth Low Energy scanner, looking like this:


Textual is quite neat. It lets you create interactive applications for the terminal, with widgets such as checkboxes and input fields, a CSS-like layout language, and even mouse support. Moreover, it runs on Linux, Windows and macOS. Although I'm personally only using Linux at home, I find it important that my applications are cross-platform. That's also the reason why this application, as well as all my Bluetooth work with Python, is based on Bleak, which supports the same three operating systems.

Now that HumBLE Explorer is working, I'll revisit Theengs Explorer soon, and update it to the new Textual version with the knowledge that I gained.

A second Bluetooth project that I have been working on this year, even before HumBLE Explorer or Theengs Explorer, is bluetooth-numbers. It's a Python package with a wide set of numbers related to Bluetooth, so Python projects can easily use these numbers. The goal of this project is to provide a shared resource so various Python projects that deal with Bluetooth don't have to replicate this effort by rolling their own database and keeping it updated.

Luckily Nordic Semiconductor already maintains the Bluetooth Numbers Database for Company IDs, Service UUIDs, Characteristic UUIDs and Descriptor UUIDs. My bluetooth-numbers package started as a Python wrapper around this project, generating Python modules from these data. In the meantime, I extended the package with some SDO Service UUIDs and Member Service UUIDs I extracted from the Bluetooth Assigned Numbers document (but which I'll probably upstream to the Bluetooth Numbers Database), as well as the IEEE database of OUIs for prefixes of Bluetooth addresses.

You can now install the package from PyPI with pip:

pip install bluetooth-numbers

Then you can get the description of a company ID in your Python code:

>>> from bluetooth_numbers import company
>>> company[0x0499]
'Ruuvi Innovations Ltd.'

Get the description of a service UUID:

>>> from bluetooth_numbers import service
>>> from uuid import UUID
>>> service[0x180F]
'Battery Service'
>>> service[UUID("6E400001-B5A3-F393-E0A9-E50E24DCCA9E")]
'Nordic UART Service'

Get the description of a characteristic UUID:

>>> from bluetooth_numbers import characteristic
>>> from uuid import UUID
>>> characteristic[0x2A37]
'Heart Rate Measurement'
>>> characteristic[UUID("6E400002-B5A3-F393-E0A9-E50E24DCCA9E")]
'UART RX Characteristic'

Get the description of a descriptor UUID:

>>> from bluetooth_numbers import descriptor
>>> descriptor[0x2901]
'Characteristic User Descriptor'

Get the description of an OUI:

>>> from bluetooth_numbers import oui
>>> oui["58:2D:34"]
'Qingping Electronics (Suzhou) Co., Ltd'

I'm using bluetooth-numbers in HumBLE Explorer and Theengs Explorer to show human-readable descriptions of these numbers in the interface. I hope that other Python projects related to Bluetooth will adopt the package too, to prevent everyone from having to keep their own numbers database updated.

I'm keeping the package updated with its various sources, and it has a test suite with 100% code coverage. Contributions are welcome. The API documentation shows how to use it.

Bluetooth developments in Home Assistant and ESPHome

Although I'm personally advocating an MQTT-based approach to home automation, I'm a big fan of Home Assistant and ESPHome because they share my vision of home automation and make it rather easy to use. Both open-source home automation projects had some big improvements in their Bluetooth support this year.

When my book Getting Started with ESPHome: Develop your own custom home automation devices was published last year, BLE support in ESPHome was still quite limited. It only supported reading BLE advertisements, but not connecting to BLE devices. Support for using ESPHome as a BLE client was only added after the book was published.

The biggest BLE addition in 2022 was Bluetooth Proxy in ESPHome 2022.8. This allows you to use your ESP32 devices with ESPHome firmware as BLE extenders for Home Assistant. Each ESPHome Bluetooth proxy device forwards the BLE advertisements it receives to your Home Assistant installation. By strategically placing some devices in various places at home, you can expand the Bluetooth range of your devices this way. [3]

Starting from ESPHome 2022.9, the Bluetooth proxy also supports active connections: it lets Home Assistant connect to devices that are out of reach of your home automation gateway, as long as one of your Bluetooth proxy devices is in reach of the device you want to connect to.

This feature was joined by a brand new Bluetooth integration in Home Assistant 2022.8, with automatic discovery of new devices and the ability to push device updates. Home Assistant 2022.9 then added support for ESPHome's Bluetooth proxies, and Home Assistant 2022.10 extended this support to active connections.

Another interesting initiative coming from the Home Assistant project is BTHome, a new open standard for broadcasting sensor data over BLE. Raphael Baron's open-source soil moisture sensor b-parasite already adopted the BTHome standard, as did the custom firmware ATC MiThermometer for various Xiaomi temperature sensors. With the BTHome integration in Home Assistant 2022.9, devices advertising in this format will be automatically discovered in your Home Assistant installation.

BTHome's data format is documented in detail, with various data types supported. There's even support for encryption, using AES in CCM mode with a pre-shared key. The bthome-ble project implements a parser for BTHome payloads you can use in your own Python projects. I applaud the initiative to create an open standard for BLE sensor advertisements, and I hope that many open-source devices will adopt BTHome. I will definitely use the format if I create a BLE broadcaster instead of coming up with my own data format.
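As an illustration of the format, here is a minimal sketch of decoding an unencrypted BTHome (v1) payload in plain Python. The two object IDs and scaling factors below are my reading of the spec and should be treated as assumptions; the real bthome-ble parser covers far more object types:

```python
import struct

# Object IDs and scaling factors as I read the BTHome v1 format; treat this
# small table as an assumption, not an exhaustive copy of the specification.
OBJECTS = {
    0x02: ("temperature", "<h", 0.01),  # sint16, factor 0.01, in °C
    0x03: ("humidity", "<H", 0.01),     # uint16, factor 0.01, in %
}

def decode_bthome_v1(payload: bytes) -> dict:
    """Decode an unencrypted BTHome v1 service data payload."""
    result = {}
    i = 0
    while i < len(payload):
        control = payload[i]            # bits 0-4: length of object id + value
        length = control & 0x1F
        obj_id = payload[i + 1]
        value_bytes = payload[i + 2 : i + 1 + length]
        if obj_id in OBJECTS:
            name, fmt, factor = OBJECTS[obj_id]
            (raw,) = struct.unpack(fmt, value_bytes)
            result[name] = round(raw * factor, 2)
        i += 1 + length
    return result

# Example payload: temperature 25.06 °C and humidity 50.55 %
print(decode_bthome_v1(bytes.fromhex("2302ca090303bf13")))
```

The appeal of BTHome is exactly that the payload is this simple: a flat sequence of typed objects, so even a sketch like this can pull sensor values out of an advertisement.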

I also found it interesting to see that Home Assistant decided to move to Bleak as their BLE library. They even sponsored the lead developer David Lechner to implement passive scanning in the Linux backend. This benefits the broader open-source community and allowed me to add passive scanning support to Theengs Gateway, Theengs Explorer and HumBLE Explorer. With Home Assistant as a big user of Bleak, we'll surely see it improving even more. And Bleak is already the best Python library for BLE...


What about BLE in 2023?

I expect that Bluetooth will remain an important technology in 2023 and beyond, because I don't see anything changing about its unique combination of accessibility, ubiquity and easy basics. So I will keep contributing to the Theengs project and developing my own Bluetooth projects.

I still have a couple of BLE sensors at home that aren't supported yet by Theengs Decoder, and I'd like to change that! If you have a device that isn't on the list of supported devices, why don't you try adding a decoder? You don't need to be a developer to do this, as the decoders are specifications of the advertisement format in a JSON file.


When I write "Bluetooth", I mean "Bluetooth Low Energy", which is a radical departure from the original Bluetooth standard, now called Classic Bluetooth. Bluetooth Low Energy has been part of the Bluetooth standard since Bluetooth 4.0 (2010).


At least before the Raspberry Pis started getting out of stock because of supply chain issues.


Theengs Gateway also supports this type of expanding BLE range, with various ESP32 devices running OpenMQTTGateway scattered around your home. This is called MQTTtoMQTT decoding.

December 24, 2022

The blog has a new layout. Some of the most important changes:

  • Much smaller logo. The logo was taking up waaaaay too much space.
  • The thumbnails have a shadow on the main page.
  • I hope that the font is easier to read. I might tweak this later.
  • Less clutter in the sidebar!
  • The social links have moved to the Contact page.
  • The top menu is rearranged a bit.
  • The blog archive displays the full article, not just an excerpt.
  • Infinite scroll! I don’t know yet if I like it, I might change it later.
  • The blog archive has 2 columns. Again, I’m not sure about this, might change it later. Feedback is welcome, leave a comment!
  • The most recent post is displayed full width.
  • On individual posts the thumbnail image is now the background of the title.
  • I’m still not entirely happy that the author is shown at the bottom of each blog post. I’m the only author here, so that’s useless, but I have not yet found how to remove that. EDIT: fixed with some extra CSS. Thanks for the tip, Frank!

Do you have any suggestions or comments on the new layout?

December 23, 2022

Navigated to OpenAI, an artificial intelligence (AI) research platform:

I asked OpenAI to write a letter to my wife begging her to make orange cookies.

Twenty-four hours later:

A baking sheet holds freshly baked orange cookies.

What an incredible time to be alive. Happy holidays!

December 22, 2022

It took a while, but here are three books I've read.

Kevin Mitnick - The Art of Invisibility (2017)

I have been interested in Kevin Mitnick since his hacking exploits in the 1980s. He is mentioned in several books that I have read in the past.

This book is about all the personal data one leaves behind when using the internet (or devices connected to the internet). It is a rather simple guide on how to increase your privacy or how to aim to become almost invisible on the internet.

I would not advise this book for nerds or IT security experts; the book is probably not aimed at those people anyway. People who know close to nothing about computers and networks could benefit from this guide.

Robert M. Pirsig - Zen and the art of motorcycle maintenance (1974)

Now this is an interesting read. It is not about motorcycles; in fact, it is perfectly fine if you replace the word 'motorcycle' with 'smartphone', 'computer' or 'internet privacy and tracking' in the first half of the book. There are some very insightful comments about how people react in different ways to technology.

The main part of this book, though, is not the motorcycle trip with his son through the United States, but his alter ego, named Phaedrus. The writer was diagnosed with catatonic schizophrenia and received electroshock treatment. In the book he remembers fragments of this Phaedrus personality (with an IQ of 170).

The second half is a tough read. I often needed to reread several sentences to grasp what he was saying. (Which was never the case in the Mitnick book above.)

Prusa 3D printing handbook

I finally bought a 3D printer and would advise this to anyone who has lots of time. Yes, this is not yet ready for end users who only want a 'File - Print' option on the computer. But a whole new world opens up; at least for me it's a new world with many possibilities and adventures.

It's a small booklet, but I mention it anyway because it is really good, as are all the follow-up guides on


Acquia Cloud was first launched in 2009 as Acquia Hosting. Acquia was one of the earliest adopters of AWS. At the time, AWS had only 3 services: EC2, S3, and SimpleDB.

A lot has changed since 2009, which led us to re-architect Acquia Cloud starting in late 2019. This effort, labeled "Acquia Cloud Next" (ACN), became the largest technology overhaul in Acquia's history.

In 2013, four years after the launch of Acquia Cloud, Docker emerged. Docker popularized a lightweight container runtime, and a simple way to package, distribute and deploy applications.

Docker was built on a variety of Linux kernel developments, including "cgroups", "user namespaces" and "Linux containers":

  1. In 2006, Paul Menage (Google) contributed generic process containers to the Linux kernel, which was later renamed control groups, or cgroups.
  2. In 2008, Eric W. Biederman (Red Hat) introduced user namespaces. User namespaces allow a Linux process to have its own set of users, and in particular, allow root privileges inside process containers.
  3. In 2008, IBM created the Linux Containers Project (LCP), a set of tools on top of cgroups and user namespaces.

Docker's focus was to deploy containers on a single machine. When organizations started to adopt Docker across a large number of machines, the need for a "container orchestrator" became clear.

Long before Docker was born, in the early 2000s, Google famously built its search engine on commodity hardware. Where competitors used expensive enterprise-grade hardware, Google realized that they could scale faster on cheap hardware running Linux. This worked as long as their software was able to cope with hardware failures. The key to building fault-tolerant software was the use of containers. To do so, Google not only contributed to the development of cgroups and user namespaces, but they also built an in-house, proprietary container orchestrator called Borg.

When Docker exploded in popularity, engineers involved in the Borg project branched off to develop Kubernetes. Google open sourced Kubernetes in 2014, and in the years following, Kubernetes grew to become the leading container management system in the world.

Back to Acquia. By the end of 2019, Acquia Cloud's infrastructure was delivering around 35 billion page views a month (excluding CDN). Our infrastructure had grown to tens of thousands of EC2 instances spread across many AWS regions. We supported some of the highest-trafficked events in the world, including coverage of the Olympics, the Australian Open, the Mueller report, and more.

Throughout 2019, we rolled out many "under the hood" improvements to Acquia Cloud. Thanks to these, our customers' sites saw performance improvements anywhere from 30% to 60%, at no cost to them.

That said, it became harder and harder to make improvements to the existing platform. Because of our scale, it could take weeks to roll out improvements to our fleet of EC2 instances. It was around that time that we set out to re-architect Acquia Cloud from scratch.

Acquia's journey to ACN started prior to Kubernetes and Docker becoming mainstream. Our initial approach was based on cgroups and Linux containers. But as Kubernetes and Docker established themselves in the market, it became clear we had to pivot. We decided to design ACN from the ground up to be a cloud-native, Kubernetes-native platform.

In March of 2021, after a year and a half of development, my little blog was the first site to move to ACN. Getting my site live in production was a fun rallying point for our team. Even more so because my site was also the first site to launch on the original Acquia Hosting platform.

I never blogged about ACN because I wanted to wait until enough customer sites had upgraded. Fast forward another year and a half, and a large number of customers are running on ACN, including some of our highest-traffic sites. I can say without a doubt that ACN offers the highest levels of performance, self-healing, and dynamic scaling that Acquia customers have relied on.

ACN continuously monitors application performance, detects failures, reroutes traffic, and scales websites automatically without human assistance. It can handle billions of page views and gracefully deals with massive traffic spikes, all without manual intervention or architectural changes. Best of all, we can roll out new features in minutes or hours instead of weeks.

There is no better way to visualize this than by sharing a chart:

The "web transaction times" of a large Fortune 2000 customer that upgraded their main website to Acquia Cloud Next. You can see that PHP (blue area), MySQL (dark yellow area) and Memcached (light yellow area) became both much faster and less volatile after migrating to Acquia Cloud Next. (The graph is generated by New Relic, which defines the "web transaction time" as the time between when the application receives an HTTP request and when an HTTP response is sent.)

Customers on Acquia Cloud Next get:

  1. Much faster page performance and web transaction times (see chart above)
  2. 5x faster databases compared to traditional MySQL server deployments
  3. Faster dynamic auto-scaling and faster self-healing
  4. Improved resource isolation - Nginx, Memcached, Cron, and other services all run in dedicated pods

To achieve these results, we worked closely with our partner, AWS. We pushed the boundaries of certain AWS services, including Amazon Elastic File System (EFS), Amazon Elastic Kubernetes Service (EKS), and Amazon Aurora. For example, AWS had to make changes to EKS to ensure that they could meet the scale at which we were growing. After 15 years of working with AWS, we continue to be impressed by AWS' willingness to partner with us and keep up with our demand.

In the process, AWS made upstream Kubernetes contributions to overcome some of our scaling challenges. These helped improve the speed and stability of Kubernetes. We certainly like that AWS shares our values and commitments to Open Source.

Last but not least, I'd be remiss not to give a big shoutout to Acquia's product, architect, and engineering teams. Re-architecting a platform with tens of thousands of EC2 instances running large-scale, mission-critical websites is no small feat.

Our team continued to find creative and state-of-the-art ways to build the best possible platform for Drupal. For a glimpse of that, take a look at this presentation we gave at Kubecon 2022. We learned that by switching our scaling metric from Kubernetes' built-in CPU utilization to a custom metric, we could reduce the churn on our clusters by ~1,000%.
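For context on that last point: switching a HorizontalPodAutoscaler from the built-in CPU metric to a custom per-pod metric looks roughly like this in Kubernetes' autoscaling/v2 API. This is a generic sketch with hypothetical names and targets, not Acquia's actual configuration:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-pool                      # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-pool
  minReplicas: 3
  maxReplicas: 50
  metrics:
    - type: Pods                      # custom per-pod metric instead of CPU
      pods:
        metric:
          name: php_active_requests   # hypothetical custom metric
        target:
          type: AverageValue
          averageValue: "20"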

Looking back at ACN's journey over the past 3+ years, I'm incredibly proud of how far we have come.

December 19, 2022

A large sign that spells out 'Acquia' in individually lit letters. Behind the sign, people are engaged in a lively conversation.

In the early days, Acquia was one of the fastest-growing companies in the US. Like most startups, we'd raise money, convert that money into growth, raise money again, etc. In total, we raised nearly $190 million in seven rounds of funding.

At some point, all companies that take this approach have to become self-sustainable. Acquia wasn't any different.

When Acquia did a CEO search in 2017, we had just started that transformation. We hired Mike Sullivan as our CEO to help us grow, while quickly becoming financially independent at the same time.

When Mike told me he decided to leave Acquia at the end of the year, I was sad, but not completely surprised. While there is always more work to do, Mike has accomplished the mission we had set out for him: we continued our growth and became self-sustaining. Mike is leaving us in the strongest financial position we have ever been in.

Mike will be succeeded by Steve Reny, who has been Acquia's Chief Operating Officer (COO) for 4.5 years. Steve has been guiding all aspects of Acquia's customer success, professional services, global support, security, and operations. And before joining Acquia in 2018, Steve held executive leadership positions at other companies, including as CEO, COO, CFO, head of sales, and head of corporate development.

Everyone at Acquia knows and loves Steve. This CEO transition is natural, planned, and minimally disruptive.

I have a deep appreciation for everything Mike has done for Acquia. And I'm excited for Steve. Not everyone gets to lead one of the most prominent Boston technology companies. As for me, I continue in my role as CTO, and look forward to partnering with Steve.

I believe strongly in Acquia's mission, our purpose, and our opportunity. I have a deep-rooted belief in the critical importance of the web and digital experiences. It's how we communicate, how we stay in touch with loved ones across the world, how we collaborate, how we do business, how we learn, how we bank, and more.

Because of the web's importance to society, we need to help ensure its well-being and long-term health. I think a lot about helping to build a web that I want my children to grow up with. We need to make sure the web is open, accessible, inclusive, safe, energy efficient, pro-privacy, and compliant. Acquia and Drupal both have an important part to play in that.

December 16, 2022

Although I removed the “twitterless twaddle” byline from this blog years ago and I was present on Twitter with my @futtta-account, I’ve never been an avid user. That means I have no “social capital” there and given the Musk shit-show it has quickly become, I revisited the Mastodon account I created in April this year (when Elon declared he wanted to buy Twitter) and as instructed by God I happily...


December 14, 2022

The Drupal logo surrounded by confetti and the number 10.

Today, we released Drupal 10. Getting to this double-digit release has been 22 years in the making! It's an exciting milestone.

Some of Drupal's recent innovations include:

  1. Modernized front end experience (Olivero theme)
  2. Modernized back end experience (Claro theme)
  3. An improved content editing experience (CKEditor 5)
  4. Improved developer experience: built on Symfony 6.2 with support for PHP 8.2, etc
  5. A new theme generator

The five bullets above just scratch the surface. Since the initial release of Drupal 9, 4,000+ changes were committed to Drupal Core alone. These contributions came from 2,129 different people and 616 unique organizations over the course of the last 2.5 years. Incredible!

The launch of Drupal 10 comes at a good time because there is so much turmoil happening within the top social networking sites right now. It's a good reminder that preserving and promoting an Open Web is more important than ever. Open Source, the Indieweb, the Fediverse, and even RSS are all seeing new-found appreciation and adoption as people realize the drawbacks of communicating and collaborating via proprietary platforms.

An Open Web means opportunity for all. And Open Source gives us the freedom to understand how our software works, to collectively improve it, and to build the web we want. Projects like Drupal are fundamental for keeping the Open Web alive and well. The main focus of Drupal 10 was to bring even more site builders to Drupal. In that sense, the release of Drupal 10 will help extend the web's reach and protect its long-term well-being.

If it's been a while since you've considered using Drupal, I encourage you to try Drupal 10.

I'd like to thank every single contributor for what we've accomplished together with Drupal 10!

December 13, 2022

If you want to develop embedded software, you quickly notice that every hardware manufacturer pushes its own development tools. For Arduino that's the Arduino IDE, Nordic Semiconductor supplies its own version of SEGGER Embedded Studio, and STMicroelectronics swears by STM32CubeIDE.

Each of these tools has its advantages, of course, but if you often work with different hardware, it becomes cumbersome to familiarize yourself with multiple development environments. PlatformIO offers a solution: it is an open-source, cross-platform, cross-architecture, multi-framework development environment for embedded software. It lets you run all your development projects for diverse hardware in one universal environment.

Platforms and frameworks

PlatformIO supports more than 1400 development boards and microcontrollers, including the Atmel AVR series and Arduino boards, the Espressif ESP32 and ESP8266, the Microchip PIC32, the Nordic nRF5 chips, the Raspberry Pi Pico and other RP2040-based boards, the STM32 family from STMicroelectronics, and the Teensy boards. Support for all this hardware is collected in 48 development platforms.

In addition, PlatformIO supports 25 frameworks, such as Arduino, the Espressif IoT Development Framework (IDF), FreeRTOS, Mbed, STM32Cube and Zephyr. This way you can, for example, simply reuse Arduino or Zephyr code in PlatformIO, apart from different directory layouts and configuration files.

Graphical or on the command line

PlatformIO provides integrations with all kinds of editors and development environments, such as Visual Studio Code, CLion, Atom, Eclipse and Sublime Text, but also Emacs and Vim. The makers themselves recommend Visual Studio Code, Microsoft's popular cross-platform code editor.


Visual Studio Code is not open-source software. It is based on the Code - OSS repository, which is released under the MIT license, but with Microsoft-specific modifications. Microsoft distributes the result under a traditional product license. One of the modifications Microsoft makes is telemetry. Want to avoid that? Then use VSCodium, a build of the Code - OSS source code without Microsoft's modifications such as the telemetry. To use the Visual Studio Marketplace in VSCodium (in order to install PlatformIO), you do have to follow specific instructions. Doing so may violate the terms of use of the Visual Studio Marketplace.

You can also work with PlatformIO entirely on the command line, without using a graphical interface such as Visual Studio Code. To do so, install PlatformIO as a Python module with pip:

pip install platformio

Then initialize a new project in the current directory with:

pio project init
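After initialization, each build target is described in the project's platformio.ini file. A minimal sketch for an ESP32 board using the Arduino framework might look like this (the board ID is an assumption; `pio boards` lists the valid ones):

```ini
[env:esp32dev]
platform = espressif32
board = esp32dev
framework = arduino
```

A `pio run` then builds the project, and `pio run -t upload` flashes it to the connected board.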

Building projects with PlatformIO

In recent years I have switched to PlatformIO for the projects for which I traditionally used the Arduino IDE or Arduino CLI. It lets you easily create projects that you can build for multiple types of development boards, and PlatformIO installs the required toolchains, frameworks and software development kits. You can also easily declare dependencies on external libraries, and debugging and unit tests can be integrated into your projects too.


Want to get started with PlatformIO? For Linux Magazine I wrote an article in which I create a simple example project with PlatformIO. It runs on the Seeeduino XIAO board from the Atmel SAM platform, on the Arduino Nano RP2040 Connect and on the Raspberry Pi Pico.

December 12, 2022

Oracle recently released Read Replica for MySQL Database Service (also known as MySQL HeatWave, with or without the use of HeatWave accelerator for OLTP and OLAP).

Read Replicas are read-only copies of the MySQL DB system in the same region. When you add Read Replicas, they are automatically distributed across Availability Domains and/or Fault Domains.

Each MySQL DB instance can have up to 18 read replicas.

Read Replicas use MySQL Asynchronous Replication with parallel workers. And when you create the first Read Replica, a Read Replica Load Balancer is created, which distributes read traffic among the read replicas.

Not all shapes are compatible with Read Replicas; the shape requires a minimum of 4 OCPUs:

Currently it is also not possible to change the shape of a Read Replica. You will have to manually configure the replication channel if, for example, you want an instance with HeatWave Cluster enabled (an instance using the HeatWave secondary engine).


For today’s article, we will deploy WordPress, the most popular open source content management system, in Oracle Cloud Infrastructure and distribute the read traffic to Read Replicas.

To deploy WordPress on a compute instance, configure the network, and deploy the MySQL instance in OCI, we will use an OCI Resource Manager Stack.

With a single click, you will be able to deploy the following infrastructure:

Once deployed, we will see how to easily create new Read Replicas. We will then have the following final architecture:


To deploy all the required OCI resources, we start by clicking the following button, which deploys everything using OCI Resource Manager (Terraform):

Deploy to Oracle Cloud

Once clicked and if you are connected to the OCI console, we will arrive at this screen:

Resource Manager Stack Creation

You must review and agree to Oracle's Terms of Use, and if you do, the Terraform code will be loaded:

Stack Information: WordPress on OCI with MDS

Then we just have to follow the wizard. This first section concerns the mandatory variables:

Stack Creation : mandatory variables

In the Shapes section it is important to choose a MySQL shape with at least 4 OCPUs. You must enter the exact name of the shape, because unfortunately the OCI API does not populate the list of MySQL DB instance shapes in stacks:

Shapes section

We click on Next, and we can apply the stack right after its creation:

Create Stack

When the application task is complete, you can see information on how to connect to your newly deployed WordPress in the Logs section:

Stack Apply Logs

You can paste the public IP into your browser and finish the WordPress installation with the information returned in the logs. The database host is the IP returned by mds_instance_ip ( in my example).

When WordPress is deployed, this is an example of what you get:

My WordPress

Read Replicas

To deploy Read Replicas, we choose the DB instance, and at the bottom left, in the Resources section, choose Read Replicas and click on Create read replica:

Create a read replica

Once created, we can see that a new Endpoint is also created for the Read Replica and an additional one for the Read replica load balancer:


For the example, I will create another Read Replica to have 2 instances to serve the reads:



To use the Read Replicas and split MySQL queries between reads and writes, we’ll use LudicrousDB, a WordPress plugin that, once configured, will do exactly what we need.

In the past I used HyperDB, but it doesn’t seem to be compatible with the latest WordPress (6.1.1) and has some problems with MySQL 8.0’s sql_mode.

We need to connect to the compute instance (where WordPress is installed) over SSH to install and configure LudicrousDB.

To connect to the compute instance, we need to get the ssh private key that was generated during the Stack apply operation.

I haven’t found an easy way to get this sensitive information; some recent updates may have added new privileges to retrieve those outputs:

oci resource manager job outputs

But, as a workaround, you can retrieve the SSH key by going to Job resources in the Job details (for the apply stack) and then checking the last entry:

Job details – Job resources Job resources – tls keys

We can then find and copy the private_key_openssh:

Just copy it to a file on your computer or to Cloud Shell. Remove the leading and trailing double quote characters and replace every ‘\n’ with a newline.

Then change the permissions to 600 (only your user should have read and write access).
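The two cleanup steps above can be sketched as follows (file names are hypothetical, and the `sed` replacement assumes GNU sed):

```shell
# Simulated copy of the private_key_openssh output; in practice you would
# paste the real value retrieved from the OCI console into raw_key.txt.
printf '%s' '"line one\nline two\n"' > raw_key.txt

# Strip the surrounding double quotes and turn the literal \n sequences
# into real newlines (GNU sed interprets \n in the replacement).
tr -d '"' < raw_key.txt | sed 's/\\n/\n/g' > oci_key

# Only your user should have read and write access to the key.
chmod 600 oci_key

# Then connect with: ssh -i oci_key opc@<compute-instance-public-ip>
```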

When you are done, connect (with OpenSSH on the command line, or with PuTTY) as the user opc, using the public IP of the compute instance and the key we just copied:

ssh connection to the compute instance

We can now install the LudicrousDB plugin in our WordPress.

[opc@wordpressserver1 ~]$ cd /var/www/html/wp-content/plugins
[opc@wordpressserver1 plugins]$ sudo wget
[opc@wordpressserver1 plugins]$ sudo unzip
[opc@wordpressserver1 plugins]$ sudo mv ludicrousdb-master ludicrousdb
[opc@wordpressserver1 plugins]$ sudo rm
[opc@wordpressserver1 plugins]$ sudo chown -R apache. ludicrousdb
[opc@wordpressserver1 plugins]$ sudo cp ludicrousdb/ludicrousdb/drop-ins/db.php ../db.php
[opc@wordpressserver1 plugins]$ sudo cp ludicrousdb/ludicrousdb/drop-ins/db-config.php ../../

We just need to configure LudicrousDB to use the Read Replica Load Balancer:

[opc@wordpressserver1 ~]$ cd /var/www/html/
[opc@wordpressserver1 html]$ sudo vim db-config.php 

And we need to modify the db-config.php file to have something similar:

$wpdb->add_database( array(
	'host'     => DB_HOST,     // If port is other than 3306, use host:port.
	'user'     => DB_USER,
	'password' => DB_PASSWORD,
	'name'     => DB_NAME,
	'write'    => 1,
	'read'     => 0,
) );

/**
 * This adds the same server again, only this time it is configured as a slave.
 * The last three parameters are set to the defaults but are shown for clarity.
 */
$wpdb->add_database( array(
	'host'     => "",     // If port is other than 3306, use host:port.
	'user'     => DB_USER,
	'password' => DB_PASSWORD,
	'name'     => DB_NAME,
	'write'    => 0,
	'read'     => 1,
	'dataset'  => 'global',
	'timeout'  => 0.2,
) );

We just added the IP of the Read Replica Load Balancer as the host of the entry where read is 1.

And we are done !

I wrote a small PHP code snippet in a WordPress page that displays which host served the request:

global $wpdb;
$result = $wpdb->get_results( "SELECT @@hostname AS host" );
echo "<strong>host:</strong> " . $result[0]->host;

And the result on some reloads of the page:

The names we see here are the actual host names of the instances as they appear in MySQL:

Checking on the MySQL DB instance which hosts replicate from it

We can also see the connections on both Read Replicas in their Metrics section of the Oracle Cloud Infrastructure Console:


Deploying WordPress on Oracle Cloud Infrastructure (OCI) with MySQL Database Service using the new Read Replicas is a straightforward process that offers many benefits. By setting up Read Replicas, you can improve the performance and scalability of your WordPress site or any other application using MySQL.

With the detailed instructions provided in this article, you should now have the knowledge and tools necessary to successfully deploy WordPress on OCI with MySQL Database Service using Read Replicas.

This new feature very simply provides elasticity for read traffic, as you can add or remove MySQL instances on demand with a single click!

December 10, 2022

Today, two months after reaching Elo 1300, I reached Elo 1400.

Since reaching 1300, I decided to study two openings: the Four Knights for when I'm playing white, and the Pirc Defence for when I'm playing black.

I watched some YouTube videos and memorized the top 3-5 lines for each. In practice, it is not often that my opponents follow any of the main lines that I studied. That said, I found it helpful to start each game with a basic plan.

I noticed a pattern on my path from Elo 1300 to Elo 1400: I'd make a move and then immediately realize it had problems. There were just too many "I could have avoided that mistake if I'd spent more than 15 seconds thinking!" moments.

I now realize that the weak point in my play isn't any theoretical knowledge of openings, but a lack of discipline before making a move. I've been trying to force myself to slow down, pick a few candidate moves, and calculate the one that is best. It has helped me blunder less.

As a next goal, I'd like to get to Elo 1500. That would take me from "decent social player" to "beginning club player". My best win to date was against a player with Elo 1488. Reaching Elo 1500 feels far off right now.

Trying to become better at chess has been a humbling experience. I'll let you know when I get to Elo 1500 — if I can get there at all.

December 09, 2022

The Project Browser's logo, surrounded by smaller logos from projects.

Last year in my DrupalCon North America keynote, I proposed the Project Browser Initiative for Drupal. Project Browser makes it easy for site builders to find and install Drupal modules without having to use the command line!

Ever since, the initiative has been in full swing with work taking place on various fronts. Aside from building the browser itself, we are working on various other aspects to provide the best possible user experience: logos are being designed for the top projects, categories are being streamlined to make discovery more intuitive, and project descriptions are being improved for clarity. We are also defining various quality metrics to make sure Project Browser can easily surface the best projects. All combined, these efforts should make it a lot easier for end users to discover and install Drupal modules.

In the past 18 months, Project Browser has gone from announcement to beta. And the latest beta has a full-featured user interface for discovering and installing projects, fulfilling the original vision of users not needing a command line.

Check out this video to see how it works:

The Project Browser team would love for many people to try out the latest beta.

As a side note, Project Browser is built on top of Drupal's new Package Manager. We are building Package Manager for both the Automatic Updates initiative and the Project Browser initiative. Package Manager provides these initiatives the ability to programmatically utilize Composer, a tool commonly used to manage modern PHP applications like Drupal.

The net result is that:

  1. Project Browser (and Automatic Updates) will support any kind of contributed project, even those with complex third party dependency chains.
  2. You can switch between using the Project Browser and the command line. Projects installed via Project Browser can be managed from the command line using Composer, and vice versa.

I wanted to highlight the Package Manager work because it illustrates how much care and effort has gone into building Project Browser and Automatic Updates. The end result will be one of the most powerful, secure and cutting-edge installation and update systems on the market.

I'm excited about this work for many reasons, but I want to highlight two of them. First, it will help bring more people to Drupal, and therefore the Open Web, something that I am very passionate about. Second, this work is fundamental to making Drupal an even more composable platform.

A special thank you to Chris Wells (Redfin Solutions), Leslie Glynn (Redfin Solutions), Fran Garcia-Linares (Drupal Association), Ben Mullins (Acquia), Narendra Singh (Acquia), Srishti Bankar (Acquia), Utkarsh Patidar (Acquia), and Tim Plunkett (Acquia) for leading much of the work. Also a special thank you to Ted Bowman (Acquia), David Strauss (Pantheon) and the Automatic Updates team for working on the Package Manager.

December 07, 2022

With great pleasure (and an apology for missing the deadline by seven days), we can announce that the following projects will have a stand at FOSDEM 2023 (4 & 5 February). This is the list of stands (in no particular order): Eclipse Foundation, FOSSASIA Foundation, Software Freedom Conservancy, CentOS and RDO, FreeBSD Project, Free Software Foundation Europe, Realtime Lounge, Free Culture Podcasts, Open Culture Foundation + COSCUP, Open Toolchain Foundation, Open UK and Book Signing Stand, The Apache Software Foundation, The Perl/Raku Foundation, PostgreSQL, GNOME, KDE, GitLab, Homebrew, Infobooth on amateur radio (hamradio), IsardVDI, Jenkins, Fluence, La Contre-Voie…

How I was put up for sale on the Web… without my knowledge!

In this post, I explain how I discovered that a marketing company is offering my services, complete with a fanciful version of my biography, without my ever having been informed.

My short story collection « Stagiaire au spatioport Oméga 3000 » opens with the generation of an artificial author tailored to your tastes, based on the personal data collected about you.

When I wrote that introduction, I was convinced that reality would catch up with me sooner or later. I didn't imagine it would happen before the book was even available in bookstores!

And for good reason…

The discovery of a speaker with my name

After publishing a post about the invasion of AI-generated content, I was about to experience first-hand what it means to become an automatically generated speaker!

While testing my new site, imagine my surprise when the first Google results page for "Lionel Dricot" turned up a profile in my name on a site I had never heard of.

Screenshot of a Google search for "Lionel Dricot"

A profile describing my biography in great detail, complete with photos and videos from various talks. I was intrigued. On Mastodon, a reader told me that for him the site was the first Bing result for a search on my name.

Screenshot of a Bing search for "Lionel Dricot"

A strange site, very professional-looking, which presents itself as a "Celebrity Marketing" company. The mere fact that I appear on a celebrity marketing site made my wife burst out laughing. She also noticed that the company takes its name from Simone Veil and Nelson Mandela. Using Simone Veil and Nelson Mandela to do "Celebrity Marketing" really sets the tone! Quite something…

My profile on the offending site

One small clarification: I will not link to the site, because their terms of use explicitly forbid it.

Terms of use of the S&N site forbidding linking to it

What does this company actually do? It's very simple: it connects companies looking for speakers with speakers. This is a fairly common service; I was even in contact with an agency of this kind a few years ago. These agencies often sign an exclusivity contract: the speaker has to go through the agency for every talk they give. In exchange, the agency finds them engagements, promotes them, places them, and even finds a replacement if they have to cancel (I have done such replacements myself).

Except that in this case, I signed no contract, gave no consent, and was not even vaguely informed! The site gives the impression that the only way to contact me is to go through them. This is no longer mere foolishness; it is outright dishonesty.

A form to contact me… through the S&N site!

Where I discover unknown facets of my own life

Reading my biography is particularly interesting because, at first glance, it is entirely credible. A poorly informed reader would, at first sight, find little to object to apart from a few spelling mistakes (my novel is called "Printeurs", the French way, not "Printer", and I have a hard time imagining it being perceived as a message of hope! Was the scene with the newborn in the garbage chute not explicit enough?)

But a careful reading reveals aberrations. Each of these aberrations has an explanation, provided you start digging. For instance, I supposedly wrote a short story entitled "Voulez-vous installer Linux mademoiselle ?". As one reader discovered, that sentence is taken from one of my short stories, "Les non-humains", published on Linuxfr and Framasoft.

I also learned that I am a co-founder of Ubuntu. No less! That is of course false. I am the co-author of the first published book about Ubuntu, which is very different. Some sentences also seem taken out of context (why insist on obesity and malnutrition?). Finally, it all ends with the sublime:

During his talks, Ploum predicts a healthier and gentler world.

The general tone and the references strongly suggest an artificially generated text. Something like "Give me a biography of Lionel Dricot", in English, followed by a machine translation. It could also be the work of a "mechanical turk", an underpaid worker asked to do a job an AI could do (very common in support chats). But that would have taken at least an hour, and I have a hard time imagining anyone paying for an hour of work to churn out my biography.

Whether or not the text was generated by an AI changes nothing. It very well could have been, and it is representative of what AIs produce and will always produce: something that looks correct but is riddled with mistakes that are hard for a non-specialist to detect (I happen to be the greatest living specialist in my own biography).

How to react?

At this point, I could simply send an email demanding the removal of the page, and the story would end there. I have alerted an acquaintance who is also on this site.

But that would be too easy. The existence of this profile raises several problems.

First, by inserting itself as an intermediary between me and potential clients without my consent, and by giving the impression that I am affiliated with this company, it could seriously harm my image or my business (if I had either).

But the existence of this kind of profile can twist reality in an even more insidious way. Suppose a Wikipedian, whether affiliated with this company or not, uses this information to create a Wikipedia page about me. That would seem perfectly legitimate, since the page appears to have been made with my consent. The information could then be picked up elsewhere. Suddenly, I would become the author of a short story I never wrote. Well-informed free software enthusiasts would argue over whether or not I co-founded Ubuntu. I have already become a French writer on Babelio!

By sending a simple email asking for the removal of this page, I would legitimize this business practice and commit myself to permanently monitoring the web to have profiles generated without my consent taken down.

Sue a company in a country that is not my own (because Babelio got that wrong, for the record)? Oh, the administrative joys ahead! (If you are a specialized lawyer and interested, contact me.)

Or I can fight with my own weapons: do what a ploum does and tell you this story as transparently as possible. To warn you that everything you read on the web is now a plausible-sounding mush that looks serious, that looks correct, but is not. Every platform is affected. Every search engine result.

By making this story public, I know the company will react with "boo-hoo, I'm just an entrepreneur, I meant no harm, I won't do it again", or "the intern made a mistake, we'll supervise him better", or even "we made this profile with our own little hands because we admire your work, we thought you would be flattered". In short, odious hypocritical lies. That is the core of the marketing trade: lying to make other people's lives miserable (and destroy the planet).

And if the dishonesty is not yet obvious to you, know that the company claims to own the intellectual property of the texts and photos on its site. I think the photographer of TEDx Louvain-la-Neuve would be delighted to hear it… Most of these pictures of me are not even under a free license!

Terms of use of the S&N site claiming intellectual property over its content

The future of the web…

If it was not already clear, I am now living proof that everything marketing produces is a lie. Whatever is accurate is accurate only by chance. And everything that lands before our eyes is now marketing. To get out of this mess, we will have to find solutions (Bill Hicks proposed a very convincing one…).

We will have to rebuild circles of trust. Forget our training in spotting "fake news" and treat all information as false by default. Identify the people we trust and verify that a text signed with their name really came from their pen. A green padlock or a blue badge next to a username is no reason to trust it. It may even be the opposite…

In short, welcome to a crappy web!

An engineer and writer, I explore the impact of technology on humans. Subscribe to my French writings by email or RSS. For my English writings, subscribe to the English-language newsletter or the full RSS feed. Your address is never shared and is deleted when you unsubscribe.

To support me, buy my books (from your local bookstore if possible)! I have just published a collection of short stories that should make you laugh and think.

December 05, 2022

Drowning in AI Generated Garbage: the silent war we are fighting

All over the web, we are witnessing very spectacular results from statistical algorithms that have been in the works for the last forty years. We gave those algorithms an incredibly catchy name: "Artificial Intelligence". We now have very popular and direct applications for them: give the algorithm a simple text prompt (don’t get me started on the importance of text) and it generates a beautiful original picture or a very serious-sounding text. It could also generate sounds or videos (we call them "deep fakes"). After all, it generates only a stream of bits, a bunch of 1s and 0s open to interpretation.

All of this has been made possible because billions of humans were uploading and sharing texts and pictures on the commons we call "the Internet" (and more specifically the web, a commons more endangered every day because of the greediness of monopolies). People upload their creations. Or creations from others. After all, does "owning" a text or a picture have any meaning anywhere except in the twisted minds of corrupted lawyers?

What we are witnessing is thus not "artificial creativity" but a simple "statistical mean of everything uploaded by humans on the internet which fits certain criteria". It looks nice. It looks fantastic.

While they are exciting because they are new, those creations are basically random statistical noise tailored to be liked. Facebook created algorithms to show us the content that will engage us the most. Algorithms are able to create out of nowhere this very engaging content. That’s exactly why you are finding the results fascinating. Those are pictures and text that have the maximal probability of fascinating us. They are designed that way.

But one thing is happening really fast.

Those "artificial" creations are also uploaded on the Internet. Those artificial artefacts are now part of the statistical data.

Do you see where it leads?

The algorithms are already feeding themselves on their own data. And, as any graduate student will tell you, training on your own results is usually a bad idea. You end up, sooner or later, with pure overfitted inbred garbage. Eating your own shit is never healthy in the long run.

Twitter and Facebook are good examples of such algorithmic trash. The problem is that they managed to become too powerful and influential before we realised it was trash.

From now on, we have to treat anything we see on the Internet as potential AI garbage. The picture gallery from an artist? The very cool sounding answer on Stackoverflow? This article in the newspaper? This short viral video? This book on Amazon? They are all potential AI garbage.

Fascinating garbage but garbage nonetheless.

The robot invasion started 15 years ago, mostly unnoticed. We were expecting killer robots; we didn’t realise we were drowning in AI-generated garbage. We will never fight laser-wielding Terminators. Instead, we have to outsmart algorithms which are making us dumb enough to fight one another.

Time to enter the resistance, to fight back by being and acting like decent human beings. Disconnect. Go outside. Start human discussions. Refuse to take for granted "what was posted on the Internet". Meet. Touch. Smell. Build local businesses. Flee from monopolies. Refuse to quickly share and like things on your little brainwired screen. Stop calling a follower count "your community" and join small online human communities. Think.

How can we recognise true human communities, free of algorithmic interference?

I don’t know. I don’t even know if there are any left. That’s frightening. But as long as we can pull the plug, we can resist. Disconnect!

As a writer and an engineer, I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.

If you read French, you can support me by buying/sharing/reading my books and subscribing to my newsletter in French or RSS. I also develop Free Software.

December 04, 2022

The end of a blog and the last version of

Warning: this post is a technical retrospective of this blog's 18 years. It contains computing jargon and discusses how I wrote code to build the pages you are reading. Feel free to skip the paragraphs that contain too much of it.

The birth of a blog

I am a visionary pioneer.

In 2004, on the advice of my friend Bertrand, who had noticed that I was writing long screeds scattered across the four corners of the web, I finally started a blog. I was reluctant at first, arguing that a blog was just a website like any other and that the fad would pass quickly. Just as a podcast was nothing but an MP3 file, a fad that would pass just as quickly. I had made a similar argument in 1997, claiming the web was just text displayed on a screen and that the fad would pass. Right before creating my first website. A true visionary pioneer, I tell you.

Inspired by Tristan Nitot's Standblog (which I read then and still read), I installed the Dotclear software on a server belonging to two friends and started blogging. Never to stop again. A thousand thanks to Bertrand, Tristan, Anthony, Fabien and Valérie (who named my blog "Where is Ploum?").

In 2010, unable to find a Dotclear 2 theme I liked, I decided to migrate temporarily to Wordpress (and not to J2EE). A platform I have stayed on ever since.

Life with Wordpress is no picnic: frequent updates, incompatibilities with certain plug-ins, plug-ins and themes evolving, some becoming paid, warnings about outdated PHP or MySQL versions. Not to mention a plethora of versions of an htaccess file that must never be touched lest everything break, and database backups made and then forgotten in a corner.

Seeking digital minimalism, I found that Wordpress no longer suited me at all. It no longer matched my philosophy either. Despite a few attempts, I had not managed to remove all the JavaScript, nor some Google-hosted fonts, without breaking my theme. In 2018, I actively started looking for an alternative.

Around that time, I met Matt, the founder of. I contributed to the project so that it would become open source (which Matt would do under the name WriteFreely). We tried to adapt it to my needs, needs I described in a long, evolving document. In parallel, I tested every static site generator, finding them complex and never quite managing to do exactly what I wanted.

I claimed to be looking for minimalism, and yet I was unwittingly reproducing the J2EE project manager syndrome I had mocked.

When I discovered the Gemini protocol, I realized that this was exactly the kind of minimalism I aspired to. I was convinced: my next-generation blog should also be on Gemini.

But far from helping me, this certainty only added one more feature to the already long list of things I wanted for my blog. I was losing myself in the quest for an ideal workflow.

After a few months, abandoning the idea of putting my blog on Gemini, I decided to open a gemlog on. Just to try it out. Let cmccabe be publicly thanked here.

I wrote all my files by hand in Vim, then sent them to the remote server from my terminal. All without the slightest automation. I enjoyed it enormously. While I thought I was merely testing the technology, I naturally found myself writing on my gemlog, thinking, sharing. I was rediscovering my blog's initial naivety, its spontaneity.

Over the months, I nonetheless introduced some automation. Backups and uploads to the server thanks to git. A small script to generate the index page. The posts on my gemlog met with some success, and some people shared them on the web through a gemini->web proxy. The irony!

And that is when I understood that my blog would never be on Gemini. It would be the other way around! I was going to put my gemlog on the web. And import nearly 800 Wordpress posts into my gemlog. More than 800,000 words written over 18 years of blogging. The equivalent of 15 books the size of Printeurs.

Reading above all

Since my first Dotclear, I had been playing with themes, plug-ins, gimmicks, comments. I had never really asked myself what I expected from my blog.

For all these years, my blog has been a thread running through my life, an essential element of my identity. My blog reflects me; I am who I am thanks to my blog. It is part of my intimacy, of my essence.

What do I want to do with my life? Write! My blog must therefore make it easy for me to write, and for its inseparable counterpart: to be read!

Being read does not mean being discovered, having fans, likes or followers. Being read means that every person landing on an article is considered a reader and respected as such. No engagement, no metrics, no invitations to discover other articles. A reader has the right to read in the best conditions and then move on to something else.

To work!

For the first time, the path finally seemed clear. I was not going to try to find the perfect software to do what I wanted. I was not going to plan, test, and wire together different solutions found by scouring the web. I was going to do everything by hand, all by myself like a grown-up. If I managed to convert my Wordpress blog into gmi files (the Gemini format), all I had left to do was write a small routine to convert everything to HTML.

A programmers' adage says that every complex program is born because the programmer sincerely thought it would be easy. My script is no exception to the rule. It took me several months of polishing to reach an acceptable result. Having to do without the Mailpoet service integrated into Wordpress (a service whose license was provided by a kind reader, to whom I am grateful), I had to resign myself to writing my own email handling so I could plug into an open source service. That was the hardest part (probably because, in all honesty, it does not interest me at all). If you want to receive posts by email, there are now two mailing lists (if you received this post by email, you are subscribed to the French one, FR, but not the English one, EN; feel free to subscribe if you wish):

I admit I am quite proud of the result. Every post you read is now a simple text file that I write and proofread before publishing by dropping it into the FR or EN directory depending on the language. From there, everything is pushed with git to the sourcehut service, and a script turns my text into a gmi page, an HTML page or an email. Apart from any images, each page is completely self-contained and loads no external resources. Even the 40 lines of CSS (not one more) are included. This makes for lightweight pages that load quickly even on a bad connection, are compatible with absolutely every platform, even the oldest ones, and can be saved, printed or forwarded without fear of losing information. In short, real web pages, a concept that has become absurdly rare.
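As an illustration of the gmi-to-HTML step described above (a minimal sketch, not the author's actual script), a converter only has to map each gemtext line type to an HTML element:

```python
import html

def gmi_to_html(gmi_text: str) -> str:
    """Convert a gemtext document to a fragment of self-contained HTML."""
    out = []
    for line in gmi_text.splitlines():
        if line.startswith("# "):
            out.append("<h1>%s</h1>" % html.escape(line[2:]))
        elif line.startswith("## "):
            out.append("<h2>%s</h2>" % html.escape(line[3:]))
        elif line.startswith("=> "):
            # Gemtext allows exactly one link per line: "=> URL optional label"
            parts = line[3:].split(maxsplit=1)
            url = parts[0]
            label = parts[1] if len(parts) > 1 else url
            out.append('<p><a href="%s">%s</a></p>'
                       % (html.escape(url, quote=True), html.escape(label)))
        elif line.strip():
            out.append("<p>%s</p>" % html.escape(line))
    return "\n".join(out)

print(gmi_to_html("# Un billet\nDu texte.\n=> /autre.gmi autre billet"))
```

The "## " check runs before the "# " check would otherwise match it only because `"## x".startswith("# ")` is false; the one-link-per-line constraint mentioned below is what makes this line-by-line mapping possible.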

The meaning of minimalism

While coding this site, it became clear to me that minimalism means making sacrifices. Giving up certain needs. The reason I had never been satisfied until now was my inability to give up what I thought was essential.

Do tags help reading? No, so they are gone. Series? I was convinced I needed them. I started implementing them, but I was not convinced and set that work aside. Built-in search? The feature is certainly useful, but its benefit does not cover the cost of its complexity. I had to force myself to drop it, but once convinced, what a relief!

To replace search, I have two weapons. The first is that the list of all my posts is now available on a single page. If you know one word of the title of the post you are looking for, you will find it with a simple Ctrl+F in your browser.

For deeper searches of the content, my posts now being simple text files on my hard drive, the "grep" command suits me perfectly. And it works even when I am offline.

Because the offline aspect is essential. My disconnection during the first half of 2022 made me realize how out of step my Wordpress blog was with me. I could no longer browse it simply; I could no longer post to it without spending time online.

My most technical readers can also read me offline with a simple "git clone/git pull".

The last version?

The title of this post is deliberately clickbaity (and if you have read this far, it works), but yes, this post really does announce the end of my blog on the web as it has existed for 18 years.

From now on, you will only be reading my gemlog, into which I have imported the content of my old blog. This Gemini-first approach implies fairly strong constraints, notably allowing only one link per line (which makes some of my old, link-riddled posts rather peculiar to read, I admit).

I have nevertheless taken great care to keep the old URLs working. "Cool URLs never change." If that is not the case, let me know!

Another particularity of this project that I am proud of is that my whole blog now depends on only two software building blocks: git and python, fundamental components I can hope to rely on for the rest of my life. Everything is written in Vim and proofread with the Antidote/Grammalecte pair (the most fragile point of my system).

Which makes me think that this site may well be the last version of. After Dotclear and Wordpress, I no longer depend on anyone. No more forced updates, no more sudden interface changes, no more adapting to new versions (apart from a possible python 4, which should not be a problem since I deliberately use no external libraries). I evolve at my own pace, doing exactly what I like, without depending on a community or a vendor.

Would I have been more efficient with an existing website generator? Maybe. I am not convinced. I would have had to learn it and bend to its arbitrary constraints, then try to adapt it to my needs. Even if that had been faster in the short term, I would have had to keep up with new versions, hope it stayed maintained, integrate into its community, and I would inevitably have ended up migrating to another solution at some point.

La philosophie du code

Pour la première fois, mon blog exprime donc avec son code des valeurs que je tente de mettre par écrit : la simplicité volontaire est difficile, mais libère autant l’auteur que les lecteurs. Elle implique une vision tournée vers un long terme qui se compte en décennies. L’indépendance se conquiert en apprenant à maitriser des outils de base plutôt qu’en tentant d’adopter la dernière mode.

En apportant les dernières touches au code qui génère ce qui n’est pour vous qu’une page parmi tant d’autres, j’ai eu l’impression d’avoir réduit la distance qui nous séparait. Les intermédiaires entre mon clavier et votre intelligence ont été réduits au strict nécessaire. Plutôt que des connexions à des interfaces impliquant des copier-coller, des chargements de librairies JavaScript, j’écris désormais dans un simple fichier texte.

A text file that is then displayed in your mail client, your RSS reader or your browser.

It sounds trivial, simple. Yet it is the essence of the web. An essence that is unfortunately far too rare.

Thank you for reading me, for sharing my posts (some of you for years now), for sharing my intimacy. Thank you for your reactions, your suggestions and your support. I hope you will like this version.

Happy reading and happy sharing!

PS: If you regularly reread certain old articles (several people have told me they do), feel free to check that everything is OK and to report any problem. Like any software, the work is never finished. The Wordpress version will remain available on the domain for a few months.

An engineer and writer, I explore the impact of technology on humans. Subscribe to my French writing by email or RSS. For my English writing, subscribe to the English newsletter or to the full RSS feed. Your address is never shared and is deleted on unsubscription.

To support me, buy my books (if possible from your local bookshop)! I have just published a collection of short stories that should make you laugh and think.

December 03, 2022

Reinventing How We Use Computers

Nearly two years ago, I put into words the dream I had for a durable computer. A computer that would be built for a lifetime. A computer that would not do everything but could do 80% of what I expect from it. I called this idea the Forever Computer.

I expected to launch a conversation about what we really expect from computers. What do we really want from them? What are some limitations that could free us? What if we didn’t have a pointer but still wanted to be user-friendly by avoiding cryptic key combinations? What if we only had rare and intermittent connections? What if we didn’t have a high-resolution screen? Thinking about that gave birth to Offpunk, a command-line and offline web browser.

Unsurprisingly, most of the reactions to my Forever Computer dream were about hardware. Every idea, every project I saw could be summarised as "How do we make hardware we can repair without questioning what we do with this hardware?" The (very interesting) Framework laptop is available as… a Chromebook. This is like transitioning to electric cars while generating the electricity from coal and never questioning why we drive in the first place. Oh, wait…

By developing and using Offpunk, I had to think about what it means to use a computer. I ended up putting my finger on a huge paradigm problem: we don't have computers any more. We have "content consuming devices".

The Consuming Screen

On my typewriter, two small retractable metal holders let the page stand up while being written on. It's fairly common. I realised that I retract them to keep the paper flowing horizontally. Instead of building a wall of text between me and my environment, I stay open to my surroundings; I catch the ideas that are beside the machine, or further away. On the Freewrite, the horizontal e-ink screen does exactly the same. And it works great. It lets me see what I'm writing without being absorbed by it.

On our computers, the screen is always bigger, shinier, brighter. It is designed to consume you while you consume content. It is a wall to lock you in, to make you a prisoner of your little space. Developers need three screens to inhabit this virtual space. Meetings are now little rooms where everyone puts a screen between themselves and the others while pretending to listen to someone who has connected his own screen to a projector (because the only way to get us out of our own little screens is a bigger shared screen). Tablets and phones are screen-only computers designed to capture our attention, to make us consume more and more content even when we are with our loved ones.

Browsing the Gemini network with Offpunk allowed me to realise how "consuming and producing content" was a disease and not something we ever wanted to do as humans.

The Lost Input

But while computers were transformed into screens, we completely lost the main input mechanism: the keyboard. It is not that the inefficient, absurdly misaligned qwerty keyboard didn't evolve. It actually got worse. We lost mechanical keys in the search for flatness. We lost comfort. We lost all ergonomics. The only goal was to make the keyboard as flat as possible and the same size as the screen so it would fit in a laptop. With its infamous butterfly keyboard, Apple even managed to make typing painful. And I'm not even talking about those small touchscreen keyboards which still mimic a full, often misaligned, qwerty layout.

We know what a good keyboard is. Independent tinkerers managed to build awesome stuff. There were some really interesting experiments but, in the end, it looks like every attempt at reinventing the keyboard ends in some kind of split orthogonal form with alternative layout (Dvorak, Colemak or, for me, Bépo).

Why haven't we seen a single computer with such a built-in keyboard? Because computers are not built for typing any more. They are built to consume content. Even professional coders spend more time consuming Slack messages and GitHub badge notifications than writing code.

The Clamshell Compromise

When you add a bright screen to a cramped keyboard, you end up with the worst possible design: the clamshell laptop.

The clamshell is perfect for closing and putting in a bag. It is awful to use. It is like giving a triangle to a cello player before a concert because, hey, the triangle is easier to travel with.

With a clamshell, the keyboard and the screen are never where they should be. The keyboard is too high, forcing our arms and shoulders into a stressful position, while the screen is too low and strains our neck. Our body suffers because we don't want to think about what we do with our computers.

As I was wondering why it was such a relief to let the paper roll horizontally on my typewriter, I realised that it allowed me not to read it all the time. I was encouraged to look outside while typing, with only glances at the paper. Text lying lower, closer to horizontal, is more comfortable to read. You only need a very slight angle to read a book open on a table.

What it means for the Forever Computer

All those points made me realise that a true Forever Computer should be built "keyboard first". The keyboard should be the most important part, with a housing for travel. The same housing could host a small screen, possibly an e-ink one. While travelling, that would allow you to read deeply (with the screen in your hand, like an e-reader) or to attach it to the keyboard to write without being absorbed by the screen. You separate the acts of reading and writing instead of being constantly caught between the two.

The keyboard would feature a port to plug a bigger screen that you could have at home or in your office. Those screens could be put on the wall or, like any external monitor, configured to be at eye level.

When you think about it, this allows us to reintroduce locality of action. Want to watch videos? Go to the living room and plug into the big screen. Want to code in an IDE or do some graphic work? Go to the office and plug into your desk's screen. Not at your desk? You can read, take notes, answer your emails (which will be synchronised when needed). But don't pretend to code a little, answer a message and watch a video while eating at a restaurant.

Flipping the trend

We currently own a screen with very minimal input to allow us to consume content and access our own data which are on some company servers. The only thing we own, the only thing we pay for is a screen. Sometimes with a bad keyboard.

What I call the Forever Computer is exactly the opposite. You own your input (your favourite keyboard and trackball). You own your data (stored with the computer itself in the keyboard housing). The screen is only a commodity. You can share a screen, use someone else's screen, or plug into the one in your hotel room.

This is the point where my dream made me realise what a nightmare our tech dystopia has become. When did building the same old laptop or phone with a handful of replaceable parts become the epitome of sustainability and innovation? Why is cramming in a few more pixels and a few more CPU cycles, so that hundreds of megabytes of JavaScript can render some text in a cool new font, seen as "a revolution"? Like any other tool, we should accept that you need to learn how to use a computer. Short-term UX marketing and regular updates with arbitrary interface changes have killed the very notion of "learning to use a computer". People have been reduced to "consumers having to adapt to the changes of their consuming machine". We forgot that a computer should not hide how it works in order to be easy but should instead allow its user to learn about it gradually. If we want durability, learnability is the key. When you learn something, you take care of it. You start to like it, to maintain it. It's the opposite of forced upgrade cycles.

I don't know if there will be something like my "Forever Computer" in my lifetime. But there's one thing I'm now certain of: the ethical computer will be radically different from what we have in 2022. Or it will never be.

Links and further resources

If you want to discuss the Forever Computer, I’ve opened a mailing-list on the subject. Join us!

I'm closely following the work done by MNT on its Reform and its future Reform Pocket computers. People like them are probably the ones who could reinvent computing and free us from the current paradigm.

While not radical at all, the Framework is probably the most interesting project for regular computer users.

As a writer and an engineer, I like to explore how technology impacts society. You can subscribe by email or by RSS. I value privacy and never share your address.

If you read French, you can support me by buying/sharing/reading my books and subscribing to my newsletter in French or RSS. I also develop Free Software.

November 29, 2022

More and more people are asking how to connect to MySQL using an SSL certificate instead of a password, a method known as X509.

CA Certificate

A CA certificate is a digital certificate issued by a certificate authority (CA). It's used by clients to verify the SSL certificates signed by that CA.

Such a certificate usually has to be paid for and installed manually on the MySQL Server. By default, however, MySQL generates a self-signed certificate and provides its own CA.

For obvious reasons, I will use the certificates that have been auto-generated by MySQL on my system. For production, however, I encourage you to use a certificate signed by a real CA.

The CA certificate is called ca.pem and is located in MySQL’s datadir (/var/lib/mysql/ca.pem on Oracle Linux, RHEL, Fedora, CentOS, …).

In case you don’t know where your ca.pem is located, you can check in the global variables of your MySQL Server:

SQL> select * from performance_schema.global_variables
     where variable_name like 'ssl_ca';
+---------------+----------------+
| VARIABLE_NAME | VARIABLE_VALUE |
+---------------+----------------+
| ssl_ca        | ca.pem         |
+---------------+----------------+

The server and the client must use the same CA.

Client Key

MySQL also generates a client key (client-key.pem) but we will generate one per client.

We will, of course, use OpenSSL to generate and verify our certificates.

We start by creating a client certificate request, without a passphrase, which we will then sign:

$ openssl req -newkey rsa:2048 -days 365 -nodes -keyout user1-key.pem -out user1-req.pem
Ignoring -days without -x509; not generating a certificate
..........+...+..... and plenty others ;-)....++++++...
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [XX]:BE
State or Province Name (full name) []:Hainaut
Locality Name (eg, city) [Default City]:Maurage
Organization Name (eg, company) [Default Company Ltd]:MySQL
Organizational Unit Name (eg, section) []:Community
Common Name (eg, your name or your server's hostname) []:user1
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

You now have two new files:

  • user1-key.pem – the user’s private key
  • user1-req.pem – the user’s PEM certificate request

Now we need to generate the public key (the X509 certificate), signed with MySQL's own CA. I will generate a certificate valid for one year (365 days):

$ sudo openssl x509 -req -in user1-req.pem -days 365 -CA /var/lib/mysql/ca.pem \
     -CAkey /var/lib/mysql/ca-key.pem -set_serial 01 -out user1-cert.pem
Certificate request self-signature ok
subject=C = BE, ST = Hainaut, L = Maurage, O = MySQL, OU = Community,
        CN = user1, emailAddress =

The “subject” output is very important as this is what we will use in MySQL for the user credentials.

Verifying the Certificate

We can now already verify the certificate we generated:

$ openssl verify -CAfile /var/lib/mysql/ca.pem /var/lib/mysql/server-cert.pem \
     user1-cert.pem
/var/lib/mysql/server-cert.pem: OK
user1-cert.pem: OK
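The interactive prompts above can be scripted away with `-subj`. Here is a self-contained sketch of the same workflow using a throwaway CA in a temporary directory: the file names mirror the article, but the CA is generated on the fly here rather than being MySQL's actual files under /var/lib/mysql.

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# Throwaway CA standing in for MySQL's auto-generated ca.pem / ca-key.pem
openssl req -x509 -newkey rsa:2048 -days 365 -nodes \
    -keyout ca-key.pem -out ca.pem -subj "/C=BE/O=MySQL/CN=CA"

# Client key and certificate request in one non-interactive step
openssl req -newkey rsa:2048 -nodes -keyout user1-key.pem -out user1-req.pem \
    -subj "/C=BE/ST=Hainaut/L=Maurage/O=MySQL/OU=Community/CN=user1"

# Sign the request with the CA, valid for one year
openssl x509 -req -in user1-req.pem -days 365 -CA ca.pem -CAkey ca-key.pem \
    -set_serial 01 -out user1-cert.pem

# Verify the client certificate against the CA
openssl verify -CAfile ca.pem user1-cert.pem   # prints: user1-cert.pem: OK
```

The `-subj` value becomes the certificate's subject, so this is also a convenient way to make sure it matches exactly what you will later put in the MySQL user definition.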

MySQL User Creation

We need to create a MySQL user that will use the certificate. By default, with the loaded password policy, we also need to provide a password:

SQL> CREATE USER user1 IDENTIFIED BY 'Passw0rd!' REQUIRE
     SUBJECT '/C=BE/ST=Hainaut/L=Maurage/O=MySQL/OU=Community/CN=user1/';
Query OK, 0 rows affected (0.0114 sec)

If we don't want to use a password but only the certificate, it's possible to remove the IDENTIFIED BY 'Passw0rd!' clause, but you need to uninstall the validate_password component first and re-install it just after:

UNINSTALL COMPONENT 'file://component_validate_password';

Even with the privilege PASSWORDLESS_USER_ADMIN, if the component is installed, the password must comply with the policy.

Pay attention that it's recommended to also specify the "issuer" of the certificate, for example:

CREATE USER user1 REQUIRE
    ISSUER '/C=BE/ST=Bruxelles/L=Bruxelles/O=MySQL/CN=CA/'
    SUBJECT '/C=BE/ST=Hainaut/L=Maurage/O=MySQL/OU=Community/CN=user1/';


We can now connect using the certificate and the key:
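As a sketch (the original post illustrated this step with a screenshot), the classic client would be pointed at the CA and at the user1 files generated above; the paths here are illustrative:

```shell
$ mysql -u user1 -h localhost \
        --ssl-ca=/var/lib/mysql/ca.pem \
        --ssl-cert=user1-cert.pem \
        --ssl-key=user1-key.pem
```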

The same certificate and key can be used in MySQL Shell for Visual Studio Code:


It's possible to use X509 certificates (self-signed or not) to connect to MySQL, with or without a password. This method works with the classic mysql client, with MySQL Shell over both the classic and X protocols, and also with MySQL Shell for Visual Studio Code.

You can find more information related to MySQL and Encrypted Connections in the manual.

Enjoy MySQL!

November 28, 2022

…and other delights the future holds for us

Cover of "Stagiaire au spatioport Omega 3000… et autres joyeusetés que nous réserve le futur".


Could you become the first male toilet attendant on the Omega 3000 space station? Or optimise the yield of the Moon's chocolate mines? With privacy abolished, will you uncover the secret identity of the richest man in the world? How do you fight the computing monopolies when your family, tired of seeing you type on a typewriter, signs you up for an introduction to computers? Will you play a major role in the destiny of the galaxy, or remain an extra?

All the answers to these questions (and to many others) are now available in "Stagiaire au spatioport Omega 3000 et autres joyeusetés que nous réserve le futur", a collection of short stories now available online and in every bookshop in Switzerland. It will reach those of France and Belgium in February 2023.

Which is a bit late for Christmas/Newtonmass gifts, which is why you can order the collection directly from the publisher.

"Stagiaire au spatioport Omega 3000" is an ideal gift idea because, unlike a novel, which you either like or don't, this book offers 15 very different stories. Some are wacky, others serious; some funny, absurd or thought-provoking, or simply poetic. One tackles the question of gender in space opera ("Stagiaire au spatioport Omega 3000"). Others warn about the grip of computing monopolies ("Le dernier espoir"), the disappearance of online privacy ("Le jour où la transparence se fit") or the very long-term impact of our technological choices ("Les successeurs").

By offering this collection (to yourself or to others), you offer moments of pleasure, laughter and poetry, but also, without seeming to, food for thought and introductions to potentially difficult subjects that you, readers of my blog, probably already know.

Around the Christmas log, the title and the colour of the cover alone should fill a good part of the evening and divert the conversation for a while from the World Cup in Qatar, the economic crisis and the war in Ukraine. Admit it: at that price, it's a bargain!

So, rather than trudging through overheated shopping centres, offering 15 short stories is a quick, classy and inexpensive gift idea!

For those whose reading list can wait until February, order the book now from your bookseller. We only realise how important bookshops are when we lose them, so support them! The ISBN is 978-2-940609-29-1.

More than just a collection…

Since my earliest childhood, I have devoured short story collections. I love it when the stories are interspersed with anecdotes from the author, something Isaac Asimov does with incredible talent.

When Lionel, my publisher and namesake, offered to publish a collection of my short stories, I first thought of assembling them in the traditional way, not even daring to imitate the great Asimov. My wife convinced me to listen to my intuition and to make this collection first and foremost for myself, the way I would want to read it.

So be it. Each story is now accompanied by a short note in which I explain the inspiration and the writing process behind the text. Sometimes I digress a little on the themes dear to me. You know me; old habits die hard…

The result is that, far from being a mere assemblage of texts, this collection has become a kind of baring of the soul, a very intimate exchange between the writer and each reader. With my publisher, we decided to also include a few "youthful errors". They are not my best texts, but making my personal evolution transparent is a way of illustrating my work and, I hope, of inspiring others to appreciate their own progress. To be honest, I don't dare reread myself; I'm a little embarrassed about what you are going to discover about me, while being very proud to offer a collection that is much more than the sum of its texts.

If you read this blog, this collection is the closest thing to it in paper form, while being much more fun and cheerful to read. Sharing and recommending it is the best way to support my work.

I would be very happy to hear your opinions and reactions, and to read your reviews on your blogs or your own online spaces. Don't hesitate to send me your feedback. On Mastodon, I suggest using the hashtag #omega3000.

And if you have discovered the surprise (which is, if Wikipedia is correct, a world first), shhh! Don't spoil it for the others…


November 20, 2022

Today I upgraded my home computers to Fedora 37.

I'm using some software that doesn't have any rpm available for Fedora 37.

I use some of these applications every day and couldn't work on my machine without them. Hamster, which I use for time tracking, is one of them, although the project seems to have been unmaintained for some time. I use it together with its GNOME Shell extension.

I also regularly use sysbench compiled with libmysql from MySQL.

The last package is for an application I blogged about recently, Projecteur.

If you are also using those applications or if you want to try them on Fedora 37, here are the packages I made for my computers:

Enjoy!

November 15, 2022

Offpunk 1.7 : Offpunk and Sourcehut

I'm releasing Offpunk 1.7 today, which fixes a handful of crashes and brings some nice features.

UPDATE: 1.7.1 has been released because of a crash in 1.7.

1. Autocompletion of your list names with the "add", "move" and "list" commands. This is really useful if, like me, you are playing with 20 different lists.

2. In Gemini and Gopher, plain text links are added to the list of links of the page. This is really useful with Gopher posts where the culture is to add a bunch of links at the end of the post with a number. After reading a post, simply press "enter" (keep the command empty) to see the list of links in that post, including in Gopher.

3. Added a configurable "search" command which, by default, uses

4. Added a configurable "wikipedia" (or "w") command which, by default, uses Wikipedia. Also added shortcuts for English, French and Spanish: "wen", "wfr" and "wes".

I chose those defaults, but I'm open to suggestions if there are better alternatives. Please send me an email or send your suggestion to the devel mailing list.

Wait? Is Offpunk on sourcehut now? What about notabug/tildegit?

Hosting Offpunk is a complicated story.

At first, offpunk started as a simple fork of AV-98 and was thus called "AV98-offline". Like AV-98, it was hosted on tildegit because it was a fork.

Realising that the project started having a life on its own and could never be merged back to AV-98, which was the original intent, I changed its name to Offpunk, thinking it was just marketing after all. I didn’t even change the name on the repository immediately, not even thinking about it. Yes, I’m that bad at marketing.

As I was playing one day in the tildegit settings, I saw "AV98-offline" in a field and changed it to "offpunk". I didn't imagine that this would also change the URL of the page and break every old link to the project. But too late, it was done.

First move.

As Offpunk started to attract attention, several contributors complained that they could not open an account on tildegit, which was reserved for people active in the tilde universe. I was aware of another fork of AV-98 where ew0k was maintaining a few patches. I knew it well because I had forked his fork myself to get those patches into Offpunk. This AV-98-community-edition was hosted on notabug, which was open to anyone. So I moved there.

Second move.

What I learned the hard way is that notabug does strange things with git tags and, worst of all, doesn't have an RSS feed to follow releases, something of the utmost importance for packagers. I consider packagers the most important link between users and developers. I really want their work to be as easy as possible (and I'm still hoping to see offpunk one day in Debian, BSD and Void). As I was toying with sourcehut, I created an offpunk project there to enable an RSS feed of the releases.

Third move, fourth repository in less than a year.

I still keep notabug as I believe its interface might be more familiar to lots of users, but I'm hoping to use sourcehut more and more for mailing lists, browsing the code, and handling tickets/bug reports (no bug reports have been created on sourcehut so far; I'm still waiting to test it). Sourcehut will also host the future Offpunk website.

I hope this explanation helps you understand why the offpunk situation is so confusing. I'm sorry for that; it's all because I was a bit careless about this aspect.

If offpunk makes you curious but you are still a bit confused about what it is or how it works, I would really be happy to hear from you. This software has changed my online life so deeply that I would like to give others the opportunity to do the same. I created an offpunk-users mailing list where we can discuss together, in the open and where I can read your experience and your suggestions.


As I’m more and more tired of "fun" features, gamification, heavy JS interfaces, my love for sourcehut is growing. I’m a bit concerned by the ban of blockchain projects as I like blockchain technologies. I understand Drew’s position on this (most blockchain projects are, in fact, scams disguised as cool open source projects) but I do really appreciate the work Drew is doing, both on the software and business side. I want to support it and I was really happy to see Drew change his mind about not-rejecting HTML email on lists. Not that I really care about the issue but I care about a benevolent dictator for life being able to slightly change his mind on some of his core principles.

Today, for the first time, I applied a patch sent to offpunk-devel: a new workflow which, in the long run, is much better suited to working offline. Thanks for the patch, Sotiris Papatheodorou!

Conclusion: offpunk 1.7 is released and has found a new home on sourcehut. I hope that this move was the last for a very long time.

Happy offline browsing!


November 14, 2022

Do you ever wonder which bird species visit your garden? With a Raspberry Pi and a microphone you can record the sound in your garden all day long. The BirdNET-Pi software recognises bird songs in the recordings and shows you, in a web interface, handy statistics about when which birds can be heard.


BirdNET is an app for Android and iOS that can distinguish bird species based on sound recordings. Ideal if, during a walk, you wonder which bird is singing that particular tune. Under the hood runs a neural network built by researchers at Cornell University and trained on the sounds of three thousand bird species. You record a few seconds of the song on your phone, the app analyses it and then names the birds whose song it most resembles.

An eavesdropping Raspberry Pi

With BirdNET-Pi, Patrick McGuire has made a version of BirdNET that runs on the Raspberry Pi. The software continuously listens to the sound from a USB microphone and recognises bird sounds in it in real time. It works on a Raspberry Pi 4B, 3B+ and Zero 2 W. To enjoy all its features, a Raspberry Pi 4B is recommended.

If you can run BirdNET as an app on your phone anyway, what is the advantage of BirdNET-Pi? The main one is that BirdNET-Pi listens continuously, so you can have it recognise bird sounds around the clock. For bird lovers this also yields interesting statistics, such as at what time of day you have the best chance of spotting specific bird species.


Another advantage is that BirdNET-Pi does its analyses fully offline, whereas the smartphone app has to send every audio recording to the BirdNET project's servers for analysis. You could therefore install a Raspberry Pi with BirdNET-Pi in a place without network access and consult the stored detections after a day.

In an article I wrote, you can read how to install and configure BirdNET-Pi on a Raspberry Pi.

Appearing in Toulouse at Capitole du Libre on 19 and 20 November (with surprises!)

This Saturday 19 November, at 11:30, I will give a talk in Toulouse at Capitole du Libre on the theme:

Warning! This "conréfence" is not a "conréfence" about "cyclimse"! Thank you for your understanding!

It will be about culture, free software, free culture, the freedom of culture and the importance of escaping the monoculture, of freeing our brains by freeing our cultural references.

I will take the opportunity to announce a few nice surprises that I haven't yet had the chance to mention on this blog.

After the talk, I will stay until Sunday afternoon, sitting at a table between Pouhiou and David Revoy, no less, to sign my novel Printeurs and, first surprise, my new collection of short stories, which hasn't officially been released yet.

When I say "not released yet", I mean that even I haven't seen it yet. My publisher sent a handful of the very first printed copies directly to Toulouse. I will therefore discover it in front of you at the signing table. An exclusive, like cassoulet, 100% Toulousain. I can already tell you, though, that if you care about privacy, if you curse the invasion of smartphones, or if you have ever found yourself facing a slimy alien in toilets that didn't match your sex, you should find something for you in this collection, which should reach all French, Belgian and Swiss bookshops in January.

If you are in the area, don't hesitate to drop by and, if you feel like it, to come and talk to me.

I discovered at the Imaginales in Épinal that several readers hesitated to approach me so as "not to disturb me". I admit I hate being disturbed when I'm concentrating, but if I go to a conference, it is precisely to meet people. So, to be disturbed! And you don't have to justify not buying my books. Besides, sales will be handled by a professional bookseller.

For those who prefer electronic versions but would still like a dedication, I suggest you come with a piece of paper or cardboard of your choice. I will slide it into my typewriter to write you a little note.

Because deep down, it makes me really happy to be back at a free software conference, to find myself among geeks in battered t-shirts, to talk face to face after what is sometimes nearly twenty years of exchanges on the net, to join forces with other Vim users to troll the Emacs users, to talk about Gemini and browsing from the command line with people who find that normal (I can give you a demo of Offpunk, my offline web browser), to give you news of my baker…

In short, I'm really happy to see you (again)!

(No, I don't get carsick. It's just that when I'm happy, I throw up. And right now I'm extremely happy!)


November 12, 2022

We now invite proposals for presentations. FOSDEM offers open source and free software developers a place to meet, share ideas and collaborate. Renowned for being highly developer-oriented, the event brings together some 8000+ geeks from all over the world. The twenty-third edition will take place on Saturday 4th and Sunday 5th February 2023 at the usual location, ULB Campus Solbosch in Brussels. It will also be possible to participate online.

Developer Rooms

For more details about the Developer Rooms, please refer to the Calls for Papers as they are issued.

The Debian Videoteam has been sprinting in Cape Town, South Africa -- mostly because with Stefano here for a few months, four of us (Jonathan, Kyle, Stefano, and myself) actually are in the country on a regular basis. In addition to that, two more members of the team (Nicolas and Louis-Philippe) are joining the sprint remotely (from Paris and Montreal).

Videoteam sprint

(Kyle and Stefano working on things, with me behind the camera and Jonathan busy elsewhere.)

We've made loads of progress! Some highlights:

  • We did a lot of triaging of outstanding bugs and merge requests against our ansible repository. Stale issues were closed, merge requests have been merged (or closed when they weren't relevant anymore), and new issues that we found while working on them were fixed. We also improved our test coverage for some of our ansible roles, and modernized as well as improved the way our documentation is built. (Louis-Philippe, Stefano, Kyle, Wouter, Nicolas)
  • Some work was done on SReview, our video review and transcode tool: I fixed up the metadata export code and did some other backend work, while Stefano worked a bit on the frontend, bringing it up to date with Bootstrap 4 and adding client-side filtering using Vue. Future work on this will allow editing various things from the web interface -- currently that requires issuing SQL commands directly. (Wouter and Stefano)
  • Jonathan explored new features in OBS. We've been using OBS for our "loopy" setup since DebConf20, which is used for the slightly more interactive sponsor loop that is shown in between talks. The result is that we'll be able to simplify and improve that setup in future (mini)DebConf instances. (Jonathan)
  • Kyle had a look at options for capturing hardware. We currently use Opsis boards, but they are not an ideal solution, and we are exploring alternatives. (Kyle)
  • Some package uploads happened! libmedia-convert-perl will now (hopefully) migrate to testing; and if all goes well, a new version of SReview will be available in unstable soon.

The sprint isn't over yet (we're continuing until Sunday), but loads of things have already happened. Stay tuned!

November 11, 2022

Hello, dear MySQL Community!

As you may already know, FOSDEM 2023 is again going to be held in person. FOSDEM will take place on February 4th and 5th, 2023.

We have also decided to put our pre-FOSDEM MySQL Day on track for a fifth edition.

As with the last edition, the event will be spread over two days.

These two extra days dedicated to the world's most popular open source database will take place just before FOSDEM, on February 2nd and 3rd, at the usual location in Brussels.

Please don't forget to register as soon as possible: as you may already know, seats are limited!

Register on eventbrite:

And please don't forget: if you have registered for the event and cannot make it, please release your ticket for somebody else.


The agenda is not yet determined, and you may have noticed that the MySQL Devroom during FOSDEM will be very short, with a very limited number of sessions.

Therefore, we will certainly include in the pre-FOSDEM agenda some of the sessions that will not appear in the Devroom agenda.

Also, if you would like to propose a session exclusively for the pre-FOSDEM MySQL Days, as the duration of the sessions is more flexible, please do not hesitate to contact me.

I am very happy to see you all again!

November 10, 2022

I published the following diary on “Do you collect “Observables” or “IOCs”?“:

Indicators of Compromise, or IOCs, are key elements in blue team activities. IOCs are mainly small pieces of technical information that have been collected during investigations, threat hunting activities or malware analysis. Regarding the last example, the malware analyst's goal is to identify how the malware behaves and how to detect it.

Most common IOCs are… [Read more]

The post [SANS ISC] Do you collect “Observables” or “IOCs”? appeared first on /dev/random.

Next week, on November 16th, I will participate in the MySQL Innovation and Cloud Virtual Day in French.

My colleagues will present what’s new in MySQL 8.0.31 and also summarize all the big news that was announced at Oracle Cloud World in Las Vegas.

Attendees will learn about the MySQL HeatWave offering in OCI.

I will be presenting something that is only available in MySQL on-prem and in OCI as a managed service: MySQL Document Store.

The event is in French, and attendees will have the opportunity to discuss and chat with MySQL experts (including Olivier!) during the event.

Registration is required to attend this free event: Register Here.

See you next week!

November 09, 2022

I published the following diary on “Another Script-Based Ransomware“:

In the past, I have already found script-based ransomware samples written in Python or PowerShell. The last one I found was only a "proof of concept" (my guess), but it demonstrates how easily such malware can be developed and how it remains undetected by most antivirus products.
I found a malicious VisualBasic script that attracted my attention. The SHA256 is 8c8ed4631248343f8732a83193828471e005900fbaf144589d57f6900b9c8996 and its VT score is only 3/57! It's not flagged as malicious but, even more, it's reported as a simple malicious script… [Read more]

The post [SANS ISC] Another Script-Based Ransomware appeared first on /dev/random.

Evolution of the Web.

The Web, circa 2000:

A website is a place where people can read what you wrote.

The Web, circa 2020:

A website is an online place where you can tell people to follow you on different social media so they might have a chance, if the algorithm allows it, to read what you hope to write in the future.

A social media is a platform where you can create an account, get an audience and ask them to follow you on another social media platform.

Gemini, circa 2020:

Cf "The Web, circa 2000".

As a writer and an engineer, I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.

If you read French, you can support me by buying/sharing/reading my books and subscribing to my newsletter in French or RSS. I also develop Free Software.

November 08, 2022

So Autoptimize Pro has finally been released, somewhat secretively (I like flying under the radar). All info about features can be found on , including a 20% early-bird discount code; but if you use my internet nickname/handle or whatever you want to call it (it's part of the domain of this blog and has 3 t's in it) as a coupon code between now and November 16th...


November 07, 2022

Recently, I wrote three articles on how to analyze queries and generate a slow query log for MySQL Database Service on OCI:

In those posts, we generated a slow query log in text or JSON format directly in Object Storage.

Today, we will see how we can generate a slow query log in text format directly with MySQL Shell, from Performance Schema.

The generated log can be used to digest the queries with a tool like pt-query-digest.
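As a rough sketch, the digest step itself is a one-liner; the file names below are hypothetical (the plugin writes whatever name you enter at its prompt), and pt-query-digest ships with Percona Toolkit:

```shell
# Hypothetical file names: the slow log written by the plugin and the report to produce.
LOG=slow_mysql.log
OUT=digest_report.txt
# Digest the slow log when Percona Toolkit is available; skip gracefully otherwise.
if command -v pt-query-digest >/dev/null 2>&1; then
  pt-query-digest "$LOG" > "$OUT" || true
fi
```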

The MySQL Plugin used is logs and is available on my GitHub repo dedicated to my MySQL Shell Plugins.

This is an example of an output using pt-query-digest:

# Query 3: 0.40 QPS, 0.00x concurrency, ID 0xF70E8D59DF2D27CB4C081956264E69AB at byte 104128
# This item is included in the report because it matches --limit.
# Scores: V/M = 0.01
# Time range: 2022-11-06T23:12:14 to 2022-11-06T23:12:19
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count          1       2
# Exec time      8    13ms    87us    13ms     6ms    13ms     9ms     6ms
# Lock time      1     4us       0     4us     2us     4us     2us     2us
# Rows sent      0       1       0       1    0.50       1    0.71    0.50
# Rows examine   0       1       0       1    0.50       1    0.71    0.50
# Rows affecte   0       0       0       0       0       0       0       0
# Merge passes   0       0       0       0       0       0       0       0
# Tmp tables     0       0       0       0       0       0       0       0
# Tmp disk tbl   0       0       0       0       0       0       0       0
# Query size     1     420     210     210     210     210       0     210
# Cpu time       0       0       0       0       0       0       0       0
# Max memory     2   2.55M   1.15M   1.40M   1.28M   1.40M 178.87k   1.28M
# String:
# Bytes sent   n/a
# Databases    test
# Execution en PRIMARY
# Full join    no
# Full scan    no (1/50%), yes (1/50%)
# Hosts        n/a
# No index use no (1/50%), yes (1/50%)
# Tmp table    no
# Tmp table on no
# Tmp tbl size n/a
# Users        n/a
# Query_time distribution
#   1us
#  10us  ################################
# 100us
#   1ms
#  10ms  ################################
# 100ms
#    1s
#  10s+
# Tables
#    SHOW TABLE STATUS FROM `test` LIKE 'scott'\G
#    SHOW CREATE TABLE `test`.`scott`\G
select doc->"$.holeScores[6].number" hole_number, regexp_substr(json_search(replace(replace(replace(doc->"$.holeScores[*].score", ', ' , '","'), '[' ,'["'), ']', '"]'), 'one', 1), '[0-9]+') score_idx from scott\G

MySQL Shell for Visual Studio Code and OCI

It's also possible to use this plugin with MySQL Shell for Visual Studio Code and generate a slow query log for a MySQL instance running on OCI.

You need to copy the plugins in ~/.mysqlsh-gui/plugin_data/gui_plugin/shell_instance_home/plugins.

For the Windows user, the path is C:\Users\<user_name>\AppData\Roaming\MySQL\mysqlsh-gui\plugin_data\gui_plugin\shell_instance_home\plugins.
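On Linux/macOS, that copy step boils down to something like the following sketch; the source path of your local clone of the plugins repository is an assumption, so adjust it to your setup:

```shell
# Destination directory used by MySQL Shell for VS Code (path from the post).
PLUGIN_DIR="$HOME/.mysqlsh-gui/plugin_data/gui_plugin/shell_instance_home/plugins"
mkdir -p "$PLUGIN_DIR"
# Copy the 'logs' plugin from a local clone; the source path below is hypothetical.
# cp -r ~/mysql-shell-plugins/logs "$PLUGIN_DIR/"
```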

Let’s connect using a Free Bastion Host:

Once the plugin's generateSlowQueryLog() method is invoked, a prompt is displayed to enter the filename. We can then open the file that was saved locally:


Here is another solution to generate a MySQL slow query log directly from Performance Schema. The generated file is as close as possible to a real slow query log file, with a few slight differences: some extra information, and some missing.

As you can see, the plugin can also be used with MySQL Database Service in OCI, where we don't have access to the slow query log file.

Of course, using Performance Schema has some limitations, like the maximum number of entries in the table and the possibility of missing some queries during table truncation if we use that option.

But used well, this can be a very nice source for digesting queries during a limited time window.

Enjoy MySQL, and good hunting for query optimization candidates!

Our responsibility as software engineers

On Twitter, an ex-Twitter engineer called Steve Krenzel talked about his job there, and we can infer that it's no different at Google, Facebook, or Microsoft.

While we have no confirmation that this is true, there's no particular reason to doubt what was said. There's no huge secret. But it serves as a reminder.

A reminder that it is normal for any application on your phone to register every single action you take and every single location, and to beam that information back.

A reminder that companies are buying that data and, no, they don't care about privacy. They want hard data on individuals. And they get it.

A reminder that this is the world we currently live in, with phones recording position and sound everywhere. For the one story where an engineer refused to do it, how many times was it done silently?

As software engineers, we have a huge moral responsibility. We are not only implementing these tools, we are using them ourselves and letting our families and children use them.

We are like medical experts discussing the toxicity of tobacco while working for the big tobacco industry and, more importantly, buying cigarettes for our own children.

For software engineers, moving the needle is easy: just say no. Say no to jobs that go against your own moral compass. Tell your friends and your family. Refuse to help them set up their new phone or their new MacBook. The whole industry relies on our implicit complicity.

As a writer and an engineer, I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.

If you read French, you can support me by buying/sharing/reading my books and subscribing to my newsletter in French or RSS. I also develop Free Software.

November 06, 2022

We are pleased to announce the developer rooms that will be organised at FOSDEM 2023. Developer rooms are assigned to self-organising groups to work together on open source projects, to discuss topics relevant to a broader subset of the community, etc. The individual developer room organisers will issue their calls for participation in the next few days. The list below will be updated accordingly. Topic Call for Participation Binary Tools CfP BSD CfP Collaboration and Content Management CfP Community CfP Confidential Computing CfP Containers CfP Continuous Integration and Continuous Deployment CfP Declarative and Minimalistic Computing CfP Distributions CfP DNS
An underground subway tunnel with an incoming train.

It's no secret that I don't like using Facebook, Instagram or Twitter as my primary platform for sharing photos and status updates. I don't trust these platforms with my data, and I don't like that they track my friends and family each time I post something.

For those reasons, I set a personal challenge in 2018 to take back control over my social media, photos and more. As a result, I quit Facebook and stopped using Twitter for personal status updates. I still use Twitter for Drupal-related updates.

To this date, I still occasionally post on Instagram. The main reason I still post on Instagram is that it's simply the easiest way for friends and family to follow me. But every time I post a photo on Instagram, I cringe. I don't like that I cause friends and family to be tracked.

My workflow is to upload photos to my website first. After uploading photos to my website, I occasionally POSSE a photo to Instagram. I have used this workflow for many years. As a result, I have over 10,000 photos on my website, and only 300 photos on Instagram. By any measure, my website is my primary platform for sharing photos.

I decided it was time to make it a bit easier for people to follow my photography and adventures from my website. Last month I silently added a photo stream, complete with an RSS feed. Now there is a page that shows my newest photos and an easy way to subscribe to them.

While an RSS feed doesn't have the same convenience factor as Instagram, it is better than Instagram in the following ways: more photos, no tracking, no algorithmic filtering, no account required, and no advertising.

My photo stream and photo galleries aspire to the privacy of a printed photo album.

I encourage you to subscribe to my photos RSS feed and to unfollow me on Instagram.

Step by step, I will continue to build my audience here, on my blog, on the edge of the Open Web, on my own terms.

From a technical point of view, my photo stream uses responsive images that are lazy loaded — it should be pretty fast. And it uses the Open Graph Protocol for improved link sharing.

PS: The photo at the top is the subway in Boston. Taken on my way home from work.

Also posted on IndieNews.

October 31, 2022

Writing tests can be hard, and the key is to get started. When I start work on a new Drupal module, I like to begin by adding a very simple test and making it pass. Once I have one simple test passing, it becomes easy to add more. The key is to get started with the simplest test possible. This page explains how I do that.

The most basic PHPUnit test for Drupal

The most basic test looks something like this:

namespace Drupal\Tests\my_module\Functional;

use Drupal\Tests\BrowserTestBase;

class MyModuleTest extends BrowserTestBase {

  /**
   * Modules to enable.
   *
   * @var array<string>
   */
  protected static $modules = ['my_module'];

  /**
   * Theme to enable. This field is mandatory.
   *
   * @var string
   */
  protected $defaultTheme = 'stark';

  /**
   * The simplest assert possible.
   */
  public function testOne() {
    $this->assertTrue(TRUE);
  }

}
Drupal uses PHPUnit for testing, and the test above is a PHPUnit test.

The test lives in docroot/modules/custom/my_module/tests/src/Functional/MyModuleTest.php.

Installing PHPUnit for Drupal

Drupal does not ship with PHPUnit out-of-the-box, so it needs to be installed.

The best way to install PHPUnit on a Drupal site is by installing the drupal/core-dev package. It can be installed using Composer:

$ composer require drupal/core-dev --dev --update-with-all-dependencies

The command above will download and configure PHPUnit, along with other development dependencies that are considered a best-practice for Drupal development (e.g. PHPStan).

Configuring PHPUnit for Drupal

Once installed, you still have to configure PHPUnit. All you need to do is set two variables:

  • SIMPLETEST_BASE_URL should be the URL of your site (e.g. http://localhost/).
  • SIMPLETEST_DB should be your database settings (e.g. mysql://username:password@localhost/database).

You can specify these directly in docroot/core/phpunit.xml.dist, or you can set them as environment variables in your shell.
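For example, to set them as environment variables in your shell before running PHPUnit (the values below are the placeholders from above — substitute your own site URL and credentials):

```shell
# Placeholder values; point these at your actual site and database.
export SIMPLETEST_BASE_URL="http://localhost/"
export SIMPLETEST_DB="mysql://username:password@localhost/database"
```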

Running a PHPUnit test for Drupal

With PHPUnit installed, let's run our basic test:

$ cd docroot/core
$ ../../vendor/bin/phpunit ../modules/custom/my_module/tests/src/Functional/MyModuleTest.php

We have to execute our test from inside the docroot/core/ directory so PHPUnit can find our test.

Using PHPUnit with Lando

I use Lando for my local development environment. There are a couple of things you can add to your .lando.yml configuration file to make using PHPUnit a bit easier:

services:
  appserver:
    overrides:
      environment:
        SIMPLETEST_BASE_URL: "http://localhost/"
        SIMPLETEST_DB: "mysql://username:password@localhost/database"
        BROWSERTEST_OUTPUT_DIRECTORY: "/app/phpunit-results"
tooling:
  test:
    service: appserver
    cmd: "php /app/vendor/bin/phpunit --verbose -c /app/docroot/core/phpunit.xml.dist"

The services section sets the environment variables, and the tooling section creates a new lando test command.

After changing my .lando.yml, I like to rebuild my Lando environment, though that might not be strictly necessary:

$ lando rebuild -y

Now, you can run the PHPUnit test as follows:

$ lando test modules/custom/my_module/tests/src/Functional/MyModuleTest.php

About this page

This page is part of my digital garden. It's like a page in my personal notebook rather than a finished blog post. I documented this information for myself, but I'm putting it out there as it might be useful for others. Contrary to my blog posts, pages in my digital garden are less finished, and updated and refined over time. It grows and lives like a plant in a garden, and I'll tend to it from time to time.

October 26, 2022

Yesterday, Frank Vander linden was on De Ideale Wereld. My Kosovar girlfriend asked me who that interesting bald guy on TV was. I answered: "oh, I think a drummer from De Kreuners or something".

That was unforgivably wrong, of course. As penance, I am playing every De Mens song in the living room today, so she can take in a bit of our Flemish music culture.

Hereby also my extensive apologies to Frank in this open letter: sorry for being completely mistaken. I lay awake all night over it. How could such a thing happen?

October 24, 2022

It has been a while since I took the time to write a security conference wrap-up. With all the COVID restrictions, we were stuck at home for a while. Even today, some events remain postponed or, worse, canceled! The energy crisis in Europe does not help; some venues are already increasing their prices to host events! How will this evolve? No idea, but fortunately there are people motivated enough to organize excellent events! Still nothing this year, but people from decided to organize the first edition of the Cyber Threat Intelligence Summit in Luxembourg. Before the pandemic, there was also the MISP summit organized close to . This event, held last week, was a mix of pure CTI- and MISP-related presentations. Here is a quick wrap-up and some links to interesting content.

The first-day keynote was given by Patrice Auffret and was about "Ethical Internet Scanning in 2022". Patrice is the founder of Onyphe. If you don't know the service, it's a very good competitor to Shodan: they collect data about Internet-connected objects and URLs. Today, scanning is accepted by most network owners. IMHO, if you care about port scanning, you have already failed: it's part of the "background noise". What about the ethical aspect of this? Questions that won't be covered: legal aspects, and whether it is useful or useless. Patrice made 10 recommendations to follow when scanning the Internet:

  • Explain the purpose
  • Put an opt-out
  • Provide abuse contacts
  • Provide lists of scanner IP addresses
  • Have good reverse DNS
  • Handle abuse requests
  • Don’t fuzz, just use standard packets/protocols
  • Scan slowly
  • Use fixed IP addresses (no trashable ones)
  • Remove collected data (upon request – GDPR)

There are many active players (Shodan, Censys, ZoomEye, LeakIX, …). Don't blame them because they scan you. Buy an account, use their REST API, and query the information they know about you to increase your footprint visibility. Don't forget that bad guys do it all the time; they know your assets better than you do. Also: "You can't secure assets you don't know".

Robert Nixon presented two talks related to MISP: "In Curation we trust" and, later, "MISP to Power BI". He explained the common problem we all face when starting to deal with threat intel: the risk of "noise" and low-level information. If you collect garbage, you'll enrich garbage and generate more garbage. Think about the classic IOC "". He explained the process of curating MISP events to make them more valuable inside an organization. Examples of processing:

  • Remove potential false positive
  • Reduce the lack of contextualization or inconsistencies
  • Are IOCs actionable?

Robert stayed on stage for the second part of his presentation: "MISP + Power BI". The idea is to access the MISP SQL database from Power BI (of course, in a safe way) and create powerful dashboards based on the data available in MISP. Be careful with the tables you open (the correlation table is not the best one). Robert explained how data can be prepared for better performance (transform some data, convert them, and remove unwanted columns).

The next presentation was called "What can time-based analysis say in Ransomware cases?" by Jorina Baron & David Rufenacht. Like many MSSPs, they've been busy with ransomware for a while. Data compiled from multiple incidents was used to generate some useful statistics. They focused on:

  • Initial access
  • Lateral movement
  • Effect on target

The dataset was based on 31 ransomware attacks. Some facts:

  • 14 days between initial access and lateral movement
  • 40 hours between lateral movement and encryption

Interesting approach for ransomware attacks, besides the classic technical details.

Then Sami Mokaddem presented "Automation with MISP workflows". Recently, a new feature was added to MISP: workflows. You may roughly compare it to a small SOAR tool inside MISP that helps trigger actions based on parameters. Examples of uses:

  • Chat notifications
  • Prevent publication with a specific sanity check 

After lunch, we had some lightning talks and “HZ Rat goes China – Following the tail of an unknown backdoor” by Axel Wauer. The malware was analyzed and some numbers were reported:

  • 120 samples analyzed
  • 2 delivery techniques
  • 3 malware versions identified
  • C2 servers are online
  • Campaign is ongoing for 2y
  • Focus on Asia/China

As usual, Andras Iklody presented some quick updates about MISP. What changed recently, what’s in the pipe. The amount of work is impressive: 16 releases, 3768 commits, and 100+ contributors.

Cyril Bras came on stage to present his view on sharing IOCs with customers and partners.

And we had another interesting tool to expand MISP's capabilities. Koen Van Impe presented his tool "Web Scraper". The idea is to feed a MISP instance with unstructured or structured information (CSV, TXT, …). You definitely need to check it out if you are struggling with a lot of documents to ingest into MISP!

Another topic that often comes up about MISP: how to use it in the cloud and, more importantly, how to scale it properly? Mikesh Nagar came to talk about his experience with high-level MISP instances hosted in the cloud.

The last presentation of the day was not public.

The second day started with another keynote, presented by Gregory Boddin: "Internet Leak Exchange Platform". LeakIX could be seen as a competitor of Onyphe, but they focus more on searching for vulnerabilities instead of "mapping the Internet". The idea can be summed up as "get there before the threat actors". Gregory explained how the platform works, what they search for, and how they handle collected data.

Then, Markus Ludwig gave a presentation called "Communities – the underestimated super power of CTI". Not technical at all; the idea was to take a broader view of the huge number of security researchers. There are more independent researchers than paid ones! The community is big, and people deserve credit to keep them motivated; in that case, they tend to contribute more! Great presentation!

Paul Jung presented "How to fail your TI usage?". I like the idea. When you have to support multiple infrastructures, tools, and customers, it's very easy to make mistakes and fail at distributing high-value CTI. Paul reviewed his top 14 fails and annoying things when using CTI for detection:

  • RFC1918 IPs, multicast IP, … (Tip: use warning lists from MISP)
  • Wrong URLs http://p or http://https://xxx -> Validate FQDN
  • Human FP – use top Alexa, in reports?
  • NDA & other funny limitations (TLP, sharing across multiple customers)
  • Automation issues
  • Babel issues (deal with tags/taxonomies/humans). Naming conventions!
  • Keep control (control what if you have … or not)
  • Hardware limitations: SSL Ex: This FQDN is useless without ID/path/…
  • Hardware limitation: SIEMs : Parameters to URLs can be removed, will never match
  • Hardware limitation: SIEMs’2: Too much data to ingest, try to reduce
  • Decaying and life-cycle
  • Maintaining TI takes time
  • Avoid “Void-ification” of events…An IP alone is not relevant (context)
  • Keep data relevant for an analyst (what kind of threat is it? ingress? egress?)

Then Louise Taggart presented her view on "Strategic Intelligence". Threat intelligence does not rely only on technical information. There are different levels:

  • Tactical “what” -> IOCs
  • Operational “how” -> TTPs
  • Strategic “why? what’s next? so what?” (context, motivation, …)

Then we switched back to technical content with a presentation by the French ANSSI: "Problematic of air-gapped MISP instances"… MISP is a tool that requires a lot of networking resources. By design, a MISP instance must exchange information (events) with other instances. But sometimes MISP is used in high-security environments that are disconnected from the Internet (air-gapped). In that case, how do you use MISP? How do you deploy it and feed it? Some processes were described, like how to build and deploy a MISP instance without connectivity. A tool was presented: sftp2misp. I have already worked with air-gapped MISP instances and, trust me, it's not easy!


Paul Rascagneres came on stage to talk about "The state of the art of webshells in 2022". Webshells remain a classic way to interact with a compromised computer. Once a vulnerability has been exploited, attackers drop a webshell to perform the next steps of the attack. Paul reviewed different types of webshells (of course, not all of them have the same quality and anti-detection defenses). The example of an in-memory webshell based on a Tomcat Valve was pretty cool.

After lunch and a new series of lightning talks, Antoine Cailliau presented his tool DocIntel. We all have thousands of CTI-related documents: blog posts, PDF reports, RSS feeds, … The goal of DocIntel is to index them, extract useful information, and enrich them. This way, you build your own CTI repository. I just installed my own instance as a test, and it deserves to be investigated.

The next talk was on a similar topic: "Report Curation and Threat Library – How to Organize Your Knowledge" by Patrick Grau.

Koen came back on stage to present "CTI Operational Procedures with Jupyter Notebooks and MISP". Jupyter notebooks have been pretty popular for a while. Koen explained how to use MISP and PyMISP with Jupyter notebooks to document your CTI operational procedures.

The next talk was “The holy grail for STIX and MISP format” by Christian Studer.

Quentin Jerome presented "WHIDS Update". WHIDS is an open-source EDR for Windows. It's a great project that is directly linked to MISP to fetch IOCs and detect suspicious activities.

During the lightning talks, some cool projects/tools were presented:

The event was intensive: two days of 30-minute talks with great content. Of course, the social dimension was the best part. It's so cool to see good old (and new!) friends. Teaser: a second edition of the CTI Summit should be organized next year in parallel to

All talks (except the non-TLP:CLEAR ones) have been recorded and are available on YouTube, thanks to Cooper, who did an awesome job as usual!

The post CTI-Summit 2022 Luxembourg Wrap-Up appeared first on /dev/random.

October 22, 2022

Normaal gesproken

Normaal gaan we in Vlaanderen niet al te extreem doen met onze houtkachel. Per slot van rekening hebben we allemaal wel of gas, elektriciteit of een warmtepomp of nog iets anders om onze woning te verwarmen.

De houtkachel werd of wordt meestal gebruikt om gezelligheid te bereiken.

Maar dit jaar hebben we een redelijk zware energiecrisis. Die zorgt ervoor dat ons gas heel erg duur is. Daarom gaan we een aantal alternatieven uitproberen.

Wat ik in mijn tuin zoal vond waren de eikels van een eikenboom. Enkele problemen met het verbranden van eikels:

  • Je moet de eikels verzamelen
  • De eikels zijn (vaak te) vochtig
  • De (verzamelde) eikels zijn gemengd met bladeren
  • Je moet de eikels snel in de kachel krijgen

De eikels verzamelen

Dit doe ik met een grashark. Ik geef voorkeur aan eikels die op verharde grond liggen. We moeten niet autistisch doen en proberen alle eikels te verzamelen. Meestal zijn er voldoende eikels die op de verharde grond liggen. Bovendien gebruik ik de hark om de bladeren te scheiden van de eikels (we zitten hier op aarde niet in een vacuüm en dus vallen de eikels sneller dan de bladeren. Je bent een ingenieur? Dan kan je die kennis gebruiken).

De eikels zijn te vochtig

Ik verzamel de eikels wanneer het droog weer is. Bovendien houd ik de eikels een paar dagen (zelfs weken) bij in een (open) emmer of container alvorens ik ze in de kachel gebruik. Hoe langer je ze kan drogen, hoe beter.

De eikels zijn (nog steeds) gemengd met bladeren

Die bladeren zijn vaak vuil en vochtig, wat rook veroorzaakt wanneer we de verzameling verbranden. Daarom heb ik een ijzeren raster op de emmers waarin ik de eikels laat vallen gelegd. Zo kunnen de bladeren eenvoudig gefilterd worden.

Je moet de eikels snel in de kachel krijgen.

Hoe langer het deurtje van de kachel open is, hoe meer rook er ontsnapt. Dat is niet goed want ongezond voor de mensbeesten die zich in de buurt van de kachel bevinden. Daarom heb ik geïnvesteerd in een ijzeren blik om de eikels uit een grotere emmer te kunnen scheppen in voldoende hoeveelheid per keer. Ik leg de eikels ook rondom het brandende vuur in plaats van in het vuur. Zodat pas later, wanneer de deur van de kachel reeds gesloten is (en er geen rook in de leefruimte meer kan ontsnappen), de eikels zullen verbranden.

Finally

Finally, I have an air purifier in the living room. Every morning I also open all the windows (this is rather important). In the evening, two windows that lie on a west-to-east line are often open as well. The wind in Flanders usually blows from west to east. Don't believe me? Have a look at how the runways of Zaventem airport are oriented, and look up why that is. You want the wind to sweep through your living room quickly, so that it carries away as much particulate matter as possible.

Of course, by opening the windows you lose some of the heat the stove generates. But if you think about that for a moment, it hardly matters: you got that heat for free by using acorns. Didn't you?

Or would you rather have your lungs fill up with particulate matter?

You can also close those windows again shortly after the stove door is closed. From a good, properly installed stove, literally no smoke escapes into the living room. Obviously you only want to do this with a good stove.


Is all of this good for the air quality of Flanders? Probably not.

My stove is a very modern DDG stove, in other words one that performs so-called complete combustion (and it is also connected to the central heating). But this is of course not good for the country's air quality.

I'm not sure there are many better alternatives at the moment: burning Russian gas at more than a thousand euros a month isn't really a good option either. Electricity is absurdly expensive. So there you have it.

The law

You may only burn untreated wood. In my view, acorns count as untreated wood. But well, that probably depends on interpretation.

October 20, 2022

With the latest MySQL release (8.0.31), MySQL adds support for the SQL standard INTERSECT and EXCEPT table operators.

Let’s have a look at how to use them.

We will use the following table:

CREATE TABLE `new` (
  `id` int NOT NULL AUTO_INCREMENT,
  `name` varchar(20) DEFAULT NULL,
  `tacos` int DEFAULT NULL,
  `sushis` int DEFAULT NULL,
  PRIMARY KEY (`id`)
);

For our team meeting, we will order tacos and sushi’s.

Each record represents the order of one team member:

select * from new;
+----+-------------+-------+--------+
| id | name        | tacos | sushis |
+----+-------------+-------+--------+
|  1 | Kenny       |  NULL |     10 |
|  2 | Miguel      |     5 |      0 |
|  3 | lefred      |     4 |      5 |
|  4 | Kajiyamasan |  NULL |     10 |
|  5 | Scott       |    10 |   NULL |
|  6 | Lenka       |  NULL |   NULL |
+----+-------------+-------+--------+


INTERSECT

The manual says that INTERSECT limits the result from multiple SELECT statements to those rows which are common to all. The INTERSECT operator is part of the ANSI/ISO SQL standard (ISO/IEC 9075-2:2016(E)).

We want to run two queries, the first one will list all records where the team member chose tacos and the second one will return all records where the person chose sushi’s.

The two separate queries are:

(query 1) select * from new where tacos>0;

(query 2) select * from new where sushis>0;

The only record that is present in both results is the one with id=3.

Let’s use INTERSECT to confirm that:

select * from new where tacos > 0
intersect
select * from new where sushis > 0;
+----+--------+-------+--------+
| id | name   | tacos | sushis |
+----+--------+-------+--------+
|  3 | lefred |     4 |      5 |
+----+--------+-------+--------+

Excellent! On previous versions of MySQL, the result of such a query would have been:

ERROR 1064 (42000): You have an error in your SQL syntax; 
check the manual that corresponds to your MySQL server version 
for the right syntax to use near 
'intersect select * from new where sushis > 0' at line 1


EXCEPT

In the manual, we can read that EXCEPT limits the result from the first SELECT statement to those rows which are (also) not found in the second.

Let’s find out all team members that will only eat tacos using EXCEPT:

select * from new where tacos > 0
except
select * from new where sushis > 0;
+----+--------+-------+--------+
| id | name   | tacos | sushis |
+----+--------+-------+--------+
|  2 | Miguel |     5 |      0 |
|  5 | Scott  |    10 |   NULL |
+----+--------+-------+--------+

And if we want to do the reverse and get all those that will only eat sushi’s, we swap the order of the queries like this:

select * from new where sushis > 0
except
select * from new where tacos > 0;
+----+-------------+-------+--------+
| id | name        | tacos | sushis |
+----+-------------+-------+--------+
|  1 | Kenny       |  NULL |     10 |
|  4 | Kajiyamasan |  NULL |     10 |
+----+-------------+-------+--------+

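If you don't have a MySQL 8.0.31 server at hand, you can experiment with the same standard operators in SQLite, which has supported INTERSECT and EXCEPT for a long time. A small Python sketch (an illustration, not lefred's code) reproducing the table above:

```python
# SQLite also implements the SQL standard INTERSECT and EXCEPT operators,
# so we can reproduce the example table and queries without a MySQL server.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE "new" (id INTEGER PRIMARY KEY, name TEXT, tacos INT, sushis INT)')
con.executemany('INSERT INTO "new" VALUES (?, ?, ?, ?)', [
    (1, "Kenny", None, 10),
    (2, "Miguel", 5, 0),
    (3, "lefred", 4, 5),
    (4, "Kajiyamasan", None, 10),
    (5, "Scott", 10, None),
    (6, "Lenka", None, None),
])

# Who ordered both tacos and sushi?
both = con.execute('SELECT name FROM "new" WHERE tacos > 0 '
                   'INTERSECT '
                   'SELECT name FROM "new" WHERE sushis > 0').fetchall()
print(both)  # [('lefred',)]

# Who ordered only tacos? (NULL > 0 is not true, so NULL rows drop out)
tacos_only = con.execute('SELECT name FROM "new" WHERE tacos > 0 '
                         'EXCEPT '
                         'SELECT name FROM "new" WHERE sushis > 0 '
                         'ORDER BY name').fetchall()
print(tacos_only)  # [('Miguel',), ('Scott',)]
```

The semantics match MySQL 8.0.31: INTERSECT keeps only rows common to both results, EXCEPT keeps the rows of the first result not present in the second.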

MySQL 8.0.31 continues the 8.0 tradition of adding support for SQL standard features such as Window Functions, Common Table Expressions, Lateral Derived Tables, JSON_TABLE, JSON_VALUE, …

Enjoy MySQL !

October 10, 2022

Today, at the peak of the biggest chess drama in years, I silently reached Elo 1300 on

Elo is the rating system used in chess, and Elo 1300 means I'm a "decent social player".

I shared this news with my family, and got something between a blank stare and an eye roll. I don't blame them.

If you think chess is boring, I encourage you to take a look at game 6 of the 2021 World Chess Championship.

I'll let you know when I get to Elo 1400. Being busy and all, that will likely take me several months.

M5Stack develops ESP32 microcontroller boards with a case and display that you could put right in your living room. They are ideal products to build a smart-home dashboard with. With UIFlow that is also quite easy to program, in a graphical way, so you don't need to know anything about programming code.

I have various temperature sensors in and around my house. I wanted a simple way to display the indoor and outdoor temperature somewhere. The M5Stack Core Ink seemed ideal for that: it is an ESP32 microcontroller board with a 1.54-inch e-ink display.


I have already programmed this kind of dashboard in several ways: with Homepoint, ESPHome and openHASP. Yet another way is UIFlow, a web-based development environment for M5Stack's products.

UIFlow only supports M5Stack's own hardware, not arbitrary ESP32 microcontroller boards. The advantage is that the drivers for all of their own products are built in. You can also 'draw' the graphical interface that you want to show on the display of, for example, the Core Ink in the web interface:


You then 'program' your software in Blockly, a graphical programming language in which you connect blocks. Under the hood, your Blockly code is converted to MicroPython. Functionality that UIFlow doesn't provide can therefore be added as MicroPython code.

In an article of mine you can read how I read the indoor and outdoor temperature from sensors via MQTT and show them on the Core Ink. The full code looks like this:


October 09, 2022

Yesterday I explained how to make a scrap computer do OCR on your scanner/printer’s scanned PDFs in case you have a SMB file share (a Windows file share) where the printer will write to.

I also promised I would make the E-mail feature of the printer send E-mails in which the attached PDFs have been OCR processed.

I had earlier explained how you can make your old scanner/printer support modern SMTP servers that have TLS, by introducing a scrap computer running Postfix to forward the E-mails for you. This article depends on that, of course. As we will let the scrap computer now do the OCR part. If you have not yet done that, first do it before continuing here.

I looked at Xavier Mertens' CuckooMX, and decided to massacre it until it did what I want it to do: namely, call ocrmypdf on the application/pdf attachments and then add the resulting PDF/A (which will have an OCR text layer) to the E-mail.

First install some extra software: apt-get install libmime-tools-perl. It will provide you with MIME::Tools; we will use MIME::Parser and MIME::Entity.

Create a Perl script called /usr/local/bin/ (chmod 755 it) that looks like this (which is Xavier's CuckooMX massacred and reduced to what I need – sorry Xavier. Maybe we could try to give CuckooMX a plugin-like infrastructure? But writing whatever looks suspicious to a database isn't what I'm aiming for here):


#!/usr/bin/perl
# Copyright note

use Digest::MD5;
use File::Path qw(make_path remove_tree);
use File::Temp;
use MIME::Parser;
use Sys::Syslog;
use strict;
use warnings;

use constant EX_TEMPFAIL => 75; # Mail sent to the deferred queue (retry)
use constant EX_UNAVAILABLE => 69; # Mail bounced to the sender (undeliverable)

my $syslogProgram   = "ocrpdf";
my $sendmailPath    = "/usr/sbin/sendmail";
my $syslogFacility  = "mail";
my $outputDir       = "/var/ocrpdf";
my $ocrmypdf        = "/usr/bin/ocrmypdf";

# Create our working directory
$outputDir = $outputDir . '/' . $$;
if (! -d $outputDir && !make_path("$outputDir", { mode => 0700 })) {
  syslogOutput("mkdir($outputDir) failed: $!");
  exit EX_TEMPFAIL;
}

# Save the mail from STDIN
if (!open(OUT, ">$outputDir/content.tmp")) {
  syslogOutput("Write to \"$outputDir/content.tmp\" failed: $!");
  exit EX_TEMPFAIL;
}
while (<STDIN>) {
  print OUT $_;
}
close(OUT);

# Save the sender & recipients passed by Postfix
if (!open(OUT, ">$outputDir/args.tmp")) {
  syslogOutput("Write to \"$outputDir/args.tmp\" failed: $!");
  exit EX_TEMPFAIL;
}
foreach my $arg (@ARGV) {
  print OUT $arg . " ";
}
close(OUT);

# Extract MIME types from the message
my $parser = new MIME::Parser;
$parser->output_under($outputDir);
my $entity = $parser->parse_open("$outputDir/content.tmp");

# Extract sender and recipient(s)
my $headers = $entity->head;
my $from = $headers->get('From');
my $to = $headers->get('To');
my $subject = $headers->get('Subject');

syslogOutput("Processing mail from: $from ($subject)");

# OCR the PDF attachments, then send the mail on its way
processMIMEParts($entity);
deliverMail($entity);

remove_tree($outputDir) or syslogOutput("Cannot delete \"$outputDir\": $!");

exit 0;

# processMIMEParts - Recursively walk the MIME tree and OCR the PDF parts
sub processMIMEParts {
  my $entity = shift || return;
  for my $part ($entity->parts) {
    if($part->mime_type eq 'multipart/alternative' ||
       $part->mime_type eq 'multipart/related' ||
       $part->mime_type eq 'multipart/mixed' ||
       $part->mime_type eq 'multipart/signed' ||
       $part->mime_type eq 'multipart/report' ||
       $part->mime_type eq 'message/rfc822' ) {
         # Recursively process the message
         processMIMEParts($part);
    } else {
      if( $part->mime_type eq 'application/pdf' ) {
        my $type = lc $part->mime_type;
        my $bh = $part->bodyhandle;
        syslogOutput("OCR for: \"" . $bh->{MB_Path} . "\" (" . $type . ") to \"" . $bh->{MB_Path} . ".ocr.pdf" . "\"" );
        # Perform the OCR scan, output to a new file
        system($ocrmypdf, $bh->{MB_Path}, $bh->{MB_Path} . ".ocr.pdf");
        # Add the new file as attachment
        $entity->attach(Path     => $bh->{MB_Path} . ".ocr.pdf",
                        Type     => "application/pdf",
                        Encoding => "base64");
      }
    }
  }
}

# deliverMail - Send the mail back
sub deliverMail {
  my $entity = shift || return;

  # Write the changed entity to a temporary file
  if (! open(FH, '>', "$outputDir/outfile.tmp")) {
    syslogOutput("deliverMail: cannot write $outputDir/outfile.tmp: $!");
    exit EX_TEMPFAIL;
  }
  $entity->print(\*FH);
  close(FH);

  # Read saved arguments
  if (! open(IN, "$outputDir/args.tmp")) {
    syslogOutput("deliverMail: Cannot read $outputDir/args.tmp: $!");
    exit EX_TEMPFAIL;
  }
  my $sendmailArgs = <IN>;
  close(IN);

  # Read mail content from temporary file of changed entity
  if (! open(IN, "$outputDir/outfile.tmp")) {
    syslogOutput("deliverMail: Cannot read $outputDir/outfile.tmp: $!");
    exit EX_TEMPFAIL;
  }

  # Spawn a sendmail process
  syslogOutput("Spawn=$sendmailPath -G -i $sendmailArgs");
  if (! open(SENDMAIL, "|$sendmailPath -G -i $sendmailArgs")) {
    syslogOutput("deliverMail: Cannot spawn: $sendmailPath $sendmailArgs: $!");
    exit EX_TEMPFAIL;
  }
  while (<IN>) {
    print SENDMAIL $_;
  }
  close(IN);
  close(SENDMAIL);
}

# syslogOutput - Send Syslog message using the defined facility
sub syslogOutput {
  my $msg = shift or return(0);
  openlog($syslogProgram, 'pid', $syslogFacility);
  syslog('info', '%s', $msg);
  closelog();
}

Now we just do what Xavier's CuckooMX documentation also tells you to do: add it to Postfix's master.cf.

Create a UNIX user: adduser ocrpdf

Change the smtp service:

smtp      inet  n       -       -       -       -       smtpd
  -o content_filter=ocrpdf

Create a new service:

ocrpdf  unix  -       n       n       -       -       pipe
  user=ocrpdf argv=/usr/local/bin/ -f ${sender} ${recipient}

October 08, 2022

Modern printers can do OCR on your scans. But as we talked about last time, not all printers or scanners are modern.

We have a scrap computer that is (already) catching all E-mails on a badly configured local SMTP server and forwarding them to a well configured SMTP server that has TLS. Now we also want to do OCR on the scanned PDFs.

My printer has a so-called Network Scan function that scans to a SMB file share (that's a Windows share). The scrap computer is configured to share /var/scan using Samba as 'share', of course. The printer is configured to use that share. Note that for very old printers you might need this in smb.conf:

client min protocol = LANMAN1
server min protocol = LANMAN1
client lanman auth = yes
client ntlmv2 auth = no
client plaintext auth = yes
ntlm auth = yes
security = share

And of course also something like this:

[share]
path = /var/scan
writable = yes
browsable = yes
guest ok = yes
public = yes
create mask = 0777

First install software: apt-get install ocrmypdf inotify-tools screen bash

We need a script that performs an OCR scan on a PDF. We'll use it here in another script that monitors /var/scan for changes. Later, in another post, I'll explain how to use it from Postfix on the attachments of an E-mail. Here is /usr/local/bin/

#!/bin/sh
# $1 is the PDF to OCR; the result goes to /var/scan/ocr/
DIR=/var/scan
a="$1"
TMP=`mktemp -d -t XXXXX`
mkdir -p $DIR/ocr
cd $DIR
TIMESTAMP=`stat -c %Y "$a"`
ocrmypdf --force-ocr "$a" "$TMP/OCR-$a"
mv -f "$TMP/OCR-$a" "$DIR/ocr/$TIMESTAMP-$a"
chmod 777 "$DIR/ocr/$TIMESTAMP-$a"
cd /tmp
rm -rf $TMP

Note that I prepend the filename with a timestamp, because my printer has no way to give the scanned files a filename that is useful for my archiving purposes. You can of course do this differently.

Now we want a script that monitors /var/scan and launches that script in the background each time a file is created.

My Xerox WorkCentre 7232 uses a directory called SCANFILE.LCK/ for its own file locking. When it is finished with a SCANFILE.PDF it deletes that LCK directory.

Being bad software developers, the Xerox people didn't use a POSIX rename of SCANFILE.PDF to get an atomic write operation at the end.

It looks like this:

inotifywait -r -m  /var/scan | 
while read file_path file_event file_name; do
echo ${file_path}${file_name} event: ${file_event}
done
Setting up watches. Beware: since -r was given, this may take a while!
Watches established.
/var/scan/ event: OPEN,ISDIR
/var/scan/ event: ACCESS,ISDIR
/var/scan/ event: ACCESS,ISDIR
/var/scan/ event: OPEN,ISDIR
/var/scan/ event: ACCESS,ISDIR
/var/scan/ event: ACCESS,ISDIR
/var/scan/XEROXSCAN003.LCK event: CREATE,ISDIR
/var/scan/XEROXSCAN003.LCK event: OPEN,ISDIR
/var/scan/XEROXSCAN003.LCK event: ACCESS,ISDIR
/var/scan/ event: OPEN,ISDIR
/var/scan/ event: ACCESS,ISDIR
/var/scan/ event: ACCESS,ISDIR
/var/scan/ event: OPEN,ISDIR
/var/scan/ event: ACCESS,ISDIR
/var/scan/XEROXSCAN003.PDF event: CREATE
/var/scan/XEROXSCAN003.PDF event: OPEN
/var/scan/XEROXSCAN003.PDF event: MODIFY
/var/scan/XEROXSCAN003.PDF event: MODIFY
/var/scan/XEROXSCAN003.PDF event: MODIFY
/var/scan/XEROXSCAN003.PDF event: MODIFY
/var/scan/XEROXSCAN003.PDF event: ATTRIB
/var/scan/XEROXSCAN003.LCK event: OPEN,ISDIR
/var/scan/XEROXSCAN003.LCK/ event: OPEN,ISDIR
/var/scan/XEROXSCAN003.LCK event: ACCESS,ISDIR
/var/scan/XEROXSCAN003.LCK/ event: ACCESS,ISDIR
/var/scan/XEROXSCAN003.LCK event: ACCESS,ISDIR
/var/scan/XEROXSCAN003.LCK/ event: ACCESS,ISDIR
/var/scan/XEROXSCAN003.LCK/ event: DELETE_SELF
/var/scan/XEROXSCAN003.LCK event: DELETE,ISDIR

The printer deleting that SCANFILE.LCK/ directory is a good moment to start our OCR script (call it for example /usr/local/bin/

#!/bin/bash
suffix=".LCK"
inotifywait -r -m -e delete /var/scan |
while read file_path file_event file_name; do
if [ ${file_event} = "DELETE,ISDIR" ]; then
if [[ ${file_name} == *"LCK" ]]; then
filename=`echo ${file_name} | sed -e "s/$suffix$//"`.PDF
/usr/local/bin/ $filename &
fi
fi
done

Give both scripts 755 permissions with chmod and now you just run screen /usr/local/bin/

When your printer was written by good software developers, it will do POSIX rename. That looks like this (yes, also when done over a SMB network share):

inotifywait -r -m  /var/scan | 
while read file_path file_event file_name; do
echo ${file_path}${file_name} event: ${file_event}
done
Setting up watches. Beware: since -r was given, this may take a while!
Watches established.
/var/scan/ event: OPEN,ISDIR
/var/scan/ event: ACCESS,ISDIR
/var/scan/ event: OPEN,ISDIR
/var/scan/ event: ACCESS,ISDIR
/var/scan/ event: OPEN,ISDIR
/var/scan/ event: ACCESS,ISDIR
/var/scan/ event: OPEN,ISDIR
/var/scan/ event: ACCESS,ISDIR
/var/scan/.tmp123.GOODBRANDSCAN-123.PDF event: CREATE
/var/scan/.tmp123.GOODBRANDSCAN-123.PDF event: OPEN
/var/scan/.tmp123.GOODBRANDSCAN-123.PDF event: MODIFY
/var/scan/.tmp123.GOODBRANDSCAN-123.PDF event: MOVED_FROM
/var/scan/GOODBRANDSCAN-123.PDF event: MOVED_TO

That means that your parameters for inotifywait could be -r -m -e moved_to, and in ${file_name} you'll get that GOODBRANDSCAN-123.PDF. This is of course better than Xerox's not-invented-here LCK scheme, which probably also wouldn't be necessary with a POSIX rename call.
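For the curious, the write-then-rename pattern that well-behaved firmware uses is trivial to implement. A minimal Python sketch (file names invented for illustration):

```python
# Atomic write: write to a hidden temp file in the target directory, then
# rename it into place. POSIX guarantees rename(2) is atomic, so a reader
# (or an inotify watcher listening for MOVED_TO) never sees a half-written
# file. File names here are invented for illustration.
import os
import tempfile

def atomic_write(path, data):
    directory = os.path.dirname(os.path.abspath(path))
    # The temp file must be on the same filesystem as the target,
    # otherwise os.rename() would fail with EXDEV.
    fd, tmp = tempfile.mkstemp(dir=directory, prefix=".tmp.")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make sure the bytes hit the disk first
        os.rename(tmp, path)      # the atomic step: MOVED_TO fires here
    except BaseException:
        os.unlink(tmp)
        raise

atomic_write("GOODBRANDSCAN-123.PDF", b"%PDF-1.4 fake scan data")
```

With this pattern, no lock directories are needed at all: the file either doesn't exist yet or is complete.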

I will document how to do this to the E-mail feature of the printer with Postfix later.

I first need a moment in my life where I actually need this badly enough that I will start figuring out how to extract certain attachment MIME parts from an E-mail with Postfix. I guess I will have to look into CuckooMX by Xavier Mertens for that. Update: that article is available now.


October 01, 2022

I think that's a good idea. But only if women have to go too.

Dying for your country is not heavy labour. Women are just as suited to it as men. All you have to do is wait calmly, in a self-dug hole or in the trenches (which these days, what luxury, are dug for you by excavators), until the enemy's artillery blasts the legs and arms off your body. During that spectacle, you usually first watch the same thing happen a few times to your girlfriends in the foxhole a little further along.

While you wait you have diarrhoea, which leaves you constantly dehydrated; the rats come gnawing at your toes; your fellow fighters go violently mad (because they have watched their friends get shot to pieces); there is constant bombardment all night long, where every bang could have been a bang on your own foxhole; you see people burned alive by phosphorus rain; and you continuously get idiotic orders from the commanding officers (who, by the way, can be women: no problem at all, that works too. That works too).

That assumes a summer or spring situation. In winter it is also a few degrees below zero all night while you sit there. At least if you are stationed here with us, and not closer to Russia, where it is minus 20 or minus 30 and you simply freeze to death. While the rats gnaw at your toes and you have diarrhoea. Just in case you forgot about that.

Being trained for that moment is something that should be entirely 'equal'. Like the toilets, we should have unisex trenches. As the current generation wants it, we should therefore also have unisex military service.

I'm all in favour. As long as the women have to go too. We all have to do our part. Don't we?

September 29, 2022

I have a dual boot on my desktop pc: Windows 11 and Ubuntu Linux. I hardly ever use the Windows installation. Maybe for some games, but Steam has gotten better and better at supporting games on Linux. Or when I need to log in on some government website with my eID and can't use the ItsMe app.

Many moons ago I did a boo-boo: for some reason I felt that I had to make my EFI system partition bigger. Which also meant resizing and moving all other partitions. Linux didn’t flinch but Windows pooped in its pants. Apparently that operating system is soooo legacy that it can’t cope with a simple partition move. I tried to fix it using a Windows system repair disk but the damn thing just couldn’t be arsed.

The partitions on my first hard disk

For a long time I just couldn’t be bothered with any further repair attempts. I don’t need that Windows anyway. I can always run Windows in VirtualBox if I really need it. It also means that I can nuke a 414 GiB partition and use that space for better things. As you can see in the screenshot, I mounted it on /mnt/windows with the intention of copying the directory Users/Amedee to Linux, in case there was still something of value there. Probably not, but better safe than sorry.

There’s just one small snag: for the life of me, I couldn’t find a Windows activation key, or remember where I put it. It’s not an OEM PC so the key isn’t stored in the BIOS. And I didn’t want to waste money on buying another license for an operating system that I hardly ever use.

I googled for methods to retrieve the Windows activation key. Some methods involve typing a command on the command prompt of a functioning Windows operating system, so those were not useful for me. Another method is just reading the activation key from the Windows Registry:

Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SoftwareProtectionPlatform\BackupProductKeyDefault

I don’t need a working Windows operating system to read Registry keys, I can just mount the Windows filesystem in Linux and query the Registry database files in /Windows/System32/config/. I found 2 tools for that purpose: hivexget and reglookup.


hivexget

This one is the simplest: it directly outputs the value of a registry key.


sudo apt install --yes libhivex-bin


hivexget /mnt/windows/Windows/System32/config/SOFTWARE \
     "\Microsoft\Windows NT\CurrentVersion\SoftwareProtectionPlatform" \
     BackupProductKeyDefault


reglookup

This requires a bit more typing.


sudo apt install --yes reglookup


reglookup -p "/Microsoft/Windows NT/CurrentVersion/SoftwareProtectionPlatform/BackupProductKeyDefault" \
     /mnt/windows/Windows/System32/config/SOFTWARE
/Microsoft/Windows NT/CurrentVersion/SoftwareProtectionPlatform/BackupProductKeyDefault,SZ,XXXXX-XXXXX-XXXXX-XXXXX-XXXXX,

The output has a header and is comma separated. Using -H removes the header, and then cut does the rest of the work:

reglookup -H -p "/Microsoft/Windows NT/CurrentVersion/SoftwareProtectionPlatform/BackupProductKeyDefault" \
     /mnt/windows/Windows/System32/config/SOFTWARE \
     | cut --delimiter="," --fields=3
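To see what that pipeline extracts, you can feed cut a single reglookup-style line (the key below is a redacted placeholder, not a real product key):

```shell
# One fake line of reglookup CSV output (fields: PATH,TYPE,VALUE,MTIME)
line='/Microsoft/Windows NT/CurrentVersion/SoftwareProtectionPlatform/BackupProductKeyDefault,SZ,XXXXX-XXXXX-XXXXX-XXXXX-XXXXX,'
# Field 3 is the value column, i.e. the product key
key=$(printf '%s\n' "$line" | cut --delimiter="," --fields=3)
echo "$key"
```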

September 28, 2022

We now invite proposals for developer rooms for FOSDEM 2023. FOSDEM offers open source and free software developers a place to meet, share ideas and collaborate. Renowned for being highly developer-oriented, the event brings together some 8000+ geeks from all over the world. The twenty-third edition will take place Saturday 4th and Sunday 5th February 2023 in Brussels, Belgium. Developer rooms are assigned to self-organising groups to work together on open source and free software projects, to discuss topics relevant to a broader subset of the community, etc. Most content should take the form of presentations. Proposals involving collaboration…