Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

December 12, 2025

Today marks exactly 25 years since I registered amedee.be. On 12 December 2000, at 17:15 CET, my own domain officially entered the world. It feels like a different era: an internet of static pages, squealing dial-up modems, and websites you assembled yourself with HTML, stubbornness, and whatever tools you could scrape together. 🧑‍💻📟

I had websites before that—my first one must have been around 1996, hosted on university servers or one of those free hosting platforms that have long since disappeared. There is no trace of those early experiments, and that’s probably for the best. Frames, animated GIFs, questionable colour schemes… it was all part of the charm. 💾✨

But amedee.be was the moment I claimed a place on the internet that was truly mine. And not just a website: from the very beginning, I also used the domain for email, which added a level of permanence and identity that those free services never could. 📬

Over the past 25 years, I have used more content management systems than I can easily list. I started with plain static HTML. Then came a parade of platforms that now feel almost archaeological: self-written Perl scripts, TikiWiki, XOOPS, Drupal… and eventually WordPress, where the site still lives today. I’m probably forgetting a few—experience tends to blur after a quarter century online. 🗂🕸

Not all of that content survived. I’ve lost plenty along the way: server crashes, rushed or ill-planned CMS migrations, and the occasional period of heroically inadequate backups. I hope I’ve learned something from each of those episodes. Fortunately, parts of the site’s history can still be explored through the Wayback Machine at the Internet Archive—a kind of external memory for the things I didn’t manage to preserve myself. 📉🧠📚

The hosting story is just as varied. The site spent many years at Hetzner, had a period on AWS, and has been running on DigitalOcean for about a year now. I’m sure there were other stops in between—ones I may have forgotten for good reasons. ☁🔧

What has remained constant is this: amedee.be is my space to write, tinker, and occasionally publish something that turns out useful for someone else. A digital layer of 25 years is nothing to take lightly. It feels a bit like personal archaeology—still growing with each passing year. 🏺📝

Here’s to the next 25 years. I’m curious which tools, platforms, ideas, and inevitable mishaps I’ll encounter along the way. One thing is certain: as long as the internet exists, I’ll be here somewhere. 🚀

My mom as a newborn in her mother's arms, surrounded by my grandparents and great-grandparents.

I never knew my great grandparents. They left no diary, no letters, only a handful of photographs. Sometimes I look at those photos and wonder what they cared about. What were their days like? What made them laugh? What problems were they working through?

Then I realize it could be different for my descendants. A long-running blog like mine is effectively an autobiography.

So far, it captures nearly twenty years of my life: my PhD work, the birth of my children, and the years of learning how to lead Drupal and build a community. It even captures the excitement of starting two companies, and the lessons I learned along the way.

And in recent years, it captures the late night posts where I try to make sense of what AI might change. They are a snapshot of a world in transition. One day, it may be hard to remember AI was ever new.

In a way, a blog is a digital time capsule. It is the kind of record I wish my great grandparents had left behind.

I did not start blogging with this in mind. I wrote to share ideas, to think out loud, to guide the Drupal community, and to connect with others. The personal archive was a side effect.

Now I see it differently. Somewhere in there is a version of me becoming a father. A version trying to figure out how to build something that lasts. A version wrestling, late at night, with technology changes happening in front of me.

If my grandchildren ever want to know who I was, they will not have to guess. They will be able to hear my voice.

If that idea feels compelling, this might be a good time to start a blog or a website. Not to build a large audience, but just to leave a trail. Future you may be grateful you began.

December 11, 2025

Let’s now see how we can connect to our MySQL HeatWave DB System, which was deployed with the OCI Hackathon Starter Kit in part 1. There are multiple ways to connect to the DB System, and we will use three of them. MySQL Shell on the command line: MySQL Shell is already installed on the […]

December 10, 2025

In The Big Blind, investor Fred Wilson highlights how AI is revolutionizing geothermal energy discovery. This sentence stood out for me:

It is wonderfully ironic that the technology that is creating an energy crisis is also a potential solve for that energy crisis.

AI consumes massive amounts of electricity, but also helps to discover new sources of clean energy. The source of demand might become the source of supply. I emphasized the word "might" because history is not on our side.

But energy scarcity is a "discovery problem", and AI excels at discovery. Geothermal energy was always there; the bottleneck was our ability to find it. AI can analyze seismic data, geological surveys, and satellite imagery at a scale no human can match.

The quote stood out because technology rarely cleans up its own mess. More cars don't fix traffic and more coal doesn't fix pollution. Usually it takes a different technology to clean up the mess, if it can be cleaned up at all. For example, the internet created information overload, and search engines were invented to help manage it.

But here, for AI and energy, the system using the resource might also be the one capable of discovering more of it.

I see a similar pattern in open source.

Most open source projects depend on a small group of maintainers who review code, maintain infrastructure and keep everything running. They shoulder a disproportionate share of the work.

AI risks adding to that burden. It makes it easier for people to generate code and submit pull requests, but reviewing those code contributions still falls on the same few maintainers. When contributions scale up, review capacity has to keep pace.

And just like with energy discovery, AI might also be the solution. There already exist AI-powered review tools that can scan pull requests, enforce project standards and surface issues before a human even looks at them. If you believe AI-generated code is here to stay (I do), AI-assisted review might not be optional.

I'm no Fred Wilson, but as an occasional angel investor, such review tools look like a good way to go long on vibe coding. And as Drupal's Project Lead, I'd love to collaborate with the providers of these tools. If we can make open source maintenance more scalable and sustainable, everyone benefits.

So yes, the technology making a situation worse might also be capable of helping to solve it. That is rare enough to pay attention to.

The autocompletion of our intentions

When I started using my first smartphone, in 2012, I naturally used the default keyboard, which offered autocompletion.

It only took a few days for me to be deeply shocked. The autocompletion suggested words that fit perfectly, but that were not the ones I had in mind. By accepting a suggestion out of a desire to save a few taps, I ended up with a sentence different from what I had originally intended. I was changing the course of my thinking to fit the algorithm!

It was shocking!

Having switched to Bépo a few years earlier and discovered the power of touch typing for refining my ideas, I could not imagine letting a machine dictate my thoughts, even for a text as mundane as an SMS. So I went looking for a keyboard optimized for use on a tiny touchscreen, but without autocompletion. I found MessagEase, which I used for years before switching to ThumbKey, a free-software version of the former.

The shock was even more violent when suggested replies appeared in the Gmail interface. My first experience with that system was being offered several ways to reply affirmatively to a work email I wanted to answer negatively. With horror, I noticed in myself a vague urge to click, just to get rid of that chore of an email faster.

That experience inspired my short story « Les imposteurs », which can be read in the collection « Stagiaire au spatioport Omega 3000 et autres joyeusetés que nous réserve le futur » (which happens to be available at 50% off until 15 December, or at the normal price but with no shipping costs from your local bookshop).

Autocompletion manipulates our intention, there is no doubt about it. And if there is one thing I want to preserve in myself, it is my brain and my ideas. Just as a footballer protects his legs, just as a pianist protects his hands, I cherish and protect my brain and my free will. To the point of never drinking alcohol and never taking any drug: I do not want to alter my perceptions but, on the contrary, to sharpen them.

My brain is the most precious thing I have; even the most basic autocompletion is a direct attack on my free will.

But with chatbots, a genuine economy of intention is now taking shape. Because if the next versions of ChatGPT are not better at answering your questions, they will be better at predicting them.

Not through any power of divination or telepathy, but because they will have influenced you, steering you in the direction they have chosen: the most profitable one.

Part of the disproportionate interest that politicians and CEOs show in chatbots clearly comes from their incompetence, or even their stupidity. Since their job is to say what the audience wants to hear, even when it makes no sense, they are sincerely astonished to see a machine capable of replacing them. And they are usually unable to perceive that not everyone is like them, that not everyone spends all day pretending to understand, that not everyone is Julius.

But among the most cunning and the most intelligent, part of that interest can also be explained by the potential for manipulating crowds. Where Facebook and TikTok have occasionally influenced major elections through virtual crowd movements, the ubiquity of ChatGPT and its ilk allows total control over the most intimate thoughts of every user.

After all, in my neighbourhood grocery store I did overhear a woman bragging to a friend about using ChatGPT as an advisor for her love life. From there, it is trivial to modify the code so that women become more docile, more inclined to sacrifice their personal aspirations for their partner's, to produce more children and to raise them according to ChatGPTesque precepts.

Unlike solving "hallucinations", an unsolvable problem because chatbots have no notion of epistemological truth, introducing biases is trivial. In fact, it has been demonstrated several times that these biases already exist. We just naively assumed they were unintentional, mechanical.

Whereas they are a formidable product to sell to every aspiring dictator. A product that is certainly profitable, and not very far from the ad targeting that Facebook and Google already sell.

A product that appears perfectly ethical, appropriate, and even beneficial to humanity. At least if we trust what ChatGPT tells us. Which will, moreover, back up its claims by pointing to several scientific articles. Written with its help.

I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

December 09, 2025

The room formerly known as “Lightning Talks” is now known as /dev/random. After 25 years, we say goodbye to the old Lightning Talks format. In its place, we have two new things! /dev/random: 15-minute talks on a random, interesting, FOSS-related subject, just like the older Lightning Talks. New Lightning Talks: a highly condensed batch of 5-minute quick talks in the main auditorium on various FOSS-related subjects! Last year we experimented with running a more spontaneous lightning talk format, with a submission deadline closer to the event and strict short time limits (under five minutes) for each speaker. The experiment […]

This week, Ruby on Rails creator David Heinemeier Hansson and WordPress founding developer Matt Mullenweg started fighting about what "open source" means. I've spent twenty years working on open source sustainability, and I have some thoughts.

David Heinemeier Hansson (also known as DHH) released a new kanban tool, Fizzy, this week and called it open source.

People quickly pointed out that the O'Saasy license that Fizzy is released under blocks others from offering a competing SaaS version, which violates the Open Source Initiative's definition. When challenged, he brushed it off on X and said, "You know this is just some shit people made up, right?". He followed with "Open source is when the source is open. Simple as that".

This morning, Matt Mullenweg rightly pushed back. He argued that you can't ignore the Open Source Initiative definition. He compared it to North Korea calling itself a democracy. A clumsy analogy, but the point stands.

Look, the term "open source" has a specific, shared meaning. It is not a loose idea and not something you can repurpose for marketing. Thousands of people shaped that definition over decades. Ignoring that work means benefiting from the community while setting aside its rules.

This whole debate becomes spicier knowing that DHH was on Lex Fridman's podcast only a few months ago, appealing to the spirit and ethics of open source to criticize Matt's handling of the WP Engine dispute. If the definition is just "shit people made up", what spirit was Matt violating?

The definition debate matters, but the bigger issue here is sustainability. DHH's choice of license reacts to a real pressure in open source: many companies make real money from open source software while leaving the hard work of building and maintaining it to others.

This tension also played a role in Matt's fight with WP Engine, so he and DHH share some common ground, even if they handle it differently. We see the same thing in Drupal, where contributions from the biggest companies in our ecosystem are extremely uneven.

DHH can experiment because Fizzy is new. He can choose a different license and see how it works. Matt can't: WordPress has been licensed under the GPL for more than twenty years, and changing that now is virtually impossible.

Both conversations are important, but watching two of the most influential people in open source argue about definitions while we all wrestle with free riders feels a bit like firefighters arguing about hose lengths during a fire.

The definition debate matters because open source only works when we agree on what the term means. But sustainability decides whether projects like Drupal, WordPress, and Ruby on Rails keep thriving for decades to come. That is the conversation we need to have.

In Drupal, we are experimenting with contribution credits and with guiding work toward companies that support the project. These ideas have helped, but they have not solved the imbalance.

Six years ago I wrote in my Makers and Takers blog post that I would love to see new licenses that "encourage software free riding", but "discourage customer free riding". O'Saasy is exactly that kind of experiment.

A more accurate framing would be that Fizzy is source available. You can read it, run it, and modify it. But DHH's company is keeping the SaaS rights because they want to be able to build a sustainable business. That is defensible and generous, but it is not open source.

I still do not have the full answer to the open source sustainability problem. I have been wrestling with it for more than twenty years. But I do know that reframing the term "open source" is not the solution.

Some questions are worth asking, and answering:

  • How do we distinguish between companies that can't contribute and those that won't?
  • What actually changes corporate behavior: shame, self-interest, punitive action, exclusive benefits, or regulation?

If this latest debate brings more attention to these questions, some good may come from it.

December 08, 2025

Elizabeth Spiers recently wrote a great retrospective on early blogging. Spiers was the founding editor of Gawker, a provocative blog focused on celebrities and the media industry. She left in 2003, more than a decade before the site went bankrupt after a lawsuit by Hulk Hogan, funded by Peter Thiel.

Today, she continues to blog on her own site, and she captured the difference between the early web and social media perfectly:

I think of this now as the difference between living in a house you built that requires some effort to visit and going into a town square where there are not particularly rigorous laws about whether or not someone can punch you in the face.

In the early days of blogging, responding to someone's post took real work. You had to write something on your own site and hope they noticed. As Spiers puts it, if someone wanted to engage with you, they had to come to your house and be civil before you'd let them in. If a troll wanted to attack you, they had to do it on their own site and hope you took the bait. Otherwise, no one would see it.

It's a reminder that friction can be a feature, not a bug. Having to write on your own blog filtered out low-effort and low-quality responses. Social media removed that friction. That has real benefits: more voices, faster conversations, lower barriers to entry. But it also means the town square gets crowded fast, and some people come just to shout.

It's the same in real life. When I think about the best conversations I've had, they happened in someone's living room or around a dinner table, not out in a busy public square, which often feels better suited for protests and parades. It works the same way on the web, which is why I'm barely active on social media anymore.

I experienced this tension on my own blog. For years I had anonymous comments enabled. I've always believed in the two-way nature of the web, and I still do. But eventually I turned comments off. Every month I wonder if I should bring them back.

But as my blog gained traction, the quality of the comments had become uneven. There were more off-topic questions, sloppy writing, and the occasional troll. Of course, there were still great comments that led to real conversations, and those make me rethink turning comments back on. But between Drupal, Acquia, and family, I stopped having the time to moderate.

These days the thoughtful responses come by email. It takes more effort than a comment, so the people who write usually have something substantive to say. The downside is that these exchanges stay private, which can be a shame.

What I like the most about Spiers' blog post is that the early web didn't just enable better conversation. It required it. You had to say something interesting enough that someone would bookmark your URL and come back. Maybe that is the thing worth protecting: not the lack of a comments section, but the kind of friction that rewards effort.

In that spirit, I'm going to make an effort to link to more blog posts worth visiting. Consider this me knocking on Spiers' door.

December 05, 2025

Fuck, I'm turning 57 next week! Actually, I've made my peace with that, just as I have with my balding head. But then again, people occasionally ask themselves who they are, and I'm no exception. Among other things, I am…


But it's prettier!

The new version of your website is unusable, your emails are long and unreadable, your slides and your charts are hollow. But I hear your argument for justifying all this crap:

It's prettier!

We wreck everything to make it "pretty". Even application icons have become indistinguishable from one another, because "it's prettier".

Pretty is killing us!

R.L. Dane realizes that what he misses most is his black-and-white computer. Not because he wants to go back to black and white, but because the very fact that some computers were black and white forced designers to create interfaces that were readable and simple under all conditions.

That is exactly what I hold against all modern websites and all apps. Their designers should be forced to use them on an old computer or on a slightly dated smartphone. It's only pretty if you happen to have the latest smartphone with the latest fashionable spyware.

I despise "pretty". Pretty is total unculture; it is Trump slathering gilding everywhere; it is the reign of kitsch and incompetence. Your tool is pretty simply because you don't know how to use it! Because you have forgotten that competent people could use it. Pretty is the enemy of practicality.

Pretty is the enemy of beautiful.

The beautiful is deep, artistic, considered, simple. The beautiful requires an education, a difficulty. A seasoned craftsman marvels at the finesse and simplicity of a tool. The brain-dead consumer prefers the version with glitter. The music lover appreciates a performance in a concert hall, while your connected speaker inflicts a dull, flat noise on every passerby in the park. In French, "pretty" (joli) adds a single letter to "beautiful" (beau) and transforms it: the "beauf", the tacky philistine!

Yes, your ChatGPTesque logorrhea is beauf. Your Midjourney-generated images are the height of bad taste. Your YouTube channel is frighteningly banal. Your podcasts are nothing but boredom-filler for your jog. The umpteenth redesign of your app is nothing but the mark of your lack of culture. Your PowerPoint slides and your LinkedIn posts border on clinical cretinism.

Thierry Crouzet talks about the addiction to pleasing. But even that is wrong. We don't really want to please, just to obey algorithms to increase our follower count. We want a profile that looks "pretty".

Against candy-pink kitsch, headbanging. Against technofascism, uncool technopunk!

Yes, but it works!

Pretty is a candy, a sweet. That has never been truer than on social networks, an analogy I have been using for more than a decade.

On their gemlog, Asquare explores the concept further in a very interesting way: they suggest that the worse the sweets are, the more of them we consume. (Yes, it's on Gemini, an un-pretty thing that requires a dedicated browser, and it has nothing to do with Google's AI.)

And it makes sense: if you eat a piece of excellent chocolate with a cup of quality tea, you won't feel like taking ten pieces and gorging yourself. Conversely, industrial chocolate gives a slight satisfaction, but not enough: you always want more.

It's the same with social networks: the more you scroll through vaguely interesting stuff, the more you keep going. Crap is addictive! And the more content there is, the lower the average quality gets; that has been demonstrated.

Which is unfair to crap, because, as many readers point out to me, crap makes excellent compost for growing good things. That is not the case with social networks, which mostly grow cretinism and fascism.

At the opposite end of this "pretty crap", if, like me, you subscribe to excellent blogs, you read an article and it makes you think. Thierry Crouzet's November notebook, for example, gives me a lot to think about. After reading it, I don't feel like flitting about. I stop, I ask myself questions, I want to think about it, but also to savour it.

But nothing beats, for me, the flavour, the beauty of a good book!

The craftsmanship behind the beauty of books

I meet too many people who confide that they love reading but "no longer have time to read". The same people, however, are hyperactive on social networks, on endless WhatsApp or Discord groups. It's like claiming you're not hungry enough to eat your vegetables because you eat sweets all day long. Of course: we have a vending machine in our pocket! And since a sweet is never really satisfying, we take another one… It's the same with certain mass-produced bestsellers!

But I mentioned above that appreciating beauty requires competence. The PVH publishing house has decided to lay itself bare, exposing the work behind each book to justify the price of the object. It's something I keep noticing: the better we understand a piece of work, the more we appreciate it. This stands in contrast to shrink-wrapped industrial food, which presupposes "not knowing how it's made".

Besides these explanations, two of my books are at 50% off until 15 December. If you don't have a bookshop near you, this is the moment to make the shipping costs worthwhile (they have exploded, thanks to the French postal service).

The craftsmanship behind writing

PVH has just released a fantasy novel written by ten hands: « Le bastion des dégradés ».

Julien Hirt, one of the co-authors and the author of Carcinopolis (which I warmly recommend if you are not the sensitive type), describes the process in a fascinating post.

Authors are like cats: it is notoriously difficult to teach them to march in step.

Reading the post, the first thing that comes to mind is that I can't wait to read this novel. They are all friends of mine, and I have read at least one novel by each of them; the mixture must be mind-blowing!

The second thing is that it makes me want to do the same. I regret not living in Switzerland. As Julien says, French-speaking Switzerland is to fantasy what Scandinavia is to crime fiction. But I am in Belgium!

Well, I admit that I'm more of an SF person than a fantasy one anyway, and writing in a cloud document while chatting would be very, very difficult for me. I would rather write in a git repository, discussing on a mailing list. A bit as if I were developing free software. As Marcello Vitali-Rosati says, the tool has an enormous influence on the writing.

I would more readily be part of a geek-SF collective, in the Neal Stephenson/Charlie Stross/Cory Doctorow vein. But, as Julien puts it very well, you have to find complementarities. Geek-SF types too often lack poetry.

I keep postponing the writing of the sequel to Printeurs, but the more I think about it, the more I believe it could be a collective project. I am very fond, for example, of « Chroniques d'un crevard », a short story from the Recueil de Nakamoto. I find its universe perfectly compatible with that of Printeurs.

Finding the beautiful behind the pretty

Do you see the result? Trying to consume good things gives me ideas, makes me want to produce things myself. I feel gratitude toward the people who write things I love and with whom I can exchange "between human beings". It is sometimes so strong that I feel like I'm living in a golden age!

Well, maybe a little too much, because I have too many ideas jostling for attention, and I meet too many interesting people. In the end, beauty is everywhere once you make the effort to keep the "pretty" at a distance. If there weren't so many beautiful things to discover, I might be able to devote more time to writing…

Beneath the pretty paving stones, the beauty of the beach!

I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

December 04, 2025

A futuristic illustration of bikers merged with a screenshot of the Drupal Canvas visual page builder interface.

When we launched Drupal CMS 1.0 eleven months ago, I posted the announcement on Reddit. Brave of me, I know. But I wanted non-Drupal people to actually try it.

There were a lot of positive reactions, but there was also honest feedback. The most common? "Wake me up when your new experience builder is ready". The message was clear: make page building easier and editing more visual.

I was not surprised. For years I have heard the same frustration. Drupal is powerful, but not always easy to use. That criticism has been fair. We have never lacked capability, but we have not always delivered the user experience people expect.

Well, wake up.

Today we released Drupal Canvas 1.0, a new visual page builder for Drupal. You can create reusable components that match your design system, drag them onto a page, edit content in place, preview changes across multiple pages, and undo mistakes with ease.

Watch the video below. Better yet, show it to someone who thinks they know what Drupal looks like. I bet their first reaction will be: "Wait, is that Drupal?". That reaction is exactly what we have been working toward. It makes Drupal feel more modern and less intimidating.

I also want to set expectations. Drupal Canvas 1.0 helps us catch up with other page builders more than it helps us leap ahead. We had to start there.

But it helps us catch up in the right way, bringing the ease of modern tools while keeping Drupal's identity intact. This isn't Drupal becoming simpler by becoming less powerful. Drupal Canvas sits on top of everything that makes Drupal so powerful: structured content, fine-grained permissions, scalability, and much more.

Most importantly, it opens new doors. Frontend developers can create components in React without having to learn Drupal first. And as shown in my DrupalCon Vienna keynote, Drupal Canvas will have an AI assistant that can generate pages from natural language prompts.

Drupal Canvas is a remarkable piece of engineering. The team at Acquia and contributors across the community put serious craft into this. You can see it in the result. I'm thankful for the time, care, and skill everyone brought to it.

So what is next? We keep building. Drupal Canvas 1.0 is step one, and this is a good moment for more of the Drupal community to get involved. Now is the time to build on it, test it, and improve it. Especially because Drupal CMS 2.0 ships in less than two months with Drupal Canvas included.

Shipping Drupal Canvas 1.0 is a major milestone. It shows we are listening. And it shows what we can accomplish when we focus on the experience as much as the capability. I cannot wait to see what people build with it.

Is Pixelfed sawing off the branch that the Fediverse is sitting on?

In January 2025, I became aware of a real problem with Pixelfed, the "Instagram-inspired Fediverse client". The problem threatens the whole Fediverse. As Pixelfed was receiving a lot of media attention, I chose to wait. In March 2025, I decided that the situation had quieted down and wrote an email to Dansup, Pixelfed’s maintainer, with an early draft of this post. Dan replied promptly, in a friendly tone, but didn’t want to acknowledge the problem, which I have confirmed many times with Pixelfed users. I want to bring the debate into the open. If I’m wrong, I will at least understand why. If Dan is wrong on this very specific issue, we will at least have opened the debate.

This post will be shared with my Fediverse audience through my @ploum@mamot.fr Mastodon account. But Pixelfed users will not see it. Even if they follow me, even if many people they follow boost it. Instead, they will see a picture of my broken keyboard that I posted a week ago.

The latest post of Ploum according to Pixelfed.

That’s because, despite its name, Pixelfed is NOT a true Fediverse application. It does NOT respect the ActivityPub protocol. Any Pixelfed user following my @ploum@mamot.fr account will only see a very small fraction of what I post. They may not see anything from me for months.

But why? Simple! The Pixelfed app has unilaterally decided not to display most Fediverse posts for the arbitrary reason that they do not contain a picture.

This is done on purpose and by design. Pixelfed is designed to mimic Instagram. Displaying text without pictures was deliberately removed from the code (it was possible in previous versions) in order to make the interface prettier.

This is unlike a previous problem, where Pixelfed allowed unauthorised users to read private posts from unwitting Fediverse users; that one was promptly fixed.

In this case, we are dealing with a conscious design decision by the developers. Being pretty is more important than transmitting messages.

Technically, this means that a Pixelfed user P will think he follows someone but will miss most of the content. Conversely, the sender, say a Mastodon user M, will believe that P has received the message because P follows M.

This is a grave abuse of the protocol: messages are silently dropped. It stands against everything the Fediverse is trying to do: allow users to communicate. My experience with open protocols allows me to say that this is a critical problem and that it cannot be tolerated. Would you settle for a mail provider that silently drops every email you receive containing the letter "P"?

The principle behind a communication protocol is to create trust that messages are transmitted. Those messages could, of course, be filtered by the users but those filters should be manually triggered and always removable. If a message is not delivered, the sender should be notified.
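To make that distinction concrete, here is a deliberately simplified sketch in JavaScript. It is not Pixelfed's actual code; the post shape and function names are illustrative assumptions. It contrasts a silent drop with an opt-in, always-removable filter:

```javascript
// Illustrative sketch, NOT Pixelfed's real implementation.
// Each post is a simplified ActivityPub-ish object:
// { id, text, attachments }.

// What this post criticizes: posts without an uploaded image are
// silently discarded, so followers never know they existed.
function silentDrop(timeline) {
  return timeline.filter((post) => post.attachments.length > 0);
}

// What a well-behaved client would do: deliver every post, and make
// hiding text-only posts an opt-in view setting the user can turn off.
function optInFilter(timeline, hideTextOnly = false) {
  if (!hideTextOnly) return timeline;
  return timeline.filter((post) => post.attachments.length > 0);
}

const timeline = [
  { id: 1, text: 'a photo of my broken keyboard', attachments: ['keyboard.jpg'] },
  { id: 2, text: 'a text-only announcement', attachments: [] },
];

// silentDrop(timeline) loses post 2 entirely, with no trace;
// optInFilter(timeline) keeps both posts by default.
```

The point is not the filtering logic itself but who controls it: in the first function the client decides silently for the user, in the second the user can always switch the filter off.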

In 2025, I’ve read several articles about people trying the Fediverse but leaving it because "there’s not enough content despite following lots of people". Due to the Pixelfed buzz in January, I’m now wondering: how many of those people were using Pixelfed and effectively missing most of the Fediverse’s content?

The importance of respecting the protocol

I cannot stress enough how important that problem is.

If Pixelfed becomes a significant actor, its position will gravely undermine the ActivityPub protocol to the point of making it meaningless.

Imagine a new client, TextFed, that never displays posts containing pictures. Lots of people, like me, find pictures distracting, and some people cannot see pictures at all, so TextFed makes as much sense as Pixelfed. Once you have TextFed, you realise that TextFed and Pixelfed users can follow each other, comment on posts from Mastodon users, and exchange private messages, but they will never see each other’s posts.

For normal users, there’s no real way to understand that they are missing messages. And even if they do, it is very hard to discover that the cause is the absence of pictures, which makes those posts "not pretty enough" for the Pixelfed developers. Worst of all: some Mastodon posts do contain a picture but are still not displayed in Pixelfed, because the picture comes from a link preview and was not manually uploaded. Try explaining that to the friends who reluctantly followed you on the Fediverse. Have a look at any Mastodon account and try to guess which posts will be shown to its Pixelfed followers!

That’s not something any normal human is supposed to understand. For Pixelfed users, there’s no way to see that they are missing content. For Mastodon users, there’s no way to see that part of their audience is missing it.

With trust in the protocol broken, people will revert to creating Mastodon accounts to follow Mastodon, Pixelfed accounts to follow Pixelfed, and TextFed accounts to follow TextFed. Even if it is not strictly necessary, that’s the first intuition. It’s already happening around me: I’ve witnessed multiple people with a Mastodon account creating a Pixelfed account just to follow Pixelfed users. They do this naturally because they were used to doing it with Twitter and Instagram.

Congratulations, you have successfully broken ActivityPub and, as a result, the whole Fediverse. What Meta was not able to do with Threads, the Fediverse did to itself. Because it was prettier.

Pixelfed will be forced to comply anyway

Now, imagine for a moment that Pixelfed takes off (which is something I wish for and which would be healthy for the Fediverse) and that interactions between Mastodon users and Pixelfed users are strong (also something I wish for). You can imagine how many bug reports the developers will receive about "some posts not appearing in my followers’ timelines" or "posts not appearing in my timeline".

This will put heavy pressure on the Pixelfed devs to implement text-only messages. They will, at some point, be forced to comply, having eroded trust in the Fediverse for nothing.

Once a major actor in a decentralised network starts to mess with the protocol, there are only two possible outcomes: either that actor loses steam, or it becomes dominant enough to impose its own vision of the protocol. In fact, there’s a third option: the whole protocol becomes irrelevant because nobody trusts it anymore.

What if Pixelfed becomes dominant?

But imagine that Pixelfed is now so important that they can stick to their guns and refuse to display text messages.

Well, there’s a simple answer: every other piece of Fediverse software will start adding an image to every post. Mastodon will probably gain a configurable "default picture to attach to every post so your posts are displayed in Pixelfed".

And now, without anyone having formally specified it, the ActivityPub protocol requires every message to have a picture.

That’s how protocols work. It has happened before: that’s how all mail clients ended up implementing the winmail.dat bug.

Sysadmins handling storage and bandwidth for the Fediverse thank you in advance.

We are not there yet

Fortunately, we are not there yet. Pixelfed is still brand new. It can still go back to displaying every message an end user expects to see when following another Fediverse user.

I stress that this should be the default, not a hidden setting. Nearly all the Pixelfed users I’ve asked were unaware of the problem. They thought that if they follow someone on the Fediverse, they will, by default, see all of that person’s public posts.

There’s no negotiation. No warning on the Pixelfed website will be enough. In a federated communication system, filters should be opt-in. In fact, that’s what older versions of Pixelfed did.

But while text messages MUST be displayed by default (MUST in the RFC sense), they can still be displayed as less important. For example, one could imagine rendering them smaller, or whatever you find pretty, as long as it is clear that the message is there. I trust the Pixelfed devs to be creative here.

The Fediverse is growing. The Fediverse is working well. The Fediverse is a tool that we urgently need in those trying times. Let’s not saw off the branch on which we stand when we need it the most.

UPDATE: Dansup, Pixelfed Creator, replied the following on Mastodon:

We are working on several major updates, and while I believe that Pixelfed should only show photo posts, that decision should be up to each user, which we are working to support.

I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

December 03, 2025

A glowing blue doorway opens into a lush, colorful forest.

When I tell people that Acquia Source will let customers export their entire website and leave our platform anytime, I usually get puzzled looks.

We really mean the entire site: the underlying Drupal code, theme, configuration, content, and data. The export gives you a complete, working Drupal site that you can run on any infrastructure you choose.

Most SaaS platforms do the opposite. They make it hard to leave. When you export, you may get all your content, but never the code.

Why do we want to make it easy for customers to leave?

First, when leaving is easy, customers stay because they want to, not because they are trapped. That accountability pushes us to build better products. It means that at Acquia, we have to earn our customers' business every day by delivering value, not by making it hard to leave.

Second, the ability to leave means teams can start small and scale without hitting a wall. All SaaS products have constraints, and Acquia Source is no exception. When your application reaches a level of complexity that requires deeper customization, you can take your entire site to Acquia Cloud or any other Drupal hosting environment. You never need to start over.

Last but not least, because Acquia Source is built on Drupal, we want it to reflect Drupal's open source freedoms. Full export is how we make those principles real in a SaaS context.

We call this Open SaaS.

We first tried this idea with Drupal Gardens in 2010, which also allowed full exports. I loved that feature then, and I still love it now. I have always believed it was a big deal. More importantly, our customers did too.

One of Acquia's largest customers began on Drupal Gardens more than a decade ago. They used it to explore Drupal, then naturally grew into Acquia Cloud and Site Factory as their needs became more complex. Today they run some of the world's biggest media properties on Drupal and Acquia.

Trust comes from freedom, not lock-in. The exit door you'll never use is exactly what makes you confident enough to stay. It does seem counterintuitive to make leaving easy, but not all SaaS is created equal. With our Open SaaS approach, you get the freedom to grow and the ability to leave whenever you choose.

Or How I Discovered That Fusion Is… Fine, I Guess 🕺

Last night I did something new: I went fusion dancing for the first time.
Yes, fusion — that mysterious realm where dancers claim to “just feel the music,” which is usually code for nobody knows what we’re doing but we vibe anyway.
The setting: a church in Ghent.
The vibe: incense-free, spiritually confusing. ⛪

Spoiler: it was okay.
Nice to try once. Probably not my new religion.

Before anyone sharpens their pitchforks:
Lene (Kula Dance) did an absolutely brilliant job organizing this.
It was the first fusion event in Ghent, she put her whole heart into it, the vibe was warm and welcoming, and this is not a criticism of her or the atmosphere she created.
This post is purely about my personal dance preferences, which are… highly specific, let’s call it that.

But let’s zoom out. Because at this point I’ve sampled enough dance styles to write my own David Attenborough documentary, except with more sweat and fewer migratory birds. 🐦

Below: my completely subjective, highly scientific taxonomy of partner dance communities, observed in their natural habitats.


🎻 Balfolk – Home Sweet Home

Balfolk is where I grew up as a dancer — the motherland of flow, warmth, and dancing like you’re collectively auditioning for a Scandinavian fairy tale.

There’s connection, community, live music, soft embraces, swirling mazurkas, and just the right amount of emotional intimacy without anyone pretending to unlock your chakras.

Balfolk people: friendly, grounded, slightly nerdy, and dangerously good at hugs.

Verdict: My natural habitat. My comfort food. My baseline for judging all other styles. ❤


💫 Fusion: A Beautiful Thing That Might Not Be My Thing

Fusion isn’t a dance style — it’s a philosophical suggestion.

“Take everything you’ve ever learned and… improvise.”

Fusion dancers will tell you fusion is everything.
Which, suspiciously, also means it is nothing.

It’s not a style; it’s a choose-your-own-adventure.
You take whatever dance language you know and try to merge it with someone else’s dance language, and pray the resulting dialect is mutually intelligible.

I had a fun evening, truly. It was lovely to see familiar faces, and again: Lene absolutely nailed the organization. Also a big thanks to Corentin for the music!
But for me personally, fusion sometimes has:

  • a bit too much freedom
  • a bit too little structure
  • and a wildly varying “shared vocabulary” depending on who you’re holding

One dance feels like tango in slow motion, the next like zouk without the hair flips, the next like someone is trying to do tai chi at you. Mostly an exercise in guessing whether your partner is leading, following, improvising, or attempting contemporary contact improv for the first time.

Beautiful when it works. Less so when it doesn’t.
And all of that randomly in a church in Ghent on a weeknight.

Verdict: Fun to try once, but I’m not currently planning my life around it. 😅


🤸 Contact Improvisation: Gravity’s Favorite Dance Style

Contact improv deserves its own category because it’s fusion’s feral cousin.

It’s the dance style where everyone pretends it’s totally normal to roll on the floor with strangers while discussing weight sharing and listening with your skin.

Contact improv can be magical — bold, creative, playful, curious, physical, surprising, expressive.
It can also be:

  • accidentally elbowing someone in the ribs
  • getting pinned under a “creative lift” gone wrong
  • wondering why everyone else looks blissful while you’re trying not to faceplant
  • ending up in a cuddle pile you did not sign up for

And it can be exactly the moment where my brain goes:

“Ah. So this is where my comfort zone ends.”

It’s partnered physics homework.
Sometimes beautiful, sometimes confusing, sometimes suspiciously close to a yoga class that escaped supervision.

I absolutely respect the dancers who dive into weight-sharing, rolling, lifting, sliding, and all that sculptural body-physics magic.
But my personal dance style is:

  • musical
  • playful
  • partner-oriented
  • rhythm-based
  • and preferably done without accidentally mounting someone like a confused koala 🐨

Verdict: Fascinating to try, excellent for body awareness, mesmerizing to observe, but not my go-to when I just want to dance and not reenact two otters experimenting with buoyancy. 🦦 Probably not something I’ll ever do weekly.


🪕 Contra: The Holy Grail of Joyful Chaos

Contra is basically balfolk after three coffees.
People line up, the caller shouts things, everyone spins, nobody knows who they’re dancing with and nobody cares. It’s wholesome, joyful, fast, structured, musical, social, and somehow everyone becomes instantly attractive while doing it.

Verdict: YES. Inject directly into my bloodstream. 💉


🍻 Ceilidh: Same Energy, More Shouting

Ceilidh is what you get when Contra and Guinness have a love child.
It’s rowdy, chaotic, and absolutely nobody takes themselves seriously — not even the guy wearing a kilt with questionable underwear decisions. It’s more shouting, more laughter, more giggling at your own mistakes, and occasionally someone yeeting themselves across the room.

Verdict: Also YES. My natural ecosystem.


🇧🇷 Forró: Balfolk, but Warmer

If mazurka went on Erasmus in Brazil and came back with stories of sunshine and hip movement, you’d get Forró.

Close embrace? Check.
Playfulness? Check.
Techniques that look easy until you attempt them and fall over? Check.
I’m convinced I would adore forró.

Verdict: Where are the damn lessons in Ghent? Brussel if we really have to. Asking for a friend. (The friend is me.) 😉


🕺 Lindy Hop & West Coast Swing: Fun… But the Vibe?

Both look amazing — great music, athletic energy, dynamic, cool moves, full of personality.
But sometimes the community feels a tiny bit like:

“If you’re not wearing vintage shoes and triple-stepping since birth, who even are you?”

It’s not that the dancers are bad — they’re great.
It’s just… the pretension.

Verdict: Lovely to watch, less lovely to join.
Still looking for a group without the subtle “audition for fame-school jazz ensemble” energy.


🌊 Zouk: The Idea Pot

Zouk dancers move like water. Or like very bendy cats.
It’s sexy, flowy, and full of body isolations that make you reconsider your spine’s architecture.

I’m not planning to become a zouk person, but I am planning to steal their ideas.
Chest isolations?
Head rolls?
Wavy body movements?
Yes please. For flavour. Not for full conversion.

Verdict: Excellent expansion pack, questionable main quest.


💃 Salsa, Bachata & Friends: Respectfully… No

I tried. I really did.
I know people love them.
But the Latin socials generally radiate too much:

  • machismo
  • perfume
  • nightclub energy
  • “look at my hips” nationalism
  • and questionable gender-role nostalgia

If you love it, great.
If you’re me: no, no, absolutely not, thank you.

Verdict: ew, ew, nooo. 🪳
Fantastic for others. Not for me.


🍷 Tango: The Forbidden Fruit

Tango is elegant, intimate, dramatic… and the community is a whole ecosystem on its own.

There are scenes where people dance with poetic tenderness, and scenes where people glare across the room using century-old codified eyebrow signals that might accidentally summon a demon. 👀

I like tango a lot — I just need to find a community that doesn’t feel like I’m intruding on someone’s ancestral mating ritual. And where nobody hisses if your embrace is 3 mm off the sacred norm.

Verdict: Promising, if I find the right humans.


🧘‍♂️ Ecstatic Dance / 5 Rhythms / Biodanza / Tantric Whatever

Look.
I’m trying to be polite.
But if I wanted to flail around barefoot while being spiritually judged by someone named Moonfeather, I’d just do yoga in the wrong class.

I appreciate the concept of moving freely.
I do not appreciate:

  • uninvited aura readings
  • unclear boundaries
  • workshops that smell like kombucha
  • communities where “I feel called to share” takes 20 minutes

And also: what are we doing? Therapy? Dance? Summoning a forest deity? 🧚

Verdict: Too much floaty spirituality, not enough actual dancing.
Hard pass. ✨


📝 Conclusion

I’m a simple dancer.
Give me clear structure (contra), playful chaos (ceilidh), heartfelt connection (balfolk), or Brazilian sunshine vibes (forró).

Fusion was fun to try, and I’m genuinely grateful it exists — and grateful to the people like Lene who pour time and energy into creating new dance spaces in Ghent. 🙌

But for me personally?
Fusion can stay in the category of “fun experiment,” but I won’t be selling all my worldly possessions to follow the Church of Expressive Improvisation any time soon.
I’ll stay in my natural habitat: balfolk, contra, ceilidh, and anything that combines playfulness, partnership, and structure.

If you see me in a dance hall, assume I’m there for the joy, the flow, and preferably fewer incense-burning hippies. 🕯

Still: I’m glad I went.
Trying new things is half the adventure.
Knowing what you like is the other half.

And I’m getting pretty damn good at that. 💛

Amen.
(Fitting, since I wrote this after dancing in a church.)

November 27, 2025

A blue heart

Today is Thanksgiving in the US. I know it's not a global holiday, but it has me thinking about gratitude, and specifically about a team that rarely gets the recognition it deserves: the Drupal Security Team.

As Drupal's project lead, I'm barely involved in our security work. And you know what? That is a sign that things are working really well.

Our Security Team reviews reports, analyzes vulnerabilities, coordinates patches across supported Drupal versions, and publishes advisories. They work with Drupal module maintainers and reporters to protect millions of websites. They also educate our community proactively, ensuring problems are prevented, not just fixed. It can be a lot of work, and delicate work.

To get an idea of the quality of their work, check out the recent advisories at drupal.org/security. It may seem strange to point to security advisories, but their work meets the highest standards of maturity. For example, Drupal is authorized as a CVE Numbering Authority, which means our security processes meet international standards for vulnerability coordination.

Whether you're running a small blog or critical government infrastructure, the Security Team protects you with the same consistency and professionalism.

While I'm on our private security team mailing list, they do all this without needing me to oversee or interfere. In fact, the team handles everything so smoothly that my involvement would only slow them down. In the world of open source leadership, there is no higher compliment I can pay them.

Security work is largely invisible when done well. Nobody celebrates the absence of breaches. The researchers who report issues often get more recognition than the team members who spend hours verifying, patching, and coordinating fixes.

All software has security bugs, and fortunately for Drupal, critical security bugs are rare. What really matters is how you deal with security releases.

To our Security Team: thank you for your excellence. Thank you for protecting Drupal's reputation through consistent, professional, often invisible work, week after week.

November 26, 2025

A train moves quickly through a dimly lit underground tunnel, leaving streaks of light in its path.

Several years ago, I built a photo stream on my Drupal-powered website. You can see it at https://dri.es/photos. This week, I gave it a small upgrade: infinite scroll.

My first implementation used vanilla JavaScript with the Intersection Observer API, and it worked fine. It took about 30 lines of custom JavaScript and 20 lines of PHP code.
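For readers who haven't used the Intersection Observer API, a minimal sketch of that kind of vanilla-JS implementation might look like the following. This is a reconstruction under assumptions, not the site's actual code: buildLoadMoreUrl, the sentinel element, and the album container are illustrative names.

```javascript
// Hypothetical sketch of Intersection Observer based infinite
// scroll, along the lines described in the text.

// Pure helper: compute the URL for the next batch of photos.
function buildLoadMoreUrl(offset, limit) {
  return `/photos/load-more?offset=${offset + limit}`;
}

// Browser-only wiring: watch a sentinel element near the end of the
// photo list and fetch the next batch when it scrolls into view.
function watchSentinel(sentinel, album, offset, limit) {
  const observer = new IntersectionObserver(async (entries) => {
    if (!entries[0].isIntersecting) return;
    observer.disconnect(); // each sentinel fires only once
    const response = await fetch(buildLoadMoreUrl(offset, limit));
    const html = await response.text();
    if (html.trim() === '') return; // empty response: no more photos
    album.insertAdjacentHTML('beforeend', html);
  });
  observer.observe(sentinel);
}
```

Between the observer setup, the fetch call, and re-arming a new sentinel for each batch, this easily grows to the roughly 30 lines mentioned above.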

But Drupal now ships with htmx support, and that had been on my mind. So a couple of hours later, I rewrote the feature with htmx to see if it could do the same job more simply.

It's something I love about Drupal: how we keep adding small, well-chosen features like htmx support. Not flashy, but they quietly make everyday work nicer. Years ago, Drupal was one of the first CMSes to adopt jQuery, and our early adoption helped contribute to its widespread use. Today, we're replacing parts of jQuery with htmx, and Drupal may well be among the first CMSes to ship htmx in core.

If, like me, you haven't used htmx before, it lets you add dynamic behavior to pages using HTML attributes instead of writing JavaScript. Want to load content when something is clicked or scrolled into view? You add an attribute like hx-get="/load-more" and htmx handles the request, then swaps the response into your page. It gives you AJAX-style interactions without having to write JavaScript.

To make the photo stream load more images as you scroll, I added an "htmx trigger". When it scrolls into view, htmx fetches more photos and appends them to the right container. The resulting HTML looks like this:

<div hx-get="/photos/load-more?offset=25"
     hx-trigger="revealed"
     hx-target="#album"
     hx-swap="beforeend">
  <figure>
    ...
  </figure>
</div>

The hx-get points to a controller that returns the next batch of photos. The hx-trigger="revealed" attribute means "fire when scrolled into view". The hx-target="#album" tells htmx where to put the new content, and hx-swap="beforeend" appends it at the end of that #album container.

I didn't want users to hit the last photo and have to wait for more to load. To keep the scrolling smooth, I added the trigger a few photos before the end. This pre-fetches the next batch before the user even realizes they are running out of photos. This is what the code in Drupal looks likes:

// Trigger 3 images before the end to prefetch the next batch.
$trigger = array_keys($images)[max(0, count($images) - 4)];

foreach ($images as $key => $image) {
  …

  if ($key === $trigger) {
    // Add htmx attributes to the <div> surrounding the image.
    $build['#attributes']['hx-get'] = '/photos/load-more?offset=' . ($offset + $limit);
    $build['#attributes']['hx-trigger'] = 'revealed';
    $build['#attributes']['hx-target'] = '#album';
    $build['#attributes']['hx-swap'] = 'beforeend';
  }
}

And the controller that returns the HTML:

public function loadMorePhotos(Request $request) {
  $offset = $request->query->getInt('offset', 0);
  $limit = 25;
  $photos = PhotoCollection::loadRecent($offset, $limit);
  if (!$photos) {
    return new Response('');
  }

  $build = $this->buildImages($photos, $offset, $limit);
  $html = \Drupal::service('renderer')->renderRoot($build);
  return new Response($html);
}

Each response includes 25 photos. It continues fetching new photos as you scroll down until there are no more photos, at which point the controller returns an empty response and the scrolling stops.

As you can tell, there is no custom JavaScript in my code. It's all abstracted away by htmx. The htmx version took less than 10 lines of PHP code (shown above) instead of 30+ lines of custom JavaScript. I needed the loadMorePhotos controller either way.

The savings are negligible. Replacing a couple dozen lines of JavaScript won't change the world. And at 16KB gzipped, htmx is much larger than the custom JavaScript I wrote by hand. But it still feels reasonable. My photo stream is image-heavy, and htmx adds less than 0.5% to the initial page weight.

Overall, I'd say that htmx grew on me. There is something satisfying about declarative code. You describe what should happen, and the implementation disappears. I may try it in a few more places to improve the user experience of my site.

Don’t Do Snake Oil Writing

In computer security, it is often said that the fact you don’t see any vulnerability in the code you write is no proof that your code is secure. It is proof that you are blind to all the mistakes you made in your shitty code.

The less competent you are, the more confident you will be and the more vulnerable code you will write.

And people will exploit the vulnerabilities in your code. Even if you honestly believe in your aptitude, you will end up writing "snake-oil" security systems.

But I’m not a cryptographer. I’m a writer.

When you use an LLM to generate text, the fact that you find the output good doesn’t mean that it is good. It only means that you are blind to the shit you’ve generated.

The simple fact that you think you could get people to read your bland generated text without noticing is proof that you are totally incompetent at writing. You should not trust yourself with that, the same way I would never trust myself to check whether LLM-generated source code is secure.

Did you really expect nobody to notice that your text was generated? Seriously?

People will notice how stupid your writing is. Some, like myself, will be offended. Others will simply walk away with a bad feeling. One thing is certain: nobody will think it is interesting. Nobody will care about what you wrote. People will simply stop reading you, stop sharing your work, stop discussing your writing.

Because you are doing snake-oil writing.

Fortunately, the cure is very simple.

Even if you think that what you produce is bad, be honest and direct. People will notice that you want to improve. Some will even offer advice. You will learn. You will make mistakes, which is an essential part of learning. If you acknowledge those mistakes, people will appreciate your work even more.

Writing secure code is not about magical genius thinking from behind a Guy Fawkes mask. It is about tediously learning the patterns of vulnerabilities, and about the humility to accept that you can’t catch everything alone.

Writing text is not about crafting beautiful sentences. It is about thinking through the information you really want to transmit. Some really good writers write awful sentences. But they are still good, because each sentence gives you something, because you feel information and emotions flowing from the writer to you.

If you are tempted to use an LLM to generate a text, don’t publish the output of the LLM. Publish the prompt! That’s where your information is. It is what people want to hear.

You were tricked into doubting your own ability to write and into using a very costly text generator instead of trusting yourself. This impairs your ability to learn and improve, while insulting everyone who might read you. Like a cocaine addict, you are destroying yourself and your reputation by screaming like a maniac. But you feel good, because your brain has been altered to believe that "you are better and more productive".

Stop the slop while you can.

If you are holding an MBA and using an LLM to generate marketing content, it may be too late. If that’s the case, follow Bill Hicks’ advice and, please, kill yourself!

I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

November 21, 2025

Our Virtual Bar Counters

The façade of a big Parisian café. Zoom in on the slightly decrepit sign, a white thumbs-up on a background of flaking paint: "Le Facebook".

Packed interior. Average age: 55-60. The walls are covered with advertising. The customers are obviously all regulars, alternating between glasses of red wine and beers.

— Since they banned smoking, it's just not the same anymore.
— It's all the fault of those cisgender communists!
— The what?
— The cisgenders. It's a word they say to legalize pedophilia.
— I thought it was "trans"?
— Same thing. Well, I think. Some queer thing, anyway.
— In any case, you can't even roll a cigarette in peace anymore!

A voice rings out from a neighbouring table:
— My grandson came first in his high school's poetry contest.

The whole room shouts "Bravo!" and applauds for three seconds before returning to their conversations as if nothing had happened.

Fade out.

A cafeteria with white walls covered in motivational posters whose images are very obviously AI-generated. The customers all wear suits and ties or slightly cheap business outfits that pass muster from a distance. They all drink coffee from plastic cups, which they stir gently with little wooden stirrers. A small pot holds the used stirrers, under a sign reading "Save the planet, recycle your stirrers!"

Close-up on Armand: clean-shaven face, glasses, prominent cheekbones. He looks stressed, but tries to project confidence with his nervous smile.

— Since I started coming to "Le Linkedin", my customer conversion rate has increased by 3% and I've been officially appointed Marketing Story Customers Deputy Manager. It's a fine achievement, and I owe it to my network.

The camera pulls back. We see that, like all the other customers, he is alone at his table, talking to a robot that nods along mechanically.

Fade out.

The trendy place, with flashing coloured lights and music so loud you can only order by shouting. Ultra-designer neon spells out the bar's name: "Instagram".

The cocktails cost a month's salary and are made with canned fruit juice. From time to time a customer has an epileptic seizure, but everyone finds that normal. Then again, the walls are covered with giant posters of gorgeous landscapes.

His three-day stubble carefully groomed, Youri-Maxime points out a poster to his partner.
— That photo is magnificent, we absolutely have to go there!

Estelle isn't yet 30, but her face is puffy from cosmetic surgery. Without looking at her partner, she replies:
— Excellent idea. We'll take a photo there; I'll wear my yellow MachinBazar(TM) bikini and do a BrolTruc(TM) makeup look.
— Awesome, the guy replies without taking his eyes off his smartphone. I can't wait to share the photo!

Fade out.

An old warehouse converted into a luxury loft. Bare brick, exposed pipework. But it's intentional. Still, you can sense the place is no longer really maintained. There is dust. Rubbish piles up in a corner. The toilets are backing up. The whole bar stinks of shit.

On the wall, a big blue sign is crossed out with a big black X. Below it, half erased, you can read: "Twitter".

In threadbare jumpers and corduroy trousers, a group of regulars sits at a table. Each one is typing frantically on the fold-out keyboard of his tablet.

A gang of thugs approaches. They have tattoos of swastikas, eagles, vaguely Nordic symbols. They call out to the regulars.

— Hey, guys! What are you up to?
— We're journalists, we write articles. We've been coming here to work for fifteen years.
— And what do you write about?
— About democracy, trans rights…

A nazi violently slams a baseball bat down on the table, shattering the journalists' glasses.

— Er, another journalist immediately follows up, mostly we write about the Great Replacement, about the dangers of wokeism.

The nazi sniffs.

— You guys are cool, carry on!

Fade out.

Exactly the same warehouse, except this time everything is clean. The sign, brand new, reads "Bluesky". Get close to the walls and you realise they are actually made of cardboard. It's a film set!

There are no regulars; the bar has only just opened.

— Welcome, the owner calls to the crowd pouring in. I know you don't want to stay next door, now that it's become filthy and full of nazis. No risk of that here. Everything is the same, but decentralized.

The crowd sighs with satisfaction. One customer frowns.

— In what way is this decentralized? It's the same as…

He doesn't get to finish his sentence. The owner snaps his fingers, and two bouncers appear out of nowhere and shove him outside.

— It's decentralized, the owner continues, and I'm the one taking the orders.
— Great, a customer murmurs. We'll get to feel like we're doing something new without changing anything.
— Besides, we can trust him, another replies. He's the owner of the old bar. He sold it to a nazi and used part of the money to open this one.
— Well then, that's clearly a mark of trust!

Fade out.

An old barn with straw on the floor. There are chickens; a sheep can be heard bleating.

A guy in a threadbare plaid shirt presses on an old thermos to extract some sludge, which he hands to his customers.

— It's organic and fair trade, he says. From Honduras. Or Nicaragua? I'll have to check…
— Thanks, replies a tall woman whose hair is purple on one side of her skull and shaved on the other.

She has an enormous nose piercing, a skirt of net veils, torn fishnet stockings and knee-high socks in the colours of the trans flag. She goes to sit at an old trestle table where a bearded guy in a "FOSDEM 2004" t-shirt is feverishly typing on the keyboard of an Atari computer that takes up half the table. Cables stick out everywhere.

An old lady with sparkling eyes arrives. She leans on a cane with one hand and pulls a wheeled shopping bag with the other.

— Hello everyone! How are you all today?

Everyone answers something different at the same time; a hen panics and flies onto the table, clucking. The old lady opens her shopping bag, spilling out a stack of Pléiade volumes, a Guillaume Musso and a bunch of leeks.

— Look what I made us! A sign to put out in front of the gate.

She unfolds a crocheted piece several metres long. Written in wool letters of more than dubious colours, you can vaguely make out "Mastodon". If you tilt your head and squint.

— Bravo! It's magnificent! a customer sings out.
— It should have said "Fediverse", says another.
— Doesn't it make the place a bit too commercial? We wouldn't want to become like the neon bar across the road.
— Yeah, sure, that's the risk. We'd need the customers from across the road to come here, but without it being commercial.
— It's organic wool, the old lady continues.

In the stable, a cow moos.

I'm Ploum and I've just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (from your local bookshop if possible)!

Receive my writing in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the complete RSS feed.

November 20, 2025

DrupalCon Nara just wrapped up, and it left me feeling energized.

During the opening ceremony, Nara City Mayor Gen Nakagawa shared his ambition to make Nara the most Drupal-friendly city in the world. I've attended many conferences over the years, but I've never seen a mayor talk about open source as part of his city's long-term strategy. It was surprising, encouraging, and even a bit surreal.

Because Nara came only five weeks after DrupalCon Vienna, I didn't prepare a traditional keynote. Instead, Pam Barone, CTO of Technocrat and a member of the Drupal CMS leadership team, led a Q&A.

I like the Q&A format because it makes space for more natural questions and more candid answers than a prepared keynote allows.

We covered a lot: the momentum behind Drupal CMS, the upcoming Drupal Canvas launch, our work on a site template marketplace, how AI is reshaping digital agencies, why governments are leaning into open source for digital sovereignty, and more.

If you want more background, my DrupalCon Vienna keynote offers helpful context and includes a video recording with product demos.

The event also featured excellent sessions with deep dives into these topics. All session recordings are available on the DrupalCon Nara YouTube playlist.

Having much of the Drupal CMS leadership team together in Japan also turned the week into a working session. We met daily to align on our priorities for the next six months.

On top of that, I spent most of my time in back-to-back meetings with Drupal agencies and end-users. Hearing about their ambitions and where they need help gave me a clearer sense of where Drupal should go next.

Thank you to the organizers and to everyone who took the time to meet. The commitment and care of the community in Japan really stood out.

November 19, 2025

We saw in part 1 how to deploy our starter kit in OCI, and in part 2 how to connect to the compute instance. We will now check which development languages are available on the compute instance acting as the application server. After that, we will see how easy it is to install a new […]

November 17, 2025

In part 1, we saw how to deploy several resources to OCI, including a compute instance that will act as an application server and a MySQL HeatWave instance as a database. In this article, we will see how to SSH into the deployed compute instance. Getting the key To connect to the deployed compute instance, […]

A lone astronaut stands on cracked ground as bright green energy sprouts upward, symbolizing a new beginning.

Ten years ago, Acquia shut down Drupal Gardens, a decision that I still regret.

We had launched Drupal Gardens in 2009 as a SaaS platform that let anyone build Drupal websites without touching code. Long-time readers may remember my various blog posts about it.

It was pretty successful. Within a year, 20,000 sites were running on Drupal Gardens. By the time we shut it down, more than 100,000 sites used the platform.

Looking back, shutting down Drupal Gardens feels like one of the biggest business mistakes we made.

At the time, we were a young company with limited resources, and we faced a classic startup dilemma. Drupal Gardens was a true SaaS platform. Sites launched in minutes, and customers never had to think about updates or infrastructure. Enterprise customers loved that simplicity, but they also needed capabilities we hadn't built yet: custom integrations, fleet management, advanced governance, and more.

For a while, we tried to serve both markets. We kept Drupal Gardens running for simple sites while evolving parts of it into what became Acquia Cloud Site Factory for enterprise customers. But with our limited resources, maintaining both paths wasn't sustainable. We had to choose: continue making Drupal easier for simple use cases, or focus on enterprise customers.

We chose enterprise. Seeing stronger traction with larger organizations, we shut down the original Drupal Gardens and doubled down on Site Factory. By traditional business metrics, we made the right decision. Acquia Cloud Site Factory remains a core part of Acquia's business today and is used by hundreds of customers that run large site fleets with advanced governance requirements, deep custom integrations, and close collaboration with development teams.

But that decision also moved us away from the original Drupal Gardens promise: serving the marketer or site owner who didn't want or need a developer team. Acquia Cloud Site Factory requires technical expertise, whereas Drupal Gardens did not.

For the next ten years, I watched many organizations struggle with the very challenge Drupal Gardens could have solved. Large organizations often want one platform that can support both simple and complex sites. Without a modern Drupal-based SaaS, many turned to WordPress or other SaaS tools for their smaller sites, and kept Drupal only for their most complex properties.

The problem is that a multi-CMS environment comes with a real cost. Teams must learn different systems, juggle different authoring experiences, manage siloed content, and maintain multiple technology stacks. It can slow them down and make digital operations harder than they need to be. Yet many organizations continue to accept this complexity simply because there has not been a better option.

Over the years, I spoke with many customers who ran a mix of Drupal and non-Drupal sites. They echoed these frustrations in conversation after conversation. Those discussions reminded me of what we had left behind with Drupal Gardens: many organizations want to standardize on a single CMS like Drupal, but the market hadn't offered a solution that made that possible.

So, why start a new Drupal SaaS after all these years? Because the customer need never went away, and we finally have the resources. We are no longer the young company forced to choose.

Jeff Bezos famously advised investing in what was true ten years ago, is true today, and will be true ten years from now. His framework applies to two realities here.

First, organizations will always need websites of different sizes and complexity. A twenty-page campaign site launching tomorrow has little in common with a flagship digital experience under continuous development. Second, running multiple, different technology stacks is rarely efficient. These truths have held for decades, and they're not going away.

This is why we've been building Acquia Source for the past eighteen months. We haven't officially launched it yet, although you may have seen us begin to talk about it more openly. For now, we're testing Acquia Source with select customers through a limited availability program.

Acquia Source is more powerful and more customizable than Drupal Gardens ever was. Drupal has changed significantly in the past ten years, and so has what we can deliver. While Drupal Gardens aimed for mass adoption, Acquia Source is built for organizations that can afford a more premium solution.

As with Drupal Gardens, we are building Acquia Source with open principles in mind. It is easy to export your site, including code, configuration, and content.

Just as important, we are building key parts of Acquia Source in the open. A good example is Drupal Canvas. Drupal Canvas is open source, and we are developing it transparently with the community.

Acquia Source does not replace Acquia Cloud or Acquia Cloud Site Factory. It complements them. Many organizations will use a combination of these products, and some will use all three. Acquia Source helps teams launch sites fast, without updates or maintenance. Acquia Cloud and Site Factory support deeply integrated applications and large, governed site fleets. The common foundation is Drupal, which allows IT and marketing teams to share skills and code across different environments.

For me, Acquia Source is more than a new product. It finally delivers on a vision we've had for fifteen years: one platform that can support everything from simple sites to the most complex ones.

I am excited about what this means for our customers, and I am equally excited about what it could mean for Drupal. It can strengthen Drupal's position in the market, bring more sites back to Drupal, and create even more opportunities for Acquia to contribute to Drupal.

The Lament of the Washed-Up Technopunk

Some of you read me as subscribers, via RSS or the newsletter. Others stumble onto some of my posts when they are shared on forums or social networks. Maybe this post is the first one from this blog that you've ever read! If so, welcome!

But there is a third category of readers: those who, quite simply, decide to drop by this site from time to time to see whether I've published any articles and whether the titles interest them.

Being an RSS addict myself, and spending time around bloggers who talk about their subscriber counts and their mailing lists, I too often forget that this simple solution exists. It was a reader who explained it to me at a signing session:

— I've been following your blog for years, I've read almost everything for at least 10 years!
— Oh, great. You're subscribed to the RSS feed?
— No.
— To the mailing list?
— No.
— You follow me on Mastodon?
— No, I don't use social networks.
— So how do you keep up with me?
— Well, from time to time I wonder whether you've written an article, I type "www.ploum.net" into my browser's address bar, and I catch up.
— …

Ploum, well and truly put in his place! When you have your nose to the handlebars like I do, you sometimes forget the simplicity, the freedom of the web. Influenced despite myself by a LinkedIn-esque fauna of statistics junkies, I too often forget that a blog post is also (and even above all) addressed to people who don't know me, who haven't read all my posts from the past six months, who don't know what the Gemini protocol is.

Flitting from flower to flower, serendipity: these are the essence of being human. And, as the cherry on the cake, this kind of reader is impossible to count, to quantify. A once-normal use of the web, today incredibly rebellious and anticapitalist. A technopunk use!

The end of the mohawks

No, I have never worn a coloured mohawk or a studded jacket. But I ride a bike! In fact, my latest novel is called "Bikepunk".

In his book "L'odyssée du pingouin cannibale", the dandy punk Yann Kerninon offers an interesting analysis of the punk movement. While it was undeniably provocative and shocking in the seventies, it then became the norm. Screaming, fucking and getting wasted are merely normal, entertaining things. Kurt Cobain, heir to the punk movement, killed himself when he understood that his rebellion, his disgust with the system, was just one more spectacle consolidating that very system.

The coloured mohawk is no longer shocking; on the contrary, it will earn likes on Instagram! What is punk now, what shocks, is telling all the metrics to get lost, refusing the diktats of social networks, using a dumbphone, not being on Whatsapp, not knowing the football results or even the name of the trendy TV show.

Try it, and you will see your entourage look at you with total incomprehension. With shock!

Whereas if you scream "No Future" on a public square, I'm sure the passers-by will film you to harvest likes.

Rejecting fashion

Punk philosophy, at its root, is the total refusal of fashion, of trends. Being technopunk therefore means getting passionate about old, boring technologies with no marketing budget.

Terence Eden talks about those open technologies that exist in the background, just waiting for the right opportunity to reveal their usefulness. Amateur radio. QR codes, which were suddenly popularized during the pandemic because they had suddenly become necessary.

The same goes, in his view, for the Fediverse: nobody notices it yet. But it is there, and it will remain there until the moment we need it. The Twitter takeover could have been that moment. It wasn't. No matter, there will be a next time.

Because people are idiot sheep. Those who left Twitter went to Bluesky just because the marketing claimed it was "decentralized". And because it was new while being exactly the same.

At the time, I warned that Bluesky was about as decentralized as the Ripple cryptocurrency: that is, not at all.

By that standard, Facebook is decentralized too: after all, its infrastructure runs on redundant, decentralized servers. You think I'm exaggerating?

Patatas has just discovered that the Bluesky team is secretly working on algorithms to hide certain replies it doesn't like.

And as Patatas says, there are indeed attempts to create independent ways of connecting to the BS network, but, first, it's very complicated and, second, almost nobody will use them, so it's as if they didn't exist.

I have been saying it since 2023: Bluesky is not decentralized and, by its very design, cannot be. The AT protocol is nothing but a smokescreen to make programmers who don't dig too deep believe that future decentralization is credible. It is a marketing tool.

Technologies waiting for their moment

It's the same for the XMPP protocol, which has made decentralized chat possible for 20 years. People prefer Whatsapp? No matter, XMPP will wait until it is truly indispensable. Or that absurd fashion of moving chat rooms onto proprietary technologies, even for Open Source communities. Slack, Telegram, now Discord. The added value compared to an IRC server is roughly zero. It's pure marketing! (yes, but the emojis are prettier… oh, shut up!)

That's also why I love the Gemini network so much. It is literally technopunk!

What? It's complicated? It takes effort? It's not pretty? It's elitist? And you think maintaining a coloured mohawk on top of your skull is within everyone's reach? Of course being technopunk takes effort. You'd like everything to be easy, pretty and learning-free, in precisely the fashionable aesthetic imposed on you by some stoned marketing guy? Then run back to Zuckerberg's skirts!

Having to learn and being able to learn are inseparable elements of low-tech!

The command line, that too is punk. It isn't pretty, but it is hyper-efficient: anyone who sees you use your computer runs off screaming. Your relatives call in an exorcist.

It's no accident that I created a web and gemini browser that runs on the command line. It's called… Offpunk!

Yes, I read blogs and the web from the command line. I couldn't care less about your lovingly chosen fonts, your CSS layouts, your crappy javascript. We don't have the same tastes anyway!

Punk and politics

Punk philosophy, a head-on opposition to Thatcherism, is inseparable from politics. And technology is completely political. The GAFAMs are by now completely fascist, as mart-e sums up very well.

Maybe you sometimes told yourself that if you had lived under Pétain in '43, you would have joined the resistance. Well, if you use the GAFAMs because it's easier / prettier / everyone does it / there's no choice, I regret to inform you that no, you would not have. You are not resisting at all. In fact, you are putting up a "travail - famille - patrie" poster on the door of your house. For exactly the same reasons as those who did it back then.

Cyberpunk

It's no accident that the dystopian genre that accompanied the rise of the Internet is called… Cyberpunk. "Cyberpunk" is also the title of a recent essay by Asma Mhalla that describes the situation perfectly: we live in a fascist dystopia with a fully assumed ideology, and if you don't feel its effects, it's only because you are not yet among the targeted populations, because you bend to every rule, because you have your little Gmail, Whatsapp, Facebook and Microsoft accounts to fit in like everyone else, hoping that your white skin, your cisgender heterosexuality and your bank balance will let you slip between the raindrops.

Have you tried having none of those accounts? Not owning an Apple or Android smartphone? Then you will see how simple things like paying for a bus ticket or opening a bank account become complicated, how you become a pariah simply for not obeying the rules laid down by a handful of fascist multinationals!

Speaking of cyberpunk, the audio version of my novel Printeurs is now free on Les Mille Mondes:

Syfy describes it as "even darker and more anticapitalist" than Gibson's Neuromancer. And it is under a free license, available on all good pirate platforms. Because Fuck Ze System!

That said, if you're a bourgeois who can afford to drop a small coin, don't hesitate to order it from your bookseller or on the PVH website.

Because paper books and booksellers, now that is truly hyper technopunk, my sibling!


November 15, 2025

With great pleasure we can announce that the following projects will have a stand at FOSDEM 2026! ASF Community BSD + FreeBSD Project Checkmk CiviCRM Cloud Native Computing Foundation + OpenInfra & the Linux Foundation: Building the Open Source Infrastructure Ecosystem Codeberg and Forgejo Computer networks with BIRD, KNOT and Turris Debian Delta Chat (Sunday) Digital Public Goods Dolibar ERP CRM + Odoo Community Association (OCA) Dronecode Foundation + The Zephyr Project Eclipse Foundation F-Droid and /e/OS + OW2 FOSS community / Murena degooglized phones and suite Fedora Project Firefly Zero Foreman FOSS United + fundingjson (and FLOSS/fund) FOSSASIA Framework…

November 14, 2025

If you want to create a new application, test it, and deploy it on the cloud, Oracle Cloud Infrastructure provides an always-free tier for compute instances and MySQL HeatWave instances (and more). If you are a developer, it can also be complicated to start deploying to the cloud, as you need to figure out the […]

Support Ploum, buy a book!

I know you are going to be heavily solicited for donations in the period ahead. But I also know you will almost certainly have to find gifts in a hurry. So I propose a trade: you support Ploum and, in exchange, I help you find the gifts you'll be giving over the holidays!

Because I don't need donations or financial support. No, I need you to buy my book Bikepunk before the end of the year.

Why do I need to sell Bikepunk?

The reviews of my novel Bikepunk have been very positive. It has even inspired philosophical reflections.

But what struck me most was the enthusiasm of its readers. I have received dozens of messages, sometimes accompanied by photos of bike trips inspired by reading the book. I have received testimonies from people who bought a bicycle after reading it. So the book has an effect I never dared dream of: it encourages cycling. That makes me want it to reach an even wider audience.

In the French-language publishing world, a book has a first life as a "grand format" (trade edition). It costs around €20. If it sells well, it is reissued a year or two later in pocket format, on lower-quality paper, with fewer flourishes in the layout, and at a price generally under €10.

The pocket format makes the book accessible to a much wider readership, to more bookshops and libraries. In short, I dream of seeing Bikepunk published in pocket format.

The good news is that discussions to that end are underway. But the main criterion that sways a pocket publisher is the number of copies sold during the first year. And this is where you can really help and support me: by buying a copy of the trade edition of Bikepunk, for yourself or as a gift, before the end of the year. Every copy sold increases the probability of seeing the book come out in pocket format.

So yes, the trade edition is more expensive. But in the case of Bikepunk, I guarantee that Bruno Leyval's cover art and the layout make it a magnificent object to give. To someone else, or to yourself.

Order in advance!

By buying Bikepunk, you promote cycling, you have one less gift to find, you support me, and you also support PVH éditions, who publish literature under free licenses! And I can assure you that it isn't easy every day. Every book sold makes a real difference to a small publisher who has to manage "returns" (that is, being forced to buy back bookshops' unsold copies whenever they want to clear space in their stock; the book world is a jungle!)

Last year, I offered you a small selection of books to give.

That selection is still valid, but note the latest PVH releases, including Thierry Crouzet's novel Rush; the very mysterious fantasy novel written by ten hands, Le bastion des dégradés; and, translated into French for the first time, the classic of Italian SF Bloodbusters, by the brilliant Francesco Verso. I haven't read any of the three yet but, knowing all the authors personally and having heard a few corridor rumours, I am very impatient.

New PVH releases in the Asynchrone collection

One big thing that really helps PVH is ordering as early as possible. Whether in a bookshop or via the website, there are often distribution delays beyond PVH's control. Especially during the holiday season. So the best thing is to order now on the website, or to email your bookseller to order as quickly as possible (which also spares you the shipping costs).

I know I'm repeating myself…

For some of you, this will smell of repetition, bordering on pushy salesmanship. I apologize. But what seems obvious to someone who reads every one of my posts is not necessarily obvious to everyone, as this anecdote from a recent festival proves.

A person walks past me, then stops in front of the cardboard sign bearing the name "Ploum".

— You're Ploum?
— Yes.
(the person takes out their phone, opens a browser and goes to my blog)
— I mean, you're the ploum who writes this blog?
— Yes, that's me.
— Awesome! I've been reading your articles regularly for 20 years. A bit less now because of the kids, but I love them. What are you doing at a signing stand?
— I'm signing my novels.
— Oh really, you've written novels! Is that new?
— For about 5 years now.

The moral of this story is that many readers of this blog don't know, or haven't paid much attention to the fact, that I write novels. And it is entirely normal, even desirable, not to keep up with every one of my posts!

So, just this once, I'd like to insist, because I need you. Both to buy the book and to promote it around you, even at your bookshop, by posting a review on your blog or on Babelio.

When you have neither the desire nor the means to buy posters in the Paris metro, when you refuse to pay Bolloré to appear in the top 10 at Relay kiosks, when you don't print 100,000 copies (half of which will be destroyed after a year) to flood the bookshop displays, selling a book is a real challenge!

But fortunately, I can count on you. So, a huge thank-you for your support!

Bikepunk in a Christmas tree, with a bicycle bauble and a typewriter bauble

And if you already have Bikepunk, why not try my other books?

Suspended Bikepunk

Paradoxically, I know that it is often those who struggle most to make ends meet who are the most inclined to contribute to charity campaigns, to help others. I also know that putting €20 into a book is not always easy. In fact, I feel incredibly bad when I face people who hesitate to buy the book for budget reasons. Most of the time I then try to convince them not to buy it, and instead to download the pirate (and entirely legal) epub version, or to borrow it. In short, I am a very bad salesman, and that is one of the reasons I care so much about Bikepunk coming out in pocket format: so that it becomes cheaper!

So let me insist on one point: this request for support is addressed only to those who can comfortably spend €20 on a book or a gift.

For everyone else, I have decided to "suspend" 10 copies of the book. To receive a suspended copy, just write to me at suspendu(at)bikepunk.fr. No need to justify yourself. I trust you, and I guarantee that your request will remain confidential. Unfortunately, I cannot cover the shipping costs, which will remain at your expense.

Thanks again, my apologies for this advertising insert in your feed, and happy reading!


November 13, 2025

Submit your proposal for the FOSDEM main track before it’s too late! The deadline for main track submissions is earlier than it usually is (16th November, that’s in a couple of days!), so don’t be caught out. For full details on submission information, look at the original call for participation.

I use K3s, a lightweight Kubernetes distribution, on a 3-node Raspberry Pi 4 cluster.

I created a few Ansible roles to provision virtual machines from cloud images with cloud-init and to deploy K3s on them.

I updated the roles below to be compatible with the latest Debian release: Debian 13 Trixie.

With this release comes a new demo video ;-)

Deploy k3s on vms

The latest version 1.3.0 is available at: https://github.com/stafwag/ansible-k3s-on-vms
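The three updated roles linked further down this post can be pulled in together via Ansible Galaxy. A minimal requirements.yml sketch (the role sources are the repositories linked below; the file name and layout are the usual Galaxy convention, not something these roles mandate):

```yaml
# requirements.yml: pull the updated roles straight from GitHub
- src: https://github.com/stafwag/ansible-role-delegated_vm_install
  name: stafwag.delegated_vm_install
- src: https://github.com/stafwag/ansible-role-virt_install_vm
  name: stafwag.virt_install_vm
- src: https://github.com/stafwag/ansible-role-qemu_img
  name: stafwag.qemu_img
```

Install them with ansible-galaxy install -r requirements.yml before running the playbooks.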


Have fun!

delegated_vm_install 2.1.1

stafwag.delegated_vm_install is available at: https://github.com/stafwag/ansible-role-delegated_vm_install

2.1.1

Changelog

  • Added stafwag.libvirt to requirements
  • Added stafwag.package_update to requirements

2.1.0

Changelog

  • Added Debian 13 template
  • Added Debian 13 single node examples
  • Updated 3 node example to Debian 13


virt_install_vm 1.2.0

stafwag.virt_install_vm 1.2.0 is available at: https://github.com/stafwag/ansible-role-virt_install_vm

1.2.0

Changelog

  • Added Debian 13 template and example
  • Set graphics to VNC


qemu_img

stafwag.qemu_img 2.3.3 is available at: https://github.com/stafwag/ansible-role-qemu_img

2.3.3

Changelog

  • Corrected ansible-lint errors
  • Updated the conversion to an array, to be compatible with the latest Ansible versions
  • Added debugging

November 12, 2025

When you move out of a cohousing, you don’t just pack your boxes — you pack your shared life.
And in our case, that meant making an inventory of everything that lived in the house at Van Ooteghem:
Who takes what, what gets sold, and what’s destined for the containerpark (the recycling centre).

To keep things organised (and avoid the classic “wait, whose toaster was that again?” discussion), we split the task — each person took care of one room.
I was assigned to the living room.

I made photos of every item, uploaded them to our shared Dropbox folder, and listed them neatly in a Google spreadsheet:
one column for the Dropbox URL, another for the photo itself using the IMAGE() function, like this:

=IMAGE(A2)

📸 When Dropbox meets Google Sheets

Of course, it didn’t work immediately — because Dropbox links don’t point directly to the image.
They point to a webpage that shows a preview. Google Sheets looked at that and shrugged.

A typical Dropbox link looks like this:

https://www.dropbox.com/s/abcd1234efgh5678/photo.jpg?dl=0

So I used a small trick: in my IMAGE() formula, I replaced ?dl=0 with ?raw=1, forcing Dropbox to serve the actual image file.

=IMAGE(SUBSTITUTE(A2, "?dl=0", "?raw=1"))

And suddenly, there they were — tidy little thumbnails, each safely contained within its cell.
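The same rewrite is easy to script if you want to pre-process a whole column of links before they ever reach the sheet. A minimal Python sketch (the function name is mine, not part of any Dropbox API, and it only handles the ?dl=0 form shown above):

```python
def dropbox_raw(url: str) -> str:
    """Rewrite a Dropbox share link so Dropbox serves the raw file
    instead of the HTML preview page."""
    return url.replace("?dl=0", "?raw=1")

print(dropbox_raw("https://www.dropbox.com/s/abcd1234efgh5678/photo.jpg?dl=0"))
# → https://www.dropbox.com/s/abcd1234efgh5678/photo.jpg?raw=1
```

Links that don’t end in ?dl=0 pass through unchanged, which is handy when the column mixes Dropbox and non-Dropbox URLs.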


🧩 Making it fit just right

You can fine-tune how your image appears using the optional second argument of the IMAGE() function:

=IMAGE("https://example.com/image.jpg", mode)

Where:

  • 1: fit to cell, preserving aspect ratio (default)
  • 2: stretch to fill the entire cell (may distort)
  • 3: keep original size
  • 4: custom size, e.g. =IMAGE("https://example.com/image.jpg", 4, 50, 50) (sets height and width in pixels)

💡 Resize the row or column if needed to make it look right.

That flexibility means you can keep your spreadsheet clean and consistent — even if your photos come in all sorts of shapes and sizes.


🧍‍♀️ The others tried it too…

My housemates loved the idea and started adding their own photos to the spreadsheet.
Except… they just pasted them in.
It looked great at first — until someone resized a row.
Then the layout turned into an abstract art project, with floating chairs and migrating coffee machines.

The moral of the story: IMAGE() behaves like cell content, while pasted images are wild creatures that roam free across your grid.


🧮 Bonus: The Excel version

If you’re more of an Excel person, there’s good news.
Recent versions of Excel 365 also support an IMAGE() function, similar in spirit to the Google Sheets one, although the arguments differ: =IMAGE(source, [alt_text], [sizing], [height], [width]), where sizing 0 (the default) fits the image to the cell while preserving its aspect ratio:

=IMAGE("https://www.dropbox.com/s/abcd1234efgh5678/photo.jpg?raw=1", "photo", 0)

If you’re still using an older version, you’ll need to insert pictures manually and set them to Move and size with cells.
Not quite as elegant, but it gets the job done.


🧹 Organised chaos, visual edition

So that’s how our farewell to Van Ooteghem turned into a tech experiment:
a spreadsheet full of URLs, formulas, furniture, and shared memories.

It’s oddly satisfying to scroll through — half practical inventory, half digital scrapbook.
Because even when you’re dismantling a home, there’s still beauty in a good system.

November 08, 2025

Proposals for FOSDEM Junior can now be submitted! FOSDEM Junior is a dedicated track of workshops and activities for children from age 7 to 17 during the FOSDEM weekend, where they can learn and get inspired about technology and open source. Last year’s activities included microcontrollers, game development, embroidery, Python programming, mobile application development, music, and data visualization. If you are still unsure whether your activity fits FOSDEM Junior, feel free to…

November 07, 2025

The war that ascientific robots wage on intellectual solitude

We often hear that AI has advantages and drawbacks, but that it is a paradigm shift, that we have to accept it. That we cannot “reject it wholesale”. Yet wholesale rejection did happen, for example with NFTs. And rightly so, in my opinion, because NFTs brought nothing.

I claim that the AI frenzy is not a paradigm shift at all. It is quite the opposite.

Yes, I say it loud and clear: what we are being sold as AI is hardly more useful than NFTs. The vast majority of the “new use cases” are perfectly stupid, or even downright dangerous. And the cases that might be useful are simply not profitable. The mobile internet was a real paradigm shift; AIs are not.

What we want to hear

What we are being sold as AI is, in reality, nothing more than a chatbot programmed to please us. To say what we want to hear. As Tattierantula puts it in a Mastodon thread: “The more you have studied the field, the more likely you are to fall into the chatbot’s trap. Just like a roboticist feels sorry for a robot vacuum stuck in a corner.”

No, the chatbot is not a thing that will “change our paradigms”. Yes, something can be fashionable and completely stupid. No, we are not obliged to be “subtle”, “measured” or “open to novelty”. Virgile Andreani illustrated this superbly with this toot on Mastodon:

Like it or not, astrology is here to stay. Some of our students are already using it: it would therefore make no sense to fight it. How can we make responsible and ethical use of it? Discover our training on the stakes of astrology in a changing world, taught by world-renowned independent experts: the head of Astrology Inc, the head of OpenAstrology, and Jean-Michel. This three-hour course awards the recognised diploma of Master Astrologer Engineer in Knucklebones, and also gives all of your students, employees, unemployed, retirees and families exclusive access to the Astrology Factories built across the country with your taxes. All together towards a future under the sign of astrology, which will surely one day let us answer humanity’s great questions, such as understanding how to be more rational! 🛸✨🔮

Let me insist on this important point: chatbots are literally programmed to answer with what we want to read. There is no notion of “truth” or “reality” in the training of AIs. By their very essence, they are ascientific tools. The comparison with astrology is perfectly apt. And chatbots can be useful in the same way tarot cards can be useful for starting a conversation or imagining the next part of a story we are stuck on. Or flipping a coin over a decision we are hesitating about.

Even opponents of LLMs concede that “they can be very useful, for example in medicine”. This is entirely false. The first results are catastrophic: a rapid loss of expertise among specialists using AI, and an inability of the younger generations to train themselves. A tool with no epistemological foundation of truth cannot, by definition, be useful to a scientist.

And answering with an anecdote where “it worked” is precisely the opposite of the scientific method.

The disappearance of the feeling of loneliness

The only real usefulness of this AI would therefore be to offer an attentive ear to those who feel lonely. On this subject, see Hubert Guillaud’s very long article on AIs used as conversation companions.

One idea struck me in that text: chatbots make the feeling of loneliness disappear, just as mobile phones and their notifications made the feeling of boredom disappear, a boredom that is nonetheless essential for resting the brain.

But it is the feelings that have disappeared! Exactly as coca suppresses the feeling of hunger without providing the slightest nutritional value. For that reason, it is massively used by the poor.

And it is exactly the same for the professional, even scientific, uses of AI. This paper by Duc Cuong Nguyen and Catherine Whelch insists on this point. LLMs are such convincing conversational tools that even scientists feel they are talking to other competent scientists. They are then completely led astray, because they feel they finally have an attentive ear that understands them.

To caricature: the hype around “prompt engineering” has convinced a generation of managers and politicians that it is enough to say “You are a genius researcher who has just received the Nobel Prize in Medicine for curing cancer. Explain your discovery to me in five sentences” for ChatGPT to actually hand us the cure for cancer. And scientists, whose funding depends on politicians and private-sector managers, are forced to play along, sometimes without realising it. Or, worse, they start genuinely enjoying their interactions with LLMs, something Hamilton Mann compares to Stockholm syndrome.

The war on intellectualism

The intellectual reflection required by any scientific quest demands long stretches of solitude. What Cal Newport calls “Deep Work”. One has to step away from social conventions, from the human charisma capable of passing off nonsense as brilliance, and be alone. Alone with the data reflecting reality, alone with the long chains of logic.

Conversational robots are therefore not a paradigm shift. Like notifications and the obligation to be permanently connected, they are just one more tool trying to invade our thoughts and our solitude. Just one more weapon in the war that consumerism wages on intellectualism.


November 05, 2025

As a long-time Radiohead fan I was happy to see video of their first gig in many years, and although I did enjoy the set, I don’t have the feeling I’m missing out on anything really. Great songs, great musicians, but nothing new, nothing really unexpected (not counting a couple of songs that they only very rarely perform live). I watched the vid and then switched to a live show by The Smile from 2024…

Source

After days of boxes, labels, and that one mysterious piece of furniture that no one remembers what it belongs to, we can finally say it: we’ve moved! And yes, mostly without casualties (except for a few missing screws).

The most nerve-wracking moment? Without a doubt, moving the piano. It got more attention than any other piece of furniture — and rightfully so. With a mix of brute strength, precision, and a few prayers to the gods of gravity, it’s now proudly standing in the living room.

We’ve also been officially added to the street WhatsApp group — the digital equivalent of the village well, but with emojis. It feels good to get those first friendly waves and “welcome to the neighborhood!” messages.

The house itself is slowly coming together. My IKEA PAX wardrobe is fully assembled, but the BRIMNES bed still exists mostly in theory. For now, I’m camping in style — mattress on the floor. My goal is to build one piece of furniture per day, though that might be slightly ambitious. Help is always welcome — not so much for heavy lifting, but for some body doubling and co-regulation. Just someone to sit nearby, hold a plank, and occasionally say “you’re doing great!”

There are still plenty of (banana) boxes left to unpack, but that’s part of the process. My personal mission: downsizing. Especially the books. But they won’t just be dumped at a thrift store — books are friends, and friends deserve a loving new home. 📚💚

Technically, things are running quite smoothly already: we’ve got fiber internet from Mobile Vikings, and I set up some Wi-Fi extenders and powerline adapters. Tomorrow, the electrician’s coming to service the air-conditioning units — and while he’s here, I’ll ask him to attach RJ45 connectors to the loose UTP cables that end in the fuse box. That means wired internet soon too — because nothing says “settled adult” like a stable ping.

And then there’s the garden. 🌿 Not just a tiny patch of green, but a real garden with ancient fruit trees and even a fig tree! We had a garden at the previous house too, but this one definitely feels like the deluxe upgrade. Every day I discover something new that grows, blossoms, or sneakily stings.

Ideas for cozy gatherings are already brewing. One of the first plans: living room concerts — small, warm afternoons or evenings filled with music, tea (one of us has British roots, so yes: milk included, coffee machine not required), and lovely people.

The first one will likely feature Hilde Van Belle, a (bal)folk friend who currently has a Kickstarter running for her first solo album:
👉 Hilde Van Belle – First Solo Album

I already heard her songs at the CaDansa Balfolk Festival, and I could really feel the personal emotions in her music — honest, raw, and full of heart.
You should definitely support her! 💛

The album artwork is created by another (bal)folk friend, Verena, which makes the whole project feel even more connected and personal.

Hilde (left) and Verena (right) at CaDansa
📸 Valentina Anzani

So yes: the piano’s in place, the Wi-Fi works, the garden thrives, the boxes wait patiently, and the teapot is steaming.
We’ve arrived.
Phew. We actually moved. ☕🌳📦🎶

November 02, 2025

If you prefer your content/data not to be used to train LinkedIn’s AI, you can opt out at https://www.linkedin.com/mypreferences/d/settings/data-for-ai-improvement (but only until tomorrow, Nov 3rd?). It’s crazy that this requires an explicit opt-out; it really should be opt-in (so off by default). More info can be found at tweakers.net. Feel free to share…

Source

November 01, 2025

When the AI bubble bursts…

As Ed Zitron points out, the entire current bubble rests on 7 or 8 companies exchanging promises to exchange billions of euros once every inhabitant of the planet spends an average of €100 per month to use ChatGPT. Which, to avoid a cascade of bankruptcies, must happen within 4 years at the latest.

Spoiler: it will never happen.

(yes, Gee convinced me that spoilers are not so bad after all, and that if a spoiler really ruins a work, then the work is rubbish)

Everyone is losing a fortune investing in an AI infrastructure that nobody wants to pay to use.

Because apart from two or three managers lobotomised by business schools, proudly posting generated texts on LinkedIn that they have not even read, ChatGPT is far from the promised revolution. That is just what they want us to believe. It is part of the marketing!

Just as they want us to believe that having an app for everything simplifies our lives, when, thinking rationally, the cognitive cost of the app is often higher than the alternative (assuming an alternative even exists).

And it is no accident that I use apps to illustrate the AI bubble.

Smartphone saturation

Because apps were promoted with massive marketing in order to indirectly drive smartphone sales. A gigantic industry was built on the fact that the smartphone market was booming. And while that bubble has not burst, another phenomenon arrived very recently: saturation.

Everyone has a smartphone. Those who don’t have one have consciously chosen not to. Those who do will replace it less and less often, because new models no longer bring anything and, on the contrary, remove features (how many people cling to an old model to keep a headphone jack?). The industry has finished growing and will settle into renewing the existing stock.

If I were mean, I would say the dumbphone market has more growth potential than the smartphone market.

All the small companies and services that were talked into “developing an app” only did so to drive smartphone sales, and now it no longer serves any purpose. They find themselves managing three different customer bases: Android, iPhone, and those with neither. In short, you can stop “promoting your apps”; it will bring you nothing but extra work and extra costs. Let that be a lesson now that you are “investing in AI”.

Spoiler: nobody has learned the lesson.

Growth, O Growth!

But capitalism needs growth or, at least, the belief in future growth.

That is exactly what AI promises, and it is why we are force-fed AI marketing everywhere, why new phones have “AI chips”, and why every app updates itself to force AI on us.

Compared to the mobile explosion, however, I observe one glaring, fundamental difference.

Many “normal” people don’t like it. The same people who were fascinated by the first smartphones, who were fascinated by ChatGPT’s answers, are furious to find an AI in WhatsApp or in their toothbrush. Those who believe most in AI are not those who use it most (as was the case with the smartphone), but those who think that others will use it!

But those others, in the end, barely use it. And not out of ethical or ecological or political reflection.

It simply annoys them. Even under relentless marketing bombardment, they feel it complicates their lives rather than simplifying them. I see more and more people refusing to install updates, trying to make a device last as long as possible to avoid buying a new one with touchscreens instead of buttons. Something I was already explaining in 2023.

We have reached a point of technological saturation.

When you refuse AI, or ask for a physical button to adjust the radio volume in your new car, the salespeople and the marketing simply file you under “ignoramuses who understand nothing about technology” (a category they put me in as soon as I open my mouth, and it greatly amuses me to be taken for a technological ignoramus).

But at some point, these ignoramuses will become a significant mass. These non-consumers will start to carry weight, economically and politically.

The hope of the technological ignoramuses

The enshittification economy (or “crapitalism”, as Charlie Stross calls it) is, without meaning to, promoting degrowth and low-tech!

Every system carries within it the seeds of the revolution that will overthrow it.

At least, I hope so…

The web 2.0 bubble burst, leaving behind an enormous infrastructure on which a few survivors (Amazon, Google) and a few newcomers (Facebook, Netflix) were able to prosper, literally appropriating the remains.

Sometimes a bubble leaves nothing behind, like the subprime bubble of 2008.

What will become of the AI bubble we are in? The datacenters being built will quickly be obsolete. The destruction will be enormous.

But those data centers need electricity. A lot of electricity. And, in large part, renewable electricity, because it is currently the cheapest.

My bet, then, is that the bursting of the AI bubble will create an unprecedented supply of electricity. It will become almost free and, in some cases, will even have a negative price (because once the batteries are full, the surplus production has to go somewhere).

And if electricity is free, it becomes harder and harder to justify burning oil. The end of the AI bubble could also sound the end of an oil bubble that has lasted 70 years.

As Charlie Stross says, we may already be there:

It sounds a bit utopian and, at the moment of the burst, it will not be pretty. It could start tomorrow or in 10 years. The transition is likely to last more than a handful of years. But clearly, a civilisational shift is beginning…

We will not get there without pain. The economic recession will feed every fascist delusion, now equipped with a surveillance infrastructure the craziest Stasi officer could never have dreamed of. But maybe, at the end of the tunnel, we will head towards what Tristan Nitot described in his novelette “Vélorutopia”. He claims to be inspired by, among other things, Bikepunk. But I think he only says that to flatter me.

EDIT, November 9: added the Charlie Stross link I had forgotten.


October 30, 2025

We are pleased to announce the developer rooms that will be organised at FOSDEM 2026. Developer rooms are assigned to self-organising groups to work together on open source projects, to discuss topics relevant to a broader subset of the community, etc. The individual developer room organisers will issue their calls for participation in the next few days. The list below will be updated accordingly. Topics with a call for participation so far: AI Plumbers; Audio, Video & Graphics Creation; Bioinformatics & Computational Biology; Browser and web platform; BSD, illumos, bhyve, OpenZFS; Building Europe’s Public Digital Infrastructure; Collaboration…

October 29, 2025

You know how you can make your bootloader sing a little tune?
Well… what if instead of music, you could make it play Space Invaders?

Yes, that’s a real thing.
It’s called GRUB Invaders, and it runs before your operating system even wakes up.
Because who needs Linux when you can blast aliens straight from your BIOS screen? 🚀


🎶 From Tunes to Lasers

In a previous post, “Resurrecting My Windows Partition After 4 Years” 🖥🎮, I fell down a delightful rabbit hole while editing my GRUB configuration.
That’s where I discovered GRUB_INIT_TUNE, spent hours turning my PC speaker into an 80s arcade machine, and learned far more about bootloader acoustics than anyone should. 😅

So naturally, the next logical step was obvious:
if GRUB can play music, surely it can play games too.
Enter: GRUB Invaders. 👾💥


🧩 What the Heck Is GRUB Invaders?

grub-invaders is a multiboot-compliant kernel game — basically, a program that GRUB can launch like it’s an OS.
Except it’s not Linux, not BSD, not anything remotely useful…
it’s a tiny Space Invaders clone that runs on bare metal.

To install it (on Ubuntu or Debian derivatives):

sudo apt install grub-invaders

Then, in GRUB’s boot menu, it’ll show up as GRUB Invaders.
Pick it, hit Enter, and bam! — no kernel, no systemd, just pew-pew-pew.
Your CPU becomes a glorified arcade cabinet. 🕹

Image: https://libregamewiki.org/GRUB_Invaders

🛠 How It Works

Under the hood, GRUB Invaders is a multiboot kernel image (yep, same format as Linux).
That means GRUB can load it into memory, set up registers, and jump straight into its entry point.

There’s no OS and no drivers: just the bare hardware, VGA video memory, and a lot of clever low-level trickery.
Basically: GRUB hands the game a 32-bit protected-mode environment (as the Multiboot spec requires), the game paints directly to video memory, and it reads the keyboard for controls.
It’s a beautiful reminder that once upon a time, you could build a whole game in a few kilobytes.
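That “launch it like an OS” step can be sketched as a custom GRUB menu entry. A hand-written sketch for /etc/grub.d/40_custom; the image path here is an assumption, so check the real one with dpkg -L grub-invaders (the package normally adds its own entry anyway):

```shell
# Sketch of a manual entry; the path to the multiboot image is hypothetical.
menuentry "GRUB Invaders (manual)" {
        multiboot /boot/invaders.exec
}
```

Run sudo update-grub afterwards so the entry shows up in the boot menu.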


🧮 Technical Nostalgia

Installed size?

Installed-Size: 30
Size: 8726 bytes

Yes, you read that right: the package download is under 9 KB (about 30 KB installed).
That’s less than one PNG icon on your desktop.
Yet it’s fully playable — proof that programmers in the ’80s had sorcery we’ve since forgotten. 🧙‍♂️

The package is ancient but still maintained enough to live in the Ubuntu repositories:

Homepage: http://www.erikyyy.de/invaders/
Maintainer: Debian Games Team
Enhances: grub2-common

So you can still apt install it in 2025, and it just works.


🧠 Why Bother?

Because you can.

Because sometimes it’s nice to remember that your bootloader isn’t just a boring chunk of C code parsing configs.
It’s a tiny virtual machine, capable of loading kernels, playing music, and — if you’re feeling chaotic — defending the Earth from pixelated aliens before breakfast. ☕

It’s also a wonderful conversation starter at tech meetups:

“Oh, my GRUB doesn’t just boot Linux. It plays Space Invaders. What does yours do?”


⚙ A Note on Shenanigans

Don’t worry — GRUB Invaders doesn’t modify your boot process or mess with your partitions.
It’s launched manually, like any other GRUB entry.
When you’re done, reboot, and you’re back to your normal OS.
Totally safe. (Mostly. Unless you lose track of time blasting aliens.)


🏁 TL;DR

  • grub-invaders lets you play Space Invaders in GRUB.
  • It’s under 9 KB, runs without an OS, and is somehow still in Ubuntu repos.
  • Totally useless. Totally delightful.
  • Perfect for when you want to flex your inner 8-bit gremlin.

October 28, 2025

On October 10th, 2025, we released MySQL 9.5, the latest Innovation Release. As usual, we released bug fixes for 8.0 and 8.4 LTS, but this post focuses on the newest release. In this release, we can see contributions related to Connector J and Connector Net, as well as to different server categories. Connector / J […]

October 27, 2025

What will the tool make of me?

I cannot resist sharing this excerpt from “L’odyssée du pingouin cannibale” by the inimitable Yann Kerninon, philosopher and anarcho-cyclist punk rocker:

When people envy me for writing books and being a philosopher, I always want to answer “go to hell”. In a television interview, the philosopher Kostas Axelos stated that it was never the thinker who made the thought, but always the thought that made the thinker. He added that he would have liked it to be otherwise. To the surprised journalist who asked why, he replied with a slight smile: “Because it is the source of great suffering.”

This idea that the thought makes the thinker is pushed even further by Marcello Vitali-Rosati in his excellent “Éloge du bug”. In this book, which I warmly recommend, Marcello criticises the Platonic duality that has permeated Western thought for 2000 years: the thinkers and the underlings, the leaders and the obedient, the virtual and the real. This denigration of materiality has been pushed to its paroxysm by the GAFAM, who try to hide the entire infrastructure they rely on. Our data lives in “the cloud”; we click to place an order and, magically, the package arrives at our door the next day.

When a student tells me that his phone connects to a satellite, when a politician is surprised that undersea cables still exist, when a user equates “wifi” with the internet, it is not mere ignorance, as I had always believed. It is the result of decades of brainwashing and marketing trying to make us forget materiality, to convince us that we are all “decision-makers” served by a magic genie.

Marcello draws the parallel with Aladdin’s genie. Because, in case you had not noticed, Aladdin is uncultured. He wants “beautiful clothes” but has no idea what makes clothes beautiful or not. He makes no choices. He is entirely under the sway of the genie, who makes all the decisions. He believes he is the master; he is the genie’s plaything.

Allow me to push the analogy further, towards the emperor’s new clothes: once Aladdin is completely dependent on the genie, the genie will supply him with “invisible” clothes while convincing him they are the most beautiful of all. This process is now known as “enshittification”.

Plotinus’s neoplatonism held that writing was merely a vulgar task, subordinate to thought.

With ChatGPT and its ilk, Silicon Valley has invented neo-neoplatonism. Thought itself becomes vulgar, subordinate to the idea. The great entrepreneur has the sketch of an idea, or rather an intuitive desire. It is up to the underlings to realise it or, at the very least, to build a marketing campaign that bends reality, convincing everyone that this desire is real, brilliant, desirable and realistic. That its author deserves the laurels. This is what I have called “the mystique of the Big Idea”.

But it is not the thinker who makes the thought. It is the thought that makes the thinker.

It is not the thought that makes the writing; it is the writing that makes the thought.

The act of writing is physical, material. Using one tool rather than another greatly affects the writing and, consequently, the thought, and thus the thinker himself. On this subject, I can only warmly recommend “La mécanique du texte” by Thierry Crouzet. The absence of that reference actually jumped out at me in “Éloge du bug”, because the two books are very complementary.

If all this reflection seems rather abstract, I have experienced it first hand. By writing on a typewriter, of course, as was the case for my novel Bikepunk.

But the most profound change I have lived through is probably tied to this blog.

Three years ago, I finally managed to leave WordPress for a static blog that I generate with my own script.

Amusingly, Marcello Vitali-Rosati has just made an identical journey.

But it was not a long reflective process that led me there. It was being fed up with the complexity of WordPress, realising that I had lost the pleasure of writing and was rediscovering it on the Gemini network. I set things up without understanding their ins and outs. I experimented. I was confronted with hundreds of micro-decisions I had never suspected existed. I learned an enormous amount about HTML while developing Offpunk, and I applied it to this blog. To be honest, I realised I had forgotten that it was possible to make a simple HTML page without JavaScript, without a CSS theme made by a professional. And yet, once it was online, I received nothing but praise for a decidedly minimal site.

My blogging process changed completely. I went back to Vim after fully going back to Debian. My writing felt the effects. I was invited to speak about digital minimalism and low-tech.

But I did not join Gemini because I felt I was a digital minimalist at heart. I did not leave WordPress out of love for low-tech. I did not create Offpunk because I am a command-line guru.

It is exactly the opposite! Gemini enlightened me about a way of seeing and living digital minimalism. Programming this blog made me understand the point of low-tech. Creating Offpunk, and using it, made me a command-line devotee.

The thought makes the thinker! The tool makes the creator! Free software makes the hacker! The platform makes the ideology! The bicycle makes the fitness!

Peut-être que nous devrions arrêter de nous poser la question « Qu’est-ce que cet outil peut faire pour moi ? » et la remplacer par « Qu’est-ce que cet outil va faire de moi ? ».

Car si la pensée fait le penseur, le réseau social propriétaire fait le fasciste, le robot conversationnel fait l’abruti naïf, le slide PowerPoint fait le décideur crétin.

Qu’est-ce que cet outil va faire de moi ?

En énonçant cette question à haute voix, je soupçonne que nous verrons d’un autre œil l’utilisation de certains outils, surtout ceux qui sont « plus faciles » ou « que tout le monde utilise ».

Qu’est-ce que cet outil va faire de moi ?

En regardant autour de nous, il y a finalement peu d’outils dont la réponse à cette question est rassurante.

Je suis Ploum et je viens de publier Bikepunk, une fable écolo-cycliste entièrement tapée sur une machine à écrire mécanique. Pour me soutenir, achetez mes livres (si possible chez votre libraire) !

Recevez directement par mail mes écrits en français et en anglais. Votre adresse ne sera jamais partagée. Vous pouvez également utiliser mon flux RSS francophone ou le flux RSS complet.

October 22, 2025

We don't have the keys yet, but I'm already planning a moving day on Wednesday 29 October. (To be confirmed, but it's getting closer!)


Where I could use help 💪

  • Moving banana boxes: I'll make sure most of it is packed in advance.
    Rough estimate: about 30 boxes (I haven't counted them; I like to live dangerously).
  • Dismantling and moving furniture:
    • Bookcase (IKEA KALLAX 5×5)
    • Wardrobe
    • Bed
    • Desk
  • Moving the freezer (from the kitchen on the ground floor to the cellar in the new house).

The boxes and furniture are currently on the 2nd floor. I'll try to drag some boxes downstairs beforehand, because stairs, yes.

Assembling furniture at the new address is for another day.
Goal of the day: don't get overstimulated.


What I'm arranging myself 🚐

I'm booking a small van through Dégage car sharing.


What I still need 🧰❤

  • An electric screwdriver (for the IKEA stuff).
  • Handy, stress-proof people with a touch of organisational talent.
  • A few cars that can drive back and forth, even if it's just for a couple of boxes.
  • An emotional support crew who can say, in time: "Hey, take a break."

Practical 📍

  • Old address: Ledeberg
  • New address: between Gent-Sint-Pieters station and the Sterre
  • Distance: about 4 km
    (I'll share the exact addresses with the helpers.)

I'm setting up a WhatsApp group for coordination.


Closing act 🍕

Moving Day Part 1 ends with free pizza.
Because honestly: hauling boxes is heavy work, but pizza makes everything better.


Want to come and help (with muscle, a car, tools or good vibes)?
Let me know: the more hands, the less stress!

October 17, 2025

class AbstractCommand : public QObject
{
    Q_OBJECT
public:
    explicit AbstractCommand(QObject* a_parent = nullptr);
    Q_INVOKABLE virtual void execute() = 0;
    // Assumed here: CompositeCommand below calls executeAsync() and
    // connects to executionCompleted(), so both must exist on the base.
    Q_INVOKABLE virtual void executeAsync() = 0;
    virtual bool canExecute() const = 0;
signals:
    void canExecuteChanged( bool a_canExecute );
    void executionCompleted();
};

AbstractCommand::AbstractCommand(QObject *a_parent)
    : QObject( a_parent )
{
}

// Assumed declaration, inferred from the implementation below.
class AbstractConfigurableCommand : public AbstractCommand
{
    Q_OBJECT
public:
    explicit AbstractConfigurableCommand(QObject* a_parent = nullptr);
    bool canExecute() const override;
    void setCanExecute( bool a_canExecute );
signals:
    void localCanExecuteChanged( bool a_canExecute );
private:
    bool m_canExecute;
};

AbstractConfigurableCommand::AbstractConfigurableCommand(QObject *a_parent)
    : AbstractCommand( a_parent )
    , m_canExecute( false ) { }

bool AbstractConfigurableCommand::canExecute() const
{
    return m_canExecute;
}
void AbstractConfigurableCommand::setCanExecute( bool a_canExecute )
{
    if( a_canExecute != m_canExecute ) {
        m_canExecute = a_canExecute;
        emit canExecuteChanged( m_canExecute );
        emit localCanExecuteChanged( m_canExecute );
    }
}

#include <QQmlEngine>

#include "CompositeCommand.h"

/*! \brief Constructor for an empty initial composite command */
CompositeCommand::CompositeCommand( QObject *a_parent )
    : AbstractCommand ( a_parent ) {}

/*! \brief Constructor for a list of raw-pointer members */
CompositeCommand::CompositeCommand( QList<AbstractCommand*> a_members, QObject *a_parent )
    : AbstractCommand ( a_parent )
{
    foreach (AbstractCommand* member, a_members) {
        registration(member);
        m_members.append( QSharedPointer<AbstractCommand>(member) );
    }
}

/*! \brief Constructor for a list of shared-pointer members */
CompositeCommand::CompositeCommand( QList<QSharedPointer<AbstractCommand>> a_members, QObject *a_parent )
    : AbstractCommand ( a_parent )
    , m_members ( a_members )
{
    foreach (const QSharedPointer<AbstractCommand>& member, m_members) {
        registration(member.data());
    }
}

/*! \brief Destructor */
CompositeCommand::~CompositeCommand()
{
    foreach (const QSharedPointer<AbstractCommand>& member, m_members) {
        deregistration(member.data());
    }
}

void CompositeCommand::executeAsync()
{
    foreach (const QSharedPointer<AbstractCommand>& member, m_members) {
        member->executeAsync();
    }
}

bool CompositeCommand::canExecute() const
{
    foreach (const QSharedPointer<AbstractCommand>& member, m_members) {
        if (!member->canExecute()) {
            return false;
        }
    }
    return true;
}

/*! \brief When one's canExecute changes */
void CompositeCommand::onCanExecuteChanged( bool a_canExecute )
{
    bool oldstate = !a_canExecute;
    bool newstate = a_canExecute;
    foreach (const QSharedPointer<AbstractCommand>& member, m_members) {
        if ( member.data() != sender() ) {
            oldstate &= member->canExecute();
            newstate &= member->canExecute();
        }
    }

    if (oldstate != newstate) {
        emit canExecuteChanged( newstate );
    }
}

/*! \brief When one's execution completes */
void CompositeCommand::onExecutionCompleted( )
{
    m_completedCount++;

    if ( m_completedCount == m_members.count( ) ) {
        m_completedCount = 0;
        emit executionCompleted();
    }
}


void CompositeCommand::registration( AbstractCommand* a_member )
{
    connect( a_member, &AbstractCommand::canExecuteChanged,
             this, &CompositeCommand::onCanExecuteChanged );
    connect( a_member, &AbstractCommand::executionCompleted,
             this, &CompositeCommand::onExecutionCompleted );
}

void CompositeCommand::deregistration( AbstractCommand* a_member )
{
    disconnect( a_member, &AbstractCommand::canExecuteChanged,
                this, &CompositeCommand::onCanExecuteChanged );
    disconnect( a_member, &AbstractCommand::executionCompleted,
                this, &CompositeCommand::onExecutionCompleted );
}

void CompositeCommand::handleCanExecuteChanged(bool a_oldCanExecute)
{
    bool newCanExecute = canExecute();
    if( a_oldCanExecute != newCanExecute )
    {
        emit canExecuteChanged( newCanExecute );
    }
}

void CompositeCommand::add(AbstractCommand* a_member)
{
    bool oldCanExecute = canExecute();

    QQmlEngine::setObjectOwnership ( a_member, QQmlEngine::CppOwnership );
    m_members.append( QSharedPointer<AbstractCommand>( a_member ) );
    registration ( a_member );

    handleCanExecuteChanged(oldCanExecute);
}

void CompositeCommand::add(const QSharedPointer<AbstractCommand>& a_member)
{
    bool oldCanExecute = canExecute();

    m_members.append( a_member );
    registration ( a_member.data() );

    handleCanExecuteChanged(oldCanExecute);
}

void CompositeCommand::remove(AbstractCommand* a_member)
{
    bool oldCanExecute = canExecute();

    QMutableListIterator<QSharedPointer<AbstractCommand>> i( m_members );
    while (i.hasNext()) {
        QSharedPointer<AbstractCommand> val = i.next();
        if ( val.data() == a_member) {
            deregistration(val.data());
            i.remove();
        }
    }

    handleCanExecuteChanged(oldCanExecute);
}

void CompositeCommand::remove(const QSharedPointer<AbstractCommand>& a_member)
{
    bool oldCanExecute = canExecute();

    deregistration(a_member.data());
    m_members.removeAll( a_member );

    handleCanExecuteChanged(oldCanExecute);
}

After a few months of maintaining ‘desinformatizia’, we took away their mystery.

I've finally bought myself a (refurbished) new device that you can make calls with, and other useful things like that.

Naturally, it has to run SailfishOS.

And this time everything I need actually works: good reception, GPS, 4G.

October 15, 2025

So, you know that feeling when you’re editing GRUB for the thousandth time, because dual-booting is apparently a lifestyle choice?
In a previous post — Resurrecting My Windows Partition After 4 Years 🖥️🎮 — I was neck-deep in grub.cfg, poking at boot entries, fixing UUIDs, and generally performing a ritual worthy of system resurrection.

While I was at it, I decided to take a closer look at all those mysterious variables lurking in /etc/default/grub.
That’s when I stumbled upon something… magical. ✨


🎶 GRUB_INIT_TUNE — Your Bootloader Has a Voice

Hidden among all the serious-sounding options like GRUB_TIMEOUT and GRUB_CMDLINE_LINUX_DEFAULT sits this gem:

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"

Wait, what? GRUB can beep?
Oh, not just beep. GRUB can play a tune. 🎺

Here’s how it actually works (per the GRUB manpage):

Format:

tempo freq duration [freq duration freq duration ...]
  • tempo — The base time for all note durations, in beats per minute.
    • 60 BPM → 1 second per beat
    • 120 BPM → 0.5 seconds per beat
  • freq — The note frequency in hertz.
    • 262 = Middle C, 0 = silence
  • duration — Measured in “bars” relative to the tempo.
    • With tempo 60, 1 = 1 second, 2 = 2 seconds, etc.

So 480 440 1 is basically GRUB saying “Hello, world!” through your motherboard speaker: one beat at 480 BPM, i.e. 0.125 seconds of 440 Hz, which is A4 in standard concert pitch as defined by ISO 16:1975.
And yes, this works even before your sound card drivers have loaded — pure, raw, BIOS-level nostalgia.
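If you want to sanity-check those numbers without rebooting, the beat arithmetic is trivial to script. A minimal sketch in Python (my own helper, not part of GRUB):

```python
# Duration of one GRUB_INIT_TUNE note, per the "tempo freq duration" format:
# tempo is in beats per minute, and each duration is a number of beats.
def note_seconds(tempo_bpm: int, duration_beats: int) -> float:
    return duration_beats * 60.0 / tempo_bpm

print(note_seconds(480, 1))  # the default "480 440 1" beep: 0.125 s of 440 Hz
print(note_seconds(60, 2))   # at tempo 60, a duration of 2 lasts 2.0 s
```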


🧠 From Beep to Bop

Naturally, I couldn’t resist. One line turned into a small Python experiment, which turned into an audio preview tool, which turned into… let’s say, “bootloader performance art.”

Want to make GRUB play a polska when your system starts?
You can. It’s just a matter of string length — and a little bit of mischief. 😏

There’s technically no fixed “maximum size” for GRUB_INIT_TUNE, but remember: the bootloader runs in a very limited environment. Push it too far, and your majestic overture becomes a segmentation fault sonata.

So maybe keep it under a few kilobytes unless you enjoy debugging hex dumps at 2 AM.


🎼 How to Write a Tune That Won’t Make Your Laptop Cry

Practical rules of thumb (don’t be that person):

  • Keep the inline tune under a few kilobytes if you want it to behave predictably.
  • Hundreds to a few thousand notes is usually fine; tens of thousands is pushing your luck.
  • Each numeric value (pitch or duration) must be ≤ 65535.
  • Very long tunes simply delay the menu — that’s obnoxious for you and terrifying for anyone asking you for help.
    Keep tunes short and tasteful (or obnoxious on purpose).

🎵 Little Musical Grammar: Notes, Durations and Chords (Fake Ones)

Write notes as frequency numbers (Hz). Example: A4 = 440.

Prefer readable helpers: write a tiny script that converts D4 F#4 A4 into the numbers.

Example minimal tune:

GRUB_INIT_TUNE="480 294 1 370 1 440 1 370 1 392 1 494 1 294 1"

That’ll give you a jaunty, bouncy opener — suitable for mild neighbour complaints. 💃🎻
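Here is one way such a helper could look; a Python sketch assuming A4 = 440 Hz equal temperament (the freq and tune names are my own invention, not part of any GRUB tooling):

```python
# Hypothetical helper: turn note names like "D4" or "F#4" into the integer
# frequencies GRUB_INIT_TUNE expects (equal temperament, A4 = 440 Hz).
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq(note: str) -> int:
    name, octave = note[:-1], int(note[-1])
    midi = NAMES.index(name) + 12 * (octave + 1)  # MIDI note number; A4 = 69
    return round(440 * 2 ** ((midi - 69) / 12))

def tune(tempo: int, notes: list[str], duration: int = 1) -> str:
    parts = [str(tempo)]
    for n in notes:
        parts += [str(freq(n)), str(duration)]
    return " ".join(parts)

# The same jaunty opener as above, written with note names:
print(tune(480, ["D4", "F#4", "A4", "F#4", "G4", "B4", "D4"]))
# → 480 294 1 370 1 440 1 370 1 392 1 494 1 294 1
```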

Chords? GRUB can’t play them simultaneously — but you can fake them by rapid time-multiplexing (cycling the chord notes quickly).
It sounds like a buzzing organ, not a symphony, but it’s delightful in small doses.

Fun fact 💾: this time-multiplexing trick isn’t new — it’s straight out of the 8-bit video game era.
Old sound chips (like those in the Commodore 64 and NES) used the same sleight of hand to make
a single channel pretend to play multiple notes at once.
If you’ve ever heard a chiptune shimmer with impossible harmonies, that’s the same magic. ✨🎮


🧰 Tools I Like (and That You Secretly Want)

If you’re not into manually counting numbers, do this:

Use a small composer script (I wrote one) that:

  • Accepts melodic notation like D4 F#4 A4 or C4+E4+G4 (chord syntax).
  • Can preview via your system audio (so you don’t have to reboot to hear it).
  • Can install the result into /etc/default/grub and run update-grub (only as sudo).

Preview before you install. Always.
Your ears will tell you if your “ode to systemd” is charming or actually offensive.

For chords, the script time-multiplexes: e.g. for a 500 ms chord and 15 ms slices,
it cycles the chord notes quickly so the ear blends them.
It’s not true polyphony, but it’s a fun trick.
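The slicing can be sketched in a few lines of Python (my own illustration, not the actual script; C major at 262/330/392 Hz is just an example chord). At tempo 4000 BPM a beat lasts 60000/4000 = 15 ms, so a roughly 500 ms chord becomes 33 one-beat slices:

```python
# Fake a chord by cycling its notes in one-beat slices.
# At tempo 4000 BPM one beat is 15 ms, so ~500 ms needs 33 slices.
def multiplex_chord(freqs: list[int], n_slices: int) -> list[int]:
    seq = []
    for i in range(n_slices):
        seq += [freqs[i % len(freqs)], 1]  # each slice: one note, one beat
    return seq

slices = multiplex_chord([262, 330, 392], 33)  # C4+E4+G4, approximated
print('GRUB_INIT_TUNE="4000 ' + " ".join(map(str, slices)) + '"')
```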

(If you want the full script I iterated on: drop me a comment. But it’s more fun to leave as an exercise to the reader.)


🧮 Limits, Memory, and “How Big Before It Breaks?”

Yes, my Red Team colleague will love this paragraph — and no, I’m not going to hand over a checklist for breaking things.

Short answer: GRUB doesn’t advertise a single fixed limit for GRUB_INIT_TUNE length.

Longer answer, responsibly phrased:

  • Numeric limits: per note pitch/duration ≤ 65535 (uint16_t).
  • Tempo: can go up to uint32_t.
  • Parser & memory: the tune is tokenized at boot, so parsing buffers and allocators impose practical limits.
    Expect a few kilobytes to be safe; hundreds of kilobytes is where things get flaky.
  • Usability: if your tune is measured in minutes, you’ve already lost. Don’t be that.

If you want to test where the parser chokes, do it in a disposable VM, never on production hardware.
If you’re feeling brave, you can even audit the GRUB source for buffer sizes in your specific version. 🧩


⚙️ How to Make It Sing

Edit /etc/default/grub and add a line like this:

GRUB_INIT_TUNE="480 440 1 494 1 523 1  587 1  659 3"

Then rebuild your config:

sudo update-grub

Reboot, and bask in the glory of your new startup sound.
Your BIOS will literally play you in. 🎶


💡 Final Thoughts

GRUB_INIT_TUNE is the operating-system equivalent of a ringtone for your toaster:
ridiculously low fidelity, disproportionately satisfying,
and a perfect tiny place to inject personality into an otherwise beige boot.

Use it for a smile, not for sabotage.

And just when I thought I’d been all clever reverse-engineering GRUB beeps myself…
I discovered that someone already built a web-based GRUB tune tester!
👉 https://breadmaker.github.io/grub-tune-tester/

Yes, you can compose and preview tunes right in your browser —
no need to sacrifice your system to the gods of early boot audio.
It’s surprisingly slick.

Even better, there’s a small but lively community posting their GRUB masterpieces on Reddit and other forums.
From Mario theme beeps to Doom startup riffs, there’s something both geeky and glorious about it.
You’ll find everything from tasteful minimalist dings to full-on “someone please stop them” anthems. 🎮🎶

Boot loud, boot proud — but please boot considerate. 😄🎻💻

October 13, 2025

Dear lazyweb,

At work, we are trying to rotate the GPG signing keys for the Linux packages of the eID middleware.

We created new keys, and they will soon be installed on all Linux machines that have the eid-archive package installed (they were already supposed to be, but we made a mistake).

Running some tests, however, I have a bit of a problem:

[wouter@rhel rpm-gpg]$ sudo rpm --import RPM-GPG-KEY-BEID-RELEASE
[wouter@rhel rpm-gpg]$ sudo rpm --import RPM-GPG-KEY-BEID-RELEASE-2025
fout: RPM-GPG-KEY-BEID-RELEASE-2025: key 1 import failed.
[wouter@rhel rpm-gpg]$ sudo rpm --import RPM-GPG-KEY-BEID-CONTINUOUS

This is on RHEL9.

The only difference between the old keys and the new one, apart of course from the fact that the old one is, well, old, is that the old one uses the RSA algorithm whereas the new one uses ECDSA on the NIST P-384 curve (the same algorithm as the one used by the eID card).

Does RPM not support ECDSA keys? Does anyone know where this is documented?

(Yes, I probably should have tested this before publishing the new key, but this is where we are)

October 08, 2025

As a solo developer, I wear all the hats. 🎩👷‍♂️🎨
That includes the very boring Quality Assurance Hat™ — the one that says “yes, Amedee, you do need to check for trailing whitespace again.”

And honestly? I suck at remembering those little details. I’d rather be building cool stuff than remembering to run Black or fix a missing newline. So I let my robot friend handle it.

That friend is called pre-commit. And it’s the best personal assistant I never hired. 🤖


🧐 What is this thing?

Pre-commit is like a bouncer for your Git repo. Before your code gets into the club (your repo), it gets checked at the door:

“Whoa there — trailing whitespace? Not tonight.”
“Missing a newline at the end? Try again.”
“That YAML looks sketchy, pal.”
“You really just tried to commit a 200MB video file? What is this, Dropbox?”
“Leaking AWS keys now, are we? Security says nope.”
“Commit message says ‘fix’? That’s not a message, that’s a shrug.”

Pre-commit runs a bunch of little scripts called hooks to catch this stuff. You choose which ones to use — it’s modular, like Lego for grown-up devs. 🧱

When I commit, the hooks run. If they don’t like what they see, the commit gets bounced.
No exceptions. No drama. Just “fix it and try again.”

Is it annoying? Yeah, sometimes.
But has it saved my butt from pushing broken or embarrassing code? Way too many times.


🎯 Why I bother (as a hobby dev)

I don’t have teammates yelling at me in code reviews. I am the teammate.
And future-me is very forgetful. 🧓

Pre-commit helps me:

  • 📏 Keep my code consistent
  • 💣 Catch dumb mistakes before they become permanent
  • 🕒 Spend less time cleaning up
  • 💼 Feel a little more “pro” even when I’m hacking on toy projects
  • 🧬 Use it with any language. Even Bash, if you’re that kind of person.

Also, it feels kinda magical when it auto-fixes stuff and the commit just… works.


🛠 Installing it with pipx (because I’m not a barbarian)

I’m not a fan of polluting my Python environment, so I use pipx to keep things tidy. It installs CLI tools globally, but keeps them isolated.
If you don’t have pipx yet:

python3 -m pip install --user pipx
pipx ensurepath

Then install pre-commit like a boss:

pipx install pre-commit

Boom. It’s installed system-wide without polluting your precious virtualenvs. Chef’s kiss. 👨‍🍳💋


📝 Setting it up

Inside my project (usually some weird half-finished script I’ll obsess over for 3 days and then forget for 3 months), I create a file called .pre-commit-config.yaml.

Here’s what mine usually looks like:

repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files

  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.28.0
    hooks:
      - id: gitleaks

  - repo: https://github.com/jorisroovers/gitlint
    rev: v0.19.1
    hooks:
      - id: gitlint

  - repo: https://gitlab.com/vojko.pribudic.foss/pre-commit-update
    rev: v0.8.0
    hooks:
      - id: pre-commit-update

🧙‍♂️ What this pre-commit config actually does

You’re not just tossing some YAML in your repo and calling it a day. This thing pulls together a full-on code hygiene crew — the kind that shows up uninvited, scrubs your mess, locks up your secrets, and judges your commit messages like it’s their job. Because it is.

📦 pre-commit-hooks (v5.0.0)

These are the basics — the unglamorous chores that keep your repo from turning into a dumpster fire. Think lint roller, vacuum, and passive-aggressive IKEA manual rolled into one.

  • trailing-whitespace:
    🚫 No more forgotten spaces at the end of lines. The silent killers of clean diffs.
  • end-of-file-fixer:
    👨‍⚕️ Adds a newline at the end of each file. Why? Because some tools (and nerds) get cranky if it’s missing.
  • check-yaml:
    🧪 Validates your YAML syntax. No more “why isn’t my config working?” only to discover you had an extra space somewhere.
  • check-added-large-files:
    🚨 Stops you from accidentally committing that 500MB cat video or .sqlite dump. Saves your repo. Saves your dignity.

🔐 gitleaks (v8.28.0)

Scans your code for secrets — API keys, passwords, tokens you really shouldn’t be committing.
Because we’ve all accidentally pushed our .env file at some point. (Don’t lie.)

✍ gitlint (v0.19.1)

Enforces good commit message style — like limiting subject line length, capitalizing properly, and avoiding messages like “asdf”.
Great if you’re trying to look like a serious dev, even when you’re mostly committing bugfixes at 2AM.

🔁 pre-commit-update (v0.8.0)

The responsible adult in the room. Automatically bumps your hook versions to the latest stable ones. No more living on ancient plugin versions.

🧼 In summary

This setup covers:

  • ✅ Basic file hygiene (whitespace, newlines, YAML, large files)
  • 🔒 Secret detection
  • ✉ Commit message quality
  • 🆙 Keeping your hooks fresh

You can add more later, like linters specific to your language of choice — think of this as your “minimum viable cleanliness.”

🧩 What else can it do?

There are hundreds of hooks. Some I’ve used, some I’ve just admired from afar:

  • black is a Python code formatter that says: “Shhh, I know better.”
  • flake8 finds bugs, smells, and style issues in Python.
  • isort sorts your imports so you don’t have to.
  • eslint for all you JavaScript kids.
  • shellcheck for Bash scripts.
  • … or write your own custom one-liner hook!

You can browse tons of them at: https://pre-commit.com/hooks.html
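And that custom one-liner hook is less scary than it sounds. A hypothetical example (the hook id and regex are my invention) using pre-commit's repo: local with the built-in pygrep language, so you don't even need a script:

```yaml
  - repo: local
    hooks:
      - id: no-stray-breakpoints        # made-up id, purely illustrative
        name: Block stray breakpoint() calls
        language: pygrep                 # fail the commit if the regex matches
        entry: 'breakpoint\(\)'
        types: [python]
```

Append it to the repos: list above and pre-commit will bounce any commit that tries to sneak a leftover breakpoint() into your Python files.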


🧙‍♀️ Make Git do your bidding

To hook it all into Git:

pre-commit install

Now every time you commit, your code gets a spa treatment before it enters version control. 💅

Wanna retroactively clean up the whole repo? Go ahead:

pre-commit run --all-files

You’ll feel better. I promise.


🎯 TL;DR

Pre-commit is a must-have.
It’s like brushing your teeth before a date: it’s fast, polite, and avoids awkward moments later. 🪥💋
If you haven’t tried it yet: do it. Your future self (and your Git history, and your date) will thank you. 🙏

Use pipx to install it globally.
Add a .pre-commit-config.yaml.
Install the Git hook.
Enjoy cleaner commits, fewer review comments — and a commit history you’re not embarrassed to bring home to your parents. 😌💍

And if it ever annoys you too much?
You can always disable it… like cancelling the date but still showing up in their Instagram story. 😈💔

git commit --no-verify

Want help writing your first config? Or customizing it for Python, Bash, JavaScript, Kotlin, or your one-man-band side project? I’ve been there. Ask away!

October 02, 2025

Proposals for stands for FOSDEM 2026 can now be submitted! FOSDEM 2026 will take place at the ULB on the 31st of January and 1st of February 2026. As has become traditional, we offer free and open source projects a stand to display their work to the audience. You can share information, demo software, interact with your users and developers, give away goodies, sell merchandise or accept donations. All is possible! We offer you:

  • One table (180x80cm) with a set of chairs and a power socket
  • Fast wireless internet access

You can choose if you want the spot for the…

October 01, 2025

Sometimes Linux life is bliss. I have my terminal, my editor, my tools, and Steam games that run natively. For nearly four years, I didn’t touch Windows once — and I didn’t miss it.

And then Fortnite happened.

My girlfriend Enya and her wife Kyra got hooked, and naturally I wanted to join them. But Fortnite refuses to run on Linux — apparently some copy-protection magic that digs into the Windows kernel, according to Reddit (so I don’t know if it’s true). It’s rare these days for a game to be Windows-only, but rare enough to shatter my Linux-only bubble. Suddenly, resurrecting Windows wasn’t a chore anymore; it was a quest for polyamorous Battle Royale glory. 🕹

My Windows 11 partition had been hibernating since November 2021, quietly gathering dust and updates in a forgotten corner of the disk. Why did it stop working back then? I honestly don't remember, but apparently I had blogged about it. I hadn't cared. Until now.


The Awakening – Peeking Into the UEFI Abyss 🐧

I started my journey with my usual tools: efibootmgr and update-grub on Ubuntu. I wanted to see what the firmware thought was bootable:

sudo efibootmgr

Output:

BootCurrent: 0001
Timeout: 1 seconds
BootOrder: 0001,0000
Boot0000* Windows Boot Manager ...
Boot0001* Ubuntu ...

At first glance, everything seemed fine. Ubuntu booted as usual. Windows… did not. It didn’t even show up in the GRUB boot menu. A little disappointing—but not unexpected, given that it hadn’t been touched in years. 😬

I knew the firmware knew about Windows—but the OS itself refused to wake up.


The Hidden Enemy – Why os-prober Was Disabled ⚙

I soon learned that recent Ubuntu versions disable os-prober by default. This is partly to speed up boot and partly to avoid probing unknown partitions automatically, which could theoretically be a security risk.

I re-enabled it in /etc/default/grub:

GRUB_DISABLE_OS_PROBER=false

Then ran:

sudo update-grub

Even after this tweak, Windows still didn’t appear in the GRUB menu.


The Manual Attempt – GRUB to the Rescue ✍

Determined, I added a manual GRUB entry in /etc/grub.d/40_custom:

menuentry "Windows" {
    insmod part_gpt
    insmod fat
    insmod chain
    search --no-floppy --fs-uuid --set=root 99C1-B96E
    chainloader /EFI/Microsoft/Boot/bootmgfw.efi
}

How I found the EFI partition UUID:

sudo blkid | grep EFI

Result: UUID="99C1-B96E"

Ran sudo update-grub… Windows showed up in GRUB! But clicking it? Nothing.

At this stage, Windows still wouldn’t boot. The ghost remained untouchable.


The Missing File – Hunt for bootmgfw.efi 🗂

The culprit? bootmgfw.efi itself was gone. My chainloader had nothing to point to.

I mounted the NTFS Windows partition (at /home/amedee/windows) and searched for the missing EFI file:

sudo find /home/amedee/windows/ -type f -name "bootmgfw.efi"
/home/amedee/windows/Windows/Boot/EFI/bootmgfw.efi

The EFI file was hidden away, but thankfully intact. I copied it into the proper EFI directory:

sudo cp /home/amedee/windows/Windows/Boot/EFI/bootmgfw.efi /boot/efi/EFI/Microsoft/Boot/

After a final sudo update-grub, Windows appeared automatically in the GRUB menu. Finally, clicking the entry actually booted Windows. Victory! 🥳


Four Years of Sleeping Giants 🕰

Booting Windows after four years was like opening a time capsule. I was greeted with thousands of updates, drivers, software installations, and of course, the installation of Fortnite itself. It took hours, but it was worth it. The old system came back to life.

Every “update complete” message was a heartbeat closer to joining Enya and Kyra in the Battle Royale.


The GRUB Disappearance – Enter Ventoy 🔧

After celebrating Windows resurrection, I rebooted… and panic struck.

The GRUB menu had vanished. My system booted straight into Windows, leaving me without access to Linux. How could I escape?

I grabbed my trusty Ventoy USB stick (the same one I had used for performance tests months ago) and booted it in UEFI mode. Once in the live environment, I inspected the boot entries:

sudo efibootmgr -v

Output:

BootCurrent: 0002
Timeout: 1 seconds
BootOrder: 0002,0000,0001
Boot0000* Windows Boot Manager ...
Boot0001* Ubuntu ...
Boot0002* USB Ventoy ...

To restore Ubuntu to the top of the boot order:

sudo efibootmgr -o 0001,0000

Console output:

BootOrder changed from 0002,0000,0001 to 0001,0000

After rebooting, the GRUB menu reappeared, listing both Ubuntu and Windows. I could finally choose my OS again without further fiddling. 💪


A Word on Secure Boot and Signed Kernels 🔐

Since we’re talking bootloaders: Secure Boot only allows EFI binaries signed with a trusted key to execute. Ubuntu Desktop ships with signed kernels and a signed shim so it boots fine out of the box. If you build your own kernel or use unsigned modules, you’ll either need to sign them yourself or disable Secure Boot in firmware.


Diagram of the Boot Flow 🖼

Here’s a visual representation of the boot process after the fix:

flowchart TD
    UEFI["⚙ UEFI Firmware BootOrder:<br/>0001 (Ubuntu) →<br/>0000 (Windows)<br/>(BootCurrent: 0001)"]

    subgraph UbuntuEFI["shimx64.efi"]
        GRUB["📂 GRUB menu"]
        LINUX["🐧 Ubuntu Linux<br/>kernel + initrd"]
        CHAINLOAD["🪟 Windows<br/>bootmgfw.efi"]
    end

    subgraph WindowsEFI["bootmgfw.efi"]
        WBM["🪟 Windows Boot Manager"]
        WINOS["💻 Windows 11<br/>(C:)"]
    end

    UEFI --> UbuntuEFI
    GRUB -->|boots| LINUX
    GRUB -.->|chainloads| CHAINLOAD
    UEFI --> WindowsEFI
    WBM -->|boots| WINOS

From the GRUB menu, the Windows entry chainloads bootmgfw.efi, which then points to the Windows Boot Manager, finally booting Windows itself.


First Battle Royale 🎮✨

After all the technical drama and late-night troubleshooting, I finally joined Enya and Kyra in Fortnite.

I had never played Fortnite before, but my FPS experience (Borderlands hype, anyone?) and PUBG knowledge from Viva La Dirt League on YouTube gave me a fighting chance.

We won our first Battle Royale together! 🏆💥 The sense of triumph was surreal—after resurrecting a four-year-old Windows partition, surviving driver hell, and finally joining the game, victory felt glorious.


TL;DR: Quick Repair Steps ⚡

  1. Enable os-prober in /etc/default/grub.
  2. If Windows isn’t detected, try a manual GRUB entry.
  3. If boot fails, copy bootmgfw.efi from the NTFS Windows partition to /boot/efi/EFI/Microsoft/Boot/.
  4. Run sudo update-grub.
  5. If GRUB disappears after booting Windows, boot a Live USB (UEFI mode) and adjust efibootmgr to set Ubuntu first.
  6. Reboot and enjoy both OSes. 🎉

This little adventure taught me more about GRUB, UEFI, and EFI files than I ever wanted to know, but it was worth it. Most importantly, I got to join my polycule in a Fortnite victory and prove that even a four-year-old Windows partition can rise again! 💖🎮

September 30, 2025

This command on a Raspberry Pi 3:

stty -F /dev/ttyACM0 ispeed 9600 ospeed 9600 raw

Resulted in this error:

stty: /dev/ttyACM0: Inappropriate ioctl for device

This happened after the Pi's SD card (holding the OS) entered failsafe mode, which makes it read-only. I used dd to copy the 16GB SD card to a new 128GB one:

dd if=/dev/sdd of=/srv/iso/pi_20250928.iso
dd if=/srv/iso/pi_20250928.iso of=/dev/sdd

The solution was to stop the module, remove the device, and start it again:

# modprobe -r cdc_acm
# rm -rf /dev/ttyACM0
# modprobe cdc_acm

... as the stale device state was probably copied along by dd from the read-only SD card.


Epilogue: the 16GB SD card was ten to fifteen years old. The Pi expanded the new card to 118GB, not the 128GB advertised. Relevant XKCD: https://xkcd.com/394/

September 24, 2025

We need to talk.

You and I have been together for a long time. I wrote blog posts, you provided a place to share them. For years that worked. But lately you’ve been treating my posts like spam — my own blog links! Apparently linking to an external site on my Page is now a cardinal sin unless I pay to “boost” it.
And it’s not just Facebook. Threads — another Meta platform — also keeps taking down my blog links.

So this is goodbye… at least for my Facebook Page.
I’m not deleting my personal Profile. I’ll still pop in to see what events are coming up, and to look at photos after the balfolk and festivals. But our Page-posting days are over.

Here’s why:

  • Your algorithm is a slot machine. What used to be “share and be seen” has become “share, pray, and maybe pay.” I’d rather drop coins in an actual jukebox than feed a zuckerbot just so friends can see my work.
  • Talking into a digital void. Posting to my Page now feels like performing in an empty theatre while an usher whispers “boost post?” The real conversations happen by email, on Mastodon, or — imagine — in real life.
  • Privacy, ads, and that creepy feeling. Every login is a reminder that Facebook isn’t free. I’m paying with my data to scroll past ads for things I only muttered near my phone. That’s not the backdrop I want for my writing.
  • The algorithm ate my audience. Remember when following a Page meant seeing its posts? Cute era. Now everything’s at the mercy of an opaque feed.
  • My house, my rules. I built amedee.be to be my own little corner of the web. No arbitrary takedowns, no algorithmic chokehold, no random “spam” labels. Subscribe by RSS or email and you’ll get my posts in the order I publish them — not the order an algorithm thinks you should.
  • Better energy elsewhere. Time spent arm-wrestling Facebook is time I could spend writing, playing the nyckelharpa, or dancing a Swedish polska at a balfolk. All of that beats arguing with a zuckerbot.

From now on, if people actually want to read what I write, they’ll find me at amedee.be, via RSS, email, or Mastodon. No algorithms, no takedowns, no mystery boxes.

So yes, we’ll still bump into each other when I check events or browse photos. But the part where I dutifully feed you my blog posts? That’s over.

With zero boosted posts and one very happy nyckelharpa,
Amedee

September 20, 2025

Proposals for developer rooms and main track talks for FOSDEM 2026 can now be submitted! FOSDEM offers open source and free software developers a place to meet, share ideas and collaborate. Renowned for being highly developer-oriented, the event brings together some 8000+ geeks from all over the world. The twenty-sixth edition will take place on Saturday 31st January and Sunday 1st February 2026 at the usual location, ULB Campus Solbosch in Brussels. Developer Rooms Developer rooms are assigned to self-organising groups to work together on open source and free software projects, to discuss topics relevant to a broader subset of…

September 17, 2025

It's that time again! FOSDEM 2026 will take place on Saturday 31st of January and Sunday 1st of February 2026. Further details and calls for participation will be announced in the coming days and weeks.

Mood: Slightly annoyed at CI pipelines 🧨
CI runs shouldn’t feel like molasses. Here’s how I got Ansible to stop downloading the internet. You’re welcome.


Let’s get one thing straight: nobody likes waiting on CI.
Not you. Not me. Not even the coffee you brewed while waiting for Galaxy roles to install — again.

So I said “nope” and made it snappy. Enter: GitHub Actions Cache + Ansible + a generous helping of grit and retries.

🧙‍♂️ Why cache your Ansible Galaxy installs?

Because time is money, and your CI shouldn’t feel like it’s stuck in dial-up hell.
If you’ve ever screamed internally watching community.general get re-downloaded for the 73rd time this month — same, buddy, same.

The fix? Cache that madness. Save your roles and collections once, and reuse like a boss.

💾 The basics: caching 101

Here’s the money snippet:

path: .ansible/
key: ansible-deps-${{ hashFiles('requirements.yml') }}
restore-keys: |
  ansible-deps-

🧠 Translation:

  • Store everything Ansible installs in .ansible/
  • Cache key changes when requirements.yml changes — nice and deterministic
  • If the exact match doesn’t exist, fall back to the latest vaguely-similar key

Result? Fast pipelines. Happy devs. Fewer rage-tweets.

🔁 Retry like you mean it

Let’s face it: ansible-galaxy has… moods.

Sometimes Galaxy API is down. Sometimes it’s just bored. So instead of throwing a tantrum, I taught it patience:

success=false
for i in {1..5}; do
  if ansible-galaxy install -vv -r requirements.yml; then
    success=true
    break
  else
    echo "Galaxy is being dramatic. Retrying in $((i * 10)) seconds…" >&2
    sleep $((i * 10))
  fi
done
# Five strikes and we actually fail, instead of pretending everything is fine:
"$success" || exit 1

That’s five attempts, with increasing delays.
💬 “You good now, Galaxy? You sure? Because I’ve got YAML to lint.”

⚠ The catch (a.k.a. cache wars)

Here’s where things get spicy:

actions/cache only saves when a job finishes successfully.

So if two jobs try to save the exact same cache at the same time?
💥 Boom. Collision. One wins. The other walks away salty:

Unable to reserve cache with key ansible-deps-...,
another job may be creating this cache.

Rude.

🧊 Fix: preload the cache in a separate job

The solution is elegant:
Warm-up job. One that only does Galaxy installs and saves the cache. All your other jobs just consume it. Zero drama. Maximum speed. 💃
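A minimal sketch of that split, reusing the cache snippet from above. This is illustrative, not my exact workflow: the job names, the `actions/cache@v4` pin, and the `ansible-lint` consumer step are assumptions you'd adapt to your own pipeline:

```yaml
jobs:
  warm-cache:              # the only job that ever *saves* the cache
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          path: .ansible/
          key: ansible-deps-${{ hashFiles('requirements.yml') }}
          restore-keys: |
            ansible-deps-
      - run: ansible-galaxy install -r requirements.yml

  lint:                    # consumers restore only, so no save collisions
    needs: warm-cache
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache/restore@v4
        with:
          path: .ansible/
          key: ansible-deps-${{ hashFiles('requirements.yml') }}
      - run: ansible-lint
```

The `actions/cache/restore` sub-action is the key trick: consumer jobs read the cache without ever trying to reserve the save slot.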

🪄 Tempted to symlink instead of copy?

Yeah, I thought about it too.
“But what if we symlink .ansible/ and skip the copy?”

Nah. Not worth the brainpower. Just cache the thing directly.
✅ It works. 🧼 It’s clean. 😌 You sleep better.

🧠 Pro tips

  • Use the hash of requirements.yml as your cache key. Trust me.
  • Add a fallback prefix like ansible-deps- so you’re never left cold.
  • Don’t overthink it. Let the cache work for you, not the other way around.

✨ TL;DR

  • ✅ GitHub Actions cache = fast pipelines
  • ✅ Smart keys based on requirements.yml = consistency
  • ✅ Retry loops = less flakiness
  • ✅ Preload job = no more cache collisions
  • ❌ Re-downloading Galaxy junk every time = madness

🔥 Go forth and cache like a pro.

Got better tricks? Hit me up on Mastodon and show me your CI magic.
And remember: Friends don’t let friends wait on Galaxy.

💚 Peace, love, and fewer ansible-galaxy downloads.

September 14, 2025

lookat 2.1.0

Lookat 2.1.0 is the latest stable release of Lookat/Bekijk, a user-friendly Unix file browser/viewer that supports colored man pages.

The focus of the 2.1.0 release is to add ANSI Color support.



News

14 Sep 2025 Lookat 2.1.0 Released

Lookat / Bekijk 2.1.0rc2 has been released as Lookat / Bekijk 2.1.0

3 Aug 2025 Lookat 2.1.0rc2 Released

Lookat 2.1.0rc2 is the second release candidate of Lookat 2.1.0

ChangeLog

Lookat / Bekijk 2.1.0rc2
  • Corrected italic color
  • Don’t reset the search offset when cursor mode is enabled
  • Renamed strsize to charsize ( ansi_strsize -> ansi_charsize, utf8_strsize -> utf8_charsize) to be less confusing
  • Support for multiple ansi streams in ansi_utf8_strlen()
  • Update default color theme to green for this release
  • Update manpages & documentation
  • Reorganized contrib directory
    • Moved ci/cd related file from contrib/* to contrib/cicd
    • Moved debian dir to contrib/dist
    • Moved support script to contrib/scripts

Lookat 2.1.0 is available at:

Have fun!

September 11, 2025

I once read a blurb about the benefits of bureaucracy, and how it is intended to resist political influence, autocratic leadership, priority-of-the-day decision-making, siloed views, and other things that we generally see as "Bad Things™️". I'm sad that I can't recall where it was, but its message was similar to what The Benefits Of Bureaucracy: How I Learned To Stop Worrying And Love Red Tape by Rita McGrath presents. When I read it, I was strangely supportive of the message, because I am very much confronted with, and perhaps also often the cause of, bureaucracy and governance-related deliverables at the company I work for.

Bureaucracy and (hyper)governance

Bureaucracy, or governance in general, often leaves a bad taste in the mouth of whoever dares to speak about it, though. And I fully agree that hypergovernance or hyperbureaucracy puts too much burden on an organization: the benefits are no longer visible, and people's creativity and innovation are stifled.

Hypergovernance is a bad thing indeed, and often comes up in the news: companies loathing the so-called overregulation by the European Union, for instance, band together in action groups to ask for deregulation. A recent topic here was Europe's attempt to move towards a more sustainable environment, given the lack of attention to sustainability by the various industries and governments. The premise for regulating this was the observation that merely guiding and asking doesn't work: sustainability is a long-term goal, yet most industries and governments focus on short-term benefits.

The need to simplify regulation, and the reaction to the bureaucracy required to align with Europe's reporting expectations, led the European Commission to a simplification package it calls the Omnibus package.

I think that is the right way forward, not just for this particular case (I don't know enough about ESG to be a useful resource on that), but also within regulated industries and companies where bureaucracy is considered to dampen progress and efficiency. Simplification and optimization are key here, not just tearing things down. In the Capability Maturity Model, a process is considered efficient if it includes deliberate process optimization and improvement. So why not deliberately optimize and improve? Evaluate and steer?

Benefits of bureaucracy

It would be bad if bureaucracy itself were considered a negative trait of any organization. Many of the benefits of bureaucracy I fully endorse myself.

Standardization, where procedures and policies are created to ensure consistency in operations and decision-making. Without standardization, you gain inefficiencies, not benefits. If a process is considered too daunting, standardization might be key to improve on it.

Accountability, where it is made clear who does what. Holding people or teams accountable is not a judgement, but a balance of expectations and responsibilities. If handled positively, accountability is also an expression of expertise, an endorsement of what you are or can do.

Risk management, which is coincidentally the most active one in my domain (the Digital Operational Resilience Act has a very strong focus on risk management), has a primary focus on reducing the likelihood of misconduct and errors. Regulatory requirements and internal controls are not the goal, but a method.

Efficiency, by streamlining processes through established protocols and procedures. Sure, new approaches and things come along, but after the two-hundredth request to do or set up something only to realize it still takes 50 person-days... well, perhaps you should focus on streamlining the process and introduce some bureaucracy to help yourself out.

Transparency, promoting clear communication and documentation, as well as insights into why something is done. This improves trust among the teams and people.

In a world where despotic leadership exists, you will find that a well-functioning bureaucracy can be an inhibitor of overly aggressive change. That can frustrate the would-be autocrat (if they were truly an autocrat, there would be no bureaucracy), but with the right support, it can indicate and motivate why this resistance exists. And if the change is for the good: well, bureaucracy even has procedures for change.

Bureaucracy also prevents islands and isolated decision-making. People demanding all the budgets for themselves because they find their ideas the only ones worth working on (every company has these) will also find that the bureaucracy is there to balance budgets and allocate resources to the projects that benefit the company as a whole, and not just the four people you just onboarded onto your team and gave MacBooks...

Bureaucracy isn't bad, and some people prefer to have strict rules or procedures. Resisting change is a human behavior, but promoting anarchy is also not the way forward. Instead, nurture a culture of continuous improvement: be able to point out when things go beyond their reach, and learn about the reasoning and motivation that others bring up. Those in favor of bureacracy will see this as a maturity increase, and those that are affected by over-regulation will see this as an improvement.

We can all strive to remain in a bureaucracy and be happy with it.

Feedback? Comments? Don't hesitate to get in touch on Mastodon.

September 10, 2025

There comes a time in every developer’s life when you just know a certain commit existed. You remember its hash: deadbeef1234. You remember what it did. You know it was important. And yet, when you go looking for it…

💥 fatal: unable to read tree <deadbeef1234>

Great. Git has ghosted you.

That was me today. All I had was a lonely commit hash. The branch that once pointed to it? Deleted. The local clone that once had it? Gone in a heroic but ill-fated attempt to save disk space. And GitHub? Pretending like it never happened. Typical.

🪦 Act I: The Naïve Clone

“Let’s just clone the repo and check out the commit,” I thought. Spoiler alert: that’s not how Git works.

git clone --no-checkout https://github.com/user/repo.git
cd repo
git fetch --all
git checkout deadbeef1234

🧨 fatal: unable to read tree 'deadbeef1234'

Thanks Git. Very cool. Apparently, if no ref points to a commit, GitHub doesn’t hand it out with the rest of the toys. It’s like showing up to a party and being told your friend never existed.

🧪 Act II: The Desperate fsck

Surely it’s still in there somewhere? Let’s dig through the guts.

git fsck --full --unreachable

Nope. Nothing but the digital equivalent of lint and old bubblegum wrappers.

🕵 Act III: The Final Trick

Then I stumbled across a lesser-known Git dark art:

git fetch origin deadbeef1234

And lo and behold, GitHub replied with a shrug and handed it over like, “Oh, that commit? Why didn’t you just say so?”

Suddenly the commit was in my local repo, fresh as ever, ready to be inspected, praised, and perhaps even resurrected into a new branch:

git checkout -b zombie-branch deadbeef1234

Mission accomplished. The dead walk again.


☠ Moral of the Story

If you’re ever trying to recover a commit from a deleted branch on GitHub:

  1. Cloning alone won’t save you.
  2. git fetch origin <commit> is your secret weapon.
  3. If GitHub has completely deleted the commit from its history, you’re out of luck unless:
    • You have an old local clone
    • Someone forked the repo and kept it
    • CI logs or PR diffs include your precious bits

Otherwise, it’s digital dust.
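The whole séance can even be rehearsed locally. This sketch builds a throwaway origin, deletes the branch, and recovers the commit by hash; the one assumption is `uploadpack.allowAnySHA1InWant`, which GitHub effectively enables server-side and a local origin needs set explicitly:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# A bare "origin" with one commit on a branch we then delete.
git init -q --bare origin.git
git clone -q origin.git work 2>/dev/null
git -C work -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "precious"
sha=$(git -C work rev-parse HEAD)
git -C work push -q origin HEAD:doomed
git -C origin.git branch -D doomed        # ref gone; only the hash remains

# Emulate GitHub's server-side behaviour: serve any object by hash.
git -C origin.git config uploadpack.allowAnySHA1InWant true

# A fresh clone has no trace of the commit...
git clone -q origin.git rescue 2>/dev/null
cd rescue
git cat-file -e "$sha" 2>/dev/null || echo "commit not here yet"

# ...until we ask for it by hash and pin it to a branch.
git fetch -q origin "$sha"
git checkout -q -b zombie-branch "$sha"
git log --oneline -1
```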


🧛 Bonus Tip

Once you’ve resurrected that commit, create a branch immediately. Unreferenced commits are Git’s version of vampires: they disappear without a trace when left in the shadows.

git checkout -b safe-now deadbeef1234

And there you have it. One undead commit, safely reanimated.

September 03, 2025

It has been far too long since I celebrated the most beautiful month in the world with a lovely song. Hence this Septembery “August Moon”, live by Jacob Alon: Watch this video on YouTube.


At some point, I had to admit it: I’ve turned GitHub Issues into a glorified chart gallery.

Let me explain.

Over on my amedee/ansible-servers repository, I have a workflow called workflow-metrics.yml, which runs after every pipeline. It uses yykamei/github-workflows-metrics to generate beautiful charts that show how long my CI pipeline takes to run. Those charts are then posted into a GitHub Issue—one per run.

It’s neat. It’s visual. It’s entirely unnecessary to keep them forever.

The thing is: every time the workflow runs, it creates a new issue and closes the old one. So naturally, I end up with a long, trailing graveyard of “CI Metrics” issues that serve no purpose once they’re a few weeks old.

Cue the digital broom. 🧹


Enter cleanup-closed-issues.yml

To avoid hoarding useless closed issues like some kind of GitHub raccoon, I created a scheduled workflow that runs every Monday at 3:00 AM UTC and deletes the cruft:

schedule:
  - cron: '0 3 * * 1' # Every Monday at 03:00 UTC

This workflow:

  • Keeps at least 6 closed issues (just in case I want to peek at recent metrics).
  • Keeps issues that were closed less than 30 days ago.
  • Deletes everything else—quietly, efficiently, and without breaking a sweat.

It’s also configurable when triggered manually, with inputs for dry_run, days_to_keep, and min_issues_to_keep. So I can preview deletions before committing them, or tweak the retention period as needed.


📂 Complete Source Code for the Cleanup Workflow

name: 🧹 Cleanup Closed Issues

on:
  schedule:
    - cron: '0 3 * * 1' # Runs every Monday at 03:00 UTC
  workflow_dispatch:
    inputs:
      dry_run:
        description: "Enable dry run mode (preview deletions, no actual delete)"
        required: false
        default: "false"
        type: choice
        options:
          - "true"
          - "false"
      days_to_keep:
        description: "Number of days to retain closed issues"
        required: false
        default: "30"
        type: string
      min_issues_to_keep:
        description: "Minimum number of closed issues to keep"
        required: false
        default: "6"
        type: string

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

permissions:
  issues: write

jobs:
  cleanup:
    runs-on: ubuntu-latest

    steps:
      - name: Install GitHub CLI
        run: sudo apt-get install --yes gh

      - name: Delete old closed issues
        env:
          GH_TOKEN: ${{ secrets.GH_FINEGRAINED_PAT }}
          DRY_RUN: ${{ github.event.inputs.dry_run || 'false' }}
          DAYS_TO_KEEP: ${{ github.event.inputs.days_to_keep || '30' }}
          MIN_ISSUES_TO_KEEP: ${{ github.event.inputs.min_issues_to_keep || '6' }}
          REPO: ${{ github.repository }}
        run: |
          NOW=$(date -u +%s)
          THRESHOLD_DATE=$(date -u -d "${DAYS_TO_KEEP} days ago" +%s)
          echo "Only consider issues older than ${THRESHOLD_DATE}"

          echo "::group::Checking GitHub API Rate Limits..."
          RATE_LIMIT=$(gh api /rate_limit --jq '.rate.remaining')
          echo "Remaining API requests: ${RATE_LIMIT}"
          if [[ "${RATE_LIMIT}" -lt 10 ]]; then
            echo "⚠️ Low API limit detected. Sleeping for a while..."
            sleep 60
          fi
          echo "::endgroup::"

          echo "Fetching ALL closed issues from ${REPO}..."
          CLOSED_ISSUES=$(gh issue list --repo "${REPO}" --state closed --limit 1000 --json number,closedAt)

          if [ "${CLOSED_ISSUES}" = "[]" ]; then
            echo "✅ No closed issues found. Exiting."
            exit 0
          fi

          ISSUES_TO_DELETE=$(echo "${CLOSED_ISSUES}" | jq -r \
            --argjson now "${NOW}" \
            --argjson limit "${MIN_ISSUES_TO_KEEP}" \
            --argjson threshold "${THRESHOLD_DATE}" '
              .[:-(if length < $limit then 0 else $limit end)]
              | map(select(
                  (.closedAt | type == "string") and
                  ((.closedAt | fromdateiso8601) < $threshold)
                ))
              | .[].number
            ' || echo "")

          if [ -z "${ISSUES_TO_DELETE}" ]; then
            echo "✅ No issues to delete. Exiting."
            exit 0
          fi

          echo "::group::Issues to delete:"
          echo "${ISSUES_TO_DELETE}"
          echo "::endgroup::"

          if [ "${DRY_RUN}" = "true" ]; then
            echo "🛑 DRY RUN ENABLED: Issues will NOT be deleted."
            exit 0
          fi

          echo "⏳ Deleting issues..."
          echo "${ISSUES_TO_DELETE}" \
            | xargs -I {} -P 5 gh issue delete "{}" --repo "${REPO}" --yes

          DELETED_COUNT=$(echo "${ISSUES_TO_DELETE}" | wc -l)
          REMAINING_ISSUES=$(gh issue list --repo "${REPO}" --state closed --limit 100 | wc -l)

          echo "::group::✅ Issue cleanup completed!"
          echo "📌 Deleted Issues: ${DELETED_COUNT}"
          echo "📌 Remaining Closed Issues: ${REMAINING_ISSUES}"
          echo "::endgroup::"

          {
            echo "### 🗑️ GitHub Issue Cleanup Summary"
            echo "- **Deleted Issues**: ${DELETED_COUNT}"
            echo "- **Remaining Closed Issues**: ${REMAINING_ISSUES}"
          } >> "$GITHUB_STEP_SUMMARY"


🛠️ Technical Design Choices Behind the Cleanup Workflow

Cleaning up old GitHub issues may seem trivial, but doing it well requires a few careful decisions. Here’s why I built the workflow the way I did:

Why GitHub CLI (gh)?

While I could have used raw REST API calls or GraphQL, the GitHub CLI (gh) provides a nice balance of power and simplicity:

  • It handles authentication and pagination under the hood.
  • Supports JSON output and filtering directly with --json and --jq.
  • Provides convenient commands like gh issue list and gh issue delete that make the script readable.
  • Comes pre-installed on GitHub runners or can be installed easily.

Example fetching closed issues:

gh issue list --repo "$REPO" --state closed --limit 1000 --json number,closedAt

No messy headers or tokens, just straightforward commands.

Filtering with jq

I use jq to:

  • Retain a minimum number of issues to keep (min_issues_to_keep).
  • Keep issues closed more recently than the retention period (days_to_keep).
  • Parse and compare issue closed timestamps with precision.
  • Leave pull requests untouched (in practice gh issue list returns only issues, never pull requests, so no extra pull_request check is needed).

The jq filter looks like this:

jq -r --argjson now "$NOW" --argjson limit "$MIN_ISSUES_TO_KEEP" --argjson threshold "$THRESHOLD_DATE" '
  .[:-(if length < $limit then 0 else $limit end)]
  | map(select(
      (.closedAt | type == "string") and
      ((.closedAt | fromdateiso8601) < $threshold)
    ))
  | .[].number
'
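The epoch arithmetic feeding `$threshold` is easy to sanity-check on its own (GNU `date`, as on the `ubuntu-latest` runner; the two sample ages are made up):

```shell
# Recompute the retention threshold the way the workflow does,
# then classify two hypothetical closedAt ages against it.
DAYS_TO_KEEP=30
THRESHOLD_DATE=$(date -u -d "${DAYS_TO_KEEP} days ago" +%s)

old=$(date -u -d "45 days ago" +%s)      # closed 45 days ago: past the threshold
recent=$(date -u -d "5 days ago" +%s)    # closed 5 days ago: inside the window

[ "$old" -lt "$THRESHOLD_DATE" ] && echo "45 days old: delete candidate"
[ "$recent" -ge "$THRESHOLD_DATE" ] && echo "5 days old: keep"
```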

Secure Authentication with Fine-Grained PAT

Because deleting issues is a destructive operation, the workflow uses a Fine-Grained Personal Access Token (PAT) with the narrowest possible scopes:

  • Issues: Read and Write
  • Limited to the repository in question

The token is securely stored as a GitHub Secret (GH_FINEGRAINED_PAT).

Note: Pull requests are not deleted because they are filtered out and the CLI won’t delete PRs via the issues API.

Dry Run for Safety

Before deleting anything, I can run the workflow in dry_run mode to preview what would be deleted:

inputs:
  dry_run:
    description: "Enable dry run mode (preview deletions, no actual delete)"
    default: "false"

This lets me double-check without risking accidental data loss.

Parallel Deletion

Deletion happens in parallel to speed things up:

echo "$ISSUES_TO_DELETE" | xargs -I {} -P 5 gh issue delete "{}" --repo "$REPO" --yes

Up to 5 deletions run concurrently — handy when cleaning dozens of old issues.
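The same fan-out pattern can be tried safely with an echo standing in for `gh issue delete` (the issue numbers here are made up):

```shell
# Up to five workers (-P 5); xargs substitutes each issue number for {}.
# sort makes the output deterministic, since workers finish in any order.
deleted=$(printf '%s\n' 101 102 103 \
  | xargs -I {} -P 5 sh -c 'echo "would delete #{}"' \
  | sort)
echo "$deleted"
```

Note the real workflow passes `{}` as an argument to `gh` directly, which is the safer shape; the `sh -c` here is only to keep the demo self-contained.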

User-Friendly Output

The workflow uses GitHub Actions’ logging groups and step summaries to give a clean, collapsible UI:

echo "::group::Issues to delete:"
echo "$ISSUES_TO_DELETE"
echo "::endgroup::"

And a markdown summary is generated for quick reference in the Actions UI.


Why Bother?

I’m not deleting old issues because of disk space or API limits — GitHub doesn’t charge for that. It’s about:

  • Reducing clutter so my issue list stays manageable.
  • Making it easier to find recent, relevant information.
  • Automating maintenance to free my brain for other things.
  • Keeping my tooling neat and tidy, which is its own kind of joy.

Steal It, Adapt It, Use It

If you’re generating temporary issues or ephemeral data in GitHub Issues, consider using a cleanup workflow like this one.

It’s simple, secure, and effective.

Because sometimes, good housekeeping is the best feature.


🧼✨ Happy coding (and cleaning)!

September 02, 2025

Recently, I published an article related to MRS (MySQL REST Service), which we released as a lab. I wanted to explore how I could use this new cool feature within an application. I decided to create an application from scratch using Helidon. The application uses the Sakila sample database. Why Helidon? I decided to use […]

August 28, 2025

TLDR: A software engineer with a dream undertook the world’s most advanced personal trainer course. Despite being the odd duck amongst professional athletes, coaches, and bodybuilders, he graduated top of his class, and is now starting a mission to build a free and open-source fitness platform to power next-gen fitness apps and a Wikipedia-style public service. But he needs your help!

The fitness community & industry seem largely dysfunctional.

  1. Content mess on social media

Social media is flooded with redundant, misleading content: endless variations of the same “how to do ‘some exercise’” videos, repackaged daily by creators chasing relevance, and deceptive clickbait to farm engagement (notable exception: Sean Nalewanyj has been consistently authentic, accurate, and entertaining). Some content is actually great, but it is extremely hard to find, needs curation, and sometimes context.

  2. Selling “programs” and “exercise libraries”

Coaches and vendors sell workout programs and exercise instructions as if they were proprietary “secret sauce”. They aren’t. As much as they would like you to believe otherwise, workout plans and exercise instructions are not copyrightable. Specific expressions of them, such as video demonstrations, are, though there are ways such videos can be legally used by third parties, and I see an opportunity for a win-win model with the creator (more on that later). This is why sellers usually simply repeat what is already widely understood, replicate (possibly old and misguided) instructions, or overcomplicate in an attempt to differentiate. Furthermore, workout programs (or rather, a “rendition” of one) often have limited or no personalization options. The actually important parts, such as adjusting programs across multiple goals (e.g. combining with sports), across time (based on observed progression), or to accommodate injuries, and ensuring consistency with current scientific evidence, require expensive expertise and are often missing from the “product”.

Many exercise libraries exist, but they require royalties. To build a new app (even a non-commercial one) you need to either license a commercial library or create your own. I checked multiple free open-source ones, but let’s just say there are serious quality and legal concerns. Finally, Wikipedia is legal and favorably licensed, but it is too text-based and can’t easily be used by applications.

  3. Sub-optimal AIs

When you ask an LLM for guidance, it sometimes does a pretty good job, often it doesn’t, because:

  • Unclear sources: AIs regurgitate whatever programs they were fed during training, sometimes written by experts, sometimes by amateurs.
  • Output degrades when you need specific personalized advice or when you need adjustments over time which is actually a critical piece for progressing in your fitness journey. Try to make it too custom and it starts hallucinating.
  • Today’s systems are text-based. They may seem cheap today, because vendors are subsidizing the true cost in an attempt to capture the market. But it’s inefficient, inaccurate, and financially unsustainable. It also makes for a very crude user interface.

I believe future AIs will use data models and UIs that are both domain-specific and richer than just text, so we need a library to match. Even “traditional” (non-AI) applications can gain a lot of functionality if they can leverage such structured data.

  4. Closed source days are numbered

Good coaches can bring value via in-person demonstrations, personalization, holding clients accountable and helping them adopt new habits. This unscalable model puts an upper limit on their income. The better-known coaches on social media solve this by launching their own apps; some of these seem actually quite good, but they suffer from the typical downsides that we’ve seen in other software domains, such as vendor lock-in, lack of data ownership, incompatibility across apps, lack of customization, and high fees. We’ve seen how this plays out in devtools, in enterprise, in cloud. According to more and more investment firms, it’s starting to play out in every other industry (just check the OSS Capital portfolio, or see what Andreessen Horowitz, one of the most successful VC funds of all time, has to say on it): software is becoming open source in all markets. It leads to more user-friendly products and is the better way to build more successful businesses. It is a disruptive force. Board the train or be left in the dust.

What am I going to do about it?

I’m an experienced software engineer. I have experience building teams and companies. I’m lucky enough to have a window of time and some budget, but I need to make it count. The first thing I did was educate myself properly on fitness. In 2024–2025 I took the Menno Henselmans Personal Trainer course, the most in-depth, highly accredited, science-based course program for personal trainers that I could find. It was an interesting experience being the lone software developer amongst a group of athletes, coaches and bodybuilders. Earlier this year I graduated Magna Cum Laude, top of class.

Right: one of the best coaches in the world. Left: this guy

Now I am a certified coach who learned from one of the top coaches worldwide, with decades of experience. I can train and coach individuals. But as a software engineer I know that even a small software project can grow to change the world. What Wikipedia did for articles is what I aspire to make for fitness: a free public service comprising information, but more so hands-on tools and applications to put that information into action. Perhaps it can also give industry professionals opportunities to differentiate in more meaningful ways.

To support the applications, we also need:

  • an exercise library
  • an algorithms library (or “tools library” in AI lingo)

I’ve started prototyping both the libraries and some applications on body.build. I hope to grow a project and a community around it that will outlive me. It is therefore open source.

Next-gen applications and workflows

Thus far body.build has:

  • a calorie calculator
  • a weight lifting volume calculator
  • a program builder
  • a crude exercise explorer (prototype)

I have some ideas for more stuff that can be built by leveraging the new libraries (discussed below):

  • a mobile companion app to perform your workouts, have quick access to exercise demonstrations/cues, and log performance. You’ll own your data to do your own analysis, and a personal interest to me is the ability to try different variations or cues (see below), track their results and analyze which work better for you. (note: I’m maintaining a list of awesome OSS health/fitness apps, perhaps an existing one could be reused)
  • detailed analysis of (and comparison between) different workout programs, highlighting different areas being emphasized, estimated “bang for buck” (expecting size and strength gains vs fatigue and time spent)
  • just-in-time workout generation. Imagine an app to which you can say “my legs are sore from yesterday’s hike. In 2 days I will have a soccer match. My availability is 40 min between two meetings and/or 60min this evening at the gym. You know my overall goals, my recent workouts and that I reported elbow tendinitis symptoms in my last strength workout. Now generate me an optimal workout”.

The exercise library

The library should be liberally licensed and not restrict reuse. There is no point trying to “protect” content that has very little copyright protection anyway. I believe that “opening up” to reuse (also commercial) is not only key to a successful project, but also unlocks commercial opportunities for everyone.

The library needs in-depth awareness that goes beyond what apps typically contain, and should include:

  • exercise alternatives and customization options (and their trade-offs)
  • detailed biomechanical data (such as muscle involvement and loading patterns across the range of muscle lengths and different joint movements)

Furthermore, we also need the usual textual descriptions and exercise demonstration videos. Unlike “traditional” libraries that present a singular authoritative view, I find it valuable to clarify what is commonly agreed upon versus where coaches or studies still disagree, and to present an overview of the disagreement with further links to the relevant resources (which could be an Instagram post, a YouTube video or a scientific study). A layman can go with the standard instructions, whereas advanced trainees get an overview of different options and cues, which they can check out in more detail, try, and see what works best for them. No need to scroll social media for random tips; they’re all aggregated and curated in one place.

Our library (liberally licensed) includes the text and links to 3rd-party public content. The 3rd-party content itself is typically subject to stronger limitations, but it can always be linked to, and often also embedded under fair use and the standard YouTube license, at least for free applications. Perhaps one day we’ll have our own content library, but for now we can avoid a lot of work and promote existing creators’ quality content. Win-win.

Body.build today has a prototype of this. I’m gradually adding more and more exercises. Today the library is part of the codebase, but at some point it’ll make more sense to separate it out and use one of the Creative Commons licenses.

The algorithms library

Through the course, I learned about various principles (validated by decades of coaching experience and by scientific research) for constructing optimal training based on input factors; for example, optimal workout volume depends on many factors, including sex, sleep quality, and food intake. Similarly, things like optimal recovery timing or exercise swapping can be calculated using well-understood principles.

Rather than thinking of workout programs as the main product, I think of them as just a derivative: an artifact that can be generated by first determining an individual’s personal parameters and then applying these algorithms to them. Better yet, instead of pre-defining a rigid multi-month program (which only works well for people with very consistent schedules), this approach allows generating guidance at any point on any day, which works better for people with inconsistent schedules.
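
To make the idea concrete, here is a toy sketch of one such algorithm: deriving a weekly set count from personal parameters. The factors and coefficients are invented purely for illustration; they are not the actual body.build algorithms or evidence-based numbers:

```python
# Toy heuristic: scale a baseline weekly set volume by recovery-related
# personal parameters. Coefficients are made up for illustration only.

def weekly_sets(base_sets: int, sleep_quality: float,
                calorie_balance: float, training_age_years: float) -> int:
    """Return an adjusted weekly set count for a muscle group.

    sleep_quality and calorie_balance are normalized so ~1.0 = neutral;
    values below 1.0 mean poor sleep or a caloric deficit.
    """
    adjustment = 1.0
    adjustment *= 0.8 + 0.2 * min(sleep_quality, 1.5)       # poor sleep -> fewer sets
    adjustment *= 0.9 + 0.1 * min(calorie_balance, 1.5)     # deficit -> fewer sets
    adjustment *= 1.0 + 0.05 * min(training_age_years, 10)  # advanced -> more volume
    return round(base_sets * adjustment)
```

The real value comes from composing many small, well-sourced functions like this: given today’s inputs (sleep, soreness, available time, reported symptoms), the system can generate guidance on the spot instead of following a fixed plan.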

Body.build today has a few such codified algorithms; I’ve only implemented what I’ve needed so far.

I need help

I may know how to build some software, and I have several ideas for new types of applications that could benefit people. But there is a lot that I haven’t figured out yet! Maybe you can help?

Particular pain points:

  1. Marketing: developers don’t like it, but it’s critical. We need to determine who to build for (coaches? developers? end-users? beginners or experts?), what their biggest problems are, and how to reach them. Marketing firms are quite expensive; perhaps AI will make this a lot more accessible. For now, I think it makes the most sense to build apps for technology & open source enthusiasts who geek out about optimizing their weight lifting and body building: the type of people who would obsessively scroll Instagram or YouTube hoping to find novel workout tips (and who will now hopefully have a better way). I’d like to bring sports & conditioning more to the forefront too, but can’t prioritize this now.
  2. UX and UI design.

Developer help is less critical, but of course welcome too.

If you think you can help, please reach out on the body.build Discord or X/Twitter. And of course, check out body.build! Please forward this to people who are into both fitness and open source technology. Thanks!