As a long-time Radiohead fan, I was happy to see video of their first gig in many years, and although I did enjoy the set, I don't feel I'm missing out on anything, really. Great songs, great musicians, but nothing new, nothing really unexpected (not counting a couple of songs that they only very rarely perform live). I watched the video and then switched to a live show by The Smile from 2024…
November 05, 2025
After days of boxes, labels, and that one mysterious piece of furniture that no one remembers what it belongs to, we can finally say it: we've moved! And yes, mostly without casualties (except for a few missing screws).
The most nerve-wracking moment? Without a doubt, moving the piano. It got more attention than any other piece of furniture, and rightfully so. With a mix of brute strength, precision, and a few prayers to the gods of gravity, it's now proudly standing in the living room.
We've also been officially added to the street WhatsApp group: the digital equivalent of the village well, but with emojis. It feels good to get those first friendly waves and "welcome to the neighborhood!" messages.
The house itself is slowly coming together. My IKEA PAX wardrobe is fully assembled, but the BRIMNES bed still exists mostly in theory. For now, I'm camping in style: mattress on the floor. My goal is to build one piece of furniture per day, though that might be slightly ambitious. Help is always welcome, not so much for heavy lifting, but for some body doubling and co-regulation. Just someone to sit nearby, hold a plank, and occasionally say "you're doing great!"
There are still plenty of (banana) boxes left to unpack, but that's part of the process. My personal mission: downsizing. Especially the books. But they won't just be dumped at a thrift store; books are friends, and friends deserve a loving new home.

Technically, things are running quite smoothly already: we've got fiber internet from Mobile Vikings, and I set up some Wi-Fi extenders and powerline adapters. Tomorrow, the electrician's coming to service the air-conditioning units, and while he's here, I'll ask him to attach RJ45 connectors to the loose UTP cables that end in the fuse box. That means wired internet soon too, because nothing says "settled adult" like a stable ping.
And then there's the garden.
Not just a tiny patch of green, but a real garden with ancient fruit trees and even a fig tree! We had a garden at the previous house too, but this one definitely feels like the deluxe upgrade. Every day I discover something new that grows, blossoms, or sneakily stings.
Ideas for cozy gatherings are already brewing. One of the first plans: living room concerts; small, warm afternoons or evenings filled with music, tea (one of us has British roots, so yes: milk included, coffee machine not required), and lovely people.
The first one will likely feature Hilde Van Belle, a (bal)folk friend who currently has a Kickstarter running for her first solo album:
Hilde Van Belle - First Solo Album
I already heard her songs at the CaDansa Balfolk Festival, and I could really feel the personal emotions in her music: honest, raw, and full of heart.
You should definitely support her! 
The album artwork is created by another (bal)folk friend, Verena, which makes the whole project feel even more connected and personal.
Hilde (left) and Verena (right) at CaDansa
Valentina Anzani
So yes: the piano's in place, the Wi-Fi works, the garden thrives, the boxes wait patiently, and the teapot is steaming.
We've arrived.
Phew. We actually moved. 



November 04, 2025
I've been shooting with Leica M cameras for a few years. This page lists the lenses I use today and vintage lenses I would love to try.
Lenses I currently own
| Lens | Years | Notes |
|---|---|---|
| Leica Summilux-M 35 mm f/1.4 ASPH FLE | 2010–2022 | Sharp, fast, and versatile. My everyday lens. |
| Leica Summilux-M 50 mm f/1.4 ASPH | 2023–Present | Sharp, fast, and balanced with beautiful color and contrast. |
| Leica Noctilux-M 50 mm f/0.95 | 2008–Present | Dreamy and full of depth, but not always sharp. |
Vintage lenses I'd love to try or buy
Lately, I've become more interested in vintage lenses than new ones. Their way of handling light and color is softer and has more character than modern glass. My Noctilux showed me how much beauty there is in imperfection. These older lenses can be hard to find and often confusing to identify, so I'm keeping track of them here.
| Lens | Years | Notes |
|---|---|---|
| Leica Elmarit-M 28 mm f/2.8 v4 | 1992–2006 | Compact and sharp with minimal distortion. A clean, classic wide-angle. |
| Leica Summicron-M 35 mm f/2 v4, German-made ("Bokeh King") | 1979–1999 | Famous for its smooth bokeh and gentle rendering. Avoid the Canadian version. |
| Leica Summilux-M 35 mm f/1.4 AA (Double Aspherical) | 1990–1994 | Rare and collectible. Produces warm tones and painterly backgrounds. |
| Leica Summitar 50 mm f/2 | 1939–1955 | Soft and nostalgic, with the kind of glow only old glass gives. |
| Leica Summicron 50 mm f/2 Rigid v2 | 1956–1968 | A crisp, confident lens that defined Leica's look for years. |
November 02, 2025
If you prefer your content/data not to be used to train LinkedIn's AI, you can opt out at https://www.linkedin.com/mypreferences/d/settings/data-for-ai-improvement (but only until tomorrow, Nov 3rd?). Crazy that this requires an explicit opt-out, by the way; it really should be opt-in (so off by default). More info can be found at tweakers.net. Feel free to share…
November 01, 2025

When the AI bubble bursts…
As Ed Zitron points out, the entire current bubble rests on 7 or 8 companies trading promises to exchange billions of euros once every inhabitant of the planet spends an average of €100 a month to use ChatGPT. Which, to avoid a cascade of bankruptcies, has to happen within 4 years at the latest.
Spoiler: it will never happen.
(yes, Gee convinced me that spoilers aren't so bad after all, and that if a spoiler genuinely ruins a work, the work was crap to begin with)
Everyone is losing a fortune investing in AI infrastructure that nobody wants to pay to use.
Because apart from two or three managers lobotomized by business schools and proudly posting generated texts on LinkedIn that they haven't even read, ChatGPT is far from the promised revolution. That is just what they want us to believe. It is part of the marketing!
Just as they want us to believe that having an app for everything simplifies our lives when, thinking rationally, the cognitive cost of the app is often higher than the alternative, assuming an alternative even exists.
And it is no coincidence that I use apps to illustrate the AI bubble.
Smartphone saturation
Because apps were promoted with massive marketing to indirectly drive smartphone sales. A gigantic industry formed around the fact that the smartphone market was booming. And while that bubble never burst, another phenomenon arrived very recently: saturation.
Everyone has a smartphone. Those who don't have one have consciously chosen not to. Those who do will replace it less and less often, because new models no longer bring anything and, on the contrary, remove features (how many people cling to an old model to keep a headphone jack?). The industry has finished growing and will settle into renewing what already exists.
If I were mean, I would say the dumbphone market has stronger growth potential than the smartphone market.
All the small businesses and services that were talked into "developing an app" only did so to promote smartphone sales, and now it no longer serves any purpose. They find themselves having to manage three different customer bases: Android, iPhone, and those with neither. In short, you can stop "promoting your apps"; it will bring you nothing but extra work and extra costs. Let that be a lesson to you now that you are "investing in AI".
Spoiler: nobody has learned the lesson.
Growth, O Growth!
But capitalism needs growth or, at the very least, the belief in future growth.
That is exactly what AI promises, and it is why we are force-fed AI marketing everywhere, why new phones have "AI chips", and why every app updates itself to push us into using AI.
Compared to the mobile explosion, I nevertheless observe one glaring, fundamental difference.
Many "normal" people don't like it. The same people who were fascinated by the first smartphones, who were fascinated by ChatGPT's answers, are furious to find an AI in WhatsApp or in their toothbrush. Those who believe in AI the most are not those who use it the most (which was the case with the smartphone), but those who think others will use it!
Yet those others end up barely using it at all. And it is not an ethical or ecological or political stance.
It just annoys the hell out of them. Even under relentless marketing bombardment, they feel it complicates their lives rather than simplifying them. I see more and more people refusing to install updates, trying to make a device last as long as possible to avoid having to buy the new model with touchscreens instead of buttons. Something I was already explaining back in 2023.
We have reached a point of technological saturation.
When you refuse AI, or when you ask for a physical button to adjust the radio volume in your new car, salespeople and marketers simply file you into the category of "ignoramuses who understand nothing about technology" (a category they put me in as soon as I open my mouth, and being taken for a technological ignoramus amuses me greatly).
But at some point, these ignoramuses will become a sizeable mass. These non-consumers will start to carry weight, economically and politically.
The hope of the technological ignoramuses
The economy of enshittification (or "crapitalism", as Charlie Stross calls it) is, without meaning to, promoting degrowth and low-tech!
Every system carries within it the seeds of the revolution that will overthrow it.
At least, I hope so…
The web 2.0 bubble burst leaving behind an enormous infrastructure on which a few survivors (Amazon, Google) and a few newcomers (Facebook, Netflix) were able to prosper, literally appropriating the remains.
Sometimes a bubble leaves nothing behind, like the subprime bubble of 2008.
What will come of the AI bubble we are in? The datacenters being built will quickly become obsolete. The destruction will be enormous.
But these data centers require electricity. A lot of electricity. And, to a large extent, renewable electricity, because it is currently the cheapest.
My bet is therefore that the bursting of the AI bubble will create an unprecedented supply of electricity. It will become almost free and, in some cases, will even have a negative price (because once the batteries are full, the surplus production has to go somewhere).
And if electricity is free, it becomes harder and harder to justify burning oil. The end of the AI bubble could also sound the end of an oil bubble that has lasted for 70 years.
It sounds a bit utopian and, at the moment of the burst, it will not be pretty. It could start tomorrow or in 10 years. The transition is likely to last more than a handful of years. But clearly, a civilizational shift is getting under way…
We will not get there without pain. The economic recession will feed every fascist delusion, now equipped with a surveillance infrastructure the craziest Stasi officer could never have dreamed of. But maybe, at the end of the tunnel, we will head toward what Tristan Nitot described in his novelette "Vélorutopia". He claims to be inspired by, among others, Bikepunk. But I think he only says that to flatter me.
I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!
Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.
October 30, 2025

October 29, 2025
You know how you can make your bootloader sing a little tune?
Well… what if instead of music, you could make it play Space Invaders?
Yes, that's a real thing.
It's called GRUB Invaders, and it runs before your operating system even wakes up.
Because who needs Linux when you can blast aliens straight from your BIOS screen? 
From Tunes to Lasers
In a previous post, "Resurrecting My Windows Partition After 4 Years 🖥️🎮", I fell down a delightful rabbit hole while editing my GRUB configuration.
That's where I discovered GRUB_INIT_TUNE, spent hours turning my PC speaker into an '80s arcade machine, and learned far more about bootloader acoustics than anyone should.
So naturally, the next logical step was obvious:
if GRUB can play music, surely it can play games too.
Enter: GRUB Invaders. 

What the Heck Is GRUB Invaders?
grub-invaders is a multiboot-compliant kernel game: basically, a program that GRUB can launch as if it were an OS.
Except it's not Linux, not BSD, not anything remotely useful…
it's a tiny Space Invaders clone that runs on bare metal.
To install it (on Ubuntu or Debian derivatives):
sudo apt install grub-invaders
Then, in GRUB's boot menu, it'll show up as GRUB Invaders.
Pick it, hit Enter, and bam! No kernel, no systemd, just pew-pew-pew.
Your CPU becomes a glorified arcade cabinet. 
Image: https://libregamewiki.org/GRUB_Invaders
How It Works
Under the hood, GRUB Invaders is a multiboot kernel image (yep, same format as Linux).
That means GRUB can load it into memory, set up registers, and jump straight into its entry point.
There's no OS, no drivers: just BIOS interrupts, VGA mode, and a lot of clever 8-bit trickery.
Basically: the game runs right on the hardware, paints directly to video memory, and uses the keyboard interrupt for controls.
It's a beautiful reminder that once upon a time, you could build a whole game in a few kilobytes.
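Want to check the multiboot claim yourself? GRUB ships a small verifier for exactly this. The image path below is a placeholder; dpkg -L grub-invaders will tell you where the package actually put it:
grub-file --is-x86-multiboot /path/to/invaders
An exit status of 0 means GRUB will happily treat it as a bootable multiboot kernel.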
Technical Nostalgia
Installed size?
Installed-Size: 30
Size: 8726 bytes
Yes, you read that right: under 9 KB.
That's less than one PNG icon on your desktop.
Yet it's fully playable, proof that programmers in the '80s had sorcery we've since forgotten.
The package is ancient but still maintained enough to live in the Ubuntu repositories:
Homepage: http://www.erikyyy.de/invaders/
Maintainer: Debian Games Team
Enhances: grub2-common
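Those fields come straight from the package metadata, by the way; you can pull them up yourself on any Debian-family system:
apt show grub-invaders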
So you can still apt install it in 2025, and it just works.
Why Bother?
Because you can.
Because sometimes it's nice to remember that your bootloader isn't just a boring chunk of C code parsing configs.
It's a tiny virtual machine, capable of loading kernels, playing music, and (if you're feeling chaotic) defending the Earth from pixelated aliens before breakfast.
It's also a wonderful conversation starter at tech meetups:
"Oh, my GRUB doesn't just boot Linux. It plays Space Invaders. What does yours do?"
A Note on Shenanigans
Don't worry: GRUB Invaders doesn't modify your boot process or mess with your partitions.
It's launched manually, like any other GRUB entry.
When you're done, reboot, and you're back to your normal OS.
Totally safe. (Mostly. Unless you lose track of time blasting aliens.)
TL;DR
- grub-invaders lets you play Space Invaders in GRUB.
- It's under 9 KB, runs without an OS, and is somehow still in Ubuntu repos.
- Totally useless. Totally delightful.
- Perfect for when you want to flex your inner 8-bit gremlin.
Last summer, I was building a small automation in n8n when I came across Activepieces. Both tools promise the same thing: connect your applications, automate your workflows, and host it yourself. But when I clicked through to Activepieces' GitHub repo, I noticed it's released under the MIT license. Truly Open Source, not just source-available like n8n.
As I dug deeper into these tools, something crystallized for me: business logic is moving out of individual applications and into the orchestration layer.
Today, most organizations run on dozens of disconnected tools. A product launch means logging into Mailchimp for email campaigns, Salesforce for lead tracking, Google Analytics for performance monitoring, Drupal for content publishing, Slack for team coordination, and a spreadsheet to keep everything synchronized. We copy data between systems, paste it into different formats, and manually trigger each step. In other words, most organizations are still doing orchestration by hand.
With orchestration tools maturing, this won't stay manual forever. That led me to an investment thesis that I call the Orchestration Shift: the tools we use to connect systems are becoming as important as the systems themselves.
This shift could change how we think about enterprise software architecture. For the last decade, we've talked about the "marketing technology stack" or "martech stack": collections of tools connected through rigid, point-to-point integrations. Orchestration changes this fundamentally. Instead of each tool integrating directly with others, an orchestration layer coordinates how they work together: the "martech stack" becomes a "martech network".
Why I invested in Activepieces
I believe that in the next five to ten years, orchestration platforms like Activepieces are likely to become critical infrastructure in many organizations. If that happens, this shift needs Open Source infrastructure. Not only proprietary SaaS platforms or source-available licenses with commercial restrictions, but truly open infrastructure.
The world benefits when critical infrastructure has strong Open Source alternatives. Linux gave us an alternative to proprietary operating systems. MySQL and PostgreSQL gave us alternatives to Oracle. And of course, Drupal and WordPress gave us alternatives to dozens of proprietary CMSes.
That is why Activepieces stood out: it is Open Source and positioned for an important market shift.
So I reached out to Ash Samhouri, their co-founder and CEO, to learn more about their vision. After a Zoom call, I came away impressed by both the mission and the momentum. When I got the opportunity to invest, I took it.
A couple months later, n8n raised over $240 million at a $2.5 billion valuation, validation that the orchestration market was maturing rapidly.
I invested not just money, but also time and effort. Over the summer, I worked with Jürgen Haas to create a Drupal integration for Activepieces and the orchestration module for Drupal. Both shipped the week before DrupalCon Vienna, where I demonstrated them in my opening keynote.
How orchestration changes platforms
Consider what this means for platforms like Drupal, which I have led for more than two decades. Drupal has thousands of contributed modules that integrate with external services. But if orchestration tools begin offering those same integrations in a way that is easier and more powerful to use, we have to ask how Drupal's role should evolve.
Drupal could move from being the central hub that manages integrations to becoming a key node within this larger orchestration network.
In this model, Drupal continues managing and publishing content while also acting as a connected participant in such a network. Events in Drupal can trigger workflows across other systems, and orchestration tools can trigger actions back in Drupal. This bidirectional connection makes both more powerful. Drupal gains capabilities without adding complexity to its core, while orchestration platforms gain access to rich content, structured data, publishing workflows, and more.
Drupal can also learn architecturally from these orchestration platforms. Tools like n8n and Activepieces use a simple but powerful pattern: every operation has defined inputs and outputs that can be chained together to build workflows. Drupal could adopt this same approach, making it easier to build internal automations and positioning Drupal as an even more natural participant in orchestration networks.
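To make that pattern concrete, here is a toy sketch in Python. The names are invented for illustration; this is not the actual API of n8n, Activepieces, or Drupal:

from typing import Callable

# Toy illustration of "defined inputs and outputs, chained together".
# All names here are invented; this is not any real orchestration API.
Step = Callable[[dict], dict]  # an operation maps an input dict to an output dict

def chain(*steps: Step) -> Step:
    # Compose steps so each operation's output feeds the next operation's input.
    def workflow(payload: dict) -> dict:
        for step in steps:
            payload = step(payload)
        return payload
    return workflow

def fetch_content(data: dict) -> dict:
    return {**data, "body": "Body of post %d" % data["id"]}

def summarize(data: dict) -> dict:
    return {**data, "summary": data["body"][:16]}

pipeline = chain(fetch_content, summarize)
print(pipeline({"id": 42}))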
We have seen similar shifts before. TCP/IP did not make telephones irrelevant; it changed where the intelligence lived. Phones became endpoints in a network defined by the protocol connecting them. Orchestration may follow a similar path, becoming the layer that coordinates how business systems work together.
Where orchestration is heading
Today, orchestration platforms handle workflow automation: when X happens, do Y. Form submissions create CRM entries, send email notifications, post Slack updates. I demonstrated this pattern in my DrupalCon Vienna keynote, showing how predefined workflows eliminate manual work and custom integration code.
But orchestration is evolving toward something more powerful: digital workers. These AI-driven agents will understand context, make decisions, and execute complex tasks across platforms. A digital worker could interpret a goal like "Launch the European campaign for our product launch", analyze what needs to happen, build the workflows, coordinate across your martech network, execute them, and report results.
Tools like Activepieces and protocols like the Model Context Protocol are laying the groundwork for this future. We're moving from automation (executing predefined steps) to autonomy (understanding intent and figuring out how to achieve it). The future will likely require both: deterministic workflows for reliability and consistency, combined with AI-driven decision-making for flexibility and intelligence.
This shift makes the orchestration layer even more critical. It's not just connecting systems anymore; it's where business intelligence and decision-making will live.
Conclusion
When I first clicked through to Activepieces' GitHub repo last summer, I was looking for a tool to automate a workflow. What I found was something bigger: a glimpse of how business software architecture is fundamentally changing. I've been thinking about it since.
To me, the question isn't whether orchestration will become critical infrastructure. It's whether that infrastructure will be open and built collaboratively. That is a future worth investing in, both with capital and with code.
October 28, 2025

October 27, 2025

Quâest-ce que lâoutil va faire de moiâŻ?
Je ne peux rĂ©sister Ă vous partager cet extrait issu de «âŻLâodyssĂ©e du pingouin cannibaleâŻÂ», de lâinĂ©narrable Yann Kerninon, philosophe et punk rocker anarchocyclisteâŻ:
Quand on mâenvie dâĂ©crire des livres et dâĂȘtre un philosophe, jâai toujours envie de rĂ©pondre «âŻallez vous faire foutreâŻÂ». Dans une interview tĂ©lĂ©visĂ©e, le philosophe Kostas Axelos affirmait que ce nâĂ©tait jamais le penseur qui faisait la pensĂ©e, mais bien toujours la pensĂ©e qui faisait le penseur. Il ajoutait quâil aurait bien aimĂ© quâil en soit autrement. Au journaliste Ă©tonnĂ© qui lui demandait pourquoi, il rĂ©pondit avec un lĂ©ger sourireâŻ: «âŻParce que câest la source dâune grande souffrance.âŻÂ»
Cette idĂ©e que la pensĂ©e fait le penseur est poussĂ©e encore plus loin par Marcello Vitali-Rosati dans son excellent «âŻĂloge du bugâŻÂ». Dans cet ouvrage, que je recommande chaudement, Marcello critique la dualitĂ© platonicienne qui imprĂšgne la pensĂ©e occidentale depuis 2000 ans. Il y aurait les penseurs et les petites mains, les dirigeants et les obĂ©issants, le virtuel et le rĂ©el. Ce dĂ©nigrement de la matĂ©rialitĂ© aurait Ă©tĂ© poussĂ© Ă son paroxysme par les GAFAM qui tentent de cacher toute lâinfrastructure sur laquelle elles sâappuient. Nous avons nos donnĂ©es dans «âŻle cloudâŻÂ», nous cliquons pour passer une commande et, magiquement, le paquet arrive Ă notre porte le lendemain.
Lorsquâun Ă©tudiant me dit que son tĂ©lĂ©phone se connecte Ă un satellite, lorsquâun politicien sâĂ©tonne que les cĂąbles sous-marins existent encore, lorsquâun usager associe «âŻwifiâŻÂ» et internet, ce nâest pas de la simple ignorance comme je lâai toujours cru. Câest en rĂ©alitĂ© le rĂ©sultat de dĂ©cennies de lavage de cerveau et de marketing pour tenter de nous faire oublier la matĂ©rialitĂ©, pour tenter de nous convaincre que nous sommes tous des «âŻdĂ©cideursâŻÂ» Ă qui obĂ©it un gĂ©nie magique.
Marcello fait le parallĂšle avec le gĂ©nie dâAladdin. Car, au cas oĂč vous ne lâauriez pas remarquĂ©, Aladdin est inculte. Il veut «âŻdes beaux vĂȘtementsâŻÂ» mais nâa aucune idĂ©e de ce qui fait que des vĂȘtements sont beaux ou non. Il ne pose aucun choix. Il est sous la coupe totale du gĂ©nie qui prend lâentiĂšretĂ© des dĂ©cisions. Il croit ĂȘtre le maĂźtre, il est le jouet du gĂ©nie.
Je me permets mĂȘme de pousser lâanalogie en faisant appel aux habits neufs de lâempereurâŻ: lorsquâAladdin sera complĂštement dĂ©pendant du gĂ©nie, celui-ci lui fournira des habits «âŻinvisiblesâŻÂ» en le convainquant que ce sont les plus beaux. Ce processus est dĂ©sormais connu sous le nom de «âŻmerdificationâŻÂ».
Le nĂ©oplatonicisme de Plotin voulait que lâĂ©crit ne soit quâune tĂąche vulgaire, subalterne de la pensĂ©e.
Avec ChatGPT et consorts, la Silicon Valley a inventĂ© le nĂ©o-nĂ©oplatonicisme. La pensĂ©e elle-mĂȘme devient vulgaire, subalterne Ă lâidĂ©e. Le grand entrepreneur a lâĂ©bauche dâune idĂ©e, plutĂŽt un dĂ©sir intuitif. Charge aux sous-fifres de le rĂ©aliser ou, pour le moins, de crĂ©er une campagne marketing pour modifier la rĂ©alitĂ©, pour convaincre que ce dĂ©sir est rĂ©el, gĂ©nial, souhaitable et rĂ©aliste. Que son auteur mĂ©rite les lauriers. Câest ce que jâai appelĂ© «âŻla mystification de la Grande IdĂ©eâŻÂ».
Mais ce nâest pas le penseur qui fait la pensĂ©e. Câest la pensĂ©e qui fait le penseur.
Ce nâest pas la pensĂ©e qui fait lâĂ©crit, câest lâĂ©crit qui fait la pensĂ©e.
Lâacte dâĂ©criture est physique, matĂ©riel. Lâutilisation dâun outil ou dâun autre va grandement affecter lâĂ©crit et, par consĂ©quent, la pensĂ©e et donc le penseur lui-mĂȘme. Sur ce sujet, je ne peux que vous recommander chaudement «âŻLa mĂ©canique du texteâŻÂ» de Thierry Crouzet. Lâabsence de cette rĂ©fĂ©rence mâa dâailleurs sautĂ© aux yeux dans «âŻĂloge du bugâŻÂ», car les deux livres sont trĂšs complĂ©mentaires.
Si toute cette rĂ©flexion semble pour le moins abstraite, jâen ai fait lâexpĂ©rience de premiĂšre main. En Ă©crivant Ă la machine Ă Ă©crire, bien entendu, comme câest le cas pour mon roman Bikepunk.
Mais le changement le plus profond que jâai vĂ©cu est probablement liĂ© Ă ce blog.
Il y a 3 ans, jâai enfin rĂ©ussi Ă quitter Wordpress pour faire un blog statique que je gĂ©nĂšre avec mon propre script.
De maniĂšre amusante, Marcello Vitali-Rosati vient de faire un cheminement identique.
Mais ce nâest pas un long processus rĂ©flexif qui mâa amenĂ© Ă cela. Câest le fait dâĂȘtre saoulĂ© par la complexitĂ© de Wordpress, de me rendre compte que jâavais perdu le plaisir dâĂ©crire et que je le retrouvais sur le rĂ©seau Gemini. Jâai mis en place des choses sans en comprendre les tenants et les aboutissants. Jâai expĂ©rimentĂ©. Jâai Ă©tĂ© confrontĂ© Ă des centaines de microdĂ©cisions que je ne soupçonnais pas. Jâai appris Ă©normĂ©ment sur le HTML en dĂ©veloppant Offpunk et je lâai appliquĂ© sur ce blog. Pour ĂȘtre honnĂȘte, je me suis rendu compte que jâavais oubliĂ© quâil Ă©tait possible de faire une simple page HTML sans JavaScript, sans un thĂšme CSS fait par un professionnel. Et pourtant, une fois en ligne, je nâai reçu que des Ă©loges sur un site pourtant minimal.
Mon processus de blogging sâest complĂštement modifiĂ©. Je me suis remis Ă Vim aprĂšs mâĂȘtre remis pleinement Ă Debian. Mes Ă©crits sâen sont ressentis. Jâai Ă©tĂ© invitĂ© Ă parler de minimalisme numĂ©rique, de low-tech.
Mais je nâai pas rejoint Gemini parce que je me sentais un minimaliste numĂ©rique dans lâĂąme. Je nâai pas quittĂ© Wordpress par amour de la low-tech. Je nâai pas créé Offpunk parce que je suis un guru de la ligne de commande.
Câest exactement le contraireâŻ! Gemini mâa illuminĂ© sur une maniĂšre de voir et de vivre un minimalisme numĂ©rique. Programmer ce blog mâa fait comprendre lâintĂ©rĂȘt de la low-tech. CrĂ©er Offpunk et lâutiliser ont fait de moi un adepte de la ligne de commande.
La pensĂ©e fait le penseurâŻ! Lâoutil fait le crĂ©ateurâŻ! Le logiciel libre fait le hackerâŻ! La plateforme fait lâidĂ©ologieâŻ! Le vĂ©lo fait la condition physiqueâŻ!
Peut-ĂȘtre que nous devrions arrĂȘter de nous poser la question «âŻQuâest-ce que cet outil peut faire pour moiâŻ?âŻÂ» et la remplacer par «âŻQuâest-ce que cet outil va faire de moiâŻ?âŻÂ».
Car si la pensĂ©e fait le penseur, le rĂ©seau social propriĂ©taire fait le fasciste, le robot conversationnel fait lâabruti naĂŻf, le slide PowerPoint fait le dĂ©cideur crĂ©tin.
Quâest-ce que cet outil va faire de moiâŻ?
En Ă©nonçant cette question Ă haute voix, je soupçonne que nous verrons dâun autre Ćil lâutilisation de certains outils, surtout ceux qui sont «âŻplus facilesâŻÂ» ou «âŻque tout le monde utiliseâŻÂ».
Quâest-ce que cet outil va faire de moiâŻ?
En regardant autour de nous, il y a finalement peu dâoutils dont la rĂ©ponse Ă cette question est rassurante.
Je suis Ploum et je viens de publier Bikepunk, une fable Ă©colo-cycliste entiĂšrement tapĂ©e sur une machine Ă Ă©crire mĂ©canique. Pour me soutenir, achetez mes livres (si possible chez votre libraire)âŻ!
Recevez directement par mail mes écrits en français et en anglais. Votre adresse ne sera jamais partagée. Vous pouvez également utiliser mon flux RSS francophone ou le flux RSS complet.
October 23, 2025
The web is changing fast. AI now writes content, builds web pages, and answers questions directly, often bypassing websites entirely.
People often wonder what this means for Drupal, so at DrupalCon Vienna, I tackled this head-on. My message was simple: AI is the storm, but it's also the way through it. Instead of fighting AI, we're leaning into it.
My keynote focused on how Drupal is evolving across four product areas. We're making it easier to get started with Site Templates, enabling visual site building through Drupal Canvas, accelerating development with AI assistance, and exploring complex workflows with new orchestration tools.
If you missed the keynote, you can watch the video below, or download my slides (62 MB).
Vienna felt like a turning point. People could see the pieces coming together. Drupal is finding its footing in the AI era, leading in AI innovation, and ready to help shape what comes next for the web.
Growing Drupal with Site Templates
One of the most important ways to grow Drupal is to make it easier and faster to build new sites. We began that work with Recipes, a way to quickly add common features to a site. Recipes help people go from idea to a website in hours instead of days.
At DrupalCon Vienna, I talked about the next step in that journey: our first Site Template. Site Templates build on Recipes and also include a complete design with layouts, visual style, and sample content. The result is that you can go from a new Drupal install to a fully working website in minutes. It will be the easiest way yet to get started with Drupal.
Next, we plan to introduce more Site Templates and launch a Site Template Marketplace where anyone can discover, share, and build on templates for different use cases.
A new visual editing experience
At DrupalCon Vienna, the energy around Drupal Canvas was infectious. Some even called it "CanvasCon". Drupal Canvas sessions were often standing room only, just like the Drupal AI sessions.
I first showed an early version of Drupal Canvas at DrupalCon Barcelona in September 2024, when we launched Drupal's Starshot initiative. The progress we've made in just one year is remarkable. My keynote showed parts of Drupal Canvas in action, but for a deeper dive, I recommend watching this breakout session.
Version 1.0 of Drupal Canvas is scheduled for November 2025. Starting in January 2026, it will become the default page builder in Drupal CMS 2.0. After more than 15 months of development and countless contributors working to make Drupal easier for everyone, it's hard to believe we're almost there. This marks the beginning of a new chapter for how people create with Drupal.
What excites me most is what this solves. For years, building pages in Drupal required technical expertise. Drupal Canvas gives end-users a visual page builder that is both more powerful and easy to use. Plus, it supports React, which means front-end developers can contribute using skills they already have.
Drupal's accidental AI advantage
Every content management system faces defining moments. For Drupal, one came with the release of Drupal 8. We rebuilt Drupal from the ground up, adopting modern design patterns and improving configuration management, versioning, workflows, and more.
The transition was hard, but here is the surprising part: ten years later those decisions gave Drupal an unexpected advantage in today's AI-driven web. The architecture we created is exactly what AI systems need today. When AI modifies content, you need version control to roll back mistakes. When it builds pages, you need structured data, permissions, and workflows. Drupal already has those capabilities.
For years, Drupal prioritized flexibility and robustness while other platforms focused on ease of use. What once felt like extra complexity now makes perfect sense. Drupal has quietly become one of the most AI-ready platforms available.
AI is the storm, and the way through the storm
As I said in my keynote: "Some days AI terrifies me. An hour later it excites me. By the evening, I'm tired of hearing about it." Still, we can't ignore AI.
I first introduced AI as part of Starshot. Five months ago, it became its own dedicated track with the launch of the Drupal AI initiative. Since then, twenty-two agencies have backed it with funding and contributors, together contributing over one million dollars. This is the largest fundraising effort in Drupal's history.
The initiative is already producing impressive results. At DrupalCon Vienna, we released Drupal AI version 1.2, a major step forward for the initiative.
In my keynote, I also demonstrated three new AI capabilities:
- AI-powered page building: Drupal AI can now generate complete, designed pages in minutes using a component-based design system in Drupal Canvas. What site builders used to build in hours now happens in minutes while maintaining your site's structure and style.
- Context Control Center: Teams can define brand voice, target audiences, and key messages from a single UI. All AI agents draw from this source of truth.
- Autonomous agents: When you update information in the Context Control Center, such as a product price or company statistic, agents automatically find every instance throughout your site and propose updates. You review and approve changes before they go live.
Orchestration as a path to explore
Earlier this year, I wrote about the great digital agency unbundling. As AI automates more technical work, agencies need to evolve their business models and find new ways to create value.
One promising direction is orchestration: building systems and workflows that connect AI agents, content platforms, CRMs, and marketing tools into intelligent, automated workflows. I think of it as DXP 2.0.
Most organizations have complex marketing technology stacks. Connecting all the systems in their stack often requires custom code or repetitive manual tasks. This integration work can be time-consuming and hard to maintain.
Modern orchestration tools solve this by automating how information flows between systems. Instead of writing custom code, you can use no-code tools to define workflows that trigger automatically. When someone fills out a form, the system creates a CRM contact, sends a welcome email, and notifies your team without any manual work.
In my keynote, I showed how ECA and ActivePieces can work together. Jürgen Haas, who created ECA, and I collaborated on this integration. ECA lets you define automations inside Drupal using events, conditions, and actions. ActivePieces is an open source automation platform similar to Zapier or n8n.
This approach allows us to build user experiences that are not only better and smarter, but also positions Drupal to benefit from AI innovation happening across the broader ecosystem. The idea resonated in Vienna. People approached me enthusiastically with related projects and demos, including tools like Flowdrop or Drupal's MCP module.
Between now and DrupalCon Chicago, we're inviting the community to explore and expand on this work. Join us in #orchestration on Drupal Slack, test the new Orchestration module, connect more automation platforms, or help improve ECA. If this direction proves valuable, we'll share what we learned at DrupalCon Chicago.
Building the future together
At DrupalCon Vienna, I felt something shift. Sessions were packed. People were excited about Site Templates and the Marketplace. Drupal Canvas drew huge crowds, and even more agencies signed up to join the Drupal AI initiative. During contribution day, more people than usual showed up looking for ways to help.
That energy in Vienna reflected something bigger. AI is changing how people use the web and how we build for it. It can feel threatening, and it can feel full of possibility, but what became clear in Vienna is that Drupal is well positioned at this inflection point, with both momentum and direction.
What makes this moment special is how the community is responding with focus and collaboration. We are approaching it as a much more coordinated effort, while still leaving room for experimentation.
Vienna showed me that the Drupal community is ready to take this on together. We have navigated uncharted territory before, but this time there is a boldness and unity I have not seen in years. That is the way through the storm. I am proud to be part of it.
I want to extend my gratitude to everyone who contributed to making my presentation and demos a success. A special thank you to Adam G-H, Aidan Foster, ASH Sullivan, Bálint Kléri, Cristina Chumillas, Elliott Mower, Emma Horrell, Gábor Hojtsy, Gurwinder Antal, James Abrahams, Jürgen Haas, Kristen Pol, Lauri Timmanee, Marcus Johansson, Martin Anderson-Clutz, Pamela Barone, Tiffany Farriss, Tim Lehnen, and Witze Van der Straeten. Many others contributed indirectly to make this possible. If I've inadvertently omitted anyone, please reach out.
October 22, 2025
I had just published my post about having dinner with Garry Kasparov when I got a call. Belgium's Prime Minister Bart De Wever had dropped out of a simultaneous exhibition where Kasparov would play 20 people at once. Did I want to take his seat?
Of course I didn't hesitate. Within hours, I was sitting across a chessboard from Kasparov. I was never going to win. The question was: in how many moves would I lose?
Playing against Garry Kasparov. Photo © Jelle Jansegers.
Kasparov opened with white, and I defended with black using the Caro-Kann defense. I blundered my rook on move 11. A mistake I'm still kicking myself over. But I kept fighting. A few times I made him pause and think for a minute or so. I resigned at move 25. None of the twenty players managed a draw or a win.
Playing against Garry Kasparov. Photo © Jelle Jansegers.
The event was livestreamed, with GM Nigel Short and FM Lennert Lenaerts providing commentary. Here is a snippet where they review my position:
This morning, I entered our game into Chess.com's analysis tool. Kasparov played with 94% accuracy, while I managed 80% accuracy (estimated 2100 ELO performance). Not bad for someone who hung a rook early.
I'm grateful I got to play him. It's a game I'll remember for the rest of my life.
Playing against Garry Kasparov. Photo © Jelle Jansegers.
Here is the game in PGN notation for anyone who wants to analyze it:
[Event "Simul Exhibition Antwerp"]
[Date "2025.10.21"]
[White "Kasparov, Garry"]
[Black "Buytaert, Dries"]
[Result "1-0"]
[Variant "Standard"]
[ECO "B12"]
[Opening "Caro-Kann Defense: Advance Variation, Tal Variation"]
1. e4 c6 2. d4 d5 3. e5 Bf5 4. h4 h5
5. c4 e6 6. Nc3 Bb4 7. Qb3 Bxc3+ 8. bxc3 Qc7 9. Ba3 Nd7
10. Nf3 Rb8 11. Bd6 Qc8 12. Bxb8 Qxb8 13. cxd5 exd5 14. c4 Ne7
15. Bd3 Bxd3 16. Qxd3 dxc4 17. Qxc4 Nb6 18. Qc2 Qc8 19. Kf1 Qd7
20. Re1 Kd8 21. Ng5 Rf8 22. e6 Qd5 23. exf7 Qc4+ 24. Qxc4 Nxc4
25. Ne6+
And in FEN notation:
3k1r2/pp2nPp1/2p1N3/7p/2nP3P/8/P4PP1/4RK1R b - - 1 25
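If you'd rather replay the game in code, here is a minimal sketch using the python-chess library (pip install python-chess); the filename is just a placeholder for wherever you save the PGN above:

import chess.pgn

# Hypothetical filename; save the PGN above to it first.
with open("kasparov_simul.pgn") as f:
    game = chess.pgn.read_game(f)

board = game.board()
for move in game.mainline_moves():
    board.push(move)

# Should print the same final position as the FEN above.
print(board.fen())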
We don't have the keys yet, but I'm already planning a moving day on Wednesday, October 29. (To be confirmed, but it's getting closer!)
Where I could use help
- Moving banana boxes: I'll make sure most of it is packed in advance. Rough estimate: about 30 boxes (I haven't counted them; I like living dangerously).
- Disassembling and moving furniture:
  - Bookshelf (IKEA KALLAX 5×5)
  - Wardrobe
  - Bed
  - Desk
- Moving the freezer (from the ground-floor kitchen to the basement in the new house).
The boxes and furniture are currently on the 2nd floor. I'll try to haul some boxes downstairs beforehand, because stairs, yes.
Assembling furniture at the new address is for another day.
Goal of the day: don't get overstimulated.
What I'm arranging myself
I'm booking a small van through Dégage car sharing.
What I still need

- An electric screwdriver (for the IKEA stuff).
- Handy, stress-resistant people with a touch of organizational talent.
- A few cars that can shuttle back and forth, even if only for a few boxes.
- An emotional support crew who can say, at the right moment: "Hey, break time."
Practical details
- Old address: Ledeberg
- New address: between Gent-Sint-Pieters station and De Sterre
- Distance: about 4 km
(I'll share the exact addresses with the helpers.)
I'm setting up a WhatsApp group for coordination.
To wrap up
Moving Day Part 1 ends with free pizza.
Because honestly: hauling boxes is heavy work, but pizza makes everything better.
Want to come help (with muscle power, a car, tools, or good vibes)?
Let me know: the more hands, the less stress!
October 21, 2025
When I was about 10 years old, my uncle gave me a chess computer. It was the "Kasparov Team-Mate Advanced Trainer", released in 1988. More than 35 years later, I still have it. Last night I was lucky enough to have dinner with the man whose name is on that device.
Garry Kasparov is one of the greatest chess players of all time. He was the number one chess player in the world for 21 years and became famous for his matches against IBM's Deep Blue. Since retiring from chess, he has become a prominent advocate for democracy, even running against Putin in 2008.
With Garry Kasparov in Antwerp, October 2025
During our conversation, Garry broke the news that Grandmaster Daniel Naroditsky had passed away unexpectedly. He was only 29. The news stopped me cold. For a moment, I just sat there, trying to process it.
I've probably watched every video Daniel Naroditsky published on his YouTube channel over the past four years. His videos made me fall in love with chess in a way I never had before. I was drawn not only to his mastery but to how generously he shared it. His real achievement wasn't his chess rating but how many people he made better.
Here I was, sitting across from the chess legend whose computer first introduced me to the game as a child, learning about the sudden loss of the person who reignited that passion decades later. One sitting in front of me, very much alive and passionately debating. The other suddenly gone.
It's strange how we can form connections with people we never meet. Through a name on a device, through videos we watch online, they become a part of our lives. When you meet one in person, the excitement is real. When you learn another has died, so is the grief.
I left that dinner thinking about the strangeness of it all. Two people who shaped my relationship with chess, colliding in one unexpected evening.
October 17, 2025
class AbstractCommand : public QObject
{
    Q_OBJECT
public:
    explicit AbstractCommand(QObject* a_parent = nullptr);
    Q_INVOKABLE virtual void execute() = 0;
    // Called by CompositeCommand below; the original excerpt omitted this declaration.
    virtual void executeAsync() = 0;
    virtual bool canExecute() const = 0;
signals:
    void canExecuteChanged( bool a_canExecute );
    // Connected to by CompositeCommand below; the original excerpt omitted this declaration.
    void executionCompleted();
};
AbstractCommand::AbstractCommand(QObject *a_parent)
    : QObject( a_parent )
{
}

AbstractConfigurableCommand::AbstractConfigurableCommand(QObject *a_parent)
    : AbstractCommand( a_parent )
    , m_canExecute( false ) { }
bool AbstractConfigurableCommand::canExecute() const
{
    return m_canExecute;
}

void AbstractConfigurableCommand::setCanExecute( bool a_canExecute )
{
    if( a_canExecute != m_canExecute ) {
        m_canExecute = a_canExecute;
        emit canExecuteChanged( m_canExecute );
        emit localCanExecuteChanged( m_canExecute );
    }
}
#include <QQmlEngine>
#include "CompositeCommand.h"

/*! \brief Constructor for an empty initial composite command */
CompositeCommand::CompositeCommand( QObject *a_parent )
    : AbstractCommand ( a_parent ) {}

/*! \brief Constructor for a list of members */
CompositeCommand::CompositeCommand( QList<AbstractCommand*> a_members, QObject *a_parent )
    : AbstractCommand ( a_parent )
{
    foreach (AbstractCommand* member, a_members) {
        registration(member);
        m_members.append( QSharedPointer<AbstractCommand>(member) );
    }
}
/*! \brief Constructor for a list of members */
CompositeCommand::CompositeCommand( QList<QSharedPointer<AbstractCommand>> a_members, QObject *a_parent )
    : AbstractCommand ( a_parent )
    , m_members ( a_members )
{
    foreach (const QSharedPointer<AbstractCommand>& member, m_members) {
        registration(member.data());
    }
}

/*! \brief Destructor */
CompositeCommand::~CompositeCommand()
{
    foreach (const QSharedPointer<AbstractCommand>& member, m_members) {
        deregistration(member.data());
    }
}

void CompositeCommand::executeAsync()
{
    foreach (const QSharedPointer<AbstractCommand>& member, m_members) {
        member->executeAsync();
    }
}

bool CompositeCommand::canExecute() const
{
    foreach (const QSharedPointer<AbstractCommand>& member, m_members) {
        if (!member->canExecute()) {
            return false;
        }
    }
    return true;
}
/*! \brief When one's canExecute changes */
void CompositeCommand::onCanExecuteChanged( bool a_canExecute )
{
    bool oldstate = !a_canExecute;
    bool newstate = a_canExecute;
    foreach (const QSharedPointer<AbstractCommand>& member, m_members) {
        if ( member.data() != sender() ) {
            oldstate &= member->canExecute();
            newstate &= member->canExecute();
        }
    }
    if (oldstate != newstate) {
        emit canExecuteChanged( newstate );
    }
}

/*! \brief When one's execution completes */
void CompositeCommand::onExecutionCompleted()
{
    m_completedCount++;
    if ( m_completedCount == m_members.count() ) {
        m_completedCount = 0;
        emit executionCompleted();
    }
}

void CompositeCommand::registration( AbstractCommand* a_member )
{
    connect( a_member, &AbstractCommand::canExecuteChanged,
             this, &CompositeCommand::onCanExecuteChanged );
    connect( a_member, &AbstractCommand::executionCompleted,
             this, &CompositeCommand::onExecutionCompleted );
}

void CompositeCommand::deregistration( AbstractCommand* a_member )
{
    disconnect( a_member, &AbstractCommand::canExecuteChanged,
                this, &CompositeCommand::onCanExecuteChanged );
    disconnect( a_member, &AbstractCommand::executionCompleted,
                this, &CompositeCommand::onExecutionCompleted );
}

void CompositeCommand::handleCanExecuteChanged(bool a_oldCanExecute)
{
    bool newCanExecute = canExecute();
    if( a_oldCanExecute != newCanExecute )
    {
        emit canExecuteChanged( newCanExecute );
    }
}
void CompositeCommand::add(AbstractCommand* a_member)
{
    bool oldCanExecute = canExecute();
    QQmlEngine::setObjectOwnership ( a_member, QQmlEngine::CppOwnership );
    m_members.append( QSharedPointer<AbstractCommand>( a_member ) );
    registration ( a_member );
    handleCanExecuteChanged(oldCanExecute);
}

void CompositeCommand::add(const QSharedPointer<AbstractCommand>& a_member)
{
    bool oldCanExecute = canExecute();
    m_members.append( a_member );
    registration ( a_member.data() );
    handleCanExecuteChanged(oldCanExecute);
}

void CompositeCommand::remove(AbstractCommand* a_member)
{
    bool oldCanExecute = canExecute();
    QMutableListIterator<QSharedPointer<AbstractCommand>> i( m_members );
    while (i.hasNext()) {
        QSharedPointer<AbstractCommand> val = i.next();
        if ( val.data() == a_member) {
            deregistration(val.data());
            i.remove();
        }
    }
    handleCanExecuteChanged(oldCanExecute);
}

void CompositeCommand::remove(const QSharedPointer<AbstractCommand>& a_member)
{
    bool oldCanExecute = canExecute();
    deregistration(a_member.data());
    m_members.removeAll( a_member );
    handleCanExecuteChanged(oldCanExecute);
}
I finally bought myself a (refurbished) new device that you can make calls with, and do those other useful things.
Naturally, as far as I'm concerned, it has to run SailfishOS.
And this time, really everything I need works: good reception, GPS, 4G.
October 15, 2025
So, you know that feeling when you're editing GRUB for the thousandth time, because dual-booting is apparently a lifestyle choice?
In a previous post, "Resurrecting My Windows Partition After 4 Years 🖥️🎮", I was neck-deep in grub.cfg, poking at boot entries, fixing UUIDs, and generally performing a ritual worthy of system resurrection.
While I was at it, I decided to take a closer look at all those mysterious variables lurking in /etc/default/grub.
That's when I stumbled upon something… magical. ✨
🎶 GRUB_INIT_TUNE: Your Bootloader Has a Voice
Hidden among all the serious-sounding options like GRUB_TIMEOUT and GRUB_CMDLINE_LINUX_DEFAULT sits this gem:
# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"
Wait, what? GRUB can beep?
Oh, not just beep. GRUB can play a tune. 🎺
Here's how it actually works (per the GRUB manpage):
Format:
tempo freq duration [freq duration freq duration ...]
- tempo: the base time for all note durations, in beats per minute.
  - 60 BPM = 1 second per beat
  - 120 BPM = 0.5 seconds per beat
- freq: the note frequency in hertz.
  - 262 = Middle C, 0 = silence
- duration: measured in "bars" relative to the tempo.
  - With tempo 60, 1 = 1 second, 2 = 2 seconds, etc.
So 480 440 1 is basically GRUB saying "Hello, world!" through your motherboard speaker: at tempo 480, one duration unit lasts 60/480 = 0.125 seconds, so you get an eighth of a second of 440 Hz, which is A4 in standard concert pitch as defined by ISO 16:1975.
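If you want to sanity-check a tune before rebooting, here is a tiny helper of my own (not something GRUB ships) that decodes a tune string into (frequency, seconds) pairs using exactly the rules above:

def decode_tune(tune: str):
    # Turn "tempo freq dur [freq dur ...]" into (hz, seconds) pairs.
    fields = [int(x) for x in tune.split()]
    tempo, notes = fields[0], fields[1:]
    beat = 60.0 / tempo  # length of one duration unit, in seconds
    return [(notes[i], notes[i + 1] * beat) for i in range(0, len(notes), 2)]

print(decode_tune("480 440 1"))  # [(440, 0.125)]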
And yes, this works even before your sound card drivers have loaded: pure, raw, BIOS-level nostalgia.
🎧 From Beep to Bop
Naturally, I couldn't resist. One line turned into a small Python experiment, which turned into an audio preview tool, which turned into… let's say, "bootloader performance art."
Want to make GRUB play a polska when your system starts?
You can. It's just a matter of string length, and a little bit of mischief.
There's technically no fixed "maximum size" for GRUB_INIT_TUNE, but remember: the bootloader runs in a very limited environment. Push it too far, and your majestic overture becomes a segmentation fault sonata.
So maybe keep it under a few kilobytes unless you enjoy debugging hex dumps at 2 AM.
🎼 How to Write a Tune That Won't Make Your Laptop Cry
Practical rules of thumb (don't be that person):
- Keep the inline tune under a few kilobytes if you want it to behave predictably.
- Hundreds to a few thousand notes is usually fine; tens of thousands is pushing your luck.
- Each numeric value (pitch or duration) must be ≤ 65535.
- Very long tunes simply delay the menu; that's obnoxious for you and terrifying for anyone asking you for help.
Keep tunes short and tasteful (or obnoxious on purpose).
🎵 Little Musical Grammar: Notes, Durations and Chords (Fake Ones)
Write notes as frequency numbers (Hz). Example: A4 = 440.
Prefer readable helpers: write a tiny script that converts D4 F#4 A4 into the numbers.
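A minimal version of that helper could look like this (my own toy sketch, using the standard equal-temperament formula around A4 = 440 Hz):

import re

NOTE_INDEX = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
              "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def note_to_hz(name: str) -> int:
    # "A4" -> 440; equal temperament, rounded to whole hertz for GRUB.
    letter, octave = re.fullmatch(r"([A-G]#?)(-?\d+)", name).groups()
    semitones_from_a4 = NOTE_INDEX[letter] - NOTE_INDEX["A"] + (int(octave) - 4) * 12
    return round(440 * 2 ** (semitones_from_a4 / 12))

print([note_to_hz(n) for n in "D4 F#4 A4 G4 B4".split()])
# [294, 370, 440, 392, 494], the same numbers as the example tune below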
Example minimal tune:
GRUB_INIT_TUNE="480 294 1 370 1 440 1 370 1 392 1 494 1 294 1"
That'll give you a jaunty, bouncy opener, suitable for mild neighbour complaints.
Chords? GRUB can't play them simultaneously, but you can fake them by rapid time-multiplexing (cycling the chord notes quickly).
It sounds like a buzzing organ, not a symphony, but it's delightful in small doses.
Fun fact 👾: this time-multiplexing trick isn't new; it's straight out of the 8-bit video game era.
Old sound chips (like those in the Commodore 64 and NES) used the same sleight of hand to make
a single channel pretend to play multiple notes at once.
If you've ever heard a chiptune shimmer with impossible harmonies, that's the same magic. ✨🎮
🧰 Tools I Like (and That You Secretly Want)
If you're not into manually counting numbers, do this:
Use a small composer script (I wrote one) that:
- Accepts melodic notation like D4 F#4 A4 or C4+E4+G4 (chord syntax).
- Can preview via your system audio (so you don't have to reboot to hear it).
- Can install the result into /etc/default/grub and run update-grub (only as sudo).
Preview before you install. Always.
Your ears will tell you if your "ode to systemd" is charming or actually offensive.
For chords, the script time-multiplexes: e.g. for a 500 ms chord and 15 ms slices,
it cycles the chord notes quickly so the ear blends them.
It's not true polyphony, but it's a fun trick.
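Here is just a bare sketch of that multiplexing step (my own, not the full script). It assumes tempo 4000, so that one GRUB duration unit is 60/4000 s = 15 ms:

def multiplex_chord(freqs, total_ms, slice_ms=15):
    # Approximate a chord by cycling its notes in short slices.
    notes = []
    for i in range(total_ms // slice_ms):
        notes += [freqs[i % len(freqs)], 1]  # one 15 ms slice of the next chord note
    return notes

# D major chord (D4+F#4+A4) held for roughly 500 ms:
tune = [4000] + multiplex_chord([294, 370, 440], total_ms=500)
print('GRUB_INIT_TUNE="' + " ".join(map(str, tune)) + '"')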
(If you want the full script I iterated on: drop me a comment. But it's more fun to leave it as an exercise for the reader.)
🧮 Limits, Memory, and "How Big Before It Breaks?"
Yes, my Red Team colleague will love this paragraph, and no, I'm not going to hand over a checklist for breaking things.
Short answer: GRUB doesn't advertise a single fixed limit for GRUB_INIT_TUNE length.
Longer answer, responsibly phrased:
- Numeric limits: per-note pitch/duration ≤ 65535 (uint16_t).
- Tempo: can go up to uint32_t.
- Parser & memory: the tune is tokenized at boot, so parsing buffers and allocators impose practical limits. Expect a few kilobytes to be safe; hundreds of kilobytes is where things get flaky.
- Usability: if your tune is measured in minutes, you've already lost. Don't be that.
If you want to test where the parser chokes, do it in a disposable VM, never on production hardware.
If you're feeling brave, you can even audit the GRUB source for buffer sizes in your specific version. 🧩
⚙️ How to Make It Sing
Edit /etc/default/grub and add a line like this:
GRUB_INIT_TUNE="480 440 1 494 1 523 1 587 1 659 3"
Then rebuild your config:
sudo update-grub
Reboot, and bask in the glory of your new startup sound.
Your BIOS will literally play you in. 🎶
💡 Final Thoughts
GRUB_INIT_TUNE is the operating-system equivalent of a ringtone for your toaster:
ridiculously low fidelity, disproportionately satisfying,
and a perfect tiny place to inject personality into an otherwise beige boot.
Use it for a smile, not for sabotage.
And just when I thought I'd been all clever reverse-engineering GRUB beeps myself…
I discovered that someone already built a web-based GRUB tune tester!
👉 https://breadmaker.github.io/grub-tune-tester/
Yes, you can compose and preview tunes right in your browser;
no need to sacrifice your system to the gods of early boot audio.
It's surprisingly slick.
Even better, there's a small but lively community posting their GRUB masterpieces on Reddit and other forums.
From Mario theme beeps to Doom startup riffs, there's something both geeky and glorious about it.
You'll find everything from tasteful minimalist dings to full-on "someone please stop them" anthems. 🎮🎶
Boot loud, boot proud, but please boot considerate.
October 14, 2025

La justesse au lieu de lâexactitude
OĂč je parle de hockey sous-marin, dâavions militaires et du pourrissement des oranges punks.
Lâarbitraire de lâarbitrage
It is a little-known historical fact, but I was, briefly, a sports referee. I refereed matches in the Belgian first division of underwater hockey (yes, that sport exists). Well, in reality, I was a referee because every second-division team had to send players to referee first-division matches. But in the end, I still did it.
I remember a particularly important match between the two best teams in Belgium, facing off for the title of Belgian champion.
In underwater hockey there are normally two referees in the water. But during that match, the second referee turned out to be out of his depth and, on every play, gave me the signal meaning "I did not see the action" (being equipped with masks and snorkels, referees communicate through codified gestures).
So I found myself refereeing, almost alone, what was probably the most important match of the championship. Despite my lack of experience, I very quickly understood that the only way to keep control of a hard-fought match was to make firm decisions with confidence. The puck went out after a confused scrum? No time to analyze to the millimeter which player had touched it last: I simply had to make a call. Whatever I decided, the other team would protest. And indeed it happened very quickly. In underwater hockey, only the captain may, in theory, address the referee. A player came up to me, complaining. I made the gesture asking whether he was the captain and, as he was not, I sent him off for 3 minutes. He started screaming. I added 2 more minutes. It was particularly harsh. But from that moment on, not one of my decisions was contested.
I tried to make them as fair as possible, and the match went very well.
I learned something important: refereeing is not a scientific discipline. Is it physically possible to determine exactly which stick was the last to touch the puck before it went out? At what exact point does a shot become dangerous? Even the boundary between a goal and a last-instant save on the line carries a certain degree of arbitrariness.
To make a fair decision, the referee can use human intuition. If a defender charged at an attacker and the puck went out, you can, in case of doubt, consider the puck going out to be the defender's fault. The referee can also "feel" whether or not a foul was deliberate.
But all of that was only possible because, unlike football, underwater hockey is not fitted with cameras scrutinizing everything in slow motion. Football has become a sport I find absolutely impossible to enjoy: after scoring a goal, players now turn to the referee and wait to learn whether there was a millimeter offside five minutes earlier. All of it is analyzed backstage by a guy in front of a computer who transmits his decisions into the referee's earpiece. Or rather, the decisions made by a computer.
The human side of the game has completely disappeared, taking on the trappings of a pseudoscientific decision that attempts to uncover a "truth". Yet, scientifically, there is no possible truth. Offside is called at the moment the ball leaves the passer's foot. That moment does not exist. The ball deforms; I challenge anyone to determine the exact millisecond of the event. The same goes for deciding whether or not a line has been crossed. From which millimeter onward can you say that a sphere has crossed a line drawn on blades of grass? The mere placement of the cameras and the stadium lighting will influence the decision. Even in cycling it is sometimes incredibly difficult to determine which bike crossed the line first. And there, the decision is made with no appeal possible.
Scientifically, it is very complicated to draw an exact boundary. One of my engineering school professors used to say that needle gauges are always more precise than digital displays, because you can see the "real" measurement... even if it means moving your head a little so that it matches what you want!
To measure scientifically, you have to state hypotheses, discuss, take several measurements, repeat an experiment. Humanly, on the contrary, it is possible to make the decision that seems fairest in the moment itself. The decision can always be debated afterwards, but, in the heat of the action, it is the one that was made.
And even if the decisions are not perfect, the fact that they seem fair at first sight creates a relationship of trust with the referee. The referee feels responsible and uses his intuition to protect his reputation. When I refereed that famous hockey match, I never tried to make the most exact decision, but always the fairest one.
But the machine no longer allows fairness. Fairness gives way to an arbitrary exactness. The referee now obeys instructions whispered into his earpiece. He can no longer make decisions. He can no longer make decisions, and yet, paradoxically, he remains responsible for them.
On complexity as a justification for not deciding
It is not only sports referees. Fighter pilots now face the same problem.
The F-35 is such a complex aircraft that it has become quite simply unflyable. On August 27, 2025, one crashed. The landing gear was stuck in a half-open position and the pilot attempted a series of "touch downs", a procedure as old as aviation itself, one that Buck Danny notably uses in Prototype FX-13, a 1961 album, to solve the very same problem.
Buck Danny did not have a hyper-complex computer on board, and he ends up saving the plane. In 2025, the computer decided the procedure was a normal landing. The aircraft switched to "ground taxi" mode while it was lifting off again. In taxi mode at several hundred meters of altitude, the machine was of course uncontrollable, forcing the pilot to eject.
A predictable, "classic" mechanical problem was turned, thanks to computers, into a catastrophe.
Just like that electronics professor who, to justify the importance of modern electronics, explained to us that thanks to electronics, his car had been repaired in under an hour on the very day he left on vacation. The breakdown in question? A faulty electronic sensor that was inventing phantom failures.
No need for a real problem anymore. These days, everything is automated! In 2024, a pilot ejected from his F-35 because, despite several reboots, his connected helmet kept reporting critical errors.
The problem: after the pilot ejected, the plane went on flying just fine for many long minutes. It seems his helmet simply had a software bug.
The subtlety of the story is that the pilot in question has now had his career put on hold and is being prosecuted for abandoning a functional aircraft. Except that he followed to the letter the procedure for the error messages displayed in his helmet.
Not only does complexity artificially create problems, it also prevents humans from gaining experience and making decisions. We no longer have pilots who "feel" their aircraft, but operators following computerized procedures. It is the same when my garage tells me that error 550 on my car requires a trip back to the dealer. Who, in the end, did nothing more than replace a hose, something my independent mechanic could have done directly if the software had not prevented him.
The price of permanent surveillance
If you read this blog, you are aware of the permanent surveillance we are all subjected to. And one of the direct consequences of that surveillance is that everything can now be scrutinized, even long after the fact. Every decision can be debated to determine whether, scientifically, it was really the right one.
Football, once again, is the perfect example: after 90 minutes of play come hours, sometimes entire days, of discussion among guys watching every frame in slow motion to conclude that the referee, the coach, or the players made bad decisions.
Whether you are a referee, a fighter pilot, or an ordinary citizen, the only possible strategy for a reasonable human is therefore to stop making decisions (which is already a decision in itself).
We have been had. We serve the machine blindly, drawing no benefit when everything goes well and getting rapped on the knuckles when everything goes wrong. Which then serves as an excuse to put even more machines into the picture.
We have become that absurd, unnatural, biologically mechanical assemblage that Anthony Burgess calls "A Clockwork Orange" (the meaning of the title is perfectly explicit in the book, far more so than in the film).
And if you think AI can surpass you, it is precisely because you are acting like an AI, like a clockwork orange. By definition, ChatGPT will always outsmart those who trust ChatGPT.
As the inimitable Yann Kerninon said in 2017, we just need to stop taking ourselves for machines. To become humans again. Organic oranges that rot and die, but which are, just before that, full of flavor and vitamins.
Andreas recounts his experience with one of his colleagues who was unable to solve a particular problem (which happens to everyone, especially with your nose to the grindstone). The peculiarity, though, is that the colleague never tried to solve the problem at all. He was trying to get ChatGPT to give him the answer.
When Andreas understood the cause of the problem, he tried to explain it to his colleague, but it was only once the colleague fed the explanation to ChatGPT, and ChatGPT agreed, that they could finally move forward.
Andreas concludes with the observation that the brain is a muscle. The less you use it, the more it atrophies, the more painful the act of thinking becomes, and the less you want to use it (and so the more it atrophies).
Let me take this opportunity to point out that ChatGPT and its ilk are literally machines for rephrasing your questions, for nodding along with all your cognitive biases. But Olivier Ertzscheid explains it better than I do in this short "three minutes flat" post.
But where Andreas is too optimistic is when he imagines that knowing how to think will become a rare and enviable quality on the job market.
Rare, yes.
Enviable, certainly not. Because to think is to question. Today's capitalism reminds me of the 1914-18 war. The workers are the cannon fodder. The intellectuals are the pacifists, the conscientious objectors. By trying to question the appalling butchery they were witnessing, all they won was the right to be shot.
To question, to rebel, is to be a loser!
As I have said before: you do not get medals for resisting. Quite the opposite: medals are there to reward those who perpetuate the system without asking questions.
Refusing to become a clockwork orange means accepting that you will rot. It even means celebrating it, pushing cloves into your flesh so that the rot smells sweet.
Besides, it gives you a punk look, don't you think?
I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!
Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.
October 13, 2025

Dear lazyweb,
At work, we are trying to rotate the GPG signing keys for the Linux packages of the eID middleware.
We created new keys, and they will soon be installed on all Linux machines that have the eid-archive package installed (they were already supposed to be, but we made a mistake).
Running some tests, however, I have a bit of a problem:
[wouter@rhel rpm-gpg]$ sudo rpm --import RPM-GPG-KEY-BEID-RELEASE
[wouter@rhel rpm-gpg]$ sudo rpm --import RPM-GPG-KEY-BEID-RELEASE-2025
fout: RPM-GPG-KEY-BEID-RELEASE-2025: key 1 import failed.
[wouter@rhel rpm-gpg]$ sudo rpm --import RPM-GPG-KEY-BEID-CONTINUOUS
This is on RHEL9.
The only difference between the old keys and the new one, apart of course from the fact that the old one is, well, old, is that the old one uses the RSA algorithm whereas the new one uses ECDSA on the NIST P-384 curve (the same algorithm as the one used by the eID card).
Does RPM not support ECDSA keys? Does anyone know where this is documented?
(Yes, I probably should have tested this before publishing the new key, but this is where we are)
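In the meantime, one way to double-check what a key file actually contains is standard GnuPG tooling; gpg --show-keys reads a key file without importing it into any keyring:

# Shows the key's algorithm (e.g. nistp384 vs rsa4096) and fingerprint
gpg --show-keys RPM-GPG-KEY-BEID-RELEASE-2025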
October 09, 2025
When working on presentations, I like to extract my speaker notes to review the flow and turn them into blog posts. I'm doing this right now for my DrupalCon Vienna talk.
I used to do this manually, but with presentations often having 100+ slides, it gets tedious and isn't very repeatable. So I ended up automating this with a Python script.
Since I use Apple Keynote or Google Slides rather than Microsoft PowerPoint, I first export my presentations to PowerPoint format, then run my Python script.
If you've ever needed to pull speaker notes from a presentation for review, editing or blogging, here is my script and how to use it.
Speaker notes extractor script
Save this code as powerpoint-to-text.py:
#!/usr/bin/env python3
"""Extract speaker notes from PowerPoint presentations to text files."""
import sys
from pathlib import Path
from pptx import Presentation

def extract_speaker_notes(pptx_path: Path) -> tuple[str, int]:
    presentation = Presentation(pptx_path)
    notes_text = []
    for i, slide in enumerate(presentation.slides, 1):
        if slide.notes_slide and slide.notes_slide.notes_text_frame:
            notes = slide.notes_slide.notes_text_frame.text.strip()
            if notes:
                notes_text.append(f"=== Slide {i} ===\n{notes}\n")
    return "\n".join(notes_text), len(notes_text)

def main():
    if len(sys.argv) != 2:
        print("Usage: python powerpoint-to-text.py presentation.pptx")
        sys.exit(1)
    input_path = Path(sys.argv[1])
    if not input_path.exists():
        print(f"Error: File '{input_path}' not found")
        sys.exit(1)
    if input_path.suffix.lower() != '.pptx':
        print(f"Warning: '{input_path}' may not be a PowerPoint file")
    try:
        notes_text, notes_count = extract_speaker_notes(input_path)
    except Exception as e:
        print(f"Error reading presentation: {e}")
        sys.exit(1)
    output_path = input_path.with_suffix('.txt')
    output_path.write_text(notes_text, encoding='utf-8')
    print(f"Extracted {notes_count} slides with notes to {output_path}")

if __name__ == "__main__":
    main()
The script uses the python-pptx library to read PowerPoint files. This library understands the internal structure of .pptx files (which are zip archives containing XML). It provides a clean Python interface to access slides and their speaker notes. The script loops through each slide, checks if it has notes, and writes them to a text file.
Usage
I like to use uv to run Python code. uv is a fast, modern Python package manager that handles dependencies automatically:
$ uv run --with python-pptx powerpoint-to-text.py your-presentation.pptx
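If you prefer not to use uv, a plain pip install works just as well (standard pip and python usage):

$ python3 -m pip install python-pptx
$ python3 powerpoint-to-text.py your-presentation.pptx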
This saves a .txt file with your notes in the same directory as the input file, not the current directory or desktop.
The text file contains:
=== Slide 1 ===
Speaker notes from slide 1 ...
=== Slide 3 ===
Speaker notes from slide 3 ...
Only slides with speaker notes are included.
October 08, 2025
As a solo developer, I wear all the hats. 


That includes the very boring Quality Assurance Hat – the one that says "yes, Amedee, you do need to check for trailing whitespace again."
And honestly? I suck at remembering those little details. I'd rather be building cool stuff than remembering to run Black or fix a missing newline. So I let my robot friend handle it.
That friend is called pre-commit. And it's the best personal assistant I never hired.
What is this thing?
Pre-commit is like a bouncer for your Git repo. Before your code gets into the club (your repo), it gets checked at the door:
"Whoa there – trailing whitespace? Not tonight."
"Missing a newline at the end? Try again."
"That YAML looks sketchy, pal."
"You really just tried to commit a 200MB video file? What is this, Dropbox?"
"Leaking AWS keys now, are we? Security says nope."
"Commit message says 'fix'? That's not a message, that's a shrug."
Pre-commit runs a bunch of little scripts called hooks to catch this stuff. You choose which ones to use – it's modular, like Lego for grown-up devs.
When I commit, the hooks run. If they don't like what they see, the commit gets bounced.
No exceptions. No drama. Just "fix it and try again."
Is it annoying? Yeah, sometimes.
But has it saved my butt from pushing broken or embarrassing code? Way too many times.
Why I bother (as a hobby dev)
I don't have teammates yelling at me in code reviews. I am the teammate.
And future-me is very forgetful.
Pre-commit helps me:
- Keep my code consistent
- Catch dumb mistakes before they become permanent
- Spend less time cleaning up
- Feel a little more "pro", even when I'm hacking on toy projects
It works with any language. Even Bash, if you're that kind of person.
Also, it feels kinda magical when it auto-fixes stuff and the commit just... works.
Installing it with pipx (because I'm not a barbarian)
I'm not a fan of polluting my Python environment, so I use pipx to keep things tidy. It installs CLI tools globally, but keeps them isolated.
If you don't have pipx yet:
python3 -m pip install --user pipx
pipx ensurepath
Then install pre-commit like a boss:
pipx install pre-commit
Boom. It's installed system-wide without polluting your precious virtualenvs. Chef's kiss.

Setting it up
Inside my project (usually some weird half-finished script I'll obsess over for 3 days and then forget for 3 months), I create a file called .pre-commit-config.yaml.
Here's what mine usually looks like:
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.28.0
    hooks:
      - id: gitleaks
  - repo: https://github.com/jorisroovers/gitlint
    rev: v0.19.1
    hooks:
      - id: gitlint
  - repo: https://gitlab.com/vojko.pribudic.foss/pre-commit-update
    rev: v0.8.0
    hooks:
      - id: pre-commit-update
What this pre-commit config actually does
You're not just tossing some YAML in your repo and calling it a day. This thing pulls together a full-on code hygiene crew – the kind that shows up uninvited, scrubs your mess, locks up your secrets, and judges your commit messages like it's their job. Because it is.
pre-commit-hooks (v5.0.0)
These are the basics – the unglamorous chores that keep your repo from turning into a dumpster fire. Think lint roller, vacuum, and passive-aggressive IKEA manual rolled into one.
- trailing-whitespace: No more forgotten spaces at the end of lines. The silent killers of clean diffs.
- end-of-file-fixer: Adds a newline at the end of each file. Why? Because some tools (and nerds) get cranky if it's missing.
- check-yaml: Validates your YAML syntax. No more "why isn't my config working?" only to discover you had an extra space somewhere.
- check-added-large-files: Stops you from accidentally committing that 500MB cat video or .sqlite dump. Saves your repo. Saves your dignity.
gitleaks (v8.28.0)
Scans your code for secrets – API keys, passwords, tokens you really shouldn't be committing.
Because we've all accidentally pushed our .env file at some point. (Don't lie.)
gitlint (v0.19.1)
Enforces good commit message style – like limiting subject line length, capitalizing properly, and avoiding messages like "asdf".
Great if you're trying to look like a serious dev, even when you're mostly committing bugfixes at 2AM.
pre-commit-update (v0.8.0)
The responsible adult in the room. Automatically bumps your hook versions to the latest stable ones. No more living on ancient plugin versions.
In summary
This setup covers:
Basic file hygiene (whitespace, newlines, YAML, large files)
Secret detection
Commit message quality
Keeping your hooks fresh
You can add more later, like linters specific to your language of choice – think of this as your "minimum viable cleanliness."
What else can it do?
There are hundreds of hooks. Some I've used, some I've just admired from afar:
- black is a Python code formatter that says: "Shhh, I know better."
- flake8 finds bugs, smells, and style issues in Python.
- isort sorts your imports so you don't have to.
- eslint for all you JavaScript kids.
- shellcheck for Bash scripts.
- ... or write your own custom one-liner hook!
You can browse tons of them at: https://pre-commit.com/hooks.html
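Wiring one of those in is just another repo entry in the same .pre-commit-config.yaml. A minimal sketch for black; the rev here is an assumption, so pin it to whatever release the project currently tags:

  - repo: https://github.com/psf/black
    rev: 24.8.0  # assumption: replace with the latest tagged release
    hooks:
      - id: black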
Make Git do your bidding
To hook it all into Git:
pre-commit install
Now every time you commit, your code gets a spa treatment before it enters version control.
Wanna retroactively clean up the whole repo? Go ahead:
pre-commit run --all-files
You'll feel better. I promise.
TL;DR
Pre-commit is a must-have.
It's like brushing your teeth before a date: it's fast, polite, and avoids awkward moments later.

If you haven't tried it yet: do it. Your future self (and your Git history, and your date) will thank you.
Use pipx to install it globally.
Add a .pre-commit-config.yaml.
Install the Git hook.
Enjoy cleaner commits, fewer review comments – and a commit history you're not embarrassed to bring home to your parents.

And if it ever annoys you too much?
You can always disable it... like cancelling the date but still showing up in their Instagram story.

git commit --no-verify
Want help writing your first config? Or customizing it for Python, Bash, JavaScript, Kotlin, or your one-man-band side project? I've been there. Ask away!
Several years ago, I built a photo stream on my website and pretty much stopped posting on Instagram.
I didn't like how Instagram made me feel, or the fact that it tracks my friends and family when they look at my photos. And while it was a nice way to stay in touch with people, I never found a real sense of community there.
Instead, I wanted a space that felt genuinely mine. A place that felt like home, not a podium. No tracking, no popularity contests, no clickbait, no ads. Just a quiet corner to share a bit of my life, where friends and family could visit without being tracked.
Leaving Instagram meant giving up its biggest feature: subscribers and notifications. On Instagram, people follow you and automatically see your posts. On my website, you have to remember to visit.
To bridge this gap, I first added an RSS feed for my photos. But since not everyone uses RSS, I later started a monthly photo newsletter. Once a month, I pick my favorite photos, format them for email, and send them out.
After sending five or six photo newsletters, I could already feel my motivation fading. Each one only took about twenty minutes to make, but it still felt like too much friction. So, I decided to fix that.
Under the hood, my photo stream runs on Drupal, built as a custom module. I added two new routes to my custom Drupal module:
- /photos/$year/$month: shows all photos for a given month, with the usual features: lazy loading, responsive images, Schema.org markup, and so on.
- /photos/$year/$month?email=true: shows the same photos, but stripped down and formatted specifically for email clients.
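For readers less familiar with Drupal: routes like these live in a module's routing file and map a path to a controller method, while ?email=true is just a query parameter the controller checks. A minimal sketch, with hypothetical module and controller names (photos, PhotoStreamController); my real module may be organized differently:

# photos.routing.yml
photos.month:
  path: '/photos/{year}/{month}'
  defaults:
    _controller: '\Drupal\photos\Controller\PhotoStreamController::month'
    _title: 'Photos'
  requirements:
    _permission: 'access content'
    year: '\d+'
    month: '\d+'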
Now my monthly workflow looks like this: visit /photos/2025/9?email=true, copy the source HTML, paste it into Buttondown, and hit 'Send'. That twenty-minute task became a one-minute task.
I spent two hours coding this to save nineteen minutes a month. In other words, it takes about six months before the time saved equals the time invested. The math checks out: 120 / 19 ≈ 6. My developer brain called it a win. My business brain called it a write-off.
But the real benefit isn't the time saved. The easier something is, the more likely I am to stick with it. Automation doesn't just save time. It also preserves good intentions.
Could I automate this completely? Sure. I'm a developer, remember. Maybe I will someday. But for now, that one-minute task each month feels right. It's just enough involvement to stay connected to what I'm sharing, without the friction that kills motivation.
October 02, 2025

I see so much social media content vanish into an algorithmic black hole.
Post a photo on Instagram and hundreds of people see it. Tweet a thought and it spreads across the internet in minutes. But that same content becomes invisible within days, buried beneath the constant scroll.
Six years ago, I deleted my Facebook account. And for the past two years, I've mostly stopped posting on Instagram and X (formerly Twitter).
It's bittersweet. I started using Twitter in 2007 after Evan Williams, one of Twitter's co-founders, personally introduced it to me at FooCamp. I loved what it stood for then.
Jeff Robbins (co-founder of Lullabot and former rock star), Larry Page (co-founder of Google) and Evan Williams (co-founder of Blogger.com and Twitter) at FooCamp 2007.
I still post on LinkedIn now and then, but LinkedIn has gone backwards too, with so many shallow, click-baity posts.
When people ask why I'm not active on social media anymore, the truth is simple: I'm happier without it.
I continue to publish on my own website, even if it's not as often as I'd like. Posting on your own site gives you something social media doesn't: permanence.
Every week I get emails from people who discover an old blog post or a photo I shared on my website years ago. That never happens with my old tweets or social media posts.
My best posts from a decade ago still show up in search results. They still spark conversations. They still get referenced, and they still help people solve problems.
Social media content has a half-life measured in hours or days. Blog posts can compound over years.
This is why I'm building my audience here, on the edge of the internet. Some days it feels like swimming against the current. But when I see a post I wrote years ago still helping someone today, I know it's worth it.
Social media gives you reach. Blog posts give you longevity.
October 01, 2025
Sometimes Linux life is bliss. I have my terminal, my editor, my tools, and Steam games that run natively. For nearly four years, I didn't touch Windows once – and I didn't miss it.
And then Fortnite happened.
My girlfriend Enya and her wife Kyra got hooked, and naturally I wanted to join them. But Fortnite refuses to run on Linux – apparently some copy-protection magic that digs into the Windows kernel, according to Reddit (so I don't know if it's true). It's rare these days for a game to be Windows-only, but this one was rare enough to shatter my Linux-only bubble. Suddenly, resurrecting Windows wasn't a chore anymore; it was a quest for polyamorous Battle Royale glory.
My Windows 11 partition had been hibernating since November 2021, quietly gathering dust and updates in a forgotten corner of the disk. Why it stopped working back then? I honestly don't remember, but apparently I had blogged about it. I hadn't cared – until now.
The Awakening – Peeking Into the UEFI Abyss
I started my journey with my usual tools: efibootmgr and update-grub on Ubuntu. I wanted to see what the firmware thought was bootable:
sudo efibootmgr
Output:
BootCurrent: 0001
Timeout: 1 seconds
BootOrder: 0001,0000
Boot0000* Windows Boot Manager ...
Boot0001* Ubuntu ...
At first glance, everything seemed fine. Ubuntu booted as usual. Windows... did not. It didn't even show up in the GRUB boot menu. A little disappointing, but not unexpected, given that it hadn't been touched in years.
I knew the firmware knew about Windows, but the OS itself refused to wake up.
The Hidden Enemy – Why os-prober Was Disabled
I soon learned that recent Ubuntu versions disable os-prober by default. This is partly to speed up boot and partly to avoid probing unknown partitions automatically, which could theoretically be a security risk.
I re-enabled it in /etc/default/grub:
GRUB_DISABLE_OS_PROBER=false
Then ran:
sudo update-grub
Even after this tweak, Windows still didn't appear in the GRUB menu.
The Manual Attempt – GRUB to the Rescue
Determined, I added a manual GRUB entry in /etc/grub.d/40_custom:
menuentry "Windows" {
insmod part_gpt
insmod fat
insmod chain
search --no-floppy --fs-uuid --set=root 99C1-B96E
chainloader /EFI/Microsoft/Boot/bootmgfw.efi
}
How I found the EFI partition UUID:
sudo blkid | grep EFI
Result:
UUID="99C1-B96E"
Ran sudo update-grub... Windows showed up in GRUB! But clicking it? Nothing.
At this stage, Windows still wouldnât boot. The ghost remained untouchable.
The Missing File – Hunt for bootmgfw.efi
The culprit? bootmgfw.efi itself was gone. My chainloader had nothing to point to.
I mounted the NTFS Windows partition (at /home/amedee/windows) and searched for the missing EFI file:
sudo find /home/amedee/windows/ -type f -name "bootmgfw.efi"
/home/amedee/windows/Windows/Boot/EFI/bootmgfw.efi
The EFI file was hidden away, but thankfully intact. I copied it into the proper EFI directory:
sudo cp /home/amedee/windows/Windows/Boot/EFI/bootmgfw.efi /boot/efi/EFI/Microsoft/Boot/
After a final sudo update-grub, Windows appeared automatically in the GRUB menu. Finally, clicking the entry actually booted Windows. Victory!
Four Years of Sleeping Giants
Booting Windows after four years was like opening a time capsule. I was greeted with thousands of updates, drivers, software installations, and of course, the installation of Fortnite itself. It took hours, but it was worth it. The old system came back to life.
Every "update complete" message was a heartbeat closer to joining Enya and Kyra in the Battle Royale.
The GRUB Disappearance – Enter Ventoy
After celebrating the Windows resurrection, I rebooted... and panic struck.
The GRUB menu had vanished. My system booted straight into Windows, leaving me without access to Linux. How could I escape?
I grabbed my trusty Ventoy USB stick (the same one I had used for performance tests months ago) and booted it in UEFI mode. Once in the live environment, I inspected the boot entries:
sudo efibootmgr -v
Output:
BootCurrent: 0002
Timeout: 1 seconds
BootOrder: 0002,0000,0001
Boot0000* Windows Boot Manager ...
Boot0001* Ubuntu ...
Boot0002* USB Ventoy ...
To restore Ubuntu to the top of the boot order:
sudo efibootmgr -o 0001,0000
Console output:
BootOrder changed from 0002,0000,0001 to 0001,0000
After rebooting, the GRUB menu reappeared, listing both Ubuntu and Windows. I could finally choose my OS again without further fiddling.
A Word on Secure Boot and Signed Kernels
Since we're talking bootloaders: Secure Boot only allows EFI binaries signed with a trusted key to execute. Ubuntu Desktop ships with signed kernels and a signed shim, so it boots fine out of the box. If you build your own kernel or use unsigned modules, you'll either need to sign them yourself or disable Secure Boot in firmware.
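If you want to check whether Secure Boot is actually enforced on your machine, mokutil (shipped with the shim tooling on Ubuntu) does it in one line:

# Prints "SecureBoot enabled" or "SecureBoot disabled"
mokutil --sb-state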
Diagram of the Boot Flow
Here's a visual representation of the boot process after the fix:
flowchart TD
    UEFI["UEFI Firmware BootOrder:<br/>0001 (Ubuntu) →<br/>0000 (Windows)<br/>(BootCurrent: 0001)"]
    subgraph UbuntuEFI["shimx64.efi"]
        GRUB["GRUB menu"]
        LINUX["Ubuntu Linux<br/>kernel + initrd"]
        CHAINLOAD["Windows<br/>bootmgfw.efi"]
    end
    subgraph WindowsEFI["bootmgfw.efi"]
        WBM["Windows Boot Manager"]
        WINOS["Windows 11<br/>(C:)"]
    end
    UEFI --> UbuntuEFI
    GRUB -->|boots| LINUX
    GRUB -.->|chainloads| CHAINLOAD
    UEFI --> WindowsEFI
    WBM -->|boots| WINOS
From the GRUB menu, the Windows entry chainloads bootmgfw.efi, which then points to the Windows Boot Manager, finally booting Windows itself.
First Battle Royale 

After all the technical drama and late-night troubleshooting, I finally joined Enya and Kyra in Fortnite.
I had never played Fortnite before, but my FPS experience (Borderlands hype, anyone?) and PUBG knowledge from Viva La Dirt League on YouTube gave me a fighting chance.
We won our first Battle Royale together! 
The sense of triumph was surreal: after resurrecting a four-year-old Windows partition, surviving driver hell, and finally joining the game, victory felt glorious.
TL;DR: Quick Repair Steps
- Enable os-prober in /etc/default/grub.
- If Windows isn't detected, try a manual GRUB entry.
- If boot fails, copy bootmgfw.efi from the NTFS Windows partition to /boot/efi/EFI/Microsoft/Boot/.
- Run sudo update-grub.
- If GRUB disappears after booting Windows, boot a Live USB (UEFI mode) and adjust efibootmgr to set Ubuntu first.
- Reboot and enjoy both OSes.

This little adventure taught me more about GRUB, UEFI, and EFI files than I ever wanted to know, but it was worth it. Most importantly, I got to join my polycule in a Fortnite victory and prove that even a four-year-old Windows partition can rise again! 

September 30, 2025
This command on a Raspberry Pi 3:
stty -F /dev/ttyACM0 ispeed 9600 ospeed 9600 raw
Resulted in this error:
stty: /dev/ttyACM0: Inappropriate ioctl for device
This happened after the Pi's SD card (with the OS) entered failsafe mode, making it read-only. I used dd to copy the 16GB SD card to a new 128GB one:
dd if=/dev/sdd of=/srv/iso/pi_20250928.iso
dd if=/srv/iso/pi_20250928.iso of=/dev/sdd
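As an aside, the same copy goes faster and is easier to follow with a larger block size and progress reporting (standard GNU dd options):

dd if=/dev/sdd of=/srv/iso/pi_20250928.iso bs=4M status=progress
dd if=/srv/iso/pi_20250928.iso of=/dev/sdd bs=4M status=progress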
The solution was to stop the module, remove the device, and start it again:
# modprobe -r cdc_acm
# rm -rf /dev/ttyACM0
# modprobe cdc_acm
... as the stale device node was probably copied over with dd from the read-only SD card.
Epilogue: the 16GB SD card was ten to fifteen years old. The Pi expanded the new card to 118GB, not the 128GB advertised; relevant XKCD: https://xkcd.com/394/
September 24, 2025
We need to talk.
You and I have been together for a long time. I wrote blog posts, you provided a place to share them. For years that worked. But lately you've been treating my posts like spam – my own blog links! Apparently linking to an external site on my Page is now a cardinal sin unless I pay to "boost" it.
And it's not just Facebook. Threads – another Meta platform – also keeps taking down my blog links.
So this is goodbye... at least for my Facebook Page.
I'm not deleting my personal Profile. I'll still pop in to see what events are coming up, and to look at photos after the balfolk and festivals. But our Page-posting days are over.
Here's why:
- Your algorithm is a slot machine. What used to be "share and be seen" has become "share, pray, and maybe pay." I'd rather drop coins in an actual jukebox than feed a zuckerbot just so friends can see my work.
- Talking into a digital void. Posting to my Page now feels like performing in an empty theatre while an usher whispers "boost post?" The real conversations happen by email, on Mastodon, or – imagine – in real life.
- Privacy, ads, and that creepy feeling. Every login is a reminder that Facebook isn't free. I'm paying with my data to scroll past ads for things I only muttered near my phone. That's not the backdrop I want for my writing.
- The algorithm ate my audience. Remember when following a Page meant seeing its posts? Cute era. Now everything's at the mercy of an opaque feed.
- My house, my rules. I built amedee.be to be my own little corner of the web. No arbitrary takedowns, no algorithmic chokehold, no random "spam" labels. Subscribe by RSS or email and you'll get my posts in the order I publish them – not the order an algorithm thinks you should.
- Better energy elsewhere. Time spent arm-wrestling Facebook is time I could spend writing, playing the nyckelharpa, or dancing a Swedish polska at a balfolk. All of that beats arguing with a zuckerbot.
From now on, if people actually want to read what I write, they'll find me at amedee.be, via RSS, email, or Mastodon. No algorithms, no takedowns, no mystery boxes.
So yes, we'll still bump into each other when I check events or browse photos. But the part where I dutifully feed you my blog posts? That's over.
With zero boosted posts and one very happy nyckelharpa,
Amedee
September 20, 2025

September 18, 2025

On the mystification of the Big Idea
and the negation of experience
Hypermondes festival in Mérignac
I will be signing books this Saturday, September 20 and Sunday, September 21 in Mérignac, as part of the Hypermondes festival. I am also taking part in a round table on Sunday. And to tell you the truth, I have a serious case of stage fright, because I will be surrounded by names that fill my bookshelves and whose books I have read and reread: Pierre Bordage, J.C. Dunyach, Pierre Raufast, Catherine Dufour, Laurent Genefort... Not to mention Schuiten and Peeters, who marked my adolescence, and above all my idol, the greatest comics writer of this century, Alain Ayroles (because for the previous century, it's Goscinny).
In short, I feel very small among these giants, so don't hesitate to come say hello so that I feel less alone at the booth!
The mythology of the idea
In the film Glass Onion (Rian Johnson, 2022), a tech billionaire, a parody of ZuckerMusk, invites friends to his private island for a sort of giant game of Clue. Which of course goes off the rails when a real crime is committed.
What I really liked about this film is its insistence on a point too often forgotten: being rich and/or famous does not make you intelligent. And managing to make the public believe you are superintelligent, to the point of believing it yourself, does not make you actually so.
In short, it is a fine putting-in-their-place of the world of billionaires, influencers, starlets, and everything orbiting around them.
Nevertheless, one particular point bothered me: a whole part of the plot hinges on who first had the idea for the startup that would make the billionaire's fortune, an idea that is literally scribbled on a paper napkin.
It is very funny in the film, but as I have said before: an idea alone is worth nothing!
The idea is only the initial spark of a project; the final result will be shaped by the thousands of decisions and adaptations made along the way.
The role of the architect
If you have never had a house built, you may think it works like this: you describe your dream house to an architect, the architect proposes a plan, you approve it, the engineers and workers take over, and the house gets built.
Except that in reality, you are incapable of describing your dream house. Your intuitions all contradict each other. This is what I call the "single-story house on two floors" syndrome. And even if you have thought things through in depth, the architect will point out a whole series of practical problems with your ideas. Things you had not thought of. All that glass is very pretty, but how will you keep it clean?
You will have to make compromises and decisions. And once the plan is approved, the decisions continue on the building site, because of the unexpected, or the thousands of small problems that did not show up on the plan. Do you really want a sink there, given that the door opens onto it?
In the end, your dream house will be very different from what you imagined. For years you will find flaws in it. But those flaws are compromises that you explicitly chose.
The idea for a novel
As a writer, I regularly get asked: "Where do you get all those ideas?"
As if having the idea were the problem. Ideas? I have hundreds of them in my drawers. The work is not having the idea; it is drawing up the plan, then turning that plan into a building.
I have several times received proposals along the lines of: "I have a great idea for a novel, I'll share it with you, you write it, and we split 50/50."
Can you imagine for a moment walking into an architect's office with a scribble and saying: "I have a great idea for a house, I'll show it to you, you build it, you find a contractor, and we split the proceeds"?
Unlike Printeurs, which I wrote without a prior outline, I wrote Bikepunk with a real structure. I started from an initial idea. I brainstormed with Thierry Crouzet (our exchanges gave birth to the story's famous flash). Then I fleshed out the characters. I wrote a short story set in that universe (creating the character of Dale), then worked on the structure for a month with a corkboard on which I pinned index cards. Only then did I start writing. Many times I ran into inconsistencies and had to make decisions.
The final result looks nothing like what I had imagined. Some key scenes from my synopsis turned out, on rereading, to be mere transitions. Last-minute improvisations, on the contrary, seem to have struck a chord with a whole swath of readers.
Code is just a series of decisions
An idea is only a spark that can potentially spread and mix with others. But to light a fire, the initial source of the spark matters far less than the fuel.
The invention that put this into sharpest relief is surely the computer. Because a computer is, in essence, a machine that does what you ask of it.
Exactly what you ask of it. No more, no less.
Humans were thus confronted with the fact that it is extremely complicated to know what you want. That it is almost impossible to express it without ambiguity. That doing so requires a dedicated language.
A programming language.
And mastering a programming language demands a mind so analytical and rational that a profession was created around it: programmer, coder, developer. The term hardly matters.
But, just like an architect, a programmer must constantly make the decisions she believes are best for the project. For the future. Or she identifies the paradoxes and discusses them with the client. "You asked me for a simple interface with a single button, while also specifying twelve features that must each be directly accessible through a dedicated button. What do we do?" (true story).
De la stupidité de croire en une IA productive
Ce que je dis paraĂźt peut-ĂȘtre Ă©vident, mais lorsque jâentends le nombre de personnes qui parlent de «âŻvibe programmingâŻÂ», je me dis que beaucoup trop de monde a Ă©tĂ© bercĂ© avec le paradigme de «âŻlâidĂ©e magiqueâŻÂ» comme dans Onion Glass.
Les IAs sont perçues comme des machines magiques qui font ce que vous voulez.
Sauf que, quand bien mĂȘme elles seraient parfaites, vous ne savez pas ce que vous voulez.
Les IA ne peuvent pas prendre correctement ces milliers de dĂ©cisions. Des algorithmes statistiques ne peuvent produire que des rĂ©sultats alĂ©atoires. Vous ne pouvez pas juste Ă©mettre votre idĂ©e et voir le rĂ©sultat apparaĂźtre (ce qui est le fantasme des crĂ©tins-managers, cette race dâidiots formĂ©s dans les Ă©coles de management qui est persuadĂ©e que les exĂ©cutants sont une charge dont il faudrait idĂ©alement se passer).
Le fantasme ultime est une machine «âŻintuitiveâŻÂ», quâil ne faut pas apprendre. Mais lâapprentissage nâest pas seulement technique. LâexpĂ©rience humaine est globale. Un architecte va penser aux problĂšmes de la maison de vos rĂȘves parce quâil a dĂ©jĂ suivi vingt chantiers et eu un aperçu des problĂšmes. Chaque nouveau livre dâun Ă©crivain reflĂšte son expĂ©rience avec les prĂ©cĂ©dents. Certaines dĂ©cisions sont les mĂȘmes, dâautres, au contraire, sont diffĂ©rentes pour expĂ©rimenter.
Ne pas vouloir apprendre son outil, câest la dĂ©finition mĂȘme de la stupiditĂ© la plus crasse.
Penser quâune IA pourrait remplacer un dĂ©veloppeur, câest montrer sa totale incompĂ©tence quant au travail du dĂ©veloppeur en question. Penser quâune IA peut Ă©crire un livre ne peut provenir que de gens qui ne lisent pas eux-mĂȘmes, qui ne voient que du papier imprimĂ©.
Ce nâest pas que câest techniquement impossible. Dâailleurs, beaucoup le font parce que vendre du papier imprimĂ©, ça peut ĂȘtre rentable avec le bon marketing, peu importe ce qui est imprimĂ©.
Câest juste que le rĂ©sultat ne pourra jamais ĂȘtre satisfaisant. Tous les compromis, les dĂ©cisions seront le fruit dâun alĂ©a statistique sur lequel vous nâavez aucun contrĂŽle. Les paradoxes ne seront pas rĂ©solus. Bref, câest et ce sera toujours de la merde.
A business is much more than an idea!
Facebook was not the first attempt at a social network for students. Amazon was not the first online bookstore. WhatsApp was an app for showing your friends whether you were available for a phone call. Instagram was originally for sharing your location. Microsoft had never developed an operating system when they sold the DOS license to IBM.
In short, the initial idea is worth nothing. What made these companies successful are the billions of decisions made at every moment, the constant readjustments.
Making those decisions is what builds success, be it commercial, artistic, or personal. To believe that a computer could make those decisions in your place is not only naive; it also thoroughly proves your incompetence in the field in question.
In Glass Onion, this point particularly bothered me, because it is pushed to absurdity. As if a napkin with three pencil strokes could be worth billions.
That little something extra
And if I am delighted to spend time with so many authors I admire in Mérignac, it is not to exchange ideas, but to soak up their experience, the personalities that lead them to build works I admire.
I must have reread the whole of De cape et de crocs, the masterpiece by Masbou and Ayroles, dozens and dozens of times.
With each rereading, I savor every panel. I can feel the authors having fun, letting themselves be carried away by their characters into seemingly unplanned, improbable digressions. What AI would think of having the clucking of a henhouse complete an alexandrine? What algorithm would strut about the caesura at the hemistich?
Humans and their experience will always have something extra, something indefinable whose name escapes me.
Ah, yes...
Something that, without a crease, without a stain, humans carry with them in spite of themselves...
And that is...
– That is?
Their panache!
I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!
Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.
September 17, 2025

Amedee Van Gasse
Smarter CI with GitHub Actions Cache for Ansible (aka "Stop Downloading the Internet")
Mood: Slightly annoyed at CI pipelines
CI runs shouldn't feel like molasses. Here's how I got Ansible to stop downloading the internet. You're welcome.
Let's get one thing straight: nobody likes waiting on CI.
Not you. Not me. Not even the coffee you brewed while waiting for Galaxy roles to install – again.
So I said "nope" and made it snappy. Enter: GitHub Actions Cache + Ansible + a generous helping of grit and retries.
Why cache your Ansible Galaxy installs?
Because time is money, and your CI shouldn't feel like it's stuck in dial-up hell.
If you've ever screamed internally watching community.general get re-downloaded for the 73rd time this month – same, buddy, same.
The fix? Cache that madness. Save your roles and collections once, and reuse like a boss.
The basics: caching 101
Here's the money snippet, shown as a full actions/cache step (assuming actions/cache@v4; note the input is spelled restore-keys):

- uses: actions/cache@v4
  with:
    path: .ansible/
    key: ansible-deps-${{ hashFiles('requirements.yml') }}
    restore-keys: |
      ansible-deps-

Translation:
- Store everything Ansible installs in .ansible/
- The cache key changes when requirements.yml changes – nice and deterministic
- If the exact match doesn't exist, fall back to the latest vaguely-similar key
Result? Fast pipelines. Happy devs. Fewer rage-tweets.
Retry like you mean it
Let's face it: ansible-galaxy has... moods.
Sometimes the Galaxy API is down. Sometimes it's just bored. So instead of throwing a tantrum, I taught it patience:
for i in {1..5}; do
  if ansible-galaxy install -vv -r requirements.yml; then
    break
  else
    echo "Galaxy is being dramatic. Retrying in $((i * 10)) seconds..." >&2
    sleep $((i * 10))
  fi
done
That's five retries, with increasing delays.
"You good now, Galaxy? You sure? Because I've got YAML to lint."
The catch (a.k.a. cache wars)
Here's where things get spicy:
actions/cache only saves when a job finishes successfully.
So if two jobs try to save the exact same cache at the same time?
Boom. Collision. One wins. The other walks away salty:
Unable to reserve cache with key ansible-deps-...,
another job may be creating this cache.
Rude.
Fix: preload the cache in a separate job
The solution is elegant:
A warm-up job: one that only does the Galaxy installs and saves the cache. All your other jobs just consume it. Zero drama. Maximum speed. A sketch of what that looks like follows.
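A minimal workflow sketch of the idea, assuming actions/cache@v4 and hypothetical job names (warm-cache, lint); the restore-only flavor actions/cache/restore keeps consumer jobs from ever trying to save:

jobs:
  warm-cache:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          path: .ansible/
          key: ansible-deps-${{ hashFiles('requirements.yml') }}
      - run: ansible-galaxy install -r requirements.yml

  lint:
    needs: warm-cache   # consumers run only after the cache exists
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache/restore@v4   # restore-only: no save, no collisions
        with:
          path: .ansible/
          key: ansible-deps-${{ hashFiles('requirements.yml') }}
      - run: ansible-lint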
Tempted to symlink instead of copy?
Yeah, I thought about it too.
"But what if we symlink .ansible/ and skip the copy?"
Nah. Not worth the brainpower. Just cache the thing directly.
It works.
It's clean.
You sleep better.
Pro tips
- Use the hash of requirements.yml as your cache key. Trust me.
- Add a fallback prefix like ansible-deps- so you're never left cold.
- Don't overthink it. Let the cache work for you, not the other way around.
TL;DR
- GitHub Actions cache = fast pipelines
- Smart keys based on requirements.yml = consistency
- Retry loops = less flakiness
- Preload job = no more cache collisions
- Re-downloading Galaxy junk every time = madness
Go forth and cache like a pro.
Got better tricks? Hit me up on Mastodon and show me your CI magic.
And remember: Friends don't let friends wait on Galaxy.
Peace, love, and fewer ansible-galaxy downloads.
September 14, 2025

Lookat 2.1.0 is the latest stable release of Lookat/Bekijk, a user-friendly Unix file browser/viewer that supports colored man pages.
The focus of the 2.1.0 release is to add ANSI Color support.
News
14 Sep 2025 Lookat 2.1.0 Released
Lookat / Bekijk 2.1.0rc2 has been released as Lookat / Bekijk 2.1.0.
3 Aug 2025 Lookat 2.1.0rc2 Released
Lookat 2.1.0rc2 is the second release candidate of Lookat 2.1.0.
ChangeLog
Lookat / Bekijk 2.1.0rc2
- Corrected italic color
- Don't reset the search offset when cursor mode is enabled
- Renamed strsize to charsize (ansi_strsize -> ansi_charsize, utf8_strsize -> utf8_charsize) to be less confusing
- Support for multiple ansi streams in ansi_utf8_strlen()
- Update default color theme to green for this release
- Update manpages & documentation
- Reorganized contrib directory
- Moved ci/cd related file from contrib/* to contrib/cicd
- Moved debian dir to contrib/dist
- Moved support script to contrib/scripts
Lookat 2.1.0 is available at:
- https://www.wagemakers.be/english/programs/lookat/
- Download it directly from https://download-mirror.savannah.gnu.org/releases/lookat/
- Or at the Git repository at GNU savannah https://git.savannah.gnu.org/cgit/lookat.git/
Have fun!
September 11, 2025
I once read a blurb about the benefits of bureaucracy, and how it is intended to resist political influences, autocratic leadership, priority-of-the-day decision-making, siloed views, and more things that we generally see as "Bad Things™". I'm sad that I can't recall where it was, but its message was similar to what The Benefits Of Bureaucracy: How I Learned To Stop Worrying And Love Red Tape by Rita McGrath presents. When I read it, I was strangely supportive of the message, because I am very much confronted with, and perhaps also often the cause of, bureaucracy and governance-related deliverables in the company that I work for.
Bureaucracy and (hyper)governance
Bureaucracy, or governance in general, often leaves a bad taste in the mouth of whoever dares to speak about it, though. And I fully agree that hypergovernance or hyperbureaucracy puts too much burden on the organization. The benefits will no longer be visible, and people's creativity and innovation will be stifled.
Hypergovernance is a bad thing indeed, and it often comes up in the news. Companies loathe the so-called overregulation by the European Union, for instance, banding together in action groups to ask for deregulation. A recent topic here was Europe's attempt to move towards a more sustainable environment, given the lack of attention paid to sustainability by the various industries and governments. The premise for regulating this was the observation that merely guiding and asking doesn't work: sustainability is a long-term goal, yet most industries and governments focus on short-term benefits.
The need to simplify regulation, and the reaction to the bureaucracy needed to align with Europe's reporting expectations, triggered an update by the European Commission in a simplification package it calls the Omnibus package.
I think that is the right way forward, not just for this particular case (I don't know enough about ESG to be any useful resource on that), but also within regulated industries and companies where bureaucracy is considered to dampen progress and efficiency. Simplification and optimization are key here, not just tearing things down. In the Capability Maturity Model, a process is considered efficient if it includes deliberate process optimization and improvement. So why not deliberately optimize and improve? Evaluate and steer?
Benefits of bureaucracy
It would be bad if bureaucracy itself were considered a negative trait of any organization. Many of the benefits of bureaucracy I fully endorse myself.
Standardization, where procedures and policies are created to ensure consistency in operations and decision-making. Without standardization, you gain inefficiencies, not benefits. If a process is considered too daunting, standardization might be key to improve on it.
Accountability, where it is made clear who does what. Holding people or teams accountable is not a judgement, but a balance of expectations and responsibilities. If handled positively, accountability is also an expression of expertise, endorsement for what you are or can do.
Risk management, which is coincidentally the most active one in my domain (the Digital Operational Resilience Act has a very strong focus on risk management), has a primary focus on reducing the likelihood of misconduct and errors. Regulatory requirements and internal controls are not the goal, but a method.
Efficiency, by streamlining processes through established protocols and procedures. Sure, new approaches and tools come along, but after the two-hundredth request to do or set up something, only to realize it still takes 50 man-days... well, perhaps you should focus on streamlining the process, and introduce some bureaucracy to help yourself out.
Transparency, promoting clear communication and documentation, as well as insights into why something is done. This improves trust among the teams and people.
In a world where despotic leadership exists, you will find that a well-working bureaucracy can be an inhibitor of overly aggressive change. That can frustrate the would-be autocrat (if they were truly an autocrat, there would be no bureaucracy), but with the right support, it can indicate and motivate why this resistance exists. If the change is for the good, well, bureaucracy even has procedures for change.
Bureaucracy also prevents islands and isolated decision-making. People demanding all the budgets for themselves because they find that their ideas are the only ones worth working on (every company has these) will also find that the bureaucracy is there to balance budgets, allocating resources to the projects that benefit the company as a whole, and not just the four people you just onboarded into your team and gave MacBooks...
Bureaucracy isn't bad, and some people prefer having strict rules or procedures. Resisting change is human behavior, but promoting anarchy is not the way forward either. Instead, nurture a culture of continuous improvement: be able to point out when things overreach, and learn about the reasoning and motivation that others bring up. Those in favor of bureaucracy will see this as an increase in maturity, and those affected by over-regulation will see it as an improvement.
We can all strive to remain in a bureaucracy and be happy with it.
Feedback? Comments? Don't hesitate to get in touch on Mastodon.
September 10, 2025
There comes a time in every developer's life when you just know a certain commit existed. You remember its hash: deadbeef1234. You remember what it did. You know it was important. And yet, when you go looking for it...
fatal: unable to read tree <deadbeef1234>
Great. Git has ghosted you.
That was me today. All I had was a lonely commit hash. The branch that once pointed to it? Deleted. The local clone that once had it? Gone in a heroic but ill-fated attempt to save disk space. And GitHub? Pretending like it never happened. Typical.
Act I: The Naïve Clone
"Let's just clone the repo and check out the commit," I thought. Spoiler alert: that's not how Git works.
git clone --no-checkout https://github.com/user/repo.git
cd repo
git fetch --all
git checkout deadbeef1234
fatal: unable to read tree 'deadbeef1234'
Thanks, Git. Very cool. Apparently, if no ref points to a commit, GitHub doesn't hand it out with the rest of the toys. It's like showing up to a party and being told your friend never existed.
Act II: The Desperate fsck
Surely it's still in there somewhere? Let's dig through the guts.
git fsck --full --unreachable
Nope. Nothing but the digital equivalent of lint and old bubblegum wrappers.
Act III: The Final Trick
Then I stumbled across a lesser-known Git dark art:
git fetch origin deadbeef1234
And lo and behold, GitHub replied with a shrug and handed it over like, “Oh, that commit? Why didn’t you just say so?”
Suddenly the commit was in my local repo, fresh as ever, ready to be inspected, praised, and perhaps even resurrected into a new branch:
git checkout -b zombie-branch deadbeef1234
Mission accomplished. The dead walk again.
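If you want to double-check that the object really made it into your local repository before doing anything else, standard Git plumbing has you covered:

git cat-file -t deadbeef1234   # should print "commit" if the object is present
git show --stat deadbeef1234   # inspect what the commit actually touched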
Moral of the Story
If you’re ever trying to recover a commit from a deleted branch on GitHub:
- Cloning alone won’t save you.
- git fetch origin <commit> is your secret weapon.
- If GitHub has completely deleted the commit from its history, you’re out of luck unless:
- You have an old local clone
- Someone forked the repo and kept it
- CI logs or PR diffs include your precious bits
Otherwise, it’s digital dust.
Bonus Tip
Once you’ve resurrected that commit, create a branch immediately. Unreferenced commits are Git’s version of vampires: they disappear without a trace when left in the shadows.
git checkout -b safe-now deadbeef1234
And there you have it. One undead commit, safely reanimated.
September 09, 2025
Last month, my son Axl turned eighteen. He also graduated high school and will leave for university in just a few weeks. It is a big milestone and we decided to mark it with a three-day backpacking trip in the French Alps near Lake Annecy.
We planned a loop that would start and end at Col de la Forclaz, located at 1,150 meters (3,770 feet). From there the trail would climb through forests and alpine pastures, eventually taking us to the summit of La Tournette at 2,351 meters (7,713 feet). It is the highest peak around Lake Annecy and one of the most iconic hikes in the region.
A topographical map of our hike to La Tournette, with each of the three hiking days highlighted in a different color.
Over the past months we studied maps, gathered supplies, and prepared for the hike. My own training started only three weeks before, just enough to feel unprepared in a more organized way. I took up jogging and did stairs with my pack on, though I should have also practiced sleeping on uneven ground.
Flying to Annecy
Loading our packs into the plane before departure.
In the cockpit of Ben's Cessna en route to Annecy.
As we approached Annecy and prepared to land, La Tournette (left) rose above the other peaks, showing the climb that lay ahead.
Axl thought the trip would begin with a nine-hour drive south to Annecy. What he didn't know was that one of my best friends, Ben, who owns a Cessna, would be flying us instead. It felt like the perfect surprise to begin our trip.
Two hours later we landed in Annecy. After a quick lunch by the lake, we taxied to Col de la Forclaz, famous for paragliding and its views over Lake Annecy.
Paragliders circling above Lake Annecy.
Day 1: Col de la Forclaz → Refuge de la Tournette
Our trail started near the paragliding launch point at Col de la Forclaz. From there it climbed steadily through the forest. It was quiet and enclosed, with only brief glimpses of the valley through the branches.
We paused to add a stone to a cairn along the trail.
Eventually we emerged above the tree line near Chalet de l'Aulp. In September, the chalet is closed during the week, but we sat on a hill nearby, ate the rice bars Vanessa had prepared, and simply enjoyed the moment.
After our break the trail grew steeper, winding through alpine meadows and rocky slopes. The valley stretched out far below us while big mountains rose ahead.
The view on our way up to La Tournette, just past Chalet de l'Aulp.
After a couple of hours we reached the site of the old Refuge de la Tournette, which has been closed for years, and found a flat patch of grass nearby for our first campsite.
As we sat down, we had time for deeper conversations. We talked about how life will change now that he is eighteen and about to leave for university. I told him our hike felt symbolic. The climb up was like the years behind us, when his parents, myself included, made most of the important decisions in his life. The way down would be different. From here he would choose his own path in life, and my role would be to walk beside him and support him.
Saying it out loud caught me by surprise. It left me with a lump in my throat and tears in my eyes. It felt as if we would leave one version of Axl at the summit and return with another, stepping into adulthood and independence.
Axl reflecting at the end of our first day on the trail.
Dinner was a freeze-dried chili con carne that turned out to be surprisingly good. Just as we were eating, a large flock of sheep came streaming down the hillside, their bells ringing. Fortunately they kept moving, or sleep would have been impossible.
After dinner we set up our tent and took stock of our water supply. We had carried five liters of water each, knowing we could only refill at tomorrow's refuge. By now we had already used three liters for the climb and dinner. I could have drunk more, but we set two liters aside for the morning and felt good about that plan.
Our tent pitched in the high meadows near the old Refuge de la Tournette.
Later, as we settled into our sleeping bags, I realized how many firsts Axl had packed into a single day. He had taken his first private flight in a small Cessna, carried a heavy backpack up a mountain for the first time, figured out food and water supplies, and experienced wild camping for the first time.
Day 2: Refuge de la Tournette → Refuge de Praz D'zeures
Sleeping was difficult. The ground that had looked flat in daylight was uneven, and my foldout mat offered little comfort. I doubt I slept more than two hours. By 4:30 the rain had started, and I began to worry about how slippery the trail might become.
When we unzipped the tent around 6:00 in the morning, a "bouquetin" (ibex) with one horn broken off stood quietly in the gray light, watching us without fear. Its calm presence was a gentle contrast to my own concerns.
When we opened the tent in the morning, a bouquetin (ibex) was standing right outside, watching us.
By 7:00 a shepherd appeared out of nowhere while we were making coffee. He told us wild camping here was not permitted. He spoke with quiet firmness. We were startled and embarrassed, having missed the sign lower down the trail. We apologized, promised to leave no trace, and packed up to be on our way.
Before leaving he shared the forecast: rain by early afternoon, thunderstorms by 16:00. You do not want to be on a summit when lightning rolls in. With that in mind we set off quickly, pushing harder than usual. Unlike the sunny day before, when the path was busy with hikers, the mountain felt empty. It was just the two of us moving upward in the quiet.
Shortly before the final section to the summit it began to rain again. Axl put on his gloves as the drops felt icy. The rain passed quickly, but we knew more was coming.
The final stretch to La Tournette was steep, with chains and metal ladders to help the climb. We scrambled with hands and feet, and with our heavy packs it was nerve-racking at times.
Reaching the top was unforgettable. At the summit we climbed onto the Fauteuil, a massive block of rock shaped like an armchair. From there the Alps spread out in every direction. In the far distance, the snow-covered summit of Mont Blanc caught the light. Below us, Lake Annecy shimmered in the valley, while dark rain clouds gathered not too far away.
At the summit of La Tournette.
We gave each other a big hug, a quiet celebration of the climb and the moment. We stayed only a short while, knowing the weather was turning.
From the summit we set our course for the Refuge de Praz D'zeures, a mountain hut about two hours' hike away. As we descended, the landscape softened into wide alpine pastures where bouquetin grazed among the grass and flowers.
The ridge from La Tournette toward Refuge de Praz D'zeures. If you look closely, the trail winds all the way into the woods.
Pushing to stay ahead of the rain and thunderstorms in the forecast, we reached the refuge (1,774 meters) around 14:30. The early start seemed like a wise choice, but our hearts sank when we saw the sign on the door: "fermé" (closed).
For a moment we feared we had nowhere to stay. Then we noticed another sign off to the side. It took us a minute to decipher, since it was written in French, but eventually we understood: in September the refuge only opens from Thursday evening through Sunday night. Some relief washed over us. It was indeed Thursday, and most likely the hosts or guardians of the hut had not yet arrived.
We decided to wait and settled onto the wooden deck outside. Our water supplies were gone, but Axl found an outdoor kitchen sink with a faucet, fed by a nearby stream, and filled a couple of our empty water bottles. I drank one almost in a single go.
The outdoor sink at Refuge de Praz D'zeures where Axl refilled our empty water bottles.
To pass the time we opened the small travel chess set I had received for Christmas. The game gave us something to focus on as the clouds thickened around the hut. A damp chill settled in, and we pulled on extra layers while we leaned over the tiny board.
Sure enough, a little later the hosts of the refuge appeared. A couple with three children and two dogs. They came up the trail carrying heavy containers of food, bread, water, and other supplies on their backs, all the way from a small village more than an hour's hike below.
Almost immediately they warned us that the water at the refuge was not potable. I tried not to dwell on the bottle I had just finished, but luckily Axl had used his water filter when filling our bottles. We would find out soon enough how well it worked. (Spoiler: we survived!)
The rustic kitchen at Refuge de Praz D'zeures. I loved the moka pots and pans in the background.
The refuge itself was simple but full of character. There is no hot water, no heating, and the bunks are basic, yet the young family who runs it brings life and warmth to the place. My French was as rusty as their English, but between us we managed.
It was so cold inside the hut that we retreated into our sleeping bags for warmth. Then, as the rain began to pour, they lit a wood stove in the main room. Through the window we watched the clouds wrap around the refuge while water streamed off the roof in steady sheets. We felt grateful not to be out in our tent that night or still on the trail.
That night we had a local specialty: raclette au charbon de bois, cheese cooked on a small charcoal grill in the middle of the table. I have had raclette many times before, but never like this. The smell of melting cheese filled the room as we poured it over potatoes and tore into fresh bread. After two days of freeze-dried meals it felt like a feast. I loved it, especially because I knew Vanessa would never let me use charcoal in our living room.
After dinner we went straight to bed, exhausted from the long day and the lack of sleep the night before. Even though the bunks were hard, we slept deeply and well.
Day 3: Refuge de Praz D'zeures → La Forclaz
Morning began with breakfast in the hut. A simple bowl of cereal with milk tasted unusually good after two days of powdered meals. Two other guests joined us at the breakfast table and teased me that they could hear me snore through the thin wooden walls. Axl just grinned. Apparently my reputation had spread faster than I thought.
The day's hike covered about 8.5 kilometers (5.3 miles), starting with a 350-meter (1,150-foot) climb to a ridge, followed by a long descent of nearly 1,000 meters (3,280 feet). The storms from the night before had left everything soaked, so even the flat stretches of trail were slippery and slow.
Reaching the ridge made the effort worthwhile. On one side the clouds pressed in so tightly there was no visibility at all. On the other side the valley opened wide, green and bright in the morning light. It felt like standing with one foot in the fog and the other in the sun.
We walked along the ridge and talked, the kind of conversations that only seem to happen when you spend hours on the trail together. At one point we stopped to look for fossils and, to our delight, actually found one.
By the time we reached the village of Montmin it felt like stepping back into civilization. We left the trail for quiet roads, ate lunch in the sun near the church, and rinsed our muddy boots and rain gear at a faucet. From there it was only a short walk back to La Forclaz, where three days earlier we had set off with heavy packs and full of anticipation.
Axl rinsing the mud off his rain pants at a fountain in Montmin.
We had returned tired, lighter, and grateful for the experience and for each other. To celebrate we ordered crêpes in a tiny café and ate them in the sun, looking up at the peaks we had just conquered. Later that afternoon, we swam in Lake Annecy in our underwear and finished the day with a cold beer.
The next morning we caught a taxi to Geneva airport, flew home, and by afternoon we were back in Belgium, welcomed with Vanessa's shepherd's pie. In just a few weeks Axl will leave for university and his room will be empty, but that evening we were all together, sharing stories from our trip over dinner.
Axl standing on a dock at Lake Annecy.
It had been a short trip, just three days in the mountains. Hard at times, but very manageable for a not-super-fit forty-six-year-old. Some journeys are measured less by distance, duration, or difficulty than by the transitions they mark. This one gave us what we needed: time together before the changes ahead, and a memory we will carry for the rest of our lives. On the trail Axl often walked ahead of me. In life, I will have to learn to walk beside him.

In search of lost humanity…
Death, or the return of lucidity
Drmollytov is a librarian who was very seriously injured in a motorcycle accident, an accident in which her husband lost his life.
After a period of convalescence and mourning, she is trying to rebuild her life and gradually becomes aware of the consumerist frenzy that every “normal” person is caught up in: from the urge to go shopping, to social media, to streaming subscriptions.
The more she declutters her life, the more time and energy she recovers. And the less she feels the need to “be seen”. It must be said that posting only on Gemini doesn’t exactly help with visibility!
- view from the present (drmollytov.smol.pub)
- A reminder for those who don’t know how to open Gemini links like the one above: Gemini, le protocole du slow web (ploum.net)
One sentence from her latest post on Gemini struck me: “My only regret with social media is having been on it at all. It drained my mental energy, devoured my time, and I’m certain it extracted a part of my soul.” (very loose translation).
I realize that freeing yourself from the addiction mechanisms of social media is not enough. You also have to become aware of how much it has deformed, destroyed, and denatured our thoughts, our social relationships, our motivations. Worse: it makes us objectively stupid! Since 2010, the average IQ has been declining, something that had never happened since the invention of IQ (whatever you may think of that tool).
Social media is intrinsically tied to the smartphone. It truly exploded once it was optimized for passive consumption on a small touchscreen (something Zuckerberg did not believe in at all). Conversely, social media addiction created a continuous demand for ever-shinier smartphones taking pictures ever more likely to generate likes.
As Jose Briones points out, these permanent interactions on a smooth plate of glass, earbuds screwed into our ears, make us lose our sense of touch, of materiality.
Beyond our addiction to colorful numbers
When you have been an addict, when you truly believed in it, when you invested an enormous part of yourself in it, quitting smoking is not enough to be healthy. Quitting is only the first, necessary and indispensable step. A long road remains to rebuild yourself afterwards, to find again the human being who was wounded and buried.
The human being that, in the end, few people have any interest in you rediscovering, but who is there, buried under incessant notifications, under the compulsive checking of your likes, your statistics, your followers. For Jose Briones, it took more than three years without a smartphone before his anxiety calmed down… his anxiety about not having a smartphone!
For years now, I have had no statistics on this blog’s traffic. Because it is not very ethical but, above all, because it was driving me mad, because my mental health was suffering enormously. Because I could no longer be satisfied with my writing other than through the number of readers it brought me. Because simple statistics were destroying my soul.
As the blog This day’s portion puts it, you don’t need statistics!
And while the Mastodon network is very far from perfect, it is fairly easy to find an instance there that does not spy on you. Most of them don’t, in fact. Unlike Bluesky, which tracks all your interactions through the company Statsig. Statsig, which has just been acquired by OpenAI, the creator of ChatGPT.
It looks like Sam Altman is trying to do what Musk did and gain political influence by taking over centralized social networks. We all mocked Musk, but we have to admit it worked very well. And, as I said back in 2023, it is only a matter of time before the same happens to Bluesky, which is not decentralized at all, whatever its marketing keeps repeating.
But the worst part of all these statistics, all this data, is that we are the first to want to collect them, to sell ourselves to optimize them and to consult them as pretty colorful graphs displayed on our smooth, shiny plates of glass.
The inhumanity of a world selling itself
I keep repeating what Thierry Crouzet articulates so well in his latest article: the marketers have imposed their vision of the world, forcing artists, intellectuals, and scientists to become salespeople, which is the antithesis of their deepest nature. For artists, intellectuals, and scientists share a quest, perhaps an illusory one, for truth, for the absolute. Whereas marketing is, by definition, the art of lying, of deception, of appearances, and of exploiting the human.
This mental destruction taught in business schools is also at work with AI. It will take years for people addicted to AI to recover, if all goes well, their human soul, their capacity for autonomous reasoning. Developers who depend on GitHub are on the front line.
- Github, l'IA et les fours à micro-ondes (mart-e.be)
- We need to talk about your Github addiction (ploum.net)
Here’s hoping we can become human again without resorting to the extreme solution described by Thierry Bayoud and Léa Deneuville in the excellent short story “Chronique d’un crevard”, part of the Recueil de Nakamoto, which I warmly recommend and which is presented here by Ysabeau. And yes, the stories are under a free license.
- Note de lecture le Recueil de Nakamoto (nouvelles) (ysabeau)
- Le recueil de Nakamoto (pvh-editions.com)
The idea behind Chronique d’un crevard reminded me of my own novel Printeurs. It would be lovely to see the two universes meet in one way or another. Because that is the whole beauty of free artistic creation.
Art as a survival instinct
Of artistic creation in general, I should say, up until the moment corporate lawyers managed to convince artists that they had to be maniacs about “protecting their intellectual property”, which turned them into victims of the most formidable scam of recent decades. Just like the marketers, corporate lawyers are, in essence, people who will exploit you. It’s their job!
Only the technique changes: marketers lie and promise you happiness, lawyers threaten and corrupt. Both seek only to increase their employer’s profit. Both have managed to convince artists to switch off their humanity and become marketers and lawyers themselves, to do “personal branding” and “intellectual property”.
On that subject, Cory Doctorow explains very well the illusion on which the famous “intellectual property” was built, and how well those who devised it knew it was a planet-wide scam, an attempt to turn the whole world into a colony of the United States.
Marketers and lawyers have managed to convince artists to hate the very foundation of their craft: their audience, renamed “pirates” as soon as they step outside the crowd-control barriers of surveillance corporatism. They have managed to convince scientists to hate the very foundation of their craft: the unrestricted sharing of knowledge. The fact that the largest scientific database in the world, Sci-hub, is considered pirate and banned everywhere in the world tells you everything you need to know about our society.
Surveillance capitalism rots everything it touches. First by making life difficult for dissenters (which is fair game) but, above all, by buying and corrupting the rebels who succeed despite everything. Once a millionaire, the anti-establishment artist becomes the first supporter of that very establishment and adapts his slogan: “Be rebels, but not too much, buy my merchandise!”
Just as surrealism was the artistic and intellectual response to fascism, art alone can save our humanity. A raw, tactile, sensory art. But, above all, a free art that is shared, that spreads, and that tells the very notion of virtual property to get lost.
An art that is shared, but that also compels its audience to share, to rediscover the essence of our humanity: sharing.
- Illustration photo taken by Diegohnxiv, showing a mural in honor of SciHub at the university of Mexico City
- Printeurs and Le recueil de Nakamoto can be ordered from your favorite independent bookstore! The ebooks are on libgen, at least for Printeurs. I know, because I uploaded it myself.
I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookstore)!
Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.
September 03, 2025
It has been far too long since I celebrated the most beautiful month in the world with a lovely song. Hence this September-tinged “August Moon”, performed live by Jacob Alon: Watch this video on YouTube.
At some point, I had to admit it: I’ve turned GitHub Issues into a glorified chart gallery.
Let me explain.
Over on my amedee/ansible-servers repository, I have a workflow called workflow-metrics.yml, which runs after every pipeline. It uses yykamei/github-workflows-metrics to generate beautiful charts that show how long my CI pipeline takes to run. Those charts are then posted into a GitHub Issue, one per run.
It’s neat. It’s visual. It’s entirely unnecessary to keep them forever.
The thing is: every time the workflow runs, it creates a new issue and closes the old one. So naturally, I end up with a long, trailing graveyard of “CI Metrics” issues that serve no purpose once they’re a few weeks old.
Cue the digital broom. 🧹
Enter cleanup-closed-issues.yml
To avoid hoarding useless closed issues like some kind of GitHub raccoon, I created a scheduled workflow that runs every Monday at 3:00 AM UTC and deletes the cruft:
schedule:
  - cron: '0 3 * * 1' # Every Monday at 03:00 UTC
This workflow:
- Keeps at least 6 closed issues (just in case I want to peek at recent metrics).
- Keeps issues that were closed less than 30 days ago.
- Deletes everything else, quietly, efficiently, and without breaking a sweat.
It’s also configurable when triggered manually, with inputs for dry_run, days_to_keep, and min_issues_to_keep. So I can preview deletions before committing them, or tweak the retention period as needed.
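For example, a dry run can be kicked off from the command line with the GitHub CLI (assuming the workflow file is called cleanup-closed-issues.yml, as above):

gh workflow run cleanup-closed-issues.yml \
  -f dry_run=true \
  -f days_to_keep=30 \
  -f min_issues_to_keep=6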
📜 Complete Source Code for the Cleanup Workflow
name: 🧹 Cleanup Closed Issues

on:
  schedule:
    - cron: '0 3 * * 1' # Runs every Monday at 03:00 UTC
  workflow_dispatch:
    inputs:
      dry_run:
        description: "Enable dry run mode (preview deletions, no actual delete)"
        required: false
        default: "false"
        type: choice
        options:
          - "true"
          - "false"
      days_to_keep:
        description: "Number of days to retain closed issues"
        required: false
        default: "30"
        type: string
      min_issues_to_keep:
        description: "Minimum number of closed issues to keep"
        required: false
        default: "6"
        type: string

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

permissions:
  issues: write

jobs:
  cleanup:
    runs-on: ubuntu-latest
    steps:
      - name: Install GitHub CLI
        run: sudo apt-get install --yes gh

      - name: Delete old closed issues
        env:
          GH_TOKEN: ${{ secrets.GH_FINEGRAINED_PAT }}
          DRY_RUN: ${{ github.event.inputs.dry_run || 'false' }}
          DAYS_TO_KEEP: ${{ github.event.inputs.days_to_keep || '30' }}
          MIN_ISSUES_TO_KEEP: ${{ github.event.inputs.min_issues_to_keep || '6' }}
          REPO: ${{ github.repository }}
        run: |
          NOW=$(date -u +%s)
          THRESHOLD_DATE=$(date -u -d "${DAYS_TO_KEEP} days ago" +%s)
          echo "Only consider issues older than ${THRESHOLD_DATE}"

          echo "::group::Checking GitHub API Rate Limits..."
          RATE_LIMIT=$(gh api /rate_limit --jq '.rate.remaining')
          echo "Remaining API requests: ${RATE_LIMIT}"
          if [[ "${RATE_LIMIT}" -lt 10 ]]; then
            echo "⚠️ Low API limit detected. Sleeping for a while..."
            sleep 60
          fi
          echo "::endgroup::"

          echo "Fetching ALL closed issues from ${REPO}..."
          CLOSED_ISSUES=$(gh issue list --repo "${REPO}" --state closed --limit 1000 --json number,closedAt)

          if [ "${CLOSED_ISSUES}" = "[]" ]; then
            echo "✅ No closed issues found. Exiting."
            exit 0
          fi

          ISSUES_TO_DELETE=$(echo "${CLOSED_ISSUES}" | jq -r \
            --argjson now "${NOW}" \
            --argjson limit "${MIN_ISSUES_TO_KEEP}" \
            --argjson threshold "${THRESHOLD_DATE}" '
              .[:-(if length < $limit then 0 else $limit end)]
              | map(select(
                  (.closedAt | type == "string") and
                  ((.closedAt | fromdateiso8601) < $threshold)
                ))
              | .[].number
            ' || echo "")

          if [ -z "${ISSUES_TO_DELETE}" ]; then
            echo "✅ No issues to delete. Exiting."
            exit 0
          fi

          echo "::group::Issues to delete:"
          echo "${ISSUES_TO_DELETE}"
          echo "::endgroup::"

          if [ "${DRY_RUN}" = "true" ]; then
            echo "🔍 DRY RUN ENABLED: Issues will NOT be deleted."
            exit 0
          fi

          echo "⏳ Deleting issues..."
          echo "${ISSUES_TO_DELETE}" \
            | xargs -I {} -P 5 gh issue delete "{}" --repo "${REPO}" --yes

          DELETED_COUNT=$(echo "${ISSUES_TO_DELETE}" | wc -l)
          REMAINING_ISSUES=$(gh issue list --repo "${REPO}" --state closed --limit 100 | wc -l)

          echo "::group::✅ Issue cleanup completed!"
          echo "📊 Deleted Issues: ${DELETED_COUNT}"
          echo "📊 Remaining Closed Issues: ${REMAINING_ISSUES}"
          echo "::endgroup::"

          {
            echo "### 🗑️ GitHub Issue Cleanup Summary"
            echo "- **Deleted Issues**: ${DELETED_COUNT}"
            echo "- **Remaining Closed Issues**: ${REMAINING_ISSUES}"
          } >> "$GITHUB_STEP_SUMMARY"
🛠️ Technical Design Choices Behind the Cleanup Workflow
Cleaning up old GitHub issues may seem trivial, but doing it well requires a few careful decisions. Here’s why I built the workflow the way I did:
Why GitHub CLI (gh)?
While I could have used raw REST API calls or GraphQL, the GitHub CLI (gh) provides a nice balance of power and simplicity:
- It handles authentication and pagination under the hood.
- It supports JSON output and filtering directly with --json and --jq.
- It provides convenient commands like gh issue list and gh issue delete that make the script readable.
- It comes pre-installed on GitHub runners, or can be installed easily.
Example fetching closed issues:
gh issue list --repo "$REPO" --state closed --limit 1000 --json number,closedAt
No messy headers or tokens, just straightforward commands.
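For contrast, here is roughly what the same query looks like against the raw REST API (a sketch, with REPO set as in the workflow). Note that, unlike gh issue list, the REST issues endpoint also returns pull requests, which is why they must be filtered out via their pull_request field:

# REST endpoint mixes PRs into the results; filter them out ourselves
gh api "repos/${REPO}/issues?state=closed&per_page=100" \
  --jq '.[] | select(.pull_request | not) | .number'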
Filtering with jq
I use jq to:
- Retain a minimum number of closed issues (min_issues_to_keep).
- Keep issues closed more recently than the retention period (days_to_keep).
- Parse and compare issue closed timestamps with precision.
- Leave pull requests alone: gh issue list only returns true issues, while the raw REST API marks PRs with a pull_request field, as shown above.
The jq filter looks like this:
jq -r --argjson now "$NOW" --argjson limit "$MIN_ISSUES_TO_KEEP" --argjson threshold "$THRESHOLD_DATE" '
.[:-(if length < $limit then 0 else $limit end)]
| map(select(
(.closedAt | type == "string") and
((.closedAt | fromdateiso8601) < $threshold)
))
| .[].number
'
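To see what the filter actually does, here is a hypothetical smoke test with three fake issues, a limit of 1 and a threshold of 2021-01-01 (epoch 1609459200). All three are old enough to qualify, but the slice protects the most recent one:

echo '[{"number":1,"closedAt":"2020-05-01T00:00:00Z"},
      {"number":2,"closedAt":"2020-06-01T00:00:00Z"},
      {"number":3,"closedAt":"2020-07-01T00:00:00Z"}]' \
  | jq -r --argjson limit 1 --argjson threshold 1609459200 '
      .[:-(if length < $limit then 0 else $limit end)]
      | map(select((.closedAt | type == "string") and ((.closedAt | fromdateiso8601) < $threshold)))
      | .[].number'
# prints 1 and 2; issue 3 survives because the last $limit issues are always kept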
Secure Authentication with Fine-Grained PAT
Because deleting issues is a destructive operation, the workflow uses a Fine-Grained Personal Access Token (PAT) with the narrowest possible scopes:
- Issues: Read and Write
- Limited to the repository in question
The token is securely stored as a GitHub Secret (GH_FINEGRAINED_PAT).
Note: Pull requests are never deleted: they are filtered out of the list, and the CLI won’t delete PRs via the issues API anyway.
Dry Run for Safety
Before deleting anything, I can run the workflow in dry_run mode to preview what would be deleted:
inputs:
  dry_run:
    description: "Enable dry run mode (preview deletions, no actual delete)"
    default: "false"
This lets me double-check without risking accidental data loss.
Parallel Deletion
Deletion happens in parallel to speed things up:
echo "$ISSUES_TO_DELETE" | xargs -I {} -P 5 gh issue delete "{}" --repo "$REPO" --yes
Up to 5 deletions run concurrently, which is handy when cleaning dozens of old issues.
User-Friendly Output
The workflow uses GitHub Actions’ logging groups and step summaries to give a clean, collapsible UI:
echo "::group::Issues to delete:"
echo "$ISSUES_TO_DELETE"
echo "::endgroup::"
And a markdown summary is generated for quick reference in the Actions UI.
Why Bother?
I’m not deleting old issues because of disk space or API limits (GitHub doesn’t charge for that). It’s about:
- Reducing clutter so my issue list stays manageable.
- Making it easier to find recent, relevant information.
- Automating maintenance to free my brain for other things.
- Keeping my tooling neat and tidy, which is its own kind of joy.
Steal It, Adapt It, Use It
If you’re generating temporary issues or ephemeral data in GitHub Issues, consider using a cleanup workflow like this one.
It’s simple, secure, and effective.
Because sometimes, good housekeeping is the best feature.
🧼✨ Happy coding (and cleaning)!

How I fell in love with calendar.txt
The more I learn about Unix tools, the more I realise we keep reinventing Rube Goldberg versions of everyday wheels, and that Unix tools are, often, elegant enough.
Months ago, I discovered calendar.txt. A simple file with all your dates, so simple and stupid that I wondered 1) why I didn’t think of it myself and 2) how it could possibly be useful.
I downloaded the file and tried it. Without thinking much about it, I realised that I could add the following line to my offpunk startup:
!grep `date -I` calendar.txt --color
And, just like that, I suddenly see the important things for my day every time I start Offpunk. In my "do_the_internet.sh", I added the following:
grep `date -I` calendar.txt --color -A 7
Which allows me to have an overview of the next seven days.
But what about editing? This is the alias I added to my shell to automatically edit today’s date:
alias calendar="vim +/`date -I` ~/inbox/calendar.txt"
It feels so easy, so elegant, so simple. All those aliases came naturally, without having to spend more than a few seconds in the man page of "date". No need to fiddle with a heavy web interface. I can grep through my calendar. I can edit it with vim. I can share it, save it and synchronise it without changing anything else, without creating any account. Looking for a date, even far in the future, is as simple as typing "/YEAR-MONTH-DAY" in vim.
Recurring events
The icing on the cake became apparent when I received my teaching schedule for the next semester. I had to add a recurring event every Tuesday minus some special cases where the university is closed.
Not a big deal. I do it every year, fiddling with the web interface of my calendar to find the right options to make the event recurring, then removing the special cases without accidentally deleting the whole series.
It takes at most 10 minutes, 15 if I miss something. Ten minutes of my life that I hate, forced to use a mouse and click through menus that change every six months because, you know, “yeah, redesign”.
But, with my calendar.txt, it takes exactly 15 seconds.
/Tue
To find the first Tuesday.
i
To write the course number and classroom number, escape then
n.n.n.n.n.n.nn.n.n.n.nn.n.n.n.n.
I’m far from being a Vim expert, but this came naturally, without really thinking about the tool (n jumps to the next match, . repeats the last change, and the occasional double n skips a week when the university is closed). I was only focused on the dates being correct. It was quick and pleasant.
Shared events and collaboration
I read my email in Neomutt. When I’m invited to an event, I must open a browser, access the email through my webmail, and click the “Yes” button to have it added to my calendar. Events I didn’t respond to still show up in my calendar, even if I don’t want them; it took me some settings-digging to stop displaying events I had declined. Which is kind of dumb, but so are the majority of our tools these days.
With calendar.txt, I manually enter the details from the invitation, which is not perfect but takes less time than opening a browser, logging into a webmail, and clicking a button while waiting, at every step, for countless JavaScript libraries to load.
Invitations are rare enough that I don’t mind entering the details by hand. But I’m thinking about writing a small bash script that would read an ICS file and add it to calendar.txt. It looks quite easy to do; see the sketch below.
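As a rough sketch of what such a script might look like (hypothetical and minimal: it assumes a single-event ICS file without folded lines, a DTSTART such as 20250902T100000, and a calendar.txt that already contains one line per date):

#!/bin/sh
# ics2cal.sh <event.ics> [calendar.txt] -- hypothetical, naive sketch
ics="$1"
cal="${2:-$HOME/inbox/calendar.txt}"
# keep everything after the last colon: DTSTART;TZID=...:20250902T100000
dtstart=$(grep -m1 '^DTSTART' "$ics" | sed 's/.*://' | tr -d '\r')
summary=$(grep -m1 '^SUMMARY' "$ics" | cut -d: -f2- | tr -d '\r')
day=$(echo "$dtstart" | sed 's/^\(....\)\(..\)\(..\).*/\1-\2-\3/')
hour=$(echo "$dtstart" | sed 's/^.*T\(..\)\(..\).*/\1:\2/')
# append the event to the end of the existing line for that day
sed -i "/^$day/ s/\$/ $hour $summary/" "$cal"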
I also thought about doing the reverse: a small script that would create an ICS and email it to any address added to an event. But it would be hard to track which events had already been sent and which were new. Let’s stick to the web interface when I need to create a shared event.
Calendar.txt should remain simple and for my personal use. The point of Unix tools is to let you create the tools you need for yourself, not to create a startup with a shiny name/logo that attracts investors hoping to make billions in a couple of years by enshittifying the lives of captive users.
And when you work in a team, you are stuck anyway with the worst possible tool that satisfies the needs of the dumbest member of the team. Usually the manager.
With Unix tools, each solution is personal and different from the others.
Simplifying calendaring
Another unexpected advantage of the system is that you no longer need to guess the end date of events. All I need to know is that I have a meeting at 10 and lunch at 12. I don’t need to estimate the duration of the meeting, which is usually only a rough guess anyway, not an important piece of information. But you can’t create an event in a modern calendar without giving a precise end.
Calendar.txt is simple, calendar.txt is good.
I can add events without thinking about it, without calendaring being a chore. Sandra explains how she realised that using an online calendar was a chore once she started using a paper agenda.
Going back to a paper calendar is probably something I will end up doing but, in the meantime, calendar.txt is a breeze.
Trusting my calendar
Most importantly, I now trust my calendar.
I’ve been burned by this before: I had entered my whole journey to a foreign country into my online calendar, only to discover upon landing that the calendar had decided to be “smart” and shift all the events because I was not in the same time zone. Since then, I write the time of an event in its title, even if it looks redundant. This also guards against events being moved by accident while scrolling on a smartphone or in a browser, which is rare but has happened often enough to make me anxious.
I came to the realisation that I don’t trust any calendar application: for events with a very precise time (like a train), I always fall back on checking the confirmation email or PDF.
Itâs not the case anymore with calendar.txt. I trust the file. I trust the tool.
There are not many tools you can trust.
Mobile calendar.txt
I don’t need notifications about events on my smartphone. If a notification tells me about an event I forgot, it is too late anyway. And if my phone is on silent, like always, the notification is useless anyway. We killed notifications with too many notifications, something I addressed here:
I do want to consult and edit my calendar on my phone. Getting the file there is as easy as synchronising it with my computer through any means; it’s a simple txt file.
Using it is another story.
Looking at my phone, I realise how far we have fallen: Android doesn’t let me create a simple shortcut to that calendar.txt file that would open it on the current day. There’s probably a way, but I can’t think of one. Probably because I don’t understand that system. After all, I’m not even supposed to try to understand it.
Android is not Unix. Android, like other proprietary operating systems, is a cage you need to fight against if you don’t want to surrender your choices, your data, your soul. Unix is freedom: hard to conquer, but impossible to let go of once you have tasted it.
I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by RSS. I value privacy and never share your address.
I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!
September 02, 2025


A life without notifications
Disclaimer: This article describes my experience with the Mudita Kompakt but, as I am not affiliated with that company, I will not answer any questions about this device or about the Hisense A5. Everything I have to say about the device is in this post. For the rest, see the many videos and articles on the Web, or join the Mudita forum.
The big problem with digital minimalism is that there is no solution that satisfies everyone. On forums devoted to digital minimalism, every solution is criticized for doing “too much” and, sometimes by the very same people, not enough.
I have seen people looking for a dumbphone to cure their hyperconnectivity addiction complain that they could not install WhatsApp and Facebook on it. More generally, I have enough experience in the industry to know that humans are very bad at determining their own needs, something I call “the three-story single-story house syndrome”: I want everything, but make it minimalist.
Screen addiction
Six years ago, I realized that a large part of my smartphone addiction came from the screen itself. As a participant on a forum I frequent once put it, screens have become so beautiful, the colors so rich, that the image is more beautiful than reality. Looking at a saturated, retouched landscape photo is more pleasing than looking at the landscape itself, with its grayness, the highway that doesn’t appear in the frame, the drizzle creeping down your neck. We no longer use our smartphones to do something; they have become part of our body and our mind.
In my own case, going without a screen on a dumbphone was not an option, because the most important thing I use my phone for is navigating by bike and, from the middle of nowhere, building a return route to load onto my GPS. That, and using Signal, the only chat in my daily life.
To be honest, the dumbphone concept is a bit beyond me, because if there is one thing I neither want nor need, it is being reachable everywhere, all the time. If it means leaving a dumbphone on silent in my pocket, I might as well take nothing at all!
My quest for digital sobriety therefore began with one of the very rare e-ink smartphones, the Hisense A5 (on the right in the illustration photo).
The Hisense is a cheap, low-quality product aimed at the Chinese market. While it shipped without any Google services, it is full of Chinese spyware that is impossible to uninstall (which I tried to contain with the Adguard firewall, lovingly configured over many hours).
For six years, I used that device exclusively. During my first trip with it, a train journey through Brittany for a book project, I was constantly anxious that “something would go wrong because I didn’t have a real smartphone”. But, little by little, I caught myself using it less than its predecessor, accepting its limits. The e-ink screen, combined with the slowness and bugs of software partly in Chinese, gave me no desire to use it at all. Very quickly, the colors of regular smartphone screens began to look violent and aggressive to me. Yet let me spend a few minutes on such a screen and, suddenly, there is the good old dopamine hit, an urge to use it to do something. What? It hardly matters, as long as I can keep my eyes glued to those luminous moving shapes.
Permanent hyperconnectivity
I have often claimed that the Hisense was not a smartphone, to avoid installing some supposedly indispensable app for a service I needed or, quite simply, to get a paper menu in those restaurants that feel they are at the cutting edge of technology because a QR code is taped to the table. But that was a lie. Because the Hisense is, at heart, a perfectly ordinary smartphone.
I had my email on it, a web browser and even Mastodon, which I deleted very quickly but could still reach via Firefox or Inkbro, a browser optimized for e-ink screens.
My addiction is doing much better. I have lost the habit of having my smartphone on me at all times, only taking it when going out if I think I will genuinely need it. That may sound like nothing but, for some addicts, going out without a phone is a real adventure. Trying it is an excellent way to realize how addicted you are.
The Hisense may be big and ugly but, whenever I had it with me and hit a dead moment, I would check whether I had received any email. As with any smartphone, I had to remember to put it back on silent after turning the ringer on when I wanted to be reachable, and to update all the apps I kept around because I might potentially need them. In short, classic smartphone management.
That suited me but, since e-ink smartphones never really took off, I wondered how I would replace a device that was showing signs of weakness (a battery that suddenly drains, charging that only works intermittently).
That is when the Mudita Kompakt won me over.
Mudita is a Polish company that aims to offer products promoting mindfulness: alarm clocks with a clean design, watches, and a dumbphone, the Mudita Pure, which I had long been eyeing because it was built on a fully open source system. Unfortunately, the Pure was too minimalist for me, as it could not work with my bike GPS.
Then came the Kompakt.
Based on Android, the Kompakt is technically a smartphone. Only smaller. And with an e-ink screen of far lower quality than the Hisense. And yet…
First steps with the Kompakt
The first thing that struck me when unboxing my Kompakt is that I did not have to create a single account or go through any procedure other than choosing the language. Insert a SIM card, switch it on, and it works.
Mudita pays close attention to privacy and offers no online services. The phone is entirely de-Googled, even “de-clouded” (unlike, for example, Murena /e/OS). For the first time in ages, I did not have to fight my phone, did not have to configure it or transform it.
It is incredible how happy that made me.
To be transparent, I should mention that the developers do collect anonymized debug data and that, for now, this cannot be disabled. A discussion on the subject is ongoing on the Mudita forum.
Because yes, Mudita has a discussion forum, where a company employee tries to help users and relays issues to the developers. Something that was simply the norm in 2010 but seems incredible nowadays.
In short, I felt like a respected customer, not a cash cow. And I still can’t get over it.
Beyond the forum, what struck me about the Mudita is that it truly pushes the idea of minimalism to its limits. The GPS application lets you search for an address and build a walking, cycling or driving route. And that’s it. Options? None, nada, zilch! And you know what? It works! Since it uses OpenStreetMap, it works very well, and I uninstalled Comaps, which I had installed out of habit. The camera… takes photos, and that’s it (you can merely toggle the flash).
Finding your way with the Mudita Kompakt
It is like that for every app: within 10 minutes you will have seen every option there is, and it is incredibly refreshing.
Granted, sometimes it is almost too much. The music player lists your MP3s and plays them in alphabetical order. That’s all. They have promised to improve it but, frankly, I find it amusing to be limited this way.
Getting serious
Since Mudita offers no online services, the best way to interact with the phone is Mudita Center, a Windows/macOS/Linux application. After downloading it, you connect your phone to your computer with… hold your breath… a USB cable. (On Debian/Ubuntu, you need to be a member of the “dialout” group: “sudo adduser ploum dialout”, reboot, and you are set.)
A cable! In 2025! Incredible, isn’t it? When I think of how I had to fight with the Astrohaus Freewrite, which forces the use of its proprietary cloud, I cannot overstate how much I appreciate plugging in my phone with a cable.
With Mudita Center, you can send files to the device. MP3s for music, epubs or PDFs, which open in the E-reader app, handy for train tickets and other electronic tickets. A section is reserved for transferring the APK files you want to “sideload”. Google and Apple have screwed us over so thoroughly that we have lost the right to use the word “install” when talking about software. That word has been privatized; now we must “sideload” (at least as long as that remains legally possible…).
In my case, I sideloaded a single APK: F-Droid. With F-Droid, I installed Molly (a Signal client), my password manager, my 2FA app, the Flickboard keyboard and Aurora Store. With Aurora Store, I installed Komoot, Garmin, Proton Calendar and the SNCB app for train schedules. No browser or email this time! I did allow myself the Wikipedia application, as a test.
Everything is small and in black and white (I was used to that already) but, in my case, everything works. Be warned that this may not be true for you. Apps that need Google services, such as Strava, will refuse to launch (no change from my Hisense there). My banking app requires a selfie camera (which the Kompakt does not have), and I will have to find a workaround.
By the way, Mudita does not let you customize the list of applications. They are sorted alphabetically. No shortcuts, no options. Disconcerting at first, it quickly proves very pleasant because it is, once again, one less thing to think about, one less excuse to fiddle.
Note that it is possible to hide applications, though. If you do not use the meditation app, you can simply hide it. Simple and effective.
Notifications
It was when I received my first Signal message that I noticed something strange. I clearly heard the sound, but I saw no notification.
And for good reason… the Mudita Kompakt has no notifications! The home screen shows whether you have missed calls or text messages but, beyond that, there are no notifications at all!
My first reflex was to look into installing an alternative launcher, InkOS, which allows more configurability and notifications.
But… wait a second! What am I doing? I am trying to rebuild a smartphone! What if I tried using the Mudita the way it was designed? Without notifications!
The only real problem with this approach is that notifications do exist, but Mudita hides them. So you hear them without being able to tell where they come from.
In my case, that is essentially Signal (well, Molly, for those following along). So, in Signal, I configured a notification profile that is silent except for members of my close family.
If I hear my phone make a sound, I know it is a message from my family. Everyone else I only see when I choose to open Signal. Which is exactly what a communication system should be.
Of course, things get more complicated if several applications send notifications. It is possible to access notifications, and disable them per app, using “Activity Launcher”, available on F-Droid. It is a bit fiddly, but it allowed me to disable the “parasite” notifications of everything that is not Signal.
Life without notifications
Once you accept this way of working, the Mudita suddenly makes complete sense.
I had already drastically reduced notifications on the Hisense. It was on silent most of the time anyway. But every time I looked at the screen, I saw the little icons. Mechanically, I would swipe to open the notification drawer and “check”, then swipe sideways to dismiss them.
Heavens, how I loathe those swiping gestures, never precise, never as satisfying as the sound of a key being pressed. Incidentally, the Mudita does not let you “swipe” through the list of applications either; you page through with an arrow. It is a small thing, but I appreciate it!
But even without notifications on the Hisense, I would still check from time to time whether I had received an important email, or look up some piece of information on a website.
Oh, nothing serious. An addiction perfectly under control. But a set of reflexes I wanted to clean out of my life.
With the Mudita Kompakt, the experience is quite unsettling. Mechanically, I pick up the device and… nothing. There is nothing to do. The only thing I can check is Signal. I am then forced to put the little screen back down.
No need to switch to silent or airplane mode either. The Kompakt has a hardware “Offline+” switch that disables all networks and all sensors, including the camera and the microphone. In Offline+, nothing gets in and nothing gets out of the device. And when I am connected, I know I will only get phone calls, text messages, and Signal messages from my close family.
It is like a new world…
Not everything is perfect
Let’s not kid ourselves, the Mudita Kompakt is far from perfect. It is small, yet a bit too bulky. There are bugs, such as the alarm going off late or not at all, or the camera taking nearly two seconds between the button press and the actual capture (still better than the Hisense camera, whose lens jammed after a few weeks of use, leaving all my photos irreparably blurry).
The forum is full of unsatisfied users. Some, I think, because they never internalized the limits of digital minimalism and were hoping for a full smartphone with an e-ink screen. In other cases, though, it is clearly due to bugs in the system for use cases that do not affect me directly, such as the Bluetooth problems, whereas I am simply delighted to have an audio jack. The Kompakt runs a heavily modified, de-Googled version of Android 12 (Google phones are on Android 16). The hardware itself is fairly old (older than my Hisense). These are details that can turn out to matter. The battery, for example, lasts three to four days, which is nothing extraordinary compared with the Hisense. One hour of Wi-Fi hotspot burns 10% of the battery where the Hisense would not even lose 3%.
The music application really is very minimalist!
Despite its limits, the device is not cheap, and it is probably possible to configure almost any Android device to get a pared-down screen and no notifications. But what I like about the Mudita is precisely that I do not configure it. That I do not try to work out what I should block, that I do not spend time optimizing my home screen or the placement of my apps.
It will certainly not suit everyone. There are surely plenty of flaws that make the Kompakt unusable for you. But for my own little case and my current lifestyle, it is a real joy (except for that damned banking app, for which I have not yet found a solution).
Accepting to let go
Because, yes, I am going to miss things. I will see urgent emails much later. I will not be able to look up a quick piece of information while on the move. I will not be able to take beautiful photos.
That is the whole point! Digital minimalism means, by its very nature, no longer being able to do everything all the time. It means being forced to be bored during dead moments, to plan certain expeditions, to ask the people around you for information when needed, and to admit, in certain situations, that life would have been easier with a traditional smartphone.
If you have not done the prior work of sorting out what is truly necessary for you, do not even think about getting a minimalist phone like the Kompakt. This phone suits me because I have been thinking about the subject for years and because it is compatible with my lifestyle and my obligations.
If you idealize minimalism without thinking through the consequences, you will be frustrated at the first friction, at the first loss of that omnipresent convenience a smartphone provides. Digital minimalism is also a very personal concept. Looking at the number of connected devices I own, I find it hard to call myself a “minimalist”. I simply try to be deliberate and to make sure every device has a very clear benefit for me with as few drawbacks as possible (like my bike GPS or my air quality analyzer). I am not really a minimalist; I just think the traditional smartphone has a very significant negative impact on my life, an impact I try to minimize.
Some people tell me they have no choice. As Jose Briones puts it very well, that is false. For now, we do have the choice. It is just not an easy choice, and you have to own the consequences, accept changing your way of life for it.
The smartphone obligation
And that is exactly what is most frightening about this attempt to minimize smartphone use: realizing just how close to mandatory owning one has become, how tenuous the choice is getting. Commercial services, but also public administrations, take it for granted that you have a recent smartphone, a Google account, a permanent internet connection, a well-charged battery, and that you're willing to install yet another app and create an online account for something as mundane as paying for a parking spot or getting into an event. A train conductor recently confided to me that the SNCB was planning to phase out paper tickets in favor of the app and the smartphone.
On minimalist forums, I even discovered a category of users who keep a conventional smartphone for work, out of simple fear of being mocked or coming across as a rebel to their colleagues. In the same way, some people refuse to install Signal or delete their Facebook account for fear that it might look suspicious. One thing seems clear to me: if it's the fear of potentially looking suspicious that's holding you back, it is urgent to act now, while that suspicion is still only potential. Install Signal and GrapheneOS or /e/OS today so that, in the future, you'll have the excuse of saying you've been working this way for months or years.
In the case of the smartphone, I note with dread that while it is theoretically possible to have never owned one, it is extremely difficult to go back. Banks, for example, often prevent you from returning to a smartphone-free authentication method once it has been activated!
I smile when I think of the times my refusal of the smartphone earned me a remark along the lines of: "Oh? You're not comfortable with new technology?" Often, I don't answer. Or I settle for an "actually, quite the opposite...". At least, with the Mudita, I'll be able to hold it up and proclaim loud and clear: I don't have a smartphone!
You and I know that this is technically not quite true, but those who want to impose the ubiquity of the GoogApple smartphone are, by definition, technological ignoramuses. They won't notice a thing... And, for a while yet, they will still be forced to adapt, to accept that, no, not everyone has a smartphone on them at all times.
I'm Ploum and I've just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!
Get my writings in French and English delivered straight to your inbox. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.
August 28, 2025
TLDR: A software engineer with a dream undertook the world’s most advanced personal trainer course. Despite being the odd duck among professional athletes, coaches, and bodybuilders, he graduated top of his class, and is now starting a mission to build a free and open source fitness platform to power next-gen fitness apps and a Wikipedia-style public service. But he needs your help!
The fitness community & industry seem largely dysfunctional.
- Content mess on social media
Social media is flooded with redundant, misleading content, e.g. endless variations of the same “how to do ‘some exercise’” videos, repackaged daily by creators chasing relevance, and deceptive clickbait to farm engagement. (Notable exception: Sean Nalewanyj has been consistently authentic, accurate, and entertaining.) Some content is actually great, but it is extremely hard to find and needs curation, and sometimes context.
- Selling “programs” and “exercise libraries”
Coaches and vendors sell workout programs and exercise instructions as if they were proprietary “secret sauce”. They aren’t. As much as they would like you to believe otherwise, workout plans and exercise instructions are not copyrightable. Specific expressions of them, such as video demonstrations, are, though there are ways such videos can be legally used by third parties, and I see an opportunity for a win-win model with the creator (more on that later). This is why sellers usually simply repeat what is already widely understood, replicate (possibly old and misguided) instructions, or overcomplicate in an attempt to differentiate. Furthermore, workout programs (or rather, a “rendition” of one) often have no or limited personalization options. The actually important parts, such as adjusting programs across multiple goals (e.g. combining with sports), across time (based on observed progression), or to accommodate injuries, and assuring consistency with current scientific evidence, require expensive expertise and are often missing from the “product”.
Many exercise libraries exist, but they require royalties. To build a new app (even a non-commercial one) you need to either license a commercial library or create your own. I checked multiple free open source ones, but let’s just say there are serious quality and legal concerns. Finally, Wikipedia is legal and favorably licensed, but it is too text-based and can’t easily be used by applications.
- Sub-optimal AIs
When you ask an LLM for guidance, it sometimes does a pretty good job, and often it doesn’t, because:
- Unclear sources: AIs regurgitate whatever programs they were fed during training, sometimes written by experts, sometimes by amateurs.
- Output degrades when you need specific personalized advice or adjustments over time, which is actually a critical piece of progressing in your fitness journey. Make it too custom and it starts hallucinating.
- Today’s systems are text based. They may seem cheap today, because vendors are subsidizing the true cost in an attempt to capture the market. But it’s inefficient, inaccurate, and financially unsustainable. It also makes for a very crude user interface.
I believe future AIs will use data models and UIs that are both domain-specific and richer than just text, so we need a library to match. Even “traditional” (non-AI) applications can gain a lot of functionality if they can leverage such structured data.
- Closed source days are numbered
Good coaches can bring value via in-person demonstrations, personalization, holding clients accountable, and helping them adopt new habits. This unscalable model puts an upper limit on their income. The better-known coaches on social media solve this by launching their own apps. Some of these seem actually quite good, but they suffer from the typical downsides that we’ve seen in other software domains: vendor lock-in, lack of data ownership, incompatibility across apps, lack of customization, high fees, etc. We’ve seen how this plays out in devtools, in enterprise, in cloud. According to more and more investment firms, it’s starting to play out in every other industry (just check the OSS Capital portfolio or see what Andreessen Horowitz, one of the most successful VC funds of all time, has to say on it): software is becoming open source in all markets. It leads to more user-friendly products and is the better way to build more successful businesses. It is a disruptive force. Board the train or be left in the dust.
What am I going to do about it?
I’m an experienced software engineer. I have experience building teams and companies. I’m lucky enough to have a window of time and some budget, but I need to make it count. The first thing I did was educate myself properly on fitness. In 2024-2025 I took the Menno Henselmans Personal Trainer course, the most in-depth, highly accredited, science-based course program for personal trainers that I could find. It was an interesting experience being the lone software developer among a group of athletes, coaches, and bodybuilders. Earlier this year I graduated Magna Cum Laude, top of class.

Now I am a certified coach who learned from one of the top coaches worldwide, with decades of experience behind the material. I can train and coach individuals. But as a software engineer I know that even a small software project can grow to change the world. What Wikipedia did for articles is what I aspire to build for fitness: a free public service comprising information, but more so hands-on tools and applications to put that information into action. Perhaps it can also give industry professionals opportunities to differentiate in more meaningful ways.
To support the applications, we also need:
- an exercise library
- an algorithms library (or “tools library” in AI lingo)
I’ve started prototyping both the libraries and some applications on body.build. I hope to grow a project and a community around it that will outlive me. It is therefore open source.
Next-gen applications and workflows
Thus far body.build has:
- a calorie calculator
- a weight lifting volume calculator
- a program builder
- a crude exercise explorer (prototype)
I have some ideas for more stuff that can be built by leveraging the new libraries (discussed below):
- a mobile companion app to perform your workouts, have quick access to exercise demonstrations/cues, and log performance. You’ll own your data to do your own analysis, and a personal interest to me is the ability to try different variations or cues (see below), track their results and analyze which work better for you. (note: I’m maintaining a list of awesome OSS health/fitness apps, perhaps an existing one could be reused)
- detailed analysis of (and comparison between) different workout programs, highlighting different areas being emphasized, estimated “bang for buck” (expecting size and strength gains vs fatigue and time spent)
- just-in-time workout generation. Imagine an app to which you can say “my legs are sore from yesterday’s hike. In 2 days I will have a soccer match. My availability is 40 min between two meetings and/or 60min this evening at the gym. You know my overall goals, my recent workouts and that I reported elbow tendinitis symptoms in my last strength workout. Now generate me an optimal workout”.
The exercise library
The library should be liberally licensed and not restrict reuse. There is no point trying to “protect” content that has very little copyright protection anyway. I believe that “opening up” to reuse (also commercial) is not only key to a successful project, but also unlocks commercial opportunities for everyone.
The library needs in-depth knowledge that goes beyond what apps typically contain, including:
- exercise alternatives and customization options (and their trade-offs)
- detailed biomechanical data (such as muscle involvement and loading patterns across the range of muscle lengths and different joint movements)
Furthermore, we also need the usual textual descriptions and exercise demonstration videos. Unlike “traditional” libraries that present a singular authoritative view, I find it valuable to clarify what is commonly agreed upon vs where coaches or studies still disagree, and to present an overview of the disagreement with further links to the relevant resources (which could be an Instagram post, a YouTube video, or a scientific study). A layman can go with the standard instructions, whereas advanced trainees get an overview of different options and cues, which they can check out in more detail, try, and see what works best for them. No need to scroll social media for random tips; they’re all aggregated and curated in one place.
Our library (liberally licensed) includes the text and links to 3rd party public content. The 3rd party content itself is typically subject to stronger limitations, but can always be linked to, and often also embedded under fair use and under the standard YouTube license, for free applications anyway. Perhaps one day we’ll have our own content library, but for now we can avoid a lot of work and promote existing creators’ quality content. Win-win.
Body.build today has a prototype of this. I’m gradually adding more and more exercises. Today it’s part of the codebase, but at some point it’ll make more sense to separate it out and use one of the Creative Commons licenses.
The algorithms library
Through the course, I learned about various principles (validated by decades of coaching experience and by scientific research) for constructing optimal training based on input factors (e.g. optimal workout volume depends on many factors, including sex, sleep quality, food intake, etc.). Similarly, things like optimal recovery timing or exercise swapping can be calculated using well-understood principles.
Rather than thinking of workout programs as the main product, I think of them as just a derivative: an artifact that can be generated by first determining an individual’s personal parameters and then applying these algorithms to them. Better yet, instead of pre-defining a rigid multi-month program (which only works well for people with very consistent schedules), this approach makes it possible to generate guidance at any point on any day, which works better for people with inconsistent agendas.
Body.build today has a few such codified algorithms; I’ve only implemented what I’ve needed so far.
I need help
I may know how to build some software, and I have several ideas for new types of applications that could benefit people. But there is a lot I haven’t figured out yet! Maybe you can help?
Particular pain points:
- Marketing: developers don’t like it, but it’s critical. We need to determine who to build for (coaches? developers? end-users? beginners or experts?), what their biggest issues are, and how to reach them. Marketing firms are quite expensive; perhaps AI will make this a lot more accessible. For now, I think it makes most sense to build apps for technology and open source enthusiasts who geek out about optimizing their weight lifting and bodybuilding. The type of people who would obsessively scroll Instagram or YouTube hoping to find novel workout tips (and who will now hopefully have a better way). I’d like to bring sports & conditioning more to the forefront too, but can’t prioritize this now.
- UX and UI design.
Developer help is less critical, but of course welcome too.
If you think you can help, please reach out on the body.build Discord or X/Twitter, and of course check out body.build! And please forward this to people who are into both fitness and open source technology. Thanks!
August 27, 2025
Have you ever fired up a Vagrant VM, provisioned a project, pulled some Docker images, ran a build... and ran out of disk space halfway through? Welcome to my world. Apparently, the default disk size in Vagrant is tiny, and while you can specify a bigger virtual disk, Ubuntu won't magically use the extra space. You need to resize the partition, the physical volume, the logical volume, and the filesystem. Every. Single. Time.
Enough of that nonsense.
The setup
Here's the relevant part of my Vagrantfile:
Vagrant.configure(2) do |config|
config.vm.box = 'boxen/ubuntu-24.04'
config.vm.disk :disk, size: '20GB', primary: true
config.vm.provision 'shell', path: 'resize_disk.sh'
end
This makes sure the disk is large enough and automatically resized by the resize_disk.sh script at first boot.
The script
#!/bin/bash
set -euo pipefail
LOGFILE="/var/log/resize_disk.log"
exec > >(tee -a "$LOGFILE") 2>&1
echo "[$(date)] Starting disk resize process..."
REQUIRED_TOOLS=("parted" "pvresize" "lvresize" "vgdisplay" "grep" "awk")
for tool in "${REQUIRED_TOOLS[@]}"; do
if ! command -v "$tool" &>/dev/null; then
echo "[$(date)] ERROR: Required tool '$tool' is missing. Exiting."
exit 1
fi
done
# Read current and total partition size (in sectors)
parted_output=$(parted --script /dev/sda unit s print || true)
read -r PARTITION_SIZE TOTAL_SIZE < <(echo "$parted_output" | awk '
/ 3 / {part = $4}
/^Disk \/dev\/sda:/ {total = $3}
END {print part, total}
')
# Trim 's' suffix
PARTITION_SIZE_NUM="${PARTITION_SIZE%s}"
TOTAL_SIZE_NUM="${TOTAL_SIZE%s}"
if [[ "$PARTITION_SIZE_NUM" -lt "$TOTAL_SIZE_NUM" ]]; then
echo "[$(date)] Resizing partition /dev/sda3..."
parted --fix --script /dev/sda resizepart 3 100%
else
echo "[$(date)] Partition /dev/sda3 is already at full size. Skipping."
fi
if [[ "$(pvresize --test /dev/sda3 2>&1)" != *"successfully resized"* ]]; then
echo "[$(date)] Resizing physical volume..."
pvresize /dev/sda3
else
echo "[$(date)] Physical volume is already resized. Skipping."
fi
# Check the volume group for unallocated extents before extending the LV
FREE_PE=$(vgdisplay /dev/ubuntu-vg | grep "Free  PE" | awk '{print $5}')
if [[ "$FREE_PE" -gt 0 ]]; then
echo "[$(date)] Resizing logical volume..."
lvresize -rl +100%FREE /dev/ubuntu-vg/ubuntu-lv
else
echo "[$(date)] Logical volume is already fully extended. Skipping."
fi
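To sanity-check the result after provisioning, a quick manual look at the filesystem and the volume group (assuming the default Ubuntu LVM layout) is enough:
vagrant ssh -c 'df -h / && sudo vgs && sudo lvs'
The root filesystem should now report roughly the full 20GB, with no free extents left in the volume group.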
Highlights
- ✅ Uses parted with --script to avoid prompts.
- ✅ Automatically fixes GPT mismatch warnings with --fix.
- ✅ Checks the volume group for unallocated space with vgdisplay.
- ✅ Extends the partition and logical volume only when needed; pvresize is a harmless no-op when there is nothing to grow.
- ✅ Logs everything to /var/log/resize_disk.log.
Gotchas
- Your disk must already use LVM. This script assumes you're resizing /dev/ubuntu-vg/ubuntu-lv, the default for Ubuntu server installs.
- You must use a Vagrant box that supports VirtualBox's disk resizing; thankfully, boxen/ubuntu-24.04 does.
- If your LVM setup is different, you'll need to adapt device paths.
Automation FTW
Calling this script as a provisioner means I never have to think about disk space again during development. One less yak to shave.
Feel free to steal this setup, adapt it to your team, or improve it and send me a patch. Or better yet: don't wait until your filesystem runs out of space at 3 AM.
August 26, 2025

No medals for the resistance
If you want to change the world, you have to join the resistance. You have to accept acting in silence. You have to accept losing comfort, opportunities, relationships. And you must expect no reward, no recognition.
Surveillance worse than anything you imagine
Take our dependence on a handful of technology monopolies. I don't think we realize the permanent surveillance our smartphones subject us to. Nor that this data isn't simply stored "at Google".
Tim Sh decided to investigate. He added a single free game to a pristine iPhone with all location services disabled. Sounds reasonable, right?
By analyzing the packets, he discovered the incredible amount of information being sent by the game's Unity engine. Which means the game's developer probably doesn't even know that their own game is spying on you.
But Tim Sh went further: he tracked that data and discovered where it was being resold. Perfectly established companies resell user data in real time: position, both historical and current, battery level and screen brightness, the internet connection in use, the mobile carrier, the free space available on the phone.
All of it is accessible in real time for millions of users. Including users convinced they are protecting their privacy by disabling location permissions, by being careful, or even by using GrapheneOS containers: there is no malice here, no hacking, nothing illegal. If the app works, it is sending its data, full stop.
Also worth noting: data about Europeans is more expensive, because the GDPR makes it harder to obtain. Which is proof that political regulation works. The GDPR is very far from sufficient; its only real use is to demonstrate that political power can act.
We tend to mock Europe's smallness because we measure it with American metrics. Our politicians dream of "unicorns" and European monopolies. That's a mistake: Europe's strength is the opposite of those values.
As Marcel Sel points out: Europe is very imperfect, but it is perhaps the most progressive structure in the world and the one that best protects its citizens.
Outrage saturation
Faced with this reality, we observe two reactions: not giving a damn, and violent indignation. But contrary to what you might think, the second has no more impact than the first.
Olivier Ertzscheid talks about outrage saturation: a permanent indignation that makes us lose all capacity to act.
I see a lot of indignation about the genocide Israel is committing in Gaza. But little in the way of action. Yet one simple action is to delete your WhatsApp account. It is almost certain that WhatsApp data is used to target strikes. Deleting your account is therefore a real action. The fewer WhatsApp accounts there are, the less indispensable Gazans will find the app, and the less data there will be for Israel.
Instead of being indignant, join the active resistance. Cut as many cords as you can. You'll lose opportunities? Contacts? You'll miss out on information?
That's the goal! That's the point! This is resistance, the new maquis. "Yes, but so-and-so is on Facebook." When you join the resistance, you don't go in slippers with the whole family in tow. That is the very principle of resistance: taking risks, carrying out actions that not everyone understands or approves of, in the hope of changing things for good.
It's hard, and nobody will give you a medal for it. If you're looking for ease, comfort, recognition or official congratulations, the resistance is not for you.
Stopping to think
Yes, companies are headless chickens running in every direction. My years in the IT industry let me observe that the vast majority of employees do strictly nothing useful. All we do is pretend. On the extremely rare occasions when there is an impact, it consists of helping a customer pretend better.
I've stopped shouting this from the rooftops, because it is impossible to make someone understand something when their salary depends on not understanding it. But the fact remains that everyone who stops to think arrives at this same conclusion.
The enshittification of companies can hit you in the most unexpected way, through a product you are particularly fond of. That's what happened to me with Komoot, a tool I use constantly to plan my long bike trips, and sometimes on the road, when I'm a bit lost and want a safe but fast route to my destination.
For those who don't see the point of a GPS on a bike, Thierry Crouzet has written a post detailing how this accessory changes the practice of cycling.
But here's the thing: Komoot, a German startup that presented itself as a champion of bike travel, with founders who promised never to sell their baby, has been sold to an investment fund notorious for enshittifying everything it buys.
- Faut-il quitter Komoot, comme Twitter ou Facebook? (bikinvalais.ch)
- When We Get Komooted (bikepacking.com)
I don't blame the founders. I know perfectly well that above a certain amount of money, we all reconsider our promises. The founders of WhatsApp originally wanted to strongly protect their users' privacy. They nevertheless sold their application to Facebook because, by their own admission, above a certain sum you accept certain compromises.
Fortunately, free software solutions are emerging, like the excellent Cartes.app, which has tackled the problem head-on.
It still lacks an easy way to send a route to my bike GPS before it's usable day to day, but the symbol is clear: dependence on enshittified products is not inevitable!
On the necessity of free software
As Gee demonstrates, adding non-essential features is not neutral: it considerably increases the risk of breakage and other problems.
By its very nature, this simplification can only come through free software, which forces modularity. Liorel gives a very telling example: because of its complexity, Microsoft Excel will forever use the Julian calendar, unlike LibreOffice, which uses the current Gregorian calendar.
Simplification, freedom, slowing down, shrinking our consumption: these are just faces of one and the same form of resistance, of one and the same awareness of life as a whole.
Slowing down and stepping back. That is, in fact, what Chris Brannons brutally offered me with the last post on his Gemini capsule. And when I say the last...
Barring unforeseen circumstances or unexpected changes, my last day on earth will be June 13th, 2025.
Chris was 46, and he took the time to write about the how and the why of his euthanasia procedure. After that post, he took the time to answer my emails as I encouraged him not to go through with it.
The bicycle as a symbol
We can't not care. We can't settle for indignation. So, with the few million seconds we have left to live, we must act. Act by doing what we believe is best for ourselves, best for our children, best for humanity.
Like riding a bike!
And too bad if it changes nothing. And too bad if it makes us look strange to some people. And too bad if it comes with certain downsides. Riding a bike is joining the resistance!
A symbol of freedom, simplification and independence, and yet extremely technological, the bicycle has never been so political. As Klaus-Gerd Giesen points out, Bikepunk is philosophical and political!
It amuses me a great deal, by the way, when people present the world of Bikepunk as a world from which technology has disappeared. Because the bicycle isn't technology, perhaps?
So if you don't have the book yet, all that's left is to run and say hello to your favorite bookseller, and join the resistance!
The illustration photo was sent to me by Julien Ursini and is under CC-By. Immersed in reading Bikepunk, he was struck to discover this rusted bike frame standing in the bed of the Bléone river, like an act of symbolic resistance. I couldn't have dreamed of a better illustration for this post.
I'm Ploum and I've just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!
Get my writings in French and English delivered straight to your inbox. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.
August 20, 2025
Or: Why you should stop worrying and love the LTS releases.
TL;DR: Stick to MediaWiki 1.43 LTS, avoid MediaWiki 1.44.
There are two major MediaWiki releases every year, and every fourth such release gets Long Term Support (LTS). Two consistent approaches to upgrading MediaWiki are to upgrade every major release or to upgrade every LTS version. Let’s compare the pros and cons.
Which Upgrade Strategy Is Best
I used to upgrade my wikis for every MediaWiki release, or even run the master (development) branch. Having become more serious about MediaWiki operations by hosting wikis for many customers at Professional Wiki, I now believe sticking to LTS versions is the better trade-off for most people.
Benefits and drawbacks of upgrading every major MediaWiki version (compared to upgrading every LTS version):
- Pro: You get access to all the latest features
- Pro: You might be able to run more modern PHP or operating system versions
- Con: You have to spend effort on upgrades four times as often (twice a year instead of once every two years)
- Con: You have to deal with breaking changes four times as often
- Con: You have to deal with extension compatibility issues four times as often
- Con: You run versions with shorter support windows. Regular major releases are supported for 1 year, while LTS releases receive support for 3 years
What about the latest features? MediaWiki is mature software. Its features evolve slowly, and most innovation happens in the extension ecosystem. Most releases only contain a handful of notable changes, and there is a good chance none of them matter for your use cases. If there is something you would benefit from in a more recent non-LTS major release, that’s an argument against sticking to the LTS version, and it’s up to you to determine whether that benefit outweighs all the cons. I think it rarely does, and the comparison is not even close.
The Case Of MediaWiki 1.44
MediaWiki 1.44 is the first major MediaWiki release after MediaWiki 1.43 LTS, and at the time of writing this post, it is also the most recent major release.
As with many releases, MediaWiki 1.44 brings several breaking changes to its internal APIs. This means that MediaWiki extensions that work with the previous versions might no longer work with MediaWiki 1.44. This version brings a high number of these breaking changes, including some particularly nasty ones that prevent extensions from easily supporting both MediaWiki 1.43 LTS and MediaWiki 1.44. That means if you upgrade now, you will run into various compatibility problems with extensions.
Examples of the type of errors you will encounter:
PHP Fatal error: Uncaught Error: Class “Html” not found
PHP Fatal error: Uncaught Error: Class “WikiMap” not found
PHP Fatal error: Uncaught Error: Class “Title” not found
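These fatals typically come from extension code that still references old global class aliases, which newer MediaWiki versions removed. As a sketch of the fix on the extension side (assuming the namespaced classes of recent MediaWiki releases; check the release notes for your exact version), the code needs the namespaced forms:
use MediaWiki\Html\Html;
use MediaWiki\Title\Title;
use MediaWiki\WikiMap\WikiMap;
$link = Html::element( 'a', [ 'href' => Title::newMainPage()->getLocalURL() ], WikiMap::getCurrentWikiId() );
The nasty part is that extensions trying to support both 1.43 LTS and 1.44 at once have to juggle exactly these renames.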
Given that most wikis use dozens of MediaWiki extensions, this makes the “You have to deal with extension compatibility issues” con particularly noteworthy for MediaWiki 1.44.
Unless you have specific reasons to upgrade to MediaWiki 1.44, just stick to MediaWiki 1.43 LTS and wait for MediaWiki 1.47 LTS, which will be released around December 2026.
See also: When To Upgrade MediaWiki (And Understanding MediaWiki versions)
The post Why You Should Skip MediaWiki 1.44 appeared first on Entropy Wins.
After nearly two decades and over 1,600 blog posts written in raw HTML, I've made a change that feels long overdue: I've switched to Markdown.
Don't worry, I'm not moving away from Drupal. I'm just moving from a "HTML text format" to a "Markdown format". My last five posts have all been written in Markdown.
I've actually written in Markdown for years. I started with Bear for note-taking, and for the past four years Obsidian has been my go-to tool. Until recently, though, I've always published my blog posts in HTML.
For almost 20 years, I wrote every blog post in raw HTML, typing out every tag by hand. For longer posts, it could take me 45 minutes to wrap everything in <p> tags, add links, and close HTML tags like it was still 2001. It was tedious, but also a little meditative. I stuck with it, partly out of pride and partly out of habit.
Getting Markdown working in Drupal
So when I decided to make the switch, I had to figure out how to get Markdown working in Drupal. Drupal has multiple great Markdown modules to choose from, but I picked Markdown Easy because it's lightweight, fully tested, and built on the popular CommonMark library.
I documented my installation and upgrade steps in a public note titled Installing and configuring Markdown Easy for Drupal.
I ran into one problem: the module's security-first approach stripped all HTML tags from my posts. This was an issue because I mostly write in Markdown but occasionally mix in HTML for things Markdown doesn't support, like custom styling. One example is creating pull quotes with a custom CSS class:
After 20 years of writing in HTML, I switched to *Markdown*.
<p class="pullquote">HTML for 20 years. Markdown from now on.</p>
Now I can publish faster while still using [Drupal](https://drupal.org).
HTML in Markdown by design
Markdown was always meant to work hand in hand with HTML, and Markdown parsers are supposed to leave HTML tags untouched. John Gruber, the creator of Markdown, makes this clear in the original Markdown specification:
HTML is a publishing format; Markdown is a writing format. Thus, Markdown's formatting syntax only addresses issues that can be conveyed in plain text. [...] For any markup that is not covered by Markdown's syntax, you simply use HTML itself. There is no need to preface it or delimit it to indicate that you're switching from Markdown to HTML; you just use the tags.
In Markdown Easy 1.x, allowing HTML tags required writing a custom Drupal module with a specific "hook" implementation. This felt like too much work for something that should be a simple configuration option. I've never enjoyed writing and maintaining custom Drupal modules for cases like this.
I reached out to Mike Anello, the maintainer of Markdown Easy, to discuss a simpler way to mix HTML and Markdown.
I suggested making it a configuration option and helped test and review the necessary changes. I was happy when that became part of the built-in settings in version 2.0. A few weeks later, Markdown Easy 2.0 was released, and this capability is now available out of the box.
Now that everything is working, I am considering converting my 1,600+ existing posts from HTML to Markdown. Part of me wants everything to be consistent, but another part hesitates to overwrite hundreds of hours of carefully crafted HTML. The obsessive in me debates the archivist. We'll see who wins.
The migration itself would be a fun technical challenge. Plenty of tools exist to convert HTML to Markdown so no need to reinvent the wheel. Maybe I'll test a few converters on some posts to see which handles my particular setup best.
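If I do, a converter like pandoc could handle the first pass (a sketch with a hypothetical file name, not a settled workflow):
pandoc --from html --to gfm old-post.html --output old-post.md
The output would still need a manual review pass, especially for posts that mix in custom markup.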
Extending Markdown with tokens
Like Deane Barker, I often mix HTML and Markdown with custom "tokens". In my case, they aren't official web components, but they serve a similar purpose.
For example, here is a snippet that combines standard Markdown with a token that embeds an image:
Nothing beats starting the day with [coffee](https://dri.es/tag/coffee) and this view:
[“image beach-sunrise.jpg lazy=true schema=true caption=false]
These tokens get processed by my custom Drupal module and transformed into full HTML. That basic image token? It becomes a responsive picture element complete with lazy loading, alt-text from my database, Schema.org support, and optional caption. I use similar tokens for videos and other dynamic content.
The real power of tokens is future proofing. When responsive images became a web standard, I could update my image token processor once and instantly upgrade all my blog posts. No need to edit old content. Same when lazy loading became standard, or when new image formats arrive. One code change updates all 10,000 or so images that I've ever posted.
My token system has evolved over 15 years and deserves its own blog post. Down the road, I might turn some of these tokens into web components like Deane describes.
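For the curious, the core of such a token processor is just a string filter. A minimal sketch with hypothetical names (my actual module does much more, like looking up alt text in the database):
<?php
// Hypothetical sketch: turn [image filename.jpg ...] tokens into markup.
function render_image_tokens(string $text): string {
  return preg_replace_callback(
    '/\[image (\S+?)( [^\]]*)?\]/',
    fn(array $m): string =>
      '<img src="/files/' . htmlspecialchars($m[1]) . '" loading="lazy" alt="">',
    $text
  );
}
Because every post goes through one function like this at render time, upgrading the markup for all posts is a single code change.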
Closing thoughts
In the end, this was not a syntax decision: it was a workflow decision. I want less friction between an idea and publishing it. Five Markdown posts in, publishing is faster, cleaner, and more enjoyable, while still giving me the flexibility I need.
Those 45 minutes I used to spend on HTML tags? I now spend on things that matter more, or on writing another blog post.
Let's talk about environment variables in GitHub Actions: those little gremlins that either make your CI/CD run silky smooth or throw a wrench in your perfectly crafted YAML.
If you've ever squinted at your pipeline and wondered, "Where the heck should I declare this ANSIBLE_CONFIG thing so it doesn't vanish into the void between steps?", you're not alone. I've been there. I've screamed at $GITHUB_ENV. I've misused export. I've over-engineered echo. But fear not, dear reader: I've distilled it down so you don't have to.
In this post, we'll look at the right ways (and a few less right ways) to set environment variables, and more importantly, when to use static vs dynamic approaches.
Static Variables: Set It and Forget It
Got a variable like ANSIBLE_STDOUT_CALLBACK=yaml that's the same every time? Congratulations, you've got yourself a static variable! These are the boring, predictable, low-maintenance types that make your CI life a dream.
Best Practice: Job-Level env
If your variable is static and used across multiple steps, this is the cleanest, classiest, and least shouty way to do it:
jobs:
my-job:
runs-on: ubuntu-latest
env:
ANSIBLE_CONFIG: ansible.cfg
ANSIBLE_STDOUT_CALLBACK: yaml
steps:
- name: Use env vars
run: echo "ANSIBLE_CONFIG is $ANSIBLE_CONFIG"
Why it rocks:
- Super readable
- Available in every step of the job
- Keeps your YAML clean: no extra echo commands, no nonsense
Unless you have a very specific reason not to, this should be your default.
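And if a static value is only needed in a single step, you can scope env to that one step instead; a minimal sketch (the playbook name is just an example):
- name: Run the playbook with its own env
  env:
    ANSIBLE_CONFIG: ansible.cfg
  run: ansible-playbook site.yml
The variable exists for that step only, which keeps the rest of the job's environment clean.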
Dynamic Variables: Born to Be Wild
Now what if your variables aren't so chill? Maybe you calculate something in one step and need to pass it to another: a file path, a version number, an API token from a secret backend ritual...
That's when you reach for the slightly more... creative option:
$GITHUB_ENV to the rescue
- name: Set dynamic environment vars
run: |
echo "BUILD_DATE=$(date +%F)" >> $GITHUB_ENV
echo "RELEASE_TAG=v1.$(date +%s)" >> $GITHUB_ENV
- name: Use them later
run: echo "Tag: $RELEASE_TAG built on $BUILD_DATE"
What it does:
- Persists the variables across steps
- Works well when values are calculated during the run
- Makes you feel powerful
Fancy Bonus: Heredoc Style
If you like your YAML with a side of Bash wizardry:
- name: Set vars with heredoc
run: |
cat <<EOF >> $GITHUB_ENV
FOO=bar
BAZ=qux
EOF
Because sometimes, you just want to feel fancy.
What Not to Do (Unless You Really Mean It)
- name: Set env with export
run: |
export FOO=bar
echo "FOO is $FOO"
This only works within that step. The minute your pipeline moves on, FOO is gone. Poof. Into the void. If that's what you want, fine. If not, don't say I didn't warn you.
TL;DR: The Cheat Sheet
| Scenario | Best Method |
|---|---|
| Static variable used in all steps | env at the job level |
| Static variable used in one step | env at the step level |
| Dynamic value needed across steps | $GITHUB_ENV |
| Dynamic value only needed in one step | export (but don't overdo it) |
| Need to show off with Bash skills | cat <<EOF >> $GITHUB_ENV |
My Use Case: Ansible FTW
In my setup, I wanted to use:
ANSIBLE_CONFIG=ansible.cfg
ANSIBLE_STDOUT_CALLBACK=yaml
These are rock-solid, boringly consistent values. So instead of writing this in every step:
- name: Set env
run: |
echo "ANSIBLE_CONFIG=ansible.cfg" >> $GITHUB_ENV
I now do this:
jobs:
deploy:
runs-on: ubuntu-latest
env:
ANSIBLE_CONFIG: ansible.cfg
ANSIBLE_STDOUT_CALLBACK: yaml
steps:
...
Cleaner. Simpler. One less thing to trip over when I'm debugging at 2am.
Final Thoughts
Environment variables in GitHub Actions aren't hard, once you know the rules of the game. Use env for the boring stuff. Use $GITHUB_ENV when you need a little dynamism. And remember: if you're writing export in step after step, something probably smells.
Got questions? Did I miss a clever trick? Want to tell me my heredoc formatting is ugly? Hit me up in the comments or toot at me on Mastodon.
Posted by Amedee, who loves YAML almost as much as dancing polskas.
Because good CI is like a good dance: smooth, elegant, and nobody falls flat on their face.
Scheduled to go live on 20 August, just as Boombalfestival kicks off. Because why not celebrate great workflows and great dances at the same time?
August 18, 2025

August 13, 2025
When using Ansible to automate tasks, the command module is your bread and butter for executing system commands. But did you know that there's a safer, cleaner, and more predictable way to pass arguments? Meet argv: an alternative to writing commands as strings.
In this post, I'll explore the pros and cons of using argv, and I'll walk through several real-world examples tailored to web servers and mail servers.
Why Use argv Instead of a Command String?
✅ Pros
- Avoids Shell Parsing Issues: Each argument is passed exactly as intended, with no surprises from quoting or spaces.
- More Secure: No shell = no risk of shell injection.
- Clearer Syntax: Every argument is explicitly defined, improving readability.
- Predictable: Behavior is consistent across different platforms and setups.
❌ Cons
- No Shell Features: You can't use pipes (|), redirection (>), or environment variables like $HOME.
- More Verbose: Every argument must be a separate list item. It's explicit, but more to type.
- Not for Shell Built-ins: Commands like cd, export, or echo with redirection won't work.
Real-World Examples
Let's apply this to actual use cases.
Restarting Nginx with argv
- name: Restart Nginx using argv
hosts: amedee.be
become: yes
tasks:
- name: Restart Nginx
ansible.builtin.command:
argv:
- systemctl
- restart
- nginx
Check Mail Queue on a Mail-in-a-Box Server
- name: Check Postfix mail queue using argv
hosts: box.vangasse.eu
become: yes
tasks:
- name: Get mail queue status
ansible.builtin.command:
argv:
- mailq
register: mail_queue
- name: Show queue
ansible.builtin.debug:
msg: "{{ mail_queue.stdout_lines }}"
Back Up WordPress Database
- name: Backup WordPress database using argv
hosts: amedee.be
become: yes
vars:
db_user: wordpress_user
db_password: wordpress_password
db_name: wordpress_db
tasks:
- name: Dump database
ansible.builtin.command:
argv:
- mysqldump
- -u
- "{{ db_user }}"
- -p{{ db_password }}
- "{{ db_name }}"
- --result-file=/root/wordpress_backup.sql
⚠️ Avoid exposing credentials directly; use Ansible Vault instead.
Using argv with Interpolation
Ansible lets you use Jinja2-style variables ({{ }}) inside argv items.
Restart a Dynamic Service
- name: Restart a service using argv and variable
hosts: localhost
become: yes
vars:
service_name: nginx
tasks:
- name: Restart
ansible.builtin.command:
argv:
- systemctl
- restart
- "{{ service_name }}"
Timestamped Backups
- name: Timestamped DB backup
hosts: localhost
become: yes
vars:
db_user: wordpress_user
db_password: wordpress_password
db_name: wordpress_db
tasks:
- name: Dump with timestamp
ansible.builtin.command:
argv:
- mysqldump
- -u
- "{{ db_user }}"
- -p{{ db_password }}
- "{{ db_name }}"
- --result-file=/root/wordpress_backup_{{ ansible_date_time.iso8601 }}.sql
Dynamic Argument Lists
Avoid join(' '), which collapses the list into a single string.
❌ Wrong:
argv:
- ls
- "{{ args_list | join(' ') }}" # BAD: becomes one long string
✅ Correct:
argv: "{{ ['ls'] + args_list }}"
Or if the length is known:
argv:
- ls
- "{{ args_list[0] }}"
- "{{ args_list[1] }}"
Interpolation Inside Strings
- name: Greet with hostname
hosts: localhost
tasks:
- name: Print message
ansible.builtin.command:
argv:
- echo
- "Hello, {{ ansible_facts['hostname'] }}!"
When to Use argv
✅ Commands with complex quoting or multiple arguments
✅ Tasks requiring safety and predictability
✅ Scripts or binaries that take arguments, but not full shell expressions
When to Avoid argv
❌ When you need pipes, redirection, or shell expansion
❌ When you're calling shell built-ins
Final Thoughts
Using argv in Ansible may feel a bit verbose, but it offers precision and security that traditional string commands lack. When you need reliable, cross-platform automation that avoids the quirks of shell parsing, argv is the better choice.
Prefer safety? Choose argv.
Need shell magic? Use the shell module.
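And for those shell-magic cases, here's a minimal sketch of the earlier mail-queue idea using the ansible.builtin.shell module, pipe included:
- name: Count messages in the mail queue
  ansible.builtin.shell: mailq | wc -l
  register: queue_length
Just remember that with shell, the quoting and injection risks are back on your plate.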
Have a favorite argv trick or horror story? Drop it in the comments below.
August 06, 2025
Not every day do I get an email from a very serious security researcher, clearly a man on a mission to save the internet, one vague, copy-pasted email at a time.
Here's the message I received:
From: Peter Hooks
<peterhooks007@gmail.com>
Subject: Security Vulnerability Disclosure
Hi Team,
I've identified security vulnerabilities in your app that may put users at risk. I'd like to report these responsibly and help ensure they are resolved quickly.
Please advise on your disclosure protocol, or share details if you have a Bug Bounty program in place.
Looking forward to your reply.
Best regards,
Peter Hooks
Right. Let's unpack this.
"Your App": What App?
I’m not a company. I’m not a startup. I’m not even a garage-based stealth tech bro.
I run a personal WordPress blog. That's it.
There is no "app." There are no "users at risk" (unless you count me, and I'm already beyond saving).
The Anatomy of a Beg Bounty Email
This little email ticks all the classic marks of what the security community affectionately calls a beg bounty: someone scanning random domains, finding trivial or non-issues, and fishing for a payout.
Want to see how common this is? Check out:
- Troy Hunt's excellent blog post on beg bounties
- Sophos' warning on random bug bounty emails
- Intigriti's insights into the latest beg bounty scam
My (Admittedly Snarky) Reply
I couldn't resist. Here's the reply I sent:
Hi Peter,
Thanks for your email and your keen interest in my "app". Spoiler alert: there isn't one. Just a humble personal blog here.
Your message hits all the classic marks of a beg bounty reconnaissance email:
Generic "Hi Team" greeting, because who needs names?
Vague claims of "security vulnerabilities" with zero specifics
Polite inquiry about a bug bounty program (spoiler: none here, James)
No proof, no details, just good old-fashioned mystery
Friendly tone crafted to reel in easy targets
Email address proudly featuring "007", very covert ops of you
Bravo. You almost had me convinced.
I'll be featuring this charming little interaction in a blog post soon, starring you, of course. If you ever feel like upgrading from vague templates to actual evidence, I'm all ears. Until then, happy fishing!
Cheers,
Amedee
No Reply
Sadly, Peter didn't write back.
No scathing rebuttal.
No actual vulnerabilities.
No awkward attempt at pivoting.
Just… silence.
#sadface
#crying
#missionfailed
A Note for Fellow Nerds
If you've got a domain name, no matter how small, there's a good chance you'll get emails like this.
Hereâs how to handle them:
- Stay calm: most of these are low-effort probes.
- Don't pay: you owe nothing to random strangers on the internet.
- Don't panic: vague threats are just that, vague.
- Do check your stuff occasionally for actual issues.
- Bonus: write a blog post about it and enjoy the catharsis.
For more context on this phenomenon, don't miss the links above.
tl;dr
If your "security researcher":
- doesn't say what they found,
- doesn't mention your actual domain or service,
- asks for a bug bounty up front,
- signs with a Gmail address ending in 007…
...it's probably not the start of a beautiful friendship.
Got a similar email? Want help crafting a reply that's equally professional and petty?
Feel free to drop a comment or reach out; I'll even throw in a checklist.
Until then: stay patched, stay skeptical, and stay snarky.
August 05, 2025
Rethinking DOM from first principles
Browsers are in a very weird place. While WebAssembly has succeeded, even on the server, the client still feels largely the same as it did 10 years ago.
Enthusiasts will tell you that accessing native web APIs via WASM is a solved problem, with some minimal JS glue.
But the question not asked is why you would want to access the DOM. It's just the only option. So I'd like to explain why it really is time to send the DOM and its assorted APIs off to a farm somewhere, with some ideas on how.
I won't pretend to know everything about browsers. Nobody knows everything anymore, and that's the problem.
The 'Document' Model
Few know how bad the DOM really is. In Chrome, document.body now has 350+ keys, grouped roughly like this:

This doesn't include the CSS properties in document.body.style of which there are... 660.
The boundary between properties and methods is very vague. Many are just facades with an invisible setter behind them. Some getters may trigger a just-in-time re-layout. There's ancient legacy stuff, like all the onevent properties nobody uses anymore.
The DOM is not lean and continues to get fatter. Whether you notice this largely depends on whether you are making web pages or web applications.
Most devs now avoid working with the DOM directly, though occasionally some purist will praise pure DOM as being superior to the various JS component/templating frameworks. What little declarative facilities the DOM has, like innerHTML, do not resemble modern UI patterns at all. The DOM has too many ways to do the same thing, none of them nice.
connectedCallback() {
const
shadow = this.attachShadow({ mode: 'closed' }),
template = document.getElementById('hello-world')
.content.cloneNode(true),
hwMsg = `Hello ${ this.name }`;
Array.from(template.querySelectorAll('.hw-text'))
.forEach(n => n.textContent = hwMsg);
shadow.append(template);
}
Web Components deserve a mention, being the web-native equivalent of JS component libraries. But they came too late and are unpopular. The API seems clunky, with its Shadow DOM introducing new nesting and scoping layers. Proponents kinda read like apologetics.
The Achilles' heel is the DOM's SGML/XML heritage, which makes everything stringly typed. React-likes do not have this problem; their syntax only looks like XML. Devs have learned not to keep state in the document, because it's inadequate for it.


For HTML itself, there isn't much to critique because nothing has changed in 10-15 years. Only ARIA (accessibility) is notable, and only because this was what Semantic HTML was supposed to do and didn't.
Semantic HTML never quite reached its goal. Despite dating from around 2011, there is e.g. no <thread> or <comment> tag, when those were well-established idioms. Instead, an article inside an article is probably a comment. The guidelines are... weird.
There's this feeling that HTML always had paper-envy, and couldn't quite embrace or fully define its hypertext nature, and did not trust its users to follow clear rules.
Stewardship of HTML has since firmly passed to WHATWG, really the browser vendors, who have not been able to define anything more concrete as a vision, and have instead just added epicycles at the margins.
Along the way even CSS has grown expressions, because every templating language wants to become a programming language.
Editability of HTML remains a sad footnote. While technically supported via contentEditable, actually wrangling this feature into something usable for applications is a dark art. I'm sure the Google Docs and Notion people have horror stories.
Nobody really believes in the old gods of progressive enhancement and separating markup from style anymore, not if they make apps.
Most of the applications you see nowadays will kitbash HTML/CSS/SVG into a pretty enough shape. But this comes with immense overhead, and is looking more and more like the opposite of a decent UI toolkit.
The Slack input box
Off-screen clipboard hacks
Lists and tables must be virtualized by hand, taking over for layout, resizing, dragging, and so on. Making a chat window's scrollbar stick to the bottom is somebody's TODO, every single time. And the more you virtualize, the more you have to reinvent find-in-page, right-click menus, etc.
The web blurred the distinction between UI and fluid content, which was novel at the time. But it makes less and less sense, because the UI part is a decade obsolete, and the content has largely homogenized.
CSS is inside-out
CSS doesn't have a stellar reputation either, but few can put their finger on exactly why.
Where most people go wrong is to start with the wrong mental model, approaching it like a constraint solver. This is easy to show with e.g.:
<div>
<div style="height: 50%">...</div>
<div style="height: 50%">...</div>
</div>
<div>
<div style="height: 100%">...</div>
<div style="height: 100%">...</div>
</div>
The first might seem reasonable: divide the parent into two halves vertically. But what about the second?
Viewed as a set of constraints, it's contradictory, because the parent div is twice as tall as... itself. What will happen instead in both cases is the height is ignored. The parent height is unknown and CSS doesn't backtrack or iterate here. It just shrink-wraps the contents.
If you set e.g. height: 300px on the parent, then it works, but the latter case will still just spill out.

Outside-in and inside-out layout modes
Instead, your mental model of CSS should be applying two passes of constraints, first going outside-in, and then inside-out.
When you make an application frame, this is outside-in: the available space is divided, and the content inside does not affect sizing of panels.
When paragraphs stack on a page, this is inside-out: the text stretches out its containing parent. This is what HTML wants to do naturally.
By being structured this way, CSS layouts are computationally pretty simple. You can propagate the parent constraints down to the children, and then gather up the children's sizes in the other direction. This is attractive and allows webpages to scale well in terms of elements and text content.
CSS is always inside-out by default, reflecting its document-oriented nature. The outside-in is not obvious, because it's up to you to pass all the constraints down, starting with body { height: 100%; }. This is why they always say vertical alignment in CSS is hard.
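A minimal sketch of what passing the constraints down looks like, for a classic app frame:
html, body { height: 100%; margin: 0; }
.app     { height: 100%; display: flex; flex-direction: column; }
.toolbar { flex: none; }
.content { flex: 1; overflow: auto; }
Every ancestor has to participate: drop one link in the chain and the layout silently falls back to inside-out shrink-wrapping.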

Use flex grow and shrink for spill-free auto-layouts with completely reasonable gaps
The scenario above is better handled with a CSS3 flex box (display: flex), which provides explicit control over how space is divided.
Unfortunately flexing muddles the simple CSS model. To auto-flex, the layout algorithm must measure the "natural size" of every child. This means laying it out twice: first speculatively, as if floating in aether, and then again after growing or shrinking to fit:

This sounds reasonable but can come with hidden surprises, because it's recursive. Doing speculative layout of a parent often requires full layout of unsized children. e.g. to know how text will wrap. If you nest it right, it could in theory cause an exponential blow up, though I've never heard of it being an issue.
Instead you will only discover this when someone drops some large content in somewhere, and suddenly everything gets stretched out of whack. It's the opposite of the problem on the mug.
To avoid the recursive dependency, you need to isolate the children's contents from the outside, thus making speculative layout trivial. This can be done with contain: size, or by manually setting the flex-basis size.
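In code the isolation trick is short; a sketch:
.row       { display: flex; }
.row > div { flex: 1 1 0; contain: size; min-width: 0; }
With the flex-basis pinned to 0 and size containment on, the speculative pass no longer depends on the children's contents.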
CSS has gained a few constructs like contain or will-change, which work directly with the layout system, and drop the pretense of one big happy layout. It reveals some of the layer-oriented nature underneath, and is a substitute for e.g. using position: absolute wrappers to do the same.
What these do is strip off some of the semantics, and break the flow of DOM-wide constraints. These are overly broad by default and too document-oriented for the simpler cases.
This is really a metaphor for all DOM APIs.


The Good Parts?
That said, flex box is pretty decent if you understand these caveats. Building layouts out of nested rows and columns with gaps is intuitive, and adapts well to varying sizes. There is a "CSS: The Good Parts" here, which you can make ergonomic with sufficient love. CSS grids also work similarly, they're just very painfully... CSSy in their syntax.
But if you designed CSS layout from scratch, you wouldn't do it this way. You wouldn't have a subtractive API, with additional extra containment barrier hints. You would instead break the behavior down into its component facets, and use them Ă la carte. Outside-in and inside-out would both be legible as different kinds of containers and placement models.
The inline-block and inline-flex display models illustrate this: it's a block or flex on the inside, but an inline element on the outside. These are two (mostly) orthogonal aspects of a box in a box model.
Text and font styles are in fact the odd ones out, in hypertext. Properties like font size inherit from parent to child, so that formatting tags like <b> can work. But most of those 660 CSS properties do not do that. Setting a border on an element does not apply the same border to all its children recursively, that would be silly.
It shows that CSS is at least two different things mashed together: a system for styling rich text based on inheritance... and a layout system for block and inline elements, nested recursively but without inheritance, only containment. They use the same syntax and APIs, but don't really cascade the same way. Combining this under one style-umbrella was a mistake.
Worth pointing out: early ideas of relative em scaling have largely become irrelevant. We now think of logical vs device pixels instead, which is a far more sane solution, and closer to what users actually expect.
SVG is natively integrated as well. Having SVGs in the DOM instead of just as <img> tags is useful to dynamically generate shapes and adjust icon styles.
But while SVG is powerful, it's neither a subset nor superset of CSS. Even when it overlaps, there are subtle differences, like the affine transform. It has its own warts, like serializing all coordinates to strings.
CSS has also gained the ability to round corners, draw gradients, and apply arbitrary clipping masks: it clearly has SVG-envy, but falls very short. SVG can e.g. do polygonal hit-testing for mouse events, which CSS cannot, and SVG has its own set of graphical layer effects.
Whether you use HTML/CSS or SVG to render any particular element is based on specific annoying trade-offs, even if they're all scalable vectors on the back-end.
In either case, there are also some roadblocks. I'll just mention three:
- text-ellipsis can only be used to truncate unwrapped text, not entire paragraphs. Detecting truncated text is even harder, as is just measuring text: the APIs are inadequate. Everyone just counts letters instead.
- position: sticky lets elements stay in place while scrolling with zero jank. While tailor-made for this purpose, it's subtly broken. Having elements remain unconditionally sticky requires an absurd nesting hack, when it should be trivial.
- The z-index property determines layering by absolute index. This inevitably leads to a z-index-war.css where everyone is putting in a new number +1 or -1 to make things layer correctly. There is no concept of relative Z positioning.
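For the first of these, the usual workaround for truncating whole paragraphs is the line-clamp dance, sketched here with the legacy -webkit- spelling that still ships everywhere:

.clamp {
  display: -webkit-box;
  -webkit-box-orient: vertical;
  -webkit-line-clamp: 3;  /* show at most 3 lines, then an ellipsis */
  overflow: hidden;
}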
For each of these features, we got stuck with v1 of whatever they could get working, instead of providing the right primitives.
Getting this right isn't easy, it's the hard part of API design. You can only iterate on it, by building real stuff with it before finalizing it, and looking for the holes.
Oil on Canvas
So, DOM is bad, CSS is single-digit X% good, and SVG is ugly but necessary... and nobody is in a position to fix it?
Well no. The diagnosis is that the middle layers don't suit anyone particularly well anymore. Just an HTML6 that finally removes things could be a good start.
But most of what needs to happen is to liberate the functionality that is there already. This can be done in good or bad ways. Ideally you design your system so the "escape hatch" for custom use is the same API you built the user-space stuff with. That's what dogfooding is, and also how you get good kernels.
A recent proposal here is HTML in Canvas, to draw HTML content into a <canvas>, with full control over the visual output. It's not very good.
While it might seem useful, the only reason the API has the shape that it does is because it's shoehorned into the DOM: elements must be descendants of <canvas> to fully participate in layout and styling, and to make accessibility work. There are also "technical concerns" with using it off-screen.
One example given is a spinny cube demo.
To make it interactive, you attach hit-testing rectangles and respond to paint events. This is a new kind of hit-testing API. But it only works in 2D... so it seems 3D-use is only cosmetic? I have many questions.
Again, if you designed it from scratch, you wouldn't do it this way! In particular, it's absurd that you'd have to take over all interaction responsibilities for an element and its descendants just to be able to customize how it looks i.e. renders. Especially in a browser that has projective CSS 3D transforms.
The use cases not covered by that, e.g. curved re-projection, will also need more complicated hit-testing than rectangles. Did they think this through? What happens when you put a dropdown in there?
To me it seems like they couldn't really figure out how to unify CSS and SVG filters, or how to add shaders to CSS. Passing it thru canvas is the only viable option left. "At least it's programmable." Is it really? Screenshotting DOM content is 1 good use-case, but not what this is sold as at all.
The whole reason to do "complex UIs on canvas" is to do all the things the DOM doesn't do, like virtualizing content, just-in-time layout and styling, visual effects, custom gestures and hit-testing, and so on. It's all nuts and bolts stuff. Having to pre-stage all the DOM content you want to draw sounds... very counterproductive.
From a reactivity point-of-view it's also a bad idea to route this stuff back through the same document tree, because it sets up potential cycles with observers. A canvas that's rendering DOM content isn't really a document element anymore, it's doing something else entirely.
The actual Achilles heel of canvas is that you don't have any real access to system fonts, text layout APIs, or UI utilities. It's quite absurd how basic it is. You have to implement everything from scratch, including Unicode word splitting, just to get wrapped text.
The proposal is "just use the DOM as a black box for content." But we already know that you can't do anything except more CSS/SVG kitbashing this way. text-ellipsis and friends will still be broken, and you will still need to implement UIs circa 1990 from scratch to fix it.
It's all-or-nothing when you actually want something right in the middle. That's why the lower level needs to be opened up.
Where To Go From Here
The goals of "HTML in Canvas" do strike a chord, with chunks of HTML used as free-floating fragments, a notion that has always existed under the hood. It's a composite value type you can handle. But it should not drag 20 years of useless baggage along, while not enabling anything truly novel.
The kitbashing of the web has also resulted in enormous stagnation, and a loss of general UI finesse. When UI behaviors have to be mined out of divs, it limits the kinds of solutions you can even consider. Fixing this within DOM/HTML seems unwise, because there's just too much mess inside. Instead, new surfaces should be opened up outside of it.
My schtick here has become to point awkwardly at Use.GPU's HTML-like renderer, which does a full X/Y flex model in a fraction of the complexity or code. I don't mean my stuff is super great, no, it's pretty bare-bones and kinda niche... and yet definitely nicer. Vertical centering is easy. Positioning makes sense.
There is no semantic HTML or CSS cascade, just first-class layout. You don't need 61 different accessors for border* either. You can just attach shaders to divs. Like, that's what people wanted right? Here's a blueprint, it's mostly just SDFs.
Font and markup concerns only appear at the leaves of the tree, where the text sits. It's striking how you can do like 90% of what the DOM does here, without the tangle of HTML/CSS/SVG, if you just reinvent that wheel. Done by 1 guy. And yes, I know about the second 90% too.
The classic data model here is of a view tree and a render tree. What should the view tree actually look like? And what can it be lowered into? What is it being lowered into right now, by a giant pile of legacy crud?
Alt-browser projects like Servo or Ladybird are in a position to make good proposals here. They have the freshest implementations, and are targeting the most essential features first. The big browser vendors could also do it, but well, taste matters. Good big systems grow from good small ones, not bad big ones. Maybe if Mozilla hadn't imploded... but alas.
Platform-native UI toolkits are still playing catch up with declarative and reactive UI, so that's that. Native Electron-alternatives like Tauri could be helpful, but they don't treat origin isolation as a design constraint, which makes security teams antsy.
There's a feasible carrot to dangle for them though, namely in the form of better process isolation. Because of CPU exploits like Spectre, multi-threading via SharedArrayBuffer and Web Workers is kinda dead on arrival anyway, and that affects all WASM. The details are boring but right now it's an impossible sell when websites have to have things like OAuth and Zendesk integrated into them.
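Concretely, SharedArrayBuffer is only handed out on pages that opt into cross-origin isolation by serving these two response headers:

Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp

Every embedded resource must then opt in via CORS or CORP or it gets blocked outright, and COOP severs the window.opener link that OAuth popups rely on; hence the impossible sell.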
Reinventing the DOM to ditch all legacy baggage could coincide with redesigning it for a more multi-threaded, multi-origin, and async web. The browser engines are already multi-process... what did they learn? A lot has happened since Netscape, with advances in structured concurrency, ownership semantics, FP effects... all could come in handy here.
* * *
Step 1 should just be a data model that doesn't have 350+ properties per node tho.
Don't be under the mistaken impression that this isn't entirely fixable.
August 04, 2025
August 03, 2025

Lookat 2.1.0rc2 is the second release candidate of Lookat/Bekijk 2.1.0, a user-friendly Unix file browser/viewer that supports colored man pages.
The focus of the 2.1.0 release is to add ANSI Color support.
News
3 Aug 2025 Lookat 2.1.0rc2 Released
Lookat 2.1.0rc2 is the second release candidate of Lookat 2.1.0
ChangeLog
Lookat / Bekijk 2.1.0rc2
- Corrected italic color
- Don't reset the search offset when cursor mode is enabled
- Renamed strsize to charsize (ansi_strsize -> ansi_charsize, utf8_strsize -> utf8_charsize) to be less confusing
- Support for multiple ansi streams in ansi_utf8_strlen()
- Update default color theme to green for this release
- Update manpages & documentation
- Reorganized contrib directory
- Moved ci/cd related file from contrib/* to contrib/cicd
- Moved debian dir to contrib/dist
- Moved support script to contrib/scripts
Lookat 2.1.0rc2 is available at:
- https://www.wagemakers.be/english/programs/lookat/
- Download it directly from https://download-mirror.savannah.gnu.org/releases/lookat/
- Or at the Git repository at GNU savannah https://git.savannah.gnu.org/cgit/lookat.git/
Have fun!
August 01, 2025
Just activated Orange via eSIM on my Fairphone 6, and before I knew it "App Center" and "Phone" (both from the Orange group) but also … TikTok had been installed. I was not happy about that. App Center can't even be uninstalled, only deactivated. Fuckers!
July 30, 2025
Fantastic cover of Jamie Woon's "Night Air" by Lady Lynn. That double bass and that voice, magical! …
Amedee Van Gasse
Creating 10 000 Random Files & Analyzing Their Size Distribution: Because Why Not?
Ever wondered what it's like to unleash 10 000 tiny little data beasts on your hard drive? No? Well, buckle up anyway, because today we're diving into the curious world of random file generation, and then nerding out by calculating their size distribution. Spoiler alert: it's less fun than it sounds.
Step 1: Let's Make Some Files… Lots of Them
Our goal? Generate 10 000 files filled with random data. But not just any random sizes: we want a mean file size of roughly 68 KB and a median of about 2 KB. Sounds like a math puzzle? That's because it kind of is.
If you just pick file sizes uniformly at random, you'll end up with a median close to the mean, which is boring. We want a skewed distribution, where most files are small, but some are big enough to bring that average up.
The Magic Trick: Log-normal Distribution
Enter the log-normal distribution, a nifty way to generate lots of small numbers and a few big ones, just like real life. Using Python's NumPy library, we generate these sizes and feed them to good old /dev/urandom to fill our files with pure randomness.
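A quick sanity check on the numbers, mine rather than the post's: for a log-normal with parameters μ and σ, the median is e^μ and the mean is e^(μ + σ²/2). The script's μ = 9.0 and σ = 1.5 therefore give a theoretical median of e^9 ≈ 8.1 KB and a mean of e^10.125 ≈ 25 KB; to actually land on a 2 KB median you'd want μ = ln(2048) ≈ 7.6, with σ ≈ 2.7 to pull the mean up to around 68 KB.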
Here's the Bash script that does the heavy lifting:
#!/bin/bash
# Directory to store the random files
output_dir="random_files"
mkdir -p "$output_dir"
# Total number of files to create
file_count=10000
# Log-normal distribution parameters
mean_log=9.0 # Adjusted for ~68KB mean
stddev_log=1.5 # Adjusted for ~2KB median
# Function to generate random numbers based on log-normal distribution
generate_random_size() {
  python3 -c "import numpy as np; print(int(np.random.lognormal($mean_log, $stddev_log)))"
}
# Create files with random data
for i in $(seq 1 $file_count); do
  file_size=$(generate_random_size)
  file_path="$output_dir/file_$i.bin"
  head -c "$file_size" /dev/urandom > "$file_path"
  echo "Generated file $i with size $file_size bytes."
done
echo "Done. Files saved in $output_dir."
Easy enough, right? This creates a directory random_files and fills it with 10 000 files of sizes mostly small but occasionally wildly bigger. Don't blame me if your disk space takes a little hit!
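One performance note: the loop above starts a fresh Python interpreter for every single file. A variant of my own sketching (same parameters and output layout assumed) generates all the sizes in one NumPy call and streams them into the loop:

#!/bin/bash
# Faster variant: one Python invocation produces all sizes up front.
output_dir="random_files"
mkdir -p "$output_dir"
file_count=10000
mean_log=9.0
stddev_log=1.5
i=1
# Emit all random sizes, one per line, then consume them in a single pass.
python3 -c "import numpy as np; print('\n'.join(map(str, np.random.lognormal($mean_log, $stddev_log, $file_count).astype(int))))" |
while read -r file_size; do
  head -c "$file_size" /dev/urandom > "$output_dir/file_$i.bin"
  i=$((i + 1))
done
echo "Done. Files saved in $output_dir."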
Step 2: Crunching Numbers - The File Size Distribution
Okay, you've got the files. Now, what can we learn from their sizes? Let's find out the:
- Mean size: The average size across all files.
- Median size: The middle value when sizes are sorted, because averages can lie.
- Distribution breakdown: How many tiny files vs. giant files.
Here's a handy Bash script that reads file sizes and spits out these stats with a bit of flair:
#!/bin/bash
# Input directory (default to "random_files" if not provided)
directory="${1:-random_files}"
# Check if directory exists
if [ ! -d "$directory" ]; then
  echo "Directory $directory does not exist."
  exit 1
fi
# Array to store file sizes
file_sizes=($(find "$directory" -type f -exec stat -c%s {} \;))
# Check if there are files in the directory
if [ ${#file_sizes[@]} -eq 0 ]; then
  echo "No files found in the directory $directory."
  exit 1
fi
# Calculate mean
total_size=0
for size in "${file_sizes[@]}"; do
  total_size=$((total_size + size))
done
mean=$((total_size / ${#file_sizes[@]}))
# Calculate median
sorted_sizes=($(printf '%s\n' "${file_sizes[@]}" | sort -n))
mid=$(( ${#sorted_sizes[@]} / 2 ))
if (( ${#sorted_sizes[@]} % 2 == 0 )); then
  median=$(( (sorted_sizes[mid-1] + sorted_sizes[mid]) / 2 ))
else
  median=${sorted_sizes[mid]}
fi
# Display file size distribution
echo "File size distribution in directory $directory:"
echo "---------------------------------------------"
echo "Number of files: ${#file_sizes[@]}"
echo "Mean size: $mean bytes"
echo "Median size: $median bytes"
# Display detailed size distribution (optional)
echo
echo "Detailed distribution (size ranges):"
awk '{
  if ($1 < 1024) bins["< 1 KB"]++;
  else if ($1 < 10240) bins["1 KB - 10 KB"]++;
  else if ($1 < 102400) bins["10 KB - 100 KB"]++;
  else bins[">= 100 KB"]++;
} END {
  for (range in bins) printf "%-15s: %d\n", range, bins[range];
}' <(printf '%s\n' "${file_sizes[@]}")
Run it, and voilà: instant nerd satisfaction.
Example Output:
File size distribution in directory random_files:
---------------------------------------------
Number of files: 10000
Mean size: 68987 bytes
Median size: 2048 bytes
Detailed distribution (size ranges):
< 1 KB : 1234
1 KB - 10 KB : 5678
10 KB - 100 KB : 2890
>= 100 KB : 198
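If you only want the two headline numbers, a one-liner in the same spirit (my sketch, GNU find assumed) will do:

find random_files -type f -printf '%s\n' | sort -n | awk '{ a[NR] = $1; sum += $1 } END { printf "mean: %d bytes, median: %d bytes\n", sum/NR, (NR % 2 ? a[(NR+1)/2] : (a[NR/2] + a[NR/2+1]) / 2) }'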
Why Should You Care?
Besides the obvious geek cred, generating files like this can help:
- Test backup systems: can they handle weird file size distributions?
- Stress-test storage or network performance with real-world-like data.
- Understand your data patterns if you're building apps that deal with files.
Wrapping Up: Big Files, Small Files, and the Chaos In Between
So there you have it. Ten thousand random files later, and we've peeked behind the curtain to understand their size story. It's a bit like hosting a party and then figuring out who ate how many snacks.
Try this yourself! Tweak the distribution parameters, generate files, crunch the numbers, and impress your friends with your mad scripting skills. Or at least have a fun weekend project that makes you sound way smarter than you actually are.
Happy hacking!
July 29, 2025