Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

November 05, 2025

As a long-time Radiohead fan I was happy to see video of their first gig in many years, and although I did enjoy the set I don’t have the feeling I’m missing out on anything really. Great songs, great musicians, but nothing new, nothing really unexpected (not counting a couple of songs that they only very rarely perform live). I watched the vid and then switched to a The Smile live show from 2024…

Source

After days of boxes, labels, and that one mysterious piece of furniture that no one remembers where it belongs, we can finally say it: we’ve moved! And yes, mostly without casualties (except for a few missing screws).

The most nerve-wracking moment? Without a doubt, moving the piano. It got more attention than any other piece of furniture — and rightfully so. With a mix of brute strength, precision, and a few prayers to the gods of gravity, it’s now proudly standing in the living room.

We’ve also been officially added to the street WhatsApp group — the digital equivalent of the village well, but with emojis. It feels good to get those first friendly waves and “welcome to the neighborhood!” messages.

The house itself is slowly coming together. My IKEA PAX wardrobe is fully assembled, but the BRIMNES bed still exists mostly in theory. For now, I’m camping in style — mattress on the floor. My goal is to build one piece of furniture per day, though that might be slightly ambitious. Help is always welcome — not so much for heavy lifting, but for some body doubling and co-regulation. Just someone to sit nearby, hold a plank, and occasionally say “you’re doing great!”

There are still plenty of (banana) boxes left to unpack, but that’s part of the process. My personal mission: downsizing. Especially the books. But they won’t just be dumped at a thrift store — books are friends, and friends deserve a loving new home. 📚💚

Technically, things are running quite smoothly already: we’ve got fiber internet from Mobile Vikings, and I set up some Wi-Fi extenders and powerline adapters. Tomorrow, the electrician’s coming to service the air-conditioning units — and while he’s here, I’ll ask him to attach RJ45 connectors to the loose UTP cables that end in the fuse box. That means wired internet soon too — because nothing says “settled adult” like a stable ping.

And then there’s the garden. 🌿 Not just a tiny patch of green, but a real garden with ancient fruit trees and even a fig tree! We had a garden at the previous house too, but this one definitely feels like the deluxe upgrade. Every day I discover something new that grows, blossoms, or sneakily stings.

Ideas for cozy gatherings are already brewing. One of the first plans: living room concerts — small, warm afternoons or evenings filled with music, tea (one of us has British roots, so yes: milk included, coffee machine not required), and lovely people.

The first one will likely feature Hilde Van Belle, a (bal)folk friend who currently has a Kickstarter running for her first solo album:
👉 Hilde Van Belle – First Solo Album

I already heard her songs at the CaDansa Balfolk Festival, and I could really feel the personal emotions in her music — honest, raw, and full of heart.
You should definitely support her! 💛

The album artwork is created by another (bal)folk friend, Verena, which makes the whole project feel even more connected and personal.

Hilde (left) and Verena (right) at CaDansa
📷 Valentina Anzani

So yes: the piano’s in place, the Wi-Fi works, the garden thrives, the boxes wait patiently, and the teapot is steaming.
We’ve arrived.
Phew. We actually moved. ☕🌳📩🎶

November 04, 2025

I've been shooting with Leica M cameras for a few years. This page lists the lenses I use today and vintage lenses I would love to try.

Lenses I currently own

Lens | Years | Notes
Leica Summilux-M 35 mm f/1.4 ASPH FLE | 2010 – 2022 | Sharp, fast, and versatile. My everyday lens.
Leica Summilux-M 50 mm f/1.4 ASPH | 2023 – Present | Sharp, fast, and balanced with beautiful color and contrast.
Leica Noctilux-M 50 mm f/0.95 | 2008 – Present | Dreamy and full of depth, but not always sharp.

Vintage lenses I'd love to try or buy

Lately, I've become more interested in vintage lenses than new ones. Their way of handling light and color is softer and has more character than modern glass. My Noctilux showed me how much beauty there is in imperfection. These older lenses can be hard to find and often confusing to identify, so I'm keeping track of them here.

Lens | Years | Notes
Leica Elmarit-M 28 mm f/2.8 v4 | 1992 – 2006 | Compact and sharp with minimal distortion. A clean, classic wide-angle.
Leica Summicron-M 35 mm f/2 v4, German-made ("Bokeh King") | 1979 – 1999 | Famous for its smooth bokeh and gentle rendering. Avoid the Canadian version.
Leica Summilux-M 35 mm f/1.4 AA (Double Aspherical) | 1990 – 1994 | Rare and collectible. Produces warm tones and painterly backgrounds.
Leica Summitar 50 mm f/2 | 1939 – 1955 | Soft and nostalgic, with the kind of glow only old glass gives.
Leica Summicron 50 mm f/2 Rigid v2 | 1956 – 1968 | A crisp, confident lens that defined Leica's look for years.

November 02, 2025

If you prefer your content/data not to be used to train LinkedIn’s AI, you can opt out at https://www.linkedin.com/mypreferences/d/settings/data-for-ai-improvement (but only until tomorrow, Nov 3rd?). Crazy that this requires an explicit opt-out, by the way; it really should be opt-in (so off by default). More info can be found at tweakers.net. Feel free to share…

Source

November 01, 2025

When the AI bubble bursts

As Ed Zitron points out, the whole current bubble rests on 7 or 8 companies trading promises to trade billions of euros once every inhabitant of the planet spends an average of €100 a month to use ChatGPT. Which, to avoid a cascade of bankruptcies, has to happen within four years at most.

Spoiler: it will never happen.

(yes, Gee convinced me that spoilers aren’t so bad after all, and that if a spoiler really ruins a work, the work was rubbish to begin with)

Everyone is losing a fortune investing in AI infrastructure that nobody wants to pay to use.

Because apart from two or three managers lobotomized by business school, proudly posting generated texts on LinkedIn that they haven’t even read, ChatGPT is far from the promised revolution. That’s just what they want us to believe. It’s part of the marketing!

Just as they want us to believe that having an app for everything simplifies our lives when, if we think rationally, the cognitive cost of the app is often higher than the alternative, assuming an alternative even exists.

And it’s no coincidence that I use apps to illustrate the AI bubble.

Smartphone saturation

Apps were promoted with massive marketing to indirectly drive smartphone sales. A gigantic industry was built on the fact that the smartphone market was growing. And while that bubble didn’t burst, another phenomenon arrived very recently: saturation.

Everyone has a smartphone. Those who don’t have one have consciously decided not to. Those who do will replace it less and less often, because new models no longer bring anything and, on the contrary, remove features (how many people cling to an old model to keep a headphone jack?). The industry has finished growing and will settle into renewing what already exists.

If I were feeling mean, I’d say the dumbphone market has more growth potential than the smartphone market.

All the small companies and services that were talked into "developing an app" only did so to promote smartphone sales, and now it no longer serves any purpose. They find themselves managing three different customer bases: Android, iPhone, and those with neither. In short, you can stop "promoting your apps"; it will bring you nothing but extra work and extra costs. Let that be a lesson now that you are "investing in AI".

Spoiler: nobody has learned the lesson.

Growth, O Growth!

But capitalism needs growth or, at the very least, belief in future growth.

That is exactly what AI promises, and it is why we are force-fed AI marketing everywhere, why new phones have "AI chips", and why every app updates itself to force us to use AI.

Compared to the mobile explosion, however, I see one glaring, fundamental difference.

Many "normal" people don’t like it. The same people who were fascinated by the first smartphones, who were fascinated by ChatGPT’s answers, are furious to find an AI in Whatsapp or in their toothbrush. The people who believe most in AI are not the ones who use it most (which was the case with smartphones), but the ones who think others will use it!

But those others end up barely using it at all. And it’s not out of ethical or ecological or political reflection.

It’s just that it annoys the hell out of them. Even under relentless marketing bombardment, they feel it complicates their lives rather than simplifying them. I see more and more people refusing updates, trying to make a device last as long as possible to avoid having to buy the new one with touchscreens instead of buttons. Something I was already explaining in 2023.

We have reached a point of technological saturation.

When you refuse AI, or ask for a physical button to adjust the radio volume in your new car, salespeople and marketers simply file you under "ignoramuses who understand nothing about technology" (a category they put me in as soon as I open my mouth, and it amuses me greatly to be taken for a technological ignoramus).

But at some point, these ignoramuses will become a sizeable mass. These non-consumers will start to carry weight, economically and politically.

The hope of the technological ignoramuses

The economics of enshittification (or "crapitalism", as Charlie Stross calls it) is, without meaning to, promoting degrowth and low-tech!

Every system carries within it the seeds of the revolution that will overthrow it.

At least, I hope so


The web 2.0 bubble burst and left behind an enormous infrastructure on which a few survivors (Amazon, Google) and a few newcomers (Facebook, Netflix) were able to prosper, literally appropriating the remains.

Sometimes a bubble leaves nothing behind, like the subprime bubble of 2008.

What will be left of the AI bubble we are in? The datacenters being built will quickly become obsolete. The destruction will be enormous.

But those datacenters require electricity. A lot of electricity. And, for a large part, renewable electricity, because it is currently the cheapest.

So my bet is that the bursting of the AI bubble will create an unprecedented supply of electricity. It will become almost free and, in some cases, will even have a negative price (because once the batteries are full, the surplus production has to go somewhere).

And if electricity is free, it becomes harder and harder to justify burning oil. The end of the AI bubble could also sound the end of an oil bubble that has lasted 70 years.

It sounds a bit utopian and, at the moment of the burst, it won’t be pretty. It could start tomorrow or in 10 years. The transition may well last more than a handful of years. But, clearly, a civilizational change is getting under way


We won’t get there without pain. The economic recession will feed every fascist delusion, now equipped with a surveillance infrastructure the maddest Stasi officer could never have dreamed of. But maybe, at the end of the tunnel, we will head toward what Tristan Nitot described in his novelette "VĂ©lorutopia". He claims to be inspired by, among others, Bikepunk. But I think he only says that to flatter me.

I’m Ploum and I’ve just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

October 30, 2025

We are pleased to announce the developer rooms that will be organised at FOSDEM 2026. Developer rooms are assigned to self-organising groups to work together on open source projects, to discuss topics relevant to a broader subset of the community, etc. The individual developer room organisers will issue their calls for participation in the next few days. The list below will be updated accordingly.

Topic | Call for Participation
AI Plumbers | CfP
Audio, Video & Graphics Creation |
Bioinformatics & Computational Biology | CfP
Browser and web platform | CfP
BSD, illumos, bhyve, OpenZFS | CfP
Building Europe’s Public Digital Infrastructure | CfP
Collaboration and


October 29, 2025

You know how you can make your bootloader sing a little tune?
Well
 what if instead of music, you could make it play Space Invaders?

Yes, that’s a real thing.
It’s called GRUB Invaders, and it runs before your operating system even wakes up.
Because who needs Linux when you can blast aliens straight from your BIOS screen? 🚀


🎶 From Tunes to Lasers

In a previous post — “Resurrecting My Windows Partition After 4 Years đŸ–„ïžđŸŽź” —
I fell down a delightful rabbit hole while editing my GRUB configuration.
That’s where I discovered GRUB_INIT_TUNE, spent hours turning my PC speaker into an 80s arcade machine, and learned far more about bootloader acoustics than anyone should. 😅

So naturally, the next logical step was obvious:
if GRUB can play music, surely it can play games too.
Enter: GRUB Invaders. đŸ‘ŸđŸ’„


🧩 What the Heck Is GRUB Invaders?

grub-invaders is a multiboot-compliant kernel game — basically, a program that GRUB can launch like it’s an OS.
Except it’s not Linux, not BSD, not anything remotely useful
 it’s a tiny Space Invaders clone that runs on bare metal.

To install it (on Ubuntu or Debian derivatives):

sudo apt install grub-invaders

Then, in GRUB’s boot menu, it’ll show up as GRUB Invaders.
Pick it, hit Enter, and bam! — no kernel, no systemd, just pew-pew-pew.
Your CPU becomes a glorified arcade cabinet. 🕹
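
If the entry doesn’t show up on its own, you can add one by hand. Here’s a minimal sketch for /etc/grub.d/40_custom, assuming the package dropped the game image at /boot/invaders.exec (a hypothetical path; check the real one with dpkg -L grub-invaders), followed by a sudo update-grub:

menuentry "GRUB Invaders" {
    # Load the game as a Multiboot kernel; no initrd, no OS required.
    multiboot /boot/invaders.exec
}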

Image: https://libregamewiki.org/GRUB_Invaders

🛠 How It Works

Under the hood, GRUB Invaders is a multiboot kernel image (yep, same format as Linux).
That means GRUB can load it into memory, set up registers, and jump straight into its entry point.

There’s no OS, no drivers — just bare hardware, video memory, and a lot of clever old-school trickery.
Basically: the game is entered in 32-bit protected mode (that’s what the Multiboot spec mandates), paints directly to video memory, and reads the keyboard for controls.
It’s a beautiful reminder that once upon a time, you could build a whole game in a few kilobytes.
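
To make that concrete, here’s a small Python sketch (purely illustrative, my own) that checks whether a binary carries the Multiboot v1 header GRUB scans for: a 32-bit-aligned magic number 0x1BADB002 somewhere in the first 8192 bytes, with magic, flags, and checksum summing to zero:

import struct

MAGIC = 0x1BADB002  # Multiboot v1 magic number

def find_multiboot_header(path):
    """Scan the first 8192 bytes for a valid, 4-byte-aligned header."""
    with open(path, "rb") as f:
        head = f.read(8192)
    for off in range(0, len(head) - 11, 4):
        magic, flags, checksum = struct.unpack_from("<III", head, off)
        if magic == MAGIC and (magic + flags + checksum) & 0xFFFFFFFF == 0:
            return off, flags
    return None

# The path is an assumption; dpkg -L grub-invaders shows the real one.
print(find_multiboot_header("/boot/invaders.exec"))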


🧼 Technical Nostalgia

Installed size?

Installed-Size: 30
Size: 8726 bytes

Yes, you read that right: under 9 KB.
That’s less than one PNG icon on your desktop.
Yet it’s fully playable — proof that programmers in the ’80s had sorcery we’ve since forgotten. đŸ§™â€â™‚ïž

The package is ancient but still maintained enough to live in the Ubuntu repositories:

Homepage: http://www.erikyyy.de/invaders/
Maintainer: Debian Games Team
Enhances: grub2-common

So you can still apt install it in 2025, and it just works.


🧠 Why Bother?

Because you can.

Because sometimes it’s nice to remember that your bootloader isn’t just a boring chunk of C code parsing configs.
It’s a tiny virtual machine, capable of loading kernels, playing music, and — if you’re feeling chaotic — defending the Earth from pixelated aliens before breakfast. ☕

It’s also a wonderful conversation starter at tech meetups:

“Oh, my GRUB doesn’t just boot Linux. It plays Space Invaders. What does yours do?”


⚙ A Note on Shenanigans

Don’t worry — GRUB Invaders doesn’t modify your boot process or mess with your partitions.
It’s launched manually, like any other GRUB entry.
When you’re done, reboot, and you’re back to your normal OS.
Totally safe. (Mostly. Unless you lose track of time blasting aliens.)


🏁 TL;DR

  • grub-invaders lets you play Space Invaders in GRUB.
  • It’s under 9 KB, runs without an OS, and is somehow still in Ubuntu repos.
  • Totally useless. Totally delightful.
  • Perfect for when you want to flex your inner 8-bit gremlin.

Abstract art of a figure surrounded by swirling blue energy and birds, symbolizing motion and orchestration.

Last summer, I was building a small automation in n8n when I came across Activepieces. Both tools promise the same thing: connect your applications, automate your workflows, and host it yourself. But when I clicked through to Activepieces' GitHub repo, I noticed it's released under the MIT license. Truly Open Source, not just source-available like n8n.

As I dug deeper into these tools, something crystallized for me: business logic is moving out of individual applications and into the orchestration layer.

Today, most organizations run on dozens of disconnected tools. A product launch means logging into Mailchimp for email campaigns, Salesforce for lead tracking, Google Analytics for performance monitoring, Drupal for content publishing, Slack for team coordination, and a spreadsheet to keep everything synchronized. We copy data between systems, paste it into different formats, and manually trigger each step. In other words, most organizations are still doing orchestration by hand.

With orchestration tools maturing, this won't stay manual forever. That led me to an investment thesis that I call the Orchestration Shift: the tools we use to connect systems are becoming as important as the systems themselves.

This shift could change how we think about enterprise software architecture. For the last decade, we've talked about the "marketing technology stack" or "martech stack": collections of tools connected through rigid, point-to-point integrations. Orchestration changes this fundamentally. Instead of each tool integrating directly with others, an orchestration layer coordinates how they work together: the "martech stack" becomes a "martech network".

Why I invested in Activepieces

I believe that in the next five to ten years, orchestration platforms like Activepieces are likely to become critical infrastructure in many organizations. If that happens, this shift needs Open Source infrastructure. Not only proprietary SaaS platforms or source-available licenses with commercial restrictions, but truly open infrastructure.

The world benefits when critical infrastructure has strong Open Source alternatives. Linux gave us an alternative to proprietary operating systems. MySQL and PostgreSQL gave us alternatives to Oracle. And of course, Drupal and WordPress gave us alternatives to dozens of proprietary CMSes.

That is why Activepieces stood out: it is Open Source and positioned for an important market shift.

So I reached out to Ash Samhouri, their co-founder and CEO, to learn more about their vision. After a Zoom call, I came away impressed by both the mission and the momentum. When I got the opportunity to invest, I took it.

A couple months later, n8n raised over $240 million at a $2.5 billion valuation, validation that the orchestration market was maturing rapidly.

I invested not just money, but also time and effort. Over the summer, I worked with JĂŒrgen Haas to create a Drupal integration for Activepieces and the orchestration module for Drupal. Both shipped the week before DrupalCon Vienna, where I demonstrated them in my opening keynote.

How orchestration changes platforms

Consider what this means for platforms like Drupal, which I have led for more than two decades. Drupal has thousands of contributed modules that integrate with external services. But if orchestration tools begin offering those same integrations in a way that is easier and more powerful to use, we have to ask how Drupal's role should evolve.

Drupal could move from being the central hub that manages integrations to becoming a key node within this larger orchestration network.

In this model, Drupal continues managing and publishing content while also acting as a connected participant in such a network. Events in Drupal can trigger workflows across other systems, and orchestration tools can trigger actions back in Drupal. This bidirectional connection makes both more powerful. Drupal gains capabilities without adding complexity to its core, while orchestration platforms gain access to rich content, structured data, publishing workflows, and more.

Drupal can also learn architecturally from these orchestration platforms. Tools like n8n and Activepieces use a simple but powerful pattern: every operation has defined inputs and outputs that can be chained together to build workflows. Drupal could adopt this same approach, making it easier to build internal automations and positioning Drupal as an even more natural participant in orchestration networks.
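
To make that pattern concrete, here’s a toy sketch (the names and helpers are mine, not Activepieces’ or n8n’s actual API) in which every operation maps an input dict to an output dict, and a workflow is just a chain of such steps:

from typing import Callable

Step = Callable[[dict], dict]

def workflow(*steps: Step) -> Step:
    """Chain steps so each one's output dict becomes the next one's input."""
    def run(payload: dict) -> dict:
        for step in steps:
            payload = step(payload)
        return payload
    return run

def form_submitted(payload: dict) -> dict:
    # An "operation" with a defined input (form data) and output (adds email).
    return {**payload, "email": payload["form"]["email"]}

def create_crm_contact(payload: dict) -> dict:
    print(f"CRM contact created for {payload['email']}")
    return payload

launch = workflow(form_submitted, create_crm_contact)
launch({"form": {"email": "visitor@example.com"}})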

We have seen similar shifts before. TCP/IP did not make telephones irrelevant; it changed where the intelligence lived. Phones became endpoints in a network defined by the protocol connecting them. Orchestration may follow a similar path, becoming the layer that coordinates how business systems work together.

Where orchestration is heading

Today, orchestration platforms handle workflow automation: when X happens, do Y. Form submissions create CRM entries, send email notifications, post Slack updates. I demonstrated this pattern in my DrupalCon Vienna keynote, showing how predefined workflows eliminate manual work and custom integration code.

But orchestration is evolving toward something more powerful: digital workers. These AI-driven agents will understand context, make decisions, and execute complex tasks across platforms. A digital worker could interpret a goal like "Launch the European campaign for our product launch", analyze what needs to happen, build the workflows, coordinate across your martech network, execute them, and report results.

Tools like Activepieces and protocols like the Model Context Protocol are laying the groundwork for this future. We're moving from automation (executing predefined steps) to autonomy (understanding intent and figuring out how to achieve it). The future will likely require both: deterministic workflows for reliability and consistency, combined with AI-driven decision-making for flexibility and intelligence.

This shift makes the orchestration layer even more critical. It's not just connecting systems anymore; it's where business intelligence and decision-making will live.

Conclusion

When I first clicked through to Activepieces' GitHub repo last summer, I was looking for a tool to automate a workflow. What I found was something bigger: a glimpse of how business software architecture is fundamentally changing. I've been thinking about it since.

To me, the question isn't whether orchestration will become critical infrastructure. It's whether that infrastructure will be open and built collaboratively. That is a future worth investing in, both with capital and with code.

October 28, 2025

On October 10th, 2025, we released MySQL 9.5, the latest Innovation Release. As usual, we released bug fixes for 8.0 and 8.4 LTS, but this post focuses on the newest release. In this release, we can see contributions related to Connector/J and Connector/NET, as well as to different server categories. Connector/J [
]

October 27, 2025

What will the tool make of me?

I can’t resist sharing this excerpt from "L’odyssĂ©e du pingouin cannibale" by the inimitable Yann Kerninon, philosopher and anarcho-cyclist punk rocker:

When people envy me for writing books and being a philosopher, I always want to reply "go to hell". In a television interview, the philosopher Kostas Axelos claimed that it was never the thinker who made the thought, but always the thought that made the thinker. He added that he would have liked it to be otherwise. To the surprised journalist asking him why, he answered with a slight smile: "Because it is the source of great suffering."

This idea that the thought makes the thinker is pushed even further by Marcello Vitali-Rosati in his excellent "Éloge du bug". In this book, which I warmly recommend, Marcello criticizes the Platonic dualism that has permeated Western thought for 2000 years: the thinkers versus the menial hands, the leaders versus the obedient, the virtual versus the real. This denigration of materiality has been pushed to its peak by the GAFAM, who try to hide the entire infrastructure they rely on. Our data lives in "the cloud"; we click to place an order and, magically, the package arrives at our door the next day.

When a student tells me his phone connects to a satellite, when a politician is surprised that undersea cables still exist, when a user equates "wifi" with the internet, it is not simple ignorance, as I had always believed. It is in fact the result of decades of brainwashing and marketing trying to make us forget materiality, trying to convince us that we are all "decision-makers" served by a magical genie.

Marcello draws the parallel with Aladdin’s genie. Because, in case you hadn’t noticed, Aladdin is uncultivated. He wants "beautiful clothes" but has no idea what makes clothes beautiful or not. He makes no choices. He is entirely under the sway of the genie, who takes every decision. He believes he is the master; he is the genie’s plaything.

I would even push the analogy further by invoking the emperor’s new clothes: once Aladdin is completely dependent on the genie, the genie will supply him with "invisible" clothes while convincing him they are the most beautiful of all. This process is now known as "enshittification".

Plotinus’s Neoplatonism held that writing was merely a vulgar task, subordinate to thought.

With ChatGPT and its ilk, Silicon Valley has invented neo-Neoplatonism. Thought itself becomes vulgar, subordinate to the idea. The great entrepreneur has the outline of an idea, or rather an intuitive desire. It falls to the underlings to realize it or, at the very least, to build a marketing campaign that alters reality, that convinces everyone this desire is real, brilliant, desirable, and feasible. That its author deserves the laurels. This is what I have called "the mystification of the Great Idea".

But it is not the thinker who makes the thought. It is the thought that makes the thinker.

It is not thought that makes writing; it is writing that makes thought.

The act of writing is physical, material. Using one tool rather than another greatly affects the writing and, consequently, the thought, and therefore the thinker himself. On this subject, I can only warmly recommend "La mécanique du texte" by Thierry Crouzet. The absence of that reference leapt out at me in "Éloge du bug", because the two books are very complementary.

If all this reflection sounds abstract, I have experienced it first-hand. By writing on a typewriter, of course, as I did for my novel Bikepunk.

But the deepest change I have lived through is probably tied to this blog.

Three years ago, I finally managed to leave Wordpress for a static blog that I generate with my own script.

Amusingly, Marcello Vitali-Rosati has just gone down the same path.

But it was not a long reflective process that led me there. It was being fed up with the complexity of Wordpress, realizing that I had lost the pleasure of writing and was finding it again on the Gemini network. I set things up without understanding all their ins and outs. I experimented. I was confronted with hundreds of micro-decisions I had never suspected. I learned an enormous amount about HTML while developing Offpunk and applied it to this blog. To be honest, I realized I had forgotten it was even possible to make a simple HTML page without JavaScript, without a CSS theme made by a professional. And yet, once it was online, I received nothing but praise for a site that is nonetheless minimal.

My blogging process changed completely. I went back to Vim after fully returning to Debian. My writing felt the effects. I was invited to speak about digital minimalism and low-tech.

But I did not join Gemini because I felt I was a digital minimalist at heart. I did not leave Wordpress out of love for low-tech. I did not create Offpunk because I am a command-line guru.

It is exactly the other way around! Gemini showed me a way of seeing and living digital minimalism. Programming this blog made me understand the point of low-tech. Creating Offpunk and using it made me a command-line devotee.

The thought makes the thinker! The tool makes the creator! Free software makes the hacker! The platform makes the ideology! The bicycle makes the fitness!

Perhaps we should stop asking "What can this tool do for me?" and replace it with "What will this tool make of me?".

For if the thought makes the thinker, the proprietary social network makes the fascist, the chatbot makes the naive fool, the PowerPoint slide makes the cretinous decision-maker.

What will this tool make of me?

Saying this question out loud, I suspect we will look differently at the use of certain tools, especially the ones that are "easier" or "that everyone uses".

What will this tool make of me?

Looking around us, there are in the end few tools for which the answer to this question is reassuring.

I’m Ploum and I’ve just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

October 23, 2025

The web is changing fast. AI now writes content, builds web pages, and answers questions directly, often bypassing websites entirely.

People often wonder what this means for Drupal, so at DrupalCon Vienna, I tackled this head-on. My message was simple: AI is the storm, but it's also the way through it. Instead of fighting AI, we're leaning into it.

My keynote focused on how Drupal is evolving across four product areas. We're making it easier to get started with Site Templates, enabling visual site building through Drupal Canvas, accelerating development with AI assistance, and exploring complex workflows with new orchestration tools.

If you missed the keynote, you can watch the video below, or download my slides (62 MB).

Vienna felt like a turning point. People could see the pieces coming together. Drupal is finding its footing in the AI era, leading in AI innovation, and ready to help shape what comes next for the web.

Growing Drupal with Site Templates

One of the most important ways to grow Drupal is to make it easier and faster to build new sites. We began that work with Recipes, a way to quickly add common features to a site. Recipes help people go from idea to a website in hours instead of days.

At DrupalCon Vienna, I talked about the next step in that journey: our first Site Template. Site Templates build on Recipes and also include a complete design with layouts, visual style, and sample content. The result is that you can go from a new Drupal install to a fully working website in minutes. It will be the easiest way yet to get started with Drupal.

Next, we plan to introduce more Site Templates and launch a Site Template Marketplace where anyone can discover, share, and build on templates for different use cases.

A new visual editing experience

At DrupalCon Vienna, the energy around Drupal Canvas was infectious. Some even called it "CanvasCon". Drupal Canvas sessions were often standing room only, just like the Drupal AI sessions.

I first showed an early version of Drupal Canvas at DrupalCon Barcelona in September 2024, when we launched Drupal's Starshot initiative. The progress we've made in just one year is remarkable. My keynote showed parts of Drupal Canvas in action, but for a deeper dive, I recommend watching this breakout session.

Version 1.0 of Drupal Canvas is scheduled for November 2025. Starting in January 2026, it will become the default page builder in Drupal CMS 2.0. After more than 15 months of development and countless contributors working to make Drupal easier for everyone, it's hard to believe we're almost there. This marks the beginning of a new chapter for how people create with Drupal.

What excites me most is what this solves. For years, building pages in Drupal required technical expertise. Drupal Canvas gives end-users a visual page builder that is both more powerful and easy to use. Plus, it supports React, which means front-end developers can contribute using skills they already have.

Drupal's accidental AI advantage

Every content management system faces defining moments. For Drupal, one came with the release of Drupal 8. We rebuilt Drupal from the ground up, adopting modern design patterns and improving configuration management, versioning, workflows, and more.

The transition was hard, but here is the surprising part: ten years later those decisions gave Drupal an unexpected advantage in today's AI-driven web. The architecture we created is exactly what AI systems need today. When AI modifies content, you need version control to roll back mistakes. When it builds pages, you need structured data, permissions, and workflows. Drupal already has those capabilities.

For years, Drupal prioritized flexibility and robustness while other platforms focused on ease of use. What once felt like extra complexity now makes perfect sense. Drupal has quietly become one of the most AI-ready platforms available.

AI is the storm, and the way through the storm

A human in a space suit and a large cyborg stand side by side before a vast blue wave or cloud, stirred up by a mysterious technological behemoth on the horizon. The image includes the text: "AI is the storm, and the way through it."

As I said in my keynote: "Some days AI terrifies me. An hour later it excites me. By the evening, I'm tired of hearing about it." Still, we can't ignore AI.

I first introduced AI as part of Starshot. Five months ago, it became its own dedicated track with the launch of the Drupal AI initiative. Since then, twenty-two agencies have backed it with funding and contributors, together contributing over one million dollars. This is the largest fundraising effort in Drupal's history.

The initiative is already producing impressive results. At DrupalCon Vienna, we released Drupal AI version 1.2, a major step forward for the initiative.

In my keynote, I also demonstrated three new AI capabilities:

  1. AI-powered page building: Drupal AI can now generate complete, designed pages in minutes using a component-based design system in Drupal Canvas. What site builders used to build in hours now happens in minutes while maintaining your site's structure and style.
  2. Context Control Center: Teams can define brand voice, target audiences, and key messages from a single UI. All AI agents draw from this source of truth.
  3. Autonomous agents: When you update information in the Context Control Center, such as a product price or company statistic, agents automatically find every instance throughout your site and propose updates. You review and approve changes before they go live.

Orchestration as a path to explore

Earlier this year, I wrote about the great digital agency unbundling. As AI automates more technical work, agencies need to evolve their business models and find new ways to create value.

One promising direction is orchestration: building systems and workflows that connect AI agents, content platforms, CRMs, and marketing tools into intelligent, automated workflows. I think of it as DXP 2.0.

Most organizations have complex marketing technology stacks. Connecting all the systems in their stack often requires custom code or repetitive manual tasks. This integration work can be time-consuming and hard to maintain.

Modern orchestration tools solve this by automating how information flows between systems. Instead of writing custom code, you can use no-code tools to define workflows that trigger automatically. When someone fills out a form, the system creates a CRM contact, sends a welcome email, and notifies your team without any manual work.

In my keynote, I showed how ECA and Activepieces can work together. JĂŒrgen Haas, who created ECA, and I collaborated on this integration. ECA lets you define automations inside Drupal using events, conditions, and actions. Activepieces is an open source automation platform similar to Zapier or n8n.
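
As a rough illustration of that event-condition-action idea, here is a pure sketch in Python; ECA itself is configured inside Drupal, and none of these names come from its API:

rules = []

def rule(event, condition, action):
    """Register an automation: on event, if condition holds, run action."""
    rules.append((event, condition, action))

def dispatch(event, payload):
    for ev, cond, act in rules:
        if ev == event and cond(payload):
            act(payload)

# "When a newsletter form is submitted, notify the team" as event/condition/action.
rule(
    "form:submit",
    lambda p: p.get("list") == "newsletter",                   # condition
    lambda p: print(f"Notify team: {p['email']} signed up"),   # action
)

dispatch("form:submit", {"list": "newsletter", "email": "visitor@example.com"})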

This approach lets us build user experiences that are better and smarter, and it positions Drupal to benefit from AI innovation happening across the broader ecosystem. The idea resonated in Vienna. People approached me enthusiastically with related projects and demos, including tools like Flowdrop and Drupal's MCP module.

Between now and DrupalCon Chicago, we're inviting the community to explore and expand on this work. Join us in #orchestration on Drupal Slack, test the new Orchestration module, connect more automation platforms, or help improve ECA. If this direction proves valuable, we'll share what we learned at DrupalCon Chicago.

Building the future together

At DrupalCon Vienna, I felt something shift. Sessions were packed. People were excited about Site Templates and the Marketplace. Drupal Canvas drew huge crowds, and even more agencies signed up to join the Drupal AI initiative. During contribution day, more people than usual showed up looking for ways to help.

That energy in Vienna reflected something bigger. AI is changing how people use the web and how we build for it. It can feel threatening, and it can feel full of possibility, but what became clear in Vienna is that Drupal is well positioned at this inflection point, with both momentum and direction.

What makes this moment special is how the community is responding with focus and collaboration. We are approaching it as a much more coordinated effort, while still leaving room for experimentation.

Vienna showed me that the Drupal community is ready to take this on together. We have navigated uncharted territory before, but this time there is a boldness and unity I have not seen in years. That is the way through the storm. I am proud to be part of it.

I want to extend my gratitude to everyone who contributed to making my presentation and demos a success. A special thank you to Adam G-H, Aidan Foster, ASH Sullivan, Bålint Kléri, Cristina Chumillas, Elliott Mower, Emma Horrell, Gåbor Hojtsy, Gurwinder Antal, James Abrahams, Jurgen Haas, Kristen Pol, Lauri Timmanee, Marcus Johansson, Martin Anderson-Clutz, Pamela Barone, Tiffany Farriss, Tim Lehnen, and Witze Van der Straeten. Many others contributed indirectly to make this possible. If I've inadvertently omitted anyone, please reach out.

October 22, 2025

I had just published my post about having dinner with Garry Kasparov when I got a call. Belgium's Prime Minister Bart De Wever had dropped out of a simultaneous exhibition where Kasparov would play 20 people at once. Did I want to take his seat?

Of course I didn't hesitate. Within hours, I was sitting across a chessboard from Kasparov. I was never going to win. The question was: in how many moves would I lose?

Kasparov stands across the chessboard, focused on his opponent's position. Playing against Garry Kasparov. Photo © Jelle Jansegers.

Kasparov opened with white, and I defended with black using the Caro-Kann defense. I blundered my rook on move 11. A mistake I'm still kicking myself over. But I kept fighting. A few times I made him pause and think for a minute or so. I resigned at move 25. None of the twenty players managed a draw or a win.

Kasparov leans toward his opponent across a long row of chessboards during the simultaneous exhibition. Playing against Garry Kasparov. Photo © Jelle Jansegers.

The event was livestreamed, with GM Nigel Short and FM Lennert Lenaerts providing commentary. Here is a snippet where they review my position:

This morning, I entered our game into Chess.com's analysis tool. Kasparov played with 94% accuracy, while I managed 80% accuracy (estimated 2100 ELO performance). Not bad for someone who hung a rook early.

I'm grateful I got to play him. It's a game I'll remember for the rest of my life.

Kasparov leans over the chessboard, thinking deeply during a game. Playing against Garry Kasparov. Photo © Jelle Jansegers.

Here is the game in PGN notation for anyone who wants to analyze it:

[Event "Simul Exhibition Antwerp"]
[Date "2025.10.21"]
[White "Kasparov, Garry"]
[Black "Buytaert, Dries"]
[Result "1-0"]
[Variant "Standard"]
[ECO "B12"]
[Opening "Caro-Kann Defense: Advance Variation, Tal Variation"]
1. e4 c6 2. d4 d5 3. e5 Bf5 4. h4 h5 
5. c4 e6 6. Nc3 Bb4 7. Qb3 Bxc3+ 8. bxc3 Qc7 9. Ba3 Nd7 
10. Nf3 Rb8 11. Bd6 Qc8 12. Bxb8 Qxb8 13. cxd5 exd5 14. c4 Ne7 
15. Bd3 Bxd3 16. Qxd3 dxc4 17. Qxc4 Nb6 18. Qc2 Qc8 19. Kf1 Qd7 
20. Re1 Kd8 21. Ng5 Rf8 22. e6 Qd5 23. exf7 Qc4+ 24. Qxc4 Nxc4 
25. Ne6+

And in FEN notation:

3k1r2/pp2nPp1/2p1N3/7p/2nP3P/8/P4PP1/4RK1R b - - 1 25
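
If you'd rather step through it in code, here is a minimal sketch using the python-chess library (an assumption on my part; any PGN-aware tool works), reading the PGN above from a file:

import chess.pgn

# Replay the game move by move; assumes the PGN above is saved as kasparov.pgn.
with open("kasparov.pgn") as f:
    game = chess.pgn.read_game(f)

board = game.board()
for move in game.mainline_moves():
    print(board.san(move))  # each move in algebraic notation
    board.push(move)

print(board.fen())  # should match the FEN above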

We don't have the keys yet, but I'm already planning a moving day on Wednesday, October 29. (To be confirmed, but it's getting closer!)


What I could use help with 💪

  ‱ Moving banana boxes (I'll make sure most of it is packed in advance).
    Rough estimate: about 30 boxes (I haven't counted them, I like to live dangerously).
  ‱ Disassembling and moving furniture:
    ‱ Bookshelf (IKEA KALLAX 5×5)
    ‱ Wardrobe
    ‱ Bed
    ‱ Desk
  ‱ Moving the freezer (from the ground-floor kitchen to the basement of the new house).

The boxes and furniture are currently on the 2nd floor. I'll try to drag some boxes downstairs beforehand, because stairs, yes.

Assembling furniture at the new address happens on another day.
Goal for the day: not getting overstimulated.


What I'm arranging myself 🚐

I'm arranging a small van through Dégage car sharing.


What I still need 🧰

  ‱ An electric screwdriver (for the IKEA stuff).
  ‱ Handy, stress-resistant people with a touch of organizational talent.
  ‱ A few cars that can drive back and forth, even if only for a few boxes.
  ‱ An emotional support crew who can say, in time: "Hey, break time."

Practical 📍

  ‱ Old address: Ledeberg
  ‱ New address: between Gent-Sint-Pieters station and the Sterre
  ‱ Distance: about 4 km
    (I'll share the exact addresses with the helpers.)

I'm setting up a WhatsApp group for coordination.


To wrap up 🍕

Moving Day Part 1 ends with free pizza.
Because honestly: hauling boxes is heavy work, but pizza makes everything better.


Want to come and help (with muscle power, a car, tools, or good vibes)?
Let me know; the more hands, the less stress!

October 21, 2025

When I was about 10 years old, my uncle gave me a chess computer. It was the "Kasparov Team-Mate Advanced Trainer", released in 1988. More than 35 years later, I still have it. Last night I was lucky enough to have dinner with the man whose name is on that device.

Garry Kasparov is one of the greatest chess players of all time. He was the number one chess player in the world for 21 years and became famous for his matches against IBM's Deep Blue. Since retiring from chess, he has become a prominent advocate for democracy, even running against Putin in 2008.

Dries Buytaert and Garry Kasparov. With Garry Kasparov in Antwerp, October 2025

During our conversation, Garry broke the news that Grandmaster Daniel Naroditsky had passed away unexpectedly. He was only 29. The news stopped me cold. For a moment, I just sat there, trying to process it.

I've probably watched every video Daniel Naroditsky published on his YouTube channel over the past four years. His videos made me fall in love with chess in a way I never had before. I was drawn not only to his mastery but to how generously he shared it. His real achievement wasn't his chess rating but how many people he made better.

Here I was, sitting across from the chess legend whose computer first introduced me to the game as a child, learning about the sudden loss of the person who reignited that passion decades later. One sitting in front of me, very much alive and passionately debating. The other suddenly gone.

It's strange how we can form connections with people we never meet. Through a name on a device, through videos we watch online, they become a part of our lives. When you meet one in person, the excitement is real. When you learn another has died, so is the grief.

I left that dinner thinking about the strangeness of it all. Two people who shaped my relationship with chess, colliding in one unexpected evening.

October 17, 2025

class AbstractCommand : public QObject
{
    Q_OBJECT
public:
    explicit AbstractCommand(QObject* a_parent = nullptr);
    Q_INVOKABLE virtual void execute() = 0;
    virtual void executeAsync() = 0;      // required by CompositeCommand::executeAsync below
    virtual bool canExecute() const = 0;
signals:
    void canExecuteChanged( bool a_canExecute );
    void executionCompleted();            // emitted when an asynchronous run finishes
};

AbstractCommand::AbstractCommand(QObject *a_parent)
    : QObject( a_parent )
{
}

class AbstractConfigurableCommand : public AbstractCommand
{
    Q_OBJECT
public:
    explicit AbstractConfigurableCommand(QObject* a_parent = nullptr);
    bool canExecute() const override;
    void setCanExecute( bool a_canExecute );
signals:
    void localCanExecuteChanged( bool a_canExecute );
private:
    bool m_canExecute;
};

AbstractConfigurableCommand::AbstractConfigurableCommand(QObject *a_parent)
    : AbstractCommand( a_parent )
    , m_canExecute( false ) { }

bool AbstractConfigurableCommand::canExecute() const
{
    return m_canExecute;
}

void AbstractConfigurableCommand::setCanExecute( bool a_canExecute )
{
    if( a_canExecute != m_canExecute ) {
        m_canExecute = a_canExecute;
        emit canExecuteChanged( m_canExecute );
        emit localCanExecuteChanged( m_canExecute );
    }
}

#include <QQmlEngine>   // for QQmlEngine::setObjectOwnership() in add()

#include "CompositeCommand.h"

/*! \brief Constructor for an empty initial composite command */
CompositeCommand::CompositeCommand( QObject *a_parent )
    : AbstractCommand ( a_parent ) {}

/*! \brief Constructor from a list of raw command pointers */
CompositeCommand::CompositeCommand( QList<AbstractCommand*> a_members, QObject *a_parent )
    : AbstractCommand ( a_parent )
{
    foreach (AbstractCommand* member, a_members) {
        registration(member);
        m_members.append( QSharedPointer<AbstractCommand>(member) );
    }
}
}

/*! \brief Constructor from a list of shared pointers */
CompositeCommand::CompositeCommand( QList<QSharedPointer<AbstractCommand>> a_members, QObject *a_parent )
    : AbstractCommand ( a_parent )
    , m_members ( a_members )
{
    foreach (const QSharedPointer<AbstractCommand>& member, m_members) {
        registration(member.data());
    }
}

/*! \brief Destructor */
CompositeCommand::~CompositeCommand()
{
    foreach (const QSharedPointer<AbstractCommand>& member, m_members) {
        deregistration(member.data());
    }
}

void CompositeCommand::executeAsync()
{
    foreach (const QSharedPointer<AbstractCommand>& member, m_members) {
        member->executeAsync();
    }
}

bool CompositeCommand::canExecute() const
{
    foreach (const QSharedPointer<AbstractCommand>& member, m_members) {
        if (!member->canExecute()) {
            return false;
        }
    }
    return true;
}

/*! \brief When one member's canExecute changes */
void CompositeCommand::onCanExecuteChanged( bool a_canExecute )
{
    // The sender just flipped from !a_canExecute to a_canExecute; AND in
    // all other members to get the composite's previous and current state.
    bool oldstate = !a_canExecute;
    bool newstate = a_canExecute;
    foreach (const QSharedPointer<AbstractCommand>& member, m_members) {
        if ( member.data() != sender() ) {
            oldstate &= member->canExecute();
            newstate &= member->canExecute();
        }
    }

    if (oldstate != newstate) {
        emit canExecuteChanged( newstate );
    }
}

/*! \brief When one's execution completes */
void CompositeCommand::onExecutionCompleted( )
{
    m_completedCount++;

    if ( m_completedCount == m_members.count( ) ) {
        m_completedCount = 0;
        emit executionCompleted();
    }
}


void CompositeCommand::registration( AbstractCommand* a_member )
{
    connect( a_member, &AbstractCommand::canExecuteChanged,
             this, &CompositeCommand::onCanExecuteChanged );
    connect( a_member, &AbstractCommand::executionCompleted,
             this, &CompositeCommand::onExecutionCompleted );
}

void CompositeCommand::deregistration( AbstractCommand* a_member )
{
    disconnect( a_member, &AbstractCommand::canExecuteChanged,
                this, &CompositeCommand::onCanExecuteChanged );
    disconnect( a_member, &AbstractCommand::executionCompleted,
                this, &CompositeCommand::onExecutionCompleted );
}

void CompositeCommand::handleCanExecuteChanged(bool a_oldCanExecute)
{
    bool newCanExecute = canExecute();
    if( a_oldCanExecute != newCanExecute )
    {
        emit canExecuteChanged( newCanExecute );
    }
}

void CompositeCommand::add(AbstractCommand* a_member)
{
    bool oldCanExecute = canExecute();

    QQmlEngine::setObjectOwnership ( a_member, QQmlEngine::CppOwnership );
    m_members.append( QSharedPointer<AbstractCommand>( a_member ) );
    registration ( a_member );

    handleCanExecuteChanged(oldCanExecute);
}

void CompositeCommand::add(const QSharedPointer<AbstractCommand>& a_member)
{
    bool oldCanExecute = canExecute();

    m_members.append( a_member );
    registration ( a_member.data() );

    handleCanExecuteChanged(oldCanExecute);
}

void CompositeCommand::remove(AbstractCommand* a_member)
{
    bool oldCanExecute = canExecute();

    QMutableListIterator<QSharedPointer<AbstractCommand>> i( m_members );
    while (i.hasNext()) {
        QSharedPointer<AbstractCommand> val = i.next();
        if ( val.data() == a_member) {
            deregistration(val.data());
            i.remove();
        }
    }

    handleCanExecuteChanged(oldCanExecute);
}

void CompositeCommand::remove(const QSharedPointer<AbstractCommand>& a_member)
{
    bool oldCanExecute = canExecute();

    deregistration(a_member.data());
    m_members.removeAll( a_member );

    handleCanExecuteChanged(oldCanExecute);
}

After a few months of maintaining ‘desinformatizia’, we took away their mystery.

I finally bought myself a (refurbished) new device that you can make calls with, and other useful things like that.

Naturally, it has to run SailfishOS.

And this time everything I need actually works. Good reception, GPS, 4G


October 15, 2025

So, you know that feeling when you’re editing GRUB for the thousandth time, because dual-booting is apparently a lifestyle choice?
In a previous post — Resurrecting My Windows Partition After 4 Years đŸ–„ïžđŸŽź — I was neck-deep in grub.cfg, poking at boot entries, fixing UUIDs, and generally performing a ritual worthy of system resurrection.

While I was at it, I decided to take a closer look at all those mysterious variables lurking in /etc/default/grub.
That’s when I stumbled upon something
 magical. ✹


🎶 GRUB_INIT_TUNE — Your Bootloader Has a Voice

Hidden among all the serious-sounding options like GRUB_TIMEOUT and GRUB_CMDLINE_LINUX_DEFAULT sits this gem:

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"

Wait, what? GRUB can beep?
Oh, not just beep. GRUB can play a tune. đŸŽș

Here’s how it actually works (per the GRUB manpage):

Format:

tempo freq duration [freq duration freq duration ...]
  • tempo — The base time for all note durations, in beats per minute.
    • 60 BPM → 1 second per beat
    • 120 BPM → 0.5 seconds per beat
  • freq — The note frequency in hertz.
    • 262 = Middle C, 0 = silence
  • duration — Measured in “bars” relative to the tempo.
    • With tempo 60, 1 = 1 second, 2 = 2 seconds, etc.

So 480 440 1 is basically GRUB saying “Hello, world!” through your motherboard speaker: at tempo 480, one duration unit is 60/480 = 0.125 seconds of 440 Hz, which is A4 in standard concert pitch as defined by ISO 16:1975.
And yes, this works even before your sound card drivers have loaded — pure, raw, BIOS-level nostalgia.


🧠 From Beep to Bop

Naturally, I couldn’t resist. One line turned into a small Python experiment, which turned into an audio preview tool, which turned into
 let’s say, “bootloader performance art.”

Want to make GRUB play a polska when your system starts?
You can. It’s just a matter of string length — and a little bit of mischief. 😏

There’s technically no fixed “maximum size” for GRUB_INIT_TUNE, but remember: the bootloader runs in a very limited environment. Push it too far, and your majestic overture becomes a segmentation fault sonata.

So maybe keep it under a few kilobytes unless you enjoy debugging hex dumps at 2 AM.


đŸŽŒ How to Write a Tune That Won’t Make Your Laptop Cry

Practical rules of thumb (don’t be that person):

  • Keep the inline tune under a few kilobytes if you want it to behave predictably.
  • Hundreds to a few thousands of notes is usually fine; tens of thousands is pushing luck.
  ‱ Each numeric value (pitch or duration) must be ≀ 65535.
  • Very long tunes simply delay the menu — that’s obnoxious for you and terrifying for anyone asking you for help.
    Keep tunes short and tasteful (or obnoxious on purpose).

đŸŽ” A Little Musical Grammar: Notes, Durations and Chords (Fake Ones)

Write notes as frequency numbers (Hz). Example: A4 = 440.

Prefer readable helpers: write a tiny script that converts D4 F#4 A4 into the numbers.

Example minimal tune:

GRUB_INIT_TUNE="480 294 1 370 1 440 1 370 1 392 1 494 1 294 1"

That’ll give you a jaunty, bouncy opener — suitable for mild neighbour complaints. 💃🎻
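
If you'd rather write note names than raw frequencies, here is a minimal Python sketch of such a helper (my own invention, assuming equal temperament with A4 = 440 Hz); fed the notes D4 F#4 A4 F#4 G4 B4 D4, it reproduces the tune above:

NOTES = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
         "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def freq(note):
    """Equal-temperament frequency in Hz, rounded; A4 = 440 Hz."""
    name, octave = note[:-1], int(note[-1])
    midi = NOTES[name] + (octave + 1) * 12   # C4 -> 60, A4 -> 69
    return round(440 * 2 ** ((midi - 69) / 12))

def tune(tempo, notes, duration=1):
    """Build a 'tempo freq dur freq dur ...' string for GRUB_INIT_TUNE."""
    parts = [str(tempo)]
    for n in notes:
        parts += [str(freq(n)), str(duration)]
    return " ".join(parts)

print(tune(480, ["D4", "F#4", "A4", "F#4", "G4", "B4", "D4"]))
# -> 480 294 1 370 1 440 1 370 1 392 1 494 1 294 1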

Chords? GRUB can’t play them simultaneously — but you can fake them by rapid time-multiplexing (cycling the chord notes quickly).
It sounds like a buzzing organ, not a symphony, but it’s delightful in small doses.
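
A rough sketch of that multiplexing in Python (the 15 ms slices and the tempo trick are my own choices, nothing GRUB defines): at tempo 60000, one duration unit lasts 60/60000 s = 1 ms, so each slice entry plays for slice_ms milliseconds:

def chord_slices(freqs, total_ms, slice_ms=15):
    """Fake a chord by cycling its notes in short slices."""
    parts, elapsed, i = [], 0, 0
    while elapsed < total_ms:
        parts += [str(freqs[i % len(freqs)]), str(slice_ms)]
        elapsed += slice_ms
        i += 1
    return parts

# A faked 500 ms D major chord (D4, F#4, A4) at 1 ms per duration unit:
print(" ".join(["60000"] + chord_slices([294, 370, 440], 500)))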

Fun fact 💟: this time-multiplexing trick isn’t new — it’s straight out of the 8-bit video game era.
Old sound chips (like those in the Commodore 64 and NES) used the same sleight of hand to make
a single channel pretend to play multiple notes at once.
If you’ve ever heard a chiptune shimmer with impossible harmonies, that’s the same magic. âœšđŸŽŒ


🧰 Tools I Like (and That You Secretly Want)

If you’re not into manually counting numbers, do this:

Use a small composer script (I wrote one) that:

  • Accepts melodic notation like D4 F#4 A4 or C4+E4+G4 (chord syntax).
  • Can preview via your system audio (so you don’t have to reboot to hear it).
  • Can install the result into /etc/default/grub and run update-grub (only as sudo).

Preview before you install. Always.
Your ears will tell you if your “ode to systemd” is charming or actually offensive.

For chords, the script time-multiplexes: e.g. for a 500 ms chord and 15 ms slices,
it cycles the chord notes quickly so the ear blends them.
It’s not true polyphony, but it’s a fun trick.

(If you want the full script I iterated on: drop me a comment. But it’s more fun to leave as an exercise to the reader.)


🧼 Limits, Memory, and “How Big Before It Breaks?”

Yes, my Red Team colleague will love this paragraph — and no, I’m not going to hand over a checklist for breaking things.

Short answer: GRUB doesn’t advertise a single fixed limit for GRUB_INIT_TUNE length.

Longer answer, responsibly phrased:

  ‱ Numeric limits: per note pitch/duration ≀ 65535 (uint16_t).
  • Tempo: can go up to uint32_t.
  • Parser & memory: the tune is tokenized at boot, so parsing buffers and allocators impose practical limits.
    Expect a few kilobytes to be safe; hundreds of kilobytes is where things get flaky.
  • Usability: if your tune is measured in minutes, you’ve already lost. Don’t be that.

If you want to test where the parser chokes, do it in a disposable VM, never on production hardware.
If you’re feeling brave, you can even audit the GRUB source for buffer sizes in your specific version. đŸ§©
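
And before you even reach for a VM, a quick sanity check against the numeric limits above costs nothing (a sketch; check_tune and its 4 KB default are my own conservative choices, not anything GRUB enforces by that name):

#!/usr/bin/env python3
# Sanity-check a GRUB_INIT_TUNE value before installing it: format
# (tempo + pitch/duration pairs) plus the numeric bounds listed above.

def check_tune(tune: str, max_bytes: int = 4096) -> None:
    values = [int(v) for v in tune.split()]        # raises on non-numbers
    tempo, notes = values[0], values[1:]
    assert tempo <= 0xFFFFFFFF, "tempo exceeds uint32_t"
    assert len(notes) % 2 == 0, "expected pitch/duration pairs"
    assert all(v <= 0xFFFF for v in notes), "note value exceeds uint16_t"
    assert len(tune) <= max_bytes, f"tune longer than {max_bytes} bytes"
    print(f"OK: {len(notes) // 2} notes, {len(tune)} bytes")

check_tune("480 294 1 370 1 440 1 370 1 392 1 494 1 294 1")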


⚙ How to Make It Sing

Edit /etc/default/grub and add a line like this:

GRUB_INIT_TUNE="480 440 1 494 1 523 1  587 1  659 3"

Then rebuild your config:

sudo update-grub

Reboot, and bask in the glory of your new startup sound.
Your BIOS will literally play you in. đŸŽ¶


💡 Final Thoughts

GRUB_INIT_TUNE is the operating-system equivalent of a ringtone for your toaster:
ridiculously low fidelity, disproportionately satisfying,
and a perfect tiny place to inject personality into an otherwise beige boot.

Use it for a smile, not for sabotage.

And just when I thought I’d been all clever reverse-engineering GRUB beeps myself

I discovered that someone already built a web-based GRUB tune tester!
👉 https://breadmaker.github.io/grub-tune-tester/

Yes, you can compose and preview tunes right in your browser —
no need to sacrifice your system to the gods of early boot audio.
It’s surprisingly slick.

Even better, there’s a small but lively community posting their GRUB masterpieces on Reddit and other forums.
From Mario theme beeps to Doom startup riffs, there’s something both geeky and glorious about it.
You’ll find everything from tasteful minimalist dings to full-on “someone please stop them” anthems. đŸŽźđŸŽ¶

Boot loud, boot proud — but please boot considerate. đŸ˜„đŸŽ»đŸ’»

October 14, 2025

Fairness instead of exactness

In which I talk about underwater hockey, military aircraft, and the rotting of punk oranges.

The arbitrariness of refereeing

It is a little-known historical fact, but I was, briefly, a sports referee. I refereed matches in the Belgian first division of underwater hockey (yes, that sport exists). Well, in reality, I was a referee because every second-division team had to send players to referee first-division matches. But in the end, I still did it.

I remember one particularly important match between the two best teams in Belgium, facing off for the title of Belgian champion.

In underwater hockey, there are normally two referees in the water. But during that match, the second referee turned out to be overwhelmed and, on every play, gave me the gesture meaning "I did not see the action" (since referees wear masks and snorkels, they communicate through codified gestures).

So I found myself refereeing, almost alone, what was probably the most important match of the championship. Despite my lack of experience, I quickly understood that the only way to keep control of a hard-fought match was to make firm decisions with confidence. The puck went out after a confused scrum? No time to analyze to the millimeter which player had touched it last: I simply had to make a call. Whatever I decided, the other team would protest. And indeed, that happened very quickly. In underwater hockey, only the captain may, in theory, address the referee. A player came up to me, complaining. I made the gesture asking whether he was the captain and, since he was not, I excluded him for 3 minutes. He started screaming. I added 2 more minutes of exclusion. It was particularly harsh. But from that moment on, not one of my decisions was contested.

I tried to make them as fair as possible, and the match went very well.

I learned something important: refereeing is not a scientific discipline. Is it physically possible to determine exactly which stick was the last to touch the puck before it went out? At what point exactly does a shot count as dangerous? Even the boundary between a goal and a narrow save on the line carries a certain degree of arbitrariness.

To make a fair decision, the referee can use human intuition. If a defender charged at an attacker and the puck went out, one can, in case of doubt, consider the out to be the defender's fault. The referee can also "feel" whether a foul was deliberate or not.

But all of that was only possible because, unlike football, underwater hockey is not equipped with cameras scrutinizing everything in slow motion. Football has become a sport I find absolutely impossible to enjoy: after scoring a goal, players now turn to the referee and wait to find out whether there was a millimetric offside 5 minutes earlier. Everything is analyzed backstage by a guy in front of a computer who transmits his decisions into the referee's earpiece. Or rather, the decisions made by a computer.

The human side of the game has completely disappeared, taking on the trappings of a pseudoscientific decision, an attempt to discover a "truth". Yet, scientifically, there is no truth to be had. Offside is called at the moment the ball leaves the passer's foot. That moment does not exist. The ball deforms; I defy anyone to determine the exact millisecond of that event. The same goes for deciding whether a line has been crossed or not. From which millimeter onward can one say that a sphere has crossed a line painted on blades of grass? The placement of the cameras and the stadium lighting alone will influence the decision. Even in cycling, it is sometimes incredibly hard to determine which bike crossed the line first. And there, the decision is made without any possibility of appeal.

Scientifically, it is very complicated to draw an exact boundary. One of my engineering school professors used to say that needle instruments are always more precise than digital displays, because you can see the "real" measurement
 even if it means moving your head a little so that it matches what you want!

To measure scientifically, you have to state hypotheses, discuss, take several measurements, repeat an experiment. Humanly, on the contrary, it is possible to make the decision that seems fairest in the moment itself. The decision can always be debated afterwards but, in the heat of the action, it is the one that was made.

And even if the decisions are not perfect, the fact that they appear fair at first sight creates a relationship of trust with the referee. The referee feels responsible and uses intuition to protect that reputation. When I refereed that famous hockey match, I never tried to make the most exact decision, but always the fairest one.

But the machine no longer allows fairness. Fairness gives way to an arbitrary exactness. The referee now obeys instructions whispered into an earpiece. He can no longer make decisions but, paradoxically, he remains responsible for them.

On complexity as a justification for non-decision

It is not just sports referees. Fighter pilots are now confronted with the same problem.

The F-35 is such a complex aircraft that it has become quite simply unflyable. On August 27, 2025, one crashed. The landing gear was stuck in a half-open position and the pilot attempted a series of "touch downs", a procedure as old as aviation, one that Buck Danny notably uses in Prototype FX-13, a 1961 album, to solve the very same problem.

Buck Danny did not have a hyper-complex computer on board, and he ultimately saves the plane. In 2025, the computer decided the procedure was a normal landing. The aircraft switched into "ground taxi" mode while it was lifting off again. In taxi mode at several hundred meters of altitude, the machine was of course uncontrollable, forcing the pilot to eject.

A predictable, "classic" mechanical problem was turned, thanks to computers, into a catastrophe.

Like that electronics professor who, to justify the importance of modern electronics, told us that thanks to electronics, his car had been repaired in under an hour on the very day he left on vacation. The breakdown in question? A defective electronic sensor that was inventing phantom failures.

No need for a real problem anymore. Nowadays, everything is automated! In 2024, a pilot ejected from his F-35 because, despite several reboots, his connected helmet kept reporting critical errors.

The problem: after the pilot ejected, the aircraft kept flying perfectly well for many long minutes. It seems his helmet simply had a software bug.

The subtlety of the story is that the pilot in question has now had his career put on hold and is being prosecuted for abandoning a functional aircraft. Except that he followed, to the letter, the procedure for the error messages displayed in his helmet.

Not only does complexity artificially create problems, it also prevents humans from gaining experience and making decisions. We no longer have pilots who "feel" their aircraft, but operators following computerized procedures. It is the same when my garage mechanic tells me that error 550 on my car requires a trip back to the dealership. Which, in the end, did nothing more than replace a hose, something my independent mechanic could have done directly if the software had not prevented him.

The price of permanent surveillance

If you read this blog, you are aware of the permanent spying we are all subjected to. And one of the direct consequences of that spying is that everything can now be scrutinized, even long after the fact. Every decision can be debated to determine whether, scientifically, it was the right one.

Football, once again, is the perfect example: after 90 minutes of play come hours, in some cases even entire days, of discussion between guys who replay every frame in slow motion to conclude that the referee, the coach, or the players made bad decisions.

Whether you are a referee, a fighter pilot, or an ordinary citizen, the only possible strategy for a reasonable human being is therefore to stop making decisions (which is already a decision in itself).

We have been had. We serve the machine blindly, drawing no benefit from it when everything goes well and getting our knuckles rapped when everything goes wrong. Which then serves as an excuse to put even more machines into the picture.

We have become that absurd, unnatural, biologically mechanical assemblage that Anthony Burgess called "A Clockwork Orange" (the meaning of the title is made perfectly explicit in the book, far more so than in the film).

And if you think AI can surpass you, it is precisely because you are acting like an AI, like a clockwork orange. By definition, ChatGPT will always be smarter than those who trust ChatGPT.

As the inimitable Yann Kerninon said back in 2017, we simply have to stop taking ourselves for machines. Become humans again. Organic oranges that rot and die but that are, just before that, full of flavor and vitamins.

Andreas recounts his experience with one of his colleagues who was unable to solve a particular problem (which happens to everyone, especially with your nose to the grindstone). The peculiarity, though, is that the colleague in question never tried to solve the problem. He tried to get ChatGPT to give him the answer.

When Andreas understood the cause of the problem, he tried to explain it to his colleague, but it was only once the colleague had fed the explanation to ChatGPT, and ChatGPT had agreed, that they could finally move forward.

Andreas concludes with the fact that the brain is a muscle. The less you use it, the more it atrophies, the more painful the act of thinking becomes, and the less you want to use it (and so the more it atrophies).

Let me take this opportunity to point out that ChatGPT and its ilk are literally machines for rephrasing your questions and nodding along with all your cognitive biases. But Olivier Ertzscheid explains it better than I do in his short "3 minutes chrono" post.

Where Andreas is too optimistic, however, is when he imagines that knowing how to think will become a rare and enviable quality on the job market.

Rare, yes.

Enviable, certainly not. Because to think is to question. Today's capitalism reminds me of the 1914-18 war. The workers are the cannon fodder. The intellectuals are the pacifists, the conscientious objectors. By trying to question the appalling butchery they were witnessing, all they earned was the right to be shot.

To question, to rebel, is to be a loser!

As I have said before: you do not get medals for resisting. Rather the opposite: medals exist to reward those who perpetuate the system without asking questions.

Refusing to become a clockwork orange means accepting that you will rot. It even means celebrating it, pushing cloves into your flesh so that the rot smells good.

Besides, it gives you a punk look, don't you think?

I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (preferably from your local bookshop)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

October 13, 2025

Dear lazyweb,

At work, we are trying to rotate the GPG signing keys for the Linux packages of the eID middleware

We created new keys, and they will soon be installed on all Linux machines that have the eid-archive package installed (they were already supposed to be, but we made a mistake).

Running some tests, however, I have a bit of a problem:

[wouter@rhel rpm-gpg]$ sudo rpm --import RPM-GPG-KEY-BEID-RELEASE
[wouter@rhel rpm-gpg]$ sudo rpm --import RPM-GPG-KEY-BEID-RELEASE-2025
fout: RPM-GPG-KEY-BEID-RELEASE-2025: key 1 import failed.
[wouter@rhel rpm-gpg]$ sudo rpm --import RPM-GPG-KEY-BEID-CONTINUOUS

This is on RHEL9.

The only difference between the old keys and the new one, apart of course from the fact that the old one is, well, old, is that the old one uses the RSA algorithm whereas the new one uses ECDSA on the NIST P-384 curve (the same algorithm as the one used by the eID card).

Does RPM not support ECDSA keys? Does anyone know where this is documented?

(Yes, I probably should have tested this before publishing the new key, but this is where we are)

October 09, 2025

When working on presentations, I like to extract my speaker notes to review the flow and turn them into blog posts. I'm doing this right now for my DrupalCon Vienna talk.

I used to do this manually, but with presentations often having 100+ slides, it gets tedious and isn't very repeatable. So I ended up automating this with a Python script.

Since I use Apple Keynote or Google Slides rather than Microsoft PowerPoint, I first export my presentations to PowerPoint format, then run my Python script.

If you've ever needed to pull speaker notes from a presentation for review, editing or blogging, here is my script and how to use it.

Speaker notes extractor script

Save this code as powerpoint-to-text.py:

#!/usr/bin/env python3
"""Extract speaker notes from PowerPoint presentations to text files."""

import sys
from pathlib import Path
from pptx import Presentation

def extract_speaker_notes(pptx_path: Path) -> tuple[str, int]:
    presentation = Presentation(pptx_path)
    notes_text = []

    for i, slide in enumerate(presentation.slides, 1):
        # has_notes_slide avoids implicitly creating an empty notes slide
        if slide.has_notes_slide and slide.notes_slide.notes_text_frame:
            notes = slide.notes_slide.notes_text_frame.text.strip()
            if notes:
                notes_text.append(f"=== Slide {i} ===\n{notes}\n")

    return "\n".join(notes_text), len(notes_text)

def main():
    if len(sys.argv) != 2:
        print("Usage: python powerpoint-to-text.py presentation.pptx")
        sys.exit(1)

    input_path = Path(sys.argv[1])

    if not input_path.exists():
        print(f"Error: File '{input_path}' not found")
        sys.exit(1)

    if input_path.suffix.lower() != '.pptx':
        print(f"Warning: '{input_path}' may not be a PowerPoint file")

    try:
        notes_text, notes_count = extract_speaker_notes(input_path)
    except Exception as e:
        print(f"Error reading presentation: {e}")
        sys.exit(1)

    output_path = input_path.with_suffix('.txt')
    output_path.write_text(notes_text, encoding='utf-8')

    print(f"Extracted {notes_count} slides with notes to {output_path}")

if __name__ == "__main__":
    main()

The script uses the python-pptx library to read PowerPoint files. This library understands the internal structure of .pptx files (which are zip archives containing XML). It provides a clean Python interface to access slides and their speaker notes. The script loops through each slide, checks if it has notes, and writes them to a text file.

Usage

I like to use uv to run Python code. uv is a fast, modern Python package manager that handles dependencies automatically:

$ uv run --with python-pptx powerpoint-to-text.py your-presentation.pptx

This saves a .txt file with your notes in the same directory as the input file, not the current directory or desktop.

The text file contains:

=== Slide 1 ===
Speaker notes from slide 1 ...

=== Slide 3 ===
Speaker notes from slide 3 ...

Only slides with speaker notes are included.

October 08, 2025

As a solo developer, I wear all the hats. đŸŽ©đŸ‘·â€â™‚ïžđŸŽš
That includes the very boring Quality Assurance Hatℱ — the one that says “yes, Amedee, you do need to check for trailing whitespace again.”

And honestly? I suck at remembering those little details. I’d rather be building cool stuff than remembering to run Black or fix a missing newline. So I let my robot friend handle it.

That friend is called pre-commit. And it’s the best personal assistant I never hired. đŸ€–


🧐 What is this thing?

Pre-commit is like a bouncer for your Git repo. Before your code gets into the club (your repo), it gets checked at the door:

“Whoa there — trailing whitespace? Not tonight.”
“Missing a newline at the end? Try again.”
“That YAML looks sketchy, pal.”
“You really just tried to commit a 200MB video file? What is this, Dropbox?”
“Leaking AWS keys now, are we? Security says nope.”
“Commit message says ‘fix’? That’s not a message, that’s a shrug.”

Pre-commit runs a bunch of little scripts called hooks to catch this stuff. You choose which ones to use — it’s modular, like Lego for grown-up devs. đŸ§±

When I commit, the hooks run. If they don’t like what they see, the commit gets bounced.
No exceptions. No drama. Just “fix it and try again.”

Is it annoying? Yeah, sometimes.
But has it saved my butt from pushing broken or embarrassing code? Way too many times.


🎯 Why I bother (as a hobby dev)

I don’t have teammates yelling at me in code reviews. I am the teammate.
And future-me is very forgetful. 🧓

Pre-commit helps me:

  ‱ 📏 Keep my code consistent
  ‱ 💣 Catch dumb mistakes before they become permanent
  ‱ 🕒 Spend less time cleaning up
  ‱ đŸ’Œ Feel a little more “pro” even when I’m hacking on toy projects
  ‱ 🧬 Work with any language. Even Bash, if you’re that kind of person.

Also, it feels kinda magical when it auto-fixes stuff and the commit just
 works.


🛠 Installing it with pipx (because I’m not a barbarian)

I’m not a fan of polluting my Python environment, so I use pipx to keep things tidy. It installs CLI tools globally, but keeps them isolated.
If you don’t have pipx yet:

python3 -m pip install --user pipx
pipx ensurepath

Then install pre-commit like a boss:

pipx install pre-commit

Boom. It’s installed system-wide without polluting your precious virtualenvs. Chef’s kiss. 👹‍🍳💋


📝 Setting it up

Inside my project (usually some weird half-finished script I’ll obsess over for 3 days and then forget for 3 months), I create a file called .pre-commit-config.yaml.

Here’s what mine usually looks like:

repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files

  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.28.0
    hooks:
      - id: gitleaks

  - repo: https://github.com/jorisroovers/gitlint
    rev: v0.19.1
    hooks:
      - id: gitlint

  - repo: https://gitlab.com/vojko.pribudic.foss/pre-commit-update
    rev: v0.8.0
    hooks:
      - id: pre-commit-update

đŸ§™â€â™‚ïž What this pre-commit config actually does

You’re not just tossing some YAML in your repo and calling it a day. This thing pulls together a full-on code hygiene crew — the kind that shows up uninvited, scrubs your mess, locks up your secrets, and judges your commit messages like it’s their job. Because it is.

📩 pre-commit-hooks (v5.0.0)

These are the basics — the unglamorous chores that keep your repo from turning into a dumpster fire. Think lint roller, vacuum, and passive-aggressive IKEA manual rolled into one.

  • trailing-whitespace:
    đŸš« No more forgotten spaces at the end of lines. The silent killers of clean diffs.
  • end-of-file-fixer:
    đŸ‘šâ€âš•ïž Adds a newline at the end of each file. Why? Because some tools (and nerds) get cranky if it’s missing.
  • check-yaml:
    đŸ§Ș Validates your YAML syntax. No more “why isn’t my config working?” only to discover you had an extra space somewhere.
  • check-added-large-files:
    🚹 Stops you from accidentally committing that 500MB cat video or .sqlite dump. Saves your repo. Saves your dignity.

🔐 gitleaks (v8.28.0)

Scans your code for secrets — API keys, passwords, tokens you really shouldn’t be committing.
Because we’ve all accidentally pushed our .env file at some point. (Don’t lie.)

✍ gitlint (v0.19.1)

Enforces good commit message style — like limiting subject line length, capitalizing properly, and avoiding messages like “asdf”.
Great if you’re trying to look like a serious dev, even when you’re mostly committing bugfixes at 2AM.

🔁 pre-commit-update (v0.8.0)

The responsible adult in the room. Automatically bumps your hook versions to the latest stable ones. No more living on ancient plugin versions.

đŸ§Œ In summary

This setup covers:

  • ✅ Basic file hygiene (whitespace, newlines, YAML, large files)
  • 🔒 Secret detection
  • ✉ Commit message quality
  • 🆙 Keeping your hooks fresh

You can add more later, like linters specific to your language of choice — think of this as your “minimum viable cleanliness.”

đŸ§© What else can it do?

There are hundreds of hooks. Some I’ve used, some I’ve just admired from afar:

  • black is a Python code formatter that says: “Shhh, I know better.”
  • flake8 finds bugs, smells, and style issues in Python.
  • isort sorts your imports so you don’t have to.
  • eslint for all you JavaScript kids.
  • shellcheck for Bash scripts.
  • 
 or write your own custom one-liner hook!

You can browse tons of them at: https://pre-commit.com/hooks.html


đŸ§™â€â™€ïž Make Git do your bidding

To hook it all into Git:

pre-commit install

Now every time you commit, your code gets a spa treatment before it enters version control. 💅

Wanna retroactively clean up the whole repo? Go ahead:

pre-commit run --all-files

You’ll feel better. I promise.


🎯 TL;DR

Pre-commit is a must-have.
It’s like brushing your teeth before a date: it’s fast, polite, and avoids awkward moments later. đŸȘ„💋
If you haven’t tried it yet: do it. Your future self (and your Git history, and your date) will thank you. 🙏

Use pipx to install it globally.
Add a .pre-commit-config.yaml.
Install the Git hook.
Enjoy cleaner commits, fewer review comments — and a commit history you’re not embarrassed to bring home to your parents. 😌💍

And if it ever annoys you too much?
You can always disable it
 like cancelling the date but still showing up in their Instagram story. 😈💔

git commit --no-verify

Want help writing your first config? Or customizing it for Python, Bash, JavaScript, Kotlin, or your one-man-band side project? I’ve been there. Ask away!

A person standing on a rock, arms wide open, overlooking a vast landscape.

Several years ago, I built a photo stream on my website and pretty much stopped posting on Instagram.

I didn't like how Instagram made me feel, or the fact that it tracks my friends and family when they look at my photos. And while it was a nice way to stay in touch with people, I never found a real sense of community there.

Instead, I wanted a space that felt genuinely mine. A place that felt like home, not a podium. No tracking, no popularity contests, no clickbait, no ads. Just a quiet corner to share a bit of my life, where friends and family could visit without being tracked.

Leaving Instagram meant giving up its biggest feature: subscribers and notifications. On Instagram, people follow you and automatically see your posts. On my website, you have to remember to visit.

To bridge this gap, I first added an RSS feed for my photos. But since not everyone uses RSS, I later started a monthly photo newsletter. Once a month, I pick my favorite photos, format them for email, and send them out.

After sending five or six photo newsletters, I could already feel my motivation fading. Each one only took about twenty minutes to make, but it still felt like too much friction. So, I decided to fix that.

Under the hood, my photo stream runs on Drupal, built as a custom module. I added two new routes to my custom Drupal module:

  • /photos/$year/$month: shows all photos for a given month, with the usual features: lazy loading, responsive images, Schema.org markup, and so on.
  • /photos/$year/$month?email=true: shows the same photos, but stripped down and formatted specifically for email clients.

Now my monthly workflow looks like this: visit /photos/2025/9?email=true, copy the source HTML, paste it into Buttondown, and hit 'Send'. That twenty-minute task became a one-minute task.

I spent two hours coding this to save nineteen minutes a month. In other words, it takes about six months before the time saved equals the time invested. The math checks out: 120 / 19 ≈ 6. My developer brain called it a win. My business brain called it a write-off.

But the real benefit isn't the time saved. The easier something is, the more likely I am to stick with it. Automation doesn't just save time. It also preserves good intentions.

Could I automate this completely? Sure. I'm a developer, remember. Maybe I will someday. But for now, that one-minute task each month feels right. It's just enough involvement to stay connected to what I'm sharing, without the friction that kills motivation.

October 02, 2025

Proposals for stands for FOSDEM 2026 can now be submitted! FOSDEM 2026 will take place at the ULB on the 31st of January and 1st of February 2026. As has become traditional, we offer free and open source projects a stand to display their work to the audience. You can share information, demo software, interact with your users and developers, give away goodies, sell merchandise or accept donations. All is possible! We offer you:
  ‱ One table (180x80cm) with a set of chairs and a power socket
  ‱ Fast wireless internet access
You can choose if you want the spot for the


I see so much social media content vanish into an algorithmic black hole.

Post a photo on Instagram and hundreds of people see it. Tweet a thought and it spreads across the internet in minutes. But that same content becomes invisible within days, buried beneath the constant scroll.

Six years ago, I deleted my Facebook account. And for the past two years, I've mostly stopped posting on Instagram and X (formerly Twitter).

It's bittersweet. I started using Twitter in 2007 after Evan Williams, one of Twitter's co-founders, personally introduced it to me at FooCamp. I loved what it stood for then.

Three men stand together, smiling and talking. Jeff Robbins (co-founder Lullabot + former rockstar), Larry Page (co-founder Google) and Evan Williams (co-founder Blogger.com, co-founder Twitter) at FooCamp 2007.

I still post on LinkedIn now and then, but LinkedIn has gone backwards too, with so many shallow, click-baity posts.

When people ask why I'm not active on social media anymore, the truth is simple: I'm happier without it.

I continue to publish on my own website, even if it's not as often as I'd like. Posting on your own site gives you something social media doesn't: permanence.

Every week I get emails from people who discover an old blog post or a photo I shared on my website years ago. That never happens with my old tweets or social media posts.

My best posts from a decade ago still show up in search results. They still spark conversations. They still get referenced, and they still help people solve problems.

Social media content has a half-life measured in hours or days. Blog posts can compound over years.

This is why I'm building my audience here, on the edge of the internet. Some days it feels like swimming against the current. But when I see a post I wrote years ago still helping someone today, I know it's worth it.

Social media gives you reach. Blog posts give you longevity.

October 01, 2025

Sometimes Linux life is bliss. I have my terminal, my editor, my tools, and Steam games that run natively. For nearly four years, I didn’t touch Windows once — and I didn’t miss it.

And then Fortnite happened.

My girlfriend Enya and her wife Kyra got hooked, and naturally I wanted to join them. But Fortnite refuses to run on Linux — apparently some copy-protection magic that digs into the Windows kernel, according to Reddit (so I don’t know if it’s true). It’s rare these days for a game to be Windows-only, but this one case was enough to shatter my Linux-only bubble. Suddenly, resurrecting Windows wasn’t a chore anymore; it was a quest for polyamorous Battle Royale glory. đŸ•č

My Windows 11 partition had been hibernating since November 2021, quietly gathering dust and updates in a forgotten corner of the disk. Why did it stop working back then? I honestly don’t remember, but apparently I had blogged about it. I hadn’t cared — until now.


The Awakening – Peeking Into the UEFI Abyss 🐧

I started my journey with my usual tools: efibootmgr and update-grub on Ubuntu. I wanted to see what the firmware thought was bootable:

sudo efibootmgr

Output:

BootCurrent: 0001
Timeout: 1 seconds
BootOrder: 0001,0000
Boot0000* Windows Boot Manager ...
Boot0001* Ubuntu ...

At first glance, everything seemed fine. Ubuntu booted as usual. Windows
 did not. It didn’t even show up in the GRUB boot menu. A little disappointing—but not unexpected, given that it hadn’t been touched in years. 😬

I knew the firmware knew about Windows—but the OS itself refused to wake up.


The Hidden Enemy – Why os-prober Was Disabled ⚙

I soon learned that recent Ubuntu versions disable os-prober by default. This is partly to speed up boot and partly to avoid probing unknown partitions automatically, which could theoretically be a security risk.

I re-enabled it in /etc/default/grub:

GRUB_DISABLE_OS_PROBER=false

Then ran:

sudo update-grub

Even after this tweak, Windows still didn’t appear in the GRUB menu.


The Manual Attempt – GRUB to the Rescue ✍

Determined, I added a manual GRUB entry in /etc/grub.d/40_custom:

menuentry "Windows" {
    insmod part_gpt
    insmod fat
    insmod chain
    search --no-floppy --fs-uuid --set=root 99C1-B96E
    chainloader /EFI/Microsoft/Boot/bootmgfw.efi
}

How I found the EFI partition UUID:

sudo blkid | grep EFI

Result: UUID="99C1-B96E"

Ran sudo update-grub
 Windows showed up in GRUB! But clicking it? Nothing.

At this stage, Windows still wouldn’t boot. The ghost remained untouchable.


The Missing File – Hunt for bootmgfw.efi 🗂

The culprit? bootmgfw.efi itself was gone. My chainloader had nothing to point to.

I mounted the NTFS Windows partition (at /home/amedee/windows) and searched for the missing EFI file:

sudo find /home/amedee/windows/ -type f -name "bootmgfw.efi"
/home/amedee/windows/Windows/Boot/EFI/bootmgfw.efi

The EFI file was hidden away, but thankfully intact. I copied it into the proper EFI directory:

sudo cp /home/amedee/windows/Windows/Boot/EFI/bootmgfw.efi /boot/efi/EFI/Microsoft/Boot/

After a final sudo update-grub, Windows appeared automatically in the GRUB menu. Finally, clicking the entry actually booted Windows. Victory! đŸ„ł


Four Years of Sleeping Giants 🕰

Booting Windows after four years was like opening a time capsule. I was greeted with thousands of updates, drivers, software installations, and of course, the installation of Fortnite itself. It took hours, but it was worth it. The old system came back to life.

Every “update complete” message was a heartbeat closer to joining Enya and Kyra in the Battle Royale.


The GRUB Disappearance – Enter Ventoy 🔧

After celebrating Windows’ resurrection, I rebooted
 and panic struck.

The GRUB menu had vanished. My system booted straight into Windows, leaving me without access to Linux. How could I escape?

I grabbed my trusty Ventoy USB stick (the same one I had used for performance tests months ago) and booted it in UEFI mode. Once in the live environment, I inspected the boot entries:

sudo efibootmgr -v

Output:

BootCurrent: 0002
Timeout: 1 seconds
BootOrder: 0002,0000,0001
Boot0000* Windows Boot Manager ...
Boot0001* Ubuntu ...
Boot0002* USB Ventoy ...

To restore Ubuntu to the top of the boot order:

sudo efibootmgr -o 0001,0000

Console output:

BootOrder changed from 0002,0000,0001 to 0001,0000

After rebooting, the GRUB menu reappeared, listing both Ubuntu and Windows. I could finally choose my OS again without further fiddling. đŸ’Ș


A Word on Secure Boot and Signed Kernels 🔐

Since we’re talking bootloaders: Secure Boot only allows EFI binaries signed with a trusted key to execute. Ubuntu Desktop ships with signed kernels and a signed shim so it boots fine out of the box. If you build your own kernel or use unsigned modules, you’ll either need to sign them yourself or disable Secure Boot in firmware.


Diagram of the Boot Flow đŸ–Œ

Here’s a visual representation of the boot process after the fix:

flowchart TD
    UEFI["⚙ UEFI Firmware BootOrder:<br/>0001 (Ubuntu) →<br/>0000 (Windows)<br/>(BootCurrent: 0001)"]

    subgraph UbuntuEFI["shimx64.efi"]
        GRUB["📂 GRUB menu"]
        LINUX["🐧 Ubuntu Linux<br/>kernel + initrd"]
        CHAINLOAD["đŸȘŸ Windows<br/>bootmgfw.efi"]
    end

    subgraph WindowsEFI["bootmgfw.efi"]
        WBM["đŸȘŸ Windows Boot Manager"]
        WINOS["đŸ’» Windows 11<br/>(C:)"]
    end

    UEFI --> UbuntuEFI
    GRUB -->|boots| LINUX
    GRUB -.->|chainloads| CHAINLOAD
    UEFI --> WindowsEFI
    WBM -->|boots| WINOS

From the GRUB menu, the Windows entry chainloads bootmgfw.efi, which then points to the Windows Boot Manager, finally booting Windows itself.


First Battle Royale 🎼✹

After all the technical drama and late-night troubleshooting, I finally joined Enya and Kyra in Fortnite.

I had never played Fortnite before, but my FPS experience (Borderlands hype, anyone?) and PUBG knowledge from Viva La Dirt League on YouTube gave me a fighting chance.

We won our first Battle Royale together! đŸ†đŸ’„ The sense of triumph was surreal—after resurrecting a four-year-old Windows partition, surviving driver hell, and finally joining the game, victory felt glorious.


TL;DR: Quick Repair Steps ⚡

  1. Enable os-prober in /etc/default/grub.
  2. If Windows isn’t detected, try a manual GRUB entry.
  3. If boot fails, copy bootmgfw.efi from the NTFS Windows partition to /boot/efi/EFI/Microsoft/Boot/.
  4. Run sudo update-grub.
  5. If GRUB disappears after booting Windows, boot a Live USB (UEFI mode) and adjust efibootmgr to set Ubuntu first.
  6. Reboot and enjoy both OSes. 🎉

This little adventure taught me more about GRUB, UEFI, and EFI files than I ever wanted to know, but it was worth it. Most importantly, I got to join my polycule in a Fortnite victory and prove that even a four-year-old Windows partition can rise again! 💖🎼

September 30, 2025

This command on a Raspberry Pi 3:

stty -F /dev/ttyACM0 ispeed 9600 ospeed 9600 raw

Resulted in this error:

stty: /dev/ttyACM0: Inappropriate ioctl for device

This happened after the Pi’s SD card (holding the OS) entered failsafe mode and became read-only. I used dd to copy the 16GB SD card to a new 128GB one:

dd if=/dev/sdd of=/srv/iso/pi_20250928.iso
dd if=/srv/iso/pi_20250928.iso of=/dev/sdd

The solution was to stop the module, remove the device, and start it again:

# modprobe -r cdc_acm
# rm -rf /dev/ttyACM0
# modprobe cdc_acm

... as the stale device node had probably been copied over by dd from the read-only SD card.


Epilogue: the 16GB SD card was ten to fifteen years old. The Pi expanded the new card to 118GB, not the 128GB advertised; relevant XKCD: https://xkcd.com/394/

September 24, 2025

We need to talk.

You and I have been together for a long time. I wrote blog posts, you provided a place to share them. For years that worked. But lately you’ve been treating my posts like spam — my own blog links! Apparently linking to an external site on my Page is now a cardinal sin unless I pay to “boost” it.
And it’s not just Facebook. Threads — another Meta platform — also keeps taking down my blog links.

So this is goodbye
 at least for my Facebook Page.
I’m not deleting my personal Profile. I’ll still pop in to see what events are coming up, and to look at photos after the balfolk and festivals. But our Page-posting days are over.

Here’s why:

  • Your algorithm is a slot machine. What used to be “share and be seen” has become “share, pray, and maybe pay.” I’d rather drop coins in an actual jukebox than feed a zuckerbot just so friends can see my work.
  • Talking into a digital void. Posting to my Page now feels like performing in an empty theatre while an usher whispers “boost post?” The real conversations happen by email, on Mastodon, or — imagine — in real life.
  • Privacy, ads, and that creepy feeling. Every login is a reminder that Facebook isn’t free. I’m paying with my data to scroll past ads for things I only muttered near my phone. That’s not the backdrop I want for my writing.
  • The algorithm ate my audience. Remember when following a Page meant seeing its posts? Cute era. Now everything’s at the mercy of an opaque feed.
  • My house, my rules. I built amedee.be to be my own little corner of the web. No arbitrary takedowns, no algorithmic chokehold, no random “spam” labels. Subscribe by RSS or email and you’ll get my posts in the order I publish them — not the order an algorithm thinks you should.
  • Better energy elsewhere. Time spent arm-wrestling Facebook is time I could spend writing, playing the nyckelharpa, or dancing a Swedish polska at a balfolk. All of that beats arguing with a zuckerbot.

From now on, if people actually want to read what I write, they’ll find me at amedee.be, via RSS, email, or Mastodon. No algorithms, no takedowns, no mystery boxes.

So yes, we’ll still bump into each other when I check events or browse photos. But the part where I dutifully feed you my blog posts? That’s over.

With zero boosted posts and one very happy nyckelharpa,
Amedee

September 20, 2025

Proposals for developer rooms and main track talks for FOSDEM 2026 can now be submitted! FOSDEM offers open source and free software developers a place to meet, share ideas and collaborate. Renowned for being highly developer-oriented, the event brings together some 8000+ geeks from all over the world. The twenty-sixth edition will take place on Saturday 31st January and Sunday 1st February 2026 at the usual location, ULB Campus Solbosch in Brussels. Developer Rooms Developer rooms are assigned to self-organising groups to work together on open source and free software projects, to discuss topics relevant to a broader subset of


September 18, 2025

On the mystification of the Big Idea

and the negation of experience

The Hypermondes festival in MĂ©rignac

I will be signing books this Saturday, September 20 and Sunday, September 21 in MĂ©rignac, at the Hypermondes festival. I am also taking part in a round table on Sunday. And to tell you the truth, I have a serious case of stage fright, because I will be surrounded by names that fill my bookshelves and whose books I have read and reread: Pierre Bordage, J.C. Dunyach, Pierre Raufast, Catherine Dufour, Laurent Genefort
 Not to mention Schuiten and Peeters, who marked my adolescence, and above all my idol, the greatest comics writer of this century, Alain Ayroles (because for the previous century, it is Goscinny).

In short, I feel very small among these giants, so do not hesitate to come and say hello so that I feel less alone at the stand!

The mythology of the idea

In the film "Glass Onion" (Rian Johnson, 2022), a tech billionaire, a parody of ZuckerMusk, invites friends to his private island for a kind of giant game of Clue. Which, of course, goes off the rails when a real crime is committed.

What I liked a lot about this film is its insistence on a point too often forgotten: being rich and/or famous does not make you intelligent. And managing to make the public believe you are superintelligent, to the point of believing it yourself, does not make you actually so.

In short, it nicely puts the world of billionaires, influencers, starlets and everything orbiting around them back in its place.

Nevertheless, one particular point bothered me: a whole part of the plot hinges on who first had the idea for the startup that would make the billionaire's fortune, an idea literally scribbled on a paper napkin.

It is very funny in the film but, as I have said before: an idea alone is worth nothing!

The idea is only the initial spark of a project; the final result will be shaped by the thousands of decisions and adaptations made along the way.

The architect's role

If you have never had a house built, you may imagine that you describe your dream house to an architect. The architect proposes a plan. You approve it, the engineers and the workers take over the plan, and the house gets built.

Except that, in reality, you are incapable of describing your dream house. Your intuitions all contradict one another. It is what I call the "single-story house on two floors" syndrome. And even if you have thought things through in depth, the architect will point out a whole pile of practical problems with your ideas. Things you had not thought of. All that glass is very pretty, but how are you going to keep it clean?

You will have to make compromises, make decisions. And once the plan is approved, the decisions continue on the building site. Because of the unexpected, or of the thousands of small problems that did not show up on the plan. Do you really want a sink right there, given that the door opens onto it?

In the end, your dream house will be very different from what you imagined. For years, you will keep finding flaws in it. But those flaws are compromises you expressly chose.

The idea for a novel

As a writer, I regularly get asked the question: "Where do all these ideas come from?"

As if having the idea were the problem. Ideas? I have hundreds of them in my drawers. The work is not having the idea; it is drawing up the plan and then turning that plan into a building.

I have several times received proposals of the kind: "I have a great idea for a novel. I share it with you, you write it, and we split 50/50."

Can you imagine, for one second, walking into an architect's office with a scribbled sketch and saying: "I have a great idea for a house. I show it to you, you build it, you find a contractor, and we split the proceeds"?

Unlike Printeurs, which I wrote without a prior outline, I wrote Bikepunk with a real structure. I started from an initial idea. I brainstormed with Thierry Crouzet (our exchanges gave birth to the story's famous flash). Then I dug into the characters. I wrote a short story set in that universe (creating the character of Dale), then I worked on the structure for a month with a cork board onto which I pinned index cards. Only then did I start writing. Many times, I found myself facing inconsistencies, and I had to make decisions.

The final result looks nothing like what I had imagined. Some key scenes from my synopsis turned out, on rereading, to be mere transitions. Last-minute improvisations, on the contrary, seem to have left their mark on a whole segment of readers.

Code is just a series of decisions

An idea is only a spark that can potentially spread and mix with others. But to light a fire, the initial source of the spark matters far less than the fuel.

The invention that put this front and center is surely the computer. Because a computer is, in essence, a machine that does what it is asked.

Exactly what it is asked. No more, no less.

Humans were thus confronted with the fact that it is extremely complicated to know what you want. That it is almost impossible to express it without ambiguity. That it requires a dedicated language.

A programming language.

And mastering a programming language demands such an analytical and rational mind that a profession was created around it: programmer, coder, developer. The term hardly matters.

But, just like an architect, a programmer must constantly make the decisions she believes are best for the project. For the future. Or she identifies the paradoxes and discusses them with the client. "You asked me for a simple interface with a single button, while also specifying twelve features that must each be directly accessible through a dedicated button. What do we do?" (a real case).

On the stupidity of believing in a productive AI

What I am saying may seem obvious, but when I hear how many people talk about "vibe programming", I tell myself that far too many people were raised on the "magic idea" paradigm, as in Glass Onion.

AIs are perceived as magic machines that do what you want.

Except that, even if they were perfect, you do not know what you want.

AIs cannot correctly make those thousands of decisions. Statistical algorithms can only produce random results. You cannot just state your idea and watch the result appear (which is the fantasy of the cretin-managers, that breed of idiots trained in management schools who are convinced that the people who actually do the work are a cost you should ideally do without).

The ultimate fantasy is an "intuitive" machine that requires no learning. But learning is not only technical. Human experience is global. An architect will think of the problems with your dream house because he has already followed twenty building sites and gotten a glimpse of the problems. Every new book by a writer reflects their experience with the previous ones. Some decisions stay the same; others, on the contrary, change, as an experiment.

Not wanting to learn your tool is the very definition of the crassest stupidity.

Thinking that an AI could replace a developer is displaying total incompetence about the work of the developer in question. Thinking that an AI can write a book can only come from people who do not read themselves, who see nothing but printed paper.

It is not that it is technically impossible. Plenty of people do it, in fact, because selling printed paper can be profitable with the right marketing, no matter what is printed on it.

It is just that the result can never be satisfying. All the compromises, all the decisions, will be the fruit of a statistical roll of the dice over which you have no control. The paradoxes will not be resolved. In short, it is, and will always be, crap.

A business is much more than an idea!

Facebook was not the first attempt at a social network for students. Amazon was not the first online bookstore. WhatsApp was an app for showing your friends whether you were available for a phone call. Instagram was originally for sharing your location. Microsoft had never developed an operating system when they sold the DOS license to IBM.

In short, the initial idea is worth nothing. What made these companies successful are the billions of decisions made at every moment, the readjustments.

Making those decisions is what builds success, be it commercial, artistic or personal. Believing that a computer could make those decisions for you is not only naive; it also thoroughly proves your incompetence in the domain concerned.

In Glass Onion, this point particularly bothered me, because it is pushed to absurdity. As if a napkin with three pencil strokes on it could be worth billions.

That little something extra

And if I am delighted to mingle with so many authors I admire in MĂ©rignac, it is not to exchange ideas, but to soak up their experience, the personality that leads them to build the works I admire.

I must have reread the whole of "De capes et de crocs", the masterpiece by Masbou and Ayroles, dozens upon dozens of times.

With every rereading, I savor each panel. I can feel the authors having fun, letting themselves be carried away by their characters into bifurcations that seem unplanned, improbable. What AI would think of having the clucking of a henhouse complete an alexandrine? What algorithm would strut about the caesura at the hemistich?

Humans and their experience will always have something extra, something indefinable whose name escapes me.

Ah, but yes


Something that, without a crease, without a stain, humans carry with them in spite of themselves


And that is


— That is?

Their panache!

I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (preferably from your local bookshop)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

September 17, 2025

It's that time again! FOSDEM 2026 will take place on Saturday 31st of January and Sunday 1st of February 2026. Further details and calls for participation will be announced in the coming days and weeks.

Mood: Slightly annoyed at CI pipelines 🧹
CI runs shouldn’t feel like molasses. Here’s how I got Ansible to stop downloading the internet. You’re welcome.


Let’s get one thing straight: nobody likes waiting on CI.
Not you. Not me. Not even the coffee you brewed while waiting for Galaxy roles to install — again.

So I said “nope” and made it snappy. Enter: GitHub Actions Cache + Ansible + a generous helping of grit and retries.

đŸ§™â€â™‚ïž Why cache your Ansible Galaxy installs?

Because time is money, and your CI shouldn’t feel like it’s stuck in dial-up hell.
If you’ve ever screamed internally watching community.general get re-downloaded for the 73rd time this month — same, buddy, same.

The fix? Cache that madness. Save your roles and collections once, and reuse like a boss.

đŸ’Ÿ The basics: caching 101

Here’s the money snippet, wrapped in its actions/cache step:

# heads-up: the input is restore-keys (not restoreKeys)
- uses: actions/cache@v4
  with:
    path: .ansible/
    key: ansible-deps-${{ hashFiles('requirements.yml') }}
    restore-keys: |
      ansible-deps-

🧠 Translation:

  • Store everything Ansible installs in .ansible/
  • Cache key changes when requirements.yml changes — nice and deterministic
  • If the exact match doesn’t exist, fall back to the latest vaguely-similar key

Result? Fast pipelines. Happy devs. Fewer rage-tweets.
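
If you’re curious what that determinism looks like, the key is conceptually just a hash of the manifest (a sketch; GitHub’s hashFiles() has its own SHA-256-based scheme, this only mirrors the idea):

#!/usr/bin/env python3
# The idea behind the cache key: hash requirements.yml, so the key
# changes exactly when the dependency manifest changes.

import hashlib
from pathlib import Path

def cache_key(manifest: str = "requirements.yml") -> str:
    digest = hashlib.sha256(Path(manifest).read_bytes()).hexdigest()
    return f"ansible-deps-{digest[:16]}"  # shortened here for readability

print(cache_key())  # same file in -> same key out -> cache hit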

🔁 Retry like you mean it

Let’s face it: ansible-galaxy has
 moods.

Sometimes Galaxy API is down. Sometimes it’s just bored. So instead of throwing a tantrum, I taught it patience:

for i in {1..5}; do
  if ansible-galaxy install -vv -r requirements.yml; then
    break
  else
    echo "Galaxy is being dramatic. Retrying in $((i * 10)) seconds
" >&2
    sleep $((i * 10))
  fi
done

That’s five retries. With increasing delays.
💬 “You good now, Galaxy? You sure? Because I’ve got YAML to lint.”

⚠ The catch (a.k.a. cache wars)

Here’s where things get spicy:

actions/cache only saves when a job finishes successfully.

So if two jobs try to save the exact same cache at the same time?
đŸ’„ Boom. Collision. One wins. The other walks away salty:

Unable to reserve cache with key ansible-deps-...,
another job may be creating this cache.

Rude.

🧊 Fix: preload the cache in a separate job

The solution is elegant: a warm-up job that only does the Galaxy installs and saves the cache. All your other jobs just consume it. Zero drama. Maximum speed. 💃

đŸȘ„ Tempted to symlink instead of copy?

Yeah, I thought about it too.
“But what if we symlink .ansible/ and skip the copy?”

Nah. Not worth the brainpower. Just cache the thing directly.
✅ It works. đŸ§Œ It’s clean. 😌 You sleep better.

🧠 Pro tips

  • Use the hash of requirements.yml as your cache key. Trust me.
  • Add a fallback prefix like ansible-deps- so you’re never left cold.
  • Don’t overthink it. Let the cache work for you, not the other way around.

✹ TL;DR

  • ✅ GitHub Actions cache = fast pipelines
  • ✅ Smart keys based on requirements.yml = consistency
  • ✅ Retry loops = less flakiness
  • ✅ Preload job = no more cache collisions
  • ❌ Re-downloading Galaxy junk every time = madness

đŸ”„ Go forth and cache like a pro.

Got better tricks? Hit me up on Mastodon and show me your CI magic.
And remember: Friends don’t let friends wait on Galaxy.

💚 Peace, love, and fewer ansible-galaxy downloads.

September 14, 2025

lookat 2.1.0

Lookat 2.1.0 is the latest stable release of Lookat/Bekijk, a user-friendly Unix file browser/viewer that supports colored man pages.

The focus of the 2.1.0 release is to add ANSI Color support.


 

News

14 Sep 2025 Lookat 2.1.0 Released

Lookat / Bekijk 2.1.0rc2 has been released as Lookat / Bekijk 2.1.0

3 Aug 2025 Lookat 2.1.0rc2 Released

Lookat 2.1.0rc2 is the second release candidate of Lookat 2.1.0

ChangeLog

Lookat / Bekijk 2.1.0rc2
  • Corrected italic color
  • Don’t reset the search offset when cursor mode is enabled
  ‱ Renamed strsize to charsize (ansi_strsize -> ansi_charsize, utf8_strsize -> utf8_charsize) to be less confusing
  • Support for multiple ansi streams in ansi_utf8_strlen()
  • Update default color theme to green for this release
  • Update manpages & documentation
  • Reorganized contrib directory
    • Moved ci/cd related file from contrib/* to contrib/cicd
    • Moved debian dir to contrib/dist
    • Moved support script to contrib/scripts

Lookat 2.1.0 is available at:

Have fun!

September 11, 2025

I once read a blurb about the benefits of bureaucracy, and how it is intended to resist political influences, autocratic leadership, priority-of-the-day decision-making, silo'ed views, and more things that we generally see as "Bad Thingsℱ". I'm sad that I can't recall where it was, but its message was similar to what The Benefits Of Bureaucracy: How I Learned To Stop Worrying And Love Red Tape by Rita McGrath presents. When I read it, I was strangely supportive of the message, because I am very much confronted with, and perhaps also often the cause of, bureaucracy and governance-related deliverables in the company that I work for.

Bureaucracy and (hyper)governance

Bureaucracy, or governance in general, often leaves a bad taste in the mouth of whoever dares to speak about it, though. And I fully agree: hypergovernance or hyperbureaucracy puts too much of a burden on the organization. The benefits are no longer visible, and people's creativity and innovation are stifled.

Hypergovernance is a bad thing indeed, and it often comes up in the news: companies loathing the so-called overregulation of the European Union, for instance, and getting together in action groups to ask for deregulation. A recent topic here was Europe's attempt to move towards a more sustainable environment, given the lack of attention to sustainability by the various industries and governments. The premise for regulating this was the observation that merely guiding and asking doesn't work: sustainability is a long-term goal, yet most industries and governments focus on short-term benefits.

The need to simplify regulation, and the reaction to the bureaucracy needed to align with Europe's reporting expectations, prompted the European Commission to respond with a simplification package it calls the Omnibus package.

I think that is the right way forward, not only for this particular case (I don't know enough about ESG to be a useful resource on that), but also within regulated industries and companies where bureaucracy is considered to dampen progress and efficiency. Simplification and optimization are key here, not just tearing things down. In the Capability Maturity Model, a process is considered efficient if it includes deliberate process optimization and improvement. So why not deliberately optimize and improve? Evaluate and steer?

Benefits of bureaucracy

It would be bad if bureaucracy itself were considered a negative trait of any organization. Many of the benefits of bureaucracy I fully endorse myself.

Standardization, where procedures and policies are created to ensure consistency in operations and decision-making. Without standardization, you gain inefficiencies, not benefits. If a process is considered too daunting, standardization might be the key to improving it.

Accountability, where it is made clear who does what. Holding people or teams accountable is not a judgement, but a balance of expectations and responsibilities. Handled positively, accountability is also an expression of expertise, an endorsement of what you are or can do.

Risk management, which is coincidentally the most active one in my domain (the Digital Operational Resilience Act has a very strong focus on risk management), has a primary focus on reducing the likelihood of misconduct and errors. Regulatory requirements and internal controls are not the goal, but a method.

Efficiency, by streamlining processes through established protocols and procedures. Sure, new approaches and things come along, but after the two-hundredth request to do or set up something, only to realize it still takes 50 man-days... well, perhaps you should focus on streamlining the process and introduce some bureaucracy to help yourself out.

Transparency, promoting clear communication and documentation, as well as insights into why something is done. This improves trust among the teams and people.

In a world where despotic leadership exists, you will find that a well-functioning bureaucracy can be an inhibitor of overly aggressive change. That can frustrate the wanna-be autocrat (if they were truly an autocrat, there would be no bureaucracy), but with the right support, it can indicate and motivate why this resistance exists. If the change is for the good - well, bureaucracy even has procedures for change.

Bureaucracy also prevents islands and isolated decision-making. People demanding all the budgets for themselves because they find that their ideas are the only ones worth working on (every company has these) will also find that the bureaucracy is there to balance budgeting and allocate resources to the right projects that benefit the company as a whole, not just the 4 people you just onboarded in your team and gave MacBooks...

Bureaucracy isn't bad, and some people prefer to have strict rules or procedures. Resisting change is human behavior, but promoting anarchy is not the way forward either. Instead, nurture a culture of continuous improvement: be able to point out when things overreach, and learn about the reasoning and motivations that others bring up. Those in favor of bureaucracy will see this as an increase in maturity, and those affected by over-regulation will see it as an improvement.

We can all strive to remain in a bureaucracy and be happy with it.

Feedback? Comments? Don't hesitate to get in touch on Mastodon.

September 10, 2025

There comes a time in every developer’s life when you just know a certain commit existed. You remember its hash: deadbeef1234. You remember what it did. You know it was important. And yet, when you go looking for it…

đŸ’„ fatal: unable to read tree <deadbeef1234>

Great. Git has ghosted you.

That was me today. All I had was a lonely commit hash. The branch that once pointed to it? Deleted. The local clone that once had it? Gone in a heroic but ill-fated attempt to save disk space. And GitHub? Pretending like it never happened. Typical.

đŸȘŠ Act I: The NaĂŻve Clone

“Let’s just clone the repo and check out the commit,” I thought. Spoiler alert: that’s not how Git works.

git clone --no-checkout https://github.com/user/repo.git
cd repo
git fetch --all
git checkout deadbeef1234

🧹 fatal: unable to read tree 'deadbeef1234'

Thanks Git. Very cool. Apparently, if no ref points to a commit, GitHub doesn’t hand it out with the rest of the toys. It’s like showing up to a party and being told your friend never existed.

đŸ§Ș Act II: The Desperate fsck

Surely it’s still in there somewhere? Let’s dig through the guts.

git fsck --full --unreachable

Nope. Nothing but the digital equivalent of lint and old bubblegum wrappers.

đŸ•” Act III: The Final Trick

Then I stumbled across a lesser-known Git dark art:

git fetch origin deadbeef1234

And lo and behold, GitHub replied with a shrug and handed it over like, “Oh, that commit? Why didn’t you just say so?”

Suddenly the commit was in my local repo, fresh as ever, ready to be inspected, praised, and perhaps even resurrected into a new branch:

git checkout -b zombie-branch deadbeef1234

Mission accomplished. The dead walk again.
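
Before celebrating, you can verify the object really made it over (with deadbeef1234 standing in for your actual hash):

git cat-file -t deadbeef1234   # should print: commit
git log -1 deadbeef1234        # inspect author, date, and message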


☠ Moral of the Story

If you’re ever trying to recover a commit from a deleted branch on GitHub:

  1. Cloning alone won’t save you.
  2. git fetch origin <commit> is your secret weapon.
  3. If GitHub has completely deleted the commit from its history, you’re out of luck unless:
    • You have an old local clone
    • Someone forked the repo and kept it
    • CI logs or PR diffs include your precious bits

Otherwise, it’s digital dust.


🧛 Bonus Tip

Once you’ve resurrected that commit, create a branch immediately. Unreferenced commits are Git’s version of vampires: they disappear without a trace when left in the shadows.

git checkout -b safe-now deadbeef1234

And there you have it. One undead commit, safely reanimated.

September 09, 2025

Last month, my son Axl turned eighteen. He also graduated high school and will leave for university in just a few weeks. It is a big milestone and we decided to mark it with a three-day backpacking trip in the French Alps near Lake Annecy.

We planned a loop that would start and end at Col de la Forclaz, located at 1,150 meters (3,770 feet). From there the trail would climb through forests and alpine pastures, eventually taking us to the summit of La Tournette at 2,351 meters (7,713 feet). It is the highest peak around Lake Annecy and one of the most iconic hikes in the region.

A topographical map of our hike to La Tournette, with each of the three hiking days highlighted in a different color.

Over the past months we studied maps, gathered supplies, and prepared for the hike. My own training started only three weeks before, just enough to feel unprepared in a more organized way. I took up jogging and did stairs with my pack on, though I should have also practiced sleeping on uneven ground.

Flying to Annecy

Small Cessna plane in a hangar with its door open, while a person in a safety vest loads backpacks into the cabin, preparing for departure. Loading our packs into the plane before departure.
Two people sit in a Cessna cockpit, watching navigation screens and gauges. In the cockpit of Ben's Cessna en route to Annecy.
View from a small plane cockpit approaching Annecy, with Lake Annecy and surrounding mountains ahead. As we approached Annecy and prepared to land, La Tournette (left) rose above the other peaks, showing the climb that lay ahead.

Axl thought the trip would begin with a nine-hour drive south to Annecy. What he didn't know was that one of my best friends, Ben, who owns a Cessna, would be flying us instead. It felt like the perfect surprise to begin our trip.

Two hours later we landed in Annecy. After a quick lunch by the lake, we taxied to Col de la Forclaz, famous for paragliding and its views over Lake Annecy.

A paraglider glides high above Lake Annecy, the blue water curving between green hills with small towns scattered along the shore below. Paragliders circling above Lake Annecy.

Day 1: Col de la Forclaz → Refuge de la Tournette

Our trail started near the paragliding launch point at Col de la Forclaz. From there it climbed steadily through the forest. It was quiet and enclosed, with only brief glimpses of the valley through the branches.

A hiker with a backpack adds a small rock to a stacked cairn on a rocky ledge. We paused to add a stone to a cairn along the trail.

Eventually we emerged above the tree line near Chalet de l'Aulp. In September, the chalet is closed during the week, but we sat on a hill nearby, ate the rice bars Vanessa had prepared, and simply enjoyed the moment.

After our break the trail grew steeper, winding through alpine meadows and rocky slopes. The valley stretched out far below us while big mountains rose ahead.

View from the trail near La Tournette: mountain ridges and forested slopes, with a small chalet in a grassy clearing below. The view on our way up to La Tournette, just past Chalet de l'Aulp.

After a couple of hours we reached the site of the old Refuge de la Tournette, which has been closed for years, and found a flat patch of grass nearby for our first campsite.

As we sat down, we had time for deeper conversations. We talked about how life will change now that he is eighteen and about to leave for university. I told him our hike felt symbolic. The climb up was like the years behind us, when his parents, myself included, made most of the important decisions in his life. The way down would be different. From here he would choose his own path in life, and my role would be to walk beside him and support him.

Saying it out loud caught me by surprise. It left me with a lump in my throat and tears in my eyes. It felt as if we would leave one version of Axl at the summit and return with another, stepping into adulthood and independence.

Young man sits on the grass, head down and arms around knees, resting quietly after a long day hiking on a sunny mountain trail. Axl reflecting at the end of our first day on the trail.

Dinner was a freeze-dried chili con carne that turned out to be surprisingly good. Just as we were eating, a large flock of sheep came streaming down the hillside, their bells ringing. Fortunately they kept moving, or sleep would have been impossible.

After dinner we set up our tent and took stock of our water supply. We had carried five liters of water each, knowing we could only refill at tomorrow's refuge. By now we had already used three liters for the climb and dinner. I could have drunk more, but we set two liters aside for the morning and felt good about that plan.

A green pyramid tent stands on a grassy meadow below a steep rocky mountainside, with camping gear scattered nearby. Our tent pitched in the high meadows near the old Refuge de la Tournette.

Later, as we settled into our sleeping bags, I realized how many firsts Axl had packed into a single day. He had taken his first private flight in a small Cessna, carried a heavy backpack up a mountain for the first time, figured out food and water supplies, and experienced wild camping for the first time.

Day 2: Refuge de la Tournette → Refuge de Praz D'zeures

Sleeping was difficult. The ground that looked flat in daylight was uneven, and my foldout mat offered little comfort. I doubt I slept more than two hours. By 4:30 the rain had started, and I began to worry about how slippery the trail might become.

When we unzipped the tent around 6:00 in the morning, a "bouquetin" (ibex) with one horn broken off stood quietly in the gray light, watching us without fear. Its calm presence was a gentle contrast to my own concerns.

From inside a tent, an ibex with large horns stands a few meters away in the grass, looking toward the campers. When we opened the tent in the morning, a bouquetin (ibex) was standing right outside, watching us.

By 7:00 a shepherd appeared out of nowhere while we were making coffee. He told us wild camping here was not permitted. He spoke with quiet firmness. We were startled and embarrassed, having missed the sign lower down the trail. We apologized, promised to leave no trace, and packed up to be on our way.

Before leaving he shared the forecast: rain by early afternoon, thunderstorms by 16:00. You do not want to be on a summit when lightning rolls in. With that in mind we set off quickly, pushing harder than usual. Unlike the sunny day before, when the path was busy with hikers, the mountain felt empty. It was just the two of us moving upward in the quiet.

Shortly before the final section to the summit it began to rain again. Axl put on his gloves as the drops felt icy. The rain passed quickly, but we knew more was coming.

The final stretch to La Tournette was steep, with chains and metal ladders to help the climb. We scrambled with hands and feet, and with our heavy packs it was nerve-racking at times.

Reaching the top was unforgettable. At the summit we climbed onto the Fauteuil, a massive block of rock shaped like an armchair. From there the Alps spread out in every direction. In the far distance, the snow-covered summit of Mont Blanc caught the light. Below us, Lake Annecy shimmered in the valley, while dark rain clouds gathered not too far away.

A hiker stands beside a metal cross on a rocky summit, surrounded by large boulders, looking out over distant mountains under a cloudy sky. At the summit of La Tournette.

We gave each other a big hug, a quiet celebration of the climb and the moment. We stayed only a short while, knowing the weather was turning.

From the summit we set our course for the Refuge de Praz D'zeures, a mountain hut about two hours' hike away. As we descended, the landscape softened into wide alpine pastures where bouquetin grazed among the grass and flowers.

A narrow hiking trail runs along a grassy ridge toward rocky peaks, then drops into the forest. The ridge from La Tournette toward Refuge de Praz D'zeures. If you look closely, the trail winds all the way into the woods.

Pushing to stay ahead of the rain and thunderstorms in the forecast, we reached the refuge (1,774 meters) around 14:30 in the afternoon. The early start seemed like a wise choice, but our hearts sank when we saw the sign on the door: "fermé" (closed).

For a moment we feared we had nowhere to stay. Then we noticed another sign off to the side. It took us a minute to decipher, since it was written in French, but eventually we understood: in September the refuge only opens from Thursday evening through Sunday night. Some relief washed over us. It was indeed Thursday, and most likely the hosts or guardians of the hut had not yet arrived.

We decided to wait and settled onto the wooden deck outside. Our water supplies were gone, but Axl found an outdoor kitchen sink with a faucet, fed by a nearby stream, and filled a couple of our empty water bottles. I drank one almost in a single go.

Outdoor sink at Refuge de Praz D'zeures where a backpacker refills water bottles during a mountain hike. The outdoor sink at Refuge de Praz D'zeures where Axl refilled our empty water bottles.

To pass the time we opened the small travel chess set I had received for Christmas. The game gave us something to focus on as the clouds thickened around the hut. A damp chill settled in, and we pulled on extra layers while we leaned over the tiny board.

Sure enough, a little later the hosts of the refuge appeared. A couple with three children and two dogs. They came up the trail carrying heavy containers of food, bread, water, and other supplies on their backs, all the way from a small village more than an hour's hike below.

Almost immediately they warned us that the water at the refuge was not potable. I tried not to dwell on the bottle I had just finished, but luckily Axl had used his water filter when filling our bottles. We would find out soon enough how well it worked. (Spoiler: we survived!)

Wooden refuge kitchen with old moka pots and worn pans hanging on the wall. The rustic kitchen at Refuge de Praz D'zeures. I loved the moka pots and pans in the background.

The refuge itself was simple but full of character. There is no hot water, no heating, and the bunks are basic, yet the young family who runs it brings life and warmth to the place. My French was as rusty as their English, but between us we managed.

It was so cold inside the hut that we retreated into our sleeping bags for warmth. Then, as the rain began to pour, they lit a wood stove in the main room. Through the window we watched the clouds wrap around the refuge while water streamed off the roof in steady sheets. We felt grateful not to be out in our tent that night or still on the trail.

That night we had a local specialty: raclette au charbon de bois, cheese cooked on a small charcoal grill in the middle of the table. I have had raclette many times before, but never like this. The smell of melting cheese filled the room as we poured it over potatoes and tore into fresh bread. After two days of freeze-dried meals it felt like a feast. I loved it, especially because I knew Vanessa would never let me use charcoal in our living room.

After dinner we went straight to bed, exhausted from the long day and the lack of sleep the night before. Even though the bunks were hard, we slept deeply and well.

Day 3: Refuge de Praz D'zeures → La Forclaz

Morning began with breakfast in the hut. A simple bowl of cereal with milk tasted unusually good after two days of powdered meals. Two other guests joined us at the breakfast table and teased me that they could hear me snore through the thin wooden walls. Axl just grinned. Apparently my reputation had spread faster than I thought.

The day's hike covered about 8.5 kilometers (5.3 miles), starting with a 350-meter (1,150-foot) climb to a ridge, followed by a long descent of nearly 1,000 meters (3,280 feet). The storms from the night before had left everything soaked, so even the flat stretches of trail were slippery and slow.

Reaching the ridge made the effort worthwhile. On one side the clouds pressed in so tightly there was no visibility at all. On the other side the valley opened wide, green and bright in the morning light. It felt like standing with one foot in the fog and the other in the sun.

We walked along the ridge and talked, the kind of conversations that only seem to happen when you spend hours on the trail together. At one point we stopped to look for fossils and, to our delight, actually found one.

By the time we reached the village of Montmin it felt like stepping back into civilization. We left the trail for quiet roads, ate lunch in the sun near the church, and rinsed our muddy boots and rain gear at a faucet. From there it was only a short walk back to La Forclaz, where three days earlier we had set off with heavy packs and full of anticipation.

A person rinses muddy rain pants under a tap at a stone fountain. Axl rinsing the mud off his rain pants at a fountain in Montmin.

We had returned tired, lighter, and grateful for the experience and for each other. To celebrate we ordered crĂȘpes in a tiny cafĂ© and ate them in the sun, looking up at the peaks we had just conquered. Later that afternoon, we swam in Lake Annecy in our underwear and finished the day with a cold beer.

The next morning we caught a taxi to Geneva airport, flew home, and by afternoon we were back in Belgium, welcomed with Vanessa's shepherd's pie. In just a few weeks Axl will leave for university and his room will be empty, but that evening we were all together, sharing stories from our trip over dinner.

Person stands on a wooden dock at Lake Annecy, facing the water at dusk, with mountains in the background. Axl standing on a dock at Lake Annecy.

It had been a short trip, just three days in the mountains. Hard at times, but very manageable for a not-super-fit forty-six-year-old. Some journeys are measured less by distance, duration, or difficulty than by the transitions they mark. This one gave us what we needed: time together before the changes ahead, and a memory we will carry for the rest of our lives. On the trail Axl often walked ahead of me. In life, I will have to learn to walk beside him.

In Search of Lost Humanity


Death, or the return of lucidity

Drmollytov is a librarian who was very seriously injured in a motorcycle accident, an accident in which her husband lost his life.

After a period of convalescence and mourning, she tries to rebuild her life and, gradually, becomes aware of the consumerist frenzy every "normal" person is caught up in. From the urge to shop, to social networks, to streaming subscriptions.

The more she declutters her life, the more time and energy she recovers. And the less she feels the need to "be seen". It must be said that posting only on Gemini doesn't exactly help with visibility!

One sentence struck me in her latest post on Gemini: "My only regret with social media is having been on it at all. It drained my mental energy, devoured my time, and I am certain it extracted a part of my soul." (a very loose translation).

I realize that it is not enough to free ourselves from the addiction mechanisms of social networks. We must also become conscious of how much they have deformed, destroyed and denatured our thoughts, our social relationships, our motivations. Worse: they make us objectively stupid! Since 2010, average IQ has been declining, something that had never happened since the invention of IQ (whatever one thinks of that tool).

Social networks are intrinsically linked to the smartphone. They truly exploded once they were optimized for passive consumption on a small touchscreen (something Zuckerberg did not believe in at all). Conversely, social media addiction created a continuous demand for ever-shinier smartphones taking photos ever more likely to generate likes.

As Jose Briones points out, these permanent interactions on a smooth slab of glass, earbuds screwed onto our ears, make us lose our awareness of touch, of materiality.

Beyond our addiction to colorful numbers

When you have been an addict, when you truly believed in it, when you invested an enormous amount of yourself in it, quitting smoking is not enough to be healthy. Quitting is only the first, necessary and indispensable step. A long road remains to rebuild yourself afterwards, to recover the human being who was wounded and buried.

The human being whom, in the end, few people have any interest in you recovering, but who is there, buried under incessant notifications, under the compulsive checking of your likes, your statistics, your followers. For Jose Briones, it took more than three years without a smartphone before his anxiety calmed down… his anxiety about not having a smartphone!

For years now, I have had no statistics on this blog's traffic. Because it is not very ethical but, above all, because it was driving me mad, because my mental health suffered incredibly from it. Because I could no longer be satisfied with my writing other than through the number of readers it brought me. Because simple statistics were destroying my soul.

As the blog This day's portion puts it, you don't need statistics!

And while the Mastodon network is very far from perfect, it is quite easy to find an instance there that does not spy on you. The majority, in fact, do not. Unlike Bluesky, which tracks all your interactions through the company Statsig. Statsig, which has just been acquired by OpenAI, the creator of ChatGPT.

It looks like Sam Altman is trying to do what Musk did and gain political influence by infiltrating centralized social networks. We all mocked Musk, but we have to admit it worked very well. And, as I said in 2023, it is only a matter of time before it happens to Bluesky, which is not decentralized at all, whatever its marketing keeps repeating.

But the worst thing about all these statistics, all this data, is that we are the first to want to collect them, to sell ourselves in order to optimize them and to consult them as pretty colorful graphs displayed on our smooth, shiny slabs of glass.

L’inhumanitĂ© d’un monde qui se vend

Je ne cesse de rĂ©pĂ©ter ce qu’articule justement Thierry Crouzet dans son dernier article : les marketeux ont imposĂ© leur vision du monde, forçant les artistes, les intellectuels et les scientifiques Ă  devenir des commerciaux, ce qui est l’antithĂšse de leur nature profonde. Car artistes, intellectuels et scientifiques ont en commun d’ĂȘtre dans une quĂȘte, peut‑ĂȘtre illusoire, de vĂ©ritĂ©, d’absolu. LĂ  oĂč le marketing est, par dĂ©finition, l’art du mensonge, de la tromperie, de l’apparence et de l’exploitation de l’humain.

Cette destruction mentale enseignĂ©e dans les Ă©coles de commerce est Ă©galement Ă  l’Ɠuvre avec l’IA. Il faudra des annĂ©es pour que les personnes addicts Ă  l’IA puissent, si tout va bien, retrouver leur Ăąme d’humain, leur capacitĂ© de raisonnement autonome. Les dĂ©veloppeurs qui dĂ©pendent de Github sont en premiĂšre ligne.

En espĂ©rant que nous puissions arriver Ă  redevenir des humains sans devoir recourir Ă  la solution extrĂȘme dĂ©crite par Thierry Bayoud et LĂ©a Deneuville dans l’excellente nouvelle « Chronique d’un crevard », nouvelle prĂ©sente dans le Recueil de Nakamoto, que je recommande chaudement et prĂ©sentĂ© ici par Ysabeau. Et, oui, les nouvelles sont sous licence libre.

L’idĂ©e derriĂšre Chronique d’un crevard m’a rappelĂ© mon propre roman Printeurs. Ça serait chouette de voir les deux univers se rejoindre d’une maniĂšre ou d’une autre. Car c’est ça toute la beautĂ© de la crĂ©ation artistique libre.

Art as a survival instinct

Of artistic creation in general, I should say, up until the moment corporate lawyers managed to convince artists that they had to be fanatics about "protecting their intellectual property", which turned them into victims of the most formidable scam of recent decades. Just like the marketers, corporate lawyers are, in essence, people whose job is to exploit you. It is their profession!

Only the technique changes: marketers lie and promise you happiness, lawyers threaten and corrupt. Both seek only to increase their employer's profit. Both have managed to convince artists to switch off their humanity and become marketers and lawyers themselves, doing "personal branding" and "intellectual property".

On that subject, Cory Doctorow explains very well on what illusion the famous "intellectual property" was built, and how well those who designed it knew it was a planet-wide scam meant to turn the whole world into a United States colony.

Marketers and lawyers have managed to convince artists to hate the very foundation of their craft: their audience, renamed "pirates" as soon as they step outside the crowd barriers of surveillance corporatism. They have managed to convince scientists to hate the very foundation of their craft: the unrestricted sharing of knowledge. The fact that the largest scientific database in the world, Sci-hub, is considered pirate and banned everywhere in the world tells you everything you need to know about our society.

Surveillance capitalism rots everything it touches. First by making life hard for dissenters (which is fair game) but, above all, by buying and corrupting the rebels who succeed despite everything. Once a millionaire, that anti-establishment artist becomes the system's foremost supporter and adapts his slogan: "Be rebels, but not too much, buy my merchandise!"

Just as surrealism was the artistic and intellectual response to fascism, only art can save our humanity. A raw, tactile, sensory art. But, above all, a free art that is shared, that spreads, and that tells the notions of virtual property to get lost.

An art that is shared, but that also compels its audience to share, to rediscover the essence of our humanity: sharing.

  • Illustration photo taken by Diegohnxiv, showing a mural in honor of SciHub at the University of Mexico
  • Printeurs and Le recueil de Nakamoto can be ordered from your favorite independent bookstore! The ebooks are on libgen, at least for Printeurs. I know, I uploaded it myself.

I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookstore)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the complete RSS feed.

September 03, 2025

It has been far too long since I celebrated the most beautiful month in the world with a beautiful song. Hence this Septembery "August Moon", live by Jacob Alon: Watch this video on YouTube.

Source

At some point, I had to admit it: I’ve turned GitHub Issues into a glorified chart gallery.

Let me explain.

Over on my amedee/ansible-servers repository, I have a workflow called workflow-metrics.yml, which runs after every pipeline. It uses yykamei/github-workflows-metrics to generate beautiful charts that show how long my CI pipeline takes to run. Those charts are then posted into a GitHub Issue—one per run.

It’s neat. It’s visual. It’s entirely unnecessary to keep them forever.

The thing is: every time the workflow runs, it creates a new issue and closes the old one. So naturally, I end up with a long, trailing graveyard of “CI Metrics” issues that serve no purpose once they’re a few weeks old.

Cue the digital broom. đŸ§č


Enter cleanup-closed-issues.yml

To avoid hoarding useless closed issues like some kind of GitHub raccoon, I created a scheduled workflow that runs every Monday at 3:00 AM UTC and deletes the cruft:

schedule:
  - cron: '0 3 * * 1' # Every Monday at 03:00 UTC

This workflow:

  • Keeps at least 6 closed issues (just in case I want to peek at recent metrics).
  • Keeps issues that were closed less than 30 days ago.
  • Deletes everything else—quietly, efficiently, and without breaking a sweat.

It’s also configurable when triggered manually, with inputs for dry_run, days_to_keep, and min_issues_to_keep. So I can preview deletions before committing them, or tweak the retention period as needed.
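
For example, a dry run with a shorter retention window can be kicked off straight from the CLI (the parameter values here are just an illustration):

gh workflow run cleanup-closed-issues.yml \
  -f dry_run=true \
  -f days_to_keep=14 \
  -f min_issues_to_keep=10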


📂 Complete Source Code for the Cleanup Workflow

name: đŸ§č Cleanup Closed Issues

on:
  schedule:
    - cron: '0 3 * * 1' # Runs every Monday at 03:00 UTC
  workflow_dispatch:
    inputs:
      dry_run:
        description: "Enable dry run mode (preview deletions, no actual delete)"
        required: false
        default: "false"
        type: choice
        options:
          - "true"
          - "false"
      days_to_keep:
        description: "Number of days to retain closed issues"
        required: false
        default: "30"
        type: string
      min_issues_to_keep:
        description: "Minimum number of closed issues to keep"
        required: false
        default: "6"
        type: string

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

permissions:
  issues: write

jobs:
  cleanup:
    runs-on: ubuntu-latest

    steps:
      - name: Install GitHub CLI
        run: sudo apt-get install --yes gh

      - name: Delete old closed issues
        env:
          GH_TOKEN: ${{ secrets.GH_FINEGRAINED_PAT }}
          DRY_RUN: ${{ github.event.inputs.dry_run || 'false' }}
          DAYS_TO_KEEP: ${{ github.event.inputs.days_to_keep || '30' }}
          MIN_ISSUES_TO_KEEP: ${{ github.event.inputs.min_issues_to_keep || '6' }}
          REPO: ${{ github.repository }}
        run: |
          NOW=$(date -u +%s)
          THRESHOLD_DATE=$(date -u -d "${DAYS_TO_KEEP} days ago" +%s)
          echo "Only consider issues older than ${THRESHOLD_DATE}"

          echo "::group::Checking GitHub API Rate Limits..."
          RATE_LIMIT=$(gh api /rate_limit --jq '.rate.remaining')
          echo "Remaining API requests: ${RATE_LIMIT}"
          if [[ "${RATE_LIMIT}" -lt 10 ]]; then
            echo "⚠ Low API limit detected. Sleeping for a while..."
            sleep 60
          fi
          echo "::endgroup::"

          echo "Fetching ALL closed issues from ${REPO}..."
          CLOSED_ISSUES=$(gh issue list --repo "${REPO}" --state closed --limit 1000 --json number,closedAt)

          if [ "${CLOSED_ISSUES}" = "[]" ]; then
            echo "✅ No closed issues found. Exiting."
            exit 0
          fi

          ISSUES_TO_DELETE=$(echo "${CLOSED_ISSUES}" | jq -r \
            --argjson now "${NOW}" \
            --argjson limit "${MIN_ISSUES_TO_KEEP}" \
            --argjson threshold "${THRESHOLD_DATE}" '
              .[:-(if length < $limit then 0 else $limit end)]
              | map(select(
                  (.closedAt | type == "string") and
                  ((.closedAt | fromdateiso8601) < $threshold)
                ))
              | .[].number
            ' || echo "")

          if [ -z "${ISSUES_TO_DELETE}" ]; then
            echo "✅ No issues to delete. Exiting."
            exit 0
          fi

          echo "::group::Issues to delete:"
          echo "${ISSUES_TO_DELETE}"
          echo "::endgroup::"

          if [ "${DRY_RUN}" = "true" ]; then
            echo "🛑 DRY RUN ENABLED: Issues will NOT be deleted."
            exit 0
          fi

          echo "⏳ Deleting issues..."
          echo "${ISSUES_TO_DELETE}" \
            | xargs -I {} -P 5 gh issue delete "{}" --repo "${REPO}" --yes

          DELETED_COUNT=$(echo "${ISSUES_TO_DELETE}" | wc -l)
          REMAINING_ISSUES=$(gh issue list --repo "${REPO}" --state closed --limit 100 | wc -l)

          echo "::group::✅ Issue cleanup completed!"
          echo "📌 Deleted Issues: ${DELETED_COUNT}"
          echo "📌 Remaining Closed Issues: ${REMAINING_ISSUES}"
          echo "::endgroup::"

          {
            echo "### đŸ—‘ïž GitHub Issue Cleanup Summary"
            echo "- **Deleted Issues**: ${DELETED_COUNT}"
            echo "- **Remaining Closed Issues**: ${REMAINING_ISSUES}"
          } >> "$GITHUB_STEP_SUMMARY"


đŸ› ïž Technical Design Choices Behind the Cleanup Workflow

Cleaning up old GitHub issues may seem trivial, but doing it well requires a few careful decisions. Here’s why I built the workflow the way I did:

Why GitHub CLI (gh)?

While I could have used raw REST API calls or GraphQL, the GitHub CLI (gh) provides a nice balance of power and simplicity:

  • It handles authentication and pagination under the hood.
  • Supports JSON output and filtering directly with --json and --jq.
  • Provides convenient commands like gh issue list and gh issue delete that make the script readable.
  • Comes pre-installed on GitHub runners or can be installed easily.

Example fetching closed issues:

gh issue list --repo "$REPO" --state closed --limit 1000 --json number,closedAt

No messy headers or tokens, just straightforward commands.

Filtering with jq

I use jq to:

  • Retain a minimum number of issues to keep (min_issues_to_keep).
  • Keep issues closed more recently than the retention period (days_to_keep).
  • Parse and compare issue closed timestamps with precision.
  • Skip pull requests entirely (in practice gh issue list only returns issues, so there are none to filter out).

The jq filter looks like this:

jq -r --argjson now "$NOW" --argjson limit "$MIN_ISSUES_TO_KEEP" --argjson threshold "$THRESHOLD_DATE" '
  .[:-(if length < $limit then 0 else $limit end)]
  | map(select(
      (.closedAt | type == "string") and
      ((.closedAt | fromdateiso8601) < $threshold)
    ))
  | .[].number
'
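
If you want to see what that does without touching a real repository, you can pipe toy data through the same filter (I drop the unused $now argument here; the numbers are made up):

echo '[{"number":1,"closedAt":"2024-01-01T00:00:00Z"},
       {"number":2,"closedAt":"2025-09-01T00:00:00Z"}]' |
  jq -r --argjson limit 1 --argjson threshold "$(date -u -d '30 days ago' +%s)" '
    .[:-(if length < $limit then 0 else $limit end)]
    | map(select(
        (.closedAt | type == "string") and
        ((.closedAt | fromdateiso8601) < $threshold)
      ))
    | .[].number
  '

This prints 1: the last $limit entries in the list are always protected by the slice, and of the rest, only issues closed before the threshold survive the select.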

Secure Authentication with Fine-Grained PAT

Because deleting issues is a destructive operation, the workflow uses a Fine-Grained Personal Access Token (PAT) with the narrowest possible scopes:

  • Issues: Read and Write
  • Limited to the repository in question

The token is securely stored as a GitHub Secret (GH_FINEGRAINED_PAT).

Note: Pull requests are not deleted because they are filtered out and the CLI won’t delete PRs via the issues API.

Dry Run for Safety

Before deleting anything, I can run the workflow in dry_run mode to preview what would be deleted:

inputs:
  dry_run:
    description: "Enable dry run mode (preview deletions, no actual delete)"
    default: "false"

This lets me double-check without risking accidental data loss.

Parallel Deletion

Deletion happens in parallel to speed things up:

echo "$ISSUES_TO_DELETE" | xargs -I {} -P 5 gh issue delete "{}" --repo "$REPO" --yes

Up to 5 deletions run concurrently — handy when cleaning dozens of old issues.

User-Friendly Output

The workflow uses GitHub Actions’ logging groups and step summaries to give a clean, collapsible UI:

echo "::group::Issues to delete:"
echo "$ISSUES_TO_DELETE"
echo "::endgroup::"

And a markdown summary is generated for quick reference in the Actions UI.


Why Bother?

I’m not deleting old issues because of disk space or API limits — GitHub doesn’t charge for that. It’s about:

  • Reducing clutter so my issue list stays manageable.
  • Making it easier to find recent, relevant information.
  • Automating maintenance to free my brain for other things.
  • Keeping my tooling neat and tidy, which is its own kind of joy.

Steal It, Adapt It, Use It

If you’re generating temporary issues or ephemeral data in GitHub Issues, consider using a cleanup workflow like this one.

It’s simple, secure, and effective.

Because sometimes, good housekeeping is the best feature.


đŸ§Œâœš Happy coding (and cleaning)!

How I fell in love with calendar.txt

The more I learn about Unix tools, the more I realise we keep reinventing Rube Goldberg wheels every day, and that Unix tools are, often, elegant enough.

Months ago, I discovered calendar.txt. A simple file with all your dates, so simple and stupid that I wondered 1) why I didn't think of it myself and 2) how it could be useful.

I downloaded the file and tried it. Without thinking much about it, I realised that I could add the following line to my offpunk startup:

!grep `date -I` calendar.txt --color

And, just like that, I suddenly have the important things for my day every time I start Offpunk. In my "do_the_internet.sh", I added the following:

grep `date -I` calendar.txt --color -A 7

Which allows me to have an overview of the next seven days.

But what about editing? This is the alias I added to my shell to automatically edit today’s date:

alias calendar="vim +/`date -I` ~/inbox/calendar.txt"

It feels so easy, so elegant, so simple. All those aliases came naturally, without having to spend more than a few seconds in the man page of "date". No need to fiddle with a heavy web interface. I can grep through my calendar. I can edit it with vim. I can share it, save it and synchronise it without changing anything else, without creating any account. Looking for a date, even far in the future, is as simple as typing "/YEAR-MONTH-DAY" in vim.

Recurring events

The icing on the cake became apparent when I received my teaching schedule for the next semester. I had to add a recurring event every Tuesday minus some special cases where the university is closed.

Not a big deal. I do it each year, fiddling with the web interface of my calendar to find the good options to make the event recurrent then removing those special cases without accidentally removing the whole series.

It takes at most 10 minutes, 15 if I miss something. Ten minutes of my life that I hate, forced to use a mouse and click on menus which are changing every 6 months because, you know, "yeah, redesign".

But, with my calendar.txt, it takes exactly 15 seconds.

/Tue

To find the first Tuesday.

i

To write the course number and classroom number, escape then

n.n.n.n.n.n.nn.n.n.n.nn.n.n.n.n.

I’m far from being a Vim expert but this occurred naturally, without really thinking about the tool. I was only focused on the date being correct. It was quick and pleasant.

Shared events and collaboration

I read my email in Neomutt. When I'm invited to an event, I must open a browser to access the email through my webmail and click the "Yes" button in order to have it added to my calendar. Events I didn't respond to still show up in my calendar, even if I don't want them. It took me some settings-digging to stop displaying events I had refused. Which is kinda dumb, but so are the majority of our tools these days.

With calendar.txt, I manually enter the details from the invitation, which is not perfect but takes less time than opening a browser, logging into a webmail and clicking a button, waiting at each step for countless JavaScript libraries to load.

Invitations are rare enough that I don't mind entering the details by hand. But I'm thinking about writing a small bash script that would read an ICS file and add it to calendar.txt. It looks quite easy to do.
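
Something like this rough, untested sketch could do the job, assuming a plain local-time DTSTART (no TZID, no folded lines) and my ~/inbox/calendar.txt path:

#!/bin/sh
# Append the first event of an .ics invitation to calendar.txt.
ics="$1"
stamp=$(tr -d '\r' < "$ics" | grep -m1 '^DTSTART' | cut -d: -f2)
summary=$(tr -d '\r' < "$ics" | grep -m1 '^SUMMARY' | cut -d: -f2-)
y=$(echo "$stamp" | cut -c1-4)
m=$(echo "$stamp" | cut -c5-6)
d=$(echo "$stamp" | cut -c7-8)
hh=$(echo "$stamp" | cut -c10-11)
mm=$(echo "$stamp" | cut -c12-13)
echo "$y-$m-$d $hh:$mm $summary" >> ~/inbox/calendar.txt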

I also thought about doing the reverse: a small script that would create an ICS and send it by email to any address added to an event. But it would be hard to track which events were already sent and which ones are new. Let's stick to the web interface when I need to create a shared event.

Calendar.txt should remain simple and for my personal use. The point of Unix tools is to allow you to create the tools you need for yourself, not to create a startup with a shiny name/logo that will attract investors hoping to make billions in a couple of years by enshittifying the life of captive users.

And when you work with a team, you are stuck anyway with the worst possible tool that satisfies the need of the dumbest member of the team. Usually the manager.

With Unix tools, each solution is personal and different from the others.

Simplifying calendaring

Another unexpected advantage of the system is that you don't need to guess the end date of events anymore. All I need to know is that I have a meeting at 10 and a lunch at 12. I don't need to estimate the duration of the meeting, which is usually only a rough estimate anyway and not important information. But you can't create an event in a modern calendar without giving a precise end.

Calendar.txt is simple, calendar.txt is good.

I can add events without thinking about it, without calendaring being a chore. Sandra explains how she realised that using an online calendar was a chore when she started to use a paper agenda.

Going back to a paper calendar is probably something I will end up doing but, in the meantime, calendar.txt is a breeze.

Trusting my calendar

Most importantly, I now trust my calendar.

I've been burned by this before: I had planned my whole journey to a foreign country in my online calendar, only to discover upon landing that my calendar had decided to be "smart" and shift all events because I was not in the same time zone. Since then, I actually write the time of an event in the title of the event, even if it looks redundant. This also helps with events being moved by accident while scrolling on a smartphone or in a browser. Which is rare but happened enough to make me anxious.

I had the realisation that I don’t trust any calendar application because, for events with a very precise time (like a train), I always fall back on checking the confirmation email or PDFs.

It’s not the case anymore with calendar.txt. I trust the file. I trust the tool.

There are not many tools you can trust.

Mobile calendar.txt

I don't need notifications about events on my smartphone. If a notification tells me about an event I forgot, it would be too late anyway. And if my phone is on silent, as it always is, the notification is useless. We killed notifications with too many notifications, something I addressed here:

I do want to consult/edit my calendar on my phone. Getting the file on my phone is as easy as having it synchronised with my computer through any means. It's a simple txt file.

Using it is another story.

Looking at my phone, I realise how far we have fallen: Android doesn't let me create a simple shortcut to that calendar.txt file that would open it on the current day. There's probably a way, but I can't think of one. Probably because I don't understand that system. After all, I'm not supposed to even try understanding it.

Android is not Unix. Android, like other proprietary Operating System, is a cage you need to fight against if you don’t want to surrender your choices, your data, your soul. Unix is freedom: hard to conquer but impossible to let go as soon as you tasted it.

I'm Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

September 02, 2025

Recently, I published an article related to MRS (MySQL REST Service), which we released as a lab. I wanted to explore how I could use this new cool feature within an application. I decided to create an application from scratch using Helidon. The application uses the Sakila sample database. Why Helidon? I decided to use […]

A Life Without Notifications

Disclaimer: this article is about my experience with the Mudita Kompakt but, as I am not affiliated with that company, I will not answer any questions about this device or about the Hisense A5. Everything I have to say about the device is in this post. For the rest, see the many videos and articles on the Web, or join the Mudita forum.

The big problem with digital minimalism is that there is no single solution that satisfies everyone. On forums devoted to digital minimalism, every solution is criticized for doing "too much" and, sometimes by the same people, not enough.

I have seen people looking for a dumbphone to cure their hyperconnection addiction complain that they could not install WhatsApp and Facebook on it. More generally, I have enough experience in the industry to know that humans are very bad at determining their own needs, something I call "the three-story single-story house syndrome". I want everything, but make it minimalist.

Screen addiction

Six years ago, I became aware that a large part of my smartphone addiction came from the screen itself. As a participant on a forum I frequent once said, screens have become so beautiful, the colors so rich, that the image is more beautiful than reality. Looking at a saturated, retouched landscape photo is more beautiful than looking at the actual landscape, with its grayness, the highway that doesn't appear in the photo's frame, the drizzle running down your neck. We no longer use our smartphones to do something; they are part of our body and our mind.

In my own case, doing without a screen by using a dumbphone was not an option, because the most important thing I do with my phone is navigate by bike and, while out in the middle of nature, create a return route to load onto my GPS. That, and using Signal, the only chat in my daily life.

To be honest, the dumbphone concept is somewhat beyond me because, if there is one thing I neither want nor need, it is to be reachable everywhere, all the time. If it's just to leave a dumbphone on silent in my pocket, I might as well take nothing at all!

My quest for digital sobriety therefore began with one of the very rare e-ink smartphones, the Hisense A5 (on the right in the illustration photo).

The Hisense is a cheap, poor-quality product aimed at the Chinese market. While it shipped without any Google services, it is full of Chinese spyware that is impossible to uninstall (which I tried to contain with the Adguard firewall, painstakingly configured over many hours).

For six years, I used this device exclusively. On my first trip with it, a train trip through Brittany for a book project, I was constantly anxious that "something would go wrong because I didn't have a real smartphone". But, little by little, I caught myself using it less than its predecessor, accepting its limits. The e-ink screen, combined with the slowness and the bugs of software partly in Chinese, gave me no desire to use it at all. Very quickly, the colors of smartphone screens started to look violent, aggressive to me. But let me spend a few minutes on such a screen and, suddenly, there is that good old shot of dopamine again, an urge to use it to do something. What? It hardly matters, as long as I can keep my eyes glued to those luminous moving shapes.

Permanent hyperconnection

I have often claimed that the Hisense was not a smartphone, to avoid installing some supposedly indispensable app for a service I needed or, quite simply, to get a paper menu in those restaurants that feel they are at the cutting edge of technology because there is a QR code taped to the table. But it was a lie. Because the Hisense is, deep down, an utterly ordinary smartphone.

I had my email on it, a web browser and even Mastodon, which I deleted very quickly but could still reach via Firefox or Inkbro, a browser optimized for e-ink screens.

My addiction is doing much better. I have lost the habit of having my smartphone on me all the time, and only take it out with me if I think I will really need it. It sounds like nothing but, for some addicts, going out without a phone is a real adventure. Trying it is an excellent way to realize just how addicted you are.

The Hisense may be big and ugly, but when I had it on me and hit a dull moment, I would check whether I had received any email. As with any smartphone, I had to remember to put it back on silent after turning the ringer on because I wanted to be reachable, and to update all the apps I kept around because I might potentially need them. In short, the classic management of a smartphone.

That suited me but, since e-ink smartphones never really took off, I wondered how I was going to replace a device that was showing signs of weakness (a battery that suddenly drains, charging that now only works intermittently).

That's when Mudita's Kompakt won me over.

Mudita is a Polish company that aims to offer products promoting mindfulness. Alarm clocks with a clean design, watches, and a dumbphone, the Mudita Pure, which I had been eyeing for a long time because it was based on an entirely open source system. Unfortunately, the Pure was too minimalist for me, as it couldn't work with my bike GPS.

Then came the Kompakt.

Based on Android, the Kompakt is technically a smartphone. Only smaller. And with an e-ink screen of far lower quality than the Hisense's. And yet…


First steps with the Kompakt

The first thing that struck me when unboxing my Kompakt is that I didn't have to create a single account or go through any procedure other than choosing the language. Insert a SIM card, switch it on, and it works.

Mudita pays close attention to privacy and offers no online services. The phone is entirely de-Googled, even "de-clouded" (unlike, for example, Murena /e/OS). For the first time in ages, I didn't have to fight my phone, didn't have to configure it, transform it.

It's incredible how happy that made me.

In the interest of transparency, I should mention that the developers do collect anonymized debug data and that, for now, this cannot be disabled. A discussion on the subject is under way on the Mudita forum.

Because yes, Mudita has a discussion forum, with a company employee taking part, trying to help users and relaying issues to the developers. Something that was standard in 2010 but seems incredible these days.

In short, I felt like a respected customer, not a cash cow. And I still can't get over it.

Besides the forum, what struck me about the Mudita is that it really pushes the idea of minimalism to its limits. The GPS application lets you search for an address and build a walking, cycling or driving route. And that's it. Options? None, nada, zilch! And you know what? It works! Since it's OpenStreetMap, it works very well, and I uninstalled Comaps, which I had installed out of reflex. The camera… takes photos, and that's it (you can merely toggle the flash).

Finding your way with the Mudita Kompakt

It's like that for all the apps: within 10 minutes you will have seen every option there is, and it's incredibly refreshing.

Okay, sometimes it borders on too much. The music player lists your MP3s and plays them in alphabetical order. That's it. They have promised to improve this but, deep down, I find it amusing to be limited this way.

Les choses sérieuses

Si Mudita n’offre aucun service en ligne, la meilleure maniĂšre d’interagir avec son tĂ©lĂ©phone est le Mudita Center, un logiciel compatible Windows/MacOS/Linux. AprĂšs l’avoir tĂ©lĂ©chargĂ©, vous devez brancher votre tĂ©lĂ©phone Ă  votre ordinateur avec
 retenez votre souffle
 un cĂąble USB. (sur Debian/Ubuntu, vous devez ĂȘtre membre du groupe "dialout". "sudo adduser ploum dialout", reboot et puis c’est bon)

Un cĂąble ! En 2025 ! Incroyable, non ? Quand je vois comme j’ai dĂ» me battre avec le Freewrite d’Astrohaus qui force l’utilisation de son cloud propriĂ©taire, j’apprĂ©cie Ă  outrance le fait de brancher mon tĂ©lĂ©phone avec un cĂąble.

Avec le Mudita Center, vous pouvez envoyer des fichiers sur l’appareil. Les MP3 pour la musique, les epub ou les pdf, qui seront ouverts dans l’appli E-reader. Pratique pour les billets de train et autres tickets Ă©lectroniques. Une section est rĂ©servĂ©e pour transfĂ©rer les fichiers APK que vous voulez « sideloader ». On s’est tellement fait entuber par Google et Apple qu’on a perdu le droit d’utiliser le mot « installer » en parlant d’un logiciel. Ce mot est dĂ©sormais privatisĂ© et il faut « sideloader » (du moins tant que c’est encore lĂ©galement possible
).

Dans mon cas, je n’ai sideloadĂ© qu’un seul APK : F-Droid. Avec F-Droid, j’ai pu installer Molly (un client Signal), mon gestionnaire de mot de passe, mon appli 2FA, le clavier Flickboard et Aurora Store. Avec Aurora Store, j’ai pu installer Komoot, Garmin, Proton Calendar et l’app SNCB pour les horaires de train. Pas de navigateur ni d’email cette fois ! Je me suis quand mĂȘme accordĂ© l’application WikipĂ©dia, pour tester.

Everything is small and in black and white (I was already used to that part), but in my case, everything works. Be aware that this may not be true for you. Apps that need Google services, such as Strava, will refuse to launch (no change from my Hisense there). My banking app requires a selfie camera (which the Kompakt lacks), and I'll have to find a workaround.

By the way, Mudita doesn't let you customize the application list. Apps are sorted alphabetically. No shortcuts, no options. Disconcerting at first, it very quickly becomes very pleasant, because it's, once again, one less thing to think about, one less excuse to fiddle.

Note that you can, however, hide applications. If you don't use the meditation app, you can simply hide it. Simple and effective.

Notifications

It was when I received my first Signal message that I realized something strange. I heard the sound all right, but I saw no notification.

And for good reason
 the Mudita Kompakt has no notifications! The home screen shows whether you've had calls or text messages, but beyond that, there are no notifications at all!

Mon premier rĂ©flexe a Ă©tĂ© d’investiguer l’installation d’un launcher alternatif, InkOS, qui permet une plus grande configurabilitĂ© et des notifications.

But
 wait a second! What am I doing? I'm trying to rebuild a smartphone! What if I tried using the Mudita the way it was designed to be used? Without notifications!

Le seul rĂ©el problĂšme avec cette approche c’est que les notifications existent, mais que Mudita les cache. On les entend donc, mais on ne peut pas savoir d’oĂč elles proviennent.

In my case, it's essentially Signal (well, Molly, for those keeping track). So in Signal, I set up a notification profile that is silent except for members of my close family.

If I hear my phone make a sound, I know it's a message from my family. The others I only see when I choose to open Signal. Which is exactly what a communication system should be.

Of course, things get more complicated if several applications send notifications. It is possible to reach the notifications and disable them per app using "Activity Launcher", available on F-Droid. It's a bit of fiddling, but it let me disable the "parasite" notifications from everything that isn't Signal.

Life without notifications

Once you accept this way of working, the Mudita suddenly makes complete sense.

I had already drastically reduced notifications on the Hisense. It was on silent most of the time, in fact. But every time I looked at the screen, I saw the little icons. Mechanically, I would swipe down to reveal the notification drawer and "check", then swipe sideways to dismiss.

Good grief, how I loathe those swiping gestures, never precise, never as satisfying as the sound of a pressed key. Incidentally, the Mudita doesn't let you "swipe" through the application list either. You scroll with an arrow. It's a small thing, but I appreciate it!

But even with no notifications on the Hisense, I would still check from time to time whether I had received an important email, or look up some piece of information on a website.

Oh, nothing terrible. A perfectly controlled addiction. But a set of reflexes I wanted to purge from my life.

With the Mudita Kompakt, the experience is very unsettling. Mechanically, I pick up the device and
 nothing. There is nothing to do. The only thing I can check is Signal. And then I'm forced to put the little screen back down.

No need to switch to silent or airplane mode either. The Kompakt has a hardware "Offline+" switch that disables all networks and all sensors, including the camera and the microphone. In Offline+ mode, nothing comes in and nothing goes out of the device. And when I am connected, I know I'll only get phone calls, text messages, and Signal messages from my close family.

It's like a new world



Not everything is perfect

Let's not kid ourselves: the Mudita Kompakt is far from perfect. It's small, yet a bit too bulky. There are bugs, like the alarm going off late or not at all, or the camera taking nearly two seconds between pressing the button and actually capturing the image (still better than the Hisense's camera, whose lens jammed after a few weeks of use, leaving all my photos hopelessly blurry).

The forum is full of dissatisfied users. Some of them, I think, had not internalized the limits of digital minimalism and were hoping for a full smartphone with an e-ink screen. In other cases, though, it's clearly due to bugs in the system for use cases that don't concern me directly, such as the Bluetooth problems, while I'm simply delighted to have an audio jack. The Kompakt uses a heavily modified, de-Googled version of Android 12 (Google's phones are on Android 16). The hardware itself is fairly old (more so than my Hisense). These are details that can turn out to matter. The battery, for example, lasts three to four days, which is nothing extraordinary compared with the Hisense. One hour of Wi-Fi hotspot drains 10% of the battery, where the Hisense wouldn't lose 3%.

The music application really is very minimalist!

MalgrĂ© ses limites, l’appareil n’est pas bon marchĂ© et il est probablement possible de configurer n’importe quel appareil Android pour avoir un Ă©cran Ă©purĂ© et pas de notifications. Mais ce qui me plaĂźt avec le Mudita c’est justement le fait que je ne le configure pas. Que je ne cherche pas Ă  comprendre ce que je dois bloquer, que je ne passe pas du temps Ă  optimiser mon Ă©cran d’accueil ou le placement des applications.

It certainly won't suit everyone. There are surely plenty of flaws that make the Kompakt unusable for you. But for my own little case and my current lifestyle, it's a genuine joy (except for that wretched banking app, for which I still haven't found a solution).

Agreeing to let go

Because yes, I'm going to miss things. I'll see urgent emails much later. I won't be able to look up a quick bit of information while on the move. I won't be able to take beautiful photos.

That's the whole point! Digital minimalism is, by its very essence, no longer being able to do everything all the time. It's being forced to be bored in the idle moments, to plan certain expeditions, to ask people around you for information when necessary, to admit, in certain situations, that life would have been easier with a traditional smartphone.

If you haven't done the prior work of sorting out what is truly necessary for you, don't even think about getting a minimalist phone like the Kompakt. This phone suits me because I've been thinking about the subject for years, and because it's compatible with my lifestyle and my obligations.

Si vous idĂ©alisez le minimalisme sans rĂ©flĂ©chir aux consĂ©quences, vous serez frustrĂ©s Ă  la premiĂšre friction, Ă  la premiĂšre perte de cette facilitĂ© omniprĂ©sente qu’est le smartphone. Le minimalisme numĂ©rique est Ă©galement un concept trĂšs personnel. Quand je vois le nombre d’appareils connectĂ©s que je possĂšde, j’ai du mal Ă  dire que je suis un « minimaliste ». Je cherche juste Ă  conscientiser et Ă  faire en sorte que chaque appareil ait pour moi un bĂ©nĂ©fice trĂšs clair avec le moins possible de dĂ©savantages (comme mon GPS de vĂ©lo ou mon analyseur de qualitĂ© d’air). Je ne suis pas vraiment minimaliste, je pense juste que le smartphone traditionnel a sur ma vie un impact nĂ©gatif trĂšs important que je cherche Ă  minimiser.

Some people tell me they have no choice. As Jose Briones puts it very well, that's false. For now, we do have the choice. It's just that it isn't an easy choice, and you have to own the consequences and accept changing your way of life for it.

The smartphone obligation

And that is precisely what is most frightening about this attempt to minimize smartphone use: realizing just how nearly mandatory owning one has become, and how increasingly tenuous that choice is. Commercial services, but also public administrations, assume that you necessarily have a recent smartphone, a Google account, a permanent Internet connection, and a well-charged battery, and that you're willing to install yet another app and create an online account for something as mundane as paying for a parking spot or getting into an event. A train conductor recently confided to me that the SNCB was planning to phase out paper tickets in favor of the app and the smartphone.

On minimalist forums, I even discovered a category of users who own a conventional smartphone just for work, out of sheer fear of being mocked or coming across as a rebel to their colleagues. In the same way, some people refuse to install Signal or delete their Facebook account for fear that it might look suspicious. One thing seems clear to me: if it's the fear of potentially looking suspicious that is holding you back, it's urgent to act now, while that suspicion is still only potential. Install Signal and GrapheneOS or /e/OS today, so that in the future you'll have the excuse of saying you've been operating that way for months or years.

Regarding the smartphone, I note with dread that while it's theoretically possible never to have had one, it's extremely difficult to go back. Banks, for example, often prevent you from returning to a smartphone-free authentication method once it has been activated!

I smile when I think of the times my refusal of the smartphone earned me a remark along the lines of: "Oh? You're not comfortable with new technology?". Often I don't answer. Or I settle for a "quite the contrary, actually
". At least with the Mudita, I'll be able to hold it up and proclaim loud and clear: I don't have a smartphone!

You and I know that this isn't quite true, technically, but those who want to impose the ubiquity of the GoogApple smartphone are, by definition, technological ignoramuses. They won't notice a thing
 And, for a while longer, they'll be forced to adapt, to accept that no, not everyone has a smartphone on them at all times.

I'm Ploum and I've just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

August 28, 2025

TLDR: A software engineer with a dream undertook the world's most advanced personal trainer course. Despite being the odd duck among professional athletes, coaches, and bodybuilders, he graduated top of his class, and is now starting a mission to build a free and open source fitness platform to power next-gen fitness apps and a Wikipedia-style public service. But he needs your help!

The fitness community & industry seem largely dysfunctional.

  1. Content mess on social media

Social media is flooded with redundant, misleading content, e.g. endless variations of the same "how to do 'some exercise'" videos, repackaged daily by creators chasing relevance, and deceptive clickbait to farm engagement (notable exception: Sean Nalewanyj has been consistently authentic, accurate, and entertaining). Some content is actually great, but it's extremely hard to find, needs curation, and sometimes context.

  2. Selling “programs” and “exercise libraries”

Coaches/vendors sell workout programs and exercise instructions as if they were proprietary "secret sauce". They aren't. As much as they would like you to believe otherwise, workout plans and exercise instructions are not copyrightable. Specific expressions of them, such as video demonstrations, are, though there are ways such videos can be legally used by third parties, and I see an opportunity for a win-win model with the creator, but more on that later. This is why sellers usually simply repeat what is already widely understood, replicate (possibly old and misguided) instructions, or overcomplicate in an attempt to differentiate. Furthermore, workout programs (or rather, a "rendition" of one) often have no or limited personalization options. The actually important parts, such as adjusting programs across multiple goals (e.g. combining with sports), across time (based on observed progression), or to accommodate injuries, and ensuring consistency with current scientific evidence, require expensive expertise and are often missing from the "product".

Many exercise libraries exist, but they require royalties. To build a new app (even a non-commercial one), you need to either license a commercial library or create your own. I checked multiple free open source ones, but let's just say there are serious quality and legal concerns. Finally, Wikipedia is legal and favorably licensed, but it is too text-based and can't easily be used by applications.

  3. Sub-optimal AIs

When you ask an LLM for guidance, it sometimes does a pretty good job, but often it doesn't, because:

  ‱ Unclear sources: AIs regurgitate whatever programs they were fed during training, sometimes written by experts, sometimes by amateurs.
  ‱ Output degrades when you need specific personalized advice, or when you need adjustments over time, which is actually a critical piece of progressing in your fitness journey. Try to make it too custom and it starts hallucinating.
  ‱ Today's systems are text-based. They may seem cheap today, because vendors are subsidizing the true cost in an attempt to capture the market, but they're inefficient, inaccurate, and financially unsustainable. Text also makes for a very crude user interface.

I believe future AIs will use data models and UIs that are both domain-specific and richer than just text, so we need a library to match. Even "traditional" (non-AI) applications can gain a lot of functionality if they can leverage such structured data.

  4. Closed source days are numbered

Good coaches can bring value via in-person demonstrations, personalization, holding clients accountable, and helping them adopt new habits. This unscalable model puts an upper limit on their income. The better-known coaches on social media solve this by launching their own apps. Some of these seem actually quite good, but they suffer from the typical downsides that we've seen in other software domains, such as vendor lock-in, lack of data ownership, incompatibility across apps, lack of customization, high fees, etc. We've seen how this plays out in devtools, in enterprise, in cloud. According to more and more investment firms, it's starting to play out in every other industry (just check the OSS Capital portfolio, or see what Andreessen Horowitz, one of the most successful VC funds of all time, has to say about it): software is becoming open source in all markets. It leads to more user-friendly products and is the better way to build more successful businesses. It is a disruptive force. Board the train or be left in the dust.

What am I going to do about it?

I'm an experienced software engineer. I have experience building teams and companies. I'm lucky enough to have a window of time and some budget, but I need to make it count. The first thing I did was educate myself properly on fitness. In 2024-2025, I participated in the Menno Henselmans Personal Trainer course. This is the most in-depth, highly accredited, science-based course program for personal trainers that I could find. It was an interesting experience being the lone software developer among a group of athletes, coaches, and bodybuilders. Earlier this year, I graduated Magna Cum Laude, top of my class.

Right: one of the best coaches in the world. Left: this guy

Now I am a certified coach who learned from one of the top coaches worldwide, with decades of experience. I can train and coach individuals. But as a software engineer, I know that even a small software project can grow to change the world. What Wikipedia did for articles is what I aspire to do for fitness: a free public service comprising information, but more so hands-on tools and applications to put that information into action. Perhaps it will even give industry professionals opportunities to differentiate in more meaningful ways.

To support the applications, we also need:

  • an exercise library
  • an algorithms library (or “tools library” in AI lingo)

I’ve started prototyping both the libraries and some applications on body.build. I hope to grow a project and a community around it that will outlive me. It is therefore open source.

Next-gen applications and workflows

Thus far body.build has:

  • a calorie calculator
  • a weight lifting volume calculator
  • a program builder
  • a crude exercise explorer (prototype)

I have some ideas for more stuff that can be built by leveraging the new libraries (discussed below):

  ‱ a mobile companion app to perform your workouts, with quick access to exercise demonstrations/cues, and to log performance. You'll own your data to do your own analysis; of personal interest to me is the ability to try different variations or cues (see below), track their results, and analyze which work better for you. (Note: I maintain a list of awesome OSS health/fitness apps; perhaps an existing one could be reused.)
  ‱ detailed analysis of (and comparison between) different workout programs, highlighting the different areas being emphasized and the estimated "bang for buck" (expected size and strength gains vs fatigue and time spent)
  • just-in-time workout generation. Imagine an app to which you can say “my legs are sore from yesterday’s hike. In 2 days I will have a soccer match. My availability is 40 min between two meetings and/or 60min this evening at the gym. You know my overall goals, my recent workouts and that I reported elbow tendinitis symptoms in my last strength workout. Now generate me an optimal workout”.

The exercise library

The library should be liberally licensed and not restrict reuse. There is no point trying to “protect” content that has very little copyright protection anyway. I believe that “opening up” to reuse (also commercial) is not only key to a successful project, but also unlocks commercial opportunities for everyone.

The library needs in-depth knowledge that goes beyond what apps typically contain, and must include:

  • exercises alternatives and customization options (and their trade-offs)
  ‱ detailed biomechanical data (such as muscle involvement and loading patterns across the range of muscle lengths and different joint movements)

Furthermore, we also need the usual textual descriptions and exercise demonstration videos. Unlike "traditional" libraries that present a single authoritative view, I find it valuable to clarify what is commonly agreed upon vs where coaches or studies still disagree, and to present an overview of the disagreement with links to the relevant resources (which could be an Instagram post, a YouTube video, or a scientific study). A layman can go with the standard instructions, whereas advanced trainees get an overview of different options and cues, which they can check out in more detail, try, and see what works best for them. No need to scroll social media for random tips; they're all aggregated and curated in one place.

Our library (liberally licensed) includes the text and links to third-party public content. The third-party content itself is typically subject to stronger limitations, but it can always be linked to, and often also embedded under fair use and the standard YouTube license, at least for free applications. Perhaps one day we'll have our own content library, but for now we can avoid a lot of work and promote existing creators' quality content. Win-win.

Body.build has a prototype of this today. I'm gradually adding more and more exercises. Right now it's part of the codebase, but at some point it will make more sense to separate it out and use one of the Creative Commons licenses.

The algorithms library

Through the course, I learned about various principles (validated by decades of coaching experience and by scientific research) for constructing optimal training based on input factors (e.g. optimal workout volume depends on many factors, including sex, sleep quality, food intake, etc.). Similarly, things like optimal recovery timing or exercise swaps can be calculated using well-understood principles.

Rather than thinking of workout programs as the main product, I think of them as just a derivative: an artifact that can be generated by first determining an individual's personal parameters, and then applying these algorithms to them. Better yet, instead of pre-defining a rigid multi-month program (which only works well for people with very consistent schedules), this approach makes it possible to generate guidance at any point on any day, which works better for people with inconsistent agendas.

Body.build today has a few such codified algorithms; I've only implemented what I've needed so far.

I need help

I may know how to build software, and I have several ideas for new types of applications that could benefit people. But there is a lot I haven't figured out yet! Maybe you can help?

Particular pain points:

  1. Marketing: developers don't like it, but it's critical. We need to determine who to build for (coaches? developers? end-users? beginners or experts?), what their biggest problems are, and how to reach them. Marketing firms are quite expensive; perhaps AI will make this a lot more accessible. For now, I think it makes most sense to build apps for technology & open source enthusiasts who geek out about optimizing their weight lifting and bodybuilding. The type of people who would obsessively scroll Instagram or YouTube hoping to find novel workout tips (and who will now hopefully have a better way). I'd like to bring sports & conditioning more to the forefront too, but can't prioritize that now.
  2. UX and UI design.

Developer help is less critical, but of course welcome too.

If you think you can help, please reach out on the body.build Discord or X/Twitter, and of course check out body.build! And please forward this to people who are into both fitness and open source technology. Thanks!

August 27, 2025

Have you ever fired up a Vagrant VM, provisioned a project, pulled some Docker images, ran a build
 and ran out of disk space halfway through? Welcome to my world. Apparently, the default disk size in Vagrant is tiny—and while you can specify a bigger virtual disk, Ubuntu won’t magically use the extra space. You need to resize the partition, the physical volume, the logical volume, and the filesystem. Every. Single. Time.

Enough of that nonsense.

🛠 The setup

Here’s the relevant part of my Vagrantfile:

Vagrant.configure(2) do |config|
  config.vm.box = 'boxen/ubuntu-24.04'
  config.vm.disk :disk, size: '20GB', primary: true

  config.vm.provision 'shell', path: 'resize_disk.sh'
end

This makes sure the disk is large enough, and that it gets automatically resized by the resize_disk.sh script at first boot.

✹ The script

#!/bin/bash
set -euo pipefail
LOGFILE="/var/log/resize_disk.log"
exec > >(tee -a "$LOGFILE") 2>&1
echo "[$(date)] Starting disk resize process..."

# All tools used below, including vgdisplay and bc for the LV math.
REQUIRED_TOOLS=("parted" "pvresize" "lvresize" "lvdisplay" "vgdisplay" "bc" "grep" "awk")
for tool in "${REQUIRED_TOOLS[@]}"; do
  if ! command -v "$tool" &>/dev/null; then
    echo "[$(date)] ERROR: Required tool '$tool' is missing. Exiting."
    exit 1
  fi
done

# Read current and total partition size (in sectors)
parted_output=$(parted --script /dev/sda unit s print || true)
read -r PARTITION_SIZE TOTAL_SIZE < <(echo "$parted_output" | awk '
  / 3 / {part = $4}
  /^Disk \/dev\/sda:/ {total = $3}
  END {print part, total}
')

# Trim 's' suffix
PARTITION_SIZE_NUM="${PARTITION_SIZE%s}"
TOTAL_SIZE_NUM="${TOTAL_SIZE%s}"

if [[ "$PARTITION_SIZE_NUM" -lt "$TOTAL_SIZE_NUM" ]]; then
  echo "[$(date)] Resizing partition /dev/sda3..."
  parted --fix --script /dev/sda resizepart 3 100%
else
  echo "[$(date)] Partition /dev/sda3 is already at full size. Skipping."
fi

if [[ "$(pvresize --test /dev/sda3 2>&1)" != *"successfully resized"* ]]; then
  echo "[$(date)] Resizing physical volume..."
  pvresize /dev/sda3
else
  echo "[$(date)] Physical volume is already resized. Skipping."
fi

LV_SIZE=$(lvdisplay --units M /dev/ubuntu-vg/ubuntu-lv | grep "LV Size" | awk '{print $3}' | tr -d 'MiB')
PE_SIZE=$(vgdisplay --units M /dev/ubuntu-vg | grep "PE Size" | awk '{print $3}' | tr -d 'MiB')
CURRENT_LE=$(lvdisplay /dev/ubuntu-vg/ubuntu-lv | grep "Current LE" | awk '{print $3}')

USED_SPACE=$(echo "$CURRENT_LE * $PE_SIZE" | bc)
FREE_SPACE=$(echo "$LV_SIZE - $USED_SPACE" | bc)

if (($(echo "$FREE_SPACE > 0" | bc -l))); then
  echo "[$(date)] Resizing logical volume..."
  lvresize -rl +100%FREE /dev/ubuntu-vg/ubuntu-lv
else
  echo "[$(date)] Logical volume is already fully extended. Skipping."
fi

💡 Highlights

  • ✅ Uses parted with --script to avoid prompts.
  • ✅ Automatically fixes GPT mismatch warnings with --fix.
  • ✅ Calculates exact available space using lvdisplay and vgdisplay, with bc for floating point math.
  • ✅ Extends the partition, PV, and LV only when needed.
  • ✅ Logs everything to /var/log/resize_disk.log.

🚹 Gotchas

  • Your disk must already use LVM. This script assumes you’re resizing /dev/ubuntu-vg/ubuntu-lv, the default for Ubuntu server installs.
  • You must use a Vagrant box that supports VirtualBox’s disk resizing—thankfully, boxen/ubuntu-24.04 does.
  • If your LVM setup is different, you’ll need to adapt device paths.

🔁 Automation FTW

Calling this script as a provisioner means I never have to think about disk space again during development. One less yak to shave.
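
If you want to confirm the resize actually took effect, here is a quick sanity check from the host (a sketch, assuming the default Ubuntu LVM layout used above):

# The root filesystem and the LVM volumes should now show the extra space.
vagrant ssh -c 'df -h / && sudo pvs && sudo lvs'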

Feel free to steal this setup, adapt it to your team, or improve it and send me a patch. Or better yet—don’t wait until your filesystem runs out of space at 3 AM.

August 26, 2025

No medals for resisters

If you want to change the world, you have to join the resistance. You have to accept acting and keeping quiet. You have to accept losing comfort, opportunities, relationships. And you must hope for no reward, no recognition.

Spying worse than anything you imagine

Prenez notre dĂ©pendance envers quelques monopoles technologiques. Je pense qu’on ne se rend pas compte de l’espionnage permanent que nous imposent les smartphones. Et que ces donnĂ©es ne sont pas simplement stockĂ©es « chez Google ».

Tim Sh a dĂ©cidĂ© d’investiguer. Il a ajoutĂ© un simple jeu gratuit sur un iPhone vierge dont tous les services de localisation Ă©taient dĂ©sactivĂ©s. Cela semble raisonnable, non ?

By analyzing the packets, he discovered the incredible amount of information being sent by the game's Unity engine. Which means the game's own designer probably doesn't know that his game is spying on you.

But Tim Sh went one better: he tracked this data and discovered where it was being resold. Perfectly established companies resell user data in real time: position, both historical and current, battery level and screen brightness, the Internet connection in use, the mobile carrier, the free space available on the phone.

All of it is accessible in real time, for millions of users. Including users convinced they're protecting their privacy by disabling location permissions, by being careful, even by using GrapheneOS containers: there's no malice, no hacking, nothing illegal. If the app works, it's sending its data, period.

Also worth noting: data about Europeans costs more. The GDPR makes it harder to obtain. Which is proof that political regulation works. The GDPR is very far from sufficient; its only real use is to demonstrate that political power can act.

We tend to mock Europe's smallness because we measure it with American metrics. Our politicians dream of "unicorns" and European monopolies. That's a mistake: Europe's strength is the opposite of those values.

As Marcel Sel points out: Europe is very imperfect, but it is perhaps the most progressive structure in the world, and the one that best protects its citizens.

Outrage saturation

Faced with this, we see two reactions: total indifference, and violent indignation. But contrary to what you might think, the second has no more impact than the first.

Olivier Ertzscheid speaks of outrage saturation: a permanent indignation that makes us lose all capacity to act.

I see a lot of outrage about the genocide Israel is committing in Gaza. But few actions, if any. Yet one simple action is to delete your WhatsApp account. It is almost certain that WhatsApp data is used to target strikes. Deleting your account is therefore a real action. The fewer WhatsApp accounts there are, the less indispensable Gazans will find the app, and the less data there will be for Israel.

Instead of being outraged, join the active resistance. Cut as many cords as you can. You'll lose opportunities? Contacts? You'll miss out on information?

That's the goal! That's the objective! That's resistance, the new maquis. "Yes, but so-and-so is on Facebook." You don't join the resistance in slippers with the whole family in tow. That's the very principle of resistance: taking risks, carrying out actions that not everyone understands or approves of, in the hope of changing things for good.

It's hard, and no one will give you a medal for it. If you're looking for ease and comfort, or if you want recognition and official congratulations, the resistance is not for you.

Stopping to think

Yes, companies are headless chickens running in every direction. My years in the IT industry let me observe that the vast majority of employees do strictly nothing useful. All we do is pretend. When there is an impact, which is extremely rare, it's to help a customer "pretend better".

I've stopped shouting this from the rooftops, because it's impossible to make someone understand something when their salary depends on them not understanding it. But the fact is that everyone who stops to think ends up at this same conclusion.

The enshittification of companies can hit you in the most unexpected way, through a product you're particularly fond of. For me, that's Komoot, a tool I use constantly to plan my long bike trips, and sometimes use "on the road", when I'm a bit lost and want a safe but fast route to my destination.

For those who don't see the point of a GPS on a bike, Thierry Crouzet has written a post detailing how this accessory changes the practice of cycling.

But Komoot, a German startup that presented itself as a champion of bicycle travel, with founders who promised never to sell their baby, has been sold to an investment fund notorious for enshittifying everything it buys.

I don't blame the founders. I know full well that above a certain sum, we all reconsider our promises. WhatsApp's founders originally wanted to strongly protect their users' privacy. They still sold their application to Facebook because, by their own admission, you accept certain compromises above a certain sum.

Fortunately, free solutions are emerging, such as the excellent Cartes.app, which has tackled the problem head-on.

It still lacks an easy way to send a route to my bike GPS, which would make it usable day to day, but the symbol is clear: dependence on enshittified products is not inevitable!

On the necessity of free software

As Gee demonstrates, adding non-essential features is not neutral: it considerably increases the risk of breakage and problems.

By its very nature, this simplification can only come through free software, which forces modularity. Liorel gives a very telling example: because of its complexity, Microsoft Excel will forever use the Julian calendar, unlike LibreOffice, which uses the current Gregorian calendar.

Simplification, libertĂ©, ralentissement, dĂ©croissance de notre consommation ne sont que les faces d’une mĂȘme forme de rĂ©sistance, d’une mĂȘme conscientisation de la vie dans sa globalitĂ©.

Slowing down and stepping back. That is what Chris Brannons brutally gave me with the last post on his Gemini capsule. And when I say the last



Barring unforeseen circumstances or unexpected changes, my last day on earth will be June 13th, 2025.

Chris was 46, and he took the time to write the how and the why of his euthanasia procedure. After that post, he took the time to answer my emails while I encouraged him not to go through with it.

The symbol of the bicycle

We can't not care. We can't settle for outrage. So, with the few million seconds we have left to live, we must act. Act by doing what we believe is best for ourselves, best for our children, best for humanity.

Like riding a bike!

And too bad if it changes nothing. Too bad if it makes us look strange to some. Too bad if it comes with certain drawbacks. Riding a bike is joining the resistance!

Symbole de libertĂ©, de simplification, d’indĂ©pendance et pourtant extrĂȘmement technologique, le vĂ©lo n’a jamais Ă©tĂ© aussi politique. Comme le souligne Klaus-Gerd Giesen, le Bikepunk est philosophique et politique !

It amuses me greatly, by the way, when the world of Bikepunk is described as one from which technology has disappeared. Because the bicycle isn't technology, perhaps?

And if you don't have the book yet, all that's left is to run over and say hello to your favourite bookseller, and join the resistance!

The illustration photo was sent to me by Julien Ursini and is under CC-By. Deep into reading Bikepunk, he was struck to discover this rusted bicycle frame standing in the bed of the Bléone river, like an act of symbolic resistance. I couldn't have dreamed of a better illustration for this post.

I'm Ploum and I've just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

August 20, 2025

Or: Why you should stop worrying and love the LTS releases.

TL;DR: Stick to MediaWiki 1.43 LTS, avoid MediaWiki 1.44.

There are two major MediaWiki releases every year, and every fourth such release gets Long Term Support (LTS). Two consistent approaches to upgrading MediaWiki are to upgrade every major release or to upgrade every LTS version. Let’s compare the pros and cons.

Which Upgrade Strategy Is Best

I used to upgrade my wikis for every MediaWiki release, or even run the master (development) branch. Having become more serious about MediaWiki operations by hosting wikis for many customers at Professional Wiki, I now believe sticking to LTS versions is the better trade-off for most people.

Benefits and drawbacks of upgrading every major MediaWiki version (compared to upgrading every LTS version):

  • Pro: You get access to all the latest features
  • Pro: You might be able to run more modern PHP or operating system versions
  • Con: You have to spend effort on upgrades four times as often (twice a year instead of once every two years)
  • Con: You have to deal with breaking changes four times as often
  • Con: You have to deal with extension compatibility issues four times as often
  • Con: You run versions with shorter support windows. Regular major releases are supported for 1 year, while LTS releases receive support for 3 years

What about the latest features? MediaWiki is mature software. Its features evolve slowly, and most innovation happens in the extension ecosystem. Most releases only contain a handful of notable changes, and there is a good chance none of them matter for your use cases. If there is something you would benefit from in a more recent non-LTS major release, that is an argument against sticking to the LTS version, and it is up to you to determine whether that benefit outweighs all the cons. I think it rarely does, and the comparison is not even close.

The Case Of MediaWiki 1.44

MediaWiki 1.44 is the first major MediaWiki release after MediaWiki 1.43 LTS, and at the time of writing this post, it is also the most recent major release.

As with many releases, MediaWiki 1.44 brings several breaking changes to its internal APIs. This means that MediaWiki extensions that work with the previous versions might no longer work with MediaWiki 1.44. This version brings a high number of these breaking changes, including some particularly nasty ones that prevent extensions from easily supporting both MediaWiki 1.43 LTS and MediaWiki 1.44. That means if you upgrade now, you will run into various compatibility problems with extensions.

Examples of the type of errors you will encounter:

PHP Fatal error: Uncaught Error: Class “Html” not found

PHP Fatal error: Uncaught Error: Class “WikiMap” not found

PHP Fatal error: Uncaught Error: Class “Title” not found
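
Errors like these usually mean an extension still references old global class names that newer MediaWiki versions have moved into namespaces. The kind of change involved on the extension side looks roughly like this (a sketch, assuming the namespaced classes; the actual fix depends on each extension):

use MediaWiki\Html\Html;
use MediaWiki\Title\Title;
use MediaWiki\WikiMap\WikiMap;

// Same calls as before, now resolved against the namespaced classes:
$link = Html::element( 'a', [ 'href' => $url ], 'My label' );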

Given that most wikis use dozens of MediaWiki extensions, this makes the “You have to deal with extension compatibility issues” con particularly noteworthy for MediaWiki 1.44.

Unless you have specific reasons to upgrade to MediaWiki 1.44, just stick to MediaWiki 1.43 LTS and wait for MediaWiki 1.47 LTS, which will be released around December 2026.

See also: When To Upgrade MediaWiki (And Understanding MediaWiki versions)



After nearly two decades and over 1,600 blog posts written in raw HTML, I've made a change that feels long overdue: I've switched to Markdown.

Don't worry, I'm not moving away from Drupal. I'm just moving from a "HTML text format" to a "Markdown format". My last five posts have all been written in Markdown.

I've actually written in Markdown for years. I started with Bear for note-taking, and for the past four years Obsidian has been my go-to tool. Until recently, though, I've always published my blog posts in HTML.

For almost 20 years, I wrote every blog post in raw HTML, typing out every tag by hand. For longer posts, it could take me 45 minutes to wrap everything in <p> tags, add links, and close HTML tags like it was still 2001. It was tedious, but also a little meditative. I stuck with it, partly out of pride and partly out of habit.

Getting Markdown working in Drupal

So when I decided to make the switch, I had to figure out how to get Markdown working in Drupal. Drupal has multiple great Markdown modules to choose from, but I picked Markdown Easy because it's lightweight, fully tested, and built on the popular CommonMark library.

I documented my installation and upgrade steps in a public note titled Installing and configuring Markdown Easy for Drupal.

I ran into one problem: the module's security-first approach stripped all HTML tags from my posts. This was an issue because I mostly write in Markdown but occasionally mix in HTML for things Markdown doesn't support, like custom styling. One example is creating pull quotes with a custom CSS class:

After 20 years of writing in HTML, I switched to *Markdown*.

<p class="pullquote">HTML for 20 years. Markdown from now on.</p>

Now I can publish faster while still using [Drupal](https://drupal.org).

HTML in Markdown by design

Markdown was always meant to work hand in hand with HTML, and Markdown parsers are supposed to leave HTML tags untouched. John Gruber, the creator of Markdown, makes this clear in the original Markdown specification:

HTML is a publishing format; Markdown is a writing format. Thus, Markdown's formatting syntax only addresses issues that can be conveyed in plain text. [...] For any markup that is not covered by Markdown's syntax, you simply use HTML itself. There is no need to preface it or delimit it to indicate that you're switching from Markdown to HTML; you just use the tags.

In Markdown Easy 1.x, allowing HTML tags required writing a custom Drupal module with a specific "hook" implementation. This felt like too much work for something that should be a simple configuration option. I've never enjoyed writing and maintaining custom Drupal modules for cases like this.

I reached out to Mike Anello, the maintainer of Markdown Easy, to discuss a simpler way to mix HTML and Markdown.

I suggested making it a configuration option and helped test and review the necessary changes. I was happy when that became part of the built-in settings in version 2.0. A few weeks later, Markdown Easy 2.0 was released, and this capability is now available out of the box.

Now that everything is working, I am considering converting my 1,600+ existing posts from HTML to Markdown. Part of me wants everything to be consistent, but another part hesitates to overwrite hundreds of hours of carefully crafted HTML. The obsessive in me debates the archivist. We'll see who wins.

The migration itself would be a fun technical challenge. Plenty of tools exist to convert HTML to Markdown so no need to reinvent the wheel. Maybe I'll test a few converters on some posts to see which handles my particular setup best.

Extending Markdown with tokens

Like Deane Barker, I often mix HTML and Markdown with custom "tokens". In my case, they aren't official web components, but they serve a similar purpose.

For example, here is a snippet that combines standard Markdown with a token that embeds an image:

Nothing beats starting the day with [coffee](https://dri.es/tag/coffee) and this view:

[​image beach-sunrise.jpg lazy=true schema=true caption=false]

These tokens get processed by my custom Drupal module and transformed into full HTML. That basic image token? It becomes a responsive picture element complete with lazy loading, alt-text from my database, Schema.org support, and optional caption. I use similar tokens for videos and other dynamic content.

The real power of tokens is future proofing. When responsive images became a web standard, I could update my image token processor once and instantly upgrade all my blog posts. No need to edit old content. Same when lazy loading became standard, or when new image formats arrive. One code change updates all 10,000 images or so that I've ever posted.

My token system has evolved over 15 years and deserves its own blog post. Down the road, I might turn some of the tokens into web components like Deane describes.

Closing thoughts

In the end, this was not a syntax decision: it was a workflow decision. I want less friction between an idea and publishing it. Five Markdown posts in, publishing is faster, cleaner, and more enjoyable, while still giving me the flexibility I need.

Those 45 minutes I used to spend on HTML tags? I now spend on things that matter more, or on writing another blog post.

Let’s talk about environment variables in GitHub Actions — those little gremlins that either make your CI/CD run silky smooth or throw a wrench in your perfectly crafted YAML.

If you’ve ever squinted at your pipeline and wondered, “Where the heck should I declare this ANSIBLE_CONFIG thing so it doesn’t vanish into the void between steps?”, you’re not alone. I’ve been there. I’ve screamed at $GITHUB_ENV. I’ve misused export. I’ve over-engineered echo. But fear not, dear reader — I’ve distilled it down so you don’t have to.

In this post, we’ll look at the right ways (and a few less right ways) to set environment variables — and more importantly, when to use static vs dynamic approaches.


🧊 Static Variables: Set It and Forget It

Got a variable like ANSIBLE_STDOUT_CALLBACK=yaml that’s the same every time? Congratulations, you’ve got yourself a static variable! These are the boring, predictable, low-maintenance types that make your CI life a dream.

✅ Best Practice: Job-Level env

If your variable is static and used across multiple steps, this is the cleanest, classiest, and least shouty way to do it:

jobs:
  my-job:
    runs-on: ubuntu-latest
    env:
      ANSIBLE_CONFIG: ansible.cfg
      ANSIBLE_STDOUT_CALLBACK: yaml
    steps:
      - name: Use env vars
        run: echo "ANSIBLE_CONFIG is $ANSIBLE_CONFIG"

Why it rocks:

  • 👀 Super readable
  • 📩 Available in every step of the job
  • đŸ§Œ Keeps your YAML clean — no extra echo commands, no nonsense

Unless you have a very specific reason not to, this should be your default.
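
(And when a static variable only matters to a single step, you can scope env to just that step. A minimal sketch, with a hypothetical step:)

steps:
  - name: Only this step sees the variable
    env:
      ANSIBLE_STDOUT_CALLBACK: yaml
    run: echo "Callback is $ANSIBLE_STDOUT_CALLBACK"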


đŸŽ© Dynamic Variables: Born to Be Wild

Now what if your variables aren’t so chill? Maybe you calculate something in one step and need to pass it to another — a file path, a version number, an API token from a secret backend ritual…

That’s when you reach for the slightly more
 creative option:

🔧 $GITHUB_ENV to the rescue

- name: Set dynamic environment vars
  run: |
    echo "BUILD_DATE=$(date +%F)" >> $GITHUB_ENV
    echo "RELEASE_TAG=v1.$(date +%s)" >> $GITHUB_ENV

- name: Use them later
  run: echo "Tag: $RELEASE_TAG built on $BUILD_DATE"

What it does:

  • Persists the variables across steps
  • Works well when values are calculated during the run
  • Makes you feel powerful

đŸȘ„ Fancy Bonus: Heredoc Style

If you like your YAML with a side of Bash wizardry:

- name: Set vars with heredoc
  run: |
    cat <<EOF >> $GITHUB_ENV
    FOO=bar
    BAZ=qux
    EOF

Because sometimes, you just want to feel fancy.


đŸ˜”â€đŸ’« What Not to Do (Unless You Really Mean It)

- name: Set env with export
  run: |
    export FOO=bar
    echo "FOO is $FOO"

This only works within that step. The minute your pipeline moves on, FOO is gone. Poof. Into the void. If that’s what you want, fine. If not, don’t say I didn’t warn you.


🧠 TL;DR – The Cheat Sheet

  ‱ Static variable used in all steps → env at the job level ✅
  ‱ Static variable used in one step → env at the step level
  ‱ Dynamic value needed across steps → $GITHUB_ENV ✅
  ‱ Dynamic value only needed in one step → export (but don’t overdo it)
  ‱ Need to show off with Bash skills → cat <<EOF >> $GITHUB_ENV 😎

đŸ§Ș My Use Case: Ansible FTW

In my setup, I wanted to use:

ANSIBLE_CONFIG=ansible.cfg
ANSIBLE_STDOUT_CALLBACK=yaml

These are rock-solid, boringly consistent values. So instead of writing this in every step:

- name: Set env
  run: |
    echo "ANSIBLE_CONFIG=ansible.cfg" >> $GITHUB_ENV

I now do this:

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      ANSIBLE_CONFIG: ansible.cfg
      ANSIBLE_STDOUT_CALLBACK: yaml
    steps:
      ...

Cleaner. Simpler. One less thing to trip over when I’m debugging at 2am.


💬 Final Thoughts

Environment variables in GitHub Actions aren’t hard — once you know the rules of the game. Use env for the boring stuff. Use $GITHUB_ENV when you need a little dynamism. And remember: if you’re writing export in step after step, something probably smells.

Got questions? Did I miss a clever trick? Want to tell me my heredoc formatting is ugly? Hit me up in the comments or toot at me on Mastodon.


✍ Posted by Amedee, who loves YAML almost as much as dancing polskas.
đŸ’„ Because good CI is like a good dance: smooth, elegant, and nobody falls flat on their face.
đŸŽ» Scheduled to go live on 20 August — just as Boombalfestival kicks off. Because why not celebrate great workflows and great dances at the same time?

August 18, 2025

On July 22nd, 2025, we released MySQL 9.4, the latest Innovation Release. As usual, we released bug fixes for 8.0 and 8.4 LTS, but this post focuses on the newest release. In this release, we can notice several contributions related to NDB and the Connectors. Connectors MySQL Server – Replication InnoDB Optimizer C API (client [
]

August 13, 2025

When using Ansible to automate tasks, the command module is your bread and butter for executing system commands. But did you know that there’s a safer, cleaner, and more predictable way to pass arguments? Meet argv—an alternative to writing commands as strings.

In this post, I’ll explore the pros and cons of using argv, and I’ll walk through several real-world examples tailored to web servers and mail servers.


Why Use argv Instead of a Command String?

✅ Pros

  • Avoids Shell Parsing Issues: Each argument is passed exactly as intended, with no surprises from quoting or spaces.
  • More Secure: No shell = no risk of shell injection.
  • Clearer Syntax: Every argument is explicitly defined, improving readability.
  • Predictable: Behavior is consistent across different platforms and setups.

❌ Cons

  • No Shell Features: You can’t use pipes (|), redirection (>), or environment variables like $HOME.
  • More Verbose: Every argument must be a separate list item. It’s explicit, but more to type.
  • Not for Shell Built-ins: Commands like cd, export, or echo with redirection won’t work.

Real-World Examples

Let’s apply this to actual use cases.

🔧 Restarting Nginx with argv

- name: Restart Nginx using argv
  hosts: amedee.be
  become: yes
  tasks:
    - name: Restart Nginx
      ansible.builtin.command:
        argv:
          - systemctl
          - restart
          - nginx

📬 Check Mail Queue on a Mail-in-a-Box Server

- name: Check Postfix mail queue using argv
  hosts: box.vangasse.eu
  become: yes
  tasks:
    - name: Get mail queue status
      ansible.builtin.command:
        argv:
          - mailq
      register: mail_queue

    - name: Show queue
      ansible.builtin.debug:
        msg: "{{ mail_queue.stdout_lines }}"

đŸ—ƒïž Back Up WordPress Database

- name: Backup WordPress database using argv
  hosts: amedee.be
  become: yes
  vars:
    db_user: wordpress_user
    db_password: wordpress_password
    db_name: wordpress_db
  tasks:
    - name: Dump database
      ansible.builtin.command:
        argv:
          - mysqldump
          - -u
          - "{{ db_user }}"
          - -p{{ db_password }}
          - "{{ db_name }}"
          - --result-file=/root/wordpress_backup.sql

⚠ Avoid exposing credentials directly—use Ansible Vault instead.


Using argv with Interpolation

Ansible lets you use Jinja2-style variables ({{ }}) inside argv items.

🔄 Restart a Dynamic Service

- name: Restart a service using argv and variable
  hosts: localhost
  become: yes
  vars:
    service_name: nginx
  tasks:
    - name: Restart
      ansible.builtin.command:
        argv:
          - systemctl
          - restart
          - "{{ service_name }}"

🕒 Timestamped Backups

- name: Timestamped DB backup
  hosts: localhost
  become: yes
  vars:
    db_user: wordpress_user
    db_password: wordpress_password
    db_name: wordpress_db
  tasks:
    - name: Dump with timestamp
      ansible.builtin.command:
        argv:
          - mysqldump
          - -u
          - "{{ db_user }}"
          - -p{{ db_password }}
          - "{{ db_name }}"
          - --result-file=/root/wordpress_backup_{{ ansible_date_time.iso8601 }}.sql

đŸ§© Dynamic Argument Lists

Avoid join(' '), which collapses the list into a single string.

❌ Wrong:

argv:
  - ls
  - "{{ args_list | join(' ') }}"  # BAD: becomes one long string

✅ Correct:

argv: ["ls"] + args_list

Or if the length is known:

argv:
  - ls
  - "{{ args_list[0] }}"
  - "{{ args_list[1] }}"

📣 Interpolation Inside Strings

- name: Greet with hostname
  hosts: localhost
  tasks:
    - name: Print message
      ansible.builtin.command:
        argv:
          - echo
          - "Hello, {{ ansible_facts['hostname'] }}!"


When to Use argv

✅ Commands with complex quoting or multiple arguments
✅ Tasks requiring safety and predictability
✅ Scripts or binaries that take arguments, but not full shell expressions

When to Avoid argv

❌ When you need pipes, redirection, or shell expansion
❌ When you’re calling shell built-ins


Final Thoughts

Using argv in Ansible may feel a bit verbose, but it offers precision and security that traditional string commands lack. When you need reliable, cross-platform automation that avoids the quirks of shell parsing, argv is the better choice.

Prefer safety? Choose argv.
Need shell magic? Use the shell module.
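
For contrast, here is what that shell-module escape hatch looks like. A hypothetical task, only for when you genuinely need a pipe:

- name: Count error lines (pipes need the shell module)
  ansible.builtin.shell: journalctl -u nginx --no-pager | grep -c ERROR || true
  register: error_count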

Have a favorite argv trick or horror story? Drop it in the comments below.

August 06, 2025

Not every day do I get an email from a very serious security researcher, clearly a man on a mission to save the internet — one vague, copy-pasted email at a time.

Here’s the message I received:

From: Peter Hooks <peterhooks007@gmail.com>
Subject: Security Vulnerability Disclosure

Hi Team,

I’ve identified security vulnerabilities in your app that may put users at risk. I’d like to report these responsibly and help ensure they are resolved quickly.

Please advise on your disclosure protocol, or share details if you have a Bug Bounty program in place.

Looking forward to your reply.

Best regards,
Peter Hooks

Right. Let’s unpack this.


🧯”Your App” — What App?

I’m not a company. I’m not a startup. I’m not even a garage-based stealth tech bro.
I run a personal WordPress blog. That’s it.

There is no “app.” There are no “users at risk” (unless you count me, and I̷̜̓’̷̠̋mÌŽÌ“ÌȘ ̎́Ìča̞̜͙lÌ”ÌżÌŁr̞̜͇e͖̔̈a̶͖̋d͓̔̇y̎̂̌ ̖̎͂b̶̠̋é̶̻y͇̎̈́oÌžÌ’ÌŁń̞̊d̟̎̆ ̶͉͒s̶̀ͅaÌ¶Í—ÌĄvÌŽÍŠÍ™i͖̔̊n͖̔̆gÌžÌ”ÌĄ).


đŸ•”ïžâ€â™‚ïž The Anatomy of a Beg Bounty Email

This little email ticks all the classic marks of what the security community affectionately calls a beg bounty — someone scanning random domains, finding trivial or non-issues, and fishing for a payout.

Want to see how common this is? Check out:


📼 My (Admittedly Snarky) Reply

I couldn’t resist. Here’s the reply I sent:

Hi Peter,

Thanks for your email and your keen interest in my “app” — spoiler alert: there isn’t one. Just a humble personal blog here.

Your message hits all the classic marks of a beg bounty reconnaissance email:

  • ✅ Generic “Hi Team” greeting — because who needs names?
  • ✅ Vague claims of “security vulnerabilities” with zero specifics
  • ✅ Polite inquiry about a bug bounty program (spoiler: none here, James)
  • ✅ No proof, no details, just good old-fashioned mystery
  • ✅ Friendly tone crafted to reel in easy targets
  • ✅ Email address proudly featuring “007” — very covert ops of you

Bravo. You almost had me convinced.

I’ll be featuring this charming little interaction in a blog post soon — starring you, of course. If you ever feel like upgrading from vague templates to actual evidence, I’m all ears. Until then, happy fishing!

Cheers,
Amedee


😱 No Reply

Sadly, Peter didn’t write back.

No scathing rebuttal.
No actual vulnerabilities.
No awkward attempt at pivoting.
Just… silence.

#sadface
#crying
#missionfailed


🛡 A Note for Fellow Nerds

If you’ve got a domain name, no matter how small, there’s a good chance you’ll get emails like this.

Here’s how to handle them:

  • Stay calm — most of these are low-effort probes.
  • Don’t pay — you owe nothing to random strangers on the internet.
  • Don’t panic — vague threats are just that: vague.
  • Do check your stuff occasionally for actual issues.
  • Bonus: write a blog post about it and enjoy the catharsis.

For more context on this phenomenon, don’t miss:


đŸ§” tl;dr

If your “security researcher”:

  • doesn’t say what they found,
  • doesn’t mention your actual domain or service,
  • asks for a bug bounty up front,
  • signs with a Gmail address ending in 007,

…it’s probably not the start of a beautiful friendship.


Got a similar email? Want help crafting a reply that’s equally professional and petty?
Feel free to drop a comment or reach out — I’ll even throw in a checklist.

Until then: stay patched, stay skeptical, and stay snarky. 😎

August 05, 2025

Rethinking DOM from first principles


Browsers are in a very weird place. While WebAssembly has succeeded, even on the server, the client still feels largely the same as it did 10 years ago.

Enthusiasts will tell you that accessing native web APIs via WASM is a solved problem, with some minimal JS glue.

But the question not asked is why you would want to access the DOM. It's just the only option. So I'd like to explain why it really is time to send the DOM and its assorted APIs off to a farm somewhere, with some ideas on how.

I won't pretend to know everything about browsers. Nobody knows everything anymore, and that's the problem.

Netscape or something

The 'Document' Model

Few know how bad the DOM really is. In Chrome, document.body now has 350+ keys, grouped roughly like this:

document.body properties

This doesn't include the CSS properties in document.body.style of which there are... 660.

The boundary between properties and methods is very vague. Many are just facades with an invisible setter behind them. Some getters may trigger a just-in-time re-layout. There's ancient legacy stuff, like all the onevent properties nobody uses anymore.

The DOM is not lean and continues to get fatter. Whether you notice this largely depends on whether you are making web pages or web applications.

Most devs now avoid working with the DOM directly, though occasionally some purist will praise pure DOM as being superior to the various JS component/templating frameworks. What little declarative facilities the DOM has, like innerHTML, do not resemble modern UI patterns at all. The DOM has too many ways to do the same thing, none of them nice.

connectedCallback() {
  const
    shadow = this.attachShadow({ mode: 'closed' }),
    template = document.getElementById('hello-world')
      .content.cloneNode(true),
    hwMsg = `Hello ${ this.name }`;

  Array.from(template.querySelectorAll('.hw-text'))
    .forEach(n => n.textContent = hwMsg);

  shadow.append(template);
}

Web Components deserve a mention, being the web-native equivalent of JS component libraries. But they came too late and are unpopular. The API seems clunky, with its Shadow DOM introducing new nesting and scoping layers. Proponents kinda read like apologetics.

The Achilles heel is the DOM's SGML/XML heritage, making everything stringly typed. React-likes do not have this problem: their syntax only looks like XML. Devs have learned not to keep state in the document, because it's inadequate for it.

W3C logo
WHATWG logo

For HTML itself, there isn't much to critique because nothing has changed in 10-15 years. Only ARIA (accessibility) is notable, and only because this was what Semantic HTML was supposed to do and didn't.

Semantic HTML never quite reached its goal. Despite dating from around 2011, there is e.g. no <thread> or <comment> tag, even though those were well-established idioms by then. Instead, an article inside an article is probably a comment. The guidelines are... weird.

There's this feeling that HTML always had paper-envy, and couldn't quite embrace or fully define its hypertext nature, and did not trust its users to follow clear rules.

Stewardship of HTML has since firmly passed to WHATWG, really the browser vendors, who have not been able to define anything more concrete as a vision, and have instead just added epicycles at the margins.

Along the way even CSS has grown expressions, because every templating language wants to become a programming language.

netscape composer

Editability of HTML remains a sad footnote. While technically supported via contentEditable, actually wrangling this feature into something usable for applications is a dark art. I'm sure the Google Docs and Notion people have horror stories.

Nobody really believes in the old gods of progressive enhancement and separating markup from style anymore, not if they make apps.

Most of the applications you see nowadays will kitbash HTML/CSS/SVG into a pretty enough shape. But this comes with immense overhead, and is looking more and more like the opposite of a decent UI toolkit.

The Slack input box

Off-screen clipboard hacks

Lists and tables must be virtualized by hand, taking over for layout, resizing, dragging, and so on. Making a chat window's scrollbar stick to the bottom is somebody's TODO, every single time. And the more you virtualize, the more you have to reinvent find-in-page, right-click menus, etc.

The web blurred the distinction between UI and fluid content, which was novel at the time. But it makes less and less sense, because the UI part is a decade obsolete, and the content has largely homogenized.

'css is awesome' mug, truncated layout

CSS is inside-out

CSS doesn't have a stellar reputation either, but few can put their finger on exactly why.

Where most people go wrong is to start with the wrong mental model, approaching it like a constraint solver. This is easy to show with e.g.:

<div>
  <div style="height: 50%">...</div>
  <div style="height: 50%">...</div>
</div>
<div>
  <div style="height: 100%">...</div>
  <div style="height: 100%">...</div>
</div>

The first might seem reasonable: divide the parent into two halves vertically. But what about the second?

Viewed as a set of constraints, it's contradictory, because the parent div is twice as tall as... itself. What will happen instead in both cases is the height is ignored. The parent height is unknown and CSS doesn't backtrack or iterate here. It just shrink-wraps the contents.

If you set e.g. height: 300px on the parent, then it works, but the latter case will still just spill out.

Outside-in and inside-out layout modes

Instead, your mental model of CSS should be applying two passes of constraints, first going outside-in, and then inside-out.

When you make an application frame, this is outside-in: the available space is divided, and the content inside does not affect sizing of panels.

When paragraphs stack on a page, this is inside-out: the text stretches out its containing parent. This is what HTML wants to do naturally.

By being structured this way, CSS layouts are computationally pretty simple. You can propagate the parent constraints down to the children, and then gather up the children's sizes in the other direction. This is attractive and allows webpages to scale well in terms of elements and text content.

CSS is always inside-out by default, reflecting its document-oriented nature. The outside-in is not obvious, because it's up to you to pass all the constraints down, starting with body { height: 100%; }. This is why they always say vertical alignment in CSS is hard.

Use flex grow and shrink for spill-free auto-layouts with completely reasonable gaps

The scenario above is better handled with a CSS3 flex box (display: flex), which provides explicit control over how space is divided.

Unfortunately flexing muddles the simple CSS model. To auto-flex, the layout algorithm must measure the "natural size" of every child. This means laying it out twice: first speculatively, as if floating in aether, and then again after growing or shrinking to fit:

Flex speculative layout

This sounds reasonable, but can come with hidden surprises, because it's recursive: doing speculative layout of a parent often requires full layout of unsized children, e.g. to know how text will wrap. If you nest it right, it could in theory cause an exponential blow-up, though I've never heard of it being an issue.

Instead you will only discover this when someone drops some large content in somewhere, and suddenly everything gets stretched out of whack. It's the opposite of the problem on the mug.

To avoid the recursive dependency, you need to isolate the children's contents from the outside, thus making speculative layout trivial. This can be done with contain: size, or by manually setting the flex-basis size.

CSS has gained a few constructs like contain or will-change, which work directly with the layout system, and drop the pretense of one big happy layout. It reveals some of the layer-oriented nature underneath, and is a substitute for e.g. using position: absolute wrappers to do the same.

What these do is strip off some of the semantics, and break the flow of DOM-wide constraints. These are overly broad by default and too document-oriented for the simpler cases.

This is really a metaphor for all DOM APIs.

CSS props

The Good Parts?

That said, flex box is pretty decent if you understand these caveats. Building layouts out of nested rows and columns with gaps is intuitive, and adapts well to varying sizes. There is a "CSS: The Good Parts" here, which you can make ergonomic with sufficient love. CSS grids work similarly; they're just very painfully... CSSy in their syntax.

But if you designed CSS layout from scratch, you wouldn't do it this way. You wouldn't have a subtractive API, with additional extra containment barrier hints. You would instead break the behavior down into its component facets, and use them Ă  la carte. Outside-in and inside-out would both be legible as different kinds of containers and placement models.

The inline-block and inline-flex display models illustrate this: it's a block or flex on the inside, but an inline element on the outside. These are two (mostly) orthogonal aspects of a box in a box model.

Text and font styles are in fact the odd ones out, in hypertext. Properties like font size inherit from parent to child, so that formatting tags like <b> can work. But most of those 660 CSS properties do not do that. Setting a border on an element does not apply the same border to all its children recursively, that would be silly.

It shows that CSS is at least two different things mashed together: a system for styling rich text based on inheritance... and a layout system for block and inline elements, nested recursively but without inheritance, only containment. They use the same syntax and APIs, but don't really cascade the same way. Combining this under one style-umbrella was a mistake.

Worth pointing out: early ideas of relative em scaling have largely become irrelevant. We now think of logical vs device pixels instead, which is a far more sane solution, and closer to what users actually expect.

Tiger SVG

SVG is natively integrated as well. Having SVGs in the DOM instead of just as <img> tags is useful to dynamically generate shapes and adjust icon styles.

But while SVG is powerful, it's neither a subset nor superset of CSS. Even when it overlaps, there are subtle differences, like the affine transform. It has its own warts, like serializing all coordinates to strings.

CSS has also gained the ability to round corners, draw gradients, and apply arbitrary clipping masks: it clearly has SVG-envy, but falls very short. SVG can e.g. do polygonal hit-testing for mouse events, which CSS cannot, and SVG has its own set of graphical layer effects.

Whether you use HTML/CSS or SVG to render any particular element is based on specific annoying trade-offs, even if they're all scalable vectors on the back-end.

In either case, there are also some roadblocks. I'll just mention three:

  • text-overflow: ellipsis can only be used to truncate a single line of unwrapped text, not entire paragraphs. Detecting truncated text is even harder, as is just measuring text: the APIs are inadequate. Everyone just counts letters instead.
  • position: sticky lets elements stay in place while scrolling with zero jank. While tailor-made for this purpose, it's subtly broken. Having elements remain unconditionally sticky requires an absurd nesting hack, when it should be trivial.
  • The z-index property determines layering by absolute index. This inevitably leads to a z-index-war.css where everyone is putting in a new number +1 or -1 to make things layer correctly. There is no concept of relative Z positioning.

For each of these features, we got stuck with v1 of whatever they could get working, instead of providing the right primitives.

Getting this right isn't easy, it's the hard part of API design. You can only iterate on it, by building real stuff with it before finalizing it, and looking for the holes.

Oil on Canvas

So, DOM is bad, CSS is single-digit X% good, and SVG is ugly but necessary... and nobody is in a position to fix it?

Well no. The diagnosis is that the middle layers don't suit anyone particularly well anymore. Just an HTML6 that finally removes things could be a good start.

But most of what needs to happen is to liberate the functionality that is there already. This can be done in good or bad ways. Ideally you design your system so the "escape hatch" for custom use is the same API you built the user-space stuff with. That's what dogfooding is, and also how you get good kernels.

A recent proposal here is HTML in Canvas, to draw HTML content into a <canvas>, with full control over the visual output. It's not very good.

While it might seem useful, the only reason the API has the shape that it does is because it's shoehorned into the DOM: elements must be descendants of <canvas> to fully participate in layout and styling, and to make accessibility work. There are also "technical concerns" with using it off-screen.

One example is this spinny cube:

html-in-canvas spinny cube thing

To make it interactive, you attach hit-testing rectangles and respond to paint events. This is a new kind of hit-testing API. But it only works in 2D... so it seems 3D-use is only cosmetic? I have many questions.

Again, if you designed it from scratch, you wouldn't do it this way! In particular, it's absurd that you'd have to take over all interaction responsibilities for an element and its descendants just to be able to customize how it looks i.e. renders. Especially in a browser that has projective CSS 3D transforms.

The use cases not covered by that, e.g. curved re-projection, will also need more complicated hit-testing than rectangles. Did they think this through? What happens when you put a dropdown in there?

To me it seems like they couldn't really figure out how to unify CSS and SVG filters, or how to add shaders to CSS. Passing it through canvas is the only viable option left. "At least it's programmable." Is it really? Screenshotting DOM content is one good use case, but that's not what this is sold as at all.

The whole reason to do "complex UIs on canvas" is to do all the things the DOM doesn't do, like virtualizing content, just-in-time layout and styling, visual effects, custom gestures and hit-testing, and so on. It's all nuts and bolts stuff. Having to pre-stage all the DOM content you want to draw sounds... very counterproductive.

From a reactivity point-of-view it's also a bad idea to route this stuff back through the same document tree, because it sets up potential cycles with observers. A canvas that's rendering DOM content isn't really a document element anymore, it's doing something else entirely.

sheet-happens

Canvas-based spreadsheet that skips the DOM entirely

The actual Achilles heel of canvas is that you don't have any real access to system fonts, text layout APIs, or UI utilities. It's quite absurd how basic it is. You have to implement everything from scratch, including Unicode word splitting, just to get wrapped text.

The proposal is "just use the DOM as a black box for content." But we already know that you can't do anything except more CSS/SVG kitbashing this way. text-overflow and friends will still be broken, and you will still need to implement UIs circa 1990 from scratch to fix it.

It's all-or-nothing when you actually want something right in the middle. That's why the lower level needs to be opened up.

Where To Go From Here

The goals of "HTML in Canvas" do strike a chord, with chunks of HTML used as free-floating fragments, a notion that has always existed under the hood. It's a composite value type you can handle. But it should not drag 20 years of useless baggage along, while not enabling anything truly novel.

The kitbashing of the web has also resulted in enormous stagnation, and a loss of general UI finesse. When UI behaviors have to be mined out of divs, it limits the kinds of solutions you can even consider. Fixing this within DOM/HTML seems unwise, because there's just too much mess inside. Instead, new surfaces should be opened up outside of it.

use-gpu-layout

WebGPU-based box model

My schtick here has become to point awkwardly at Use.GPU's HTML-like renderer, which does a full X/Y flex model in a fraction of the complexity or code. I don't mean my stuff is super great, no, it's pretty bare-bones and kinda niche... and yet definitely nicer. Vertical centering is easy. Positioning makes sense.

There is no semantic HTML or CSS cascade, just first-class layout. You don't need 61 different accessors for border* either. You can just attach shaders to divs. Like, that's what people wanted right? Here's a blueprint, it's mostly just SDFs.

Font and markup concerns only appear at the leaves of the tree, where the text sits. It's striking how you can do like 90% of what the DOM does here, without the tangle of HTML/CSS/SVG, if you just reinvent that wheel. Done by 1 guy. And yes, I know about the second 90% too.

The classic data model here is of a view tree and a render tree. What should the view tree actually look like? And what can it be lowered into? What is it being lowered into right now, by a giant pile of legacy crud?

servo ladybird

Alt-browser projects like Servo or Ladybird are in a position to make good proposals here. They have the freshest implementations, and are targeting the most essential features first. The big browser vendors could also do it, but well, taste matters. Good big systems grow from good small ones, not bad big ones. Maybe if Mozilla hadn't imploded... but alas.

Platform-native UI toolkits are still playing catch up with declarative and reactive UI, so that's that. Native Electron-alternatives like Tauri could be helpful, but they don't treat origin isolation as a design constraint, which makes security teams antsy.

There's a feasible carrot to dangle for them though, namely in the form of better process isolation. Because of CPU exploits like Spectre, multi-threading via SharedArrayBuffer and Web Workers is kinda dead on arrival anyway, and that affects all WASM. The details are boring but right now it's an impossible sell when websites have to have things like OAuth and Zendesk integrated into them.

Reinventing the DOM to ditch all legacy baggage could coincide with redesigning it for a more multi-threaded, multi-origin, and async web. The browser engines are already multi-process... what did they learn? A lot has happened since Netscape, with advances in structured concurrency, ownership semantics, FP effects... all could come in handy here.

* * *

Step 1 should just be a data model that doesn't have 350+ properties per node, though.

Don't be under the mistaken impression that this isn't entirely fixable.

netscape wheel

August 04, 2025

So much for the new hobby. Venus flytraps are not the easiest ones to keep happy.

Some of the Sarracenia as they currently stand outside. They eat a lot of insects, especially wasps.

A new hobby since 2021: growing carnivorous plants.

Here are five pitchers my Nepenthes are giving me.

August 03, 2025

lookat 2.1.0rc2

Lookat 2.1.0rc2 is the second release candidate of Lookat/Bekijk 2.1.0, a user-friendly Unix file browser/viewer that supports colored man pages.

The focus of the 2.1.0 release is to add ANSI Color support.


 

News

3 Aug 2025 Lookat 2.1.0rc2 Released

Lookat 2.1.0rc2 is the second release candidate of Lookat 2.1.0

ChangeLog

Lookat / Bekijk 2.1.0rc2
  • Corrected italic color
  • Don’t reset the search offset when cursor mode is enabled
  • Renamed strsize to charsize (ansi_strsize -> ansi_charsize, utf8_strsize -> utf8_charsize) to be less confusing
  • Support for multiple ansi streams in ansi_utf8_strlen()
  • Update default color theme to green for this release
  • Update manpages & documentation
  • Reorganized contrib directory
    • Moved CI/CD-related files from contrib/* to contrib/cicd
    • Moved debian dir to contrib/dist
    • Moved support script to contrib/scripts

Lookat 2.1.0rc2 is available at:

Have fun!

August 01, 2025

Just activated Orange via eSIM on my Fairphone 6, and before I knew it, “App Center” and “Phone” (both from the Orange group) but also … TikTok were installed. I was not happy about that. App Center can’t even be uninstalled, only deactivated. Fuckers!

Source

July 30, 2025

Fantastic cover of Jamie Woon’s “Night Air” by Lady Lynn. That double bass and that voice: magical! Watch this video on YouTube. …

Source

Ever wondered what it’s like to unleash 10 000 tiny little data beasts on your hard drive? No? Well, buckle up anyway — because today, we’re diving into the curious world of random file generation, and then nerding out by calculating their size distribution. Spoiler alert: it’s less fun than it sounds. 😏

Step 1: Let’s Make Some Files… Lots of Them

Our goal? Generate 10 000 files filled with random data. But not just any random sizes — we want a mean file size of roughly 68 KB and a median of about 2 KB. Sounds like a math puzzle? That’s because it kind of is.

If you just pick file sizes uniformly at random, you’ll end up with a median close to the mean — which is boring. We want a skewed distribution, where most files are small, but some are big enough to bring that average up.

The Magic Trick: Log-normal Distribution đŸŽ©âœš

Enter the log-normal distribution, a nifty way to generate lots of small numbers and a few big ones — just like real life. Using Python’s NumPy library, we generate these sizes and feed them to good old /dev/urandom to fill our files with pure randomness.
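
Quick sanity check on the parameters (taking 2 KB = 2048 bytes as the target median and 68 KB ≈ 69632 bytes as the target mean):

\[
\text{median} = e^{\mu} \;\Rightarrow\; \mu = \ln 2048 \approx 7.62,
\qquad
\text{mean} = e^{\mu + \sigma^{2}/2} \;\Rightarrow\; \sigma = \sqrt{2\,(\ln 69632 - \mu)} \approx 2.66
\]

Those are the values plugged into the script below.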

Here’s the Bash script that does the heavy lifting:

#!/bin/bash

# Directory to store the random files
output_dir="random_files"
mkdir -p "$output_dir"

# Total number of files to create
file_count=10000

# Log-normal distribution parameters
mean_log=7.62  # ln(2048), for a median of ~2 KB
stddev_log=2.66  # together with mean_log, gives a mean of ~68 KB

# Function to generate random numbers based on log-normal distribution
generate_random_size() {
    python3 -c "import numpy as np; print(int(np.random.lognormal($mean_log, $stddev_log)))"
}

# Create files with random data
for i in $(seq 1 $file_count); do
    file_size=$(generate_random_size)
    file_path="$output_dir/file_$i.bin"
    head -c "$file_size" /dev/urandom > "$file_path"
    echo "Generated file $i with size $file_size bytes."
done

echo "Done. Files saved in $output_dir."

Easy enough, right? This creates a directory random_files and fills it with 10 000 files whose sizes are mostly small, with the occasional wild outlier. Don’t blame me if your disk space takes a little hit! đŸ’„

Step 2: Crunching Numbers — The File Size Distribution 📊

Okay, you’ve got the files. Now, what can we learn from their sizes? Let’s find out the:

  • Mean size: The average size across all files.
  • Median size: The middle value when sizes are sorted — because averages can lie.
  • Distribution breakdown: How many tiny files vs. giant files.
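
A tiny example of how averages lie: five files of 1, 1, 2, 2 and 334 KB have a mean of (1+1+2+2+334)/5 = 68 KB, but a median of just 2 KB. That is exactly the skew we engineered above.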

Here’s a handy Bash script that reads file sizes and spits out these stats with a bit of flair:

#!/bin/bash

# Input directory (default to "random_files" if not provided)
directory="${1:-random_files}"

# Check if directory exists
if [ ! -d "$directory" ]; then
    echo "Directory $directory does not exist."
    exit 1
fi

# Array to store file sizes
file_sizes=($(find "$directory" -type f -exec stat -c%s {} \;))

# Check if there are files in the directory
if [ ${#file_sizes[@]} -eq 0 ]; then
    echo "No files found in the directory $directory."
    exit 1
fi

# Calculate mean
total_size=0
for size in "${file_sizes[@]}"; do
    total_size=$((total_size + size))
done
mean=$((total_size / ${#file_sizes[@]}))

# Calculate median
sorted_sizes=($(printf '%s\n' "${file_sizes[@]}" | sort -n))
mid=$(( ${#sorted_sizes[@]} / 2 ))
if (( ${#sorted_sizes[@]} % 2 == 0 )); then
    median=$(( (sorted_sizes[mid-1] + sorted_sizes[mid]) / 2 ))
else
    median=${sorted_sizes[mid]}
fi

# Display file size distribution
echo "File size distribution in directory $directory:"
echo "---------------------------------------------"
echo "Number of files: ${#file_sizes[@]}"
echo "Mean size: $mean bytes"
echo "Median size: $median bytes"

# Display detailed size distribution (optional)
echo
echo "Detailed distribution (size ranges):"
awk '{
    if ($1 < 1024) bins["< 1 KB"]++;
    else if ($1 < 10240) bins["1 KB - 10 KB"]++;
    else if ($1 < 102400) bins["10 KB - 100 KB"]++;
    else bins[">= 100 KB"]++;
} END {
    for (range in bins) printf "%-15s: %d\n", range, bins[range];
}' <(printf '%s\n' "${file_sizes[@]}")

Run it, and voilà — instant nerd satisfaction.

Example Output:

File size distribution in directory random_files:
---------------------------------------------
Number of files: 10000
Mean size: 68987 bytes
Median size: 2048 bytes

Detailed distribution (size ranges):
< 1 KB         : 1234
1 KB - 10 KB   : 5678
10 KB - 100 KB : 2890
>= 100 KB      : 198

Why Should You Care? đŸ€·â€â™€ïž

Besides the obvious geek cred, generating files like this can help:

  • Test backup systems — can they handle weird file size distributions?
  • Stress-test storage or network performance with real-world-like data.
  • Understand your data patterns if you’re building apps that deal with files.

Wrapping Up: Big Files, Small Files, and the Chaos In Between

So there you have it. Ten thousand random files later, and we’ve peeked behind the curtain to understand their size story. It’s a bit like hosting a party and then figuring out who ate how many snacks. 🍿

Try this yourself! Tweak the distribution parameters, generate files, crunch the numbers — and impress your friends with your mad scripting skills. Or at least have a fun weekend project that makes you sound way smarter than you actually are.

Happy hacking! đŸ”„

July 29, 2025

Have you already tried to upgrade the MySQL version of your MySQL HeatWave instance in OCI that is deployed with Terraform? If you have, you will have realized (I hope you didn’t turn off backups!) that the instance is destroyed and recreated anew! This is our current MySQL HeatWave DB System deployed using Terraform: And this is […]
]