Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

September 18, 2025

On the Mystification of the Big Idea

and the negation of experience

The Hypermondes festival in Mérignac

I will be signing books this Saturday 20 and Sunday 21 September in Mérignac, as part of the Hypermondes festival. I am also taking part in a round table on Sunday. And to be honest, I have a serious case of stage fright, because I will be surrounded by names that fill my bookshelves and whose books I have read and reread: Pierre Bordage, J.C. Dunyach, Pierre Raufast, Catherine Dufour, Laurent Genefort… Not to mention Schuiten and Peeters, who marked my teenage years, and above all my idol, the greatest comics writer of this century, Alain Ayroles (because for the previous century, it is Goscinny).

In short, I feel very small among these giants, so don't hesitate to come and say hello so that I feel a little less alone at my table!

The mythology of the idea

In the film "Glass Onion" (Rian Johnson, 2022), a tech billionaire, a parody of ZuckerMusk, invites his friends to his private island for a sort of giant game of Clue. Which, of course, goes off the rails when a real crime is committed.

What I really liked about this film is its insistence on a point that is too often forgotten: being rich and/or famous does not make you intelligent. And managing to make the public believe you are super-intelligent, to the point of believing it yourself, does not make you actually intelligent either.

In short, it is a fine reality check for the world of billionaires, influencers, starlets and everything that orbits around them.

One particular point bothered me, though: a whole part of the plot hinges on who first had the idea for the startup that made the billionaire's fortune, an idea that is literally scribbled on a paper napkin.

It is very funny in the film but, as I have said before: an idea alone is worth nothing!

The idea is only the initial spark of a project; the final result will be shaped by the thousands of decisions and adaptations made along the way.

The role of the architect

If you have never had a house built, you may imagine it goes like this: you describe your dream house to an architect, the architect proposes a plan, you approve it, the engineers and workers take over, and the house gets built.

Except that, in reality, you are incapable of describing your dream house. Your intuitions all contradict each other. This is what I call the "single-storey house on two floors" syndrome. And even if you have thought everything through, the architect will point out a whole series of practical problems with your ideas. Things you had not thought about. All that glass is very pretty, but how are you going to keep it clean?

You will have to make compromises, take decisions. And once the plan is approved, the decisions continue on the building site, because of the unexpected, or the thousands of small problems that never showed up on the plan. Do you really want a sink right there, given that the door opens onto it?

In the end, your dream house will be very different from what you imagined. For years you will keep finding flaws in it. But those flaws are compromises that you explicitly chose.

The idea for a novel

As a writer, I regularly get asked: "Where do all your ideas come from?"

As if having the idea were the problem. I have hundreds of ideas in my drawers. The work is not having the idea; it is drawing up the plan and then turning that plan into a building.

Several times I have received proposals of the type: "I have a great idea for a novel, I share it with you, you write it and we split it 50/50."

Can you imagine, for one second, walking into an architect's office with a scribble and saying: "I have a great idea for a house, I show it to you, you build it, you find a contractor and we split the proceeds"?

Unlike Printeurs, which I wrote without any prior outline, I wrote Bikepunk with a real structure. I started from an initial idea. I brainstormed with Thierry Crouzet (our exchanges gave birth to the famous flash of the story). Then I dug into the characters. I wrote a short story set in that universe (creating the character of Dale), then I worked on the structure for a month with a corkboard on which I pinned index cards. Only then did I start writing. Many times I ran into inconsistencies and had to take decisions.

The final result looks nothing like what I had imagined. Some key scenes from my synopsis turned out, on rereading, to be mere transitions. Some last-minute improvisations, on the contrary, seem to have left a mark on a whole segment of readers.

Code is nothing but a series of decisions

An idea is only a spark that may spread and mix with others. But to light a fire, the original source of the spark matters far less than the fuel.

The invention that made this most obvious is surely the computer. Because a computer is, in essence, a machine that does what it is told.

Exactly what it is told. No more, no less.

Humans were suddenly confronted with the fact that it is extremely hard to know what you want. That it is almost impossible to express it without ambiguity. That it requires a dedicated language.

A programming language.

And mastering a programming language demands such an analytical and rational mind that a whole profession emerged to use it: programmer, coder, developer. The word hardly matters.

But, just like an architect, a programmer must constantly take the decisions she believes are best for the project. For its future. Or she identifies the paradoxes and discusses them with the client. "You asked me for a simple interface with a single button while also specifying twelve features that must each be directly accessible through a dedicated button. What do we do?" (true story).

On the stupidity of believing in a productive AI

What I am saying may seem obvious, but when I hear how many people talk about "vibe programming", I tell myself that far too many people have been raised on the "magic idea" paradigm, as in Glass Onion.

AIs are perceived as magic machines that do what you want.

Except that, even if they were perfect, you do not know what you want.

AIs cannot correctly take those thousands of decisions. Statistical algorithms can only produce random results. You cannot just state your idea and watch the result appear (which is the fantasy of the cretin-managers, that breed of idiots trained in business schools who are convinced that the people who actually do the work are a cost they should ideally get rid of).

The ultimate fantasy is an "intuitive" machine that requires no learning. But learning is not only technical. Human experience is global. An architect will think about the problems of your dream house because he has already followed twenty building sites and seen the problems first-hand. Each new book by a writer reflects their experience with the previous ones. Some decisions are the same; others, on the contrary, are different, as experiments.

Not wanting to learn your tool is the very definition of the crassest stupidity.

Thinking that an AI could replace a developer is demonstrating your total incompetence regarding that developer's work. Thinking that an AI can write a book can only come from people who do not read themselves, who see nothing but printed paper.

It is not that it is technically impossible. In fact, plenty of people are doing it, because selling printed paper can be profitable with the right marketing, no matter what is printed on it.

It is just that the result can never be satisfying. All the compromises, all the decisions will be the product of a statistical roll of the dice over which you have no control. The paradoxes will not be resolved. In short, it is, and always will be, crap.

A business is much more than an idea!

Facebook was not the first attempt at a social network for students. Amazon was not the first online bookshop. WhatsApp was an app for telling your friends whether you were available for a phone call. Instagram was originally for sharing your location. Microsoft had never developed an operating system when they sold the DOS licence to IBM.

In short, the initial idea is worth nothing. What made these companies successful are the billions of decisions taken at every moment, the constant readjustments.

Taking those decisions is what builds success, be it commercial, artistic or personal. Believing that a computer could take those decisions for you is not only naive, it is also proof of total incompetence in the field concerned.

In Glass Onion, this point particularly bothered me because it is pushed to the point of absurdity. As if a napkin with three pencil strokes could be worth billions.

That little something extra

And if I am delighted to be among so many authors I admire in Mérignac, it is not to exchange ideas, but to soak up their experience, the personality that lets them build the works I admire.

I must have reread the whole of "De cape et de crocs", the masterpiece by Masbou and Ayroles, dozens and dozens of times.

With every rereading, I savour every panel. I can feel the authors having fun, letting themselves be carried away by their characters into bifurcations that seem unplanned, improbable. What AI would think of having the clucking of a henhouse complete an alexandrine? What algorithm would strut from caesura to hemistich?

Humans and their experience will always have that something extra, something indefinable whose name escapes me.

Ah yes…

Something that, without a crease, without a stain, humans carry with them in spite of themselves…

And that is…

— That is?

Their panache!

I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writing in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

September 17, 2025

Mood: Slightly annoyed at CI pipelines 🧨
CI runs shouldn’t feel like molasses. Here’s how I got Ansible to stop downloading the internet. You’re welcome.


Let’s get one thing straight: nobody likes waiting on CI.
Not you. Not me. Not even the coffee you brewed while waiting for Galaxy roles to install — again.

So I said “nope” and made it snappy. Enter: GitHub Actions Cache + Ansible + a generous helping of grit and retries.

🧙‍♂️ Why cache your Ansible Galaxy installs?

Because time is money, and your CI shouldn’t feel like it’s stuck in dial-up hell.
If you’ve ever screamed internally watching community.general get re-downloaded for the 73rd time this month — same, buddy, same.

The fix? Cache that madness. Save your roles and collections once, and reuse like a boss.

💾 The basics: caching 101

Here’s the money snippet:

- name: Cache Ansible dependencies
  uses: actions/cache@v4
  with:
    path: .ansible/
    key: ansible-deps-${{ hashFiles('requirements.yml') }}
    restore-keys: |
      ansible-deps-

🧠 Translation:

  • Store everything Ansible installs in .ansible/
  • Cache key changes when requirements.yml changes — nice and deterministic
  • If the exact match doesn’t exist, fall back to the latest vaguely-similar key

Result? Fast pipelines. Happy devs. Fewer rage-tweets.

🔁 Retry like you mean it

Let’s face it: ansible-galaxy has… moods.

Sometimes Galaxy API is down. Sometimes it’s just bored. So instead of throwing a tantrum, I taught it patience:

for i in {1..5}; do
  if ansible-galaxy install -vv -r requirements.yml; then
    break
  elif [ "$i" -eq 5 ]; then
    echo "Galaxy gave up after 5 attempts. So do I." >&2
    exit 1
  else
    echo "Galaxy is being dramatic. Retrying in $((i * 10)) seconds…" >&2
    sleep $((i * 10))
  fi
done

That's up to five attempts. With increasing delays. And a hard failure if Galaxy never shows up.
💬 “You good now, Galaxy? You sure? Because I’ve got YAML to lint.”

⚠ The catch (a.k.a. cache wars)

Here’s where things get spicy:

actions/cache only saves when a job finishes successfully.

So if two jobs try to save the exact same cache at the same time?
💥 Boom. Collision. One wins. The other walks away salty:

Unable to reserve cache with key ansible-deps-...,
another job may be creating this cache.

Rude.

🧊 Fix: preload the cache in a separate job

The solution is elegant:
Warm-up job. One that only does Galaxy installs and saves the cache. All your other jobs just consume it. Zero drama. Maximum speed. 💃
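Here is a minimal sketch of that layout (job names are illustrative, the lint job stands in for whatever your pipeline actually runs, and it assumes the repository's ansible.cfg installs Galaxy content into .ansible/). The consumer job uses the restore-only flavour of the cache action, so it never competes to save:

jobs:
  warm-cache:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Cache Ansible dependencies
        id: cache
        uses: actions/cache@v4
        with:
          path: .ansible/
          key: ansible-deps-${{ hashFiles('requirements.yml') }}
      - name: Install Galaxy dependencies on a cache miss
        if: steps.cache.outputs.cache-hit != 'true'
        run: ansible-galaxy install -vv -r requirements.yml

  lint:
    needs: warm-cache
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Restore Ansible dependencies
        uses: actions/cache/restore@v4
        with:
          path: .ansible/
          key: ansible-deps-${{ hashFiles('requirements.yml') }}
          restore-keys: |
            ansible-deps-
      - run: ansible-lint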

🪄 Tempted to symlink instead of copy?

Yeah, I thought about it too.
“But what if we symlink .ansible/ and skip the copy?”

Nah. Not worth the brainpower. Just cache the thing directly.
✅ It works. 🧼 It’s clean. 😌 You sleep better.

🧠 Pro tips

  • Use the hash of requirements.yml as your cache key. Trust me.
  • Add a fallback prefix like ansible-deps- so you’re never left cold.
  • Don’t overthink it. Let the cache work for you, not the other way around.

✨ TL;DR

  • ✅ GitHub Actions cache = fast pipelines
  • ✅ Smart keys based on requirements.yml = consistency
  • ✅ Retry loops = less flakiness
  • ✅ Preload job = no more cache collisions
  • ❌ Re-downloading Galaxy junk every time = madness

🔥 Go forth and cache like a pro.

Got better tricks? Hit me up on Mastodon and show me your CI magic.
And remember: Friends don’t let friends wait on Galaxy.

💚 Peace, love, and fewer ansible-galaxy downloads.

September 14, 2025

lookat 2.1.0

Lookat 2.1.0 is the latest stable release of Lookat/Bekijk, a user-friendly Unix file browser/viewer that supports colored man pages.

The focus of the 2.1.0 release is to add ANSI Color support.


 

News

14 Sep 2025 Lookat 2.1.0 Released

Lookat / Bekijk 2.1.0rc2 has been released as Lookat / Bekijk 2.1.0

3 Aug 2025 Lookat 2.1.0rc2 Released

Lookat 2.1.0rc2 is the second release candidate of Lookat 2.1.0

ChangeLog

Lookat / Bekijk 2.1.0rc2
  • Corrected italic color
  • Don’t reset the search offset when cursor mode is enabled
  • Renamed strsize to charsize (ansi_strsize -> ansi_charsize, utf8_strsize -> utf8_charsize) to be less confusing
  • Support for multiple ansi streams in ansi_utf8_strlen()
  • Update default color theme to green for this release
  • Update manpages & documentation
  • Reorganized contrib directory
    • Moved ci/cd related file from contrib/* to contrib/cicd
    • Moved debian dir to contrib/dist
    • Moved support script to contrib/scripts

Lookat 2.1.0 is available at:

Have fun!

September 11, 2025

I once read a blurb about the benefits of bureaucracy, and how it is intended to resist political influences, autocratic leadership, priority-of-the-day decision-making, siloed views, and more things that we generally see as "Bad Things™️". I'm sad that I can't recall where it was, but its message was similar to what The Benefits Of Bureaucracy: How I Learned To Stop Worrying And Love Red Tape by Rita McGrath presents. When I read it, I was strangely supportive of the message, because I am very much confronted with, and perhaps also often the cause of, bureaucracy and governance-related deliverables in the company that I work for.

Bureaucracy and (hyper)governance

Bureaucracy, or governance in general, often leaves a bad taste in the mouth of whoever dares to speak about it, though. And I fully agree: hypergovernance or hyperbureaucracy puts too much of a burden on the organization. The benefits are no longer visible, and people's creativity and innovation are stifled.

Hypergovernance is indeed a bad thing, and it often comes up in the news: companies loathing the so-called overregulation by the European Union, for instance, banding together in action groups to ask for deregulation. A recent topic here was Europe's attempt to move towards a more sustainable environment, given how little attention the various industries and governments pay to sustainability. The decision to regulate was driven by the observation that merely guiding and asking doesn't work: sustainability is a long-term goal, yet most industries and governments focus on short-term benefits.

The need to simplify regulation, and the backlash against the bureaucracy required to meet Europe's reporting expectations, led the European Commission to propose a simplification package it calls the Omnibus package.

I think that is the right way forward, not only for this particular case (I don't know enough about ESG to be a useful resource on that), but also within regulated industries and companies where bureaucracy is seen as dampening progress and efficiency. Simplification and optimization are key here, not just tearing things down. In the Capability Maturity Model, a process reaches the highest maturity level when it includes deliberate process optimization and improvement. So why not deliberately optimize and improve? Evaluate and steer?

Benefits of bureaucracy

It would be a shame if bureaucracy itself were considered a negative trait of any organization. Many of the benefits of bureaucracy I fully endorse myself.

Standardization, where procedures and policies are created to ensure consistency in operations and decision-making. Without standardization, you gain inefficiencies, not benefits. If a process is considered too daunting, standardization might be key to improve on it.

Accountability, where it is made clear who does what. Holding people or teams accountable is not a judgement, but a balance of expectations and responsibilities. If handled positively, accountability is also an expression of expertise, an endorsement of who you are and what you can do.

Risk management, which is coincidentally the most active one in my domain (the Digital Operational Resilience Act has a very strong focus on risk management), has a primary focus on reducing the likelihood of misconduct and errors. Regulatory requirements and internal controls are not the goal, but a method.

Efficiency, by streamlining processes through established protocols and procedures. Sure, new approaches and things come along, but after the two-hundredth request to do or set up something only to realize it still takes 50 mandays... well, perhaps you should focus on streamlining the process, introduce some bureaucracy to help yourself out.

Transparency, promoting clear communication and documentation, as well as insights into why something is done. This improves trust among the teams and people.

In a world where despotic leadership exists, you will find that a well-functioning bureaucracy can be an inhibitor of overly aggressive change. That can frustrate the would-be autocrat (if they were a true autocrat, there would be no bureaucracy), but with the right support, it can show and explain why this resistance exists. If the change is for the good, well, bureaucracy even has procedures for change.

Bureaucracy also prevents islands and isolated decision-making. People demanding all the budget for themselves because they believe their ideas are the only ones worth working on (every company has these) will find that the bureaucracy is there to balance budgets and allocate resources to the projects that benefit the company as a whole, and not just the 4 people you just onboarded into your team and gave MacBooks...

Bureaucracy isn't bad, and some people prefer having strict rules and procedures. Resisting change is human behavior, but promoting anarchy is not the way forward either. Instead, nurture a culture of continuous improvement: be able to point out when things overreach, and learn about the reasoning and motivations that others bring up. Those in favor of bureaucracy will see this as an increase in maturity, and those affected by over-regulation will see it as an improvement.

We can all strive to remain in a bureaucracy and be happy with it.

Feedback? Comments? Don't hesitate to get in touch on Mastodon.

September 10, 2025

There comes a time in every developer’s life when you just know a certain commit existed. You remember its hash: deadbeef1234. You remember what it did. You know it was important. And yet, when you go looking for it…

💥 fatal: unable to read tree <deadbeef1234>

Great. Git has ghosted you.

That was me today. All I had was a lonely commit hash. The branch that once pointed to it? Deleted. The local clone that once had it? Gone in a heroic but ill-fated attempt to save disk space. And GitHub? Pretending like it never happened. Typical.

🪦 Act I: The Naïve Clone

“Let’s just clone the repo and check out the commit,” I thought. Spoiler alert: that’s not how Git works.

git clone --no-checkout https://github.com/user/repo.git
cd repo
git fetch --all
git checkout deadbeef1234

🧨 fatal: unable to read tree 'deadbeef1234'

Thanks Git. Very cool. Apparently, if no ref points to a commit, GitHub doesn’t hand it out with the rest of the toys. It’s like showing up to a party and being told your friend never existed.

🧪 Act II: The Desperate fsck

Surely it’s still in there somewhere? Let’s dig through the guts.

git fsck --full --unreachable

Nope. Nothing but the digital equivalent of lint and old bubblegum wrappers.

🕵 Act III: The Final Trick

Then I stumbled across a lesser-known Git dark art:

git fetch origin deadbeef1234

And lo and behold, GitHub replied with a shrug and handed it over like, “Oh, that commit? Why didn’t you just say so?”

Suddenly the commit was in my local repo, fresh as ever, ready to be inspected, praised, and perhaps even resurrected into a new branch:

git checkout -b zombie-branch deadbeef1234

Mission accomplished. The dead walk again.
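(A small aside from me, not part of the original trick: the two commands above could have been rolled into a single refspec, relying on the same willingness of GitHub to serve an arbitrary commit on request.)

git fetch origin deadbeef1234:refs/heads/zombie-branch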


☠ Moral of the Story

If you’re ever trying to recover a commit from a deleted branch on GitHub:

  1. Cloning alone won’t save you.
  2. git fetch origin <commit> is your secret weapon.
  3. If GitHub has completely deleted the commit from its history, you’re out of luck unless:
    • You have an old local clone
    • Someone forked the repo and kept it
    • CI logs or PR diffs include your precious bits

Otherwise, it’s digital dust.


🧛 Bonus Tip

Once you’ve resurrected that commit, create a branch immediately. Unreferenced commits are Git’s version of vampires: they disappear without a trace when left in the shadows.

git checkout -b safe-now deadbeef1234

And there you have it. One undead commit, safely reanimated.

September 09, 2025

Last month, my son Axl turned eighteen. He also graduated high school and will leave for university in just a few weeks. It is a big milestone and we decided to mark it with a three-day backpacking trip in the French Alps near Lake Annecy.

We planned a loop that would start and end at Col de la Forclaz, located at 1,150 meters (3,770 feet). From there the trail would climb through forests and alpine pastures, eventually taking us to the summit of La Tournette at 2,351 meters (7,713 feet). It is the highest peak around Lake Annecy and one of the most iconic hikes in the region.

A topographical map of our hike to La Tournette, with each of the three hiking days highlighted in a different color.

Over the past months we studied maps, gathered supplies, and prepared for the hike. My own training started only three weeks before, just enough to feel unprepared in a more organized way. I took up jogging and did stairs with my pack on, though I should have also practiced sleeping on uneven ground.

Flying to Annecy

Loading our packs into the plane before departure.
In the cockpit of Ben's Cessna en route to Annecy.
As we approached Annecy and prepared to land, La Tournette (left) rose above the other peaks, showing the climb that lay ahead.

Axl thought the trip would begin with a nine-hour drive south to Annecy. What he didn't know was that one of my best friends, Ben, who owns a Cessna, would be flying us instead. It felt like the perfect surprise to begin our trip.

Two hours later we landed in Annecy. After a quick lunch by the lake, we taxied to Col de la Forclaz, famous for paragliding and its views over Lake Annecy.

Paragliders circling above Lake Annecy.

Day 1: Col de la Forclaz → Refuge de la Tournette

Our trail started near the paragliding launch point at Col de la Forclaz. From there it climbed steadily through the forest. It was quiet and enclosed, with only brief glimpses of the valley through the branches.

We paused to add a stone to a cairn along the trail.

Eventually we emerged above the tree line near Chalet de l'Aulp. In September, the chalet is closed during the week, but we sat on a hill nearby, ate the rice bars Vanessa had prepared, and simply enjoyed the moment.

After our break the trail grew steeper, winding through alpine meadows and rocky slopes. The valley stretched out far below us while big mountains rose ahead.

The view on our way up to La Tournette, just past Chalet de l'Aulp.

After a couple of hours we reached the site of the old Refuge de la Tournette, which has been closed for years, and found a flat patch of grass nearby for our first campsite.

As we sat down, we had time for deeper conversations. We talked about how life will change now that he is eighteen and about to leave for university. I told him our hike felt symbolic. The climb up was like the years behind us, when his parents, myself included, made most of the important decisions in his life. The way down would be different. From here he would choose his own path in life, and my role would be to walk beside him and support him.

Saying it out loud caught me by surprise. It left me with a lump in my throat and tears in my eyes. It felt as if we would leave one version of Axl at the summit and return with another, stepping into adulthood and independence.

Axl reflecting at the end of our first day on the trail.

Dinner was a freeze-dried chili con carne that turned out to be surprisingly good. Just as we were eating, a large flock of sheep came streaming down the hillside, their bells ringing. Fortunately they kept moving, or sleep would have been impossible.

After dinner we set up our tent and took stock of our water supply. We had carried five liters of water each, knowing we could only refill at tomorrow's refuge. By now we had already used three liters for the climb and dinner. I could have drunk more, but we set two liters aside for the morning and felt good about that plan.

Our tent pitched in the high meadows near the old Refuge de la Tournette.

Later, as we settled into our sleeping bags, I realized how many firsts Axl had packed into a single day. He had taken his first private flight in a small Cessna, carried a heavy backpack up a mountain for the first time, figured out food and water supplies, and experienced wild camping for the first time.

Day 2: Refuge de la Tournette → Refuge de Praz D'zeures

Sleeping was difficult. The ground that looked flat in daylight was uneven, and my foldout mat offered little comfort. I doubt I slept more than two hours. By 4:30 the rain had started, and I started to worry about how slippery the trail might become.

When we unzipped the tent around 6:00 in the morning, a "bouquetin" (ibex) with one horn broken off stood quietly in the gray light, watching us without fear. Its calm presence was a gentle contrast to my own concerns.

When we opened the tent in the morning, a bouquetin (ibex) was standing right outside, watching us.

By 7:00 a shepherd appeared out of nowhere while we were making coffee. He told us wild camping here was not permitted. He spoke with quiet firmness. We were startled and embarrassed, having missed the sign lower down the trail. We apologized, promised to leave no trace, and packed up to be on our way.

Before leaving he shared the forecast: rain by early afternoon, thunderstorms by 16:00. You do not want to be on a summit when lightning rolls in. With that in mind we set off quickly, pushing harder than usual. Unlike the sunny day before, when the path was busy with hikers, the mountain felt empty. It was just the two of us moving upward in the quiet.

Shortly before the final section to the summit it began to rain again. Axl put on his gloves as the drops felt icy. The rain passed quickly, but we knew more was coming.

The final stretch to La Tournette was steep, with chains and metal ladders to help the climb. We scrambled with hands and feet, and with our heavy packs it was nerve-racking at times.

Reaching the top was unforgettable. At the summit we climbed onto the Fauteuil, a massive block of rock shaped like an armchair. From there the Alps spread out in every direction. In the far distance, the snow-covered summit of Mont Blanc caught the light. Below us, Lake Annecy shimmered in the valley, while dark rain clouds gathered not too far away.

At the summit of La Tournette.

We gave each other a big hug, a quiet celebration of the climb and the moment. We stayed only a short while, knowing the weather was turning.

From the summit we set our course for the Refuge de Praz D'zeures, a mountain hut about two hours' hike away. As we descended, the landscape softened into wide alpine pastures where bouquetin grazed among the grass and flowers.

The ridge from La Tournette toward Refuge de Praz D'zeures. If you look closely, the trail winds all the way into the woods.

Pushing to stay ahead of the rain and thunderstorms in the forecast, we reached the refuge (1,774 meters) around 14:30 in the afternoon. The early start seemed like a wise choice, but our hearts sank when we saw the sign on the door: "fermé" (closed).

For a moment we feared we had nowhere to stay. Then we noticed another sign off to the side. It took us a minute to decipher, since it was written in French, but eventually we understood: in September the refuge only opens from Thursday evening through Sunday night. Some relief washed over us. It was indeed Thursday, and most likely the hosts or guardians of the hut had not yet arrived.

We decided to wait and settled onto the wooden deck outside. Our water supplies were gone, but Axl found an outdoor kitchen sink with a faucet, fed by a nearby stream, and filled a couple of our empty water bottles. I drank one almost in a single go.

The outdoor sink at Refuge de Praz D'zeures where Axl refilled our empty water bottles.

To pass the time we opened the small travel chess set I had received for Christmas. The game gave us something to focus on as the clouds thickened around the hut. A damp chill settled in, and we pulled on extra layers while we leaned over the tiny board.

Sure enough, a little later the hosts of the refuge appeared. A couple with three children and two dogs. They came up the trail carrying heavy containers of food, bread, water, and other supplies on their backs, all the way from a small village more than an hour's hike below.

Almost immediately they warned us that the water at the refuge was not potable. I tried not to dwell on the bottle I had just finished, but luckily Axl had used his water filter when filling our bottles. We would find out soon enough how well it worked. (Spoiler: we survived!)

The rustic kitchen at Refuge de Praz D'zeures. I loved the moka pots and pans in the background.

The refuge itself was simple but full of character. There is no hot water, no heating, and the bunks are basic, yet the young family who runs it brings life and warmth to the place. My French was as rusty as their English, but between us we managed.

It was so cold inside the hut that we retreated into our sleeping bags for warmth. Then, as the rain began to pour, they lit a wood stove in the main room. Through the window we watched the clouds wrap around the refuge while water streamed off the roof in steady sheets. We felt grateful not to be out in our tent that night or still on the trail.

That night we had a local specialty: raclette au charbon de bois, cheese cooked on a small charcoal grill in the middle of the table. I have had raclette many times before, but never like this. The smell of melting cheese filled the room as we poured it over potatoes and tore into fresh bread. After two days of freeze-dried meals it felt like a feast. I loved it, especially because I knew Vanessa would never let me use charcoal in our living room.

After dinner we went straight to bed, exhausted from the long day and the lack of sleep the night before. Even though the bunks were hard, we slept deeply and well.

Day 3: Refuge de Praz D'zeures → La Forclaz

Morning began with breakfast in the hut. A simple bowl of cereal with milk tasted unusually good after two days of powdered meals. Two other guests joined us at the breakfast table and teased me that they could hear me snore through the thin wooden walls. Axl just grinned. Apparently my reputation had spread faster than I thought.

The day's hike covered about 8.5 kilometers (5.3 miles), starting with a 350-meter (1,150-foot) climb to a ridge, followed by a long descent of nearly 1,000 meters (3,280 feet). The storms from the night before had left everything soaked, so even the flat stretches of trail were slippery and slow.

Reaching the ridge made the effort worthwhile. On one side the clouds pressed in so tightly there was no visibility at all. On the other side the valley opened wide, green and bright in the morning light. It felt like standing with one foot in the fog and the other in the sun.

We walked along the ridge and talked, the kind of conversations that only seem to happen when you spend hours on the trail together. At one point we stopped to look for fossils and, to our delight, actually found one.

By the time we reached the village of Montmin it felt like stepping back into civilization. We left the trail for quiet roads, ate lunch in the sun near the church, and rinsed our muddy boots and rain gear at a faucet. From there it was only a short walk back to La Forclaz, where three days earlier we had set off with heavy packs and full of anticipation.

Axl rinsing the mud off his rain pants at a fountain in Montmin.

We had returned tired, lighter, and grateful for the experience and for each other. To celebrate we ordered crêpes in a tiny café and ate them in the sun, looking up at the peaks we had just conquered. Later that afternoon, we swam in Lake Annecy in our underwear and finished the day with a cold beer.

The next morning we caught a taxi to Geneva airport, flew home, and by afternoon we were back in Belgium, welcomed with Vanessa's shepherd's pie. In just a few weeks Axl will leave for university and his room will be empty, but that evening we were all together, sharing stories from our trip over dinner.

Axl standing on a dock at Lake Annecy.

It had been a short trip, just three days in the mountains. Hard at times, but very manageable for a not-super-fit forty-six year old. Some journeys are measured less by distance, duration, or difficulty, but by the transitions they mark. This one gave us what we needed: time together before the changes ahead, and a memory we will carry for the rest of our lives. On the trail Axl often walked ahead of me. In life, I will have to learn to walk beside him.

In search of lost humanity…

Death, or the return of lucidity

Drmollytov is a librarian who was very seriously injured in a motorcycle accident, an accident in which her husband lost his life.

After a period of convalescence and mourning, she is trying to rebuild her life and, gradually, she becomes aware of the consumerist frenzy that every "normal" person is caught up in: from the urge to go shopping to social networks to streaming subscriptions.

The more she cleans up her life, the more time and energy she gets back. And the less she feels the need to "be seen". It has to be said that posting only on Gemini does not exactly help with visibility!

One sentence from her latest post on Gemini stuck with me: "My only regret with social media is having been on it at all. It drained my mental energy, devoured my time and I am certain it extracted a part of my soul." (quoted very loosely).

I realise that it is not enough to free ourselves from the addiction mechanisms of social networks. We also have to become aware of how much they have deformed, destroyed, denatured our thoughts, our social relationships, our motivations. Worse: they make us objectively stupid! Since 2010, the average IQ has been falling, something that had never happened since the invention of IQ (whatever you think of that tool).

Social networks are intrinsically linked to the smartphone. They truly exploded once they optimised themselves for passive consumption on a small touchscreen (something Zuckerberg did not believe in at all). Conversely, social media addiction created a continuous demand for smartphones that are ever shinier and take photos ever more likely to generate likes.

As Jose Briones points out, these permanent interactions on a smooth sheet of glass, earbuds screwed into our ears, make us lose our sense of touch, of materiality.

Beyond our addiction to colourful numbers

When you have been an addict, when you truly believed in it, when you invested an enormous part of yourself in it, quitting smoking is not enough to be healthy. Quitting is only the first, necessary and indispensable step. There is still a long road ahead to rebuild yourself afterwards, to find again the human being who was wounded and buried.

The human being whom, in the end, few people have any interest in you finding again, but who is there, buried under incessant notifications, under the compulsive checking of your likes, your statistics, your followers. For Jose Briones, it took more than three years without a smartphone before his anxiety about not having a smartphone calmed down!

It has been years since I last had statistics about visits to this blog. Because it is not very ethical but, above all, because it was driving me mad, because my mental health was suffering terribly. Because I could no longer be satisfied with my writing except through the number of readers it brought me. Because mere statistics were destroying my soul.

As the blog This day's portion puts it, you don't need analytics!

And while the Mastodon network is very far from perfect, it is fairly easy to find an instance there that does not spy on you. Most of them don't, in fact. Unlike Bluesky, which tracks all your interactions through the company Statsig. Statsig, which has just been bought by OpenAI, the maker of ChatGPT.

It looks like Sam Altman is trying to do what Musk did and gain political influence by taking over centralised social networks. We laughed at Musk, but we have to admit it worked very well. And, as I said back in 2023, it is only a matter of time before it happens to Bluesky, which is not decentralised at all, whatever the marketing keeps repeating.

But the worst thing about all these statistics, all this data, is that we are the first to want to collect them, to sell ourselves in order to optimise them and to consult them as pretty coloured graphs displayed on our smooth, shiny sheets of glass.

The inhumanity of a world that sells itself

I keep repeating what Thierry Crouzet articulates so well in his latest article: the marketers have imposed their worldview, forcing artists, intellectuals and scientists to become salespeople, which is the antithesis of their deepest nature. For artists, intellectuals and scientists share a quest, perhaps an illusory one, for truth, for the absolute. Whereas marketing is, by definition, the art of lying, of deception, of appearances and of exploiting human beings.

This mental destruction taught in business schools is also at work with AI. It will take years before people addicted to AI can, with luck, recover their human soul, their capacity for autonomous reasoning. Developers who depend on GitHub are on the front line.

Here's hoping we can become human again without having to resort to the extreme solution described by Thierry Bayoud and Léa Deneuville in the excellent short story "Chronique d'un crevard", which appears in the Recueil de Nakamoto, a collection I warmly recommend and which is presented here by Ysabeau. And yes, the stories are under a free licence.

The idea behind Chronique d'un crevard reminded me of my own novel Printeurs. It would be great to see the two universes meet one way or another. Because that is the whole beauty of free artistic creation.

Art as a survival instinct

Of artistic creation in general, I should say, at least until corporate lawyers managed to convince artists that they had to be obsessive about "protecting their intellectual property", which turned them into victims of the most formidable scam of recent decades. Just like the marketers, corporate lawyers are, in essence, people whose job is to exploit you. It is their trade!

Only the technique differs: marketers lie and promise you happiness, lawyers threaten and corrupt. Both seek only to increase their employer's profit. Both have managed to convince artists to switch off their humanity and become marketers and lawyers themselves, to do "personal branding" and "intellectual property".

On that subject, Cory Doctorow explains very well on what illusion the famous "intellectual property" was built, and how well the people who designed it knew it was a planet-wide scam meant to turn the entire world into a United States colony.

Marketers and lawyers have managed to convince artists to hate the very foundation of their craft: their audience, renamed "pirates" the moment they step outside the crowd barriers of surveillance corporatism. They have managed to convince scientists to hate the very foundation of their craft: the unrestricted sharing of knowledge. The fact that the largest scientific database in the world, Sci-Hub, is considered a pirate site and banned everywhere in the world tells you everything you need to know about our society.

Surveillance capitalism rots everything it touches. First by making life hard for dissenters (which is fair game) but, above all, by buying and corrupting the rebels who succeed despite everything. Once a millionaire, that anti-establishment artist will become the system's staunchest supporter and will adapt his slogan: "Be rebels, but not too much, buy my merchandise!"

Just as surrealism was the artistic and intellectual response to fascism, only art can save our humanity. A raw, tactile, sensory art. But, above all, a free art that is shared, that spreads, and that tells the very notion of virtual property to get lost.

An art that is shared, but that also forces its audience to share, to rediscover the essence of our humanity: sharing.

  • The illustration photo was taken by Diegohnxiv and shows a mural in honour of Sci-Hub at the university of Mexico City
  • Printeurs and Le recueil de Nakamoto can be ordered from your favourite independent bookshop! The ebooks are on libgen, at least Printeurs is. I know, because I am the one who uploaded it.

I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writing in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

September 03, 2025

It has been far too long since I celebrated the most beautiful month in the world with a beautiful song. Hence this Septembery "August Moon", live by Jacob Alon: Watch this video on YouTube.


At some point, I had to admit it: I’ve turned GitHub Issues into a glorified chart gallery.

Let me explain.

Over on my amedee/ansible-servers repository, I have a workflow called workflow-metrics.yml, which runs after every pipeline. It uses yykamei/github-workflows-metrics to generate beautiful charts that show how long my CI pipeline takes to run. Those charts are then posted into a GitHub Issue—one per run.

It’s neat. It’s visual. It’s entirely unnecessary to keep them forever.

The thing is: every time the workflow runs, it creates a new issue and closes the old one. So naturally, I end up with a long, trailing graveyard of “CI Metrics” issues that serve no purpose once they’re a few weeks old.

Cue the digital broom. 🧹


Enter cleanup-closed-issues.yml

To avoid hoarding useless closed issues like some kind of GitHub raccoon, I created a scheduled workflow that runs every Monday at 3:00 AM UTC and deletes the cruft:

schedule:
  - cron: '0 3 * * 1' # Every Monday at 03:00 UTC

This workflow:

  • Keeps at least 6 closed issues (just in case I want to peek at recent metrics).
  • Keeps issues that were closed less than 30 days ago.
  • Deletes everything else—quietly, efficiently, and without breaking a sweat.

It’s also configurable when triggered manually, with inputs for dry_run, days_to_keep, and min_issues_to_keep. So I can preview deletions before committing them, or tweak the retention period as needed.
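For example, a dry run with a longer retention window can be kicked off from the command line (assuming the gh CLI is authenticated for the repository; the values here are just examples):

gh workflow run cleanup-closed-issues.yml \
  --repo amedee/ansible-servers \
  -f dry_run=true \
  -f days_to_keep=60 \
  -f min_issues_to_keep=10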


📂 Complete Source Code for the Cleanup Workflow

name: 🧹 Cleanup Closed Issues

on:
  schedule:
    - cron: '0 3 * * 1' # Runs every Monday at 03:00 UTC
  workflow_dispatch:
    inputs:
      dry_run:
        description: "Enable dry run mode (preview deletions, no actual delete)"
        required: false
        default: "false"
        type: choice
        options:
          - "true"
          - "false"
      days_to_keep:
        description: "Number of days to retain closed issues"
        required: false
        default: "30"
        type: string
      min_issues_to_keep:
        description: "Minimum number of closed issues to keep"
        required: false
        default: "6"
        type: string

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

permissions:
  issues: write

jobs:
  cleanup:
    runs-on: ubuntu-latest

    steps:
      - name: Install GitHub CLI
        run: sudo apt-get install --yes gh

      - name: Delete old closed issues
        env:
          GH_TOKEN: ${{ secrets.GH_FINEGRAINED_PAT }}
          DRY_RUN: ${{ github.event.inputs.dry_run || 'false' }}
          DAYS_TO_KEEP: ${{ github.event.inputs.days_to_keep || '30' }}
          MIN_ISSUES_TO_KEEP: ${{ github.event.inputs.min_issues_to_keep || '6' }}
          REPO: ${{ github.repository }}
        run: |
          NOW=$(date -u +%s)
          THRESHOLD_DATE=$(date -u -d "${DAYS_TO_KEEP} days ago" +%s)
          echo "Only consider issues older than ${THRESHOLD_DATE}"

          echo "::group::Checking GitHub API Rate Limits..."
          RATE_LIMIT=$(gh api /rate_limit --jq '.rate.remaining')
          echo "Remaining API requests: ${RATE_LIMIT}"
          if [[ "${RATE_LIMIT}" -lt 10 ]]; then
            echo "⚠️ Low API limit detected. Sleeping for a while..."
            sleep 60
          fi
          echo "::endgroup::"

          echo "Fetching ALL closed issues from ${REPO}..."
          CLOSED_ISSUES=$(gh issue list --repo "${REPO}" --state closed --limit 1000 --json number,closedAt)

          if [ "${CLOSED_ISSUES}" = "[]" ]; then
            echo "✅ No closed issues found. Exiting."
            exit 0
          fi

          ISSUES_TO_DELETE=$(echo "${CLOSED_ISSUES}" | jq -r \
            --argjson now "${NOW}" \
            --argjson limit "${MIN_ISSUES_TO_KEEP}" \
            --argjson threshold "${THRESHOLD_DATE}" '
              .[:-(if length < $limit then 0 else $limit end)]
              | map(select(
                  (.closedAt | type == "string") and
                  ((.closedAt | fromdateiso8601) < $threshold)
                ))
              | .[].number
            ' || echo "")

          if [ -z "${ISSUES_TO_DELETE}" ]; then
            echo "✅ No issues to delete. Exiting."
            exit 0
          fi

          echo "::group::Issues to delete:"
          echo "${ISSUES_TO_DELETE}"
          echo "::endgroup::"

          if [ "${DRY_RUN}" = "true" ]; then
            echo "🛑 DRY RUN ENABLED: Issues will NOT be deleted."
            exit 0
          fi

          echo "⏳ Deleting issues..."
          echo "${ISSUES_TO_DELETE}" \
            | xargs -I {} -P 5 gh issue delete "{}" --repo "${REPO}" --yes

          DELETED_COUNT=$(echo "${ISSUES_TO_DELETE}" | wc -l)
          REMAINING_ISSUES=$(gh issue list --repo "${REPO}" --state closed --limit 100 | wc -l)

          echo "::group::✅ Issue cleanup completed!"
          echo "📌 Deleted Issues: ${DELETED_COUNT}"
          echo "📌 Remaining Closed Issues: ${REMAINING_ISSUES}"
          echo "::endgroup::"

          {
            echo "### 🗑️ GitHub Issue Cleanup Summary"
            echo "- **Deleted Issues**: ${DELETED_COUNT}"
            echo "- **Remaining Closed Issues**: ${REMAINING_ISSUES}"
          } >> "$GITHUB_STEP_SUMMARY"


🛠️ Technical Design Choices Behind the Cleanup Workflow

Cleaning up old GitHub issues may seem trivial, but doing it well requires a few careful decisions. Here’s why I built the workflow the way I did:

Why GitHub CLI (gh)?

While I could have used raw REST API calls or GraphQL, the GitHub CLI (gh) provides a nice balance of power and simplicity:

  • It handles authentication and pagination under the hood.
  • Supports JSON output and filtering directly with --json and --jq.
  • Provides convenient commands like gh issue list and gh issue delete that make the script readable.
  • Comes pre-installed on GitHub runners or can be installed easily.

Example fetching closed issues:

gh issue list --repo "$REPO" --state closed --limit 1000 --json number,closedAt

No messy headers or tokens, just straightforward commands.

Filtering with jq

I use jq to:

  • Retain a minimum number of issues to keep (min_issues_to_keep).
  • Keep issues closed more recently than the retention period (days_to_keep).
  • Parse and compare issue closed timestamps with precision.
  • Work on issues only (pull requests never appear in the data, since gh issue list does not return them).

The jq filter looks like this:

jq -r --argjson now "$NOW" --argjson limit "$MIN_ISSUES_TO_KEEP" --argjson threshold "$THRESHOLD_DATE" '
  .[:-(if length < $limit then 0 else $limit end)]
  | map(select(
      (.closedAt | type == "string") and
      ((.closedAt | fromdateiso8601) < $threshold)
    ))
  | .[].number
'
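To convince myself the slicing and the date math behave, I like to run the filter locally against a small hand-written sample (the issue numbers and dates below are made up):

NOW=$(date -u +%s)
THRESHOLD_DATE=$(date -u -d "30 days ago" +%s)
MIN_ISSUES_TO_KEEP=2

echo '[
  {"number": 101, "closedAt": "2024-01-05T12:00:00Z"},
  {"number": 102, "closedAt": "2024-03-01T09:30:00Z"},
  {"number": 103, "closedAt": "2099-01-01T00:00:00Z"}
]' | jq -r \
  --argjson now "$NOW" \
  --argjson limit "$MIN_ISSUES_TO_KEEP" \
  --argjson threshold "$THRESHOLD_DATE" '
    .[:-(if length < $limit then 0 else $limit end)]
    | map(select(
        (.closedAt | type == "string") and
        ((.closedAt | fromdateiso8601) < $threshold)
      ))
    | .[].number
  '

This prints 101: it is the only issue that is both older than the threshold and outside the last two entries, which the slice protects as the minimum number of issues to keep.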

Secure Authentication with Fine-Grained PAT

Because deleting issues is a destructive operation, the workflow uses a Fine-Grained Personal Access Token (PAT) with the narrowest possible scopes:

  • Issues: Read and Write
  • Limited to the repository in question

The token is securely stored as a GitHub Secret (GH_FINEGRAINED_PAT).

Note: Pull requests are not deleted because they are filtered out and the CLI won’t delete PRs via the issues API.

Dry Run for Safety

Before deleting anything, I can run the workflow in dry_run mode to preview what would be deleted:

inputs:
  dry_run:
    description: "Enable dry run mode (preview deletions, no actual delete)"
    default: "false"

This lets me double-check without risking accidental data loss.

Parallel Deletion

Deletion happens in parallel to speed things up:

echo "$ISSUES_TO_DELETE" | xargs -I {} -P 5 gh issue delete "{}" --repo "$REPO" --yes

Up to 5 deletions run concurrently — handy when cleaning dozens of old issues.

User-Friendly Output

The workflow uses GitHub Actions’ logging groups and step summaries to give a clean, collapsible UI:

echo "::group::Issues to delete:"
echo "$ISSUES_TO_DELETE"
echo "::endgroup::"

And a markdown summary is generated for quick reference in the Actions UI.


Why Bother?

I’m not deleting old issues because of disk space or API limits — GitHub doesn’t charge for that. It’s about:

  • Reducing clutter so my issue list stays manageable.
  • Making it easier to find recent, relevant information.
  • Automating maintenance to free my brain for other things.
  • Keeping my tooling neat and tidy, which is its own kind of joy.

Steal It, Adapt It, Use It

If you’re generating temporary issues or ephemeral data in GitHub Issues, consider using a cleanup workflow like this one.

It’s simple, secure, and effective.

Because sometimes, good housekeeping is the best feature.


🧼✨ Happy coding (and cleaning)!

How I fell in love with calendar.txt

The more I learn about Unix tools, the more I realise that we keep reinventing Rube Goldberg wheels every day, and that the Unix tools are, quite often, elegant enough.

Months ago, I discovered calendar.txt: a simple file with all your dates, so simple and stupid that I wondered 1) why I hadn't thought of it myself and 2) how it could possibly be useful.

I downloaded the file and tried it. Without thinking much about it, I realised that I could add the following line to my offpunk startup:

!grep `date -I` calendar.txt --color

And, just like that, I suddenly have the important things for my day every time I start Offpunk. In my "do_the_internet.sh", I added the following:

grep `date -I` calendar.txt --color -A 7

Which allows me to have an overview of the next seven days.

But what about editing? This is the alias I added to my shell to automatically edit today’s date:

alias calendar='vim +/`date -I` ~/inbox/calendar.txt'

It feels so easy, so elegant, so simple. All those aliases came naturally, without having to spend more than a few seconds in the man page of "date". No need to fiddle with a heavy web interface. I can grep through my calendar. I can edit it with vim. I can share it, save it and synchronise it without changing anything else, without creating any account. Looking for a date, even far in the future, is as simple as typing "/YEAR-MONTH-DAY" in vim.
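For reference, this is roughly what a slice of the file looks like, one line per day with the ISO date first (my own sketch; the exact columns of the upstream template may differ):

2025-09-22 Mon w39 09:00 standup, 14:00 dentist
2025-09-23 Tue w39 10:00 course B101 room 2.35
2025-09-24 Wed w39

Because every day already has its own dated line, grepping for `date -I` finds today, and -A 7 simply prints the week that follows.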

Recurring events

The icing on the cake became apparent when I received my teaching schedule for the next semester. I had to add a recurring event every Tuesday minus some special cases where the university is closed.

Not a big deal. I do it each year, fiddling with the web interface of my calendar to find the right options to make the event recurring, then removing the special cases without accidentally deleting the whole series.

It takes at most 10 minutes, 15 if I miss something. Ten minutes of my life that I hate, forced to use a mouse and click through menus that change every 6 months because, you know, "yeah, redesign".

But, with my calendar.txt, it takes exactly 15 seconds.

/Tue

To find the first Tuesday.

i

To write the course number and classroom number, press Escape, then

n.n.n.n.n.n.nn.n.n.n.nn.n.n.n.n.

I’m far from being a Vim expert but this occurred naturally, without really thinking about the tool. I was only focused on the date being correct. It was quick and pleasant.

Shared events and collaboration

I read my email in Neomutt. When I'm invited to an event, I must open a browser, access the email through my webmail and click the "Yes" button in order to have it added to my calendar. Events I didn't respond to show up in my calendar, even if I don't want them. It took me some settings-digging not to display events I had refused. Which is kinda dumb, but so are the majority of our tools these days.

With calendar.txt, I manually enter the details from the invitation, which is not perfect but takes less time than opening a browser, logging into a webmail and clicking a button while waiting, at every step, for countless JavaScript libraries to load.

Invitations are rare enough that I don't mind entering the details by hand. But I'm thinking about writing a small bash script that would read an ICS file and add it to calendar.txt. It looks quite easy to do.
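A minimal sketch of what such a script could look like, assuming a single-event invitation whose DTSTART looks like 20250918T103000 and a calendar.txt with one pre-generated line per day starting with the ISO date (the paths are mine, adapt as needed):

#!/bin/sh
# Hypothetical ics2calendar.sh: append an invitation to the matching day line.
ICS="$1"
CAL="$HOME/inbox/calendar.txt"

# Strip carriage returns and keep everything after the last colon.
STAMP=$(grep -m1 '^DTSTART' "$ICS" | tr -d '\r' | sed 's/.*://')
SUMMARY=$(grep -m1 '^SUMMARY' "$ICS" | tr -d '\r' | sed 's/^SUMMARY[^:]*://')

# 20250918T103000 -> 2025-09-18 and 10:30
DAY=$(echo "$STAMP" | cut -c1-8 | sed 's/\(....\)\(..\)\(..\)/\1-\2-\3/')
HOUR=$(echo "$STAMP" | cut -c10-13 | sed 's/\(..\)\(..\)/\1:\2/')

# Append " HH:MM summary" to the line starting with that date.
# (A summary containing the | delimiter would need escaping first.)
sed -i "/^$DAY/ s|\$| $HOUR $SUMMARY|" "$CAL"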

I also thought about doing the reverse: a small script that would create an ICS and email it to any address added to an event. But it would be hard to track which events had already been sent and which ones were new. Let's stick to the web interface when I need to create a shared event.

Calendar.txt should remain simple and for my personal use. The point of Unix tools is to let you create the tools you need for yourself, not to create a startup with a shiny name/logo that will attract investors hoping to make billions in a couple of years by enshittifying the life of captive users.

And when you work with a team, you are stuck anyway with the worst possible tool that satisfies the needs of the dumbest member of the team. Usually the manager.

With Unix tools, each solution is personal and different from the others.

Simplifying calendaring

Another unexpected advantage of the system is that you no longer need to guess the end date of events. All I need to know is that I have a meeting at 10 and a lunch at 12. I don't need to estimate the duration of the meeting, which is usually only a rough estimate anyway and not important information. But you can't create an event in a modern calendar without giving a precise end.

Calendar.txt is simple, calendar.txt is good.

I can add events without thinking about it, without calendaring being a chore. Sandra explains how she realised that using an online calendar was a chore when she started to use a paper agenda.

Going back to a paper calendar is probably something I will end up doing but, in the meantime, calendar.txt is a breeze.

Trusting my calendar

Most importantly, I now trust my calendar.

I've been burned by this before: I had entered my whole journey to a foreign country in my online calendar, only to discover upon landing that my calendar had decided to be "smart" and shift every event because I was not in the same time zone. Since then, I write the time of an event in its title, even if it looks redundant. This also helps with events being moved by accident while scrolling on a smartphone or in a browser, which is rare but has happened often enough to make me anxious.

I had the realisation that I don’t trust any calendar application because, for events with a very precise time (like a train), I always fall back on checking the confirmation email or PDFs.

It’s not the case anymore with calendar.txt. I trust the file. I trust the tool.

There are not many tools you can trust.

Mobile calendar.txt

I don't need notifications about events on my smartphone. If a notification tells me about an event I forgot, it is too late anyway. And if my phone is on silent, like always, the notification is useless anyway. We killed notifications with too many notifications, something I have addressed here:

I do want to consult and edit my calendar on my phone. Getting the file onto my phone is as easy as having it synchronised with my computer through any means. It's a simple txt file.

Using it is another story.

Looking at my phone, I realise how far we have fallen: Android doesn't let me create a simple shortcut to that calendar.txt file that would open on the current day. There's probably a way, but I can't think of one. Probably because I don't understand that system. After all, I'm not even supposed to try understanding it.

Android is not Unix. Android, like other proprietary operating systems, is a cage you need to fight against if you don't want to surrender your choices, your data, your soul. Unix is freedom: hard to conquer, but impossible to let go of once you have tasted it.

I'm Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

September 02, 2025

Recently, I published an article related to MRS (MySQL REST Service), which we released as a lab. I wanted to explore how I could use this new cool feature within an application. I decided to create an application from scratch using Helidon. The application uses the Sakila sample database. Why Helidon? I decided to use […]

A life without notifications

Disclaimer: this article is about my experience with the Mudita Kompakt, but since I have no ties to that company, I will not answer any questions about this device or about the Hisense A5. Everything I have to say about the device is in this post. For the rest, see the many videos and articles on the Web, or join the Mudita forum.

The big problem with digital minimalism is that there is no single solution that satisfies everyone. On forums dedicated to digital minimalism, every solution is criticised for doing "too much" and, sometimes by the very same people, for not doing enough.

I have seen people looking for a dumbphone to cure their addiction to hyperconnectivity complain that they could not install WhatsApp and Facebook on it. More generally, I have enough experience in the industry to know that humans are very bad at determining their own needs, which I call "the three-storey single-storey house syndrome". I want everything, but make it minimalist.

Screen addiction

Six years ago, I realised that a large part of my smartphone addiction came from the screen itself. As someone on a forum I frequent once said, screens have become so beautiful, the colours so rich, that the image is more beautiful than reality. Looking at a retouched, colour-saturated landscape photo is nicer than looking at the actual landscape, with its greyness, the motorway that does not appear in the frame, the drizzle running down your neck. We no longer use our smartphones to do something; they have become part of our bodies and our minds.

In my own case, giving up the screen entirely with a dumbphone was not an option, because the most important thing I do with my phone is find my way by bike and, while in the middle of nowhere, create a return route to load onto my GPS. That, and using Signal, the only chat in my daily life.

To be honest, the dumbphone concept somewhat escapes me, because if there is one thing I neither want nor need, it is to be reachable everywhere, all the time. If it is just to keep a dumbphone on silent in my pocket, I might as well carry nothing at all!

My quest for digital sobriety therefore began with one of the very few e-ink smartphones, the Hisense A5 (on the right in the illustration photo).

The Hisense is a cheap, poor-quality product aimed at the Chinese market. While it came without any Google services, it is full of Chinese spyware that cannot be uninstalled (which I tried to contain with the AdGuard firewall, lovingly configured over many hours).

For six years, I used this device exclusively. On my first trip with it, a train journey through Brittany for a book project, I was constantly anxious that "something would go wrong because I didn't have a real smartphone". But little by little, I found myself using it less than its predecessor and accepting its limits. The e-ink screen, combined with the slowness and the bugs of software partly in Chinese, did not make me want to use it at all. Very quickly, the colours of regular smartphone screens started to feel violent, aggressive. Yet give me a few minutes on such a screen and, suddenly, I get a good old dopamine hit, an urge to use it to do something. What? It does not matter, as long as I can keep my eyes glued to those bright, moving shapes.

Permanent hyperconnectivity

I often pretended the Hisense was not a smartphone, either to avoid installing some supposedly indispensable app for a service I needed or, quite simply, to get a paper menu in those restaurants that feel they are at the cutting edge of technology because they have a QR code taped to the table. But that was a lie. Because the Hisense is, deep down, a perfectly ordinary smartphone.

I had my email on it, a web browser and even Mastodon, which I quickly removed but could still access through Firefox or Inkbro, a browser optimised for e-ink screens.

My addiction is doing much better. I have lost the habit of having my smartphone on me at all times, and I only take it when going out if I think I will really need it. It sounds like nothing, but for some addicts, going out without a phone is a real adventure. Trying it is an excellent way to realise just how addicted you are.

The Hisense may be big and ugly, but whenever I had it with me and had a spare moment, I would check whether I had received any email. As with any smartphone, I had to remember to put it back on silent after turning the ringer on because I wanted to be reachable, and to update all the apps I kept around because I might potentially need them. In short, the classic smartphone routine.

That suited me but, since e-ink smartphones never really took off, I wondered how I would replace a device that was showing signs of weakness (a battery that suddenly drains, charging that only works intermittently).

That is when the Mudita Kompakt won me over.

Mudita is a Polish company that aims to offer products promoting mindfulness: alarm clocks with a clean design, watches, and a dumbphone, the Mudita Pure, which I had my eye on because it was based on an entirely open source system. Unfortunately, the Pure was too minimalist for me, as it could not drive my bike GPS.

Then came the Kompakt.

Based on Android, the Kompakt is technically a smartphone. Just smaller. And with an e-ink screen of far lower quality than the Hisense. And yet…

First steps with the Kompakt

The first thing that struck me when unboxing my Kompakt is that I did not have to create any account or go through any procedure other than choosing the language. Insert a SIM card, turn it on, and it works.

Mudita pays close attention to privacy and offers no online services. The phone is entirely de-Googled, even "de-clouded" (unlike, say, Murena /e/OS). For the first time in ages, I did not have to fight my phone, did not have to configure it or transform it.

It is incredible how happy that made me.

In the interest of transparency, I should mention that the developers collect anonymised debug data and that, for now, this cannot be disabled. A discussion on the subject is ongoing on the Mudita forum.

Because, yes, Mudita has a discussion forum where an employee of the company participates, tries to help users, and relays issues to the developers. Something that was table stakes in 2010 but seems incredible nowadays.

In short, I felt like a respected customer, not a cash cow. And I still cannot get over it.

Beyond the forum, what struck me about the Mudita is that it really pushes the idea of minimalism to its limits. The GPS application lets you search for an address and plot a walking, cycling or driving route. That is all. Options? None, nada, nihil! And you know what? It works! Since it is OpenStreetMap, it works very well, and I uninstalled CoMaps, which I had installed out of habit. The camera… takes pictures, and that is all (you can merely turn the flash on or off).

Finding your way with the Mudita Kompakt

It is like that for every app: within 10 minutes you will have seen every option there is, and it is incredibly refreshing.

Granted, sometimes it borders on too much. The music player lists your MP3s and plays them in alphabetical order. That is it. They have promised to improve this but, honestly, I find it amusing to be limited in this way.

Getting down to business

Since Mudita offers no online services, the best way to interact with the phone is Mudita Center, a piece of software available for Windows/macOS/Linux. After downloading it, you connect your phone to your computer with… hold your breath… a USB cable. (On Debian/Ubuntu, you need to be a member of the "dialout" group: "sudo adduser ploum dialout", reboot, and you are good.)

A cable! In 2025! Incredible, isn't it? When I think of how I had to fight with Astrohaus's Freewrite, which forces you to use its proprietary cloud, I appreciate beyond measure being able to plug my phone in with a cable.

With Mudita Center, you can send files to the device: MP3s for music, EPUBs or PDFs, which open in the E-reader app. Handy for train tickets and other electronic tickets. One section is reserved for transferring the APK files you want to "sideload". We have been screwed over by Google and Apple so thoroughly that we have lost the right to use the word "install" when talking about software. That word is now privatised and we must "sideload" (at least as long as that is still legally possible…).

In my case, I sideloaded only one APK: F-Droid. With F-Droid, I was able to install Molly (a Signal client), my password manager, my 2FA app, the Flickboard keyboard and Aurora Store. With Aurora Store, I installed Komoot, Garmin, Proton Calendar and the SNCB app for train timetables. No browser and no email this time! I did allow myself the Wikipedia application, just to try it.

Everything is small and in black and white (I was already used to that), but in my case everything works. Be warned that this might not hold for you. Apps that need Google services, such as Strava, will refuse to launch (no change from my Hisense there). My banking app requires a selfie camera (which the Kompakt does not have), so I will have to find a workaround.

By the way, Mudita does not let you customise the application list. Apps are sorted alphabetically. No shortcuts, no options. While it is disconcerting at first, it very quickly turns out to be very pleasant because it is, once again, one less thing to think about, one less excuse to fiddle.

Note that it is nevertheless possible to hide applications. If you do not use the meditation app, you can simply hide it. Simple and effective.

Notifications

It was when I received my first Signal message that I noticed something strange. I heard the sound, but I could not see any notification.

And for good reason… the Mudita Kompakt has no notifications! The home screen shows you whether you have had calls or text messages but, apart from that, there are no notifications at all!

My first reflex was to look into installing an alternative launcher, InkOS, which allows greater configurability and notifications.

But… wait a second! What am I doing? I am trying to rebuild a smartphone! What if I tried using the Mudita the way it was designed to be used? Without notifications!

The only real problem with this approach is that the notifications do exist, but Mudita hides them. So you hear them, but you cannot tell where they come from.

In my case, it is essentially Signal (well, Molly, for those keeping track). So in Signal, I configured a notification profile that is silent except for members of my close family.

If I hear my phone make a sound, I know it is a message from my family. The others I only see when I choose to open Signal. Which is exactly what a communication system should be.

Of course, things get more complicated if you have several applications sending notifications. It is possible to access notifications and disable them per application using "Activity Launcher", available on F-Droid. It is a bit of fiddling, but it allowed me to disable the "parasitic" notifications of everything that is not Signal.

Life without notifications

Once you accept this way of working, the Mudita suddenly makes complete sense.

I had already drastically reduced notifications on the Hisense. It was on silent most of the time anyway. But every time I looked at the screen, I saw the little icons. Mechanically, I would swipe my finger to pull down the notification drawer and "check" before swiping sideways to dismiss.

Good grief, how I loathe those swiping gestures, never precise, never as satisfying as the click of a key being pressed. Incidentally, the Mudita does not let you "swipe" through the application list either. You scroll with an arrow. It is a small thing, but I appreciate it!

But even without notifications on the Hisense, I would check from time to time whether I had received an important email, or look something up on a website.

Oh, nothing too serious. An addiction perfectly under control. But a series of reflexes I wanted to clean out of my life.

With the Mudita Kompakt, the experience is quite unsettling. Mechanically, I pick up the device and… nothing. There is nothing to do. The only thing I can check is Signal. I am then forced to put the little screen back down.

No need to switch to silent or airplane mode either. The Kompakt has a hardware "Offline+" switch that disables all networks and all sensors, including the camera and the microphone. In Offline+, nothing goes in and nothing comes out of the device. And when I am connected, I know I will only get phone calls, text messages, and Signal messages from my close family.

It is like a new world…

Not everything is perfect

Let's not kid ourselves: the Mudita Kompakt is far from perfect. It is small, yet a bit too thick. There are bugs, like the alarm going off late or not at all, or the camera taking nearly two seconds between the button press and the picture actually being taken (though that is still better than the Hisense's camera, whose lens jammed after a few weeks of use, leaving all my photos hopelessly blurry).

The forum is full of dissatisfied users. Some, I think, because they have not fully come to terms with the limits of digital minimalism and were hoping for a full smartphone with an e-ink screen. But in other cases it is clearly due to bugs in the system for use cases that do not affect me directly, such as the Bluetooth problems, while I am simply happy to have an audio jack. The Kompakt runs a heavily modified, de-Googled version of Android 12 (Google's phones are on Android 16). The hardware itself is rather old (more so than my Hisense). These are details that can turn out to matter. The battery, for instance, lasts three to four days, which is nothing extraordinary compared to the Hisense. One hour of Wi-Fi hotspot drains 10% of the battery, where the Hisense would not even lose 3%.

The music application really is very minimalist!

Despite its limitations, the device is not cheap, and it is probably possible to configure any Android device to get a clean screen and no notifications. But what I like about the Mudita is precisely that I do not configure it. That I do not try to figure out what I should block, that I do not spend time optimising my home screen or the placement of my applications.

It certainly will not suit everyone. There are surely plenty of flaws that make the Kompakt unusable for you. But for my own little case and my current lifestyle, it is a real joy (except for that wretched banking app, for which I have not yet found a solution).

Learning to let go

Because, yes, I am going to miss things. I will see urgent emails much later. I will not be able to quickly look something up while on the move. I will not be able to take beautiful photos.

That is the whole point! Digital minimalism means, by its very nature, no longer being able to do everything all the time. It means being forced to be bored during idle moments, to plan certain outings, to ask the people around you for information when needed, and to admit, in certain situations, that life would have been easier with a traditional smartphone.

If you have not done the prior work of sorting out what is truly necessary for you, do not even think about getting a minimalist phone like the Kompakt. This phone suits me because I have been thinking about the subject for years and because it is compatible with my lifestyle and my obligations.

If you idealise minimalism without thinking through the consequences, you will be frustrated at the first friction, at the first loss of that ever-present convenience that is the smartphone. Digital minimalism is also a very personal concept. When I look at the number of connected devices I own, I can hardly claim to be a "minimalist". I simply try to be deliberate and to make sure that each device brings me a very clear benefit with as few drawbacks as possible (like my bike GPS or my air quality monitor). I am not really a minimalist; I just think the traditional smartphone has a very significant negative impact on my life, and I try to minimise it.

Some people tell me they have no choice. As Jose Briones puts it very well, that is false. We do, for the time being, have a choice. It is just not an easy one, and you have to own the consequences and accept changing your way of life for it.

The smartphone obligation

And that is precisely what is most frightening about this attempt to minimise smartphone use: realising to what extent having one is becoming almost mandatory, to what extent the choice is becoming ever more tenuous. Commercial services, but also public administrations, assume that you necessarily have a recent smartphone, a Google account, a permanent Internet connection, a well-charged battery, and that you are willing to install yet another application and create yet another online account for something as mundane as paying for a parking spot or getting into an event. A train conductor recently confided to me that the SNCB was planning to phase out paper tickets in favour of the app and the smartphone.

On minimalist forums, I have even come across a category of users who keep a regular smartphone just for going to work, out of sheer fear of being mocked or of looking like a rebel to their colleagues. In the same way, some refuse to install Signal or to delete their Facebook account for fear that it might look suspicious. One thing seems clear to me: if the fear of potentially looking suspicious is what is holding you back, it is urgent to act now, while that suspicion is still only potential. Install Signal and GrapheneOS or /e/OS now, so that in the future you can say you have been doing things this way for months or years.

In the case of the smartphone, I note with dread that while it is theoretically possible never to have owned one, it is extremely difficult to go back. Banks, for instance, often prevent you from returning to a smartphone-free authentication method once the smartphone-based one has been activated!

I smile when I think of the times my refusal of the smartphone earned me a remark along the lines of: "Oh? You're not comfortable with new technology?" Often, I do not answer. Or I settle for an "actually, quite the opposite…". At least with the Mudita, I will be able to hold it up and state loud and clear: I do not have a smartphone!

You and I know that this is not technically entirely true, but those who want to impose the ubiquity of the GoogApple smartphone are, by definition, technological ignoramuses. They will not notice a thing… And, for a while longer, they will still be forced to adapt, to accept that, no, not everyone has a smartphone on them at all times.

I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

August 28, 2025

TL;DR: A software engineer with a dream undertook the world's most advanced personal trainer course. Despite being the odd duck among professional athletes, coaches and bodybuilders, he graduated top of his class and is now starting a mission to build a free and open source fitness platform to power next-gen fitness apps and a Wikipedia-style public service. But he needs your help!

The fitness community & industry seem largely dysfunctional.

  1. Content mess on social media

Social media is flooded with redundant, misleading content, e.g. endless variations of the same "how to do 'some exercise'" videos, repackaged daily by creators chasing relevance, and deceptive clickbait designed to farm engagement. (Notable exception: Sean Nalewanyj has been consistently authentic, accurate, and entertaining.) Some content is actually great, but it is extremely hard to find, needs curation, and sometimes context.

  2. Selling “programs” and “exercise libraries”

Coaches and vendors sell workout programs and exercise instructions as if they were proprietary "secret sauce". They aren't. As much as they would like you to believe otherwise, workout plans and exercise instructions are not copyrightable. Specific expressions of them, such as video demonstrations, are; though there are ways such videos can be legally used by third parties, and I see an opportunity for a win-win model with the creator, but more on that later. This is why sellers usually simply repeat what is already widely understood, replicate (possibly old and misguided) instructions, or overcomplicate things in an attempt to differentiate. Furthermore, workout programs (or rather, a "rendition" of one) often have no or limited personalization options. The actually important parts, such as adjusting programs across multiple goals (e.g. combining with sports), across time (based on observed progression) or to accommodate injuries, and ensuring consistency with current scientific evidence, require expensive expertise and are often missing from the "product".

Many exercise libraries exist, but they require royalties. To build a new app (even a non-commercial one), you need to either license a commercial library or recreate your own. I checked multiple free open source ones, but let's just say there are serious quality and legal concerns. Finally, Wikipedia is legal and favorably licensed, but it is too text-based and can't easily be used by applications.

  3. Sub-optimal AIs

When you ask an LLM for guidance, it sometimes does a pretty good job; often it doesn't, because:

  • Unclear sources: AIs regurgitate whatever programs they were fed during training, sometimes written by experts, sometimes by amateurs.
  • Output degrades when you need specific personalized advice or adjustments over time, which is actually a critical piece of progressing in your fitness journey. Try to make it too custom and it starts hallucinating.
  • Today's systems are text-based. They may seem cheap today, because vendors are subsidizing the true cost in an attempt to capture the market, but they are inefficient, inaccurate and financially unsustainable. Text also makes for a very crude user interface.

I believe future AIs will use data models and UIs that are both domain-specific and richer than plain text, so we need a library to match. Even "traditional" (non-AI) applications can gain a lot of functionality if they can leverage such structured data.

  4. Closed source days are numbered

Good coaches can bring value via in-person demonstrations, personalization, holding clients accountable and helping them adopt new habits. This unscalable model puts an upper limit on their income. The better-known coaches on social media solve this by launching their own apps. Some of these seem actually quite good, but they suffer from the typical downsides we've seen in other software domains, such as vendor lock-in, lack of data ownership, incompatibility across apps, lack of customization, high fees, etc. We've seen how this plays out in devtools, in the enterprise, and in the cloud. According to more and more investment firms, it's starting to play out in every other industry (just check the OSS Capital portfolio or see what Andreessen Horowitz, one of the most successful VC funds of all time, has to say about it): software is becoming open source in all markets. It leads to more user-friendly products and is the better way to build more successful businesses. It is a disruptive force. Board the train or be left in the dust.

What am I going to do about it?

I'm an experienced software engineer. I have experience building teams and companies. I'm lucky enough to have a window of time and some budget, but I need to make it count. The first thing I did was educate myself properly on fitness. In 2024-2025 I took the Menno Henselmans Personal Trainer course. This is the most in-depth, highly accredited, science-based course program for personal trainers that I could find. It was an interesting experience being the lone software developer among a group of athletes, coaches and bodybuilders. Earlier this year I graduated Magna Cum Laude, top of class.

Right: one of the best coaches in the world. Left: this guy

Now I am a certified coach who learned from one of the top coaches worldwide, with decades of experience behind the material. I can train and coach individuals. But as a software engineer, I know that even a small software project can grow to change the world. What Wikipedia did for articles is what I aspire to make for fitness: a free public service comprising information, but even more so hands-on tools and applications to put that information into action. Perhaps it will also give industry professionals opportunities to differentiate in more meaningful ways.

To support the applications, we also need:

  • an exercise library
  • an algorithms library (or “tools library” in AI lingo)

I’ve started prototyping both the libraries and some applications on body.build. I hope to grow a project and a community around it that will outlive me. It is therefore open source.

Next-gen applications and workflows

Thus far body.build has:

  • a calorie calculator
  • a weight lifting volume calculator
  • a program builder
  • a crude exercise explorer (prototype)

I have some ideas for more stuff that can be built by leveraging the new libraries (discussed below):

  • a mobile companion app to perform your workouts, have quick access to exercise demonstrations/cues, and log performance. You’ll own your data to do your own analysis, and a personal interest to me is the ability to try different variations or cues (see below), track their results and analyze which work better for you. (note: I’m maintaining a list of awesome OSS health/fitness apps, perhaps an existing one could be reused)
  • detailed analysis of (and comparison between) different workout programs, highlighting the different areas being emphasized and the estimated "bang for buck" (expected size and strength gains vs fatigue and time spent)
  • just-in-time workout generation. Imagine an app to which you can say “my legs are sore from yesterday’s hike. In 2 days I will have a soccer match. My availability is 40 min between two meetings and/or 60min this evening at the gym. You know my overall goals, my recent workouts and that I reported elbow tendinitis symptoms in my last strength workout. Now generate me an optimal workout”.

The exercise library

The library should be liberally licensed and not restrict reuse. There is no point trying to “protect” content that has very little copyright protection anyway. I believe that “opening up” to reuse (also commercial) is not only key to a successful project, but also unlocks commercial opportunities for everyone.

The library needs in-depth awareness that goes beyond what apps typically contain, including:

  • exercises alternatives and customization options (and their trade-offs)
  • detailed biomechanical data (such as muscle involvement and loading patterns across the range of muscle lengths and different joint movements)

Furthermore, we also need the usual textual descriptions and exercise demonstration videos. Unlike "traditional" libraries that present a single authoritative view, I find it valuable to clarify what is commonly agreed upon versus where coaches or studies still disagree, and to present an overview of the disagreement with further links to the relevant resources (which could be an Instagram post, a YouTube video or a scientific study). A layman can go with the standard instructions, whereas advanced trainees get an overview of different options and cues, which they can check out in more detail, try, and see what works best for them. No need to scroll social media for random tips; they're all aggregated and curated in one place.

Our library (liberally licensed) includes the text and links to third-party public content. The third-party content itself is typically subject to stronger limitations, but it can always be linked to, and often also embedded under fair use and under the standard YouTube license, at least for free applications. Perhaps one day we'll have our own content library, but for now we can avoid a lot of work and promote existing creators' quality content. Win-win.

Body.build today has a prototype of this. I’m gradually adding more and more exercises. Today it’s part of the codebase, but at some point it’ll make more sense to separate them out, and use one of the Creative Commons licenses.

The algorithms library

Through the course, I learned about various principles (validated by decades of coaching experience and by scientific research) for constructing optimal training based on input factors (e.g. optimal workout volume depends on many factors, including sex, sleep quality, food intake, etc.). Similarly, things like optimal recovery timing or exercise swapping can be calculated using well-understood principles.

Rather than thinking of workout programs as the main product, I think of them as just a derivative, an artifact that can be generated by first determining an individual's personal parameters and then applying these algorithms to them. Better yet, instead of pre-defining a rigid multi-month program (which only works well for people with very consistent schedules), this approach allows guidance to be generated at any point on any day, which works better for people with inconsistent agendas.

Body.build today has a few such codified algorithms; I've only implemented what I've needed so far.

I need help

I may know how to build some software, and I have several ideas for new types of applications that could benefit people. But there is a lot I haven't figured out yet! Maybe you can help?

Particular pain points:

  1. Marketing: developers don't like it, but it's critical. We need to determine who to build for (coaches? developers? end-users? beginners or experts?), what their biggest issues are, and how to reach them. Marketing firms are quite expensive; perhaps AI will make this a lot more accessible. For now, I think it makes the most sense to build apps for technology and open source enthusiasts who geek out about optimizing their weight lifting and bodybuilding. The type of people who would obsessively scroll Instagram or YouTube hoping to find novel workout tips (and who will now hopefully have a better way). I'd like to bring sports & conditioning more to the forefront too, but I can't prioritize that now.
  2. UX and UI design.

Developer help is less critical, but of course welcome too.

If you think you can help, please reach out on the body.build Discord or on X/Twitter, and of course check out body.build! Please forward this to people who are into both fitness and open source technology. Thanks!

August 27, 2025

Have you ever fired up a Vagrant VM, provisioned a project, pulled some Docker images, ran a build… and ran out of disk space halfway through? Welcome to my world. Apparently, the default disk size in Vagrant is tiny—and while you can specify a bigger virtual disk, Ubuntu won’t magically use the extra space. You need to resize the partition, the physical volume, the logical volume, and the filesystem. Every. Single. Time.
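For reference, the manual dance goes roughly like this (assuming Ubuntu's default LVM layout with an ext4 root filesystem):

parted /dev/sda resizepart 3 100%                # grow the partition
pvresize /dev/sda3                               # grow the LVM physical volume
lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv   # grow the logical volume
resize2fs /dev/ubuntu-vg/ubuntu-lv               # grow the ext4 filesystem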

Enough of that nonsense.

🛠 The setup

Here’s the relevant part of my Vagrantfile:

Vagrant.configure(2) do |config|
  config.vm.box = 'boxen/ubuntu-24.04'
  config.vm.disk :disk, size: '20GB', primary: true

  config.vm.provision 'shell', path: 'resize_disk.sh'
end

This makes sure the disk is large enough and automatically resized by the resize_disk.sh script at first boot.

✨ The script

#!/bin/bash
set -euo pipefail
LOGFILE="/var/log/resize_disk.log"
exec > >(tee -a "$LOGFILE") 2>&1
echo "[$(date)] Starting disk resize process..."

REQUIRED_TOOLS=("parted" "pvresize" "lvresize" "lvdisplay" "vgdisplay" "grep" "awk" "bc")
for tool in "${REQUIRED_TOOLS[@]}"; do
  if ! command -v "$tool" &>/dev/null; then
    echo "[$(date)] ERROR: Required tool '$tool' is missing. Exiting."
    exit 1
  fi
done

# Read current and total partition size (in sectors)
parted_output=$(parted --script /dev/sda unit s print || true)
read -r PARTITION_SIZE TOTAL_SIZE < <(echo "$parted_output" | awk '
  / 3 / {part = $4}
  /^Disk \/dev\/sda:/ {total = $3}
  END {print part, total}
')

# Trim 's' suffix
PARTITION_SIZE_NUM="${PARTITION_SIZE%s}"
TOTAL_SIZE_NUM="${TOTAL_SIZE%s}"

if [[ "$PARTITION_SIZE_NUM" -lt "$TOTAL_SIZE_NUM" ]]; then
  echo "[$(date)] Resizing partition /dev/sda3..."
  parted --fix --script /dev/sda resizepart 3 100%
else
  echo "[$(date)] Partition /dev/sda3 is already at full size. Skipping."
fi

if [[ "$(pvresize --test /dev/sda3 2>&1)" != *"successfully resized"* ]]; then
  echo "[$(date)] Resizing physical volume..."
  pvresize /dev/sda3
else
  echo "[$(date)] Physical volume is already resized. Skipping."
fi

LV_SIZE=$(lvdisplay --units M /dev/ubuntu-vg/ubuntu-lv | grep "LV Size" | awk '{print $3}' | tr -d 'MiB')
PE_SIZE=$(vgdisplay --units M /dev/ubuntu-vg | grep "PE Size" | awk '{print $3}' | tr -d 'MiB')
CURRENT_LE=$(lvdisplay /dev/ubuntu-vg/ubuntu-lv | grep "Current LE" | awk '{print $3}')

USED_SPACE=$(echo "$CURRENT_LE * $PE_SIZE" | bc)
FREE_SPACE=$(echo "$LV_SIZE - $USED_SPACE" | bc)

if (($(echo "$FREE_SPACE > 0" | bc -l))); then
  echo "[$(date)] Resizing logical volume..."
  lvresize -rl +100%FREE /dev/ubuntu-vg/ubuntu-lv
else
  echo "[$(date)] Logical volume is already fully extended. Skipping."
fi

💡 Highlights

  • ✅ Uses parted with --script to avoid prompts.
  • ✅ Automatically fixes GPT mismatch warnings with --fix.
  • ✅ Calculates exact available space using lvdisplay and vgdisplay, with bc for floating point math.
  • ✅ Extends the partition, PV, and LV only when needed.
  • ✅ Logs everything to /var/log/resize_disk.log.

🚨 Gotchas

  • Your disk must already use LVM. This script assumes you’re resizing /dev/ubuntu-vg/ubuntu-lv, the default for Ubuntu server installs.
  • You must use a Vagrant box that supports VirtualBox’s disk resizing—thankfully, boxen/ubuntu-24.04 does.
  • If your LVM setup is different, you’ll need to adapt device paths.

🔁 Automation FTW

Calling this script as a provisioner means I never have to think about disk space again during development. One less yak to shave.

Feel free to steal this setup, adapt it to your team, or improve it and send me a patch. Or better yet—don’t wait until your filesystem runs out of space at 3 AM.

August 26, 2025

No medals for the resistance

If you want to change the world, you have to join the resistance. You have to accept acting and keeping quiet about it. You have to accept losing comfort, opportunities, relationships. And you must expect no reward, no recognition.

Surveillance worse than anything you imagine

Take our dependence on a few technology monopolies. I do not think we realise the permanent surveillance that smartphones impose on us. Nor that this data is not simply stored "at Google's".

Tim Sh decided to investigate. He added a single free game to a pristine iPhone with all location services disabled. Sounds reasonable, doesn't it?

By analysing the packets, he discovered the incredible amount of information being sent by the Unity game engine. This means that the game's developer probably does not even know that their own game is spying on you.

But Tim Sh went further: he tracked that data and discovered where it was being resold. Perfectly respectable companies resell user data in real time: location, both current and historical, battery level and screen brightness, internet connection in use, mobile carrier, free space available on the phone.

All of it is accessible in real time, for millions of users. Including users convinced they are protecting their privacy by disabling location permissions, by being careful, or even by using GrapheneOS containers: there is indeed no malice, no hacking, nothing illegal. If the application works, it is sending its data, full stop.

Also worth noting: data about Europeans is more expensive. The GDPR makes it harder to obtain. Which is proof that political regulation works. The GDPR is very far from sufficient. Its only real usefulness is to demonstrate that political power can act.

We tend to mock Europe's smallness because we measure it with American metrics. Our politicians dream of "unicorns" and European monopolies. That is a mistake; Europe's strength is the opposite of those values.

As Marcel Sel points out: for all its flaws, Europe is very imperfect but perhaps the most progressive structure in the world, and the one that best protects its citizens.

Outrage saturation

Faced with this situation, we observe two reactions: not giving a damn, and violent outrage. But contrary to what you might think, the second has no more impact than the first.

Olivier Ertzscheid talks about outrage saturation: a permanent state of indignation that makes us lose all capacity to act.

I see a lot of outrage about the genocide Israel is committing in Gaza. But precious little action. Yet a simple action is to delete your WhatsApp accounts. It is almost certain that WhatsApp data is used to target strikes. Deleting your account is therefore a real action. The fewer WhatsApp accounts there are, the less indispensable Gazans will find the app, and the less data there will be for Israel.

Instead of being outraged, join the active resistance. Cut as many cords as you can. You will lose opportunities? Contacts? You will miss out on information?

That is the point! That is the goal! That is resistance, the new maquis. Yes, but so-and-so is on Facebook. When you join the resistance, you do not show up in slippers with the whole family in tow. That is the very principle of resistance: taking risks, carrying out actions that not everyone understands or approves of, in the hope of changing things for good.

It is hard, and no one will give you a medal for it. If you are looking for ease, comfort, recognition or official congratulations, the resistance is not where you should be heading.

Stopping to think

Yes, companies are headless chickens running in every direction. My years in the IT industry let me observe that the vast majority of employees do strictly nothing useful. All we do is pretend. When there is an impact, which is extremely rare, it is to help a client pretend a little better.

I have stopped shouting it from the rooftops, because it is impossible to get someone to understand something when their salary depends on not understanding it. But it is hard not to notice that everyone who stops to think reaches the same conclusion.

The enshittification of companies can hit you in the most unexpected way, on a product you are particularly fond of. That is the case for me with Komoot, a tool I use constantly to plan my long bike trips and sometimes use "on the road", when I am a bit lost and want a safe but fast route to reach my destination quickly.

For those who do not see the point of a GPS on a bike, Thierry Crouzet has just written a post detailing how this accessory changes the practice of cycling.

But there it is: Komoot, a German startup that presented itself as a champion of bicycle travel, with founders who promised never to sell their baby, has been sold to an investment fund notorious for enshittifying everything it buys.

I do not hold it against the founders. I know all too well that beyond a certain sum, we all reconsider our promises. The founders of WhatsApp originally intended to strongly protect their users' privacy. They nevertheless sold their application to Facebook because, by their own admission, you accept certain compromises beyond a certain amount of money.

Fortunately, free solutions are emerging, such as the excellent Cartes.app, which has tackled the problem head-on.

It still lacks an easy way to send a route to my bike GPS before it is usable day to day, but the symbol is clear: dependence on enshittified products is not inevitable!

On the necessity of free software

As Gee demonstrates, adding non-essential features is not neutral. It considerably increases the risk of breakage and problems.

This simplification can, by its very nature, only come through free software, which forces modularity. Liorel gives a very telling example: because of its complexity, Microsoft Excel will forever use the Julian calendar, unlike LibreOffice, which uses the current Gregorian calendar.

Simplification, freedom, slowing down, shrinking our consumption: these are just facets of one and the same form of resistance, of one and the same awareness of life as a whole.

Slowing down and stepping back. That is, in fact, what Chris Brannons brutally offered me with the last post on his Gemini capsule. And when I say the last…

Barring unforeseen circumstances or unexpected changes, my last day on earth will be June 13th, 2025.

Chris was 46, and he took the time to write about the how and the why of his euthanasia procedure. After that post, he took the time to answer my emails, even as I encouraged him not to do it.

The symbol of the bicycle

We cannot just not care. We cannot settle for outrage. So, with the few million seconds we have left to live, we must act. Act by doing what we believe is best for ourselves, best for our children, best for humanity.

Like riding a bike!

And too bad if it changes nothing. Too bad if it makes us look strange in some people's eyes. Too bad if it has its downsides. Riding a bike is joining the resistance!

A symbol of freedom, simplification and independence, and yet extremely technological, the bicycle has never been so political. As Klaus-Gerd Giesen points out, Bikepunk is philosophical and political!

It amuses me greatly, by the way, when the world of Bikepunk is presented as a world from which technology has disappeared. Because a bicycle is not technology, perhaps?

Besides, if you do not have the book yet, all that is left to do is run and say hello to your favourite bookseller and join the resistance!

The illustration photo was sent to me by Julien Ursini and is under CC-BY. Deep into reading Bikepunk, he was struck to discover this rusty bicycle frame standing in the bed of the Bléone river, like an act of symbolic resistance. I could not have dreamed of a better illustration for this post.

I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

August 20, 2025

Or: Why you should stop worrying and love the LTS releases.

TL;DR: Stick to MediaWiki 1.43 LTS, avoid MediaWiki 1.44.

There are two major MediaWiki releases every year, and every fourth such release gets Long Term Support (LTS). Two consistent approaches to upgrading MediaWiki are to upgrade every major release or to upgrade every LTS version. Let’s compare the pros and cons.

Which Upgrade Strategy Is Best

I used to upgrade my wikis for every MediaWiki release, or even run the master (development) branch. Having become more serious about MediaWiki operations by hosting wikis for many customers at Professional Wiki, I now believe sticking to LTS versions is the better trade-off for most people.

Benefits and drawbacks of upgrading every major MediaWiki version (compared to upgrading every LTS version):

  • Pro: You get access to all the latest features
  • Pro: You might be able to run more modern PHP or operating system versions
  • Con: You have to spend effort on upgrades four times as often (twice a year instead of once every two years)
  • Con: You have to deal with breaking changes four times as often
  • Con: You have to deal with extension compatibility issues four times as often
  • Con: You run versions with shorter support windows. Regular major releases are supported for 1 year, while LTS releases receive support for 3 years

What about the latest features? MediaWiki is mature software. Its features evolve slowly, and most innovation happens in the extension ecosystem. Most releases only contain a handful of notable changes, and there is a good chance none of them matter for your use cases. If there is something you would benefit from in a more recent non-LTS major release, then that’s an argument for not sticking to that LTS version, and it’s up to you to determine if that benefit outweighs all the cons. I think it rarely does, with the comparison not even being close.

The Case Of MediaWiki 1.44

MediaWiki 1.44 is the first major MediaWiki release after MediaWiki 1.43 LTS, and at the time of writing this post, it is also the most recent major release.

As with many releases, MediaWiki 1.44 brings several breaking changes to its internal APIs. This means that MediaWiki extensions that work with the previous versions might no longer work with MediaWiki 1.44. This version brings a high number of these breaking changes, including some particularly nasty ones that prevent extensions from easily supporting both MediaWiki 1.43 LTS and MediaWiki 1.44. That means if you upgrade now, you will run into various compatibility problems with extensions.

Examples of the type of errors you will encounter:

PHP Fatal error: Uncaught Error: Class “Html” not found

PHP Fatal error: Uncaught Error: Class “WikiMap” not found

PHP Fatal error: Uncaught Error: Class “Title” not found

Given that most wikis use dozens of MediaWiki extensions, this makes the “You have to deal with extension compatibility issues” con particularly noteworthy for MediaWiki 1.44.

Unless you have specific reasons to upgrade to MediaWiki 1.44, just stick to MediaWiki 1.43 LTS and wait for MediaWiki 1.47 LTS, which will be released around December 2026.

See also: When To Upgrade MediaWiki (And Understanding MediaWiki versions)

 

The post Why You Should Skip MediaWiki 1.44 appeared first on Entropy Wins.

After nearly two decades and over 1,600 blog posts written in raw HTML, I've made a change that feels long overdue: I've switched to Markdown.

Don't worry, I'm not moving away from Drupal. I'm just moving from a "HTML text format" to a "Markdown format". My last five posts have all been written in Markdown.

I've actually written in Markdown for years. I started with Bear for note-taking, and for the past four years Obsidian has been my go-to tool. Until recently, though, I've always published my blog posts in HTML.

For almost 20 years, I wrote every blog post in raw HTML, typing out every tag by hand. For longer posts, it could take me 45 minutes to wrap everything in <p> tags, add links, and close HTML tags like it was still 2001. It was tedious, but also a little meditative. I stuck with it, partly out of pride and partly out of habit.

Getting Markdown working in Drupal

So when I decided to make the switch, I had to figure out how to get Markdown working in Drupal. Drupal has multiple great Markdown modules to choose from, but I picked Markdown Easy because it's lightweight, fully tested, and built on the popular CommonMark library.

I documented my installation and upgrade steps in a public note titled Installing and configuring Markdown Easy for Drupal.
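For reference, getting a contrib module like this into a site typically boils down to two commands (assuming the module's machine name is markdown_easy; see the note above for my exact steps):

composer require drupal/markdown_easy   # fetches the module and its CommonMark dependency
drush en markdown_easy                  # enables it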

I ran into one problem: the module's security-first approach stripped all HTML tags from my posts. This was an issue because I mostly write in Markdown but occasionally mix in HTML for things Markdown doesn't support, like custom styling. One example is creating pull quotes with a custom CSS class:

After 20 years of writing in HTML, I switched to *Markdown*.

<p class="pullquote">HTML for 20 years. Markdown from now on.</p>

Now I can publish faster while still using [Drupal](https://drupal.org).

HTML in Markdown by design

Markdown was always meant to work hand in hand with HTML, and Markdown parsers are supposed to leave HTML tags untouched. John Gruber, the creator of Markdown, makes this clear in the original Markdown specification:

HTML is a publishing format; Markdown is a writing format. Thus, Markdown's formatting syntax only addresses issues that can be conveyed in plain text. [...] For any markup that is not covered by Markdown's syntax, you simply use HTML itself. There is no need to preface it or delimit it to indicate that you're switching from Markdown to HTML; you just use the tags.

In Markdown Easy 1.x, allowing HTML tags required writing a custom Drupal module with a specific "hook" implementation. This felt like too much work for something that should be a simple configuration option. I've never enjoyed writing and maintaining custom Drupal modules for cases like this.

I reached out to Mike Anello, the maintainer of Markdown Easy, to discuss a simpler way to mix HTML and Markdown.

I suggested making it a configuration option and helped test and review the necessary changes. I was happy when that became part of the built-in settings in version 2.0. A few weeks later, Markdown Easy 2.0 was released, and this capability is now available out of the box.

Now that everything is working, I am considering converting my 1,600+ existing posts from HTML to Markdown. Part of me wants everything to be consistent, but another part hesitates to overwrite hundreds of hours of carefully crafted HTML. The obsessive in me debates the archivist. We'll see who wins.

The migration itself would be a fun technical challenge. Plenty of tools exist to convert HTML to Markdown, so there's no need to reinvent the wheel. Maybe I'll test a few converters on some posts to see which handles my particular setup best.

Extending Markdown with tokens

Like Deane Barker, I often mix HTML and Markdown with custom "tokens". In my case, they aren't official web components, but they serve a similar purpose.

For example, here is a snippet that combines standard Markdown with a token that embeds an image:

Nothing beats starting the day with [coffee](https://dri.es/tag/coffee) and this view:

[​image beach-sunrise.jpg lazy=true schema=true caption=false]

These tokens get processed by my custom Drupal module and transformed into full HTML. That basic image token? It becomes a responsive picture element complete with lazy loading, alt-text from my database, Schema.org support, and optional caption. I use similar tokens for videos and other dynamic content.
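
To give a rough idea of how such a token pipeline can work, here is a hypothetical, simplified sketch. It is not the actual module, which does far more, such as alt-text lookups, Schema.org markup, and responsive picture elements:

<?php

// Hypothetical, simplified sketch of a token filter: rewrite
// "[image beach-sunrise.jpg lazy=true]" into an <img> tag.
function process_image_tokens(string $text): string {
  return preg_replace_callback(
    '/\[image\s+(\S+)((?:\s+\w+=\w+)*)\]/',
    function (array $m): string {
      // Parse key=value options such as lazy=true or schema=true.
      preg_match_all('/(\w+)=(\w+)/', $m[2], $pairs, PREG_SET_ORDER);
      $options = array_column($pairs, 2, 1);
      $loading = ($options['lazy'] ?? 'false') === 'true' ? ' loading="lazy"' : '';
      return '<img src="/files/' . htmlspecialchars($m[1]) . '"' . $loading . ' alt="">';
    },
    $text
  );
}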

The real power of tokens is future proofing. When responsive images became a web standard, I could update my image token processor once and instantly upgrade all my blog posts. No need to edit old content. Same when lazy loading became standard, or when new image formats arrive. One code change updates all 10,000 images or so that I've ever posted.

My tokens have evolved over 15 years and deserve their own blog post. Down the road, I might turn some of them into web components like Deane describes.

Closing thoughts

In the end, this was not a syntax decision: it was a workflow decision. I want less friction between an idea and publishing it. Five Markdown posts in, publishing is faster, cleaner, and more enjoyable, while still giving me the flexibility I need.

Those 45 minutes I used to spend on HTML tags? I now spend on things that matter more, or on writing another blog post.

Let’s talk about environment variables in GitHub Actions — those little gremlins that either make your CI/CD run silky smooth or throw a wrench in your perfectly crafted YAML.

If you’ve ever squinted at your pipeline and wondered, “Where the heck should I declare this ANSIBLE_CONFIG thing so it doesn’t vanish into the void between steps?”, you’re not alone. I’ve been there. I’ve screamed at $GITHUB_ENV. I’ve misused export. I’ve over-engineered echo. But fear not, dear reader — I’ve distilled it down so you don’t have to.

In this post, we’ll look at the right ways (and a few less right ways) to set environment variables — and more importantly, when to use static vs dynamic approaches.


🧊 Static Variables: Set It and Forget It

Got a variable like ANSIBLE_STDOUT_CALLBACK=yaml that’s the same every time? Congratulations, you’ve got yourself a static variable! These are the boring, predictable, low-maintenance types that make your CI life a dream.

✅ Best Practice: Job-Level env

If your variable is static and used across multiple steps, this is the cleanest, classiest, and least shouty way to do it:

jobs:
  my-job:
    runs-on: ubuntu-latest
    env:
      ANSIBLE_CONFIG: ansible.cfg
      ANSIBLE_STDOUT_CALLBACK: yaml
    steps:
      - name: Use env vars
        run: echo "ANSIBLE_CONFIG is $ANSIBLE_CONFIG"

Why it rocks:

  • 👀 Super readable
  • 📦 Available in every step of the job
  • 🧼 Keeps your YAML clean — no extra echo commands, no nonsense

Unless you have a very specific reason not to, this should be your default.


🎩 Dynamic Variables: Born to Be Wild

Now what if your variables aren’t so chill? Maybe you calculate something in one step and need to pass it to another — a file path, a version number, an API token from a secret backend ritual…

That’s when you reach for the slightly more… creative option:

🔧 $GITHUB_ENV to the rescue

- name: Set dynamic environment vars
  run: |
    echo "BUILD_DATE=$(date +%F)" >> $GITHUB_ENV
    echo "RELEASE_TAG=v1.$(date +%s)" >> $GITHUB_ENV

- name: Use them later
  run: echo "Tag: $RELEASE_TAG built on $BUILD_DATE"

What it does:

  • Persists the variables across steps
  • Works well when values are calculated during the run
  • Makes you feel powerful

🪄 Fancy Bonus: Heredoc Style

If you like your YAML with a side of Bash wizardry:

- name: Set vars with heredoc
  run: |
    cat <<EOF >> $GITHUB_ENV
    FOO=bar
    BAZ=qux
    EOF

Because sometimes, you just want to feel fancy.


😵‍💫 What Not to Do (Unless You Really Mean It)

- name: Set env with export
  run: |
    export FOO=bar
    echo "FOO is $FOO"

This only works within that step. The minute your pipeline moves on, FOO is gone. Poof. Into the void. If that’s what you want, fine. If not, don’t say I didn’t warn you.


🧠 TL;DR – The Cheat Sheet

Scenario → Best Method:

  • Static variable used in all steps → env at the job level ✅
  • Static variable used in one step → env at the step level (see the sketch below)
  • Dynamic value needed across steps → $GITHUB_ENV ✅
  • Dynamic value only needed in one step → export (but don’t overdo it)
  • Need to show off with Bash skills → cat <<EOF >> $GITHUB_ENV 😎
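
The one variant not shown yet is step-level env. A minimal sketch (the variable value and playbook name are just examples):

- name: Run playbook with step-scoped env
  env:
    ANSIBLE_FORCE_COLOR: "true"   # only visible inside this step
  run: ansible-playbook site.yml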

🧪 My Use Case: Ansible FTW

In my setup, I wanted to use:

ANSIBLE_CONFIG=ansible.cfg
ANSIBLE_STDOUT_CALLBACK=yaml

These are rock-solid, boringly consistent values. So instead of writing this in every step:

- name: Set env
  run: |
    echo "ANSIBLE_CONFIG=ansible.cfg" >> $GITHUB_ENV

I now do this:

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      ANSIBLE_CONFIG: ansible.cfg
      ANSIBLE_STDOUT_CALLBACK: yaml
    steps:
      ...

Cleaner. Simpler. One less thing to trip over when I’m debugging at 2am.


💬 Final Thoughts

Environment variables in GitHub Actions aren’t hard — once you know the rules of the game. Use env for the boring stuff. Use $GITHUB_ENV when you need a little dynamism. And remember: if you’re writing export in step after step, something probably smells.

Got questions? Did I miss a clever trick? Want to tell me my heredoc formatting is ugly? Hit me up in the comments or toot at me on Mastodon.


✍ Posted by Amedee, who loves YAML almost as much as dancing polskas.
💥 Because good CI is like a good dance: smooth, elegant, and nobody falls flat on their face.
🎻 Scheduled to go live on 20 August — just as Boombalfestival kicks off. Because why not celebrate great workflows and great dances at the same time?

I recently installed Markdown Easy for Drupal and then upgraded from version 1.0 to 2.0.

I decided to document my steps in a public note in case they help others.

On my local machine, I run Drupal with DDEV. It sets up pre-configured Docker containers for the web server, database, and other required Drupal services. DDEV also installs Composer and Drush, which we will use in the steps below.

First, I installed version 2.0 of Markdown Easy using Composer:

ddev composer require drupal/markdown_easy

If you are upgrading from version 1.0, you will need to run the database updates so Drupal can apply any changes required by the new version. You can do this using Drush:

ddev drush updatedb

As explained in Switching to Markdown after 20 years of HTML, I want to use HTML and Markdown interchangeably. By default, Markdown Easy strips all HTML. This default approach is the safest option for most sites, but it also means you can't freely mix HTML tags and Markdown.

To change that behavior, I needed to adjust two configuration settings. These settings are not exposed anywhere in Drupal's admin interface, which is intentional. Markdown Easy keeps its configuration surface small to stay true to its "easy" name, and it leads with a secure-by-default philosophy. If you choose to relax those defaults, you can do so using Drush.

ddev drush config:set markdown_easy.settings skip_html_input_stripping 1

ddev drush config:set markdown_easy.settings skip_filter_enforcement 1

The skip_html_input_stripping setting turns off input stripping in the CommonMark Markdown parser, which means your HTML tags remain untouched while Markdown is processed.

The skip_filter_enforcement setting lets you turn off input stripping in Drupal itself. It allows you to disable the "Limit allowed HTML tags" filter without warnings from Markdown Easy.

You can enable just the first setting if you want Markdown to allow HTML but still let Drupal filter certain tags using the "Limit allowed HTML tags" filter. Or you can enable both if you want full control over your HTML with no stripping at either stage.

Just know that disabling HTML input stripping and disabling HTML filter enforcement can have security implications. Only disable these features if you trust your content creators and understand the risks.

Next, I verified my settings:

ddev drush config:get markdown_easy.settings

You should see:

skip_html_input_stripping: true
skip_filter_enforcement: true

Finally, clear the cache:

ddev drush cache-rebuild

Next, I updated my existing Markdown text format. I went to /admin/config/content/formats/ and made the following changes:

  • Set the Markdown flavor to Smorgasbord.
  • Disabled the "Limit allowed HTML tags and correct faulty HTML" filter.
  • Disabled the "Convert line breaks into HTML" filter.

That's it!

August 19, 2025

Today we are announcing that Chris Tranquill has been appointed CEO of Acquia, succeeding Steve Reny. Steve will be stepping down after nearly seven years with Acquia, including almost three as CEO.

I feel a mix of emotions. Sadness, because Steve has been a valued leader and colleague. Gratitude, for everything he has done for Acquia and our community. Optimism, because he is leaving Acquia in a strong position. And happiness, knowing he has been looking forward to spending more time with his family and dedicating himself to giving back after an incredible 40-year career.

Steve worked incredibly hard, cared deeply about customers, and built genuine relationships with so many people across the company. Under his leadership, Acquia became more efficient, launched new products, strengthened its Open Source commitment, and grew with financial discipline. He created a strong foundation for our future, and I want to sincerely thank him for his leadership.

Steve Reny congratulating a partner on stage at Acquia Engage London.

At the same time, I am very excited to welcome Chris Tranquill as our new CEO. Chris brings more than 25 years of experience in enterprise software and customer experience. Most recently he was CEO of Khoros, and before that he co-founded Topbox, which was later acquired by Khoros.

In our conversations, I was impressed by how quickly Chris understood our business, the thoughtful questions he asked, and his enthusiasm for our innovation roadmap. I also appreciate that he has been a founder and brings that perspective. Most importantly, he recognized right away how essential Drupal is to Acquia's success.

Some in the Drupal community may wonder what this leadership change means for Acquia's commitment to Drupal and Open Source. The answer is simple: our commitment remains unchanged. One of the reasons we chose Chris is because he understands that Drupal's success is essential to Acquia's success.

Today, Acquia is investing more in Drupal than at any time in our 18-year history, with record numbers of contributors and active involvement in many of Drupal's most important initiatives. We are proud to contribute alongside the community. Supporting Drupal is core to who we are, and that will not change.

I am grateful for Steve's leadership, and I look forward to working with Chris!

August 18, 2025

On July 22nd, 2025, we released MySQL 9.4, the latest Innovation Release. As usual, we released bug fixes for 8.0 and 8.4 LTS, but this post focuses on the newest release. In this release, we can notice several contributions related to NDB and the Connectors. Connectors MySQL Server – Replication InnoDB Optimizer C API (client […]

August 13, 2025

When using Ansible to automate tasks, the command module is your bread and butter for executing system commands. But did you know that there’s a safer, cleaner, and more predictable way to pass arguments? Meet argv—an alternative to writing commands as strings.

In this post, I’ll explore the pros and cons of using argv, and I’ll walk through several real-world examples tailored to web servers and mail servers.


Why Use argv Instead of a Command String?

✅ Pros

  • Avoids Shell Parsing Issues: Each argument is passed exactly as intended, with no surprises from quoting or spaces.
  • More Secure: No shell = no risk of shell injection.
  • Clearer Syntax: Every argument is explicitly defined, improving readability.
  • Predictable: Behavior is consistent across different platforms and setups.

❌ Cons

  • No Shell Features: You can’t use pipes (|), redirection (>), or environment variables like $HOME.
  • More Verbose: Every argument must be a separate list item. It’s explicit, but more to type.
  • Not for Shell Built-ins: Commands like cd, export, or echo with redirection won’t work.

Real-World Examples

Let’s apply this to actual use cases.

🔧 Restarting Nginx with argv

- name: Restart Nginx using argv
  hosts: amedee.be
  become: yes
  tasks:
    - name: Restart Nginx
      ansible.builtin.command:
        argv:
          - systemctl
          - restart
          - nginx

📬 Check Mail Queue on a Mail-in-a-Box Server

- name: Check Postfix mail queue using argv
  hosts: box.vangasse.eu
  become: yes
  tasks:
    - name: Get mail queue status
      ansible.builtin.command:
        argv:
          - mailq
      register: mail_queue

    - name: Show queue
      ansible.builtin.debug:
        msg: "{{ mail_queue.stdout_lines }}"

🗃️ Back Up WordPress Database

- name: Backup WordPress database using argv
  hosts: amedee.be
  become: yes
  vars:
    db_user: wordpress_user
    db_password: wordpress_password
    db_name: wordpress_db
  tasks:
    - name: Dump database
      ansible.builtin.command:
        argv:
          - mysqldump
          - -u
          - "{{ db_user }}"
          - -p{{ db_password }}
          - "{{ db_name }}"
          - --result-file=/root/wordpress_backup.sql

⚠️ Avoid exposing credentials directly—use Ansible Vault instead.


Using argv with Interpolation

Ansible lets you use Jinja2-style variables ({{ }}) inside argv items.

🔄 Restart a Dynamic Service

- name: Restart a service using argv and variable
  hosts: localhost
  become: yes
  vars:
    service_name: nginx
  tasks:
    - name: Restart
      ansible.builtin.command:
        argv:
          - systemctl
          - restart
          - "{{ service_name }}"

🕒 Timestamped Backups

- name: Timestamped DB backup
  hosts: localhost
  become: yes
  vars:
    db_user: wordpress_user
    db_password: wordpress_password
    db_name: wordpress_db
  tasks:
    - name: Dump with timestamp
      ansible.builtin.command:
        argv:
          - mysqldump
          - -u
          - "{{ db_user }}"
          - -p{{ db_password }}
          - "{{ db_name }}"
          - --result-file=/root/wordpress_backup_{{ ansible_date_time.iso8601 }}.sql

🧩 Dynamic Argument Lists

Avoid join(' '), which collapses the list into a single string.

❌ Wrong:

argv:
  - ls
  - "{{ args_list | join(' ') }}"  # BAD: becomes one long string

✅ Correct:

argv: ["ls"] + args_list

Or if the length is known:

argv:
  - ls
  - "{{ args_list[0] }}"
  - "{{ args_list[1] }}"

📣 Interpolation Inside Strings

- name: Greet with hostname
  hosts: localhost
  tasks:
    - name: Print message
      ansible.builtin.command:
        argv:
          - echo
          - "Hello, {{ ansible_facts['hostname'] }}!"


When to Use argv

✅ Commands with complex quoting or multiple arguments
✅ Tasks requiring safety and predictability
✅ Scripts or binaries that take arguments, but not full shell expressions

When to Avoid argv

❌ When you need pipes, redirection, or shell expansion
❌ When you’re calling shell built-ins


Final Thoughts

Using argv in Ansible may feel a bit verbose, but it offers precision and security that traditional string commands lack. When you need reliable, cross-platform automation that avoids the quirks of shell parsing, argv is the better choice.

Prefer safety? Choose argv.
Need shell magic? Use the shell module.

Have a favorite argv trick or horror story? Drop it in the comments below.

An illustration of a small wedge propping up a massive block, symbolizing how a small group of contributors supports critical infrastructure.

Fifteen years ago, I laid out a theory about the future of Open Source. In The Commercialization of a Volunteer-Driven Open Source Project, I argued that if Open Source was going to thrive, people had to get paid to work on it. At the time, the idea was controversial. Many feared money would corrupt the spirit of volunteerism and change the nature of Open Source contribution.

In that same post, I actually went beyond discussing the case for commercial sponsorship and outlined a broader pattern I believed Open Source would follow. I suggested it would develop in three stages: (1) starting with volunteers, then (2) expanding to include commercial involvement and sponsorship, and finally (3) gaining government support.

I based this on how other public goods and public infrastructure have evolved. Trade routes, for example, began as volunteer-built paths, were improved for commerce by private companies, and later became government-run. The same pattern shaped schools, national defense, and many other public services. What begins as a volunteer effort often ends up being maintained by governments for the benefit of society. I suggested that Open Source would and should follow the same three-phase path.

Over the past fifteen years, paying people to maintain Open Source has shifted from controversial to widely accepted. Platforms like Open Collective, an organization I invested in as an angel investor in 2015, have helped make this possible by giving Open Source communities an easy way to receive and manage funding transparently.

Today, Open Source runs much of the world's critical infrastructure. It powers government services, supports national security, and enables everything from public health systems to elections. This reliance means the third and final step in its evolution is here: governments must help fund Open Source.

Public funding would complement the role of volunteers and commercial sponsors, not replace them. This is not charity or a waste of tax money. It is an investment in the software that runs our essential services. Without it, we leave critical infrastructure fragile at the moment the world needs it most.

The $8.8 trillion dependency

A 2024 Harvard Business School study, The Value of Open Source Software, estimates that replacing the most widely used Open Source software would cost the world $8.8 trillion. If Open Source suddenly disappeared, organizations would have to spend 3.5 times more on software than they do today. Even more striking: 96% of that $8.8 trillion depends on just 5% of contributors.

This concentration creates fragility. Most of our digital infrastructure depends on a small group of maintainers who often lack stable funding or long-term support. When they burn out or step away, critical systems can be at risk.

Maintaining Open Source is not free. It takes developers to fix bugs, maintainers to coordinate releases, security teams to patch vulnerabilities, and usability experts to keep the software accessible. Without reliable funding, these essential tasks are difficult to sustain, leaving the foundations of our digital society exposed to risk.

Addressing this risk means rethinking not just funding, but also governance, succession planning, and how we support the people and projects that keep our society running.

When digital sovereignty becomes survival

Recent geopolitical tensions and policy unpredictability have made governments more aware of the risks of relying on foreign-controlled, proprietary software. Around the world, there is growing recognition that they cannot afford to lose control over their digital infrastructure.

Denmark recently announced a national plan to reduce their dependency on proprietary software by adopting Open Source tools across its public sector.

This reflects a simple reality: when critical public services depend on foreign-controlled software, governments lose the ability to guarantee continuity and security to their citizens. They become vulnerable to policy changes and geopolitical pressures beyond their control.

As Denmark's Ministry for Digitalisation explained, this shift is about control, accountability, and resilience, not just cost savings. Other European cities and countries are developing similar strategies. This is no longer just an IT decision, but a strategic necessity for protecting national security and guaranteeing the continuity of essential public services.

From Open Source consumption to contribution

Most government institutions rely heavily on Open Source but contribute little in return. Sponsorship usually flows through vendor contracts, and while some vendors contribute upstream, the overall level of support is small compared to how much these institutions depend on those projects.

Procurement practices often make the problem worse. Contracts are typically awarded to the lowest bidder or to large, well-known IT vendors rather than those with deep Open Source expertise and a track record of contributing back. Companies that help maintain Open Source projects are often undercut by firms that give nothing in return. This creates a race to the bottom that ultimately weakens the Open Source projects governments rely on.

As I discussed in Balancing makers and takers to scale and sustain Open Source, sustainable Open Source requires addressing the fundamental mismatch between use and contribution.

Governments need to shift from Open Source consumption to Open Source contribution. The digital infrastructure that powers government services demands the same investment commitment as the roads and bridges that connect our communities.

Drupal tells the story

I have helped lead Drupal for almost 25 years, and in that time I have seen how deeply governments depend on Open Source.

The European Commission runs more than a hundred Drupal sites, France operates over a thousand Drupal sites, and Australia's government has standardized on Drupal as its national digital platform. Yet despite this widespread use, most of these institutions contribute little back to Drupal's development or maintenance.

This is not just a Drupal problem, and it is entirely within the rights of Open Source users. There is no requirement to contribute. But in many projects, a small group of maintainers and a few companies carry the burden for infrastructure that millions rely on. Without broader support, this imbalance risks the stability of the very systems governments depend on.

Many public institutions use Open Source without contributing to its upkeep. While this is legal, it shifts all maintenance costs onto a small group of contributors. Over time, that risks the services those institutions depend on. Better procurement and policy choices could help turn more public institutions into active contributors.

The rise of government stewardship

I am certainly not the only one calling for government involvement in Open Source infrastructure. In recent years, national governments and intergovernmental bodies, including the United Nations, have begun increasing investment in Open Source.

In 2020, the UN Secretary General's Roadmap for Digital Cooperation called for global investment in "digital public goods" such as Open Source software to help achieve the Sustainable Development Goals. Five years later, the UN introduced the UN Open Source Principles, encouraging practices like "open by default" and "contributing back".

At the European level, the EU's Cyber Resilience Act recognizes Open Source software stewards as "economic actors", acknowledging their role in keeping infrastructure secure and reliable. In Germany, the Sovereign Tech Agency has invested €26 million in more than 60 Open Source projects that support critical digital infrastructure.

Governments and public institutions are also creating Open Source Program Offices (OSPOs) to coordinate policy, encourage contributions, and ensure long-term sustainability. In Europe, the European Commission's EC OSPO operates the code.europa.eu platform for cross-border collaboration. In the United States, agencies such as the Centers for Medicare & Medicaid Services, the United States Digital Service, the Cybersecurity and Infrastructure Security Agency, and the U.S. Digital Corps play similar roles. In Latin America, Brazil's Free Software Portal supports collaboration across governments.

These efforts signal a shift from simply using Open Source to actively stewarding and investing in it at the institutional level.

The math borders on absurd

If the top 100 countries each contributed $200,000 a year to an Open Source project, the project would have a twenty million dollar annual budget. That is about what it costs to maintain less than ten miles of highway.

In my home country, Belgium, which has just over ten million people, more than one billion euros is spent each year maintaining roads. A small fraction of that could help secure the future of Open Source software like Drupal, which supports public services for millions of Belgians.

For the cost of maintaining 10 miles of highway, we could secure the future of several critical Open Source projects that power essential public services. The math borders on absurd.

How governments can help

Just as governments maintain roads, bridges and utilities that society depends on, they should also help sustain the Open Source projects that power essential services, digitally and otherwise. The scale of investment needed is modest compared to other public infrastructure.

Governments could implement this through several approaches:

  • Track the health of critical Open Source projects. Just like we have safety ratings for bridges, governments should regularly check the health of the Open Source projects they rely on. This means setting clear targets, such as addressing security issues within x days, having y active maintainers, keeping all third-party software components up to date, and more. When a project falls behind, governments should step in and help with targeted support. This could include direct funding, employing contributors, or working with partners to stabilize the project.

  • Commit to long-term funding with stable timelines. Just as governments plan highway maintenance years in advance, we'd benefit from multi-year funding commitments and planning for critical digital infrastructure. Long-term funding allows projects to address technical debt, plan major updates, and recruit talent without the constant uncertainty of short-term fundraising.

  • Encourage contribution in government contracts. Governments can use procurement to strengthen the Open Source projects they depend on. Vendor contribution should be a key factor in awarding contracts, alongside price, quality, and other criteria. Agencies or vendors can be required or encouraged to give back through coding, documentation, security reviews, design work, or direct funding. This ensures governments work with true experts while helping keep critical Open Source projects healthy and sustainable.

  • Adopt "Public Money, Public Code" policies. When taxpayer money funds software for public use, that software should be released as Open Source. This avoids duplicate spending and builds shared digital infrastructure that anyone can reuse, improve, and help secure. The principle of "Public Money? Public Code!" offers a clear framework: code paid for by the people should be available to the people. Switzerland recently embraced this approach at the federal level with its EMBAG law, which requires government-developed software to be published as Open Source unless third-party rights or security concerns prevent it.

  • Scale successful direct funding models. The Sovereign Tech Agency has shown how government programs can directly fund the maintenance and security of critical Open Source software. Other nations should follow and expand this model. Replacing widely used Open Source software could cost an estimated 8.8 trillion dollars. Public investment should match that importance, with sustained global funding in the billions of dollars across countries and projects.

  • Teach Open Source in public schools and universities. Instead of relying solely on proprietary vendors like Microsoft, governments should integrate Open Source tools, practices, and values into school and university curricula, along with related areas such as open standards and open data. This prepares students to participate fully in Open Source, builds a talent pipeline that understands Open Source, and strengthens digital self-reliance.

Keeping the core strong

Concerns about political interference or loss of independence are valid. That is why we need systems that allow all stakeholders to coexist without undermining each other.

Government funding should reinforce the ecosystem that makes Open Source thrive, not replace it or control it. Companies and volunteers are strong drivers of innovation, pushing forward new features, experiments, and rapid improvements. Governments are better suited to a different but equally vital role: ensuring stability, security, and long-term reliability.

The most critical tasks in Open Source are often the least glamorous. Fixing bugs, patching vulnerabilities, updating third-party dependencies, improving accessibility, and maintaining documentation rarely make headlines, but without them, innovation cannot stand on a stable base. These tasks are also the most likely to be underfunded because they do not directly generate revenue for companies, require sustained effort, and are less appealing for volunteers.

Governments already maintain roads, bridges, and utilities, infrastructure that is essential but not always profitable or exciting for the private sector. Digital infrastructure deserves the same treatment. Public investment can keep these core systems healthy, while innovation and feature direction remain in the hands of the communities and companies that know the technology best.

Conclusion

Fifteen years ago, I argued that Open Source needed commercial sponsorship to thrive. Now we face the next challenge: governments must shift from consuming Open Source to sustaining it.

Today, some Open Source has become public infrastructure. Leaving critical infrastructure dependent on too few maintainers is a risk no society should accept.

The solution requires coordinated policy reforms: dedicated funding mechanisms, procurement that rewards upstream contributions, and long-term investment frameworks.

Special thanks to Baddy Sonja Breidert, Tim Doyle, Tiffany Farriss, Mike Gifford, Owen Lansbury and Nick Veenhof for their review and contributions to this blog post.

August 06, 2025

Not every day do I get an email from a very serious security researcher, clearly a man on a mission to save the internet — one vague, copy-pasted email at a time.

Here’s the message I received:

From: Peter Hooks <peterhooks007@gmail.com>
Subject: Security Vulnerability Disclosure

Hi Team,

I’ve identified security vulnerabilities in your app that may put users at risk. I’d like to report these responsibly and help ensure they are resolved quickly.

Please advise on your disclosure protocol, or share details if you have a Bug Bounty program in place.

Looking forward to your reply.

Best regards,
Peter Hooks

Right. Let’s unpack this.


🧯”Your App” — What App?

I’m not a company. I’m not a startup. I’m not even a garage-based stealth tech bro.
I run a personal WordPress blog. That’s it.

There is no “app.” There are no “users at risk” (unless you count me, and I̷̜̓’̷̠̋m̴̪̓ ̴̹́a̸͙̽ḷ̵̿r̸͇̽ë̵͖a̶͖̋ḋ̵͓ŷ̴̼ ̴̖͂b̶̠̋é̶̻ÿ̴͇́ọ̸̒ń̸̦d̴̟̆ ̶͉͒s̶̀ͅa̶̡͗v̴͙͊i̵͖̊n̵͖̆g̸̡̔).


🕵️‍♂️ The Anatomy of a Beg Bounty Email

This little email ticks all the classic marks of what the security community affectionately calls a beg bounty — someone scanning random domains, finding trivial or non-issues, and fishing for a payout.

Want to see how common this is? Check out:


📮 My (Admittedly Snarky) Reply

I couldn’t resist. Here’s the reply I sent:

Hi Peter,

Thanks for your email and your keen interest in my “app” — spoiler alert: there isn’t one. Just a humble personal blog here.

Your message hits all the classic marks of a beg bounty reconnaissance email:

  • ✅ Generic “Hi Team” greeting — because who needs names?
  • ✅ Vague claims of “security vulnerabilities” with zero specifics
  • ✅ Polite inquiry about a bug bounty program (spoiler: none here, James)
  • ✅ No proof, no details, just good old-fashioned mystery
  • ✅ Friendly tone crafted to reel in easy targets
  • ✅ Email address proudly featuring “007” — very covert ops of you

Bravo. You almost had me convinced.

I’ll be featuring this charming little interaction in a blog post soon — starring you, of course. If you ever feel like upgrading from vague templates to actual evidence, I’m all ears. Until then, happy fishing!

Cheers,
Amedee


😢 No Reply

Sadly, Peter didn’t write back.

No scathing rebuttal.
No actual vulnerabilities.
No awkward attempt at pivoting.
Just… silence.

#sadface
#crying
#missionfailed


🛡 A Note for Fellow Nerds

If you’ve got a domain name, no matter how small, there’s a good chance you’ll get emails like this.

Here’s how to handle them:

  • Stay calm — most of these are low-effort probes.
  • Don’t pay — you owe nothing to random strangers on the internet.
  • Don’t panic — vague threats are just that: vague.
  • Do check your stuff occasionally for actual issues.
  • Bonus: write a blog post about it and enjoy the catharsis.

For more context on this phenomenon, don’t miss:


🧵 tl;dr

If your “security researcher”:

  • doesn’t say what they found,
  • doesn’t mention your actual domain or service,
  • asks for a bug bounty up front,
  • signs with a Gmail address ending in 007

…it’s probably not the start of a beautiful friendship.


Got a similar email? Want help crafting a reply that’s equally professional and petty?
Feel free to drop a comment or reach out — I’ll even throw in a checklist.

Until then: stay patched, stay skeptical, and stay snarky. 😎

August 05, 2025

Rethinking DOM from first principles


Browsers are in a very weird place. While WebAssembly has succeeded, even on the server, the client still feels largely the same as it did 10 years ago.

Enthusiasts will tell you that accessing native web APIs via WASM is a solved problem, with some minimal JS glue.

But the question not asked is why you would want to access the DOM. It's just the only option. So I'd like to explain why it really is time to send the DOM and its assorted APIs off to a farm somewhere, with some ideas on how.

I won't pretend to know everything about browsers. Nobody knows everything anymore, and that's the problem.

Netscape or something

The 'Document' Model

Few know how bad the DOM really is. In Chrome, document.body now has 350+ keys, grouped roughly like this:

document.body properties

This doesn't include the CSS properties in document.body.style of which there are... 660.

The boundary between properties and methods is very vague. Many are just facades with an invisible setter behind them. Some getters may trigger a just-in-time re-layout. There's ancient legacy stuff, like all the onevent properties nobody uses anymore.

The DOM is not lean and continues to get fatter. Whether you notice this largely depends on whether you are making web pages or web applications.

Most devs now avoid working with the DOM directly, though occasionally some purist will praise pure DOM as being superior to the various JS component/templating frameworks. What little declarative facilities the DOM has, like innerHTML, do not resemble modern UI patterns at all. The DOM has too many ways to do the same thing, none of them nice.

connectedCallback() {
  const
    shadow = this.attachShadow({ mode: 'closed' }),
    template = document.getElementById('hello-world')
      .content.cloneNode(true),
    hwMsg = `Hello ${ this.name }`;

  Array.from(template.querySelectorAll('.hw-text'))
    .forEach(n => n.textContent = hwMsg);

  shadow.append(template);
}

Web Components deserve a mention, being the web-native equivalent of JS component libraries. But they came too late and are unpopular. The API seems clunky, with its Shadow DOM introducing new nesting and scoping layers. Proponents kinda read like apologetics.

The achilles heel is the DOM's SGML/XML heritage, making everything stringly typed. React-likes do not have this problem, their syntax only looks like XML. Devs have learned not to keep state in the document, because it's inadequate for it.

W3C and WHATWG logos

For HTML itself, there isn't much to critique because nothing has changed in 10-15 years. Only ARIA (accessibility) is notable, and only because this was what Semantic HTML was supposed to do and didn't.

Semantic HTML never quite reached its goal. Despite dating from around 2011, there is e.g. no <thread> or <comment> tag, when those were well-established idioms. Instead, an article inside an article is probably a comment. The guidelines are... weird.

There's this feeling that HTML always had paper-envy, and couldn't quite embrace or fully define its hypertext nature, and did not trust its users to follow clear rules.

Stewardship of HTML has since firmly passed to WHATWG, really the browser vendors, who have not been able to define anything more concrete as a vision, and have instead just added epicycles at the margins.

Along the way even CSS has grown expressions, because every templating language wants to become a programming language.

netscape composer

Editability of HTML remains a sad footnote. While technically supported via contentEditable, actually wrangling this feature into something usable for applications is a dark art. I'm sure the Google Docs and Notion people have horror stories.

Nobody really believes in the old gods of progressive enhancement and separating markup from style anymore, not if they make apps.

Most of the applications you see nowadays will kitbash HTML/CSS/SVG into a pretty enough shape. But this comes with immense overhead, and is looking more and more like the opposite of a decent UI toolkit.

slack input editor

The Slack input box

layout hack

Off-screen clipboard hacks

Lists and tables must be virtualized by hand, taking over for layout, resizing, dragging, and so on. Making a chat window's scrollbar stick to the bottom is somebody's TODO, every single time. And the more you virtualize, the more you have to reinvent find-in-page, right-click menus, etc.

The web blurred the distinction between UI and fluid content, which was novel at the time. But it makes less and less sense, because the UI part is a decade obsolete, and the content has largely homogenized.

'css is awesome' mug, truncated layout

CSS is inside-out

CSS doesn't have a stellar reputation either, but few can put their finger on exactly why.

Where most people go wrong is to start with the wrong mental model, approaching it like a constraint solver. This is easy to show with e.g.:

<div>
  <div style="height: 50%">...</div>
  <div style="height: 50%">...</div>
</div>
<div>
  <div style="height: 100%">...</div>
  <div style="height: 100%">...</div>
</div>

The first might seem reasonable: divide the parent into two halves vertically. But what about the second?

Viewed as a set of constraints, it's contradictory, because the parent div is twice as tall as... itself. What will happen instead in both cases is the height is ignored. The parent height is unknown and CSS doesn't backtrack or iterate here. It just shrink-wraps the contents.

If you set e.g. height: 300px on the parent, then it works, but the latter case will still just spill out.

Outside-in vs inside-out layout

Outside-in and inside-out layout modes

Instead, your mental model of CSS should be applying two passes of constraints, first going outside-in, and then inside-out.

When you make an application frame, this is outside-in: the available space is divided, and the content inside does not affect sizing of panels.

When paragraphs stack on a page, this is inside-out: the text stretches out its containing parent. This is what HTML wants to do naturally.

By being structured this way, CSS layouts are computationally pretty simple. You can propagate the parent constraints down to the children, and then gather up the children's sizes in the other direction. This is attractive and allows webpages to scale well in terms of elements and text content.

CSS is always inside-out by default, reflecting its document-oriented nature. The outside-in is not obvious, because it's up to you to pass all the constraints down, starting with body { height: 100%; }. This is why they always say vertical alignment in CSS is hard.
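
A minimal sketch of what passing those constraints down by hand looks like (class names made up):

/* Outside-in by hand: every level has to forward the viewport height. */
html, body { height: 100%; margin: 0; }
.app       { height: 100%; display: flex; flex-direction: column; }
.content   { flex: 1; overflow: auto; } /* gets whatever space is left */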

Flex grow/shrink

Use flex grow and shrink for spill-free auto-layouts with completely reasonable gaps

The scenario above is better handled with a CSS3 flex box (display: flex), which provides explicit control over how space is divided.

Unfortunately flexing muddles the simple CSS model. To auto-flex, the layout algorithm must measure the "natural size" of every child. This means laying it out twice: first speculatively, as if floating in aether, and then again after growing or shrinking to fit:

Flex speculative layout

This sounds reasonable but can come with hidden surprises, because it's recursive. Doing speculative layout of a parent often requires full layout of unsized children, e.g. to know how text will wrap. If you nest it right, it could in theory cause an exponential blow-up, though I've never heard of it being an issue.

Instead you will only discover this when someone drops some large content in somewhere, and suddenly everything gets stretched out of whack. It's the opposite of the problem on the mug.

To avoid the recursive dependency, you need to isolate the children's contents from the outside, thus making speculative layout trivial. This can be done with contain: size, or by manually setting the flex-basis size.
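
Roughly, that isolation looks like this (a sketch, not a drop-in fix):

/* Either pin the basis so speculative layout doesn't depend on content... */
.panel { flex: 1 1 300px; min-width: 0; }

/* ...or declare that the element is sized as if it were empty. */
.panel { contain: size; }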

CSS has gained a few constructs like contain or will-change, which work directly with the layout system, and drop the pretense of one big happy layout. It reveals some of the layer-oriented nature underneath, and is a substitute for e.g. using position: absolute wrappers to do the same.

What these do is strip off some of the semantics, and break the flow of DOM-wide constraints. These are overly broad by default and too document-oriented for the simpler cases.

This is really a metaphor for all DOM APIs.

CSS props

The Good Parts?

That said, flex box is pretty decent if you understand these caveats. Building layouts out of nested rows and columns with gaps is intuitive, and adapts well to varying sizes. There is a "CSS: The Good Parts" here, which you can make ergonomic with sufficient love. CSS grids also work similarly, they're just very painfully... CSSy in their syntax.

But if you designed CSS layout from scratch, you wouldn't do it this way. You wouldn't have a subtractive API, with additional extra containment barrier hints. You would instead break the behavior down into its component facets, and use them à la carte. Outside-in and inside-out would both be legible as different kinds of containers and placement models.

The inline-block and inline-flex display models illustrate this: it's a block or flex on the inside, but an inline element on the outside. These are two (mostly) orthogonal aspects of a box in a box model.

Text and font styles are in fact the odd ones out, in hypertext. Properties like font size inherit from parent to child, so that formatting tags like <b> can work. But most of those 660 CSS properties do not do that. Setting a border on an element does not apply the same border to all its children recursively, that would be silly.

It shows that CSS is at least two different things mashed together: a system for styling rich text based on inheritance... and a layout system for block and inline elements, nested recursively but without inheritance, only containment. They use the same syntax and APIs, but don't really cascade the same way. Combining this under one style-umbrella was a mistake.

Worth pointing out: early ideas of relative em scaling have largely become irrelevant. We now think of logical vs device pixels instead, which is a far more sane solution, and closer to what users actually expect.

Tiger SVG

SVG is natively integrated as well. Having SVGs in the DOM instead of just as <img> tags is useful to dynamically generate shapes and adjust icon styles.

But while SVG is powerful, it's neither a subset nor superset of CSS. Even when it overlaps, there are subtle differences, like the affine transform. It has its own warts, like serializing all coordinates to strings.

CSS has also gained the ability to round corners, draw gradients, and apply arbitrary clipping masks: it clearly has SVG-envy, but falls very short. SVG can e.g. do polygonal hit-testing for mouse events, which CSS cannot, and SVG has its own set of graphical layer effects.

Whether you use HTML/CSS or SVG to render any particular element is based on specific annoying trade-offs, even if they're all scalable vectors on the back-end.

In either case, there are also some roadblocks. I'll just mention three:

  • text-overflow: ellipsis can only be used to truncate unwrapped text, not entire paragraphs. Detecting truncated text is even harder, as is just measuring text: the APIs are inadequate. Everyone just counts letters instead.
  • position: sticky lets elements stay in place while scrolling with zero jank. While tailor-made for this purpose, it's subtly broken. Having elements remain unconditionally sticky requires an absurd nesting hack, when it should be trivial.
  • The z-index property determines layering by absolute index. This inevitably leads to a z-index-war.css where everyone is putting in a new number +1 or -1 to make things layer correctly. There is no concept of relative Z positioning.

For each of these features, we got stuck with v1 of whatever they could get working, instead of providing the right primitives.

Getting this right isn't easy, it's the hard part of API design. You can only iterate on it, by building real stuff with it before finalizing it, and looking for the holes.

Oil on Canvas

So, DOM is bad, CSS is single-digit X% good, and SVG is ugly but necessary... and nobody is in a position to fix it?

Well no. The diagnosis is that the middle layers don't suit anyone particularly well anymore. Just an HTML6 that finally removes things could be a good start.

But most of what needs to happen is to liberate the functionality that is there already. This can be done in good or bad ways. Ideally you design your system so the "escape hatch" for custom use is the same API you built the user-space stuff with. That's what dogfooding is, and also how you get good kernels.

A recent proposal here is HTML in Canvas, to draw HTML content into a <canvas>, with full control over the visual output. It's not very good.

While it might seem useful, the only reason the API has the shape that it does is because it's shoehorned into the DOM: elements must be descendants of <canvas> to fully participate in layout and styling, and to make accessibility work. There are also "technical concerns" with using it off-screen.

One example is this spinny cube:

html-in-canvas spinny cube thing

To make it interactive, you attach hit-testing rectangles and respond to paint events. This is a new kind of hit-testing API. But it only works in 2D... so it seems 3D-use is only cosmetic? I have many questions.

Again, if you designed it from scratch, you wouldn't do it this way! In particular, it's absurd that you'd have to take over all interaction responsibilities for an element and its descendants just to be able to customize how it looks i.e. renders. Especially in a browser that has projective CSS 3D transforms.

The use cases not covered by that, e.g. curved re-projection, will also need more complicated hit-testing than rectangles. Did they think this through? What happens when you put a dropdown in there?

To me it seems like they couldn't really figure out how to unify CSS and SVG filters, or how to add shaders to CSS. Passing it thru canvas is the only viable option left. "At least it's programmable." Is it really? Screenshotting DOM content is 1 good use-case, but not what this is sold as at all.

The whole reason to do "complex UIs on canvas" is to do all the things the DOM doesn't do, like virtualizing content, just-in-time layout and styling, visual effects, custom gestures and hit-testing, and so on. It's all nuts and bolts stuff. Having to pre-stage all the DOM content you want to draw sounds... very counterproductive.

From a reactivity point-of-view it's also a bad idea to route this stuff back through the same document tree, because it sets up potential cycles with observers. A canvas that's rendering DOM content isn't really a document element anymore, it's doing something else entirely.

sheet-happens

Canvas-based spreadsheet that skips the DOM entirely

The actual achilles heel of canvas is that you don't have any real access to system fonts, text layout APIs, or UI utilities. It's quite absurd how basic it is. You have to implement everything from scratch, including Unicode word splitting, just to get wrapped text.

The proposal is "just use the DOM as a black box for content." But we already know that you can't do anything except more CSS/SVG kitbashing this way. text-overflow: ellipsis and friends will still be broken, and you will still need to implement UIs circa 1990 from scratch to fix it.

It's all-or-nothing when you actually want something right in the middle. That's why the lower level needs to be opened up.

Where To Go From Here

The goals of "HTML in Canvas" do strike a chord, with chunks of HTML used as free-floating fragments, a notion that has always existed under the hood. It's a composite value type you can handle. But it should not drag 20 years of useless baggage along, while not enabling anything truly novel.

The kitbashing of the web has also resulted in enormous stagnation, and a loss of general UI finesse. When UI behaviors have to be mined out of divs, it limits the kinds of solutions you can even consider. Fixing this within DOM/HTML seems unwise, because there's just too much mess inside. Instead, new surfaces should be opened up outside of it.

use-gpu-layout

WebGPU-based box model

My schtick here has become to point awkwardly at Use.GPU's HTML-like renderer, which does a full X/Y flex model in a fraction of the complexity or code. I don't mean my stuff is super great, no, it's pretty bare-bones and kinda niche... and yet definitely nicer. Vertical centering is easy. Positioning makes sense.

There is no semantic HTML or CSS cascade, just first-class layout. You don't need 61 different accessors for border* either. You can just attach shaders to divs. Like, that's what people wanted right? Here's a blueprint, it's mostly just SDFs.

Font and markup concerns only appear at the leaves of the tree, where the text sits. It's striking how you can do like 90% of what the DOM does here, without the tangle of HTML/CSS/SVG, if you just reinvent that wheel. Done by 1 guy. And yes, I know about the second 90% too.

The classic data model here is of a view tree and a render tree. What should the view tree actually look like? And what can it be lowered into? What is it being lowered into right now, by a giant pile of legacy crud?

servo ladybird

Alt-browser projects like Servo or Ladybird are in a position to make good proposals here. They have the freshest implementations, and are targeting the most essential features first. The big browser vendors could also do it, but well, taste matters. Good big systems grow from good small ones, not bad big ones. Maybe if Mozilla hadn't imploded... but alas.

Platform-native UI toolkits are still playing catch up with declarative and reactive UI, so that's that. Native Electron-alternatives like Tauri could be helpful, but they don't treat origin isolation as a design constraint, which makes security teams antsy.

There's a feasible carrot to dangle for them though, namely in the form of better process isolation. Because of CPU exploits like Spectre, multi-threading via SharedArrayBuffer and Web Workers is kinda dead on arrival anyway, and that affects all WASM. The details are boring but right now it's an impossible sell when websites have to have things like OAuth and Zendesk integrated into them.

Reinventing the DOM to ditch all legacy baggage could coincide with redesigning it for a more multi-threaded, multi-origin, and async web. The browser engines are already multi-process... what did they learn? A lot has happened since Netscape, with advances in structured concurrency, ownership semantics, FP effects... all could come in handy here.

* * *

Step 1 should just be a data model that doesn't have 350+ properties per node tho.

Don't be under the mistaken impression that this isn't entirely fixable.

netscape wheel

August 04, 2025

So much for the new hobby. Venus flytraps are not the easiest to keep happy.







A few of the Sarracenia as they stand outside now. They eat a lot of insects, especially wasps.







New hobby since 2021: growing carnivorous plants.

Here are five pitchers my Nepenthes are giving me.







August 03, 2025

Lookat 2.1.0rc2

Lookat 2.1.0rc2 is the second release candidate of Lookat/Bekijk 2.1.0, a user-friendly Unix file browser/viewer that supports colored man pages.

The focus of the 2.1.0 release is to add ANSI Color support.


 

News

3 Aug 2025 Lookat 2.1.0rc2 Released

Lookat 2.1.0rc2 is the second release candidate of Lookat 2.1.0

ChangeLog

Lookat / Bekijk 2.1.0rc2
  • Corrected italic color
  • Don’t reset the search offset when cursor mode is enabled
  • Renamed strsize to charsize (ansi_strsize -> ansi_charsize, utf8_strsize -> utf8_charsize) to be less confusing
  • Support for multiple ansi streams in ansi_utf8_strlen()
  • Update default color theme to green for this release
  • Update manpages & documentation
  • Reorganized contrib directory
    • Moved ci/cd related file from contrib/* to contrib/cicd
    • Moved debian dir to contrib/dist
    • Moved support script to contrib/scripts

Lookat 2.1.0rc2 is available at:

Have fun!

August 01, 2025

Just activated Orange via eSIM on my Fairphone 6, and before I knew it, "App Center" and "Phone" (both from the Orange group) but also … TikTok had been installed. I was not happy about that. App Center can't even be uninstalled, only disabled. Fuckers!

Source

July 30, 2025

Fantastic cover of Jamie Woon's "Night Air" by Lady Lynn. That double bass and that voice, magical! Watch this video on YouTube. …

Source

An astronaut explores a surreal landscape beneath rainbow-colored planetary rings, symbolizing the journey into AI's transformative potential for Drupal.

In my previous post, The great digital agency unbundling, I explored how AI is transforming the work of digital agencies. As AI automates more technical tasks, agencies will be shifting their focus toward orchestration, strategic thinking, and accountability. This shift also changes what they need from their tools.

Content management systems like Drupal must evolve with them. This is not just about adding AI features. It is about becoming a platform that strengthens the new agency model. Because as agencies take on new roles, they will adopt the tools that help them succeed.

As I wrote then:

"As the Project Lead of Drupal, I think about how Drupal, the product, and its ecosystem of digital agencies can evolve together. They need to move in step to navigate change and help shape what comes next"

The good news is that the Drupal community is embracing AI in a coordinated and purposeful way. Today, Drupal CMS already ships with 22 AI agents, and through the Drupal AI Initiative, we are building additional infrastructure and tooling to bring more AI capabilities to Drupal.

In this post, I want to share why I believe Drupal is not just ready to evolve, but uniquely positioned to thrive in the AI era.

Drupal is built for AI

Imagine an AI agent that plans, executes, and measures complex marketing campaigns across your CMS, CRM, email platform, and analytics tools without requiring manual handoff at every step.

To support that level of orchestration, a platform must expose its content models, configuration data, state, user roles and permissions, and business logic in a structured, machine-readable way. That means making things like entity types, field definitions, relationships, and workflows available through APIs that AI systems can discover, inspect, and act on safely.

Most platforms were not designed with this kind of structured access in mind. Drupal has been moving in that direction for more than a decade.

Since Drupal 7, the community has invested deeply in modernizing the platform. We introduced a unified Entity API, adopted a service container with dependency injection, and expanded support for REST, JSON:API, and GraphQL. We also built a robust configuration management system, improved testability, and added more powerful workflows with granular revisioning and strong rollback support. Drupal also has excellent API documentation.

These changes made Drupal not only more programmable but also more introspectable. AI agents can query Drupal's structure, understand relationships between entities, and make informed decisions based on both content and configuration. This enables AI to take meaningful action inside the system rather than just operating at the surface. And because Drupal's APIs are open and well-documented, these capabilities are easier for developers and AI systems to discover and build on.
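
As a concrete, if simplified, illustration: with Drupal core's JSON:API module enabled, a client (human or AI agent) can start from the /jsonapi entry point to discover the exposed resource types and then fetch typed, structured content. The sketch below uses Python with the requests library and a hypothetical site URL; it is not part of any Drupal AI module, just an example of the kind of machine-readable access described above.

#!/usr/bin/env python3
# Minimal sketch: discover and read a Drupal site's structure via JSON:API.
# The base URL is hypothetical; authentication and error handling are omitted.
import requests

BASE = "https://example.com"
HEADERS = {"Accept": "application/vnd.api+json"}

# The JSON:API entry point lists every exposed resource type (entity type + bundle).
index = requests.get(f"{BASE}/jsonapi", headers=HEADERS).json()
print("Available resources:", list(index["links"].keys())[:10])

# Fetch a few article nodes as typed, structured JSON:API documents.
articles = requests.get(
    f"{BASE}/jsonapi/node/article",
    params={"page[limit]": 3},
    headers=HEADERS,
).json()
for item in articles.get("data", []):
    print(item["id"], item["attributes"].get("title"))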

Making these architectural improvements was not easy. Upgrading from Drupal 7 was painful for many, and at the time, the benefits of Drupal 8's redesign were not obvious. We were not thinking about AI at the time, but in hindsight, we built exactly the kind of modern, flexible foundation that makes deep AI integration possible today. As is often the case, there is pain before the payoff.

AI makes Drupal's power more accessible

I think this is exciting because AI can help make Drupal's flexibility more accessible. Drupal is one of the most flexible content management systems available. It powers everything from small websites to large, complex digital platforms. That flexibility is a strength, but it also introduces complexity.

For newcomers, Drupal's flexibility can be overwhelming. Building a Drupal site requires understanding how to select and configure contributed modules, creating content types and relationships, defining roles and permissions, building Views, developing a custom theme, and more. The learning curve is steep and often prevents people from experiencing Drupal's power and flexibility.

AI has the potential to change that. In the future, you might describe your needs by saying something like, "I need a multi-language news site with editorial workflows and social media integration". An AI assistant could ask a few follow-up questions, then generate a working starting point.

I've demonstrated early prototypes of this vision in recent DriesNotes, including DrupalCon Barcelona 2024 and DrupalCon Atlanta 2025. Much of that code has been productized in the Drupal AI modules.

In my Barcelona keynote, I said that "AI is the new UI". AI helps lower the barrier to entry by turning complex setup tasks into simple prompts and conversations. With the right design, it can guide new users while still giving experts full control.

In my last post, The great digital agency unbundling, I shared a similar perspective:

"Some of the hardest challenges the Drupal community has faced, such as improving usability or maintaining documentation, may finally become more manageable. I see ways AI can support Drupal's mission, lower barriers to online publishing, make Drupal more accessible, and help build a stronger, more inclusive Open Web. The future is both exciting and uncertain."

Of course, AI comes with both promise and risk. It raises ethical questions and often fails to meet expectations. But ignoring AI is not a strategy. AI is already changing how digital work gets done. If we want Drupal to stay relevant, we need to explore its potential. That means experimenting thoughtfully, sharing what we learn, and helping shape how these tools are built and used.

Drupal's AI roadmap helps agencies

AI is changing how digital work gets done. Some platforms can now generate full websites, marketing campaigns, or content strategies in minutes. For simple use cases, that may be enough.

But many client needs are more complex. As requirements grow and automations become more sophisticated, agencies continue to play a critical role. They bring context, strategy, and accountability to challenges that off-the-shelf tools cannot solve.

That is the future we want Drupal to support. We are not building AI to replace digital agencies, but to strengthen them. Through the Drupal AI Initiative, Drupal agencies are actively helping shape the tools they want to use in an AI-driven world.

As agencies evolve in response to AI, they will need tools that evolve with them. Drupal is not only keeping pace but helping lead the way. By investing in AI in collaboration with the agencies who rely on it, we are making Drupal stronger, more capable, and more relevant.

Now is the moment to move

The shift toward AI-powered digital work is inevitable. Platforms will succeed or fail based on how well they adapt to this reality.

Drupal's investments in modern architecture, open development, and community collaboration have created something unique: a platform that doesn't just add AI features but fundamentally supports AI-driven workflows. While other systems scramble to retrofit AI capabilities, Drupal's foundation makes deep integration possible.

The question isn't whether AI will change digital agencies and content management. It already has. The question is which platforms will help agencies and developers thrive in that new reality. Drupal is positioning itself to be one of them.

Ever wondered what it’s like to unleash 10 000 tiny little data beasts on your hard drive? No? Well, buckle up anyway — because today, we’re diving into the curious world of random file generation, and then nerding out by calculating their size distribution. Spoiler alert: it’s less fun than it sounds. 😏

Step 1: Let’s Make Some Files… Lots of Them

Our goal? Generate 10 000 files filled with random data. But not just any random sizes — we want a mean file size of roughly 68 KB and a median of about 2 KB. Sounds like a math puzzle? That’s because it kind of is.

If you just pick file sizes uniformly at random, you’ll end up with a median close to the mean — which is boring. We want a skewed distribution, where most files are small, but some are big enough to bring that average up.

The Magic Trick: Log-normal Distribution 🎩✨

Enter the log-normal distribution, a nifty way to generate lots of small numbers and a few big ones — just like real life. Using Python’s NumPy library, we generate these sizes and feed them to good old /dev/urandom to fill our files with pure randomness.
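
If you want to sanity-check (or retune) the two log-normal parameters against a target mean and median, the closed-form relations median = e^mu and mean = e^(mu + sigma^2/2) give you mu and sigma directly. Here is a quick NumPy sketch of that derivation (a hypothetical helper, separate from the script below, which hard-codes its own values):

#!/usr/bin/env python3
# Minimal sketch: derive log-normal parameters from a target median and mean,
# using median = exp(mu) and mean = exp(mu + sigma^2 / 2).
import numpy as np

target_median = 2 * 1024   # ~2 KB
target_mean = 68 * 1024    # ~68 KB

mu = np.log(target_median)
sigma = np.sqrt(2 * (np.log(target_mean) - mu))
print(f"mu = {mu:.3f}, sigma = {sigma:.3f}")

# Quick empirical check with a large sample.
sizes = np.random.lognormal(mu, sigma, 1_000_000)
print(f"sample mean ~ {sizes.mean():.0f} bytes, sample median ~ {np.median(sizes):.0f} bytes")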

Here’s the Bash script that does the heavy lifting:

#!/bin/bash

# Directory to store the random files
output_dir="random_files"
mkdir -p "$output_dir"

# Total number of files to create
file_count=10000

# Log-normal distribution parameters
mean_log=9.0  # Adjusted for ~68KB mean
stddev_log=1.5  # Adjusted for ~2KB median

# Function to generate random numbers based on log-normal distribution
generate_random_size() {
    python3 -c "import numpy as np; print(int(np.random.lognormal($mean_log, $stddev_log)))"
}

# Create files with random data
for i in $(seq 1 $file_count); do
    file_size=$(generate_random_size)
    file_path="$output_dir/file_$i.bin"
    head -c "$file_size" /dev/urandom > "$file_path"
    echo "Generated file $i with size $file_size bytes."
done

echo "Done. Files saved in $output_dir."

Easy enough, right? This creates a directory random_files and fills it with 10 000 files of sizes mostly small but occasionally wildly bigger. Don’t blame me if your disk space takes a little hit! 💥

Step 2: Crunching Numbers — The File Size Distribution 📊

Okay, you’ve got the files. Now, what can we learn from their sizes? Let’s find out the:

  • Mean size: The average size across all files.
  • Median size: The middle value when sizes are sorted — because averages can lie.
  • Distribution breakdown: How many tiny files vs. giant files.

Here’s a handy Bash script that reads file sizes and spits out these stats with a bit of flair:

#!/bin/bash

# Input directory (default to "random_files" if not provided)
directory="${1:-random_files}"

# Check if directory exists
if [ ! -d "$directory" ]; then
    echo "Directory $directory does not exist."
    exit 1
fi

# Array to store file sizes
file_sizes=($(find "$directory" -type f -exec stat -c%s {} \;))

# Check if there are files in the directory
if [ ${#file_sizes[@]} -eq 0 ]; then
    echo "No files found in the directory $directory."
    exit 1
fi

# Calculate mean
total_size=0
for size in "${file_sizes[@]}"; do
    total_size=$((total_size + size))
done
mean=$((total_size / ${#file_sizes[@]}))

# Calculate median
sorted_sizes=($(printf '%s\n' "${file_sizes[@]}" | sort -n))
mid=$(( ${#sorted_sizes[@]} / 2 ))
if (( ${#sorted_sizes[@]} % 2 == 0 )); then
    median=$(( (sorted_sizes[mid-1] + sorted_sizes[mid]) / 2 ))
else
    median=${sorted_sizes[mid]}
fi

# Display file size distribution
echo "File size distribution in directory $directory:"
echo "---------------------------------------------"
echo "Number of files: ${#file_sizes[@]}"
echo "Mean size: $mean bytes"
echo "Median size: $median bytes"

# Display detailed size distribution (optional)
echo
echo "Detailed distribution (size ranges):"
awk '{
    if ($1 < 1024) bins["< 1 KB"]++;
    else if ($1 < 10240) bins["1 KB - 10 KB"]++;
    else if ($1 < 102400) bins["10 KB - 100 KB"]++;
    else bins[">= 100 KB"]++;
} END {
    for (range in bins) printf "%-15s: %d\n", range, bins[range];
}' <(printf '%s\n' "${file_sizes[@]}")

Run it, and voilà — instant nerd satisfaction.

Example Output:

File size distribution in directory random_files:
---------------------------------------------
Number of files: 10000
Mean size: 68987 bytes
Median size: 2048 bytes

Detailed distribution (size ranges):
&lt; 1 KB         : 1234
1 KB - 10 KB   : 5678
10 KB - 100 KB : 2890
>= 100 KB      : 198

Why Should You Care? 🤷‍♀️

Besides the obvious geek cred, generating files like this can help:

  • Test backup systems — can they handle weird file size distributions?
  • Stress-test storage or network performance with real-world-like data.
  • Understand your data patterns if you’re building apps that deal with files.

Wrapping Up: Big Files, Small Files, and the Chaos In Between

So there you have it. Ten thousand random files later, and we’ve peeked behind the curtain to understand their size story. It’s a bit like hosting a party and then figuring out who ate how many snacks. 🍿

Try this yourself! Tweak the distribution parameters, generate files, crunch the numbers — and impress your friends with your mad scripting skills. Or at least have a fun weekend project that makes you sound way smarter than you actually are.

Happy hacking! 🔥

July 29, 2025

Have you already tried to upgrade the MySQL version of a MySQL HeatWave instance in OCI that is deployed with Terraform? When you tried, you realized (I hope you didn't turn off backups) that the instance gets destroyed and recreated from scratch! This is our current MySQL HeatWave DB System deployed using Terraform: And this is […]

July 28, 2025

The MySQL REST Service is a next-generation JSON Document Store solution, enabling fast and secure HTTPS access to data stored in MySQL, HeatWave, InnoDB Cluster, InnoDB ClusterSet, and InnoDB ReplicaSet. The MySQL REST Service was first released on https://labs.mysql.com in 2023 using MySQL Router. During spring 2025, it was released on MySQL HeatWave and standard […]

July 27, 2025

This morning, on the daily walk in the "Mechels Bos" with our somewhat larger dog (Maya, a Romanian rescue dog with, we suspect, some collie and some mountain dog genes; she is indeed bigger than Mamita, our quasi-chihuahua), I heard a strange sound. Have a listen below (no moving images, just a photo from the neighbourhood); the animal stayed quiet for quite a while and it is not very loud…

Source

July 26, 2025

Today I ordered the (green) Fairphone 6 to replace my Nokia X20. After much hesitation I went with Google Android instead of e/OS after all, because of itsme and banking apps. If it ever becomes "safe" I can still flash e/OS, eh. Reasons: 5 years of warranty, 7 years of updates, a whole range of replaceable parts…

Source

July 23, 2025

Two small figures watch a massive vessel launch, symbolizing digital agencies witnessing the AI transformation of their industry.

"To misuse a woodworking metaphor, I think we're experiencing a shift from hand tools to power tools. You still need someone who understands the basics to get good results from the tools, but they're not chiseling fine furniture by hand anymore. They're throwing heaps of wood through the tablesaw instead. More productive, but more likely to lose a finger if you're not careful." – mrmincent, Hacker News comment on Claude Code, via Simon Willison

If, like me, you work in web development, design, or digital strategy, this quote might hit close to home. But it may not go far enough. We are not just moving from chisels to table saws. We are about to hand out warehouse-sized CNC machines and robotic arms.

This is not just an upgrade in tools. The Industrial Revolution didn't just replace handcraft with machines. It upended entire industries.

History does not repeat itself, but it often rhymes. For over two centuries, new tools have changed not just how work gets done, but what we can accomplish.

AI is changing how websites are built and how people find information online. Individual developers are already using AI tools today, but broader organizational adoption will unfold over the years ahead.

It's clear that AI will have a deep impact on the web industry. Over time, this shift will affect those of us who have built our careers in web development, marketing, design, and digital strategy. I am one of them. Most of my career has been rooted in Drupal, which makes this both personal and difficult to write.

But this shift is bigger than any one person or platform. There are tens of thousands of digital agencies around the world, employing millions of people who design, build, and maintain the digital experiences we all rely on. Behind those numbers are teams, individuals, and livelihoods built over decades. Our foundation is shifting. It touches all of us, and we all need to adapt.

If you are feeling uncertain about where this is heading, you are not alone.

Why I am writing this

I am not writing this to be an alarmist. I actually feel a mix of emotions. I am excited about the possibilities AI offers, but also concerned about the risks and uneasy about the speed and scale of change.

As the project lead of Drupal, I ask myself: "How can I best guide our community of contributors, agencies, and end users through these changes?".

Like many of you, I am trying to understand what the rise of AI means for our users, teams, partners, contributors, products, and values. I want to help however I can.

I don't claim to have all the answers, but I hope this post sparks discussion, encourages deeper thinking, and helps us move forward together. This is not a roadmap, just a reflection of where my thinking is today.

I do feel confident that we need to keep moving forward, stay open-minded, and engage with the changes AI brings head-on.

Even with all that uncertainty, I feel energized. Some of the hardest challenges the Drupal community has faced, such as improving usability or maintaining documentation, may finally become more manageable. I see ways AI can support Drupal's mission, lower barriers to online publishing, make Drupal more accessible, and help build a stronger, more inclusive Open Web. The future is both exciting and uncertain.

But this post isn't just for the Drupal community. It's for anyone working in or around a digital agency who is asking: "What does AI mean for my team, my clients, and my future?". I will focus more directly on Drupal in my next blog post, so feel free to subscribe.

If you are thinking about how AI is affecting your work, whether in the Drupal ecosystem or elsewhere, I would love to hear from you. The more we share ideas, concerns, and experiments, the better prepared we will all be to shape this next chapter together.

The current downturn is real, but will pass

Before diving into AI, I'd be remiss not to acknowledge the current economic situation. Agencies across all platforms, not just those working with Drupal, are experiencing challenging market conditions, especially in the US and parts of Europe.

While much of the industry is focused on AI, the immediate pain many agencies are feeling is not caused by it. High interest rates, inflation, and global instability have made client organizations more cautious with spending. Budgets are tighter, sales cycles are longer, competition is fiercer, and more work is staying in-house.

As difficult as this is, it is not new. Economic cycles and political uncertainty have always come and gone. What makes this moment different is not the current downturn, but what comes next.

AI will transform the industry at an accelerating pace

AI has not yet reshaped agency work in a meaningful way, but that change is knocking at the door. At the current pace of progress, web development and digital agency work are on the verge of the most significant disruption since the rise of the internet.

One of the most visible areas of change has been content creation. AI generates draft blog posts, landing pages, social media posts, email campaigns, and more. This speeds up production but also changes the workflow. Human input shifts toward editing, strategy, and brand alignment rather than starting from a blank page.

Code generation tools are also handling more implementation tasks. Senior developers can move faster, while junior developers are taking on responsibilities that once required more experience. As a result, developers are spending more time reviewing and refining AI-generated code than writing everything from scratch.

Traditional user interfaces are becoming less important as AI shifts user interactions toward natural language, voice, and more predictive or adaptive experiences. These still require thoughtful design, but the nature of UI work is changing. AI can now turn visual mockups into functional components and, in some cases, generate complete interfaces with minimal or no human input.

These shifts also challenge the way agencies bill for their work. When AI can do in minutes what once took hours or days, hourly billing becomes harder to justify. If an agency charges $150 an hour for something clients know AI can do faster, those clients will look elsewhere. To stay competitive, agencies will need to focus less on time spent and more on outcomes, expertise, and impact.

AI is also changing how people find and interact with information online. As users turn to AI assistants for answers, the role of the website as a central destination is being disrupted. This shift changes how clients think about content, traffic, and performance, which are core areas of agency work. Traditional strategies like SEO become less effective when users get what they need without ever visiting a site.

Through all of this, human expertise will remain essential. People are needed to set direction, guide priorities, review AI output, and take responsibility for quality and business outcomes. We still rely on individuals who know what to build, why it matters, and how to ensure the results are accurate, reliable, and aligned with real-world needs. When AI gets it wrong, it is still people who are accountable. Someone must own the decisions and stand behind the results.

But taken together, these changes will reshape how agencies operate and compete. To stay viable, agencies need to evolve their service offerings and rethink how they create and deliver value. That shift will also require changes to team structures, pricing models, and delivery methods. This is not just about adopting new tools. It is about reimagining what an agency does and how it works.

The hardest part may not be the technology. It is the human cost. Some people will see their roles change faster than they can adapt. Others may lose their jobs or face pressure to use tools that conflict with their values or standards.

Adding to the challenge, adopting AI requires investment at a moment when many agencies around the world are focused on survival. For teams already stretched thin, transformation may feel out of reach. The good news is that AI's full impact will take years to unfold, giving agencies time to adapt.

Still, moments like this can create major opportunities. In past downturns, technology shifts made room for new players and helped established firms reinvent themselves. The key is recognizing that this is not just about learning new tools. It is about positioning yourself where human judgment, relationships, and accountability for outcomes remain essential, even as AI takes on more of the execution.

The diminishing value of platform expertise alone

For years, CMS-focused agencies have built their businesses on deep platform expertise. Clients relied on them for custom development, performance tuning, security, and infrastructure. This specialized knowledge commanded a premium.

In effect, AI increases the supply of skilled work without a matching rise in demand. By automating tasks that once required significant expertise, it makes technical expertise abundant and much cheaper to produce. And according to the principles of supply and demand, when supply rises and demand stays the same, prices fall.

This is not a new pattern. SaaS website builders already commoditized basic site building, reducing the perceived value of simple implementations and pushing agencies toward more complex, higher-value projects.

Now, AI is accelerating that shift. It is extending the same kind of disruption into complex and enterprise-level work, bringing speed and automation to tasks that once required expensive and experienced teams.

In other words, AI erodes the commercial value of platform expertise by making many technical tasks less scarce. Agencies responded to earlier waves of commoditization by moving up the stack, toward work that was more strategic, more customized, and harder to automate.

AI is raising the bar again. Once more, agencies need to move further up the stack. And they need to do it faster than before.

The pattern of professional survival

This is not the first time professionals have faced a major shift. Throughout history, every significant technological change has required people to adapt.

Today, skilled radiologists interpret complex scans with help from AI systems. Financial analysts use algorithmic tools to process data while focusing on high-level strategy. The professionals who understand their domain deeply find ways to work with new technology instead of competing against it.

Still, not every role survives. Elevator operators disappeared when elevators became automatic. Switchboard operators faded as direct dialing became standard.

At the same time, these shifts unlocked growth. The number of elevators increased, making tall buildings more practical. The telephone became a household staple. As routine work was automated away, new industries and careers emerged.

The same will happen with AI. Some roles will go away. Others will change. Entirely new opportunities will emerge, many in areas we have not yet imagined.

I have lived through multiple waves of technological change. I witnessed the rise of the web, which created entirely new industries and upended existing ones. I experienced the shift from hand-coding to content management systems, which helped build today's thriving agency ecosystem. I saw mobile reshape how people access information, opening up new business models.

Each transition brought real uncertainty. In the moment, the risks felt immediate and the disruption felt personal. But over time, these shifts consistently led to new forms of prosperity, new kinds of work, and new ways to create value.

The great agency unbundling

AI can help agencies streamline how they work today, but when major technology shifts happen, success rarely comes from becoming more efficient at yesterday's model.

The bigger opportunity lies in recognizing when the entire system is being restructured. The real question is not just "How do we use AI to become a more efficient agency?" but "How will the concept of an agency be redefined?".

Most agencies today bundle together strategy, design, development, project management, and ongoing maintenance. This bundle made economic sense when coordination was costly and technical skills were scarce enough to command premium rates.

AI is now unbundling that model. It separates work based on what can be automated, what clients can bring in-house, and what still requires deep expertise.

At the same time, it is rebundling services around different principles, such as speed, specialization, measurable outcomes, accountability, and the value of human judgment.

The accountability gap

As AI automates routine tasks, execution becomes commoditized. But human expertise takes on new dimensions. Strategic vision, domain expertise, and cross-industry insights remain difficult to automate.

More critically, trust and accountability stay fundamentally human. When AI hallucinates or produces unexpected results, organizations need people who can take responsibility and navigate the consequences.

We see this pattern everywhere: airline pilots remain responsible for their passengers despite autopilot handling most of the journey, insurance companies use advanced software to generate quotes but remain liable for the policies they issue, and drivers are accountable for accidents even when following GPS directions.

The tools may be automated, but responsibility for mistakes and results remains human. For agencies, this means that while AI can generate campaigns, write code, and design interfaces, clients still need someone accountable for strategy, quality, and outcomes.

This accountability gap between what AI can produce and what organizations will accept liability for creates lasting space for human expertise.

The rise of orchestration platforms

Beyond human judgment, a new architectural pattern is emerging. Traditional Digital Experience Platforms (DXPs) excel at managing complex content, workflows, and integrations within a unified system. But achieving sophisticated automation often requires significant custom development, long implementation cycles, and deep platform expertise.

Now, visual workflow builders, API orchestration platforms, and the Model Context Protocol are enabling a different approach. Instead of building custom integrations or waiting for platform vendors to add features, teams can wire together AI models, automation tools, CRMs, content systems, and analytics platforms through drag-and-drop interfaces. What once required months of development can often be prototyped in days.

But moving from prototype to production requires deep expertise. It involves architecting event-driven systems, managing state across distributed workflows, implementing proper error handling for AI failures, and ensuring compliance across automated decisions. The tools may be visual, but making them work reliably at scale, maintaining security, ensuring governance, and building systems that can evolve with changing business needs demands sophisticated technical knowledge.

This orchestration capability represents a new technical high ground. Agencies that master this expanded stack can deliver solutions faster while maintaining the reliability and scalability that enterprises require.

Six strategies for how agencies could evolve

Agencies need two types of strategies: ways to compete better in today's model and ways to position for the restructured system that's emerging.

The strategies that follow are not mutually exclusive. Many agencies will combine elements from several based on their strengths, clients, and markets.

Competing in today's market

1. Become AI-augmented, not AI-resistant. To stay competitive, agencies should explore how AI can improve efficiency across their entire operation. Developers should experiment with code assistants, project managers should use AI to draft updates and reports, and sales teams should apply it to lead qualification or proposal writing. The goal is not to replace people, but to become more effective at handling fast-paced, low-cost work while creating more space for strategic, value-added thinking.

2. Focus on outcomes, not effort. As AI reduces delivery time, billing for hours makes less sense. Agencies can shift toward pricing based on value created rather than time spent. Instead of selling a redesign, offer to improve conversion rates. This approach aligns better with client goals and helps justify pricing even as technical work becomes faster.

3. Sell through consultation, not execution. As technology changes faster than most clients can keep up with, agencies have a chance to step into a more consultative role. Instead of just delivering projects, they can help clients understand their problems and shape the right solutions. Agencies that combine technical know-how with business insight can become trusted partners, especially as clients look for clarity and results.

Positioning for what comes next

4. Become the layer between AI and clients. Don't just use AI tools to build websites faster. Position yourself as the essential layer that connects AI capabilities with real client needs. This means building quality control systems that review AI-generated code before deployment and becoming the trusted partner that translates AI possibilities into measurable results. Train your team to become "AI translators" who can explain technical capabilities in business terms and help clients understand what's worth automating versus what requires human judgment.

5. Package repeatable solutions. When custom work becomes commoditized, agencies need ways to stand out. Turn internal knowledge into named, repeatable offerings. This might look like a "membership toolkit for nonprofits" or a "lead gen system for B2B SaaS". These templated solutions are easier to explain, sell, and scale. AI lowers the cost of building and maintaining them, making this model more realistic than it was in the past. This gives agencies a way to differentiate based on expertise and value, not just technical execution.

6. Build systems that manage complex digital workflows. Stop thinking in terms of one-off websites. Start building systems that manage complex, ongoing digital workflows. Agencies should focus on orchestrating tools, data, and AI agents in real time to solve business problems and drive automation.

For example, a website might automatically generate social media posts from new blog content, update landing pages based on campaign performance, or adjust calls to action during a product launch. All of this can happen with minimal human involvement, but these systems are still non-trivial to build and require oversight and accountability.

This opportunity feels significant. As marketing stacks grow more complex and AI capabilities expand, someone needs to coordinate how these systems work together in a structured and intelligent way. This is not just about connecting APIs. It is about designing responsive, event-driven systems using low-code orchestration tools, automation platforms, and AI agents.

Open Source needs agencies, proprietary platforms don't

Every AI feature a technology platform adds potentially takes work off the agency's plate. Whether the platform is open source or proprietary, each new capability reduces the need for custom development.

But open source and proprietary platforms are driven by very different incentives.

Proprietary platforms sell directly to end clients. For them, replacing agency services is a growth strategy. The more they automate, the more revenue they keep.

This is already happening. Squarespace builds entire websites from prompts. Shopify Magic writes product descriptions and designs storefronts.

Open source platforms are adding AI features as well, but operate under different incentives. Drupal doesn't monetize end users. Drupal's success depends on a healthy ecosystem where agencies contribute improvements that keep the platform competitive. Replacing agencies doesn't help Drupal; it weakens the very ecosystem that sustains it.

As the Project Lead of Drupal, I think about how Drupal the product and its ecosystem of digital agencies can evolve together. They need to move in step to navigate change and help shape what comes next.

This creates a fundamental difference in how platforms may evolve. Proprietary platforms are incentivized to automate and sell directly. Open source platforms thrive by leaving meaningful work for agencies, who in turn strengthen the platform through contributions and market presence.

For digital agencies, one key question stands out: do you want to work with platforms that grow by replacing you, or with platforms that grow by supporting you?

Looking ahead

Digital agencies face a challenging but exciting transition. While some platform expertise is becoming commoditized, entirely new categories of value are emerging.

The long-term opportunity isn't just about getting better at being an agency using AI tools. It's about positioning yourself to capture value as digital experiences evolve around intelligent systems.

Agencies that wait for perfect tools, continue billing by the hour for custom development, try to serve all industries, or rely on platform knowledge will be fighting yesterday's battles. They're likely to struggle.

But agencies that move early, experiment with purpose, and position themselves as the essential layer between AI capabilities and real client needs are building tomorrow's competitive advantages.

Success comes from recognizing that this transition creates the biggest opportunity for differentiation that agencies have seen in years.

For those working with Drupal, the open source foundation creates a fundamental advantage. Unlike agencies dependent on proprietary platforms that might eventually compete with them, Drupal agencies can help shape the platform's AI evolution to support their success rather than replace them.

We are shifting from hand tools to power tools. The craft remains, but both how we work and what we work on are changing. We are not just upgrading our tools; we are entering a world of CNC machines and robotic arms that automate tasks once done by hand. Those who learn to use these new capabilities, combining the efficiency of automation with human judgment, will create things that were not possible before.

In the next post, I'll share why I believe Drupal is especially well positioned to lead in this new era of AI-powered digital experience.

I've rewritten this blog post at least three times. Throughout the process, I received valuable feedback from several Drupal agency leaders and contributors, whose insights helped shape the final version. In alphabetical order by last name: Jamie Abrahams, Christoph Breidert, Seth Brown, Dominique De Cooman, George DeMet, Alex Dergachev, Justin Emond, John Faber, Seth Gregory, and Michael Meyers.

If you’re running Mail-in-a-Box like me, you might rely on Duplicity to handle backups quietly in the background. It’s a great tool — until it isn’t. Recently, I ran into some frustrating issues caused by buggy Duplicity versions. Here’s the story, a useful discussion from the Mail-in-a-Box forums, and a neat trick I use to keep fallback versions handy. Spoiler: it involves an APT hook and some smart file copying! 🚀


The Problem with Duplicity Versions

Duplicity 3.0.1 and 3.0.5 have been reported to cause backup failures — a real headache when you depend on them to protect your data. The Mail-in-a-Box forum post “Something is wrong with the backup” dives into these issues with great detail. Users reported mysterious backup failures and eventually traced it back to specific Duplicity releases causing the problem.

Here’s the catch: those problematic versions sometimes sneak in during automatic updates. By the time you realize something’s wrong, you might already have upgraded to a buggy release. 😩


Pinning Problematic Versions with APT Preferences

One way to stop apt from installing those broken versions is to use APT pinning. Here’s an example file I created in /etc/apt/preferences.d/pin_duplicity.pref:

Explanation: Duplicity version 3.0.1* has a bug and should not be installed
Package: duplicity
Pin: version 3.0.1*
Pin-Priority: -1

Explanation: Duplicity version 3.0.5* has a bug and should not be installed
Package: duplicity
Pin: version 3.0.5*
Pin-Priority: -1

This tells apt to refuse to install these specific buggy versions. Sounds great, right? Except — it often comes too late. You could already have updated to a broken version before adding the pin.

Also, since Duplicity is installed from a PPA, older versions vanish quickly as new releases push them out. This makes rolling back to a known good version a pain. 😤


My Solution: Backing Up Known Good Duplicity .deb Files Automatically

To fix this, I created an APT hook that runs after every package operation involving Duplicity. It automatically copies the .deb package files of Duplicity from apt’s archive cache — and even from my local folder if I’m installing manually — into a safe backup folder.

Here’s the script, saved as /usr/local/bin/apt-backup-duplicity.sh:

#!/bin/bash
set -x

# Folder where known-good Duplicity packages are collected
mkdir -p /var/backups/debs/duplicity

# Copy any Duplicity .deb from apt's archive cache, and from /root for manual
# installs, into the backup folder; -n never overwrites files already saved
cp -vn /var/cache/apt/archives/duplicity_*.deb /var/backups/debs/duplicity/ 2>/dev/null || true
cp -vn /root/duplicity_*.deb /var/backups/debs/duplicity/ 2>/dev/null || true

And here’s the APT hook configuration I put in /etc/apt/apt.conf.d/99backup-duplicity-debs to run this script automatically after DPKG operations:

DPkg::Post-Invoke { "/usr/local/bin/apt-backup-duplicity.sh"; };

Use apt-mark hold to Lock a Working Duplicity Version 🔒

Even with pinning and local .deb backups, there’s one more layer of protection I recommend: freezing a known-good version with apt-mark hold.

Once you’ve confirmed that your current version of Duplicity works reliably, run:

sudo apt-mark hold duplicity

This tells apt not to upgrade Duplicity, even if a newer version becomes available. It’s a great way to avoid accidentally replacing your working setup with something buggy during routine updates.

🧠 Pro Tip: I only unhold and upgrade Duplicity manually after checking the Mail-in-a-Box forum for reports that a newer version is safe.

When you’re ready to upgrade, do this:

sudo apt-mark unhold duplicity
sudo apt update
sudo apt install duplicity

If everything still works fine, you can apt-mark hold it again to freeze the new version.


How to Use Your Backup Versions to Roll Back

If a new Duplicity version breaks your backups, you can easily reinstall a known-good .deb file from your backup folder:

sudo apt install --reinstall /var/backups/debs/duplicity/duplicity_<version>.deb

Replace <version> with the actual filename you want to roll back to. Because you saved the .deb files right after each update, you always have access to older stable versions — even if the PPA has moved on.


Final Thoughts

While pinning bad versions helps, having a local stash of known-good packages is a game changer. Add apt-mark hold on top of that, and you have a rock-solid defense against regressions. 🪨✨

It’s a small extra step but pays off hugely when things go sideways. Plus, it’s totally automated with the APT hook, so you don’t have to remember to save anything manually. 🎉

If you run Mail-in-a-Box or rely on Duplicity in any critical backup workflow, I highly recommend setting up this safety net.

Stay safe and backed up! 🛡✨

20 years of Linux on the Desktop (part 4)

Previously in "20 years of Linux on the Desktop": After contributing to the launch of Ubuntu as the "perfect Linux desktop", Ploum realises that Ubuntu is drifting away from both Debian and GNOME. In the meantime, mobile computing threatens to make the desktop irrelevant.

The big desktop schism

The fragmentation of the Ubuntu/GNOME communities became all too apparent when, in 2010, Mark Shuttleworth announced during the Ubuntu summit that Ubuntu would drop GNOME in favour of its own in-house, secretly developed desktop: Unity.

I was in the audience. I remember shaking my head in disbelief while Mark was talking on stage, just a few metres from me.

Working at the time in the automotive industry, I had heard rumours that Canonical was secretly talking with BMW to put Ubuntu in their cars and that there was a need for a new touchscreen interface in Ubuntu. Mark hoped to make an interface that would be the same on computers and touchscreens. Hence the name: "Unity". It made sense but I was not happy.

The GNOME community was, at the time, in great agitation about its future. Some thought that GNOME was getting boring, that there was no clear sense of direction beyond minor improvements. In 2006, the German Linux company SUSE had signed a patent agreement with Microsoft covering patents related to many Windows 95 concepts like the taskbar, the tray and the start menu. SUSE was the biggest contributor to KDE and the agreement covered that project. But Red Hat and GNOME refused to sign such an agreement, meaning that Microsoft suing the GNOME project was now plausible.

An experiment in building an alternative desktop that broke with all the Windows 95 concepts was written in JavaScript: GNOME Shell.

A JavaScript desktop? Seriously? Yeah, it was cool for screenshots but it was slow and barely usable. It was an experiment, nothing else. But there’s a rule in the software world: nobody will ever end an experiment. An experiment will always grow until it becomes too big to cancel and becomes its own project.

Providing the GNOME desktop to millions of users, Mark Shuttleworth was rightly concerned about the future of GNOME. Instead of trying to fix GNOME, he decided to abandon it. That was the end of Ubuntu as Debian+GNOME.

What concerned me was that Ubuntu was using more and more closed products. Products that were either proprietary, developed behind closed doors or, at the very least, were totally controlled by Canonical people.

In 2006, I had submitted a Summer of Code project to build a GTK interface to Ubuntu’s new bug tracker: Launchpad. Launchpad was an in-house project which looked like it was based on the Python CMS Plone and I had some experience with it. During that summer, I realised that Launchpad was, in fact, proprietary and had no API. To my surprise, there was no way I could get the source code of Launchpad. Naively, I had thought that everything Ubuntu was doing would be free software. Asking the dev team, I was promised Launchpad would become free "later". I could not understand why Canonical people were not building it in the open.

I still managed to build "Conseil" by doing web scraping but it broke with every single change done internally by the Launchpad team.

As a side note, the name "Conseil" was inspired by the book "20,000 Leagues Under the Sea" by Jules Verne, a book I had downloaded from Project Gutenberg and was reading on my Nokia 770. The device was my first e-reader and I read dozens of public domain books on it. This was made possible thanks to the power of open source: FBReader, a very good epub reader, had been easily ported to the N770 and was trivial to install.

I tried to maintain Conseil for a few months before giving up. It was my first realisation that Canonical was not 100% open source. Even technically free software was developed behind closed doors or, at the very least, with tight control over the community. This included Launchpad, Bzr, Upstart, Unity and later Mir. The worst offender would later be Snap.

To Mark Shuttleworth's credit, it should be noted that, most of the time, Canonical was really trying to fix core issues in the Linux ecosystem. In retrospect, it is easy to see those moves as "bad". But, in reality, Canonical had a strong vision, and keeping control was easier than doing everything in the open. Bzr was launched before Git existed (by a few days). Upstart was created before Systemd. Those decisions made sense at the time.

Even the move to Unity would later prove to be very strategic as, in 2012, GNOME would suddenly depend on Systemd, which had been explicitly developed as a competitor to Upstart. Ubuntu would concede defeat in 2015 by replacing Upstart with Systemd and in 2018 by reinstating GNOME as the default desktop. But none of that was a given in 2010.

But even with the benefit of the doubt, Canonical would sometimes cross huge red lines, like the time Unity came bundled with Amazon advertising, tracking you on your own desktop. This was, of course, not well received.

The end of Maemo: when incompetence is not enough, be malevolent

At the same time, in the nascent mobile world, Nokia was not the only one suffering from the growing Apple/Google duopoly. Microsoft was going nowhere with its own mobile operating system, Windows CE, and was running around like a headless chicken. The head of Microsoft's Business division, a guy named Stephen Elop, signed a contract with Nokia to develop some Microsoft Office features on Symbian. This looked like an anecdotal side business until, a few months after that contract, in September 2010, Elop left Microsoft to become… CEO of Nokia.

This was important news to me because, at 2010’s GUADEC (GNOME’s annual conference) in Den Haag, I had met a small tribe of free software hackers called Lanedo. After a few nice conversations, I was excited to be offered a position in the team.

In my mind at the time, I would work on GNOME technologies full-time while being less and less active in the Ubuntu world! I had chosen my side: I would be a GNOME guy.

I was myself more and more invested in GNOME, selling GNOME t-shirts at FOSDEM and developing "Getting Things GNOME!", a software that would later become quite popular.

Joining Lanedo without managing to land a job at Canonical (despite several tries) was the confirmation that my love affair with Ubuntu had to be ended.

In 2010, Lanedo's biggest customer was, by far, Nokia. I had been hired to work on Maemo (or maybe Meego? This was unclear). We were not thrilled to see an ex-Microsoft executive take the reins of Nokia.

As we feared, one of Elop's first actions as CEO of Nokia was to kill Maemo in an infamous "burning platform" memo. Elop is a Microsoft man and hates anything that looks like free software. In fact, like a good manager, he hates everything technical. It is all the fault of the developers, who are not "bringing their innovation to the market fast enough". Sadly, nobody highlighted the paradox that "bringing to the market" had never been the job of the developers. Elop's impact on Nokia is huge and nearly immediate: the stock is in free fall.

One Nokia developer posted on Twitter: "Developers are blamed because they did what management asked them to do". But, sometimes, management even undid the work of the developers.

The Meego team at Nokia was planning a party for the release of their first mass-produced phone, the N8. While popping Champagne during the public announcement of the N8 release, the whole team learned that the phone had eventually been shipped with… Symbian. Nobody had informed the team. Elop had been CEO for less than a week and Nokia was in total chaos.

But Stephen Elop is your typical "successful CEO". "Successful" as in inheriting one of the biggest and most successful mobile phone makers and, in a couple of years, reducing it to ashes. You can't make that kind of "success" up.

During Elop's tenure, Nokia's stock price dropped 62%, their mobile phone market share was halved, their smartphone market share fell from 33% to 3%, and the company suffered a cumulative €4.9 billion loss.

It should be noted that, against all odds, the Meego-powered Nokia N9, which succeeded the N8, was a success and gave real hope that Meego could compete with Android and iOS. The N9 was considered a "flagship" and it showed. At Lanedo, we had discussed having the company buy an N9 for each employee so we could "eat our own dog food" (something which was done at Collabora). But Elop's announcement was clearly understood as the killing of Meego/Maemo and Symbian to leave room for… Windows Phone!

The Nokia N9 was available in multiple colours (picture by Bytearray render on Wikimedia)

Well, Elop promised that, despite moving to Windows Phone, Nokia would release one Meego phone every year. I don’t remember if anyone bought that lie. We could not really believe that all those years of work would be killed just when the success of the N9 proved that we did it right. But that was it. The N9 was the first and the last of its kind.

Ironically, Nokia's very first Windows Phone, the Lumia 800, would basically be the N9 with Windows Phone replacing Meego. And it would receive worse reviews than the N9.

At that moment, one question is on everybody's lips: is Stephen Elop such a bad CEO or is he destroying Nokia on purpose? Is it typical management incompetence or malevolence? Or both?

The answer came when Microsoft, Elop's previous employer, bought Nokia for a fraction of the price it would have paid if Elop hadn't been CEO. It's hard to argue that this was not premeditated: Elop managed to discredit and kill every software-related project Nokia had ever done. That way, Nokia could be sold to Microsoft as a pure hardware maker, without being encumbered by a software culture too distant from Microsoft's. And Elop went back to his old employer a richer man, receiving a huge bonus for having tanked a company. But remember, dear MBA students: he is a "very successful manager", and you should aspire to become like him.

The ways of capitalism are inscrutable.

As foolish as it sounds, this is what the situation was: the biggest historical phone maker in the world merged with the biggest historical software maker. Vic Gundotra, head of the Google+ social network, posted: "Two turkeys don’t make an eagle." But one thing was clear: Microsoft was entering the mobile computing market because everything else was suddenly irrelevant.

All business eyes were turned towards mobile computing, where, ironically, Debian+GNOME had been a precursor.

Just when it looked like Ubuntu had managed to make Linux relevant on the desktop, nobody cared about the desktop anymore. How could Mark Shuttleworth make Ubuntu relevant in that new world?

(to be continued)

Subscribe by email or by rss to get the next episodes of "20 years of Linux on the Desktop".

I’m currently turning this story into a book. I’m looking for an agent or a publisher interested in working with me on this book and on an English translation of "Bikepunk", my new post-apocalyptic-cyclist typewritten novel which sold out in three weeks in France and Belgium.

I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

July 17, 2025

We all know MySQL InnoDB ClusterSet, a solution that links multiple InnoDB Clusters and Read Replicas asynchronously to easily generate complex MySQL architectures and manage them without burdensome commands. All this thanks to the MySQL Shell’s AdminAPI. This is an example of MySQL InnoDB ClusterSet using two data centers: Let’s explore how we can automate […]

July 16, 2025

We just got back from a family vacation exploring the Grand Canyon, Zion, and Bryce Canyon. As usual, I planned to write about our travels, but Vanessa, my wife, beat me to it.

She doesn't have a blog, but something about this trip inspired her to put pen to paper. When she shared her writing with me, I knew right away her words captured our vacation better than anything I could write.

Instead of starting from scratch, I asked if I could share her writing here. She agreed. I made light edits for publication, but the story and voice are hers. The photos and captions, however, are mine.


We just wrapped up our summer vacation with the boys, and this year felt like a real milestone. Axl graduated high school, so we let him have input on our destination. His request? To see some of the U.S. National Parks. His first pick was Yosemite, but traveling to California in July felt like gambling with wildfires. So we adjusted course, still heading west, but this time to the Grand Canyon, Zion and Bryce.

As it turned out, we didn't fully avoid the fire season. When we arrived at the Grand Canyon, we learned that wildfires had already been burning near Bryce for weeks. And by the time we were leaving Bryce, the Grand Canyon itself was under evacuation orders in certain areas due to its own active fires. We slipped through a safe window without disruption.

We kicked things off with a couple of nights in Las Vegas. The boys had never been, and it felt like a rite of passage. But after two days of blinking lights, slot machines, and entertainment, we were ready for something quieter. The highlight was seeing O by Cirque du Soleil at the Bellagio. The production had us wondering how many crew members it takes to make synchronized underwater acrobatics look effortless.

A large concrete dam spanning a deep canyon. When the Hoover Dam was built, they used an enormous amount of concrete. If they had poured it all at once, it would have taken over a century to cool and harden. Instead, they poured it in blocks and used cooling pipes to manage the heat. Even today, the concrete is still hardening through a process called hydration, so the dam keeps getting stronger over time.

On the Fourth of July, we picked up our rental car and headed to the Hoover Dam for a guided tour. We learned it wasn't originally built to generate electricity, but rather to prevent downstream flooding from the Colorado River. Built in the 1930s, it's still doing its job. And fun fact: the concrete is still curing after nearly a century. It takes about 100 years to fully cure.

While we were at the Hoover Dam, we got the news that Axl was admitted to study civil engineering. A proud moment in a special place centered on engineering and ambition.

From there, we drove to the South Rim of the Grand Canyon and checked into El Tovar. When we say the hotel sits on the rim, we mean right on the rim. Built in 1905, it has hosted an eclectic list of notable guests, including multiple U.S. presidents, Albert Einstein, Liz Taylor, and Paul McCartney. Standing on the edge overlooking the canyon, we couldn't help but imagine them taking in the same view, the same golden light, the same vast silence. That sense of shared wonder, stretched across generations, made the moment special. No fireworks in the desert this Independence Day, but the sunset over the canyon was its own kind of magic.

The next morning, we hiked the Bright Angel Trail to the 3-mile resthouse. Rangers, staff, and even Google warned us to start early. But with teenage boys and jet lag, our definition of "early" meant hitting the trail by 8:30am. By 10am, a ranger reminded us that hiking after that hour is not advised. We pressed on carefully, staying hydrated and dunking our hats and shirts at every water source. Going down was warm. Coming up? Brutal. But we made it, sweaty and proud. Our reward: showers, naps, and well-earned ice cream.

Next up: Zion. We stopped at Horseshoe Bend on the way, a worthy detour with dramatic views of the Colorado River. By the time we entered Zion National Park, we were in total disbelief. The landscape was so perfectly sculpted it didn't look real. Towering red cliffs, hanging gardens, and narrow slot canyons surrounded us. I told Dries, "It's like we're driving through Disneyland", and I meant that in the best way.

A view of a wide, U-shaped bend in the Colorado River surrounded by steep red rock cliffs. We visited Horseshoe Bend, a dramatic curve in the Colorado River near the Grand Canyon. A quiet reminder of what time and a patient river can carve out together.

After a long drive, we jumped in the shared pool at our rental house and met other first-time visitors who were equally blown away. That night, we celebrated our six year wedding anniversary with tacos and cocktails at a cantina inside a converted gas station. Nothing fancy, but a good memory.

One thing that stood out in Zion was the deer. They roamed freely through the neighborhoods and seemed unbothered by our presence. Every evening, a small group would quietly wander through our yard, grazing on grass and garden beds like they owned the place.

The next morning, we hiked The Narrows, wading through the Virgin River in full gear. Our guide shared stories and trail history, and most importantly, brought a charcuterie board. We hike for snacks, after all. Learning how indigenous communities thrived in these canyons for thousands of years gave us a deeper connection to the land, especially for me, as someone with Native heritage.

A small group walking through water into the Narrows, a narrow canyon with glowing rock walls in Zion National Park. We hiked 7.5 miles through the Narrows in Zion National Park. Most of the hike is actually in the river itself, with towering canyon walls rising all around you. One of my favorite hikes ever.
A person standing on a rock in the Narrows at Zion National Park, looking up at the tall canyon walls. Taking a moment to look up and take it all in.
Three people walking up the river with their boots in the water in the Narrows at Zion National Park. Wading forward together through the Narrows in Zion.

The following day was for rappelling, scrambling, and hiking. The boys were hyped: memories of rappelling in Spain had them convinced there would be waterfalls. Spoiler: there weren't. It hadn't rained in Zion for months. But dry riverbeds didn't dull the excitement. We even found shell fossils embedded in the sandstone. Proof the area was once underwater.

Two young adults reaching down to help a parent climb up a steep sandstone wall in Zion National Park. Time has a way of flipping the roles.
A woman wearing a helmet and sunglasses, smiling while rappelling in Zion National Park. Getting ready to rappel in Zion, and enjoying every moment of it.
A person carefully walking along a narrow sandstone slot canyon in Zion National Park. Making our way through the narrow slots in Zion.

From Zion, we headed to Bryce Canyon. The forecast promised cooler temperatures, and we couldn't wait. We stayed at Under Canvas, a glamping site set in open range cattle territory. Canvas tents with beds and private bathrooms, but no electricity or WiFi. Cue the family debate: "Is this camping or hoteling?" Dries, Axl and Stan voted for "hoteling". I stood alone on "team camping". (Spoiler: it is camping when there are no outlets.) Without our usual creature comforts, we slowed down. We read. We played board games. We played cornhole. We watched sunsets and made s'mores.

A family sits around a fire pit at a campsite in the high desert outside Bryce, Utah. Glamping in the high desert outside Bryce, Utah. Even in summer, the high elevation brings cool evenings, and the fire felt perfect after a day on the trail.

The next day, we hiked the Fairyland Loop, eight miles along the rim with panoramic views into Bryce's otherworldly amphitheater of hoodoos. The towering spires and sculpted rock formations gave the park an almost storybook quality, as if the landscape had been carved by imagination rather than erosion. Though the temperature was cooler, the sun still packed a punch, so we were glad to finish before the midday heat. At night, the temperature dropped quickly once the sun went down. We woke up to 45°F (about 7°C) mornings, layering with whatever warm clothes we had packed, which, given we planned for desert heat, wasn't much.

One of our most memorable mornings came with a 4:30 am wake-up call to watch the sunrise at Sunrise Point. We had done something similar with the boys at Acadia in 2018. It's a tough sell at that hour, but always worth it. As the sun broke over the canyon, the hoodoos lit up in shades of orange and gold unlike anything we'd seen the day before. Afterward, we hiked Navajo Loop and Queen's Garden and were ready for a big breakfast at the lodge.

A young adult wearing a hoodie overlooking the hoodoos at sunrise in Bryce Canyon National Park. Up before sunrise to watch the hoodoos glow at Bryce Canyon in Utah. Cold and early, but unforgettable.
A woman with trekking poles hiking down a switchback trail among tall orange hoodoos in Bryce Canyon National Park. Vanessa making her way down through the hoodoos on the Navajo Loop in Bryce Canyon.

Later that day, we visited Mossy Cave Trail. We followed the stream, poked around the waterfall, and hunted for fossils. Axl and I were on a mission, cracking open sandstone rocks in hopes of finding hidden treasures. Mostly, we just made a mess (of ourselves). I did stumble upon a tiny sliver of geode ... nature's way of rewarding persistence, I suppose.

Before heading to Salt Lake City for laundry (yes, that's a thing after hiking in the desert for a week), we squeezed in one more thrill: whitewater rafting on the Sevier River. Our guide, Ryan, was part comedian, part chaos agent. His goal was to get the boys drenched! The Class II and III rapids were mellow but still a blast, especially since the river was higher than expected for July. We all stayed in the raft, mostly wet, mostly laughing.

Incredibly, throughout the trip, none of us got sunburned, despite hiking in triple digit heat, rappelling down canyon walls, and rafting under a cloudless sky. We each drank about 4 to 6 liters of water a day, and no one passed out, so we're calling it a win.

On our final evening, during dinner, I pulled out a vacation questionnaire I had created without telling anyone. Since the boys aren't always quick to share what they loved, I figured it was a better way to ask everyone to rate their experience. What did they love? What would they skip next time? What do they want more of, less of, or never again? It was a simple way to capture the moment, create conversation, reflect on what stood out, and maybe even help shape the next trip. Turns out, teens do have opinions, especially when sunrises and physical exertion are involved.

This trip was special. When I was a kid, I thought hiking and parks were boring. Now, it's what Dries and I seek out. We felt grateful we could hike, rappel, raft, and laugh alongside the boys. We created memories we hope they'll carry with them, long after this summer fades. We're proud of the young men they're becoming, and we can't wait for the next chapter in our family adventures.

File deduplication isn’t just for massive storage arrays or backup systems—it can be a practical tool for personal or server setups too. In this post, I’ll explain how I use hardlinking to reduce disk usage on my Linux system, which directories are safe (and unsafe) to link, why I’m OK with the trade-offs, and how I automated it with a simple monthly cron job using a neat tool called hadori.


🔗 What Is Hardlinking?

In a traditional filesystem, every file is really an inode—the real identity of the file, holding its metadata and pointing to the data on disk—while a filename is just a directory entry that refers to that inode. A hard link is a different filename that points to the same inode. That means:

  • The file appears to exist in multiple places.
  • But there’s only one actual copy of the data.
  • Deleting one link doesn’t delete the content, unless it’s the last one.

Compare this to a symlink, which is just a pointer to a path. A hardlink is a pointer to the inode—the data itself.

So if you have 10 identical files scattered across the system, you can replace them with hardlinks, and boom—nine of them stop taking up extra space.
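
A minimal demonstration from the shell (the file names here are just an example):

# create a file and give it a second name (hard link)
echo "hello" > a.txt
ln a.txt b.txt

# both names now show the same inode number and a link count of 2
ls -li a.txt b.txt

# removing one name does not remove the data; it stays reachable through the other
rm a.txt
cat b.txt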


🤔 Why Use Hardlinking?

My servers run a fairly standard Ubuntu install, and like most Linux machines, the root filesystem accumulates a lot of identical binaries and libraries—especially across /bin, /lib, /usr, and /opt.

That’s not a problem… until you’re tight on disk space, or you’re just a curious nerd who enjoys squeezing every last byte.

In my case, I wanted to reduce disk usage safely, without weird side effects.

Hardlinking is a one-time cost with ongoing benefits. It’s not compression. It’s not archival. But it’s efficient and non-invasive.


📁 Which Directories Are Safe to Hardlink?

Hardlinking only works within the same filesystem, and not all directories are good candidates.
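
A quick way to check whether two directories live on the same filesystem (a sketch; hardlinking is only possible when the device numbers match):

# identical device numbers mean hardlinks between these trees are possible
stat -c '%d  %m  %n' /usr /opt

# or compare the filesystems directly
df --output=source,target /usr /opt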

✅ Safe directories:

  • /bin, /sbin – system binaries
  • /lib, /lib64 – shared libraries
  • /usr, /usr/bin, /usr/lib, /usr/share, /usr/local – user-space binaries, docs, etc.
  • /opt – optional manually installed software

These contain mostly static files: compiled binaries, libraries, man pages… not something that changes often.

⚠ Unsafe or risky directories:

  • /etc – configuration files, might change frequently
  • /var, /tmp – logs, spools, caches, session data
  • /home – user files, temporary edits, live data
  • /dev, /proc, /sys – virtual filesystems, do not touch

If a hardlinked file is later modified in place, the change shows up under every name that points to that inode (there is no copy-on-write for hardlinks on ext4 or xfs), so you may end up sharing data you didn’t mean to. And when a tool replaces a file by writing a new copy and renaming it (as package managers do), the link quietly breaks and you’re back where you started.

That’s why I avoid any folders with volatile, user-specific, or auto-generated files.


🧨 Risks and Limitations

Hardlinking is not magic. It comes with sharp edges:

  • One inode, multiple names: All links are equal. Editing one changes the data for all.
  • Backups: Some backup tools don’t preserve hardlinks or treat them inefficiently.
    Duplicity, which I use, does not preserve hardlinks. It backs up each linked file as a full copy, so hardlinking won’t reduce backup size.
  • Security: Linking files with different permissions or owners can have unexpected results.
  • Limited scope: Only works within the same filesystem (e.g., can’t link / and /mnt if they’re on separate partitions).

In my setup, I accept those risks because:

  • I’m only linking read-only system files.
  • I never link config or user data.
  • I don’t rely on hardlink preservation in backups.
  • I test changes before deploying.

In short: I know what I’m linking, and why.


🔍 What the Critics Say About Hardlinking

Not everyone loves hardlinks—and for good reasons. The critics’ core arguments:

  • Hardlinks violate expectations about file ownership and identity.
  • They can break assumptions in software that tracks files by name or path.
  • They complicate file deletion logic—deleting one name doesn’t delete the content.
  • They confuse file monitoring and logging tools, since it’s hard to tell if a file is “new” or just another name.
  • They increase the risk of data corruption if accidentally modified in-place by a script that assumes it owns the file.

Why I’m still OK with it:

These concerns are valid—but mostly apply to:

  • Mutable files (e.g., logs, configs, user data)
  • Systems with untrusted users or dynamic scripts
  • Software that relies on inode isolation or path integrity

In contrast, my approach is intentionally narrow and safe:

  • I only deduplicate read-only system files in /bin, /sbin, /lib, /lib64, /usr, and /opt.
  • These are owned by root, and only changed during package updates.
  • I don’t hardlink anything under /home, /etc, /var, or /tmp.
  • I know exactly when the cron job runs and what it targets.

So yes, hardlinks can be dangerous—but only if you use them in the wrong places. In this case, I believe I’m using them correctly and conservatively.


⚡ Does Hardlinking Impact System Performance?

Good news: hardlinks have virtually no impact on system performance in everyday use.

Hardlinks are a native feature of Linux filesystems like ext4 or xfs. The OS treats a hardlinked file just like a normal file:

  • Reading and writing hardlinked files is just as fast as normal files.
  • Permissions, ownership, and access behave identically.
  • Common tools (ls, cat, cp) don’t care whether a file is hardlinked or not.
  • Filesystem caches and memory management work exactly the same.

The only difference is that multiple filenames point to the exact same data.

Things to keep in mind:

  • If you edit a hardlinked file, all links see that change because there’s really just one file.
  • Some tools (backup, disk usage) might treat hardlinked files differently.
  • Debugging or auditing files can be slightly trickier since multiple paths share one inode.

But from a performance standpoint? Your system won’t even notice the difference.


🛠 Tools for Hardlinking

There are a few tools out there:

  • fdupes – finds duplicates and optionally replaces with hardlinks
  • rdfind – more sophisticated detection
  • hardlink – simple but limited
  • jdupes – high-performance fork of fdupes

📌 About Hadori

From the Debian package description:

This might look like yet another hardlinking tool, but it is the only one which only memorizes one filename per inode. That results in less memory consumption and faster execution compared to its alternatives. Therefore (and because all the other names are already taken) it’s called “Hardlinking DOne RIght”.

Advantages over other tools:

  • Predictability: arguments are scanned in order, and the first version encountered is the one that is kept
  • Much lower CPU and memory consumption compared to alternatives

This makes hadori especially suited for system-wide deduplication where efficiency and reliability matter.


⏱ How I Use Hadori

I run hadori once per month with a cron job. Here’s the actual command:

/usr/bin/hadori --verbose /bin /sbin /lib /lib64 /usr /opt

This scans those directories, finds duplicate files, and replaces them with hardlinks when safe.

And here’s the crontab entry I installed in the file /etc/cron.d/hadori:

@monthly root /usr/bin/hadori --verbose /bin /sbin /lib /lib64 /usr /opt

📉 What Are the Results?

After the first run, I saw a noticeable reduction in used disk space, especially in /usr/lib and /usr/share. On my modest VPS, that translated to about 300–500 MB saved—not huge, but non-trivial for a small root partition.
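
If you want to put a rough number on it yourself, something along these lines gives an idea (GNU find; the estimate assumes all extra links live inside the scanned directories, so treat it as an upper bound):

# how many regular files share their inode with at least one other name
find /bin /sbin /lib /lib64 /usr /opt -xdev -type f -links +1 | wc -l

# rough estimate of the space saved: (links - 1) * size, counted once per inode
find /bin /sbin /lib /lib64 /usr /opt -xdev -type f -links +1 -printf '%i %n %s\n' \
  | sort -u \
  | awk '{ saved += ($2 - 1) * $3 } END { printf "%.1f MiB\n", saved / 1024 / 1024 }'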

While this doesn’t reduce my backup size (Duplicity doesn’t support hardlinks), it still helps with local disk usage and keeps things a little tidier.

And because the job only runs monthly, it’s not intrusive or performance-heavy.


🧼 Final Thoughts

Hardlinking isn’t something most people need to think about. And frankly, most people probably shouldn’t use it.

But if you:

  • Know what you’re linking
  • Limit it to static, read-only system files
  • Automate it safely and sparingly

…then it can be a smart little optimization.

With a tool like hadori, it’s safe, fast, and efficient. I’ve read the horror stories—and decided that in my case, they don’t apply.


✉ This post was brought to you by a monthly cron job and the letters i-n-o-d-e.

July 09, 2025

A few weeks ago, I was knee-deep in CSV files. Not the fun kind. These were automatically generated reports from Cisco IronPort, and they weren’t exactly what I’d call analysis-friendly. Think: dozens of columns wide, thousands of rows, with summary data buried in awkward corners.

I was trying to make sense of incoming mail categories—Spam, Clean, Malware—and the numbers that went with them. Naturally, I opened the file in Excel, intending to wrangle the data manually like I usually do. You know: transpose the table, delete some columns, rename a few headers, calculate percentages… the usual grunt work.

But something was different this time. I noticed the “Get & Transform” section in Excel’s Data ribbon. I had clicked it before, but this time I gave it a real shot. I selected “From Text/CSV”, and suddenly I was in a whole new environment: Power Query Editor.


🤯 Wait, What Is Power Query?

For those who haven’t met it yet, Power Query is a powerful tool in Excel (and also in Power BI) that lets you import, clean, transform, and reshape data before it even hits your spreadsheet. It uses a language called M, but you don’t really have to write code—although I quickly did, of course, because I can’t help myself.

In the editor, every transformation step is recorded. You can rename columns, remove rows, change data types, calculate new columns—all through a clean interface. And once you’re done, you just load the result into Excel. Even better: you can refresh it with one click when the source file updates.


🧪 From Curiosity to Control

Back to my IronPort report. I used Power Query to:

  • Transpose the data (turn columns into rows),
  • Remove columns I didn’t need,
  • Rename columns to something meaningful,
  • Convert text values to numbers,
  • Calculate the percentage of each message category relative to the total.

All without touching a single cell in Excel manually. What would have taken 15+ minutes and been error-prone became a repeatable, refreshable process. I even added a “Percent” column that showed something like 53.4%—formatted just the way I wanted.


🤓 The Geeky Bit (Optional)

I quickly opened the Advanced Editor to look at the underlying M code. It was readable! With a bit of trial and error, I started customizing my steps, renaming variables for clarity, and turning a throwaway transformation into a well-documented process.

This was the moment it clicked: Power Query is not just a tool; it’s a pipeline.


💡 Lessons Learned

  • Sometimes it pays to explore what’s already in the software you use every day.
  • Excel is much more powerful than most people realize.
  • Power Query turns tedious cleanup work into something maintainable and even elegant.
  • If you do something in Excel more than once, Power Query is probably the better way.

🎯 What’s Next?

I’m already thinking about integrating this into more of my work. Whether it’s cleaning exported logs, combining reports, or prepping data for dashboards, Power Query is now part of my toolkit.

If you’ve never used it, give it a try. You might accidentally discover your next favorite tool—just like I did.


Have you used Power Query before? Let me know your tips or war stories in the comments!

July 02, 2025

Lately, I’ve noticed something strange happening in online discussions: the humble em dash (—) is getting side-eyed as a telltale sign that a text was written with a so-called “AI.” I prefer the more accurate term: LLM (Large Language Model), because “artificial intelligence” is a bit of a stretch — we’re really just dealing with very complicated statistics 🤖📊.

Now, I get it — people are on high alert, trying to spot generated content. But I’d like to take a moment to defend this elegant punctuation mark, because I use it often — and deliberately. Not because a machine told me to, but because it helps me think 🧠.

A Typographic Tool, Not a Trend 🖋

The em dash has been around for a long time — longer than most people realize. The oldest printed examples I’ve found are in early 17th-century editions of Shakespeare’s plays, published by the printer Okes in the 1620s. That’s not just a random dash on a page — that’s four hundred years of literary service 📜. If Shakespeare’s typesetters were using em dashes before indoor plumbing was common, I think it’s safe to say they’re not a 21st-century LLM quirk.

The Tragedy of Othello, the Moor of Venice, with long dashes (typeset here with 3 dashes)

A Dash for Thoughts 💭

In Dutch, the em dash is called a gedachtestreepje — literally, a thought dash. And honestly? I think that’s beautiful. It captures exactly what the em dash does: it opens a little mental window in your sentence. It lets you slip in a side note, a clarification, an emotion, or even a complete detour — just like a sudden thought that needs to be spoken before it disappears. For someone like me, who often thinks in tangents, it’s the perfect punctuation.

Why I Use the Em Dash (And Other Punctuation Marks)

I’m autistic, and that means a few things for how I write. I tend to overshare and infodump — not to dominate the conversation, but to make sure everything is clear. I don’t like ambiguity. I don’t want anyone to walk away confused. So I reach for whatever punctuation tools help me shape my thoughts as precisely as possible:

  • Colons help me present information in a tidy list — like this one.
  • Brackets let me add little clarifications (without disrupting the main sentence).
  • And em dashes — ah, the em dash — they let me open a window mid-sentence to give you extra context, a bit of tone, or a change in pace.

They’re not random. They’re intentional. They reflect how my brain works — and how I try to bridge the gap between thoughts and words 🌉.

It’s Not Just a Line — It’s a Rhythm 🎵

There’s also something typographically beautiful about the em dash. It’s not a hyphen (-), and it’s not a middling en dash (–). It’s long and confident. It creates space for your eyes and your thoughts. Used well, it gives writing a rhythm that mimics natural speech, especially the kind of speech where someone is passionate about a topic and wants to take you on a detour — just for a moment — before coming back to the main road 🛤.

I’m that someone.

Don’t Let the Bots Scare You

Yes, LLMs tend to use em dashes. So do thoughtful human beings. Let’s not throw centuries of stylistic nuance out the window because a few bots learned how to mimic good writing. Instead of scanning for suspicious punctuation, maybe we should pay more attention to what’s being said — and how intentionally 💬.

So if you see an em dash in my writing, don’t assume it came from a machine. It came from me — my mind, my style, my history with language. And I’m not going to stop using it just because an algorithm picked up the habit 💛.

July 01, 2025

An astronaut (Cloudflare) facing giant glowing structures (crawlers) drawing energy in an alien sunset landscape.

AI is rewriting the rules of how we work and create. Expert developers can now build faster, non-developers can build software, research is accelerating, and human communication is improving. In the next 10 years, we'll probably see a 1,000x increase in AI demand. That is why Drupal is investing heavily in AI.

But at the same time, AI companies are breaking the web's fundamental economic model. This problem demands our attention.

The AI extraction problem

For 25 years, we built the Open Web on an implicit agreement: search engines could index our content because they sent users back to our websites. That model helped sustain blogs, news sites, and even open source projects.

AI companies broke that model. They train on our work and answer questions directly in their own interfaces, cutting creators out entirely. Anthropic's crawler reportedly makes 70,000 website requests for every single visitor it sends back. That is extraction, not exchange.

This is the Makers and Takers problem all over again.

The damage is real:

  • Chegg, an online learning platform, filed an antitrust lawsuit against Google, claiming that AI-powered search answers have crushed their website traffic and revenue.
  • Stack Overflow has seen a significant drop in daily active users and new questions (about 25-50%), as more developers turn to ChatGPT for faster answers.
  • I recently spoke with a recipe blogger who is a solo entrepreneur. With fewer visitors, they're earning less from advertising. They poured their heart, craft, and sweat into creating a high-quality recipe website, but now they believe their small business won't survive.

None of this should surprise us. According to Similarweb, since Google launched "AI Overviews", the number of searches that result in no click-throughs has increased from 56% in May 2024 to 69% in May 2025, meaning users get their answers directly on the results page.

This "zero-click" phenomenon reinforces the shift I described in my 2015 post, "The Big Reverse of the Web". Ten years ago, I argued that the web was moving away from sending visitors out to independent sites and instead keeping them on centralized platforms, all in the name of providing a faster and more seamless user experience.

However, the picture isn't entirely negative. Some companies find that visitors from AI tools, while small in volume, convert at much higher rates. At Acquia, the company I co-founded, traffic from AI chatbots makes up less than 1 percent of total visitors but converts at over 6 percent, compared to typical rates of 2 to 3 percent. We are still relatively early in the AI adoption cycle, so time will tell how this trend evolves, how marketers adapt, and what new opportunities it might create.

Finding a new equilibrium

There is a reason this trend has taken hold: users love it. AI-generated answers provide instant, direct information without extra clicks. It makes traditional search engines look complicated by comparison.

But this improved user experience comes at a long-term cost. When value is extracted without supporting the websites and authors behind it, it threatens the sustainability of the content we all rely on.

I fully support improving the user experience. That should always come first. But it also needs to be balanced with fair support for creators and the Open Web.

We should design systems that share value more fairly among users, AI companies, and creators. We need a new equilibrium that sustains creative work, preserves the Open Web, and still delivers the seamless experiences users expect.

Some might worry it is already too late, since large AI companies have massive scraped datasets and can generate synthetic data to fill gaps. But I'm not so sure. The web will keep evolving for decades, and no model can stay truly relevant without fresh, high-quality content.

From voluntary rules to enforcement

We have robots.txt, a simple text file that tells crawlers which parts of a website they can access. But it's purely voluntary. Creative Commons launched CC Signals last week, allowing content creators to signal how AI can reuse their work. But both robots.txt and CC Signals are "social contracts" that are hard to enforce.

Today, Cloudflare announced they will default to blocking AI crawlers from accessing content. This change lets website owners decide whether to allow access and whether to negotiate compensation. Cloudflare handles 20% of all web traffic. When an AI crawler tries to access a website protected by Cloudflare, it must pass through Cloudflare's servers first. This allows Cloudflare to detect crawlers that ignore robots.txt directives and block them.

This marks a shift from purely voluntary signals to actual technical enforcement. Large sites could already afford their own infrastructure to detect and block crawlers or negotiate licensing deals directly. For example, Reddit signed a $60 million annual deal with Google to license its content for AI training.

However, most content creators, like you and I, can do neither.

Cloudflare's actions establish a crucial principle: AI training data has a price, and creators deserve to share in the value AI generates from their work.

The missing piece: content licensing marketplaces

Accessible enforcement infrastructure is step one, and Cloudflare now provides that. Step two would be a content licensing marketplace that helps broker deals between AI companies and content creators at any scale. This would move us from simply blocking to creating a fair economic exchange.

To the best of my knowledge, such marketplaces do not exist yet, but the building blocks are starting to emerge. Matthew Prince, CEO of Cloudflare, has hinted that Cloudflare may be working on building such a marketplace, and I think it is a great idea.

I don't know what that will look like, but I imagine something like Shutterstock for AI training data, combined with programmatic pricing like Google Ads. On Shutterstock, photographers upload images, set licensing terms, and earn money when companies license their photos. Google Ads automatically prices and places millions of ads without manual negotiations. A future content licensing marketplace could work in a similar way: creators would set licensing terms (like they do on Shutterstock), while automated systems manage pricing and transactions (as Google Ads does).

Today, only large platforms like Reddit can negotiate direct licensing deals with AI companies. A marketplace with programmatic pricing would make licensing accessible to creators of all sizes. Instead of relying on manual negotiations or being scraped for free, creators could opt into fair, programmatic licensing programs.

This would transform the dynamic from adversarial blocking to collaborative value creation. Creators get compensated. AI companies get legal, high-quality training data. Users benefit from better AI tools built on ethically sourced content.

Making the Open Web sustainable

We built the Open Web to democratize access to knowledge and online publishing. AI advances this mission of democratizing knowledge. But we also need to ensure the people who write, record, code, and share that knowledge aren't left behind.

The issue is not that AI exists. The problem is that we have not built economic systems to support the people and organizations that AI relies on. This affects independent bloggers, large media companies, and open source maintainers whose code and documentation train coding assistants.

Call me naive, but I believe AI companies want to work with content creators to solve this. Their challenge is that no scalable system exists to identify, contact, and pay millions of content creators.

Content creators lack tools to manage and monetize their rights. AI companies lack systems to discover and license content at scale. Cloudflare's move is a first step. The next step is building content licensing marketplaces that connect creators directly with AI companies.

The Open Web needs economic systems that sustain the people who create its content. There is a unique opportunity here: if content creators and AI companies build these systems together, we could create a stronger, more fair, and more resilient Web than we have had in 25 years. The jury is out on that, but one can dream.

Disclaimer: Acquia, my company, has a commercial relationship with Cloudflare, but this perspective reflects my long-standing views on sustainable web economics, not any recent briefings or partnerships.

June 25, 2025

Sometimes things go your way, sometimes just not. The townhouse we had completely fallen in love with has unfortunately been rented to someone else. A pity, but we are not giving up. We keep looking — and hopefully you can help us with that!

We are three people who want to share a house together in Ghent. We form a warm, mindful and respectful living group, and we dream of a place where we can combine calm, connection and creativity.

Who are we?

👤 Amedee (48): IT professional, balfolk dancer, amateur musician, loves board games and walking, autistic and socially engaged
👩 Chloë (almost 52): Artist, former Waldorf teacher and permaculture designer, loves creativity, cooking and nature
🎨 Kathleen (54): Doodle artist with a socio-cultural background, loves good company, being outdoors and enjoys writing

Together we want to create a home where communication, care and freedom are central. A place where you feel at home, and where there is room for small activities such as a game night, a workshop, a creative session or simply being quietly together.

What are we looking for?

🏡 A house (not an apartment) in Ghent, at most 15 minutes by bike from Gent-Sint-Pieters station
🌿 Energy-efficient: EPC label B or better
🛏 At least 3 spacious bedrooms of ±20 m²
💶 Rent:

  • up to €1650/month for 3 bedrooms
  • up to €2200/month for 4 bedrooms

Extra rooms such as an attic, guest room, studio, office or hobby room are very welcome. We love airy, multifunctional spaces that can grow along with our needs.

📅 Available: from now, October at the latest

💬 Does the house have 4 bedrooms? Then we would gladly welcome a fourth housemate who shares our values. But we deliberately want to avoid more than 4 residents — small-scale living works best for us.

Do you know of something? Let us know!

Do you know a house that fits this picture?
We are open to tips via real estate agencies, friends, neighbours, colleagues or other networks — everything helps!

📩 Contact: amedee@vangasse.eu

Thank you for keeping an eye out with us — and sharing is always appreciated 💜

June 24, 2025

A glowing light bulb hanging in an underground tunnel.

In my post about digital gardening and public notes, I shared a principle I follow: "If a note can be public, it should be". I also mentioned using Obsidian for note-taking. Since then, various people have asked about my Obsidian setup.

I use Obsidian to collect ideas over time rather than to manage daily tasks or journal. My setup works like a Commonplace book, where you save quotes, thoughts, and notes to return to later. It is also similar to a Zettelkasten, where small, linked notes build deeper understanding.

What makes such note-taking systems valuable is how they help ideas grow and connect. When notes accumulate over time, connections start to emerge. Ideas compound slowly. What starts as scattered thoughts or quotes becomes the foundation for blog posts or projects.

Why plain text matters

One of the things I appreciate most about Obsidian is that it stores notes as plain text Markdown files on my local filesystem.

Plain text files give you full control. I sync them with iCloud, back them up myself, and track changes using Git. You can search them with command-line tools, write scripts to process them outside of Obsidian, or edit them in other applications. Your notes stay portable and usable any way you want.
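
As a small illustration of what that looks like outside of Obsidian (the vault path and note name below are placeholders, not my actual setup):

# full-text search across every Markdown note in the vault
grep -ril --include='*.md' "makers and takers" ~/Documents/Obsidian

# history of a single note, assuming the vault is tracked with Git
git -C ~/Documents/Obsidian log --oneline -- "Makers and Takers.md"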

Plus, plain text files have long-term benefits. Note-taking apps come and go, companies fold, subscription models shift. But plain text files remain accessible. If you want your notes to last for decades, they need to be in a format that stays readable, editable, and portable as technology changes. A Markdown file you write today will open just fine in 2050.

All this follows what Obsidian CEO Steph Ango calls the "files over apps" philosophy: your files should outlast the tools that create them. Don't lock your thinking into formats you might not be able to access later.

My tools

Before I dive into how I use Obsidian, it is worth mentioning that I use different tools for different types of thinking. Some people use Obsidian for everything – task management, journaling, notes – but I prefer to separate those.

For daily task management and meeting notes, I rely on my reMarkable Pro. A study titled The Pen Is Mightier Than the Keyboard by Mueller and Oppenheimer found that students who took handwritten notes retained concepts better than those who typed them. Handwriting meeting notes engages deeper cognitive processing than typing, which can improve understanding and memory.

For daily journaling and event tracking, I use a custom iOS app I built myself. I might share more about that another time.

Obsidian is where I grow long-term ideas. It is for collecting insights, connecting thoughts, and building a knowledge base that compounds over time.

How I capture ideas

In Obsidian, I organize my notes around topic pages. Examples are "Coordination challenges in Open Source", "Solar-powered websites", "Open Source startup lessons", or "How to be a good dad".

I have hundreds of these topic pages. I create a new one whenever an idea feels worth tracking.

Each topic page grows slowly over time. I add short summaries, interesting links, relevant quotes, and my own thoughts whenever something relevant comes up. The idea is to build a thoughtful collection of notes that deepens and matures over time.

Some notes stay short and focused. Others grow rich with quotes, links, and personal reflections. As notes evolve, I sometimes split them into more specific topics or consolidate overlapping ones.

I do not schedule formal reviews. Instead, notes come back to me when I search, clip a new idea, or revisit a related topic. A recent thought often leads me to something I saved months or years ago, and may prompt me to reorganize related notes.

Obsidian's core features help these connections deepen. I use tags, backlinks, and the graph view to connect notes and reveal patterns between them.

How I use notes

The biggest challenge with note-taking is not capturing ideas, but actually using them. Most notes get saved and then forgotten.

Some of my blog posts grow directly from these accumulated notes. Makers and Takers, one of my most-read blog posts, pre-dates Obsidian and did not come from this system. But if I write a follow-up, it will. I have a "Makers and Takers" note where relevant quotes and ideas are slowly accumulating.

As my collection of notes grows, certain notes keep bubbling up while others fade into the background. The ones that resurface again and again often signal ideas worth writing about or projects worth pursuing.

What I like about this process is that it turns note-taking into more than just storage. As I've said many times, writing is how I think. Writing pushes me to think, and it is the process I rely on to flesh out ideas. I do not treat my notes as final conclusions, but as ongoing conversations with myself. Sometimes two notes written months apart suddenly connect in a way I had not noticed before.

My plugin setup

Obsidian has a large plugin ecosystem that reminds me of Drupal's. I mostly stick with core plugins, but use the following community ones:

  • Dataview – Think of it as SQL queries for your notes. I use it to generate dynamic lists like TABLE FROM #chess AND #opening AND #black to see all my notes on chess openings for Black. It turns your notes into a queryable database.

  • Kanban – Visual project boards for tracking progress on long-term ideas. I maintain Kanban boards for Acquia, Drupal, improvements to dri.es, and more. Unlike daily task management, these boards capture ideas that evolve over months or years.

  • Linter – Automatically formats my notes: standardizes headings, cleans up spacing, and more. It runs on save, keeping my Markdown clean.

  • Encrypt – Encrypts specific notes with password protection. Useful for sensitive information that I want in my knowledge base but need to keep secure.

  • Pandoc – Exports notes to Word documents, PDFs, HTML, and other formats using Pandoc.

  • Copilot – I'm still testing this, but the idea of chatting with your own knowledge base is compelling. You can also ask AI to help organize notes more effectively.

The Obsidian Web Clipper

The tool I'd actually recommend most isn't a traditional Obsidian plugin: it's the official Obsidian Web Clipper browser extension. I have it installed on my desktop and phone.

When I find something interesting online, I highlight it and clip it directly into Obsidian. This removes friction from the process.

I usually save just a quote or a short section of an article, not the whole article. Some days I save several clips. Other days, I save none at all.

Why this works

For me, Obsidian is not just a note-taking tool. It is a thinking environment. It gives me a place to collect ideas, let them mature, and return to them when the time is right. I do not aim for perfect organization. I aim for a system that feels natural and helps me notice connections I would otherwise miss.

June 22, 2025

OpenTofu


Terraform or OpenTofu (the open-source fork supported by the Linux Foundation) is a nice tool to set up infrastructure on different cloud environments. There is also a provider that supports libvirt.

If you want to get started with OpenTofu, there is free training available from the Linux Foundation.

I also joined the talk about OpenTofu and Infrastructure as Code in general in the Virtualization and Cloud Infrastructure devroom at FOSDEM this year.

I won’t try to explain “Declarative” vs “Imperative” in this blog post; there are already enough blog posts and websites that (try to) explain this in more detail (the resources above are a good start).

The default behaviour of OpenTofu is not to try to update an existing environment. This makes it usable to create disposable environments.
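
In practice, the whole lifecycle of such a disposable environment comes down to a few commands (a minimal sketch using the standard OpenTofu CLI; the actual configuration is not shown here):

# create the throwaway environment from the configuration in the current directory
tofu init
tofu apply -auto-approve

# ...use the virtual machine...

# and tear everything down again when you are done
tofu destroy -auto-approve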

Tails description

Tails

Tails is a nice GNU/Linux distribution to connect to the Tor network.

Personally, I’m less into the “privacy” aspect of the Tor network (although being aware that you’re tracked and followed is important), probably because I’m lucky to live in the “Free world”.

For people who are less lucky (people who live in a country where freedom of speech isn’t valued), or for journalists for example, there are good reasons to use the Tor network and hide their internet traffic.

tails/libvirt Terraform/OpenTofu module

OpenTofu

To make it easier to spin up a virtual machine with the latest Tails environment, I created a Terraform/OpenTofu module that provisions a virtual machine with the latest Tails version on libvirt.

There are security considerations when you run Tails in a virtual machine; see the Tails documentation for more information.

The source code of the module is available in its Git repository.

The module is published on the Terraform Registry and the OpenTofu Registry.

Have fun!

June 18, 2025

Have you always wanted to live together with great people in a warm, open and respectful atmosphere? Then this might be something for you.

Together with two friends, I am starting a new cohousing project in Ghent. We have our eye on a beautifully renovated townhouse, and we are looking for a fourth person to share the house with.

The house

It is a spacious townhouse full of character with energy label B+. It features:

Four full-sized bedrooms of 18 to 20 m² each

One extra room that we can set up as a guest room, office or hobby room

Two bathrooms

Two kitchens

An attic with sturdy beams — the creative ideas are already bubbling up!


The location is excellent: on the Koning Albertlaan, barely 5 minutes by bike from Gent-Sint-Pieters station and 7 minutes from the Korenmarkt. The rent is €2200 in total, which comes down to €550 per person with four residents.

The house is already available from 1 July 2025.

Who are we looking for?

We are looking for someone who recognises themselves in a number of shared values and would like to be part of a respectful, open and mindful living environment. Concretely, that means for us:

You are open to diversity in all its forms

You are respectful, communicative and considerate of others

You have an affinity with themes such as inclusion, mental health, and living together with attention for one another

You have a calm character and like to contribute to a safe, harmonious atmosphere in the house

Age is not decisive, but since we are all 40+ ourselves, we are rather looking for someone who recognises themselves in that phase of life


Something for you?

Do you feel a click with this story? Or do you have questions and want to get to know us better? Don’t hesitate to get in touch via amedee@vangasse.eu.

Is this not for you, but do you know someone who would fit perfectly into this picture? Then please share this call — thank you!

Together we can turn this house into a warm home.

June 11, 2025

I am excited to share some wonderful news—Sibelga and Passwerk have recently published a testimonial about my work, and it has been shared across LinkedIn, Sibelga’s website, and even on YouTube!


What Is This All About?

Passwerk is an organisation that matches talented individuals on the autism spectrum with roles in IT and software testing, creating opportunities based on strengths and precision. I have been working with them as a consultant, currently placed at Sibelga, Brussels’ electricity and gas distribution network operator.

The article and video highlight how being “different” does not have to be a limitation—in fact, it can be a real asset in the right context. It means a lot to me to be seen and appreciated for who I am and the quality of my work.


Why This Matters

For many neurodivergent people, the professional world can be full of challenges that go beyond the work itself. Finding the right environment—one that values accuracy, focus, and dedication—can be transformative.

I am proud to be part of a story that shows what is possible when companies look beyond stereotypes and embrace neurodiversity as a strength.


Thank you to Sibelga, Passwerk, and everyone who contributed to this recognition. It is an honour to be featured, and I hope this story inspires more organisations to open up to diverse talents.

👉 Want to know more? Check out the article or watch the video!

June 08, 2025

lookat 2.1.0rc1

Lookat 2.1.0rc1 is the latest development release of Lookat/Bekijk, a user-friendly Unix file browser/viewer that supports colored man pages.

The focus of the 2.1.0 release is to add ANSI Color support.


 

News

8 Jun 2025 Lookat 2.1.0rc1 Released

Lookat 2.1.0rc1 is the first release candidate of Lookat 2.1.0.

ChangeLog

Lookat / Bekijk 2.1.0rc1
  • ANSI Color support

Lookat 2.1.0rc1 is available for download.

Have fun!

June 04, 2025

A few weeks ago, I set off for Balilas, a balfolk festival in Janzé (near Rennes), Brittany (France). I had never been before, but as long as you have dance shoes, a tent, and good company, what more do you need?

Bananas for scale

From Ghent to Brittany… with Two Dutch Strangers

My journey began in Ghent, where I was picked up by Sterre and Michelle, two dancers from the Netherlands. I did not know them too well beforehand, but in the balfolk world, that is hardly unusual — de balfolkcommunity is één grote familie — one big family.

We took turns driving, chatting, laughing, and singing along. Google Maps logged our total drive time at 7 hours and 39 minutes.

Google knows everything
Péage – one of the many

Along the way, we had the perfect soundtrack:
🎶 French Road Trip 🇫🇷🥖 — 7 hours and 49 minutes of French and Francophone hits.

https://open.spotify.com/playlist/3jRMHCl6qVmVIqXrASAAmZ?si=746a7f78ca30488a

🍕 A Tasty Stop in Pré-en-Pail-Saint-Samson

Somewhere around dinner time, we stopped at La Sosta, a cozy Italian restaurant in Pré-en-Pail-Saint-Samson (2300 inhabitants). I had a pizza normande: tomato base, andouille, apple, mozzarella, cream, parsley. A delicious and unexpected regional twist — definitely worth remembering!

pizza normande

The pizzas were excellent, but also generously sized — too big to finish in one sitting. Heureusement, ils nous ont proposé d’emballer le reste à emporter (luckily, they offered to wrap up the leftovers to take away). That was a nice touch — and much appreciated after a long day on the road.

Just too much to eat it all

⛺ Arrival Just Before Dark

We arrived at the Balilas festival site five minutes after sunset, with just enough light left to set up our tents before nightfall. Trugarez d’an heol — thank you, sun, for holding out a little longer.

There were two other cars filled with people coming from the Netherlands, but they had booked a B&B. We chose to camp on-site to soak in the full festival atmosphere.

Enjoy the view! Banana pancakes!

Balilas itself was magical: days and nights filled with live music, joyful dancing, friendly faces, and the kind of warm atmosphere that defines balfolk festivals.

Photo: Poppy Lens

More info and photos:
🌐 balilas.lesviesdansent.bzh
📸 @balilas.balfolk on Instagram


Balfolk is more than just dancing. It is about trust, openness, and sharing small adventures with people you barely know—who somehow feel like old friends by the end of the journey.

Tot de volgende — à la prochaine — betek ar blez a zeu!
🕺💃

Thank you Maï for proofreading the Breton expressions. ❤

May 28, 2025

In the world of DevOps and continuous integration, automation is essential. One fascinating way to visualize the evolution of a codebase is with Gource, a tool that creates animated tree diagrams of project histories.

Recently, I implemented a GitHub Actions workflow in my ansible-servers repository to automatically generate and deploy Gource visualizations. In this post, I will walk you through how the workflow is set up and what it does.

But first, let us take a quick look back…


🕰 Back in 2013: Visualizing Repos with Bash and XVFB

More than a decade ago, I published a blog post about Gource (in Dutch) where I described a manual workflow using Bash scripts. At that time, I ran Gource headlessly using xvfb-run, piped its output through pv, and passed it to ffmpeg to create a video.

It looked something like this:

#!/bin/bash -ex
 
xvfb-run -a -s "-screen 0 1280x720x24" \
  gource \
    --seconds-per-day 1 \
    --auto-skip-seconds 1 \
    --file-idle-time 0 \
    --max-file-lag 1 \
    --key -1280x720 \
    -r 30 \
    -o - \
  | pv -cW \
  | ffmpeg \
    -loglevel warning \
    -y \
    -b:v 3000K \
    -r 30 \
    -f image2pipe \
    -vcodec ppm \
    -i - \
    -vcodec libx264 \
    -preset ultrafast \
    -pix_fmt yuv420p \
    -crf 1 \
    -threads 0 \
    -bf 0 \
    ../gource.mp4

This setup worked well for its time and could even be automated via cron or a Git hook. However, it required a graphical environment workaround and quite a bit of shell-fu.


🧬 From Shell Scripts to GitHub Actions

Fast forward to today, and things are much more elegant. The modern Gource workflow lives in .github/workflows/gource.yml and is:

  • 🔁 Reusable through workflow_call
  • 🔘 Manually triggerable via workflow_dispatch
  • 📦 Integrated into a larger CI/CD pipeline (pipeline.yml)
  • ☁ Cloud-native, with video output stored on S3

Instead of bash scripts and virtual framebuffers, I now use a well-structured GitHub Actions workflow with clear job separation, artifact management, and summary reporting.


🚀 What the New Workflow Does

The GitHub Actions workflow handles everything automatically:

  1. ⏱ Decides if a new Gource video should be generated, based on time since the last successful run.
  2. 📽 Generates a Gource animation and a looping thumbnail GIF.
  3. ☁ Uploads the files to an AWS S3 bucket.
  4. 📝 Posts a clean summary with links, preview, and commit info.

It supports two triggers:

  • workflow_dispatch (manual run from the GitHub UI)
  • workflow_call (invoked from other workflows like pipeline.yml)

You can specify how frequently it should run with the skip_interval_hours input (default is every 24 hours).


🔍 Smart Checks Before Running

To avoid unnecessary work, the workflow first checks:

  • If the workflow file itself was changed.
  • When the last successful run occurred.
  • Whether the defined interval has passed.

Only if those conditions are met does it proceed to the generation step.
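
For illustration, the interval check boils down to something like the following shell sketch (the timestamp of the last successful run is a hypothetical value here; how the workflow actually fetches it is not shown):

# skip the expensive Gource job if the last successful run is too recent
LAST_RUN="2025-05-27T06:00:00Z"   # hypothetical, e.g. fetched from the GitHub API
SKIP_INTERVAL_HOURS=24

elapsed_hours=$(( ( $(date -u +%s) - $(date -u -d "$LAST_RUN" +%s) ) / 3600 ))
if [ "$elapsed_hours" -lt "$SKIP_INTERVAL_HOURS" ]; then
  echo "Only ${elapsed_hours}h since the last run; skipping Gource generation."
  exit 0
fi
echo "Interval of ${SKIP_INTERVAL_HOURS}h passed; generating a new video."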


🛠 Building the Visualization

🧾 Step-by-step:

  1. Checkout the Repo
    Uses actions/checkout with fetch-depth: 0 to ensure full commit history.
  2. Generate Gource Video
    Uses nbprojekt/gource-action with configuration for avatars, title, and resolution.
  3. Install FFmpeg
    Uses AnimMouse/setup-ffmpeg to enable video and image processing.
  4. Create a Thumbnail
    Extracts preview frames and assembles a looping GIF for visual summaries (see the sketch after this list).
  5. Upload Artifacts
    Uses actions/upload-artifact to store files for downstream use.
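
As an illustration only (this is not the workflow's actual ffmpeg invocation; the duration, frame rate, and size are assumptions), a looping GIF thumbnail can be produced from the rendered video along these lines:

# take the first 5 seconds, downscale, and loop forever
ffmpeg -y -t 5 -i gource.mp4 -vf "fps=10,scale=480:-1" -loop 0 gource-latest.gif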

☁ Uploading to AWS S3

In a second job:

  • AWS credentials are securely configured via aws-actions/configure-aws-credentials.
  • Files are uploaded using a commit-specific path.
  • Symlinks (gource-latest.mp4, gource-latest.gif) are updated to always point to the latest version (see the sketch below).
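
Conceptually, that upload step amounts to something like this (bucket name, key layout, and file names are assumptions, not the workflow's actual configuration):

# store the rendered video under a commit-specific key...
aws s3 cp gource.mp4 "s3://my-bucket/gource/${GITHUB_SHA}/gource.mp4"

# ...and refresh the "latest" keys that always point to the newest version
aws s3 cp gource.mp4 "s3://my-bucket/gource/gource-latest.mp4"
aws s3 cp gource.gif "s3://my-bucket/gource/gource-latest.gif"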

📄 A Clean Summary for Humans

At the end, a GitHub Actions summary is generated, which includes:

  • A thumbnail preview
  • A direct link to the full video
  • Video file size
  • Commit metadata

This gives collaborators a quick overview, right in the Actions tab.


🔁 Why This Matters

Compared to the 2013 setup:

2013 Bash Script              2025 GitHub Actions Workflow
Manual setup via shell        Fully automated in CI/CD
Local only                    Cloud-native with AWS S3
Xvfb workaround required      Headless and clean execution
Script needs maintenance      Modular, reusable, and versioned
No summaries                  Markdown summary with links and preview

Automation has come a long way — and this workflow is a testament to that progress.


✅ Final Thoughts

This Gource workflow is now a seamless part of my GitHub pipeline. It generates beautiful animations, hosts them reliably, and presents the results with minimal fuss. Whether triggered manually or automatically from a central workflow, it helps tell the story of a repository in a way that is both informative and visually engaging. 📊✨

Would you like help setting this up in your own project? Let me know — I am happy to share.

May 23, 2025

Reducing the digital clutter of chats

I hate modern chats. They presuppose we are always online, always available to chat. They force us to see and think about them each time we get our eyes on one of our devices. Unlike mailboxes, they are never empty. We can’t even easily search through old messages (unlike the chat providers themselves, which use the logs to learn more about us). Chats are the epitome of the business idiot: they make you always busy but prevent you from thinking and achieving anything.

It is quite astonishing to realise that modern chat systems use 100 or 1000 times more resources (in size and computing power) than 30 years ago, that they are less convenient (no custom client, no search) and that they work against us (centralisation, surveillance, ads). But, yay, custom emojis!

Do not get me wrong: chats are useful! When you need an immediate interaction or a quick on-the-go message, chats are the best.

I needed to keep being able to chat while keeping the digital clutter to a minimum and preserving my own sanity. That’s how I came up with the following rules.

Rule 1: One chat to rule them all

One of the biggest problems of centralised chats is that you must be on many of them. I decided to make Signal my main chat and to remove others.

Signal was, for me, a good compromise of respecting my privacy, being open source and without ads while still having enough traction that I could convince others to join it.

Yes, Signal is centralised and has drawbacks like relying on some Google layers (which I worked around by using Molly-FOSS). I simply do not see XMPP, Matrix or SimpleX becoming popular enough in the short term. Wire and Threema had no advantages over Signal. I could not morally justify using Whatsapp or Telegram.

In 2022, as I decided to use Signal as my main chat, I deleted all accounts but Signal and Whatsapp and disabled every notification from Whatsapp, forcing myself to open it once a week to see if I had missed something important. People who really wanted to reach me quickly understood that it was better to use Signal. This worked so well that I forgot to open Whatsapp for a whole month which was enough for Whatsapp to decide that my account was not active anymore.

Not having Whatsapp is probably the best thing that happened to me regarding chats. Suddenly, I was out of tens or hundreds of group chats. Yes, I missed lots of stuff. But, most importantly, I stopped fearing missing them. Seriously, I never missed having Whatsapp. Not once. Thanks Meta for removing my account!

While travelling in Europe, it is now standard for taxis and hotels to chat with you using Whatsapp. Not anymore for me. Guess what? It works just fine. In fact, I suspect it works even better because people are forced to either do what we agreed during our call or to call me, which requires more energy and planning.

Rule 2: Mute, mute, mute!

Now that Signal is becoming more popular, some group chats are migrating to it. But I’ve learned the lesson : I’m muting them. This allows me to only see the messages when I really want to look at them. Don’t hesitate to mute vocal group chats and people with whom you don’t need day-to-day interaction.

I’m also leaving group chats that are not essential. The deletion of my Whatsapp account taught me that nearly no group chat is truly essential.

Many times, people have sent me emails about what was said in a group chat because they knew I was not there. Had I been in that group, I would probably have missed the messages and nobody would have cared. If you really want to get in touch with me, send me an email!

Rule 3: No read receipts nor typing indicators

I was busy, walking down the street with my phone in hand for directions. A notification popped up with an important message. It was important but not urgent. I could not deal with the message at that moment; I wanted to take my time. One part of my brain told me not to open the message because, if I did, the sender would see a "read receipt". He would see that I had read the message but would not receive any answer.

For him, that would probably translate into "he doesn’t care". I consciously avoided opening Signal until I was back home and could deal with the message.

That’s when I realised how invasive the "read receipt" was. I disabled it and never regretted that move. I read messages on my own schedule and reply when I want to. Nobody needs to know whether I have seen a message. Read receipts are wrong in every respect.

Signal preferences showing read receipts and typing indicator disabled

Rule 4: Temporary discussions only

The artist Bruno Leyval, who did the awesome cover of my novel Bikepunk, is obsessed with deletion and disappearance. He set our Signal chat so that every message is deleted after a day. At first, I didn’t see the point.

Until I understood that this was not only about privacy: it was also about decluttering our minds, our memories.

Since then, I’ve set every chat in Signal to delete messages after one week.

Signal preferences showing disappearing messages set to one week

This might seem like nothing, but it changes everything. Suddenly, chats are no longer a long history of clutter. Suddenly, you see chats as transient and you save the things you want to keep. Remember that you can’t search in chats? That means chats are transient anyway. With most chats, your history is not saved and could be lost simply by dropping your phone on the floor. Something important should be kept in a chat? Save it! But it should probably have been an email.

Embracing the transient nature of chat, and making it explicit, greatly reduces the clutter.

Conclusion

I know that most of you will say, "That’s nice, Ploum, but I can’t do that because everybody is on XXX", where XXX is most often Whatsapp in my own circles. But this is wrong: you believe everybody is on XXX because you yourself use XXX as your main chat. When surveying my students this year, I discovered that nearly half of them were not on Whatsapp. Not for any deep reason, but because they had never seen the need for it. In fact, they were all spread over Messenger, Instagram, Snap, Whatsapp, Telegram and Discord. And they all believed that "everybody is where I am".

In the end, the only real choice is between being able to get in touch with a lot of people immediately and having room for your mental space. I choose the latter; you might prefer the former. That’s fine!

I still don’t like chat. I’m well aware that the centralised nature of Signal makes it a short-term solution. But I’m not looking for the best sustainable chat. I just want fewer chats in my life.

If you want to get in touch, send me an email!

I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by RSS. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

May 21, 2025

During the last MySQL & HeatWave Summit, Wim Coekaerts announced that a new optimizer is available and is already enabled in MySQL HeatWave. Let’s have a quick look at it and how to use it. The first step is to verify that Hypergraph is available: The statement won’t return any error if the Hypergraph Optimizer […]

This spring was filled with music, learning, and connection. I had the opportunity to participate in three wonderful music courses, each offering something unique—new styles, deeper technique, and a strong sense of community. Here is a look back at these inspiring experiences.


🎶 1. Fiddlers on the Move – Ghent (5–9 March)

Photo: Filip Verpoest

In early March, I joined Fiddlers on the Move in Ghent, a five-day course packed with workshops led by musicians from all over the world. Although I play the nyckelharpa, I deliberately chose workshops that were not nyckelharpa-specific. This gave me the challenge and joy of translating techniques from other string traditions to my instrument.

Here is a glimpse of the week:

  • Wednesday: Fiddle singing with Laura Cortese – singing while playing was new for me, and surprisingly fun.
  • Thursday: Klezmer violin / Fiddlers down the roof with Amit Weisberger – beautiful melodies and ornamentation with plenty of character.
  • Friday: Arabic music with Layth Sidiq – an introduction to maqams and rhythmic patterns that stretched my ears in the best way.
  • Saturday: Swedish violin jamsession classics with Mia Marine – a familiar style, but always a joy with Mia’s energy and musicality.
  • Sunday: Live looping strings with Joris Vanvinckenroye – playful creativity with loops, layering, and rhythm.

Each day brought something different, and I came home with a head full of ideas and melodies to explore further.


🪗 2. Workshopweekend Stichting Draailier & Doedelzak – Sint-Michielsgestel, NL (18–21 April)

Photo: Arne de Laat

In mid-April, I traveled to Sint-Michielsgestel in the Netherlands for the annual Workshopweekend organized by Stichting Draailier & Doedelzak. This year marked the foundation’s 40th anniversary, and the event was extended to four days, from Friday evening to Monday afternoon, at the beautiful location of De Zonnewende.

I joined the nyckelharpa workshop with Rasmus Brinck. One of the central themes we explored was the connection between playing and dancing polska—a topic close to my heart. I consider myself a dancer first and a musician second, so it was especially meaningful to deepen the musical understanding of how movement and melody shape one another.

The weekend offered a rich variety of other workshops as well, including hurdy-gurdy, bagpipes, diatonic accordion, singing, and ensemble playing. As always, the atmosphere was warm and welcoming. With structured workshops during the day and informal jam sessions, concerts, and bals in the evenings, it was a perfect blend of learning and celebration.


🇸🇪 3. Swedish Music for Strings – Ronse (2–4 May)

At the beginning of May, I took part in a three-day course in Ronse dedicated to Swedish string music. Although we could arrive on 1 May, teaching started the next day. The course was led by David Eriksson and organized by Amate Galli. About 20 musicians participated—two violinists, one cellist, and the rest of us on nyckelharpa.

The focus was on capturing the subtle groove and phrasing that make Swedish folk music so distinctive. It was a joy to be surrounded by such a rich soundscape and to play in harmony with others who share the same passion. The music stayed with me long after the course ended.


✨ Final Thoughts

Each of these courses gave me something different: new musical perspectives, renewed technical focus, and most importantly, the joy of making music with others. I am deeply grateful to all the teachers, organizers, and fellow participants who made these experiences so rewarding. I am already looking forward to the next musical adventure!

May 19, 2025

MySQL Enterprise Monitor, aka MEM, retired in January 2025, after almost 20 years of exemplary service! What’s next? Of course, plenty of alternatives exist, open source, proprietary, and on the cloud. For MySQL customers, we provide two alternatives: This post focuses on the latter, as there is no apparent reason to deploy an Oracle Database […]

May 16, 2025

A short low-tech manifesto

This Saturday, 17 May, I will be cycling to Massy with Tristan Nitot to talk about "low-tech" and to sign copies of Bikepunk at the Parlons Vélo festival.

Fair warning: what follows spoils part of what I will be saying in Massy on Saturday at noon. If you are coming, stop reading here, and see you tomorrow!

What is low-tech?

The term "low-tech" intuitively conveys an opposition to technological excess ("high tech") while steering clear of technophobic extremism. It is a term that inspires enthusiasm, but one that I think needs to be made explicit, and for which I propose the following definition.

A technology is "low-tech" if the people who interact with it know how to understand how it works and are able to do so.

Knowing how to understand. Being able to understand. Two essential elements (and ones that are hard to tell apart for the Belgian that I am).

Knowing how to understand

Knowing how to understand a technology means being able to build a mental model of its inner workings.

Obviously, not everyone has the capacity to understand every technology. But it is possible to proceed by levels. Most drivers know that a petrol car burns fuel which explodes inside an engine, an explosion that drives pistons which turn the wheels. The French name is a clue in itself: a "moteur à explosion", literally an explosion engine!

Even if I understand nothing more about how an engine works, I can be certain that there are people who understand it better, often in my immediate circle. The finer the understanding, the rarer those people become, but everyone can try to improve.

The technology is simple without being simplistic. This means that its complexity can be grasped gradually, and that there are experts who grasp a given technology in its entirety.

By contrast, it is humanly impossible today to understand a modern smartphone. Only a handful of experts in the world each master one particular aspect of the object: from the design of the 5G antenna to the software that automatically retouches photos, by way of fast battery charging. And none of them masters the design of the compiler needed to make the whole thing run. Even a genius spending their life taking smartphones apart would be utterly incapable of understanding what goes on inside a device that each of us carries at all times, either in a pocket or in front of our nose!

The vast majority of smartphone users have no mental model at all of how it works. I am not talking about a wrong or simplistic model: no, there is none whatsoever. The object is "magic". Why does it display one thing rather than another? Because it is "magic". And, as with magic, you are not supposed to try to understand.

Low-tech can be extremely complex, but the very existence of that complexity must be understandable and justified. Transparent complexity naturally encourages curious minds to ask questions.

The time to understand

Understanding a technology takes time. It implies a long relationship, an experience built up over a lifetime, shared and passed on.

By contrast, high tech imposes constant renewal and updating, permanent changes of interface and functionality that reinforce the "magic" aspect and discourage those who try to build a mental model.

Low-tech must therefore be durable. Long-lasting. It must be teachable and allow that teaching to be built up progressively.

This sometimes involves effort and difficulty. Not everything can always be gradual: at some point, you have to get on your bike to learn how to keep your balance.

Being able to understand

Historically, it seems obvious that any technology could be understood. People interacting with a technology were forced to repair it, to adapt it, and therefore to understand it. Technology was essentially material, which meant it could be taken apart.

With software came a new concept: hiding how things work. And while, historically, all software was open source, the invention of proprietary software made it difficult, if not impossible, to understand a technology.

Proprietary software could only be invented thanks to the creation of a recent, and frankly absurd, concept called "intellectual property".

Having enabled the privatisation of knowledge in software, intellectual property was then extended to the material world. Suddenly, it became possible to forbid someone from even trying to understand the technology they use every day. Thanks to intellectual property, farmers suddenly find themselves forbidden from opening the hood of their own tractor.

Low-tech must be open. It must be possible to repair, modify, improve and share it.

From user to consumer

Through ever-growing complexity, constant change and the imposition of a strict "intellectual property" regime, users have been turned into consumers.

This is no accident. It is not some inevitable evolution of nature. It is a conscious choice. Every business school teaches future entrepreneurs to build themselves a captive market, to deprive their customers of as much freedom as possible, to build what the jargon calls a "moat" (the ditch that protects a castle) in order to increase "user retention".

The terms themselves become vague in order to reinforce this feeling of magic. We no longer speak, for example, of transferring a .jpg file to a remote computer, but of "saving our memories in the cloud".

Marketers made us believe that by removing complicated words they would simplify technology. The opposite is obviously true. The appearance of simplicity is an extra layer of complexity that imprisons the user. Every technology requires learning. That learning must be encouraged.

For a low-tech approach and ethic

The low-tech ethic consists in putting ourselves back at the service of the user by making their tools easier to understand.

High tech is not magic, it is conjuring. Rather than hiding the "tricks" behind showmanship, low-tech seeks to reveal them and to foster a conscious use of technology.

This does not necessarily mean simplifying everything to the extreme.

Take the example of a washing machine. We all understand that a basic machine is a rotating drum into which water and soap are injected. It is very simple and very low-tech.

One could argue that adding sensors and electronic controllers makes it possible to wash laundry more efficiently and more ecologically, by weighing the load and adapting the spin speed to the type of laundry.

From a low-tech perspective, an electronic box is added to the machine to do exactly that. If the box is removed or breaks down, the machine simply keeps working. The user can choose to unplug the box or to replace it. They understand what it is for and why it is there. They build a mental model in which the box merely presses the setting buttons at the right moment. And, above all, they do not have to send the whole machine to the scrapyard because the wifi chip no longer works and no longer receives updates, which has bricked the firmware (wait, my washing machine has a wifi chip?).

For a low-tech community

A low-tech technology encourages the user to understand it and make it their own, and gives them the opportunity to do so. It tries to remain stable over time and to standardise itself. It does not try to hide its intrinsic complexity, on the principle that simplicity comes from transparency.

This understanding, this appropriation, can only happen through interaction. A low-tech technology will therefore, by its very nature, foster the creation of communities and of human exchange around that technology.

To contribute to humanity and to its communities, a low-tech technology must belong to everyone and be part of the commons.

Which brings me to this definition, complementary and equivalent to the first:

A technology is "low-tech" if it exposes its complexity in a simple, open, transparent and durable way while belonging to the commons.

I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and in English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the complete RSS feed.

May 15, 2025

MySQL provides the MySQL Community Edition, the Open-Source version. In addition, there is the Enterprise Edition for our Commercial customers and MySQL HeatWave, our managed database service (DBaaS) on the cloud (OCI, AWS, etc.). But do you know developers can freely use MySQL Enterprise for non-commercial use? The full range of MySQL Enterprise Edition features […]