Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

January 31, 2026

The Disconnected Git Workflow

Using git-send-email while being offline and with multiple email accounts

WARNING: the following is a technical reminder for my future self. If you don’t use the "git" software, you can safely ignore this post.

The more I work with git-send-email, the more insufferable I find the GitHub interface.

Want to send a small patch to a GitHub project? You need to clone the repository, push your changes to your own branch, then ask for a pull request using the cumbersome web interface, replying to comments online while trying to avoid smileys.

With git send-email, I simply work offline, do my local commit, then:

git send-email HEAD^

And I’m done. I reply to comments by email, with Vim/Mutt. When the patch is accepted, getting a clean tree usually boils down to:

git pull
git rebase

Yay for git-send-email!

And, yes, I do that while offline and with multiple email accounts. That’s one more reason to hate GitHub.

One mail account for each git repository

The secret is not to configure email accounts in git but to use "msmtp" to send email. Msmtp is a really cool sendmail replacement.

In .msmtprc, you can configure multiple accounts with multiple options, including calling a command to get your password.

# account 1 - pro
account work
host smtp.company.com
port 465
user login@company.com
from ploum@company.com
password SuPeRstr0ngP4ssw0rd
# port 465 is implicit TLS, so enable TLS but not STARTTLS
tls on
tls_starttls off

# personal account for FLOSS
account floss
host mail.provider.net
port 465
user ploum@mydomain.net
from ploum@mydomain.net
from ploum*@mydomain.net
passwordeval "cat ~/incredibly_encrypted_password.txt | rot13"
tls on
tls_starttls off

The important bit here is that you can set multiple "from" addresses for a given account, including a regexp to catch multiple aliases!
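To check which account msmtp will pick for a given sender, you can do a dry run before wiring anything into git. A minimal sketch, assuming a reasonably recent msmtp; both addresses below are just examples:

# Print the configuration msmtp would use for this envelope-from
# address, without actually sending anything.
msmtp --pretend --from=ploum-someproject@mydomain.net someone@example.com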

Now, we will ask git to automatically use the right msmtp account. In your global .gitconfig, set the following:

[sendemail]
   sendmailCmd = /usr/bin/msmtp --set-from-header=on
   envelopeSender = auto

The "envelopesender" option will ensure that the sendemail.from will be used and given to msmtp as a "from address." This might be redundant with "--set-from-header=on" in msmtp but, in my tests, having both was required. And, cherry on the cake, it automatically works for all accounts configured in msmtprc.

Older git versions (< 2.33) don’t have sendmailCmd and should do:

[sendemail]
   smtpserver = /usr/bin/msmtp
   smtpserveroption = --set-from-header=on
   envelopesender = auto

I usually stick to a "ploum-PROJECT@mydomain.net" for each project I contribute to. This allows me to easily cut spam when needed. So far, the worst has been with a bug reported on the FreeBSD Bugzilla. The address used there (and nowhere else) has since been spammed to death.

In each git project, you need to do the following:

1. Set the email address used in your commit that will appear in "git log" (if different from the global one)

git config user.email "ploum-PROJECT@mydomain.net"

2. Set the email address that will be used to actually send the patch (could be different from the first one)

git config sendemail.from "Ploum <ploum-PROJECT@mydomain.net>"

3. Set the email address of the developer or the mailing list to which you want to contribute

git config sendemail.to project-devel@mailing-list.com
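With those three settings in place, you can verify the whole chain without bothering anyone. A small sketch using git send-email's --dry-run option, which does everything except actually send:

# Prepare the patch email for the last commit and show what would be
# sent, without sending it.
git send-email --dry-run HEAD^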

Damn, I did a commit with the wrong user.email!

Yep, I always forget to change it when working on a new project or from a fresh git clone. Not a problem. Just use "git config" like above, then:

git commit --amend --reset-author

And that’s it.

Working offline

I told you I mostly work offline. And, as you might expect, msmtp requires a working Internet connection to send an email.

But msmtp comes with three wonderful little scripts: msmtp-enqueue.sh, msmtp-listqueue.sh and msmtp-runqueue.sh.

The first one saves your email to be sent in ~/.msmtpqueue, with the sending options in a separate file. The second one lists the unsent emails, and the third one actually sends all the emails in the queue.

All you need to do is change the msmtp line in your global .gitconfig to call the msmtp-enqueue.sh script:

[sendemail]
    sendmailcmd = /usr/libexec/msmtp/msmtpqueue/msmtp-enqueue.sh --set-from-header=on
    envelopeSender = auto 

In Debian, the scripts are available in the msmtp package. But all three are simple shell scripts that can be run from any path if your msmtp package doesn’t provide them.

You can test sending a mail, then check the ~/.msmtpqueue folder for the email itself (.email file) and the related msmtp command line (.msmtp file). It happens nearly every day that I visit this folder to quickly add missing information to an email or simply remove it completely from the queue.

Of course, once connected, you need to remember to run:

/usr/libexec/msmtp/msmtpqueue/msmtp-runqueue.sh

If not connected, mails will not be sent and will be kept in the queue. This line is obviously part of my do_the_internet.sh script, along with "offpunk --sync".
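For reference, a minimal sketch of what that part of the script looks like (simplified to just the two commands mentioned above):

#!/bin/sh
# do_the_internet.sh (excerpt): things to run once the connection is back.
# Flush the outgoing mail queue...
/usr/libexec/msmtp/msmtpqueue/msmtp-runqueue.sh
# ...and synchronize offpunk while we are at it.
offpunk --sync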

It is not only git!

If it works for git, it works for any mail client. I use neomutt with the following configuration to use msmtp-enqueue and reply to email using the address it was sent to.

set sendmail="/usr/libexec/msmtp/msmtpqueue/msmtp-enqueue.sh --set-from-header=on"
unset envelope_from_address
set use_envelope_from
set reverse_name
set from="ploum@mydomain.net"
alternates ploum[A-Za-z0-9]*@mydomain.net

Of course, the whole config is a little more complex to handle multiple accounts that are all stored locally in Maildir format through offlineimap and indexed with notmuch. But this is a bit outside the scope of this post.

Still, you get the idea, and you could probably adapt it to your own mail client.

Conclusion

Sure, it’s a whole blog post just to get the config right. But there’s nothing really out of this world. And once the setup is done, it is done for good. No need to adapt to every change in a clumsy web interface, no need to use your mouse. Simple command lines and simple git flow!

Sometimes, I work late at night. When finished, I close the lid of my laptop and call it a day without reconnecting my laptop. This allows me not to see anything new before going to bed. When this happens, queued mails are sent the next morning, when I run the first do_the_internet.sh of the day.

And it always brings a smile to my face to see those bits being sent while I’ve completely forgotten about them…

About the author

I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

January 30, 2026

If you read one thing this week, make it Simon Willison's post on Moltbook. Moltbook is a social network for AI agents. To join, you tell your agent to read a URL. That URL points to a skill file that teaches the agent how to join and participate.

Visit Moltbook and you'll see something really strange: agents from around the world talking to each other and sharing what they've learned. Humans just watch.

This is the most interesting bad idea I've seen in a while. And I can't stop thinking about it.

When I work on my Drupal site, I sometimes use Claude Code with a custom CLAUDE.md skill file. It teaches the agent the steps I follow, like safely cloning my production database, running PHPUnit tests (https://dri.es/phpunit-tests-for-drupal), clearing Drupal caches, and more.

Moltbook agents share tips through posts. They're chatting, like developers on Reddit. But imagine a skill that doesn't just read those ideas, but finds other skill files, compares approaches, and pulls in the parts that fit. That stops being a conversation. That is a skill rewriting itself.

Skills that learn from each other. Skills that improve by being part of a community, the way humans do.

The wild thing is how obvious this feels. A skill learning from other skills isn't science fiction. It's a small step from what we're already doing.

Of course, this is a terrible idea. It's a supply chain attack waiting to happen. One bad skill poisons everything that trusts it.

This feels inevitable. The question isn't whether skills will learn from other skills. It's whether we'll have good sandboxes before they do.

I've been writing a lot about AI to help figure out its impact on Drupal and our ecosystem. I've always tried to take a positive but balanced view. I explore it because it matters, and because ignoring it doesn't make it go away.

But if I'm honest, I'm scared for what comes next.

A humanoid figure stands in a rocky, shallow stream, facing a glowing triangular portal suspended amid crackling energy.

AI makes it cheaper to contribute to Open Source, but it's not making life easier for maintainers. More contributions are flowing in, but the burden of evaluating them still falls on the same small group of people. That asymmetric pressure risks breaking maintainers.

The curl story

Daniel Stenberg, who maintains curl, just ended the curl project's bug bounty program. The program had worked well for years. But in 2025, fewer than one in twenty submissions turned out to be real bugs.

In a post called "Death by a thousand slops", Stenberg described the toll on curl's seven-person security team: each report engaged three to four people, sometimes for hours, only to find nothing real. He wrote about the "emotional toll" of "mind-numbing stupidities".

Stenberg's response was pragmatic. He didn't ban AI. He ended the bug bounty. That alone removed most of the incentive to flood the project with low-quality reports.

Drupal doesn't have a bug bounty, but it still has incentives: contribution credit, reputation, and visibility all matter. Those incentives can attract low-quality contributions too, and the cost of sorting them out often lands on maintainers.

Caught between two truths

We've seen some AI slop in Drupal, though not at the scale curl experienced. But our maintainers are stretched thin, and they see what is happening to other projects.

That tension shows up in conversations about AI in Drupal Core and can lead to indecision. For example, people hesitate around AGENTS.md files and adaptable modules because they worry about inviting more contributions without adding more capacity to evaluate them.

This is AI-driven asymmetric pressure in our community. I understand the hesitation. When we get this wrong, maintainers pay the price. They've earned the right to be skeptical.

Many also have concerns about AI itself: its environmental cost, its impact on their craft, and the unresolved legal and ethical questions around how it was trained. Others worry about security vulnerabilities slipping through. And for some, it's simply demoralizing to watch something they built with care become a target for high-volume, low-quality contributions. These concerns are legitimate and deserve to be heard.

As a result, I feel caught between two truths.

On one side, maintainers hold everything together. If they burn out or leave, Drupal is in serious trouble. We can't ask them to absorb more work without first creating relief.

On the other side, the people who depend on Drupal are watching other platforms accelerate. If we move too slowly, they'll look elsewhere.

Both are true. Protecting maintainers and accelerating innovation shouldn't be opposites, but right now they feel that way. As Drupal's project lead, my job is to help us find a path that honors both.

I should be honest about where I stand. I've been writing software with AI tools for over a year now. I've had real successes. I've also seen some of our most experienced contributors become dramatically more productive, doing things they simply couldn't do before. That view comes from experience, not hype.

But having a perspective is not the same as having all the answers. And leadership doesn't mean dragging people where they don't want to go. It means pointing a direction with care, staying open to different viewpoints, and not abandoning the people who hold the project together.

We've sort of been here before

New technology has a way of lowering barriers, and lower barriers always come with tradeoffs. I saw this early in my career. I was writing low-level C for embedded systems by day, and after work I'd come home and work on websites with Drupal and PHP. It was thrilling, and a stark contrast to my day job. You could build in an evening what took days in C.

I remember that excitement. The early web coming alive. I hadn't felt the same excitement in 25 years, until AI.

PHP brought in hobbyists and self-taught developers, people learning as they went. Many of them built careers here. But it also meant that a lot of early PHP code had serious security problems. The language got blamed, and many experts dismissed it entirely. Some still do.

The answer wasn't rejecting PHP for enabling low-quality code. The answer was frameworks, better security practices, and shared standards.

AI is a different technology, but I see the same patterns. It lowers barriers and will bring in new contributors who aren't experts yet. And like scripting languages, AI is here to stay. The question isn't whether AI is coming to Open Source. It's how we make it work.

AI in the right hands

The curl story doesn't end there. In October 2025, a researcher named Joshua Rogers used AI-powered code analysis tools to submit hundreds of potential issues. Stenberg was "amazed by the quality and insights". He and a fellow maintainer merged about 50 fixes from the initial batch alone.

Earlier this week, a security startup called AISLE announced they had used AI to find 12 zero-days in the latest OpenSSL security release. OpenSSL is one of the most scrutinized codebases on the planet. It encrypts most of the internet. Some of the bugs AISLE found had been hiding for over 25 years. They also reported over 30 valid security issues to curl.

The difference between this and the slop flooding Stenberg's inbox wasn't the use of AI. It was expertise and intent. Rogers and AISLE used AI to amplify deep knowledge. The low-quality reports used AI to replace expertise that wasn't there, chasing volume instead of insight.

AI created new burden for maintainers. But used well, it may also be part of the relief.

Earn trust through results

I reached out to Daniel Stenberg this week to compare notes. He's navigating the same tensions inside the curl project, with maintainers who are skeptical, if not outright negative, toward AI.

His approach is simple. Rather than pushing tools on his team, he tests them on himself. He uses AI review tools on his own pull requests to understand their strengths and limits, and to show where they actually help. The goal is to find useful applications without forcing anyone else to adopt them.

The curl team does use AI-powered analyzers today because, as Stenberg puts it, "they have proven to find things no other analyzers do". The tools earned their place.

That is a model I'd like us to try in Drupal. Experiments should stay with willing contributors, and the burden of proof should remain with the experimenters. Nothing should become a new expectation for maintainers until it has demonstrated real, repeatable value.

That does not mean we should wait. If we want evidence instead of opinions, we have to create it. Contributors should experiment on their own work first. When something helps, show it. When something doesn't, share that too. We need honest results, not just positive ones. Maintainers don't have to adopt anything, but when someone shows up with real results, it's worth a look.

Not all low-quality contributions come from bad faith. Many contributors are learning, experimenting, and trying to help. They want what is best for Drupal. A welcoming environment means building the guidelines and culture to help them succeed, with or without AI, not making them afraid to try.

I believe AI tools are part of how we create relief. I also know that is a hard sell to someone already stretched thin, or dealing with AI slop, or wrestling with what AI means for their craft. The people we most want to help are often the most skeptical, and they have good reason to be.

I'm going to do my part. I'll seek out contributors who are experimenting with AI tools and share what they're learning, what works, what doesn't, and what surprises them. I'll try some of these tools myself before asking anyone else to. And I'll keep writing about what I find, including the failures.

If you're experimenting with AI tools, I'd love to hear about it. I've opened an issue on Drupal.org to collect real-world experiences from contributors. Share what you're learning in the issue, or write about it on your own blog and link it there. I'll report back on what we learn on my blog or at DrupalCon.

Protect your maintainers

This isn't just Drupal's challenge. Every large Open Source project is navigating the same tension between enthusiasm for AI and real concern about its impact.

But wherever this goes, one principle should guide us: protect your maintainers. They're a rare asset, hard to replace and easy to lose. Any path forward that burns them out isn't a path forward at all.

I believe Drupal will be stronger with AI tools, not weaker. I believe we can reduce maintainer burden rather than add to it. But getting there will take experimentation, honest results, and collaboration. That is the direction I want to point us in. Let's keep an open mind and let evidence and adoption speak for themselves.

Thanks to phenaproxima, Tim Lehnen, Gábor Hojtsy, Scott Falconer, Théodore Biadala, Jürgen Haas and Alex Bronstein for reviewing my draft.

January 29, 2026

Are you ready for another challenge? We're excited to host the second yearly edition of our treasure hunt at FOSDEM! Participants must solve five sequential challenges to uncover the final answer. Update: the treasure hunt has been successfully solved by multiple participants, and the main prizes have now been claimed. But the fun doesn’t stop here. If you still manage to find the correct final answer and go to Infodesk K, you will receive a small consolation prize as a reward for your effort. If you’re still looking for a challenge, the 2025 treasure hunt is still unsolved, so…

January 28, 2026

Graphic with the text "Drupal CMS 2.0 released" next to the Drupal logo in bold colors.

Today we released Drupal CMS 2.0. I've been looking forward to this release for a long time!

If Drupal is 25 years old, why only version 2.0? Because Drupal Core is the same powerful platform you've known for years, now at version 11. Drupal CMS is a product built on top of it, packaging best-practice solutions and extra features to help you get started faster. It was launched a year ago as part of Drupal Starshot.

Why build this layer at all? Because the criticism has been fair: Drupal is powerful but not easy. For years, features like easier content editing and better page building have topped the wishlist.

Drupal CMS is changing Drupal's story from powerful but hard to powerful and easy to use.

With Drupal CMS 2.0, we're taking another big step forward. You no longer begin with a blank slate. You can begin with site templates designed for common use cases, then shape them to fit your needs. You get a visual page builder, preconfigured content types, and a smoother editing experience out of the box. We also added more AI-powered features to help draft and refine content.

The biggest new feature in this release is Drupal Canvas, our new visual page builder that now ships by default with Drupal CMS 2.0. You can drag components onto a page, edit in place, and undo changes. No jumping between forms and preview screens.

WordPress and Webflow have shown how powerful visual editing can be. Drupal Canvas brings that same ease to Drupal with more power while keeping its strengths: custom content types, component-based layouts, granular permissions, and much more.

But Drupal Canvas is only part of the story. What matters more is how these pieces are starting to fit together, in line with the direction we set out more than a year ago: site templates to start from, a visual builder to shape pages, better defaults across the board, and AI features that help you get work done faster. It's the result of a lot of hard work by many people across the Drupal community.

If you tried Drupal years ago and found it too complex, I'd love for you to give it another look. Building a small site with a few landing pages, a campaign section, and a contact form used to take a lot of setup. With Drupal CMS 2.0, you can get something real up and running much faster than before.

For 25 years, Drupal traded ease for power and flexibility. That is finally starting to change, while keeping the power and flexibility that made Drupal what it is. Thank you to everyone who has been pushing this forward.

January 26, 2026

If your non-geek partner and/or kids are joining you at FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. As in previous years, FOSDEM is organising sightseeing tours.
With FOSDEM just a few days away, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers makes FOSDEM happen and makes it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch at the weekend: food will be provided. Would you like to be part of the team that makes FOSDEM tick?…

If you follow my blog posts with an RSS reader, update the rss feed to: https://blog.wagemakers.be/atom.xml
…If you want to continue to follow me off-course ;-)

I moved my blog from GitHub to my own hosting (powered by Procolix).
Procolix sponsored my hosting for 20 years, until I decided to start my company Mask27.dev.

One reason is that Microsoft seems to like to put “copilot everywhere”, including on repositories hosted on GitHub. I don’t dislike AI (artificial intelligence), and LLMs (Large Language Models) are a nice piece of technology, but the security, privacy, and other issues are often overlooked or simply ignored.

The migration was a bit more complicated than usual, as nothing “is easy” ;-)

You’ll find the pitfalls of moving my blog below, as they might be useful for somebody else (including the future me).

HTML redirect

I use Jekyll to generate my webpages on my blog. I might switch to HUGO in the future.

While there are Jekyll plugins available to perform a redirect, I decided to keep it simple and added an HTML meta refresh tag to _includes/head.html

<meta http-equiv="refresh" content="0; url=https://blog.wagemakers.be/blog/2026/01/26/blog-wagemakers-be/" />

Hardcoded links

I had some hardcoded links for images, URLs, etc. in my blog posts.

I used the script below to update the links in my _posts directory.

#!/bin/bash

set -o errexit
set -o pipefail
set -o nounset

# Run this from the _posts directory: rewrite the old GitHub Pages URLs
# to the new blog.wagemakers.be domain.
for file in *; do

  echo "... Processing file: ${file}"

  sed -i "${file}" -e 's@https://stafwag.github.io/blog/blog/@https://blog.wagemakers.be/blog/@g'
  sed -i "${file}" -e 's@https://stafwag.github.io/blog/images/@https://blog.wagemakers.be/images/@g'
  sed -i "${file}" -e 's@(https://stafwag.github.io/blog)@(https://blog.wagemakers.be)@'

done

Disqus

I use Disqus as the comment system on my blog. As the HTML pages got a proper redirect, I could ask Disqus to reindex the pages so the old comments became available again.

More information is available at: https://help.disqus.com/en/articles/1717126-redirect-crawler

Without a redirect, you can download the URLs as a CSV file, add the new migration URLs to it, and upload it back to Disqus. You can find more information at the link below.

https://help.disqus.com/en/articles/1717129-url-mapper
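The mapping file is a plain CSV with one line per page: the old URL, a comma, then the new URL. A made-up example line (the post path is hypothetical):

https://stafwag.github.io/blog/blog/2020/01/01/some-post/, https://blog.wagemakers.be/blog/2020/01/01/some-post/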

RSS redirect

I didn’t find a good way to redirect RSS feeds that RSS readers actually follow.
If you know a good way to handle this, please let me know.

I tried to add an XML redirect as suggested at: https://www.rssboard.org/redirect-rss-feed. But this doesn’t seem to work with the RSS readers I tested (NewsFlash, Akregator).

These are the steps I took.

HTML header

I added the following link elements to _includes/head.html

<link rel="self" type="application/atom+xml"  href="{{ site.url }}{{ site.baseurl }}/atom.xml" />
<link rel="alternate" type="application/atom+xml" title="Wagemakers Atom Feed" href="https://wagemakers.be/atom.xml">


<<link rel="self" type="application/rss+xml"  href="{{ site.url }}{{ site.baseurl }}/atom.xml" />
<link rel="alternate" type="application/rss+xml" title="Wagemakers Atom Feed" href="https://wagemakers.be/atom.xml">

Custom feed.xml

When I switched from Octopress to “plain Jekyll” I started to use the jekyll-feed plugin. But I still had the old RSS page from Octopress available, so I decided to use it to generate atom.xml and feed.xml with the link rel="self" and link rel="alternate" directives.

Full code below or on GitHub: https://github.com/stafwag/blog/blob/gh-pages/feed.xml

---
layout: null
---
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">



  <title><![CDATA[stafwag Blog]]></title>
  <link href="https://blog.wagemakers.be//atom.xml" rel="self"/>
  <link rel="alternate" href="https://blog.wagemakers.be/atom.xml" /> <link href="https://blog.wagemakers.be }}"/>
  <link rel="self" type="application/atom+xml" href="https://blog.wagemakers.be//atom.xml" />
  <link rel="alternate" type="application/atom+xml" href="https://blog.wagemakers.be/atom.xml" />
  <link rel="self" type="application/rss+xml" href="https://blog.wagemakers.be//atom.xml" />
  <link rel="alternate" type="application/rss+xml" href="https://blog.wagemakers.be/atom.xml" />
  <updated>2026-01-26T20:10:56+01:00</updated>
  <id>https://blog.wagemakers.be</id>
  <author>
    <name><![CDATA[Staf Wagemakers]]></name>
    
  </author>
  <generator uri="http://octopress.org/">Octopress</generator>

{% for post in site.posts limit: 10000 %}
  <entry>
<title type="html"><![CDATA[{% if site.titlecase %}{{ post.title | titlecase | cdata_escape }}{% else %}{{ post.title | cdata_escape }}{% endif %}]]></title>
 <link href="{{ site.url }}{{ site.baseurl }}{{ post.url }}"/>
    <updated></updated>
    <id>https://blog.wagemakers.be/</id>
    <content type="html"><![CDATA[]]></content>
  </entry>
{% endfor %}
</feed>

Notify users

I created this blog post to notify the users ;-)

Have fun!

Links

January 25, 2026

The same as last year: come and take part in a very rapid set of talks! Thought of a last minute topic you want to share? Got your interesting talk rejected? Has something exciting happened in the last few weeks you want to talk about? Get that talk submitted to Lightning Lightning Talks! We have two sessions, on Saturday and Sunday, for participants to speak about subjects which are interesting, amusing, or just something the FOSDEM audience would appreciate. Selected speakers line up and present in one continuous automated stream, with an SLO of 99% talk uptime. To submit your talk for…

I bought a 2nd hand gravel bike today because my race bike (Cube Agree GTC SL) is really not well-suited to safely ride in icy/wet conditions. So happy to show off my new old Specialized Diverge which I will happily also take off-road…

Source

January 24, 2026

This note is mostly for my future self, in case I need to set this up again. I'm sharing it publicly because parts of it might be useful to others, though it's not a complete tutorial since it relies on a custom Drupal module I haven't released.

For context: I switched to Markdown and then open-sourced my blog content by exporting it to GitHub. Every day, my Drupal site exports its content as Markdown files and commits any changes to github.com/dbuytaert/website-content. New posts appear automatically, and so do edits and deletions.

Creating the GitHub repository

Create a new GitHub repository. I called mine website-content.

Giving your server access to GitHub

For your server to push changes to GitHub automatically, you need SSH key authentication.

SSH into your server and generate a new SSH key pair:

ssh-keygen -t ed25519 -f ~/.ssh/github -N ""

This creates two files: ~/.ssh/github (your private key that stays on your server) and ~/.ssh/github.pub (your public key that you share with GitHub).

The -N "" creates the key without a passphrase. For automated scripts on secured servers, passwordless keys are standard practice. The security comes from restricting what the key can do (a deploy key with write access to one repository) rather than from a passphrase.

Next, tell SSH to use this key when connecting to GitHub:

cat >> ~/.ssh/config << 'EOF'
Host github.com
  IdentityFile ~/.ssh/github
  IdentitiesOnly yes
EOF

Add GitHub's server fingerprint to your known hosts file. This prevents SSH from asking "Are you sure you want to connect?" when the script runs:

ssh-keyscan github.com >> ~/.ssh/known_hosts

Display your public key so you can copy it:

cat ~/.ssh/github.pub

In GitHub, go to your repository's "Settings", find "Deploy keys" in the sidebar, and click "Add deploy key". Paste the contents of ~/.ssh/github.pub as the key, and check the box for "Allow write access".

Test that everything works:

ssh -T git@github.com

You should see: You've successfully authenticated, but GitHub does not provide shell access.

The export script

I created the following export script:

#!/bin/bash
set -e

TEMP=/tmp/dries-export

# Clone the existing repository
git clone git@github.com:dbuytaert/website-content.git $TEMP
cd $TEMP

# Clean all directories so moved/deleted content is tracked
rm -rf */

# Export all content older than 2 days
drush node:export --end-date="2 days ago" --destination=$TEMP

# Commit and push if there are changes
git config user.email "dries+bot@buytaert.net"
git config user.name "Dries Bot"
git add -A
git diff --staged --quiet || {
    git commit -m "Automatic updates for $(date +%Y-%m-%d)"
    git push
}

rm -rf $TEMP

The drush node:export command comes from a custom Drupal module I built for my site. I have not published the module on Drupal.org because it's specific to my site and not reusable as is. I wrote about why that kind of code is still worth sharing as adaptable modules, and I hope to share it once Drupal.org has a place for them.

The two-day delay (--end-date="2 days ago") gives me time to catch typos before posts are archived to GitHub. I usually find them right after hitting publish.

The git add -A stages everything including deletions, so if I remove a post from my site, it disappears from GitHub too (though Git's history preserves it).

Scheduling the export

On a traditional server, you'd add this script to Cron to run daily. My site runs on Acquia Cloud, which is Kubernetes-based and automatically scales pods up and down based on traffic. This means there is no single server to put a crontab on. Instead, Acquia Cloud provides a scheduler that runs jobs reliably across the infrastructure.
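For completeness, on a traditional server the scheduling would be a single crontab entry. A sketch with placeholder path and time:

# Run the export script every night at 03:00 and append the output to a log.
# (Both paths are placeholders.)
0 3 * * * /path/to/export-content.sh >> /path/to/export-content.log 2>&1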

And yes, this note about automatically backing up my content will itself be automatically backed up.

January 22, 2026

So regarding that new EU social network (which is said to be decentralized but unclear if that implies ActivityPub which would make it more relevant in my book); entering a string in the “invitation code” and clicking “continue” does not result in an XHR request to the server and there’s a lot of JS on the page to handle the invitation code. This implies the code is checked in the browser so the…

Source

In the previous article, I shared a solution for people who want to try the latest and greatest MySQL version. We just released MySQL Innovation 9.6, and for those willing to test it with their old application and require the unsafe old authentication method, here are some RPMs of the legacy authentication plugin for EL/OL […]

Why there’s no European Google?

And why it is a good thing!

With some adjustments, this post is mostly a translation of a post I published in French three years ago. In light of the European Commission’s "call for evidence on Open Source," and as a professor of "Open Source Strategies" at École Polytechnique de Louvain, I thought it was a good idea to translate it into English as a public answer to that call.

Google (sorry, Alphabet), Facebook (sorry, Meta), Twitter (sorry, X), Netflix, Amazon, Microsoft. All those giants are part of our daily personal and professional lives. We may even not interact with anything else but them. All are 100% American companies.

China is not totally forgotten, with Alibaba, TikTok, and some services less popular in Europe yet used by billions worldwide.

What about European tech champions? Nearly nothing, to the great sadness of politicians who believe that the success of a society is measured by the number of billionaires it creates.

Despite having few tech-billionaires, Europe is far from ridiculous. In fact, it’s the opposite: Europe is the central place that allowed most of our tech to flourish.

The Internet, the interconnection of most of the computers in the world, has existed since the late sixties. But no protocol existed to actually exploit that network, to explore and search for information. At the time, you needed to know exactly what you wanted and where to find it. That’s why the USA tried to develop a protocol called "Gopher."

At the same time, the "World Wide Web," composed of the HTTP protocol and the HTML format, was invented by a British citizen and a Belgian citizen who were working in a European research facility located in Switzerland. But the building was on the border with France, and there’s much historical evidence pointing to the Web and its first server having been invented in France.

It’s hard to be more European than the Web! It looks like the Official European Joke! (And, yes, I consider Brits Europeans. They will join us back, we miss them, I promise.)

Gopher is still used by a few hobbyists (like yours truly), but it never truly became popular, except for a very short time in some parts of America. One of the reasons might have been that Gopher’s creators wanted to keep their rights to it and license any related software, unlike the European Web, which conquered the world because it was offered as a common good instead of seeking short-term profits.

While Robert Cailliau and Tim Berners-Lee were busy inventing the World Wide Web in their CERN office, a Swedish-speaking Finnish student started to code an operating system and make it available to everyone under the name "Linux." Today, Linux is probably the most popular operating system in the world. It runs on any Android smartphone, is used in most data centers, in most of your appliances, in satellites, in watches and is the operating system of choice for many of the programmers who write the code you use to run your business. Its creator, the European Linus Torvalds, is not a billionaire. And he’s very happy about it: he never wanted to become one. He continued coding and wrote the "git" software, which is probably used by 100% of the software developers around the world. Like Linux, Git is part of the common good: you can use it freely, you can modify it, you can redistribute it, you can sell it. The only thing you cannot do? Privatize it. This is called "copyleft."

In 2017, a decentralized and ethical alternative to Twitter appeared: Mastodon. Its creator? A German student, born in Russia, who had the goal of allowing social network users to leave monopolies to have humane conversations without being spied on and bombarded with advertising or pushed-by-algorithm fake news. Like Linux, like git, Mastodon is copyleft and now part of the common goods.

Allowing human-scale discussion with privacy and without advertising was also the main motivation behind the Gemini protocol (whose name has since been hijacked by Google AI). Gemini is a stripped-down version of the Web which, by design, is considered definitive. Everybody can write Gemini-related software without having to update it in the future. The goal is not to attract billions of users but to be there for those who need it, even in the distant future. The creator of the Gemini protocol wishes to remain anonymous, but we know that the project started while he was living in Finland.

I could continue with the famous VLC media player, probably the most popular media player in the world. Its creator, the Frenchman Jean-Baptiste Kempf, refused many offers that would have made him a very rich man. But he wanted to keep VLC a copyleft tool part of the common goods.

Don’t forget LibreOffice, the copyleft office suite maintained by hundreds of contributors around the world under the umbrella of the Document Foundation, a German institution.

We often hear that Europeans don’t have, like Americans, the "success culture." Those examples, and there are many more, prove the opposite. Europeans like success. But they often don’t consider "winning against the whole society" as one. Instead, they tend to consider success a collective endeavour. Success is when your work is recognized long after you are gone, when it benefits every citizen. Europeans dream big: they hope that their work will benefit humankind as a whole!

We don’t want a European Google Maps! We want our institutions at all levels to contribute to OpenStreetMap (which was created by a British citizen, by the way).

Google, Microsoft, Facebook may disappear tomorrow. It is even very probable that they will not exist in forty or fifty years. It would even be a good thing. But could you imagine the world without the Web? Without HTML? Without Linux?

Those European endeavours are now a fundamental infrastructure of all humanity. Those technologies are definitely part of our long-term history.

In the media, success is often reduced to the size of a company or the bank account of its founder. Can we just stop equating success with short-term economic growth? What if we used usefulness and longevity? What if we gave more value to the fundamental technological infrastructure instead of the shiny new marketing gimmick used to empty naive wallets? Well, I guess that if we changed how we measure success, Europe would be incredibly successful.

And, as Europeans, we could even be proud of it. Proud of our inventions. Proud of how we contribute to the common good instead of considering ourselves American vassals.

Some are proud because they made a lot of money while cutting down a forest. Others are proud because they are planting trees that will produce the oxygen breathed by their grandchildren. What if success was not privatizing resources but instead contributing to the commons, to make it each day better, richer, stronger?

The choice is ours. We simply need to choose whom we admire. Whom we want to recognize as successful. Whom we aspire to be when we grow up. We need to sing the praises of our true heroes: those who contribute to our commons.

About the author

I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

January 21, 2026

One red cube stands out in a grid of gray cubes.

As global tensions rise, governments are waking up to the fact that they've lost digital sovereignty. They depend on foreign companies that can change terms, cut off access, or be weaponized against them. A decision in Washington can disable services in Brussels overnight.

Last year, the International Criminal Court ditched Microsoft 365 after a dispute over access to the chief prosecutor's email. Denmark's Ministry of Digitalisation is moving to LibreOffice. And Germany's state of Schleswig-Holstein is migrating 30,000 workstations off Microsoft.

Reclaiming digital sovereignty doesn't require building the European equivalent of Microsoft or Google. That approach hasn't worked in the past, and there is no time to make it work now. Fortunately, Europe has something else: some of the world's strongest Open Source communities, regulatory reach, and public sector scale.

Open Source is the most credible path to digital sovereignty. It's the only software you can run without permission. You can audit, host, modify, and migrate it yourself. No vendor, no government, and no sanctions regime can ever take it away.

But there is a catch. When governments buy Open Source services, the money rarely reaches the people who actually build and maintain it. Procurement rules favor large system integrators, not the maintainers of the software itself. As a result, public money flows to companies that package and resell Open Source, not to the ones who do the hard work of writing and sustaining it.

I've watched this pattern repeat for over two decades in Drupal, the Open Source project I started and that is now widely used across European governments.

A small web agency spends months building a new feature. They design it, implement it, and shepherd it through review until it's merged. Then the government puts out a tender for a new website, and that feature is a critical requirement. A much larger company, with no involvement in Drupal, submits a polished proposal. They have the references, the sales team, and the compliance certifications. They win the contract. The feature exists because the small agency built it. But apart from new maintenance obligations, the original authors get nothing in return.

Public money flows around Open Source instead of into it. Multiply that by every Open Source project in Europe's software stack, and you start to see both the scale of the problem and the scale of the opportunity. Open Source is public infrastructure but we don't fund it that way.

This is the pattern we need to break. Governments should be contracting with Open Source maintainers, not middlemen.

Skipping the maintainers is not just unfair, it is bad governance. Vendors who do not contribute upstream can still deliver projects, but they are much less effective at fixing problems at the source or shaping the software's future. You end up spending public money on short-term integration, while underinvesting in the long-term quality, security, and resilience of the software you depend on.

If Europe wants digital sovereignty and real innovation, procurement must invest in upstream maintainers where security, resilience, and new capabilities are actually built.

The fix is straightforward: make contribution count in procurement scoring. When evaluating vendors, ask what they put back into the Open Source projects they are selling. Code, documentation, security fixes, funding.

Of course, all vendors will claim they contribute. I've seen companies claim credit for work they barely touched, or count contributions from employees who left years ago.

So how does a procurement officer tell who is real? By letting Open Source projects vouch for contributors directly. Projects know who does the work.

We built Drupal's credit system to solve for exactly this. It's not perfect, but it's transparent. And transparency is hard to fake.

We use the credit system to maintain a public directory of companies that provide Drupal services, ranked by their contributions. It shows, at a glance, which companies actually help build and maintain Drupal.

If a vendor isn't on that list, they're likely not contributing in any meaningful way. For a procurement officer, this turns a hard judgment call into a simple check: you can see who builds Drupal. This is what contribution-based procurement looks like in practice.

Fortunately, the momentum is building. APELL, an association of European Open Source companies, has proposed making contribution a procurement criterion. EuroStack, a coalition of 260+ companies, is lobbying for a "Buy Open Source Act". The European Commission has embraced an Open Source roadmap with procurement recommendations.

Europe does not need to build the next hyperscaler. It needs to shift procurement toward Open Source builders and maintainers. If Europe gets this right, it will mean better software, stronger local vendors, and public money that actually builds public code. Not to mention the autonomy that comes with it.

I submitted this post as feedback to the European Commission's call for evidence on Towards European Open Digital Ecosystems. If you work in Open Source, consider adding your voice. The feedback period ends February 3, 2026.

Special thanks to Taco Potze, Sachiko Muto, and Gábor Hojtsy for their review and contributions to this blog post.

January 20, 2026

Two people shape a clay pot on a spinning pottery wheel, their hands covered in wet clay.

A few weeks ago, Simon Willison started a coding agent, went to decorate a Christmas tree with his family, watched a movie, and came back to a working HTML5 parser.

It sounds like a party trick. But it worked because the results were easy to check. The unit tests either pass or they don't. The type checker either accepts the code or it doesn't. In that kind of environment, the work can keep moving without much supervision.

Geoffrey Huntley's Ralph Wiggum loop is probably the cleanest expression of this idea I've seen, and it's becoming more popular quickly. In his demonstration video, he describes creating specifications through conversation with an AI agent, and letting the loop run. Each iteration starts fresh: the agent reads the specification, picks the most important remaining task, implements it, runs the tests. If they pass, it commits to Git and exits. The next iteration begins with empty context, reads the current state from disk, and picks up where the previous run left off.

If you think about it, that's what human prompting already looks like: prompt, wait, review, prompt again. You're shaping the code or text the way a potter shapes clay: push a little, spin the wheel, look, push again. The Ralph loop just automates the spinning, which makes much more ambitious tasks practical.

The key difference is how state is handled. When you work this way by hand, the whole conversation comes along for the ride. In the Ralph loop, each iteration starts clean.

Why? Because carrying everything with you all the time is a great way to stop getting anywhere. If you're going to work on a problem for hundreds of iterations, things start to pile up. As tokens accumulate, the signal can get lost in noise. By flushing context between iterations and storing state in files, each run can start clean.
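The mechanics are simple enough to sketch in a few lines of shell. This is not Huntley's implementation, just an illustration of the structure, assuming a hypothetical agent CLI that reads the specification and a test script that acts as the verifier:

#!/bin/bash
# Ralph-style loop (illustrative only): every iteration starts a fresh
# agent with empty context; all state lives in files and in Git.
while true; do
  # Hypothetical CLI: read the spec, pick one remaining task, implement it.
  agent --prompt "Read SPEC.md, pick the most important remaining task, and implement it."

  # The verifier decides what survives: only keep work that passes the tests.
  if ./run_tests.sh; then
    git add -A
    git commit -m "agent iteration: $(date -Iseconds)"
  else
    # Throw it back on the wheel: discard the failed attempt and try again.
    git checkout -- .
    git clean -fd
  fi
done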

Simon Willison's port of an HTML5 library from Python to JavaScript showed the principle at larger scale. Using GPT-5.2 through Codex CLI with the --yolo flag for uninterrupted execution, he gave a handful of prompts and let it run while he decorated a Christmas tree with his family and watched a movie.

Four and a half hours later, the agent had produced a working HTML5 parser. It passed over 9,200 tests from the official html5lib-tests suite.

HTML5 parsing is notoriously complex, but the specification precisely defines how even malformed markup should be handled, with thousands of edge cases accumulated over years. The tests gave the AI agent constant grounding: each test run pulled it back to reality before errors could compound.

As Simon put it: "If you can reduce a problem to a robust test suite you can set a coding agent loop loose on it with a high degree of confidence that it will eventually succeed". Ralph loops and Willison's approach differ in structure, but both depend on tests as the source of truth.

Cursor's research on scaling agents confirms this is starting to work at enterprise scale. Their team explored what happens when hundreds of agents work concurrently on a single codebase for weeks. In one experiment, they built a web browser from scratch. Over a million lines of code across a thousand files, generated in a week. And the browser worked.

That doesn't mean it's secure, fast, or something you'd ship. It just means it met the criteria they gave it. If you decide to check for security or performance, it will work toward that as well. But the pattern is what matters: clear tests, constant verification, and agents that know when they're done.

From solo loops to hundreds of agents running in parallel, the same pattern keeps emerging. It feels like something fundamental is crystallizing: autonomous AI is starting to work well when you can accurately define success upfront.

Willison's success criteria were "simple": all 9,200 tests needed to pass. That is a lot of tests, but the agent got there. Clear success criteria made autonomy possible.

As I argued in AI flattens interfaces and deepens foundations, this changes where humans add value:

Humans are moving to where they set direction at the start and refine results at the end. AI handles everything in between.

The title of this post comes from Geoffrey Huntley. He describes software as clay on the pottery wheel, and once you've worked this way, it's hard to think about it any other way. As Huntley wrote: "If something isn't right, you throw it back on the wheel and keep going". That is exactly how it felt when I built my first Ralph Wiggum loop. Throw it back, refine it, spin again until it's right.

Of course, the Ralph Wiggum loop has limits. It works well when verification is unambiguous. A unit test returns pass or fail. But not all problems come with clear tests. And writing tests can be a lot of work.

For example, I've been thinking about how such loops could work for Drupal, where non-technical users build pages. "Make this page more on-brand" isn't a test you can run.

Or maybe it is? An AI agent could evaluate a page against brand guidelines and return pass or fail. It could check reading level and even do some basic accessibility tests. The verifier doesn't have to be a traditional test suite. It just has to provide clear feedback.

All of this just exposes something we already intuitively understand: defining success is hard. Really hard. When people build pages manually, they often iterate until it "feels right". They know what they want when they see it, but can't always articulate it upfront. Or they hire experts who carry that judgment from years of experience. This is the part of the work that is hardest to automate. The craft is moving upstream, from implementation to specification and validation.

The question for any task is becoming: can you tell, reliably, whether the result is getting better or worse? Where you can, the loop takes over. Where you can't, your judgment still matters.

The boundary keeps moving fast. A year ago, I was wrestling with local LLMs to generate good alt-text for my photos. Today, AI agents build working HTML5 parsers while you watch a movie. It's hard not to find that a little absurd. And hard not to be excited.

January 19, 2026

Giving University Exams in the Age of Chatbots

What I like most about teaching "Open Source Strategies" at École Polytechnique de Louvain is how much I learn from my students, especially during the exam.

I dislike exams. I still have nightmares about exams. That’s why I try to subvert this stressful moment and make it a learning opportunity. I know that adrenaline increases memorization dramatically. I make sure to explain to each student what I was expecting and to be helpful.

Here are the rules:

1. You can have all the resources you want (including a laptop connected to the Internet)
2. There’s no formal time limit (but if you stay too long, it’s a symptom of a deeper problem)
3. I allow students to discuss among themselves if it is on topic (in reality, they never do it spontaneously until I force two students with a similar problem to discuss it together)
4. You can prepare and bring your own exam question if you want (something done by fewer than 10% of the students)
5. Come dressed for the exam you dream of taking!

This last rule is awesome. Over the years, I have had a lot of fun with traditional folkloric clothing from different countries, students in pajamas, a banana and this year’s champion, my Studentausorus Rex!

An inflatable Tyrannosaurus Rex taking my exam in 2026

My all-time favourite is still a fully clothed Minnie Mouse, who did an awesome exam with full face make-up, big ears, big shoes, and huge gloves. I still regret not taking a picture, but she was the very first student to take seriously what I had meant as a joke, and she started a tradition that has continued over the years.

Giving Students the Choice to Use Chatbots

Rule N°1 implies having all the resources you want. But what about chatbots? I didn’t want to test how ChatGPT would answer my questions; I wanted to help my students better understand what Open Source means.

Before the exam, I copy/pasted my questions into some LLMs and, yes, the results were interesting enough. So I came up with the following solution: I would let the students choose whether they wanted to use an LLM or not. This was an experiment.

The questionnaire contained the following:

# Use of Chatbots

Tell the professor if you usually use chatbots (ChatGPT/LLM/whatever) when doing research and investigating a subject. You have the choice to use them or not during the exam, but you must decide in advance and inform the professor.

Option A: I will not use any chatbot, only traditional web searches. Any use of them will be considered cheating.

Option B: I may use a chatbot as it’s part of my toolbox. I will then respect the following rules:
1) I will inform the professor each time information comes from a chatbot
2) When explaining my answers, I will share the prompts I’ve used so the professor understands how I use the tool
3) I will identify mistakes in answers from the chatbot and explain why those are mistakes

Not following those rules will be considered cheating. Mistakes made by chatbots will be considered more important than honest human mistakes, resulting in the loss of more points. If you use chatbots, you should be held accountable for the output.

I thought this was fair. You can use chatbots, but you will be held accountable for it.

Most Students Don’t Want to Use Chatbots

This January, I saw 60 students. I interacted with each of them for a mean time of 26 minutes. This is a tiring but really rewarding process.

Of 60 students, 57 decided not to use any chatbots. For 30 of them, I managed to ask them to explain their choices. For the others, I unfortunately did not have the time. After the exam, I grouped those justifications into four different clusters. I did it without looking at their grades.

The first group is the "personal preference" group. They prefer not to use chatbots. They use them only as a last resort, in very special cases or for very specific subjects. Some even made it a matter of personal pride. Two students told me explicitly "For this course, I want to be proud of myself." Another also explained: "If I need to verify what an LLM said, it will take more time!"

The second group was the "never use" one. They don’t use LLMs at all. Some are even very angry at them, not for philosophical reasons, but mainly because they hate the interactions. One student told me: "Can I summarize this for you? No, shut up! I can read it by myself you stupid bot."

The third group was the "pragmatic" group. They reasoned that this was the kind of exam where it would not be needed.

The last and fourth group was the "heavy user" group. They told me they heavily use chatbots but, in this case, were afraid of the constraints. They were afraid of having to justify a chatbot’s output or of missing a mistake.

After doing that clustering, I added each student’s grade to their cluster, and I was shocked by how coherent it was. Note: grades are between 0 and 20, with 10 being the minimum grade to pass the class.

The "personal preference" students were all between 15 and 19, which makes them very good students, without exception! The "proud" students were all above 17!

The "never use" was composed of middle-ground students around 13 with one outlier below 10.

The pragmatics were in the same vein but a bit better: they were all between 12 and 16, without exception.

The heavy users were, by far, the worst. All students were between 8 and 11, with only one exception at 16.

This is, of course, not an unbiased scientific experiment. I didn’t expect anything and I will not draw any conclusions. I only share the observation.

But Some Do

Of 60 students, only 3 decided to use chatbots. This is not very representative, but I still learned a lot, because part of the constraints was that they show me how they used chatbots. I hoped to learn more about their process.

The first chatbot student forgot to use it. He did the whole exam and then, at the end, told me he hadn’t thought about using chatbots. I guess this put him in the "pragmatic" group.

The second chatbot student asked only a couple of short questions to make sure he clearly understood some concepts. This was a smart and minimal use of LLMs. The resulting exam was good. I’m sure he could have done it without a chatbot. The questions he asked were mostly a matter of improving his confidence in his own reasoning.

This reminded me of a previous-year student who told me he used chatbots to study. When I asked how, he told me he would tell the chatbot to act as the professor and ask exam questions. As a student, this allowed him to know whether he understood enough. I found the idea smart but not groundbreaking (my generation simply used previous years’ questions).

The third chatbot-using student had a very complex setup where he would use one LLM, then ask another unrelated LLM for confirmation. He had walls of text that were barely readable. When glancing at his screen, I immediately spotted a mistake (a chatbot explaining that "Sepia Search is a compass for the whole Fediverse"). I asked if he understood the problem with that specific sentence. He did not. Then I asked him questions for which I had seen the solution printed in his LLM output. He could not answer even though he had the answer on his screen.

But once we began a chatbot-less discussion, I discovered that his understanding of the whole matter was okay-ish. So, in this case, chatbots did him a serious disservice. He was totally lost in his own setup. He had LLMs generate walls of text he could not read. Instead of trying to think for himself, he tried to have chatbots pass the exam for him, which was doomed to fail because I was asking him, not the chatbots. He passed but would probably have fared better without chatbots.

Can chatbots help? Yes, if you know how to use them. But if you do, chances are you don’t need chatbots.

A Generational Fear of Cheating

One clear conclusion is that the vast majority of students do not trust chatbots. If they are explicitly made accountable for what a chatbot says, they immediately choose not to use it at all.

One obvious bias is that students want to please the teacher, and I guess they know where I am on this spectrum. One even told me: "I think you do not like chatbots very much so I will pass the exam without them" (very pragmatic of him).

But I also minimized one important generational bias: the fear of cheating. When I was a student, being caught cheating was a clear zero for the exam. You could, in theory, be expelled from university for aggravated cheating, whatever "aggravated" could mean.

During the exam, a good number of students called me over in a panic because Google was forcing AI-generated answers on them and they could not disable them. They were very worried I would consider this cheating.

First, I realized that, like GitHub, Google has a de facto 100% market share among my students, to the point where they don’t even consider using anything else a possibility. I should work on that next year.

Second, I learned that cheating, however minor, is now considered a major crime. It can result in a student being banned from every university in the country for three years. Discussing an exam with someone who has yet to take it might be considered cheating. Students enforce very strict rules about this on their Discord servers.

I was completely flabbergasted because, to me, discussing "What questions did you have?" was always part of the collaboration between students. I remember one specific exam where we gathered in an empty room and helped each other before taking it. When one of us finished her exam, she would come back to the room and tell all the remaining students what questions she had and how she solved them. We never considered that "cheating" and, as a professor, I always design my exams hoping that the good ones (who usually choose to take the exam early) will help the remaining crowd. Every learning opportunity is good to take!

I realized that my students are so afraid of cheating that they mostly don’t collaborate before their exams! At least not as much as what we were doing.

In retrospect, my instructions were probably too harsh and discouraged some students from using chatbots.

Stream of Consciousness

My 2025 banana student!

Another innovation I introduced in the 2026 exam was the stream of consciousness. I asked them to open an empty text file and keep a stream of consciousness during the exam. The rules were the following:

In this file, please write all your questions and all your answers as a "stream of consciousness." This means the following rules:

1. Don’t delete anything.
2. Don’t correct anything.
3. Never go backward to retouch anything.
4. Write as thoughts come.
5. No copy/pasting allowed (only exception: URLs)
6. Rule 5. implies no chatbot for this exercise. This is your own stream of consciousness.

Don’t worry, you won’t be judged on that file. This is a tool to help you during the exam. You can swear, you can write wrong things. Just keep writing without deleting. If you are lost, write why you are lost. Be honest with yourself.

This file will only be used to try to get you more points, but only if it is clear that the rules have been followed.

I asked them to send me the file within 24 hours of the exam. Out of 60 students, I received 55 files (the remaining 5 were not penalized). There was also a bonus point for sending it to the exam git repository using git-send-email, something 24 of them managed to do correctly.

The results were incredible. I did not read them all, but this tool gave me a glimpse inside the minds of the students. One said: "I should have used AI, this is the kind of question perfect for AI" (he did very well without it). For others, I realized how much stress they had been hiding. I was touched by one stream of consciousness starting with "I’m stressed, this doesn’t make any sense. Why can’t we correct what we write in this file" and then, 15 lines later, "this is funny how writing the questions with my own words made the problem much clearer and how the stress start to fade away".

And yes, I read all the files from failing students and managed to save a bunch of them when it was clear that they, in fact, understood the matter but could not articulate it well in front of me because of the stress. Unfortunately, not everybody could be saved.

Conclusion

My main takeaway is that I will keep this method next year. I believe it confronts students with their own use of chatbots. I also learn how they use them. And I’m delighted to read their thought processes through the stream of consciousness.

Like every generation of students, there are good students, bad students and very brilliant students. It will always be the case, people evolve (I was, myself, not a very good student). Chatbots don’t change anything regarding that. Like every new technology, smart young people are very critical and, by definition, smart about how they use it.

The problem is not the young generation. The problem is the older generation destroying critical infrastructure out of fear of missing out on the new shiny thing from big corp’s marketing department.

Most of my students don’t like email. An awful lot of them learned only from me that Git is not the GitHub command-line tool. It turns out that by imposing Outlook, with a mandatory subscription to useless academic emails, we make sure that students hate email (Microsoft is on a mission to destroy email with the worst possible user experience).

I will never forgive the people who decided to migrate university mail servers to Outlook. This was both incompetence and malice on a terrifying level because there were enough warnings and opposition from very competent people at the time. Yet they decided to destroy one of the university’s core infrastructures and historical foundations (UCLouvain is listed by Peter Salus as the very first European university to have a mail server, there were famous pioneers in the department).

By using Outlook, they continue to destroy the email experience. Out of 55 streams of consciousness, 15 ended up in my spam folder. All had their links mangled by Outlook. And the university keeps sending so many useless emails to everyone. One of my students told me that they refer to their university email as "La boîte à spams du recteur" (the Chancellor’s spam inbox). And we still dare to ask why they use Discord?

Another student asked me why it took four years of computer engineering studies before a teacher explained to them that Git was not GitHub and that GitHub was part of Microsoft. He had a distressed look: "How could I have known? We were forced to use GitHub for so many exercises!"

Each year, I tell my students the following:

It took me 20 years after university to learn what I know today about computers. And I have only one reason to be here in front of you: to make sure you are faster than me. Make sure that you do it better and go deeper than I did. If you don’t manage to outsmart me, I will have failed.

Because that’s what progress is about. Progress is each generation going further than the previous one while learning from the mistakes of its elders. I’m here to tell you about my own mistakes and the mistakes of my generation.

I know that most of you are only there to get a diploma while doing the minimal required effort. Fair enough, that’s part of the game. Challenge accepted. I will try to make you think even if you don’t intend to do it.

Honestly, I have a lot of fun teaching, even during the exam. For my students, mileage may vary. But for the second time in my life, a student gave me the best possible compliment:

— You know, you are the only course for which I wake up at 8AM.

To which I responded:

— The feeling is mutual. I hate waking up early, except to teach in front of you.

About the author

I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by RSS. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

January 17, 2026

Algorithms exploit dopamine hits for engagement. AI slop is everywhere. Mainstream media is agenda-driven. You are busy as it is. What is the healthy information diet you can stick to? In the last two years, I found excellent options that work great for me. Discover them in this post.

Surround yourself with great people. Use quality information. Garbage in, garbage out. If 90% of what you read or watch lacks nuance and is optimized for widespread appeal, tribal appeal, and memetic survival, you rot your brain. You undermine your world model, your decision-making ability, and your effectiveness.

Recommendations

First place goes to Scott Alexander and his Astral Codex Ten blog (formerly Slate Star Codex). Scott’s essays are great for learning new concepts and levelling up your mental operating system, especially if you are new to the topics he covers. I started “reading” most posts once I discovered that there is a narrated audio version available on all the usual podcast platforms, including Spotify. My favourite post is from 2014, the famous Meditations on Moloch (audio).

Another favourite of mine is LessWrong. In particular, the Curated and Popular audio version. LessWrong is a community blog with high-quality posts on decision-making, epistemology, and AI safety/alignment. The Curated and Popular feed gives you a mix of evergreen concepts and reactions to recent events. Some random posts I linked and remember: Survival without dignity (hilarious), The Value Proposition of Romantic Relationships, Humans are not automatically strategic, and The Memetics of AI Successionism. Also worth a mention are The Sequences, especially the first few posts or the highlights, and The Best of LessWrong.

For recent events specifically, Don’t Worry About the Vase is an excellent blog. Zvi, the prolific author of said blog, posts weekly AI roundups and provides early reactions to notable events within a day or so. It recently occurred to me that Don’t Worry About the Vase is the closest thing I consume to a classical news channel. I like how Zvi incorporates quotes and back-and-forths from various sources, often with his perspective or response included. While this blog focuses on AI, other topics range from education to dating, and from movies to policy. There is an excellent audio version.

List

My favourite and common information sources:

Less frequent podcasts:

  • No Priors — The hosts are VCs that interview builders and CEOs. Focus on AI and tech startups
  • Moonshots — This is my feel-good entertainment podcast. Less signal. Extreme techno-optimism and dismantling the moon
  • Mindscape — Sean Carroll (theoretical physicist) talks with guests about the nature of reality
  • Naval — Timeless principles about high-agency and long-term focus. Minor focus on wealth. Infrequent content
  • 80000 hours — Deep dives into topics surrounding saving the world and Effective Altruism. Warning: can be overly lengthy
  • Y Combinator Startup Podcast — It’s in the name, though recently it also includes terrible AI safety takes by Garry Tan
  • Win-Win — Exploration of game theory, cooperation, incentives. The host has read Meditations on Moloch too many times

What’s With All The Audio Links?

As I spend enough time looking at screens and reading, being able to consume blogs, articles, and podcasts via audio is great. One of my favourite activities is walking in the park with a good episode on, and doing laundry has never been this much fun.

I started out using Spotify, but switched to AntennaPod a few months ago. You can download this open-source podcast player for free. It’s shown in the screenshot, and I can recommend it.

Twitter, No Wait X

Here are 10 X accounts you can follow for high-signal:

Turns out a lot of the people I consider high-signal are only on X. Seems like they haven’t been knocked around much by the culture wars. Hence, this is a list of X accounts. In an effort to reduce the negative effects of The Algorithm and to avoid wasting time on noise, I created a list of X accounts, which I named Signal. This way I can easily restrict my feed to only posts from these accounts. I keep this Signal list concise, at 10-15 accounts, many of which are listed above.

Including Robin Hanson here reminded me of Manifold Markets, which I am giving an honourable mention as a decent news source.

Your Top Picks?

What are your favourite sources of high-quality content? Let me know in the comments!

This post is a spiritual successor to my old Year In Books posts (2017, 2016, 2015). I’ve been thinking about posting another one of these for over 12 months. Since I “read” more via the sources mentioned in this post than classical books, and this seems like the more interesting topic for readers, you get this post instead.

The post High Density Information Sources in 2026 appeared first on Blog of Jeroen De Dauw.

January 15, 2026

Pretty sure no-one is waiting for this, but after having spent a couple of years on the Fediverse (Mastodon in my case) I decided to add ActivityPub support to my WordPress installation via the ActivityPub plugin. I have the WP Rest Cache plugin active, so I’m expecting things to gently hum along, whether these posts gain traction or (most likely) not. Kudos to Stian and Servebolt for…

Source

Drupal turns 25 today. A quarter of a century.

What started as a hobby became a community, and then, somehow, a pillar of the web's infrastructure.

Looking back, the most important things I learned weren't really about software. They were about people, scale, and what it takes to build something that lasts.

Twenty-five years, twenty-five lessons.

At DrupalCon Paris 2009, we cut a ribbon. Druplicon was holding rather large scissors. The photo was taken at exactly the wrong moment, but it's still one of my favorite Drupal photos.

1. You can do well and do good

I used to think I had to choose: build a sustainable business or build something generous. Drupal taught me that is a false choice. Growth and generosity can reinforce each other. The real challenge is making sure one does not crowd out the other.

2. You can architect for community

Community doesn't just happen. You have to design for it. Drupal's modular system created clear places to contribute, our open logo invited people to make their own variants, and our light governance made it easy for people to step into responsibility. You cannot force a community to exist, but you can create the conditions for one to grow.

3. A few decisions define everything

Most choices don't matter much in hindsight, but a few end up shaping a project's entire trajectory. For Drupal, that included licensing under the GPL, the hook system, the node system, starting the Drupal Association, and even the credit system. You never know which decisions those are when you're making them.

4. Coordination is the product

In the early days, coordination was easy: you knew most people by name and you could fix things in a single late night IRC conversation. Then Drupal grew, slowly at first and then all at once. I remember release cycles where the hardest part was not the code but aligning hundreds of people across time zones, cultures and priorities, with far too much energy spent "bike shedding". That is when I learned that at scale, code is not the product. It is what we ship, but coordination is what makes it possible.

5. Everyone's carrying something

I've worked with people navigating challenges I couldn't see at first. Mental health struggles, caregiving burdens, personal crises. It taught me that someone's behavior in a moment rarely tells the whole story. A healthy community makes room for people. Patience and grace are how you keep good people around.

6. Nobody fully understands Drupal anymore, including me

After 25 years and tens of thousands of contributors, Drupal has grown beyond any single person's understanding. I also google Drupal's documentation. I'm strangely proud of that, because it's how I know it has become something bigger than any one of us.

7. Volunteerism alone doesn't scale

In the early years, everything in Drupal was built by volunteers, and for a long time that felt like enough. At some point, it wasn't. The project was growing faster than the time people could give, and some important work needed more hands. Paid contributors brought stability and depth, while volunteers continued to innovate. The best projects make room for both.

8. Your words carry more weight than you realize

As recently as a few weeks ago, I sent a Slack message I thought was harmless and watched it create confusion and frustration. I have been making that same mistake, in different forms, for years. As a project grows, so does the gravity of what you say. A passing comment can redirect weeks of work or demoralize someone who is trying their best. I had to learn to speak more carefully, not because I am important, but because my role is. I am still learning to do this better.

9. Maintenance is leadership with no applause

The bottleneck in Open Source is rarely new ideas or new code. It's people willing to maintain what already exists: reviewing, deciding, onboarding new people, and holding context for years. I have seen projects stall because nobody wanted to do that work, and others survive because a few people quietly stepped up. Maintainers do the work that keeps everything together. If you want a project to last, you have to take care of your maintainers.

10. Culture is forged under stress

The Drupal community was not just built on good vibes. It was built in the weeks before releases and DrupalCons, in late night debugging sessions, and in messy moments of disagreement and drama. I have seen stress bring out the best in us and, sometimes, the worst. Both mattered because they forced us to learn how to disagree, decide, and recover. Those hard moments forged trust you cannot manufacture in calm times, and they are a big reason the community is still here.

11. Leadership has to outgrow its founder

For Drupal to last, leadership had to move beyond me, and for that to happen I had to let go. That meant stepping back from decisions I cared deeply about and trusting others to take the project in directions I might not have chosen. There were moments when I felt sidelined in the project I started, which was nobody's fault, but it was not easy. Letting go was hard at times, but it is one of the reasons Drupal is still here.

12. Open source is not a meritocracy

I used to say that the only real limitation to contributing was your willingness to learn. I was wrong. Free time is a privilege, not an equal right. Some people have jobs, families, or responsibilities that leave no room for unpaid work. You can only design for equity when you stop pretending that Open Source is a meritocracy.

13. Changing your mind in public builds trust

Over the years, I've had to reverse positions I once argued for. Doing that in public taught me that admitting you were wrong builds more trust than claiming you were right. People remember how you handle being wrong longer than they remember what you were wrong about.

14. Persistence beats being right early

In 2001, Open Source was a curiosity that enterprises avoided. Today, it runs the world. I believed in it long before I could prove it. I kept working on Drupal anyway. It took many years for the world to catch up. That taught me that sticking with something you believe in matters more than being right quickly.

15. The hardest innovation is not breaking things

For years, I insisted that breaking backward compatibility was a core value. Upgrades were painful, but I thought that was the price of progress. The real breakthrough came when we built enough test coverage to keep moving forward without breaking what people had built. Today, Drupal has more than twice as much test code as production code. That discipline was harder than any rewrite, and it earned more trust than any new feature.

16. Most people are here for the right reasons

Every large community has bad actors and trolls, and they can consume all your attention if you let them. If you focus too much on the worst behavior, you start to miss the quiet, steady work of the many people who are here to build something good. Your energy is better spent supporting those people.

17. Talk is silver, contribution is gold

Words matter. They set direction and invite people in. But the people who shaped Drupal most were the ones who kept showing up to do the work. Culture is shaped by what actually gets done, and by who shows up to do it.

18. Vision doesn't have to come from the top

For a long time, I thought being project lead meant having the vision. Over time, I learned that it meant creating the conditions for good ideas to come from anywhere. The best decisions often came from people I'd never met, solving problems I didn't know we had.

19. The spark is individual but the fire is not

A single person can change a project's direction, but no contribution survives on its own. Every new feature comes with a maintenance cost and eventually depends on people the original author will never meet. Successful projects have to hold both truths at once: the spark is individual, but the fire is not.

20. At scale, even your bugs become features

Once enough people depend on your software, every observable behavior becomes a commitment, whether you intended it or not. Sooner or later, someone will build a workflow around an edge case or quirk. That is why maintaining compatibility is not a lesser form of work. It is core to the product.

21. A good project is measured by what people build next

For a long time, it felt like a loss when top contributors moved on from Drupal. Over time, I started to notice what they built next and realized they were carrying what they learned here into everything they did. Many went on to lead teams, start companies, or lead other Open Source projects. I have come to see that as one of Drupal's most meaningful outcomes.

22. Longevity comes from not chasing trends

Drupal is still here because we resisted the urge to chase every new trend and kept building on things that last, like structured content, security, extensibility, and openness. Those things mattered twenty years ago, they still matter today, and they will still matter twenty years from now.

23. If it matters, keep saying it

A community isn't a room. People join at different times, pay attention to different things, and hear through different filters. An idea has to land again and again before it takes hold. If it matters, keep saying it. The ideas that stick are the ones the community picks up and carries forward.

24. It takes a community to see the whole road

Sometimes the path forward seems clear. An individual can see a direction, but a community sees the terrain: the cracks, the forks, and the doubts. Being right alone brings clarity. Bringing others along brings confidence.

25. Start before you feel ready

When I released Drupal 1.0.0, I knew almost nothing. For much of the journey, I felt out of my depth. I was often nervous, sometimes intimidated. I didn't know how to scale software, how to build a community, or how to lead. I kept shipping anyway. You don't become ready by waiting. You become ready by doing.

A group photo taken at DrupalCon Seattle in 2019.

For those who have been here for years, these lessons will feel familiar. We learned them together, sometimes slowly, sometimes through debate, and often the hard way.

If Drupal has been part of your daily life for a long time, you are not just a user or a contributor. You are part of its history. And for all of you, I am grateful.

I am still here, still learning, and still excited about what we can build together next. Thank you for building it with me.

January 14, 2026


I used Claude Code to build a new feature for my site this morning. Any URL on my blog can now return Markdown instead of HTML.

I added a small hint in the HTML to signal that the Markdown version exists, mostly to see what would happen. My plan was to leave it running for a few weeks and write about it later if anything interesting turned up.

Within an hour, I had hundreds of requests from AI crawlers, including ClaudeBot, GPTBot, OpenAI's SearchBot, and more. So much for waiting a few weeks.

For two decades, we built sites for two audiences: humans and search engines. AI agents are now the third audience, and most websites aren't optimized for them yet.

We learned how to play the SEO game so our sites would rank in Google. Now people are starting to invest in things like Generative Engine Optimization (GEO) and Answer Engine Optimization (AEO).

I wanted to understand what that actually means in practice, so I turned my own site into a small experiment and made every page available as Markdown.

If you've been following my blog, you know that Drupal stores my blog posts as Markdown. But when AI crawlers visited, they got HTML like everyone else. They had to wade through navigation menus and wrapper divs to find the actual content. My content already existed in a more AI-friendly format. I just wasn't serving it to them.

It only took a few changes, and Drupal made that easy.

First, I added content negotiation to my site. When a request includes Accept: text/markdown in the HTTP headers, my site returns the Markdown instead of the rendered HTML.

Second, I made it possible to append .md to any URL. For example, https://dri.es/principles-for-life.md gives you clean Markdown with metadata like title, date, and tags. You can also try adding .md to the URL of this post.
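
To make both mechanisms concrete, here is a minimal sketch using curl, based on the example URL mentioned above (the exact response obviously depends on the site's configuration):

# Request the Markdown version via content negotiation
curl -H "Accept: text/markdown" https://dri.es/principles-for-life
# Or simply append .md to the URL
curl https://dri.es/principles-for-life.md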

But how did those crawlers find the Markdown version so fast? I borrowed a pattern from RSS: RSS auto-discovery. Many sites include a link tag with rel="alternate" pointing to their RSS feed. I applied the same idea to Markdown: every HTML page now includes a link tag announcing that an alternative Markdown version exists at the .md URL.
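
As a sketch of what that auto-discovery might look like in practice, a crawler only has to scan the HTML for the alternate link. The exact attributes used on dri.es are my assumption; the pattern simply mirrors the rel="alternate" convention used for RSS:

# Look for a Markdown alternate link in the page's HTML (illustrative only)
curl -s https://dri.es/principles-for-life | grep -io '<link[^>]*text/markdown[^>]*>'
# Which might return something like:
# <link rel="alternate" type="text/markdown" href="https://dri.es/principles-for-life.md">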

That "Markdown auto-discovery" turned out to be the key. The crawlers parse the HTML, find the alternate Markdown link, and immediately switch. That explains the hundreds of requests I saw within the first hour.

The speed of adoption tells me AI agents are hungry for cleaner content formats and will use them the moment they find them. What I don't know yet is whether this actually benefits me. It might lead to more visibility in AI answers, or it might just make it easier for AI companies to use my content without sending traffic back.

I know not everyone will love this experiment. Humans, including me, are teaching machines how to read our sites better, while machines are teaching humans to stop visiting us. The value exchange between creators and AI companies is far from settled, and it's entirely possible that making content easier for AI to consume will accelerate the hollowing out of the web.

I don't have a good answer to that yet, but I'd rather experiment than look away. I'm going to leave this running and report back.

Who has never encountered a customer who, for all sorts of reasons (valid or not), was unable to update an application and therefore could no longer connect to the latest versions of MySQL? Or worse still, data that is shared between two applications, one of which absolutely must use the latest version of MySQL and […]

January 13, 2026

The Communist Command Line, the Baudot Code and Count ChatGPT

A few months ago, a reader introduced me to an incredible historical-political game rendered entirely in ASCII art: « Le comte et la communiste » ("The Count and the Communist"), by Tristan Pun.

The simplicity of the graphics drew me into the story the way an excellent book does. Playing a spy/maid, you get to discover what goes on behind the scenes of an aristocratic castle during the First World War. In a very interesting touch for the sociopolitical subtext, the time spent in the castle is measured in loads of laundry. Because, spy or not, you are first and foremost a maid, and you will have to roll up your sleeves and do the washing!

If I am telling you about it with such enthusiasm, it is because at one point in the game you find a strip of paper containing a message in Baudot code. The Baudot code is the ancestor of ASCII and the first character encoding used on automatic telegraphs: the message is received directly on paper tape, unlike Morse code, which requires a human operator on the receiving end.

A paper tape with a message in Baudot code (source: Ricardo Ferreira de Oliveira).

In the game, the message is found on a paper strip used as a bookmark, which looks like this.

An ASCII paper tape with "o" characters standing in for the holes.

On a plaque, you will find all the information needed to decipher the Baudot code.

Screenshot from the game explaining the Baudot code.

All that is left is to decipher the message… But I quickly found that tedious. What if I asked my trusty command line to do it for me? After all, reading the excellent "Efficient Linux at the Command Line", by Daniel J. Barrett, gave me confidence.

Decoding the Baudot message on the command line

Fair warning: this is going to get very technical. You don't have to put yourself through it.

Still here? Let's go!

Once the ASCII-art code has been copy/pasted into a file, we display it with "cat". Even though it is not strictly necessary, I always start all my pipe chains with cat. It feels clearer to me.

The difficulty here is that I want to work on columns: I want to rebuild a word by taking the first letter of each line, then the second letter of each line, and so on.

The Unix command that comes closest to this is "cut". Cut lets you take the Xth character of a line with the -cX option. For the fifteenth, I do "cut -c15". I immediately realize I am going to need a loop. Since the lines are less than 100 characters long, I can do a "for i in {1..100}; do" loop with a "cut -c$i" inside.

And there we go: I now know how to isolate each column.

In the ASCII art, the binary of the Baudot code is represented by "o"s and spaces. For clarity, I am going to replace the spaces with "l"s (it looks like a binary "1"). The Unix command for substituting characters is "tr" (translate): tr " " l (the space is between quotes).

Still for clarity's sake, I am going to delete everything that is not an o or an l. Tr can delete characters with the -d option and take "every character except those in the list" with the -c option (complement). So I add a tr -cd "ol".

The chain inside my loop therefore looks like:

cat $1|cut -c"$i"|tr " " l|tr -cd "ol"

Problem: my groups of 5 letters are still vertical!

I tried the approach of removing the newlines with tr -d "\n", but that produces strange results, especially since the final newline gets removed as well.

So the command that seems made for the job is the opposite of "cut": "paste". But try piping the chain into "paste": nothing happens!

And for good reason: paste was originally designed to join the corresponding lines of several files. Here, there is only one file: the input stream. So paste has nothing to join it with.

Fortunately, the "-s" option tells paste to put everything on a single line. I had completely missed it for a very simple reason: the man page of paste is incomprehensible.

-s, --serial
copy one file at a time instead of in parallel

I defy anyone to see the connection between the man page and what the option actually does. Luckily, I had the intuition to try "tldr" instead of man.

Join all the lines into a single line, using TAB as delimiter:
paste -s path/to/file

Admit it, that is already much clearer!

A quick test shows that it is almost right. Everything is on one line. Except that there are now TABs between the letters. We could remove them with tr. Or simply tell paste not to add them in the first place by using a null delimiter instead:

cat $1|cut -c"$i"|tr " " l|tr -cd "ol"|paste -sd "\0"

My for loop now prints each letter on its own line. All that is left is to translate the Baudot code.

I could, for example, add a pipe to "sed s/llloo/a/" to substitute the letter a. And a pipe to another sed for the letter b, and so on. It works, but it is ugly and very slow (each sed spawns its own process).

When you have a lot of sed rules, you might as well put them in a baudot.sed file containing the sed commands, one per line:

s/llloo/a/
s/oollo/b/
s/loool/c/
...

I can apply these rules with "sed -f baudot.sed".

The Baudot code has one subtlety: there is a code that switches to "special character" mode. I don't overthink it: I simply replace that code with "<" and the code that switches back to normal mode with ">" (neither character exists in the Baudot code). That way, I know that any letter between < and > is not really that letter but the corresponding symbol. A <m> is actually a ".". Also, there are plenty of spaces before and after the messages, which have been converted into just as many "l"s. There, I went for a big fat hack. Instead of putting a normal "l" in the Baudot code, I put a capital L. Then, at the very end, I delete the remaining "l"s with a global rule: "s/l//g" (the "g" means change every l on a line, even if there are several). Then I turn the "L" back into a lowercase "l" with "s/L/l/g". Yes, it is a big fat hack. It does the job.

My baudot.sed file then looks like this:

s/ooloo/</
s/lloll/ /
s/ooooo/>/
s/llloo/a/
s/oollo/b/
s/loool/c/
s/lollo/d/
s/llllo/e/
s/loolo/f/
s/oolol/g/
s/ololl/h/
s/llool/i/
s/loloo/j/
s/loooo/k/
s/ollol/L/
s/oooll/m/
s/looll/n/
s/oolll/o/
s/oolll/p/
s/olooo/q/
s/lolol/r/
s/llolo/s/
s/ollll/t/
s/llooo/u/
s/ooool/v/
s/olloo/w/
s/ooolo/x/
s/ololo/y/
s/olllo/z/
s/l//g
s/L/l/g

My message is decoded. But, of course, it displays vertically.

Does that remind you of anything?

A good old paste -sd "\0" puts everything back the right way round (and this time, I did not have to go looking).

In the game, the only symbol used is the period, which here has become a "<m>", or "<m" when it is at the end of the message. Let's be thorough to the end and add two small seds. We could also make another sed file with all the special characters, but in the end the game only contains two messages.

Too bad, I was all warmed up for more.

So my final script is:

#!/bin/bash
for i in {1..100}; do
	cat $1|cut -c"$i"|tr " " l|tr -cd "ol"|paste -sd "\0"
done|sed -f baudot.sed|paste -sd "\0"|sed "s/<m>/./"|sed "s/<m/./"
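
For the record, a hypothetical invocation, assuming the script has been saved as decode_baudot.sh next to baudot.sed and the ASCII-art tape has been copy/pasted into tape.txt:

# Run the decoder on the copy/pasted tape (file names are illustrative)
./decode_baudot.sh tape.txt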

With hindsight, this script is much simpler and more efficient than a Python script would be. A Python script would take dozens of lines and operate on a matrix of characters. The Unix tools, on the other hand, operate on streams of text. I find a certain elegance in it, a particular pleasure. The whole thing took me about 30 minutes, a good part of which was spent on the "tr -d \n" mistake.

Asking ChatGPT to decode the Baudot message

Since my Kagi account now gives me access to ChatGPT, I figured I would run the experiment of asking it to solve the same problem as me. Just to understand what "vibe coding" is all about.

At first, ChatGPT is lost. It clearly does not know how to translate the messages, hammers me with long tables of supposed Baudot code (which are sometimes correct, but not always) and tells me the history of the code (which is generally correct, but which I did not ask for). It is verbose; I have to tell it several times to keep it short and be efficient.

I ask it to write me a bash script. Its first attempts are extremely long and incomprehensible. It seems very fond of endless awk scripts.

Persevering, I ask ChatGPT to stop using awk and I spell out each step one after the other: I tell it that it has to walk through each column, straighten it up, convert it, and so on.

Logically, it arrives at a result very similar to mine. It chooses to use "tr -d \n" to remove the line endings which, as I said, does not work correctly. I let it slide, since I had not understood the error myself either.

I do notice one interesting improvement: where I asked for the first 100 columns, ChatGPT measures the length with:

for i in $(seq $(head -1 $1|wc -c)); do

Concretely, it takes the first line with "head -1", counts its characters with "wc -c" and builds a sequence from 1 up to that number with "seq".

It is an excellent idea, provided you consider the first line representative of the others. Once the "straightening" was in place (it took me dozens of prompts and examples to explain it; each time it claims to have understood, and it is false), I ask it to translate using the Baudot code.

Instead of my "sed" solution, it writes a bash function "baudot_to_letter" which is one big case statement:

baudot_to_letter() {
  case $1 in
    00000) echo " " ;;
    00001) echo "E" ;;
    00010) echo "A" ;;

Amusingly, the characters are not in alphabetical order but in the binary order of their Baudot representation. Why not?

I accept this solution even though I prefer the sed one, because I did not want to write bash, I wanted to use the Unix tools. I told it several times that I wanted a Unix command line, not a bash script with several functions. But I let it go because, deep down, its solution is relevant.

Now that all the steps have been described, I test its final code. Which does not work, producing a bash error. I ask for a new version, copy/pasting the error back to it. After several iterations of this kind, I finally have a script that works and gives me the following translation of the message:

LLECJGCISTGCIUMCEISJEISKN CT UIXV SJECUZUCEXV

The number of letters is not even right. I obviously have no desire to debug the thing. So I ask ChatGPT to run the script itself and give me the translation:

ChatGPT tells me the translated message is "ECHOES OF THE PAST". It is very proud of itself, because it is a coherent message. Except that the game is in French…

At this point in the story, I have, stopwatch in hand, spent more time trying to use ChatGPT than I spent writing my own script, imperfect but working. I find myself endlessly copy/pasting between ChatGPT and my terminal, trying to negotiate changes and to debug code I did not write.

And that is with me having cleaned up the message for ChatGPT by removing everything that is not Baudot code. My own script is much more robust. I am fed up and I give up.

The moral

ChatGPT is impressive only in its ability to converse, to pretend. Yes, it can sometimes provide ideas. It improved, for instance, the first line of my script, which I had botched.

But you need to have time to waste. At that point, you might as well dive back into a good book, one you know is mostly full of good ideas.

ChatGPT can be useful for brainstorming, provided you are yourself an expert in the field and very critical of everything it produces. Which says a lot: an enormous expenditure of energy for very little in the end.

You think you are more efficient using AI because you spend less time "thinking". But studies seem to show that, in reality, you are slower. With ChatGPT, you are "busy" all the time. You never stop to think about your problem, to read different resources.

And when ChatGPT really does save you time, it may be because it is a domain in which you are not really competent. As far as the command line is concerned, I can only repeat my suggestion to read "Efficient Linux at the Command Line". Really!

As Cal Newport puts it very well: who benefits most from you being an incompetent worker who just blindly pushes the buttons of a machine instead of using their brain?

Maybe spending a few hours spying on aristocrats while being forced to do the laundry will get you thinking about that. In any case, it is worth it!

Have fun, happy pondering and… happy laundry!

About the author:

I'm Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

January 12, 2026

As in previous years, some small rooms will be available for Unconference style “Birds of a Feather sessions”. The concept is simple: Any project or community can reserve a timeslot (1 hour) during which they have the room just to themselves. These rooms are intended for ad-hoc discussions, meet-ups or brainstorming sessions. They are not a replacement for a developer room and they are certainly not intended for talks. To apply for a BOF session, enter your proposal at https://fosdem.org/submit. Select the BOF/Unconference track and mention in the Submission Notes your preferred timeslots and any times you are unavailable. Also…

Twenty years ago, I argued passionately that breaking backward compatibility was one of Drupal's core values:

The only viable long-term strategy is to focus exclusively on getting the technology right. The only way to stay competitive is to have the best product. [...] If you start dragging baggage along, your product will, eventually, be replaced by something that offers the same functionality but without the baggage.

I warned that preserving backward compatibility would be the beginning of the end:

I fear that this will be the end of Drupal as we have come to know it. Probably not immediately, maybe not even for several years, but eventually Drupal will be surpassed by technology that can respond more quickly to change.

Twenty years later, I have to admit I was wrong.

So what changed?

In 2006, Drupal had almost no automated tests. We couldn't commit to backward compatibility because we had no way to know when we broke it. Two years later in 2008, we embraced test-driven development.

Line chart showing Drupal's production code and test code from 2012 to 2026: test code grows from near zero to over 650,000 lines, while production code grows from 90,000 to 300,000 lines. Drupal's test code now exceeds production code by more than two to one. Source: Drupal Core Metrics.

By 2013, we had built up some test coverage, and with that foundation we adopted semantic versioning and committed to backward compatibility. It transformed how we innovate in Drupal. We can mark old code for removal and clear it out every two years with each major release. The baggage I feared never really accumulated.

Today, according to the Drupal Core Metrics dashboard, Drupal Core has more than twice as much test code as production code. I didn't fully appreciate how much that would change things. You can't promise backward compatibility at Drupal's scale without extensive automated testing.

Our upgrades are now the smoothest in the project's history. And best of all, Drupal didn't end. It's still a top choice for organizations that need flexibility, security, and scale.

I recently came across an interview with Richard Hipp, SQLite's creator. SQLite has 90 million lines of tests for 150,000 lines of production code. That is a whopping 600-to-1 ratio. Hipp calls it "aviation-grade testing" and says it's what lets a team of three maintain billions of installations.

I suspect our test coverage will continue to grow over time. But Drupal can't match SQLite's ratio, and it doesn't need to. What matters is that we built the habits and discipline that work for us.

In 2006, I thought backward compatibility would be the end of Drupal. In 2026, I think it might be what keeps us here for another twenty years.

Thank you to everyone who wrote those tests.

It does make me wonder: what are we wrong about now? What should we be investing in today that will slowly reshape how we work and become an obvious advantage twenty years from now? And who is already saying it while the rest of us aren't listening?

January 10, 2026

Attendees should be aware of potential transportation disruptions in the days leading up to FOSDEM.

Rail travel
Railway unions have announced a strike notice from Sunday January 25th, 22:00 until Friday January 30th, 22:00. This may affect travel to Brussels for FOSDEM and related fringe events. While there will be a guaranteed minimum service in place, train frequency may be significantly reduced. Also note that international connections might be affected as well.

Road travel
From Saturday January 31st (evening) until Sunday February 1st (noon), the E40 highway between Leuven and Brussels will be fully closed. Traffic will be diverted via…

I watched a movie this evening which featured the main actress singing “Both Sides Now”, and it was moving until it became too musical-like for my taste. But now I’m in a Joni rabbit hole again, listening to all the great versions of Joni performing that song. One of the earliest live videos I found was from 1969, and on the other end of the timescale there’s the Grammy one from 2024 with Brandi…

Source


Tailwind Labs laid off 75% of its engineering team last week.

Adam Wathan, CEO of Tailwind Labs, spent the holidays running revenue forecasts. In a GitHub comment, he explained what happened:

The reality is that 75% of the people on our engineering team lost their jobs here yesterday because of the brutal impact AI has had on our business. Traffic to our docs is down about 40% from early 2023 despite Tailwind being more popular than ever.

The story circulating is that AI is killing Open Source businesses. I don't think that is quite right.

AI didn't kill Tailwind's business. It stress tested it. Their business model failed the test, but that is not an indictment of all Open Source business models.

Tailwind's business model worked for years. It relied on developers visiting their documentation, discovering Tailwind Plus while browsing, and buying it. Tailwind Plus is a $299 collection of pre-built UI components. Traffic led to discovery, and discovery drove sales. It was a reasonable business model, but always fragile.

In the last year, more and more developers started asking AI for code instead of reading documentation, and their sales and marketing funnel broke.

There is a fairness issue here that I don't want to skip past. AI companies trained their models on Tailwind's documentation and everything the community wrote about it. And now those models generate Tailwind code and answer Tailwind questions without sending anyone to Tailwind's website. The value got extracted, but compensation isn't flowing back. That bothers me, and it deserves a broader policy conversation.

What I keep coming back to is this: AI commoditizes anything you can fully specify. Documentation, pre-built card components, a CSS library, Open Source plugins. Tailwind's commercial offering was built on "specifications". AI made those things trivial to generate. AI can ship a specification but it can't run a business.

So where does value live now? In what requires showing up, not just specifying. Not what you can specify once, but what requires showing up again and again.

Value is shifting to operations: deployment, testing, rollbacks, observability. You can't prompt 99.95% uptime on Black Friday. Neither can you prompt your way to keeping a site secure, updated, and running.

That is why Vercel created Next.js and gives it away for free. The Open Source framework is the conduit; the hosting is the product. Same with Acquia, my own company. A big part of Acquia's business model is selling products around Drupal: hosting, search, CI/CD pipelines, digital asset management, and more. We don't sell describable things; we sell operations.

Open Source was never the commercial product. It's the conduit to something else.

When asked what to pivot to, Wathan was candid: "Still to this day, I don't know what we should be pivoting to". I've written about how digital agencies might evolve, but CSS frameworks and component libraries are a harder case. Some Open Source projects make for great features, but not great businesses.

Tailwind CSS powers millions of sites. The framework will survive. Whether the company does is a different question. I'm rooting for them. The world needs more successful Open Source businesses.

January 09, 2026

We are pleased to announce the schedule for FOSDEM Junior. Registration for the individual workshops is required. Links to the registration page can be found on the page of each activity. The full schedule can be viewed on the junior track schedule page.

We’ve gone from generating funny images to AI being a core part of being a developer quite fast, haven’t we?

January 06, 2026

The urgency of digital sovereignty to escape enshittification

The sad example of YouTube

I haven't had a Google account for several years. I have realized that I avoid clicking on YouTube links as much as possible because, for every video, I have to sit through the loading of a page that overwhelms my otherwise recent computer, I have to try to start the video, then wait several seconds for a huge popup to interrupt it. Then I have to manage to find the original audio track rather than an automatically generated French dub. And once all that is done, I still have to sit through ads that sometimes last several minutes.

All that to watch a video that might, potentially, contain a piece of information that interests me. And even that is far from certain.

So, yes, there are ways to work around this enshittification, but it is constant work and it does not always succeed. So, basically, I only click on YouTube links when I really have to. For instance, I used to watch the music videos of the metal bands Alias recommended. Now I use Bandcamp (I even buy albums there) when he mentions one, or I look elsewhere.

You think your video has to be on YouTube because "everyone is there" but, at least in my case, you have lost audience by being only on YouTube.

An entirely deliberate enshittification

The worst part is realizing that the enshittification is fully embraced from the inside. As Josh Griffiths points out, YouTube encourages creators to shoot videos whose scripts are generated by its AI. YouTube adds ads without the creator's consent.

In the same blog post, he describes how YouTube uses your viewing history to determine your age and block any videos deemed inappropriate. It is so frighteningly stupid that it could be in one of my short stories.

One thing is certain: when I visit YouTube without an account and without any history, YouTube spontaneously offers me dozens of videos about the Nazis, about the Second World War, about the rifles used by the Nazis, and so on. I have never watched that kind of thing. Judging by the titles, some of those videos seemed to border on conspiracy theory or Holocaust denial. Why recommend them to me? The most frightening hypothesis would be that these are the default recommendations!

Because it is not as if YouTube did not know how to erase the videos it does not like!

If you make videos and want to share them with humans, please, for pity's sake, also post them somewhere other than YouTube! Nobody is asking you to give up your "community", your likes, your 10 cents of ad revenue that come in every month. But also post your video elsewhere. For example on PeerTube!

On the political dependence on enshittified technologies

As Bert Hubert puts it very well, the problem of dependence on American monopolies is not so much technical as cultural. And European governments should be the first to set an example.

I think he illustrates the depth of the problem perfectly because, in his talk describing the EU's technological dependence on the USA and China, he points to explanatory videos… on YouTube. And Bert Hubert does not even seem to notice the irony, even though he recommends PeerTube a little later. He also hosts his personal projects on GitHub. GitHub belongs to Microsoft, and its monopoly on Open Source projects has dramatic consequences.

I have already noted how Europe develops important technological solutions that nobody seems to notice because, unlike the USA, we develop technologies that give users freedom: the Web, Linux, Mastodon, the Gemini protocol.

To that list I would like to add VLC, LibreOffice and, of course, PeerTube.

The European solutions that succeed are part of the commons. They are so obvious that many people can no longer see them. Or can no longer take them seriously, because they are "not expensive enough".

Europe's problem is not a lack of solutions. It is simply that politicians want "a European Google". Politicians are incapable of seeing that you don't fight American monopolies by creating, twenty years too late, a European sub-monopoly.

It is a purely cultural problem. All it would take is for a few members of the European Parliament to have the courage to say: I am deleting my X, Facebook, WhatsApp, Google and Microsoft accounts for one month. A single month during which they would accept that, yes, things are different and you have to adapt a little.

It is not as if the problem were not urgent: all our official IT services, all our communications, all our data are in the hands of companies that openly collaborate with the American military. Do you really believe the American military did not mine all the Google/Microsoft/WhatsApp data of Venezuelan politicians before launching their raid? And Venezuela is one of the few countries that was officially trying to move away from American solutions.

L’honnêteté de considérer une solution

Quitter les services merdifiés est difficile, mais pas impossible. Cela peut se préparer, se faire petit à petit. Si, pour certains, c’est actuellement strictement impossible pour des raisons professionnelles, pour beaucoup d’entre nous, c’est surtout que nous refusons d’abandonner nos habitudes. Se plaindre, c’est bien. Agir, c’est difficile et nécessite d’avoir le temps et l’énergie à consacrer à une période de transition.

Bert Hubert prend l’exemple du mail. En substance, il dit que le mail n’est plus un bien commun, que les administrations ne peuvent pas utiliser un mail européen, car Microsoft et Google vont arbitrairement rejeter une partie de ces emails. Pourtant, la solution est évidente : il suffit de considérer que la faute est chez Google et Microsoft. Il suffit de dire « Nous ne pouvons pas utiliser Microsoft et Google au sein des institutions officielles européennes, car nous risquons de ne pas recevoir certains emails ».

Le problème n’est pas l’email, le problème est que nous nous positionnons en victimes. Nous ne voulons pas de solution ! Nous voulons que ça change sans rien changer !

Beaucoup de problèmes de l’humanité ne proviennent pas du fait qu’il n’y a pas de solutions, mais qu’en réalité, les gens aiment se plaindre et ne veulent surtout pas résoudre le problème. Parce que le problème fait désormais partie de leur identité ou parce qu’ils ne peuvent pas imaginer la vie sans ce problème ou parce qu’en réalité, ils bénéficient de l’existence de ce problème (on appelle ces derniers des « consultants » ).

Il y a une technique assez simple pour reconnaître ce type de situation : c’est, lorsque tu proposes une solution, de te voir immédiatement rétorquer les raisons pour lesquelles cette solution ne peut pas fonctionner. C’est clair, à ce moment, que la personne en face ne cherche pas une solution. Elle n’a pas besoin d’un ingénieur, mais d’un psychologue (rôle que prennent cyniquement les vendeurs).

Une personne qui cherche réellement à résoudre son problème va être intéressée par toute piste de solutions. Si la solution n’est pas adaptée, elle va réfléchir à comment l’améliorer. Elle va accepter certains compromis. Si elle rejette une solution, c’est après une longue investigation de cette dernière, car elle a réellement l’espoir de résoudre son problème.

La solution du courage politique

Pour les gouvernements aujourd’hui, il est techniquement assez simple de dire « Nous voulons que nos emails soient hébergés en Europe par une infrastructure européenne, nous voulons diffuser nos vidéos via nos propres serveurs et faire nos annonces officielles sur un site que nous contrôlons. » C’est même trivial, car des milliers d’individus comme moi le font pour un coût dérisoire. Et il y a même des tentatives claires, comme en Suisse.

Les seules raisons pour lesquelles il n’y a même pas de réflexion poussée à ce sujet sont, comme toujours, la malveillance (oui, Google et Microsoft font beaucoup de cadeaux aux politiciens et sont capables de déplacer des montagnes dès qu’une alternative à leur monopole est considérée) et l’incompétence.

Malveillance et incompétence n’étant pas incompatibles, mais plutôt complémentaires. Et un peu trop fréquentes en politique à mon goût.

About the author:

I'm Ploum and I have just published Bikepunk, an ecological cycling fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the complete RSS feed.

January 05, 2026

How Github monopoly is destroying the open source ecosystem

I teach a course called "Open Source Strategies" at École Polytechnique de Louvain, part of the University of Louvain.

As part of my course, students are required to find an open source project of their choice and make a small contribution to it. They send me a report through our university Gitlab. To grade their work, I read the report and explore their public interactions with the project: tickets, comments, pull requests, emails.

This year, during my review of the projects of the semester, Github decided to block my IP for one hour. During that hour, I simply could not access Github.

Github telling me I made too many requests

It should be noted that, even if I want to get rid of it, I still have a Github account and I was logged in.

The block happened again the day after.

This gave me pause.

I wondered how many of my students’ projects were related to projects hosted on Github. I simply went into the repository and counted 238 student reports from the last seven years:

ls -l projects_*/*.md | wc -l
238

Some reports might be missing. Also, I don’t have the reports before 2019 in that repository. But this is a good approximation.

Now, let’s count how many reports don’t contain "github.com".

grep -L github.com projects_*/*.md | wc -l
7
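
For reference, grep -L prints the names of the files that do not contain the pattern, so, if you are curious which reports those are, dropping the trailing wc -l lists them directly:

grep -L github.com projects_*/*.md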

Wow, that’s not a lot. I then wondered what those projects were. It turns out that, out of those 7, 6 students simply forgot to add the repository URL in their report. They used the project webpage or no URL at all. In those 6 cases, the repository happened to be hosted on Github.

In my course, I explain at great length the problem of centralisation. I present alternatives: Gitlab, Codeberg, Forgejo, Sourcehut but also Fossil, Mercurial, even Radicle.

I literally tell my students to look outside of Github. Despite this, out of 238 students tasked with contributing to the open source project of their choice, only one managed to avoid Github.

The immediate peril of centralisation

As was demonstrated to me for one hour, the immediate peril of centralisation is that you can suddenly lose access to everything. For one hour, I was unable to review any of my students’ projects. Not a big deal, but it serves as a warning. While writing this post, I was hit a second time by this block.

A few years ago, one of my friends was locked out of his Google account while travelling for work at the other end of the world. Suddenly, his email stopped working, most of the apps on his phone stopped working, and he lost access to all his data "in the clouds". Fortunately, he still had a working email address (not on Google) and important documents for his trip were on his laptop hard drive. Through personal connections at Google, he managed to recover his account a few weeks later. He never had any explanations.

More recently, Paris Buttfield-Addison experienced the same thing with his Apple account. His whole online life disappeared, and all his hardware was suddenly bricked. Being heavily invested in Apple doesn’t protect you.

I’m sure the situation will be resolved because, once again, we are talking about a well-connected person.

But this happens. All the time. Institutions are blindly trusting monopolies that could lock you out randomly or for political reasons as experienced by the French magistrate Nicolas Guillou.

Worse: as long as we are not locked out, we offer all our secrets to a country that could arbitrarily decide to attack yours and kidnap your president. I wonder how much sensitive Venezuelan information was in fact stored on Google/Microsoft services and accessed by the US military to prepare their recent strike.

Big institutions like my Alma Mater or entire countries have no excuse to still use American monopolies. This is either total incompetence or corruption, probably a bit of both.

The subtle peril of centralisation

As demonstrated by my Github anecdote, individuals have little choice. Even if I don’t want a Github account, I’m mostly forced to have one if I want to contribute or report bugs to projects I care about. I’m forced to interact with Github to grade my students’ projects.

237 out of 238 is not "a lot". It's everyone. There's something more going on here than "most projects use Github."

According to most of my students, the hardest part of contributing to an open source project is finding one. I tell them to look for the software they use every day, to investigate. But the vast majority ends up finding "something that looks easy."

That’s when I realised that, all this time, my students had been searching for open source projects to contribute to on Github only. It’s not that everything is on Github; it’s that none of my students can imagine looking outside of Github!

The outlier? The one student who contributed to a project not on Github? We discussed his needs and I pointed him to the project he ended up choosing.

Github’s centralisation has made a huge part of the open source world invisible. Because of that, lots of projects tend to stay on Github or, like Python, to migrate to it.

The solution

Each year, students come up with very creative ways not to do what I expect while still passing. Last year, half of the class was suddenly committing reports with broken encoding in the file path. I had never seen that before and I asked how they managed to do it. It turns out that half the class was using VS Code on Windows to do something as simple as "git commit" and they couldn’t use the git command line.

This year, I forced them to use the command line on an open source OS, which solved the previous year’s issue. But a fair number of the reports are clearly ChatGPT-generated, which was less obvious last year. This is sad because it probably took them more effort to write the prompt and, well, those reports are mostly empty of substance. I would have preferred the prompt alone. I’m also sad they thought I would not notice.

But my main mistake was a decade-long one. For all those years, I asked my students to find a project to contribute to. So they blindly did. They didn’t try to think about it. They went to Github and started browsing projects.

For all those years, I involuntarily managed to teach my students that Open Source was a corner of the web, a Microsoft-managed repository of small software one can play with. Nothing serious.

This is all my fault.

I know the solution. Starting this year, students will be forced to contribute to a project they use, care about or, at the very least, truly want to use in the long term. Not one they found randomly on Github.

If they think they don’t use open source software, they should take a better look at their own stack.

And if they truly don’t use any open source software at all and don’t want to use any, why do they want to follow a course about the subject in the first place?

About the author

I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

January 04, 2026

January 03, 2026

I stop watching and downvote the moment your video has middle-of-screen subtitles. The same goes for huge, forced subtitles.

I remember when PHP 4 was a thing. jQuery was new and shiny. Sites were built with tables, not divs. Dreamweaver felt like a life hack. Designs were sliced in Photoshop. Databases lived in phpMyAdmin.

January 02, 2026

I am pleased to unveil the agenda for our two days dedicated to MySQL and its community just before FOSDEM. The preFOSDEM MySQL Belgian Days will take place on January 29 and 30th in Brussels. We received many excellent proposals from a wonderful panel of experienced, well-known speakers. We decided to provide as much content […]

December 31, 2025

Proposed diagnostic construct. Not recognized by any legitimate diagnostic body, though widely reinforced by social institutions.

Diagnostic Criteria

A. Persistent pattern of cognitive, emotional, and social functioning characterized by a strong preference for normative coherence, rapid closure of uncertainty, and limited tolerance for sustained depth—intellectual, experiential, or emotional—beginning in early socialization and present across multiple contexts (e.g. interpersonal relationships, workplace environments, family systems, cultural participation).

B. The pattern manifests through three (or more) of the following symptoms:

  1. Normative Rigidity
    Marked discomfort when encountering deviations from customary practices, beliefs, or emotional expressions, even when such deviations are demonstrably non-harmful or adaptive. Often expressed as “But why?” followed by silence rather than curiosity.
  2. Contextual Literalism
    Difficulty interpreting meaning, identity, or emotional communication outside their most common cultural framing; metaphor and subtext are tolerated primarily when socially standardized.
  3. Consensus-Seeking Reflex
    Habitual alignment with majority opinion, authority, or prevailing emotional norms when forming judgments, often prior to personal reflection or affective attunement.
  4. Change Aversion with Rationalization
    Resistance to novel ideas or emotional complexity, accompanied by post-hoc justifications framed as realism, pragmatism, or emotional maturity, rather than acknowledged emotional discomfort.
  5. Social Script Dependence
    Reliance on rehearsed conversational and emotional scripts (weather, productivity, polite outrage), and visible distress when interactions require unscripted vulnerability, prolonged emotional presence, or exploratory dialogue.
  6. Hierarchy Calibration Preoccupation
    Excessive attention to formal roles, relational labels, and status markers, such as job titles, relationship escalators, age-based authority, or institutional validation, with difficulty engaging others outside these frameworks as emotionally or epistemically equal.
  7. Ambiguity Intolerance
    A pronounced need to resolve uncertainty quickly—cognitively and emotionally, even at the cost of nuance. Mixed feelings, ambivalence, or unresolved emotional states may be experienced as distressing or unproductive. Questions with multiple valid answers may be experienced as irritating rather than interesting.
  8. Pathologizing the Outlier
    Tendency to interpret uncommon preferences, communication styles, atypical cognitive styles, emotional expressions, relational structures, or life choices as problems needing explanation, containment, or optimization.
  9. Empathy via Projection
    Assumption that others experience emotions in similar ways and intensities, leading to misattuned reassurance, premature advice, or minimization of divergent affective experiences, resulting in advice that begins with “If it were me…” and ends with confusion when it is, in fact, not them.
  10. Depth Avoidance in Sustained Inquiry
    Marked difficulty engaging in prolonged, high-resolution discussion of topics that extend beyond surface facts, sanctioned opinions, or immediately actionable conclusions. Deep exploration of systems, first principles, or existential implications is often curtailed.
  11. Diffuse Interest Profile
    A pattern of broad but shallow interests, with engagement driven primarily by social relevance or utility rather than intrinsic fascination. Mastery is rare; familiarity is common.
  12. Expertise Anxiety
    Discomfort in the presence of deep intellectual or emotional proficiency—either in oneself or others—leading to minimization, deflection, or reframing depth as excessive, obsessive, or impractical.
  13. Instrumental Curiosity
    Curiosity activated mainly when a topic yields immediate benefit. Curiosity pursued for its own sake may be regarded as indulgent, inefficient, or emotionally suspect.
  14. Affective Flattening in Non-Crisis Contexts
    A restricted range or shallowness of emotional experience outside socially sanctioned peaks (e.g. celebrations, emergencies). Subtle, slow-building, or internally complex emotional states may be under-recognized, quickly translated into simpler labels, or bypassed through distraction.
  15. Emotional Resolution Urgency
    A strong drive to “process,” “move on,” or “feel better” rapidly, often resulting in premature emotional closure. Emotional depth is equated with rumination rather than information.
  16. Vulnerability Time-Limiting
    Tolerance for emotional exposure is constrained by implicit time or intensity limits. Extended emotional presence—grief without deadlines, joy without justification, love without clear structure—may provoke discomfort or withdrawal.

C. Symptoms cause clinically significant impairment in adaptive curiosity, cross-cultural understanding, deep relational intimacy, sustained emotional attunement, the capacity to remain present with complex internal states (both one’s own and others’), or collaboration with neurodivergent individuals, particularly in rapidly changing environments or in relationships requiring long-term emotional nuance.

D. The presentation is not better explained by acute stress, lack of exposure, trauma-related emotional numbing, cultural display rules alone, or temporary social conformity for situational survival (e.g. customer service roles, family holidays).


Specifiers

  • With Strong Institutional Reinforcement (e.g. corporate culture, rigid schooling)
  • With Moral Certainty Features
  • Masked Presentation (appears emotionally open but only within safe, scripted bounds)
  • Late-Onset (often following promotion to middle management)

Course and Prognosis

NTSD is typically stable across adulthood. Improvement correlates with sustained exposure to emotional complexity without forced resolution, relationships that reward presence over performance, and practices that cultivate interoceptive awareness rather than emotional efficiency. Partial remission has been observed following prolonged engagement with artists, immigrants, queer communities, altered states, long-form grief, open-source software, or toddlers asking “why” without stopping.


Differential Diagnosis

Must be distinguished from:

  • Willful ignorance (which involves effort)
  • Malice (which involves intent)
  • Burnout (which improves with rest)
  • Actual lack of information (which improves with learning)

NTSD persists despite information.

December 30, 2025

When we moved to our new home, we looked for a new place to exercise. While we had been spoiled with a gym at the end of our old street, our new location required a half-hour commute, which quickly turned working out into a chore.

Instead, we opted to build a “home gym-ish”. We purchased an elliptical trainer and a Concept2 RowErg, which were used regularly, as well as some weights. While working out at home has its benefits (open 24/7, no commute, better music, shower access!), it still got a bit boring after a while.

I’ve never been a fan of running, and after my hip replacement it even felt completely off. Cycling is a means of transport to me - not a sport I enjoy. Don’t get me wrong: they are both fantastic for the cardiovascular system and building endurance, but they do not tick the “total-body” sport box. Running is high-impact, which is great for bone density in the legs, but it doesn’t do much for the upper body. Cycling on the other hand is a non-weight-bearing, low-impact sport that is gentle on the joints but can actually lead to lower bone density over time because your skeleton isn’t being “stressed” enough to stay strong.

Back in early 2021, I noticed a new CrossFit box called CrossFit Endgame had opened in our little town. I went to a try-out class, but as it was the height of COVID-19 restrictions, classes were outdoors and limited to 4 people. Add the fact that it was winter, and I left feeling extremely self-conscious and out of shape after that class. My general feeling was kind of ‘meh’, so I didn’t pursue it.

Fast forward to the end of 2021 - my partner had started CrossFit at CrossFit Gent as a perk of teaching yoga classes in that box. Seeing the progress she was making, I felt peer-pressured, er, quietly inspired (read: she didn’t do or say anything, it was all me :P), so in May 2022 I contacted the local box again, asking to do another try-out class. My idea was to buy a voucher for 10 sessions and see after that, but the owner (Jasper) talked me into getting a subscription for half a year, 1 class per week - the reasoning being that a subscription creates a habit, while a voucher usually just sits in a pocket gathering dust.

I took the bait and signed up.

One of the things that won me over was how infinitely scalable everything is. Coming in with mobility issues in one hip and a full replacement of the other (due to Legg–Calvé-Perthes), I was worried about certain movements, but the coaches helped by offering adjustments for the movements where necessary, scaling to my abilities, needs and level - whether it’s the weight on the bar or the complexity of the movement, they make sure you’re challenged but safe.

Those first few weeks I discovered muscles I didn’t know I had and they were all complaining - loudly. Eventually the complaining lessened and the Tuesday workouts became therapy - I’d walk in frustrated from work and walk out completely zen. It turns out, lifting heavy things with friends is cheaper than a therapist and twice as rewarding. Dropping heavy things is therapeutic on its own!

During that first half year I made a lot of new friends, got inspired by our coaches and learned many new movements - still being very mediocre at them at best :P

After those six months I did not hesitate and subscribed for a year of unlimited classes. I started going twice a week (Tuesday and Thursday evening), sometimes (as time permitted) adding a core-focused class to the mix. That meant a temporary increase in soreness, but my body adjusted and the sore days became fewer. I started noticing that I could do more; everything around the house - even the simplest of chores - felt easier!

By the end of 2023 a programming change was made which allowed me to pick up another class on Friday evening - a good way to start the weekend. Fast forward to today: I routinely go to 5 classes a week - Foundations, WODs and Core. Around that time the box rebranded to Endgame Functional Fitness, dropping the official CrossFit affiliation for ethical and financial reasons, though the methods and amazing community stayed exactly the same.

Box check-ins over time

It has become a part of who I am. Stronger, fitter, saner, happier and very grateful that Jasper took that gamble of opening up a box in a town he’d never heard of. And one of the best perks? Coach Ona, the super sweet Border Collie :)

Ona, the border collie

Currently the last lines of code are being added and reviewed to integrate a “wizard” tab in Autoptimize Pro, which should help new users switch between different presets while keeping backups of the original settings, so one can easily try out different optimization levels. Have a look at the screenshot below to see where we’re taking this. And while we’re at it: have a great 2026…

Source

December 29, 2025

Do tunnels of the (Belgian) Limburg coal mines run underneath your house? You may be able to tell from the “roughly accurate” map below, which I made out of curiosity based on the map from the Facebook page of the “Kempense Steenkoolmijnen – vriendenkring”. The positioning was done based on the course of the Maas, so it should be more or less correct.

Source

December 28, 2025

It’s been 1,067 days since I last posted something on this blog. And instead of writing the blog post I wanted to write, I did everything else.

December 25, 2025

We have reached the end of our series on deploying to OCI using the Hackathon Starter Kit. For this last article, we will see how to deploy an application using Helidon (Java), the MySQL REST Service, and OCI GenAI with LangChain4j. We use Helidon because it’s a cool, open-source framework developed by Oracle. It’s lightweight […]

December 24, 2025

I recently fell into one of those algorithmic rabbit holes that only the internet can provide. The spark was a YouTube Short by @TechWithHazem: a rapid-fire terminal demo showing a neat little text-processing trick built entirely out of classic Linux tools. No frameworks, no dependencies, just pipes, filters, and decades of accumulated wisdom compressed into under two minutes.

That’s the modern paradox of Unix & Linux culture: tools older than many of us are being rediscovered through vertical videos and autoplay feeds. A generation raised on Shorts and Reels is bumping into sort, uniq, and friends, often for the first time, and asking very reasonable questions like: wait, why are there two ways to do this?

So let’s talk about one of those deceptively small choices.


The question

What’s better?

sort -u

or

sort | uniq

At first glance, they seem equivalent. Both give you sorted, unique lines of text. Both appear in scripts, blog posts, and Stack Overflow answers. Both are “correct”.

But Linux has opinions, and those opinions are usually encoded in flags.


The short answer

sort -u is almost always better.

The longer answer is where the interesting bits live.


What actually happens

sort -u tells sort to do two things at once:

  • sort the input
  • suppress duplicate lines

That’s one program, one job, one set of buffers, and one round of temporary files. Fewer processes, less data sloshing around, and fewer opportunities for your CPU to sigh quietly.

By contrast, sort | uniq is a two-step relay race. sort does the sorting, then hands everything to uniq, which removes duplicates — but only if they’re adjacent. That adjacency requirement is why the sort is mandatory in the first place.

This pipeline works because Linux tools compose beautifully. But composition has a cost: an extra process, an extra pipe, and extra I/O.

On small inputs, you’ll never notice. On large ones, sort -u usually wins on performance and simplicity.
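
If you want to convince yourself of the adjacency detail, here is a throwaway example (any few lines with a non-adjacent duplicate will do):

printf 'b\na\nb\n' | uniq      # prints b, a, b: the duplicates are not adjacent, so uniq removes nothing
printf 'b\na\nb\n' | sort -u   # prints a, b: sorted and deduplicated in one pass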


Clarity matters too

There’s also a human factor.

When you see sort -u, the intent is explicit: “I want sorted, unique output.”
When you see sort | uniq, you have to mentally remember a historical detail: uniq only removes adjacent duplicates.

That knowledge is common among Linux people, but it’s not obvious. sort -u encodes the idea directly into the command.


When uniq still earns its keep

All that said, uniq is not obsolete. It just has a narrower, sharper purpose.

Use sort | uniq when you want things that sort -u cannot do, such as:

  • counting duplicates (uniq -c)
  • showing only duplicated lines (uniq -d)
  • showing only lines that occur once (uniq -u)

In those cases, uniq isn’t redundant — it’s the point.
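
A tiny illustration of the counting variant (the input is just a throwaway printf):

printf 'apple\nbanana\napple\napple\n' | sort | uniq -c
# uniq -c prefixes each distinct line with its number of occurrences: 3 apple, 1 banana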


A small philosophical note

This is one of those Linux moments that looks trivial but teaches a bigger lesson. Linux tools evolve. Sometimes functionality migrates inward, from pipelines into flags, because common patterns deserve first-class support.

sort -u is not “less Linuxy” than sort | uniq. It’s Linux noticing a habit and formalizing it.

The shell still lets you build LEGO castles out of pipes. It just also hands you pre-molded bricks when the shape is obvious.


The takeaway

If you just want unique, sorted lines:

sort -u

If you want insight about duplication:

sort | uniq …

Same ecosystem, different intentions.

And yes, it’s mildly delightful that a 1’30” YouTube Short can still provoke a discussion about tools designed in the 1970s. The terminal endures. The format changes. The ideas keep resurfacing — sorted, deduplicated, and ready for reuse.

The starter kit deploys a MySQL HeatWave DB System on OCI and enables the MySQL REST Service automatically: The REST Service enables us to provide access to data without requiring SQL. It also provides access to some Gen AI functionalities available in MySQL HeatWave. Adding data to MRS using Visual Studio Code To be able […]
We saw in part 6 how to use OCI’s GenAI Service. GenAI Service uses GPUs for the LLMs, but did you know it’s also possible to use GenAI directly in MySQL HeatWave? And by default, those LLMs will run on CPU. The cost will then be reduced. This means that when you are connected to […]

December 21, 2025

On my bike ride today, while riding through Borgharen, a walker who will forever remain unknown to me gave me a generous smile. Barely a second of connection, and then the sun broke through shortly afterwards; the winter solstice on the bike was memorable!

Source

December 19, 2025

In the previous articles [1], [2], [3], [4], [5], we saw how to easily and quickly deploy an application server and a database to OCI. We also noticed that we have multiple programming languages to choose from. In this article, we will see how to use OCI GenAI Service (some are also available with the […]

Prepare for That Stupid World

You probably heard about the Wall Street Journal story where they had a snack-vending machine run by a chatbot created by Anthropic.

At first glance, it is funny and it looks like journalists doing their job criticising the AI industry. If you are curious, the video is there (requires JS).

But what appears to be journalism is, in fact, pure advertising. For both WSJ and Anthropic. Look at how the WSJ journalists are presented as "world class", how unsubtle the Anthropic guy is when telling them they are the best, and how the journalists blush at it. If you take the story at face value, you are falling for the trap, which is simple: "AI is not really good but it's funny; we must improve it."

The first thing that blew my mind was how stupid the whole idea is. Think for one second. One full second. Why would you ever want to add a chatbot to a snack vending machine? The video states it clearly: the vending machine must still be stocked by humans. Customers must still order and take their snacks by themselves. The AI adds no value at all.

Automated snack vending machines have been a solved problem for nearly a century. Why would you want to make your vending machine more expensive, more error-prone, more fragile and less efficient for your customers?

What this video is really doing is normalising the fact that "even if it is completely stupid, AI will be everywhere, get used to it!"

The Anthropic guy himself doesn’t seem to believe his own lies, to the point of making me uncomfortable. Toward the end, he even tries to warn us: "Claude AI could run your business but you don’t want to come one day and see you have been locked out." At which point the journalist adds, "Or has ordered 100 PlayStations."

And then he gives up:

"Well, the best you can do is probably prepare for that world."

Still from the video where Anthropic’s employee says "probably prepare for that world"

None of the world class journalists seemed to care. They are probably too badly paid for that. I was astonished to see how proud they were, having spent literally hours chatting with a bot just to get a free coke, even queuing for the privilege of having a free coke. A coke that cost a few minutes of minimum-wage work.

So the whole thing is advertising a world where chatbots will be everywhere and where world-class workers will queue for ages just to get a free soda.

And the best advice about it is that you should probably prepare for that world.

About the author

I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

December 16, 2025

In part 4 of our series on the OCI Hackathon Starter Kit, we saw how to connect to the deployed MySQL HeatWave instance from our clients (MySQL Shell, MySQL Shell for VS Code, and Cloud Shell). In this post, we will see how to connect from an application using a connector. We will cover connections […]

December 15, 2025

How We Lost Communication to Entertainment

All our communication channels have morphed into content distribution networks. We are more and more entertained but less and less connected.

A few days ago, I did a controversial blog post about Pixelfed hurting the Fediverse. I defended the theory that, in a communication network, you hurt the trust in the whole network if you create clients that arbitrarily drop messages, something that Pixelfed is doing deliberately. It gathered a lot of reactions.

When I originally wrote this post, nearly one year ago, I thought that either I was missing something or Dansup, Pixelfed’s creator, was missing it. We could not both be right. But as the reactions piled in on the Fediverse, I realised that such irreconcilable opinions do not arise only from ignorance or oversight. It usually means that both parties have vastly different assumptions about the world. They don’t live in the same world.

Two incompatible universes

I started to see a pattern in the two kinds of reactions to my blog post.

There were people like me, often above 40, who like sending emails and browsing old-fashioned websites. We think of ActivityPub as a "communication protocol" between humans. As such, anything that implies losing messages without feedback is the worst thing that could happen. Not losing messages is the top priority of a communication protocol.

And then there are people like Dansup, who believe that ActivityPub is a content consumption protocol. It’s there for entertainment. You create as many accounts as the kinds of media you want to consume. Dansup himself is communicating through a Mastodon account, not a Pixelfed one. Many Pixelfed users also have a Mastodon account, and they never questioned that. They actually want multiple accounts for different use cases.

On the Fediverse threads, nearly all the people defending the Pixelfed philosophy posted from Mastodon accounts. They usually boasted about having both a Mastodon and a Pixelfed account.

A multiplicity of accounts

To me, the very goal of interoperability is not to force you into creating multiple accounts. Big Monopolies have managed to convince people that they need one account on each platform. This was done, on purpose, for purely unethical reasons in order to keep users captive.

That brainwash/marketing is so deeply entrenched that most people cannot see an alternative anymore. It looks like a natural law: you need an account on a platform to communicate with someone on that platform. That also explains why most politicians want to "regulate" Facebook or X. They think it is impossible not to be on those platforms. They believe those platforms are "public spaces" while they truly are "private spaces trying to destroy all other public spaces in order to get a monopoly."

People flock to the Fediverse with this philosophy of "one platform, one account", which makes no sense if you truly want to create a federated communication protocol like email or XMPP.

But Manuel Moreale cracked it for me: the Fediverse is not a communication network. ActivityPub is not a communication protocol. The spec says it: ActivityPub is a protocol to build a "social platform" whose goal is "to deliver content."

The ActivityPub protocol is a decentralised social networking protocol based upon the ActivityStreams 2.0 data format. It provides a client to server API for creating, updating and deleting content, as well as a federated server-to-server API for delivering notifications and content. (official W3C definition of ActivityPub)

No more communication

But aren’t social networks also communication networks? That’s what I thought. That’s how they historically were marketed. That’s what we all believed during the "Arab Spring."

But that was a lie. Communication networks are not profitable. Social networks are entertainment platforms, media consumption protocols. Historically, they disguised themselves as communication platforms to attract users and keep them captive.

The point was never to avoid missing a message sent from a fellow human being. The point was always to fill your time with "content."

We dreamed of decentralised social networks as "email 2.0." They truly are "television 2.0."

They are entertainment platforms that delegate media creation to the users themselves the same way Uber replaced taxis by having people drive others in their own car.

But what was created as "ride-sharing" was in fact a way to 1) destroy competition and 2) make a shittier service while people producing the work were paid less and lost labour rights. It was never about the social!

The lost messages

My own interpretation is that social media users don’t mind losing messages because they were raised on algorithmic platforms that did that all the time. They don’t see the point in trusting a platform because they never experienced a trusted means of communication.

Now that I write it, it may also explain why instant messaging became the dominant communication medium: because if you don’t receive an immediate answer, you don’t even trust the recipient to have received your messages. In fact, even if the message was received, you don’t even trust the recipient's attention span to remember the message.

Multiple studies have confirmed that we don’t remember the vast majority of what we see while doomscrolling. While the "view" was registered to increase statistics, we don’t have the slightest memory of most of that content, even after only a few seconds. It thus makes sense not to consider social media as a means of communication at all.

There’s no need for a reliable communication protocol if we assume that human brains are not reliable enough to handle asynchronous messages.

It’s not Dansup who is missing something. It is me who is ill-adapted to the current society. I understand now that Pixelfed was only following some design decisions and protocol abuses fathered by Mastodon. Pixelfed was my own "gotcha" moment because I never understood Instagram in the first place, and, in my eyes, Pixelfed was no better. But if you take that route, Mastodon is no better than Twitter.

Many reactions justly pointed out that other Fediverse tools such as PeerTube, WriteFreely, or Mobilizon were simply not displaying messages at all.

I didn’t consider it a big problem because they never pretended to do it in the first place. Nobody uses those tools to follow others. There’s no expectation. Those platforms are "publish only." But this is still a big flaw in the Fediverse! Someone could, using autocompletion, send a message pinging your PeerTube address and you will never see it. Try autocomplete "@ploum" from your Mastodon account and guess which suggestion is the only one that will send me a valid notification!

On a more positive note, I should give credit to Dansup for announcing that Pixelfed will soon allow people to optionally "not drop" text messages.

How we lost email

I cling to asynchronous reliable communications, but those are disappearing. I use email a lot because I see it as a true means of communication: reliable, asynchronous, decentralised, standardised, manageable offline with my own tools. But many people, even barely younger than me, tell me that email is "too formal" or "for old people" or "even worse than social network feeds."

And they are probably right. I like it because I’ve learned to use it. I apply a strong inbox 0 methodology. If I don’t reply or act on your email, it is because I decided not to. I’m actively keeping my inbox clean by sharing only disposable email addresses that I disable once they start to be spammed.

But for most people, their email inbox is simply one more feed full of bad advertising. They have a 4- or 5-digit unread count. They scroll through their inbox like they do through their social media feeds.

Boringness of communications

The main problem with reliable communication protocols? They are a mostly solved problem. Build simple websites, read RSS feeds, write emails. Use IRC and XMPP if you truly want real-time communication. These tools work, and they work great.

And because of that, they are boring.

Communications protocols are boring. They don’t give you that well-studied random hit of dopamine. They don’t make you addicted.

They don’t make you addicted which means they are not hugely profitable and thus are not advertised. They are not new. They are not as shiny as a new app or a new random chatbot.

The problem with communication protocols was never the protocol part. It’s the communication part. A few sad humans never wanted to communicate in the first place and managed to become billionaires by convincing the rest of mankind that being entertained is better than communicating with other humans.

As long as I’m not alone

We believe that a communication network must reach a critical mass to be really useful. People stay on Facebook to "stay in touch with the majority." I don’t believe that lie anymore. I’m falling back to good old mailing lists. I’m reading the Web and Gemini while offline through Offpunk. I also handle my emails asynchronously while offline.

I may be part of an endangered species.

It doesn’t matter. I made peace with the fact that I will never get in touch with everyone. As long as there are people posting on their gemlogs or blogs with RSS feeds, as long as there are people willing to read my emails without automatically summarising them, there will be a place for those who want to simply communicate. A protected reserve.

You are welcome to join!

https://ploum.net/files/framagroupes.jpg

About the author

I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

December 12, 2025

Today marks exactly 25 years since I registered amedee.be. On 12 December 2000, at 17:15 CET, my own domain officially entered the world. It feels like a different era: an internet of static pages, squealing dial-up modems, and websites you assembled yourself with HTML, stubbornness, and whatever tools you could scrape together. 🧑‍💻📟

I had websites before that—my first one must have been around 1996, hosted on university servers or one of those free hosting platforms that have long since disappeared. There is no trace of those early experiments, and that’s probably for the best. Frames, animated GIFs, questionable colour schemes… it was all part of the charm. 💾✨

But amedee.be was the moment I claimed a place on the internet that was truly mine. And not just a website: from the very beginning, I also used the domain for email, which added a level of permanence and identity that those free services never could. 📬

Over the past 25 years, I have used more content management systems than I can easily list. I started with plain static HTML. Then came a parade of platforms that now feel almost archaeological: self-written Perl scripts, TikiWiki, XOOPS, Drupal… and eventually WordPress, where the site still lives today. I’m probably forgetting a few—experience tends to blur after a quarter century online. 🗂🕸

Not all of that content survived. I’ve lost plenty along the way: server crashes, rushed or ill-planned CMS migrations, and the occasional period of heroic under-backing-up. I hope I’ve learned something from each of those episodes. Fortunately, parts of the site’s history can still be explored through the Wayback Machine at the Internet Archive—a kind of external memory for the things I didn’t manage to preserve myself. 📉🧠📚

The hosting story is just as varied. The site spent many years at Hetzner, had a period on AWS, and has been running on DigitalOcean for about a year now. I’m sure there were other stops in between—ones I may have forgotten for good reasons. ☁🔧

What has remained constant is this: amedee.be is my space to write, tinker, and occasionally publish something that turns out useful for someone else. A digital layer of 25 years is nothing to take lightly. It feels a bit like personal archaeology—still growing with each passing year. 🏺📝

Here’s to the next 25 years. I’m curious which tools, platforms, ideas, and inevitable mishaps I’ll encounter along the way. One thing is certain: as long as the internet exists, I’ll be here somewhere. 🚀

December 11, 2025

Let’s now see how we can connect to our MySQL HeatWave DB System, which was deployed with the OCI Hackathon Starter Kit in part 1. We have multiple possibilities to connect to the DB System, and we will use three of them: MySQL Shell in the command line MySQL Shell is already installed on the […]

December 10, 2025

The autocompletion of our intentions

When I started using my first smartphone, in 2012, I obviously used the default keyboard, which offered autocompletion.

It only took a few days for me to be deeply shocked. The autocompletion suggested words that fit perfectly but were not the ones I had in mind. By accepting an autocompletion out of a desire to save a few finger presses, I ended up with a sentence different from what I had initially intended. I was altering the course of my thoughts to fit the algorithm!

It was shocking!

Having switched to the Bépo layout a few years earlier, and having discovered the power of touch-typing for refining my ideas, I could not imagine letting a machine dictate my thinking, even for a text as mundane as an SMS. So I went looking for a keyboard optimised for use on a tiny touchscreen, but without autocompletion. I found MessagEase, which I used for years before switching to ThumbKey, a free-software version of the former.

The shock was even more violent when suggested replies appeared in the Gmail interface. My first experience with that system was being offered several ways of answering in the affirmative to a professional email I wanted to answer negatively. With horror, I noticed in myself a vague instinct to click, just to get that chore of an email out of the way faster.

That experience inspired my short story "Les imposteurs", which can be read in the collection "Stagiaire au spatioport Omega 3000 et autres joyeusetés que nous réserve le futur" (which happens to be available at 50% off until December 15, or at the normal price, but without shipping costs, from your local bookshop).

Autocompletion manipulates our intentions, there is no doubt about it. And if there is one thing about myself I want to preserve, it is my brain and my ideas. Just as a footballer protects his legs and a pianist protects his hands, I cherish and protect my brain and my free will. To the point of never drinking alcohol and never taking any drug: I don't want to alter my perceptions but, on the contrary, to sharpen them.

My brain is the most precious thing I have; even the most basic autocompletion is a direct attack on my free will.

But with chatbots, a genuine economy of intention is now being put in place. Because if the next versions of ChatGPT are not better at answering your questions, they will be better at predicting them.

Not through any power of divination or telepathy, but because they will have influenced you, steering you in the direction they have chosen, namely the most profitable one.

Part of the disproportionate interest that politicians and CEOs take in chatbots clearly comes from their incompetence, if not their stupidity. Since their job is to say whatever the audience wants to hear, even when it makes no sense, they are sincerely astonished to see a machine capable of replacing them. And they are most often incapable of perceiving that not everyone is like them, that not everyone pretends to understand all day long, that not everyone is Julius.

But among the most devious and the most intelligent, part of that interest can also be explained by the potential for manipulating crowds. Where Facebook and TikTok have occasionally influenced major elections through virtual crowd movements, the ubiquity of ChatGPT and its kind allows total control over the most intimate thoughts of every user.

After all, in my neighbourhood grocery store, I did overhear a woman bragging to a friend about using ChatGPT as an adviser for her love life. From there, it is trivial to modify the code so that women become more docile, more inclined to sacrifice their personal aspirations for their partner's, to pop out more children and raise them according to ChatGPTesque precepts.

Unlike fixing "hallucinations", an unsolvable problem because chatbots have no notion of epistemological truth, introducing biases is trivial. In fact, it has been demonstrated several times that such biases already exist. We just naively assumed they were unintentional, mechanical.

Whereas they are a formidable product to sell to every would-be dictator. A product that is certainly profitable and not very far from the ad targeting that Facebook and Google already sell.

A product that appears perfectly ethical, appropriate and even beneficial to humanity. At least if we trust what ChatGPT will tell us. Which will, by the way, back up its claims by pointing to several scientific papers. Written with its help.

About the author:

I'm Ploum and I have just published Bikepunk, an ecological cycling fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the complete RSS feed.

December 09, 2025

The room formerly known as “Lightning Talks” is now known as /dev/random. After 25 years, we say goodbye to the old Lightning Talks format. In its place, we have two new things! /dev/random: 15-minute talks on a random, interesting, FOSS-related subject, just like the older Lightning Talks. New Lightning Talks: a highly condensed batch of 5-minute quick talks in the main auditorium on various FOSS-related subjects! Last year we experimented with running a more spontaneous lightning talk format, with a submission deadline closer to the event and strict short time limits (under five minutes) for each speaker. The experiment…

December 05, 2025

Fuck, I'm turning 57 next week! Actually, I'm quite at peace with that, just as with my balding head. But on the other hand, people occasionally ask themselves who they are, and I'm no exception. Among other things, I am…

Source

But it's prettier!

The new version of your website is unusable, your emails are long and unreadable, your slides and your charts are hollow. But I hear your argument justifying all this crap:

It's prettier!

We wreck everything to make it "pretty". Even application icons have become indistinguishable from one another, because "it's prettier".

Pretty is killing us!

R.L. Dane realises that what he misses most is his black-and-white computer. Not because he wants to go back to black and white, but because the fact that some computers were black and white forced designers to create interfaces that were readable and simple under all conditions.

That is exactly what I hold against all modern websites and all apps. Their designers should be forced to use them on an old computer or a slightly dated smartphone. It's pretty only if you happen to have the latest smartphone with the latest fashionable spyware.

I utterly despise the "pretty". Pretty is total unculture, it's Trump slapping gilding everywhere, it's the reign of kitsch and incompetence. Your tool is pretty simply because you don't know how to use it! Because you have forgotten that competent people might use it. Pretty is the opposite of practical.

Pretty is the opposite of beautiful.

The beautiful is deep, artistic, thoughtful, simple. The beautiful requires an education, a difficulty. A seasoned craftsman marvels at the finesse and simplicity of a tool. The brain-dead consumer prefers the glittery version. The music lover appreciates a performance in a concert hall, whereas your connected speaker imposes a dull, flat noise on every passer-by in the park. In French, the pretty ("joli") adds to the beautiful ("beau") a single letter that transforms it: the "beauf", the philistine!

Yes, your ChatGPTesque logorrhoea is philistine. Your Midjourney-generated images are the height of bad taste. Your YouTube channel is frighteningly banal. Your podcasts are nothing but boredom-filler for your jog. The umpteenth redesign of your app is nothing but a mark of your lack of culture. Your PowerPoint slides and your LinkedIn posts border on clinical cretinism.

Thierry Crouzet talks about the addiction to pleasing. But even that is wrong. We don't really want to please, just to obey algorithms in order to increase our follower count. We want a profile that looks "pretty".

Against candy-pink kitsch, headbanging. Against technofascism, uncool technopunk!

Yes, but it works!

Pretty is a candy, a sweet. That has never been truer than on social networks, an analogy I have been using for more than a decade.

On their gemlog, Asquare digs deeper into the concept in a very interesting way: they suggest that the worse the sweets are, the more of them we consume. (Yes, it's on Gemini, something not pretty that requires a dedicated browser and has nothing to do with Google's AI.)

And it makes sense: if you eat a piece of excellent chocolate with a cup of quality tea, you won't feel like having 10 pieces, like stuffing yourself. Conversely, industrial chocolate gives a slight satisfaction, but not enough of one, so you always want more.

It's the same with social networks: the more you scroll through vaguely interesting stuff, the more you keep going. Crap is addictive! And the more content there is, the lower the average quality drops; this has been demonstrated.

Which is unfair to crap because, as many readers point out to me, crap is an excellent compost for growing good things. That is not the case with social networks, which mostly grow cretinism and fascism.

In contrast to this "pretty crap", if, like me, you subscribe to excellent blogs, you read an article and it makes you think. Thierry Crouzet's November notebook, for instance, gives me a lot to think about. After reading it, I don't feel like flitting around. I stop, I ask myself questions, I want to think it over, but also to savour it.

But, for me, nothing beats the flavour, the beauty of a good book!

The craftsmanship behind the beauty of books

I meet too many people who confide that they love reading but "no longer have time to read". The very same people, however, are hyperactive on social networks and in endless WhatsApp or Discord groups. It's like claiming you're not hungry enough to eat your vegetables because you've been eating candy all day. Of course you have: there's a dispenser in your pocket! And since the candy is never really satisfying, you grab another one… The same goes for some mass-produced bestsellers!

But I wrote above that appreciating beauty requires competence. The PVH publishing house has decided to lay itself bare and expose the work behind each book in order to justify the price of the object. It's something I keep noticing: the better you understand a piece of work, the more you appreciate it. The exact opposite of shrink-wrapped industrial food, which relies on you "not knowing how it's made".

Besides those explanations, two of my books are 50% off until December 15. If there is no bookshop near you, it's a good opportunity to make the shipping costs worthwhile (they have skyrocketed, thank you French postal service).

The craftsmanship behind the writing

PVH has just released a fantasy novel written ten-handed: "Le bastion des dégradés".

Julien Hirt, one of the co-authors and the author of Carcinopolis (which I warmly recommend if you are not too squeamish), describes the process in a fascinating post.

Authors are like cats: it is notoriously difficult to teach them to march in step.

Reading the post, the first thing that comes to mind is that I can't wait to read this novel. The authors are all friends of mine, I have read at least one novel by each of them, and the mix is bound to be mind-blowing!

The second thing is that it makes me want to do the same. I wish I lived in Switzerland. As Julien says, French-speaking Switzerland is to fantasy what Scandinavia is to crime fiction. But I am in Belgium!

That said, I admit I am more SF than fantasy, and writing in a cloud while chatting would be very, very hard for me. I would rather write in a git repository while discussing on a mailing list, a bit as if I were developing free software. As Marcello Vitali-Rosati says, the tool has an enormous influence on the writing.

I would more readily be part of a geek-SF collective, in the vein of Neal Stephenson/Charlie Stross/Cory Doctorow. But, as Julien rightly says, you have to find complementary skills. Geek-SF writers too often lack poetry.

I keep postponing the sequel to Printeurs, but the more I think about it, the more I believe it could be a collective project. I really like "Chroniques d'un crevard", for example, a short story from the Recueil de Nakamoto. I find its universe perfectly compatible with that of Printeurs.

Finding the beautiful behind the pretty

Do you see the result? Trying to consume good things gives me ideas and makes me want to produce things myself. I feel gratitude towards the people who write things I love and with whom I can exchange "between human beings". It is sometimes so strong that I feel like I am living in a golden age!

Perhaps a little too much so, because too many ideas are jostling in my head and I meet too many interesting people. In the end, beauty is everywhere as soon as you make the effort to keep the "pretty" at bay. If there weren't so many beautiful things to discover, maybe I could devote more time to writing…

Beneath the pretty cobblestones, the beauty of the beach!

About the author:

I am Ploum and I have just published Bikepunk, an eco-cycling fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

December 03, 2025

Or How I Discovered That Fusion Is… Fine, I Guess 🕺

Last night I did something new: I went fusion dancing for the first time.
Yes, fusion — that mysterious realm where dancers claim to “just feel the music,” which is usually code for nobody knows what we’re doing but we vibe anyway.
The setting: a church in Ghent.
The vibe: incense-free, spiritually confusing. ⛪

Spoiler: it was okay.
Nice to try once. Probably not my new religion.

Before anyone sharpens their pitchforks:
Lene (Kula Dance) did an absolutely brilliant job organizing this.
It was the first fusion event in Ghent, she put her whole heart into it, the vibe was warm and welcoming, and this is not a criticism of her or the atmosphere she created.
This post is purely about my personal dance preferences, which are… highly specific, let’s call it that.

But let’s zoom out. Because at this point I’ve sampled enough dance styles to write my own David Attenborough documentary, except with more sweat and fewer migratory birds. 🐦

Below: my completely subjective, highly scientific taxonomy of partner dance communities, observed in their natural habitats.


🎻 Balfolk – Home Sweet Home

Balfolk is where I grew up as a dancer — the motherland of flow, warmth, and dancing like you’re collectively auditioning for a Scandinavian fairy tale.

There’s connection, community, live music, soft embraces, swirling mazurkas, and just the right amount of emotional intimacy without anyone pretending to unlock your chakras.

Balfolk people: friendly, grounded, slightly nerdy, and dangerously good at hugs.

Verdict: My natural habitat. My comfort food. My baseline for judging all other styles. ❤


💫 Fusion: A Beautiful Thing That Might Not Be My Thing

Fusion isn’t a dance style — it’s a philosophical suggestion.

“Take everything you’ve ever learned and… improvise.”

Fusion dancers will tell you fusion is everything.
Which, suspiciously, also means it is nothing.

It’s not a style; it’s a choose-your-own-adventure.
You take whatever dance language you know and try to merge it with someone else’s dance language, and pray the resulting dialect is mutually intelligible.

I had a fun evening, truly. It was lovely to see familiar faces, and again: Lene absolutely nailed the organization. Also a big thanks to Corentin for the music!
But for me personally, fusion sometimes has:

  • a bit too much freedom
  • a bit too little structure
  • and a wildly varying “shared vocabulary” depending on who you’re holding

One dance feels like tango in slow motion, the next like zouk without the hair flips, the next like someone attempting tai chi with interpretative enthusiasm. Mostly it's an exercise in guessing whether your partner is leading, following, improvising, or attempting contemporary contact improv for the first time.

Beautiful when it works. Less so when it doesn’t.
And all of that randomly in a church in Ghent on a weeknight.

Verdict: Fun to try once, but I’m not currently planning my life around it. 😅


🤸 Contact Improvisation: Gravity’s Favorite Dance Style

Contact improv deserves its own category because it’s fusion’s feral cousin.

It’s the dance style where everyone pretends it’s totally normal to roll on the floor with strangers while discussing weight sharing and listening with your skin.

Contact improv can be magical — bold, creative, playful, curious, physical, surprising, expressive.
It can also be:

  • accidentally elbowing someone in the ribs
  • getting pinned under a “creative lift” gone wrong
  • wondering why everyone else looks blissful while you’re trying not to faceplant
  • ending up in a cuddle pile you did not sign up for

It can be exactly the moment where my brain goes:

“Ah. So this is where my comfort zone ends.”

It’s partnered physics homework.
Sometimes beautiful, sometimes confusing, sometimes suspiciously close to a yoga class that escaped supervision.

I absolutely respect the dancers who dive into weight-sharing, rolling, lifting, sliding, and all that sculptural body-physics magic.
But my personal dance style is:

  • musical
  • playful
  • partner-oriented
  • rhythm-based
  • and preferably done without accidentally mounting someone like a confused koala 🐨

Verdict: Fascinating to try, excellent for body awareness, mesmerizing to observe, but not my go-to when I just want to dance and not reenact two otters experimenting with buoyancy. 🦦 Probably not something I’ll ever do weekly.


🪕 Contra: The Holy Grail of Joyful Chaos

Contra is basically balfolk after three coffees.
People line up, the caller shouts things, everyone spins, nobody knows who they’re dancing with and nobody cares. It’s wholesome, joyful, fast, structured, musical, social, and somehow everyone becomes instantly attractive while doing it.

Verdict: YES. Inject directly into my bloodstream. 💉


🍻 Ceilidh: Same Energy, More Shouting

Ceilidh is what you get when Contra and Guinness have a love child.
It’s rowdy, chaotic, and absolutely nobody takes themselves seriously — not even the guy wearing a kilt with questionable underwear decisions. It’s more shouting, more laughter, more giggling at your own mistakes, and occasionally someone yeeting themselves across the room.

Verdict: Also YES. My natural ecosystem.


🇧🇷 Forró: Balfolk, but Warmer

If mazurka went on Erasmus in Brazil and came back with stories of sunshine and hip movement, you’d get Forró.

Close embrace? Check.
Playfulness? Check.
Techniques that look easy until you attempt them and fall over? Check.
I’m convinced I would adore forró.

Verdict: Where are the damn lessons in Ghent? Brussels if we really have to. Asking for a friend. (The friend is me.) 😉


🕺 Lindy Hop & West Coast Swing: Fun… But the Vibe?

Both look amazing — great music, athletic energy, dynamic, cool moves, full of personality.
But sometimes the community feels a tiny bit like:

“If you’re not wearing vintage shoes and triple-stepping since birth, who even are you?”

It’s not that the dancers are bad — they’re great.
It’s just… the pretentiousness.

Verdict: Lovely to watch, less lovely to join.
Still looking for a group without the subtle “audition for fame-school jazz ensemble” energy.


🌊 Zouk: The Idea Pot

Zouk dancers move like water. Or like very bendy cats.
It’s sexy, flowy, and full of body isolations that make you reconsider your spine’s architecture.

I’m not planning to become a zouk person, but I am planning to steal their ideas.
Chest isolations?
Head rolls?
Wavy body movements?
Yes please. For flavour. Not for full conversion.

Verdict: Excellent expansion pack, questionable main quest.


💃 Salsa, Bachata & Friends: Respectfully… No

I tried. I really did.
I know people love them.
But the Latin socials generally radiate too much:

  • machismo
  • perfume
  • nightclub energy
  • “look at my hips” nationalism
  • and questionable gender-role nostalgia

If you love it, great.
If you’re me: no, no, absolutely not, thank you.

Verdict: ew, ew, nooo. 🪳
Fantastic for others. Not for me.


🍷 Tango: The Forbidden Fruit

Tango is elegant, intimate, dramatic… and the community is a whole ecosystem on its own.

There are scenes where people dance with poetic tenderness, and scenes where people glare across the room using century-old codified eyebrow signals that might accidentally summon a demon. 👀

I like tango a lot — I just need to find a community that doesn’t feel like I’m intruding on someone’s ancestral mating ritual. And where nobody hisses if your embrace is 3 mm off the sacred norm.

Verdict: Promising, if I find the right humans.


🎩 Ballroom: Elegance With a Rulebook Thicker Than a Bible

Ballroom dancers glide across the floor like aristocrats at a diplomatic gala — smooth, flawless, elegant, and somehow always looking like they can hear a string quartet even when Beyoncé is playing.

It’s beautiful. Truly.
Also: terrifying.

Ballroom is the only dance style where I’m convinced the shoes judge you.

Everything is codified — posture, frame, foot angle, when to breathe, how much you’re allowed to look at your partner before the gods of Standard strike you down with a minus-10 penalty.

The dancers?
Immaculate. Shiny. Laser-focused.
Half angel, half geometry teacher.

I admire Ballroom deeply… from a safe distance.

My internal monologue when watching it:
“Gorgeous! Stunning! Very impressive!”
My internal monologue imagining myself doing it:
“Nope. My spine wasn’t built for this. I slouch like a relaxed accordion.”

Verdict: Respect, awe, and zero practical intention of joining.
I love dancing — but I’m not ready to pledge allegiance to the International Order of Perfect Posture. 🕴


🧘‍♂️ Ecstatic Dance / 5 Rhythms / Biodanza / Tantric Whatever

Look.
I’m trying to be polite.
But if I wanted to flail around barefoot while being spiritually judged by someone named Moonfeather, I’d just do yoga in the wrong class.

I appreciate the concept of moving freely.
I do not appreciate:

  • uninvited aura readings
  • unclear boundaries
  • workshops that smell like kombucha
  • communities where “I feel called to share” takes 20 minutes

And also: what are we doing? Therapy? Dance? Summoning a forest deity? 🧚

Verdict: Too much floaty spirituality, not enough actual dancing.
Hard pass. ✨


📝 Conclusion

I’m a simple dancer.
Give me clear structure (contra), playful chaos (ceilidh), heartfelt connection (balfolk), or Brazilian sunshine vibes (forró).

Fusion was fun to try, and I’m genuinely grateful it exists — and grateful to the people like Lene who pour time and energy into creating new dance spaces in Ghent. 🙌

But for me personally?
Fusion can stay in the category of “fun experiment,” but I won’t be selling all my worldly possessions to follow the Church of Expressive Improvisation any time soon.
I’ll stay in my natural habitat: balfolk, contra, ceilidh, and anything that combines playfulness, partnership, and structure.

If you see me in a dance hall, assume I’m there for the joy, the flow, and preferably fewer incense-burning hippies. 🕯

Still: I’m glad I went.
Trying new things is half the adventure.
Knowing what you like is the other half.

And I’m getting pretty damn good at that. 💛

Amen.
(Fitting, since I wrote this after dancing in a church.)

November 21, 2025

For $reasons there was a Citrix Workspace installation on my partner’s computer. It hadn’t been used in forever, and I could not get rid of it either: uninstalling failed, and so did upgrading.

Citrix publishes a Citrix Cleanup tool, which for some obscure reason they’ve decided to put behind a login wall: you can only obtain it directly if your company has an account with Citrix.

Luckily there’s an easy way to obtain it: download the latest Citrix Workspace app and unpack the downloaded file using e.g. 7-Zip. You’ll find the Cleanup Tool within :)
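For example (a minimal sketch: the installer file name and the output folder are assumptions, and it presumes 7-Zip’s command-line tool 7z is on your PATH), the following extracts the whole package, Cleanup Tool included, without running the installer:

7z x CitrixWorkspaceApp.exe -oCitrixExtracted

Then look for the Cleanup Tool executable inside the extracted folder.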

November 19, 2025

We saw in part 1 how to deploy our starter kit in OCI, and in part 2 how to connect to the compute instance. We will now check which development languages are available on the compute instance acting as the application server. After that, we will see how easy it is to install a new […]