Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

July 02, 2025

A few weeks ago, I was knee-deep in CSV files. Not the fun kind. These were automatically generated reports from Cisco IronPort, and they weren’t exactly what I’d call analysis-friendly. Think: dozens of columns wide, thousands of rows, with summary data buried in awkward corners.

I was trying to make sense of incoming mail categories—Spam, Clean, Malware—and the numbers that went with them. Naturally, I opened the file in Excel, intending to wrangle the data manually like I usually do. You know: transpose the table, delete some columns, rename a few headers, calculate percentages… the usual grunt work.

But something was different this time. I noticed the "Get & Transform" section in Excel's Data ribbon. I had clicked it before, but this time I gave it a real shot. I selected "From Text/CSV", and suddenly I was in a whole new environment: Power Query Editor.


🤯 Wait, What Is Power Query?

For those who haven’t met it yet, Power Query is a powerful tool in Excel (and also in Power BI) that lets you import, clean, transform, and reshape data before it even hits your spreadsheet. It uses a language called M, but you don’t really have to write code—although I quickly did, of course, because I can’t help myself.

In the editor, every transformation step is recorded. You can rename columns, remove rows, change data types, calculate new columns—all through a clean interface. And once you’re done, you just load the result into Excel. Even better: you can refresh it with one click when the source file updates.


🧪 From Curiosity to Control

Back to my IronPort report. I used Power Query to:

  • Transpose the data (turn columns into rows),
  • Remove columns I didn’t need,
  • Rename columns to something meaningful,
  • Convert text values to numbers,
  • Calculate the percentage of each message category relative to the total.

All without touching a single cell in Excel manually. What would have taken 15+ minutes and been error-prone became a repeatable, refreshable process. I even added a "Percent" column that showed something like 53.4%—formatted just the way I wanted.
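
As an aside, the same reshaping can be sketched on the command line. This is not Power Query or Excel, just a rough analogue using GNU datamash and awk, and report.csv with a one-category-per-column layout is an assumption for illustration:

# Hypothetical command-line analogue of the steps above: transpose the
# IronPort export, then compute each category's share of the total.
datamash -t, transpose < report.csv > by-category.csv
awk -F, '{ count[$1] = $2; total += $2 }
         END { for (c in count) printf "%s,%.1f%%\n", c, 100 * count[c] / total }' by-category.csv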


šŸ¤“ The Geeky Bit (Optional)

I quickly opened the Advanced Editor to look at the underlying M code. It was readable! With a bit of trial and error, I started customizing my steps, renaming variables for clarity, and turning a throwaway transformation into a well-documented process.

This was the moment it clicked: Power Query is not just a tool; it’s a pipeline.


šŸ’” Lessons Learned

  • Sometimes it pays to explore what’s already in the software you use every day.
  • Excel is much more powerful than most people realize.
  • Power Query turns tedious cleanup work into something maintainable and even elegant.
  • If you do something in Excel more than once, Power Query is probably the better way.

šŸŽÆ What’s Next?

I’m already thinking about integrating this into more of my work. Whether it’s cleaning exported logs, combining reports, or prepping data for dashboards, Power Query is now part of my toolkit.

If you’ve never used it, give it a try. You might accidentally discover your next favorite tool—just like I did.


Have you used Power Query before? Let me know your tips or war stories in the comments!

July 01, 2025

An astronaut (Cloudflare) facing giant glowing structures (crawlers) drawing energy in an alien sunset landscape.

AI is rewriting the rules of how we work and create. Expert developers can now build faster, non-developers can build software, research is accelerating, and human communication is improving. In the next 10 years, we'll probably see a 1,000x increase in AI demand. That is why Drupal is investing heavily in AI.

But at the same time, AI companies are breaking the web's fundamental economic model. This problem demands our attention.

The AI extraction problem

For 25 years, we built the Open Web on an implicit agreement: search engines could index our content because they sent users back to our websites. That model helped sustain blogs, news sites, and even open source projects.

AI companies broke that model. They train on our work and answer questions directly in their own interfaces, cutting creators out entirely. Anthropic's crawler reportedly makes 70,000 website requests for every single visitor it sends back. That is extraction, not exchange.

This is the Makers and Takers problem all over again.

The damage is real:

  • Chegg, an online learning platform, filed an antitrust lawsuit against Google, claiming that AI-powered search answers have crushed their website traffic and revenue.
  • Stack Overflow has seen a significant drop in daily active users and new questions (about 25-50%), as more developers turn to ChatGPT for faster answers.
  • I recently spoke with a recipe blogger who is a solo entrepreneur. With fewer visitors, they're earning less from advertising. They poured their heart, craft, and sweat into creating a high-quality recipe website, but now they believe their small business won't survive.

None of this should surprise us. According to Similarweb, since Google launched "AI Overviews", the number of searches that result in no click-throughs has increased from 56% in May 2024 to 69% in May 2025, meaning users get their answers directly on the results page.

This "zero-click" phenomenon reinforces the shift I described in my 2015 post, "The Big Reverse of the Web". Ten years ago, I argued that the web was moving away from sending visitors out to independent sites and instead keeping them on centralized platforms, all in the name of providing a faster and more seamless user experience.

However, the picture isn't entirely negative. Some companies find that visitors from AI tools, while small in volume, convert at much higher rates. At Acquia, the company I co-founded, traffic from AI chatbots makes up less than 1 percent of total visitors but converts at over 6 percent, compared to typical rates of 2 to 3 percent. We are still relatively early in the AI adoption cycle, so time will tell how this trend evolves, how marketers adapt, and what new opportunities it might create.

Finding a new equilibrium

There is a reason this trend has taken hold: users love it. AI-generated answers provide instant, direct information without extra clicks. It makes traditional search engines look complicated by comparison.

But this improved user experience comes at a long-term cost. When value is extracted without supporting the websites and authors behind it, it threatens the sustainability of the content we all rely on.

I fully support improving the user experience. That should always come first. But it also needs to be balanced with fair support for creators and the Open Web.

We should design systems that share value more fairly among users, AI companies, and creators. We need a new equilibrium that sustains creative work, preserves the Open Web, and still delivers the seamless experiences users expect.

Some might worry it is already too late, since large AI companies have massive scraped datasets and can generate synthetic data to fill gaps. But I'm not so sure. The web will keep evolving for decades, and no model can stay truly relevant without fresh, high-quality content.

From voluntary rules to enforcement

We have robots.txt, a simple text file that tells crawlers which parts of a website they can access. But it's purely voluntary. Creative Commons launched CC Signals last week, allowing content creators to signal how AI can reuse their work. But both robots.txt and CC Signals are "social contracts" that are hard to enforce.

Today, Cloudflare announced they will default to blocking AI crawlers from accessing content. This change lets website owners decide whether to allow access and whether to negotiate compensation. Cloudflare handles 20% of all web traffic. When an AI crawler tries to access a website protected by Cloudflare, it must pass through Cloudflare's servers first. This allows Cloudflare to detect crawlers that ignore robots.txt directives and block them.

This marks a shift from purely voluntary signals to actual technical enforcement. Large sites could already afford their own infrastructure to detect and block crawlers or negotiate licensing deals directly. For example, Reddit signed a $60 million annual deal with Google to license its content for AI training.

However, most content creators, like you and me, can do neither.

Cloudflare's actions establish a crucial principle: AI training data has a price, and creators deserve to share in the value AI generates from their work.

The missing piece: content licensing marketplaces

Accessible enforcement infrastructure is step one, and Cloudflare now provides that. Step two would be a content licensing marketplace that helps broker deals between AI companies and content creators at any scale. This would move us from simply blocking to creating a fair economic exchange.

To the best of my knowledge, such marketplaces do not exist yet, but the building blocks are starting to emerge. Matthew Prince, CEO of Cloudflare, has hinted that Cloudflare may be working on building such a marketplace, and I think it is a great idea.

I don't know what that will look like, but I imagine something like Shutterstock for AI training data, combined with programmatic pricing like Google Ads. On Shutterstock, photographers upload images, set licensing terms, and earn money when companies license their photos. Google Ads automatically prices and places millions of ads without manual negotiations. A future content licensing marketplace could work in a similar way: creators would set licensing terms (like they do on Shutterstock), while automated systems manage pricing and transactions (as Google Ads does).

Today, only large platforms like Reddit can negotiate direct licensing deals with AI companies. A marketplace with programmatic pricing would make licensing accessible to creators of all sizes. Instead of relying on manual negotiations or being scraped for free, creators could opt into fair, programmatic licensing programs.

This would transform the dynamic from adversarial blocking to collaborative value creation. Creators get compensated. AI companies get legal, high-quality training data. Users benefit from better AI tools built on ethically sourced content.

Making the Open Web sustainable

We built the Open Web to democratize access to knowledge and online publishing. AI advances this mission of democratizing knowledge. But we also need to ensure the people who write, record, code, and share that knowledge aren't left behind.

The issue is not that AI exists. The problem is that we have not built economic systems to support the people and organizations that AI relies on. This affects independent bloggers, large media companies, and open source maintainers whose code and documentation train coding assistants.

Call me naive, but I believe AI companies want to work with content creators to solve this. Their challenge is that no scalable system exists to identify, contact, and pay millions of content creators.

Content creators lack tools to manage and monetize their rights. AI companies lack systems to discover and license content at scale. Cloudflare's move is a first step. The next step is building content licensing marketplaces that connect creators directly with AI companies.

The Open Web needs economic systems that sustain the people who create its content. There is a unique opportunity here: if content creators and AI companies build these systems together, we could create a stronger, more fair, and more resilient Web than we have had in 25 years. The jury is out on that, but one can dream.

Disclaimer: Acquia, my company, has a commercial relationship with Cloudflare, but this perspective reflects my long-standing views on sustainable web economics, not any recent briefings or partnerships.

June 26, 2025

Sometimes things go your way, and sometimes they just don't. The townhouse we had completely fallen in love with has unfortunately been rented out to someone else. A pity, but we are not giving up. We are continuing our search — and hopefully you can help us!

We are three people who want to share a house together in Ghent. We form a warm, mindful, and respectful living group, and we dream of a place where we can combine calm, connection, and creativity.

Who are we?

šŸ‘¤ Amedee (48): IT professional, balfolk dancer, amateur musician, loves board games and hiking, autistic and socially engaged
šŸ‘© ChloĆ« (almost 52): Artist, former Waldorf teacher and permaculture designer, loves creativity, cooking, and nature
šŸŽØ Kathleen (54): Doodle artist with a socio-cultural background, loves good company, being outdoors, and enjoys writing

Together we want to create a home where communication, care, and freedom are central. A place where you feel at home, and where there is room for small activities such as a game night, a workshop, a creative session, or simply spending quiet time together.

What are we looking for?

šŸ” A house (not an apartment) in Ghent, at most a 15-minute bike ride from Gent-Sint-Pieters station
🌿 Energy-efficient: EPC label B or better
šŸ› At least 3 spacious bedrooms of about 20 m² each
šŸ’¶ Rent:

  • up to €1650/month for 3 bedrooms
  • up to €2200/month for 4 bedrooms

Extra spaces such as an attic, a guest room, a studio, an office, or a hobby room are very welcome. We love airy, multifunctional spaces that can grow along with our needs.

šŸ“… Available: from now, by October at the latest

šŸ’¬ Does the house have 4 bedrooms? Then we would gladly welcome a fourth housemate who shares our values. But we deliberately want to avoid having more than 4 residents — small-scale living works best for us.

Do you know of something? Let us hear from you!

Do you know a house that fits this picture?
We welcome tips via real estate agencies, friends, neighbours, colleagues, or other networks — anything helps!

šŸ“© Contact: amedee@vangasse.eu

Thank you for keeping an eye out with us — and feel free to share šŸ’œ

June 25, 2025

Lately, I've noticed something strange happening in online discussions: the humble em dash (—) is getting side-eyed as a telltale sign that a text was written with a so-called "AI." I prefer the more accurate term: LLM (Large Language Model), because "artificial intelligence" is a bit of a stretch — we're really just dealing with very complicated statistics šŸ¤–šŸ“Š.

Now, I get it — people are on high alert, trying to spot generated content. But I’d like to take a moment to defend this elegant punctuation mark, because I use it often — and deliberately. Not because a machine told me to, but because it helps me think 🧠.

A Typographic Tool, Not a Trend šŸ–‹

The em dash has been around for a long time — longer than most people realize. The oldest printed examples I’ve found are in early 17th-century editions of Shakespeare’s plays, published by the printer Okes in the 1620s. That’s not just a random dash on a page — that’s four hundred years of literary service šŸ“œ. If Shakespeare’s typesetters were using em dashes before indoor plumbing was common, I think it’s safe to say they’re not a 21st-century LLM quirk.

The Tragedy of Othello, the Moor of Venice, with long dashes (typeset here with 3 dashes)

A Dash for Thoughts šŸ’­

In Dutch, the em dash is called a gedachtestreepje — literally, a thought dash. And honestly? I think that’s beautiful. It captures exactly what the em dash does: it opens a little mental window in your sentence. It lets you slip in a side note, a clarification, an emotion, or even a complete detour — just like a sudden thought that needs to be spoken before it disappears. For someone like me, who often thinks in tangents, it’s the perfect punctuation.

Why I Use the Em Dash (And Other Punctuation Marks)

I’m autistic, and that means a few things for how I write. I tend to overshare and infodump — not to dominate the conversation, but to make sure everything is clear. I don’t like ambiguity. I don’t want anyone to walk away confused. So I reach for whatever punctuation tools help me shape my thoughts as precisely as possible:

  • Colons help me present information in a tidy list — like this one.
  • Brackets let me add little clarifications (without disrupting the main sentence).
  • And em dashes — ah, the em dash — they let me open a window mid-sentence to give you extra context, a bit of tone, or a change in pace.

They’re not random. They’re intentional. They reflect how my brain works — and how I try to bridge the gap between thoughts and words šŸŒ‰.

It’s Not Just a Line — It’s a Rhythm šŸŽµ

There’s also something typographically beautiful about the em dash. It’s not a hyphen (-), and it’s not a middling en dash (–). It’s long and confident. It creates space for your eyes and your thoughts. Used well, it gives writing a rhythm that mimics natural speech, especially the kind of speech where someone is passionate about a topic and wants to take you on a detour — just for a moment — before coming back to the main road šŸ›¤.

I’m that someone.

Don’t Let the Bots Scare You

Yes, LLMs tend to use em dashes. So do thoughtful human beings. Let’s not throw centuries of stylistic nuance out the window because a few bots learned how to mimic good writing. Instead of scanning for suspicious punctuation, maybe we should pay more attention to what’s being said — and how intentionally šŸ’¬.

So if you see an em dash in my writing, don’t assume it came from a machine. It came from me — my mind, my style, my history with language. And I’m not going to stop using it just because an algorithm picked up the habit šŸ’›.

June 24, 2025

A glowing light bulb hanging in an underground tunnel.

In my post about digital gardening and public notes, I shared a principle I follow: "If a note can be public, it should be". I also mentioned using Obsidian for note-taking. Since then, various people have asked about my Obsidian setup.

I use Obsidian to collect ideas over time rather than to manage daily tasks or journal. My setup works like a Commonplace book, where you save quotes, thoughts, and notes to return to later. It is also similar to a Zettelkasten, where small, linked notes build deeper understanding.

What makes such note-taking systems valuable is how they help ideas grow and connect. When notes accumulate over time, connections start to emerge. Ideas compound slowly. What starts as scattered thoughts or quotes becomes the foundation for blog posts or projects.

Why plain text matters

One of the things I appreciate most about Obsidian is that it stores notes as plain text Markdown files on my local filesystem.

Plain text files give you full control. I sync them with iCloud, back them up myself, and track changes using Git. You can search them with command-line tools, write scripts to process them outside of Obsidian, or edit them in other applications. Your notes stay portable and usable any way you want.
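
As a small illustration of that portability, here is what working on the notes outside of Obsidian can look like (the vault path, tag, and file name below are placeholders, not my actual setup):

# Plain-text notes are just files, so standard tools work on them directly.
# ~/Notes, the #chess tag, and the note name are placeholder examples.
grep -rl --include='*.md' '#chess' ~/Notes | sort          # which notes mention a tag?
git -C ~/Notes log --oneline -- 'Makers and Takers.md'     # how has one note evolved over time?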

Plus, plain text files have long-term benefits. Note-taking apps come and go, companies fold, subscription models shift. But plain text files remain accessible. If you want your notes to last for decades, they need to be in a format that stays readable, editable, and portable as technology changes. A Markdown file you write today will open just fine in 2050.

All this follows what Obsidian CEO Steph Ango calls the "files over apps" philosophy: your files should outlast the tools that create them. Don't lock your thinking into formats you might not be able to access later.

My tools

Before I dive into how I use Obsidian, it is worth mentioning that I use different tools for different types of thinking. Some people use Obsidian for everything – task management, journaling, notes – but I prefer to separate those.

For daily task management and meeting notes, I rely on my reMarkable Pro. A study titled The Pen Is Mightier Than the Keyboard by Mueller and Oppenheimer found that students who took handwritten notes retained concepts better than those who typed them. Handwriting meeting notes engages deeper cognitive processing than typing, which can improve understanding and memory.

For daily journaling and event tracking, I use a custom iOS app I built myself. I might share more about that another time.

Obsidian is where I grow long-term ideas. It is for collecting insights, connecting thoughts, and building a knowledge base that compounds over time.

How I capture ideas

In Obsidian, I organize my notes around topic pages. Examples are "Coordination challenges in Open Source", "Solar-powered websites", "Open Source startup lessons", or "How to be a good dad".

I have hundreds of these topic pages. I create a new one whenever an idea feels worth tracking.

Each topic page grows slowly over time. I add short summaries, interesting links, relevant quotes, and my own thoughts whenever something relevant comes up. The idea is to build a thoughtful collection of notes that deepens and matures over time.

Some notes stay short and focused. Others grow rich with quotes, links, and personal reflections. As notes evolve, I sometimes split them into more specific topics or consolidate overlapping ones.

I do not schedule formal reviews. Instead, notes come back to me when I search, clip a new idea, or revisit a related topic. A recent thought often leads me to something I saved months or years ago, and may prompt me to reorganize related notes.

Obsidian's core features help these connections deepen. I use tags, backlinks, and the graph view to connect notes and reveal patterns between them.

How I use notes

The biggest challenge with note-taking is not capturing ideas, but actually using them. Most notes get saved and then forgotten.

Some of my blog posts grow directly from these accumulated notes. Makers and Takers, one of my most-read blog posts, pre-dates Obsidian and did not come from this system. But if I write a follow-up, it will. I have a "Makers and Takers" note where relevant quotes and ideas are slowly accumulating.

As my collection of notes grows, certain notes keep bubbling up while others fade into the background. The ones that resurface again and again often signal ideas worth writing about or projects worth pursuing.

What I like about this process is that it turns note-taking into more than just storage. As I've said many times, writing is how I think. Writing pushes me to think, and it is the process I rely on to flesh out ideas. I do not treat my notes as final conclusions, but as ongoing conversations with myself. Sometimes two notes written months apart suddenly connect in a way I had not noticed before.

My plugin setup

Obsidian has a large plugin ecosystem that reminds me of Drupal's. I mostly stick with core plugins, but use the following community ones:

  • Dataview – Think of it as SQL queries for your notes. I use it to generate dynamic lists like TABLE FROM #chess AND #opening AND #black to see all my notes on chess openings for Black. It turns your notes into a queryable database.

  • Kanban – Visual project boards for tracking progress on long-term ideas. I maintain Kanban boards for Acquia, Drupal, improvements to dri.es, and more. Unlike daily task management, these boards capture ideas that evolve over months or years.

  • Linter – Automatically formats my notes: standardizes headings, cleans up spacing, and more. It runs on save, keeping my Markdown clean.

  • Encrypt – Encrypts specific notes with password protection. Useful for sensitive information that I want in my knowledge base but need to keep secure.

  • Pandoc – Exports notes to Word documents, PDFs, HTML, and other formats using Pandoc.

  • Copilot – I'm still testing this, but the idea of chatting with your own knowledge base is compelling. You can also ask AI to help organize notes more effectively.

The Obsidian Web Clipper

The tool I'd actually recommend most isn't a traditional Obsidian plugin: it's the official Obsidian Web Clipper browser extension. I have it installed on my desktop and phone.

When I find something interesting online, I highlight it and clip it directly into Obsidian. This removes friction from the process.

I usually save just a quote or a short section of an article, not the whole article. Some days I save several clips. Other days, I save none at all.

Why this works

For me, Obsidian is not just a note-taking tool. It is a thinking environment. It gives me a place to collect ideas, let them mature, and return to them when the time is right. I do not aim for perfect organization. I aim for a system that feels natural and helps me notice connections I would otherwise miss.

June 22, 2025

OpenTofu

Terraform or OpenTofu (the open-source fork supported by the Linux Foundation) is a nice tool to set up infrastructure on different cloud environments. There is also a provider that supports libvirt.

If you want to get started with OpenTofu, there is a free training available from the Linux Foundation:

I also joined the talk about OpenTofu and Infrastructure as Code in general in the Virtualization and Cloud Infrastructure DEV Room at FOSDEM this year:

I’ll not start to explain ā€œDeclarativeā€ vs ā€œImperativeā€ in this blog post, there’re already enough blog posts or websites that’re (trying) to explain this in more detail (the links above are a good start).

The default behaviour of OpenTofu is not to try to update an existing environment. This makes it usable to create disposable environments.
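
In practice, a disposable environment is just a short loop of standard OpenTofu commands (a generic sketch, not tied to any particular configuration):

tofu init      # download the required providers and modules
tofu apply     # create the virtual machine(s) described in the configuration
# ... use the environment ...
tofu destroy   # throw the whole environment away again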

Tails

Tails is a nice GNU/Linux distribution to connect to the Tor network.

Personally, I'm less into the "privacy" aspect of the Tor network (although being aware that you're tracked and followed is important), probably because I'm lucky to live in the "Free world".

For people who are less lucky (those who live in a country where freedom of speech isn't valued), or for journalists, for example, there are good reasons to use the Tor network and hide their internet traffic.

tails/libvirt Terraform/OpenTofu module

To make it easier to spin up a virtual machine with the latest Tails release, I created a Terraform/OpenTofu module that provisions it on libvirt.

There are security considerations when you run Tails in a virtual machine. See

for more information.

The source code of the module is available at the git repository:

The module is published on the Terraform Registry and the OpenTofu Registry.

Have fun!

June 20, 2025

Have you always wanted to share a home with nice people in a warm, open, and respectful atmosphere? Then this might be something for you.

Together with two friends, I am starting a new cohousing project in Ghent. We have our eye on a beautifully renovated townhouse, and we are looking for a fourth person to share the house with.

The house

It is a spacious townhouse full of character, with energy label B+. It offers:

Four full-sized bedrooms of 18 to 20 m² each

One extra room that we can set up as a guest room, office, or hobby space

Two bathrooms

Two kitchens

An attic with sturdy beams — the creative ideas are already bubbling up!


The location is excellent: on Koning Albertlaan, barely a 5-minute bike ride from Gent-Sint-Pieters station and 7 minutes from the Korenmarkt. The total rent is €2200, which comes to €550 per person with four residents.

The house is already available from 1 July 2025.

Wie zoeken we?

We zoeken iemand die zich herkent in een aantal gedeelde waarden en graag deel uitmaakt van een respectvolle, open en bewuste leefomgeving. Concreet betekent dat voor ons:

Je staat open voor diversiteit in al haar vormen

Je bent respectvol, communicatief en houdt rekening met anderen

Je hebt voeling met thema’s zoals inclusie, mentale gezondheid, en samenleven met aandacht voor elkaar

Je hebt een rustig karakter en draagt graag bij aan een veilige, harmonieuze sfeer in huis

Leeftijd is niet doorslaggevend, maar omdat we zelf allemaal 40+ zijn, zoeken we eerder iemand die zich in die levensfase herkent


Something for you?

Do you feel a click with this story? Or do you have questions and want to get to know us better? Don't hesitate to get in touch via amedee@vangasse.eu.

Not for you, but do you know someone who would fit this picture perfectly? Then please share this call — thank you!

Together we can turn this house into a warm home.

Have you ever dreamed of living in a beautiful house with thoughtful, like-minded people? Then this might be just what you’re looking for.

Together with two friends, I’m starting a new cohousing project in Ghent. We’ve found a recently renovated townhouse full of charm and potential, and we’re looking for a fourth housemate to complete our group.

The house

It’s a spacious, energy-efficient (label B+) townhouse with:

Four full-sized bedrooms (18–20 m² each)

One additional smaller room — perfect as a guest room, office, or hobby space

Two bathrooms

Two kitchens

A large attic with solid beams that’s inspiring all kinds of creative ideas!


šŸ” Location: Koning Albertlaan, Ghent — just a 5-minute bike ride from Gent-Sint-Pieters station and 7 minutes from the city center.

šŸ’¶ Rent: €2200 total, which comes to €550 per person with four people.

šŸ“… Available: From July 1, 2025

Who we’re looking for

We’re hoping to find someone who shares similar values and wants to build a supportive and respectful home together. That means:

You’re open-minded and value diversity

You’re communicative and considerate of others

You’re aware of and respectful toward mental health and neurodiversity

You contribute to a calm and safe living environment

Ideally, you’re not in your twenties — not because you’re not welcome, but because the three of us are all 40+ and tend to have a different life rhythm


Interested?

If this sounds like a good fit for you — or if you’re curious and want to know more — send an email to amedee@vangasse.eu.

Not for you, but know someone who might be interested? Feel free to share this post!

Let’s turn this house into a home — together.

June 18, 2025

I am excited to share some wonderful news—Sibelga and Passwerk have recently published a testimonial about my work, and it has been shared across LinkedIn, Sibelga’s website, and even on YouTube!


What Is This All About?

Passwerk is an organisation that matches talented individuals on the autism spectrum with roles in IT and software testing, creating opportunities based on strengths and precision. I have been working with them as a consultant, currently placed at Sibelga, Brussels’ electricity and gas distribution network operator.

The article and video highlight how being "different" does not have to be a limitation—in fact, it can be a real asset in the right context. It means a lot to me to be seen and appreciated for who I am and the quality of my work.


Why This Matters

For many neurodivergent people, the professional world can be full of challenges that go beyond the work itself. Finding the right environment—one that values accuracy, focus, and dedication—can be transformative.

I am proud to be part of a story that shows what is possible when companies look beyond stereotypes and embrace neurodiversity as a strength.


Thank you to Sibelga, Passwerk, and everyone who contributed to this recognition. It is an honour to be featured, and I hope this story inspires more organisations to open up to diverse talents.

šŸ‘‰ Want to know more? Check out the article or watch the video!

A few years ago, I quietly adopted a small principle that has changed how I think about publishing on my website. It's a principle I've been practicing for a while now, though I don't think I've ever written about it publicly.

The principle is: If a note can be public, it should be.

It sounds simple, but this idea has quietly shaped how I treat my personal website.

I was inspired by three overlapping ideas: digital gardens, personal memexes, and "Today I Learned" entries.

Writers like Tom Critchlow, Maggie Appleton, and Andy Matuschak maintain what they call digital gardens. They showed me that a personal website does not have to be a collection of polished blog posts. It can be a living space where ideas can grow and evolve. Think of it more as an ever-evolving notebook than a finished publication, constantly edited and updated over time.

I also learned from Simon Willison, who publishes small, focused Today I Learned (TIL) entries. They are quick, practical notes that capture a moment of learning. They don't aim to be comprehensive; they simply aim to be useful.

And then there is Cory Doctorow. In 2021, he explained his writing and publishing workflow, which he describes as a kind of personal memex. A memex is a way to record your knowledge and ideas over time. While his memex is not public, I found his approach inspiring.

I try to take a lot of notes. For the past four years, my tool of choice has been Obsidian. It is where I jot things down, think things through, and keep track of what I am learning.

In Obsidian, I maintain a Zettelkasten system. It is a method for connecting ideas and building a network of linked thoughts. It is not just about storing information but about helping ideas grow over time.

At some point, I realized that many of my notes don't contain anything private. If they're useful to me, there is a good chance they might be useful to someone else too. That is when I adopted the principle: If a note can be public, it should be.

So a few years ago, I began publishing these kinds of notes on my site. You might have seen examples like Principles for life, PHPUnit tests for Drupal, Brewing coffee with a moka pot when camping, or Setting up password-free SSH logins.

These pages on my website are not blog posts. They are living notes. I update them as I learn more or come back to the topic. To make that clear, each note begins with a short disclaimer that says what it is. Think of it as a digital notebook entry rather than a polished essay.

Now, I do my best to follow my principle, but I fall short more than I care to admit. I have plenty of notes in Obsidian that could have made it to my website but never did.

Often, it's simply inertia. Moving a note from Obsidian to my Drupal site involves a few steps. While not difficult, these steps consume time I don't always have. I tell myself I'll do it later, and then 'later' often never arrives.

Other times, I hold back because I feel insecure. I am often most excited to write when I am learning something new, but that is also when I know the least. What if I misunderstood something? The voice of doubt can be loud enough to keep a note trapped in Obsidian, never making it to my website.

But I keep pushing myself to share in public. I have been learning in the open and sharing in the open for 25 years, and some of the best things in my life have come from that. So I try to remember: if notes can be public, they should be.

June 11, 2025

A few weeks ago, I set off for Balilas, a balfolk festival in JanzƩ (near Rennes), Brittany (France). I had never been before, but as long as you have dance shoes, a tent, and good company, what more do you need?

Bananas for scale

From Ghent to Brittany… with Two Dutch Strangers

My journey began in Ghent, where I was picked up by Sterre and Michelle, two dancers from the Netherlands. I did not know them too well beforehand, but in the balfolk world, that is hardly unusual — de balfolkcommunity is ƩƩn grote familie — one big family.

We took turns driving, chatting, laughing, and singing along. Google Maps logged our total drive time at 7 hours and 39 minutes.

Google knows everything
PƩage – one of the many

Along the way, we had the perfect soundtrack:
šŸŽ¶ French Road Trip šŸ‡«šŸ‡·šŸ„– — 7 hours and 49 minutes of French and Francophone hits.

https://open.spotify.com/playlist/3jRMHCl6qVmVIqXrASAAmZ?si=746a7f78ca30488a

šŸ• A Tasty Stop in PrĆ©-en-Pail-Saint-Samson

Somewhere around dinner time, we stopped at La Sosta, a cozy Italian restaurant in PrĆ©-en-Pail-Saint-Samson (2,300 inhabitants). I had a pizza normande — base tomate, andouille, pomme, mozzarella, crĆØme, persil (tomato base, andouille sausage, apple, mozzarella, cream, parsley). A delicious and unexpected regional twist — definitely worth remembering!

pizza normande

The pizzas were excellent, but also generously sized — too big to finish in one sitting. Heureusement, ils nous ont proposĆ© d'emballer le reste Ć  emporter (fortunately, they offered to wrap up the leftovers to take away). That was a nice touch — and much appreciated after a long day on the road.

Just too much to eat it all

⛺ Arrival Just Before Dark

We arrived at the Balilas festival site five minutes after sunset, with just enough light left to set up our tents before nightfall. Trugarez d’an heol — thank you, sun, for holding out a little longer.

There were two other cars filled with people coming from the Netherlands, but they had booked a B&B. We chose to camp on-site to soak in the full festival atmosphere.

Enjoy the view!
Banana pancakes!

Balilas itself was magical: days and nights filled with live music, joyful dancing, friendly faces, and the kind of warm atmosphere that defines balfolk festivals.

Photo: Poppy Lens

More info and photos:
🌐 balilas.lesviesdansent.bzh
šŸ“ø @balilas.balfolk on Instagram


Balfolk is more than just dancing. It is about trust, openness, and sharing small adventures with people you barely know—who somehow feel like old friends by the end of the journey.

Tot de volgende — Ć  la prochaine — betek ar blez a zeu!
šŸ•ŗšŸ’ƒ

Thank you MaĆÆ for proofreading the Breton expressions. ā¤

June 09, 2025

Imagine a marketer opening Drupal with a clear goal in mind: launch a campaign for an upcoming event.

They start by uploading a brand kit to Drupal CMS: logos, fonts, and color palette. They define the campaign's audience as mid-sized business owners interested in digital transformation. Then they create a creative guide that outlines the event's goals, key messages, and tone.

With this in place, AI agents within Drupal step in to assist. Drawing from existing content and media, the agents help generate landing pages, each optimized for a specific audience segment. They suggest headlines, refine copy based on the creative guide, create components based on the brand kit, insert a sign-up form, and assemble everything into cohesive, production-ready pages.

Using Drupal's built-in support for the Model Context Protocol (MCP), the AI agents connect to analytics tools and monitor performance. If a page is not converting well, the system makes overnight updates. It might adjust layout, improve clarity, or refine the calls to action.

Every change is tracked. The marketer can review, approve, revert, or adjust anything. They stay in control, even as the system takes on more of the routine work.

Why it matters

AI is changing how websites are built and managed faster than most people expected. The digital experience space is shifting from manual workflows to outcome-driven orchestration. Instead of building everything from scratch, users will set goals, and AI will help deliver results.

This future is not about replacing people. It is about empowering them. It is about freeing up time for creative and strategic work while AI handles the rest. AI will take care of routine tasks, suggest improvements, and respond to real-time feedback. People will remain in control, but supported by powerful new tools that make their work easier and faster.

The path forward won't be perfect. Change is never easy, and there are still many lessons to learn, but standing still isn't an option. If we want AI to head in the right direction, we have to help steer it. We are excited to move fast, but just as committed to doing it thoughtfully and with purpose.

The question is not whether AI will change how we build websites, but how we as a community will shape that change.

A coordinated push forward

Drupal already has a head start in AI. At DrupalCon Barcelona 2024, I showed how Drupal's AI tools help a site creator market wine tours. Since then, we have seen a growing ecosystem of AI modules, active integrations, and a vibrant community pushing boundaries. Today, about 1,000 people are sharing ideas and collaborating in the #ai channel on Drupal Slack.

At DrupalCon Atlanta in March 2025, I shared our latest AI progress. We also brought together key contributors working on AI in Drupal. Our goal was simple: get organized and accelerate progress. After the event, the group committed to align on a shared vision and move forward together.

Since then, this team has been meeting regularly, almost every day. I've been working with the team to help guide the direction. With a lot of hard work behind us, I'm excited to introduce the Drupal AI Initiative.

The Drupal AI Initiative builds on the momentum in our community by bringing structure and shared direction to the work already in progress. By aligning around a common strategy, we can accelerate innovation.

What we're launching today

The Drupal AI Initiative is closely aligned with the broader Drupal CMS strategy, particularly in its focus on making site building both faster and easier. At the same time, this work is not limited to Drupal CMS. It is also intended to benefit people building custom solutions on Drupal Core, as well as those working with alternative distributions of Drupal.

To support this initiative, we are announcing:

  • A clear strategy to guide Drupal's AI vision and priorities (PDF mirror).
  • A Drupal AI leadership team to drive product direction, fundraising, and collaboration across work tracks.
  • A funded delivery team focused on execution, with the equivalent of several full-time roles already committed, including technical leads, UX and project managers, and release coordination.
  • Active work tracks covering areas like AI Core, AI Products, AI Marketing, and AI UX.
  • USD $100,000 in operational funding, contributed by the initiative's founding companies.

For more details, read the full announcement on the Drupal AI Initiative page on Drupal.org.

Founding members and early support

Screenshot of a Google Hangout video call with nine smiling participants, the founding members of the Drupal AI initiative. Some of the founding members of the Drupal AI initiative during our launch call on Google Hangouts.

Over the past few months, we've invested hundreds of hours shaping our AI strategy, defining structure, and taking first steps.

I want to thank the founding members of the Drupal AI Initiative. These individuals and organizations played a key role in getting things off the ground. The list is ordered alphabetically by last name to recognize all contributors equally:

These individuals, along with the companies supporting them, have already contributed significant time, energy, and funding. I am grateful for their early commitment.

I also want to thank the staff at the Drupal Association and the Drupal CMS leadership team for their support and collaboration.

What comes next

I'm glad the Drupal AI Initiative is now underway. The Drupal AI strategy is published, the structure is in place, and multiple work tracks are open and moving forward. We'll share more details and updates in the coming weeks.

With every large initiative, we are evolving how we organize, align, and collaborate. The Drupal AI Initiative builds on that progress. As part of that, we are also exploring more ways to recognize and reward meaningful contributions.

We are creating ways for more of you to get involved with Drupal AI. Whether you are a developer, designer, strategist, or sponsor, there is a place for you in this work. If you're part of an agency, we encourage you to step forward and become a Maker. The more agencies that contribute, the more momentum we build.

Update: In addition to the initiative's founding members, Amazee.io already stepped forward with another commitment of USD $20,000 and one full-time contributor. Thank you! This brings the total operating budget to USD $120,000. Please consider joining as well.

AI is changing how websites and digital experiences are built. This is our moment to be part of the change and help define what comes next.

Join us in the #ai-initiative channel on Drupal Slack to get started.

June 08, 2025

lookat 2.1.0rc1

Lookat 2.1.0rc1 is the latest development release of Lookat/Bekijk, a user-friendly Unix file browser/viewer that supports colored man pages.

The focus of the 2.1.0 release is to add ANSI Color support.


Ā 

News

8 Jun 2025 Lookat 2.1.0rc1 Released

Lookat 2.1.0rc1 is the first release candidate of Lookat 2.1.0.

ChangeLog

Lookat / Bekijk 2.1.0rc1
  • ANSI Color support

Lookat 2.1.0rc1 is available at:

Have fun!

June 04, 2025

In the world of DevOps and continuous integration, automation is essential. One fascinating way to visualize the evolution of a codebase is with Gource, a tool that creates animated tree diagrams of project histories.

Recently, I implemented a GitHub Actions workflow in my ansible-servers repository to automatically generate and deploy Gource visualizations. In this post, I will walk you through how the workflow is set up and what it does.

But first, let us take a quick look back…


šŸ•° Back in 2013: Visualizing Repos with Bash and XVFB

More than a decade ago, I published a blog post about Gource (in Dutch) where I described a manual workflow using Bash scripts. At that time, I ran Gource headlessly using xvfb-run, piped its output through pv, and passed it to ffmpeg to create a video.

It looked something like this:

#!/bin/bash -ex

# Render the repository history with gource inside a virtual framebuffer
# (no display attached) and pipe the raw frames through pv into ffmpeg.
xvfb-run -a -s "-screen 0 1280x720x24" \
  gource \
    --seconds-per-day 1 \
    --auto-skip-seconds 1 \
    --file-idle-time 0 \
    --max-file-lag 1 \
    --key \
    -1280x720 \
    -r 30 \
    -o - \
  | pv -cW \
  | ffmpeg \
    -loglevel warning \
    -y \
    -b:v 3000K \
    -r 30 \
    -f image2pipe \
    -vcodec ppm \
    -i - \
    -vcodec libx264 \
    -preset ultrafast \
    -pix_fmt yuv420p \
    -crf 1 \
    -threads 0 \
    -bf 0 \
    ../gource.mp4

This setup worked well for its time and could even be automated via cron or a Git hook. However, it required a graphical environment workaround and quite a bit of shell-fu.


🧬 From Shell Scripts to GitHub Actions

Fast forward to today, and things are much more elegant. The modern Gource workflow lives in .github/workflows/gource.yml and is:

  • šŸ” Reusable through workflow_call
  • šŸ”˜ Manually triggerable via workflow_dispatch
  • šŸ“¦ Integrated into a larger CI/CD pipeline (pipeline.yml)
  • ☁ Cloud-native, with video output stored on S3

Instead of bash scripts and virtual framebuffers, I now use a well-structured GitHub Actions workflow with clear job separation, artifact management, and summary reporting.


šŸš€ What the New Workflow Does

The GitHub Actions workflow handles everything automatically:

  1. ā± Decides if a new Gource video should be generated, based on time since the last successful run.
  2. šŸ“½ Generates a Gource animation and a looping thumbnail GIF.
  3. ☁ Uploads the files to an AWS S3 bucket.
  4. šŸ“ Posts a clean summary with links, preview, and commit info.

It supports two triggers:

  • workflow_dispatch (manual run from the GitHub UI)
  • workflow_call (invoked from other workflows like pipeline.yml)

You can specify how frequently it should run with the skip_interval_hours input (default is every 24 hours).


šŸ” Smart Checks Before Running

To avoid unnecessary work, the workflow first checks:

  • If the workflow file itself was changed.
  • When the last successful run occurred.
  • Whether the defined interval has passed.

Only if those conditions are met does it proceed to the generation step.
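
Conceptually, that interval check boils down to comparing timestamps. A stripped-down shell version could look like this (the timestamp file is a placeholder; the actual workflow gets this information from GitHub Actions itself):

# Skip the expensive Gource render if the last successful run is recent enough.
SKIP_INTERVAL_HOURS=24                                    # mirrors the skip_interval_hours input
last_run=$(cat .last-gource-run 2>/dev/null || echo 0)    # epoch seconds of the last successful run
hours_since=$(( ($(date +%s) - last_run) / 3600 ))

if [ "$hours_since" -lt "$SKIP_INTERVAL_HOURS" ]; then
  echo "Last Gource run was ${hours_since}h ago; skipping."
  exit 0
fi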


šŸ›  Building the Visualization

🧾 Step-by-step:

  1. Checkout the Repo
    Uses actions/checkout with fetch-depth: 0 to ensure full commit history.
  2. Generate Gource Video
    Uses nbprojekt/gource-action with configuration for avatars, title, and resolution.
  3. Install FFmpeg
    Uses AnimMouse/setup-ffmpeg to enable video and image processing.
  4. Create a Thumbnail
    Extracts preview frames and assembles a looping GIF for visual summaries (see the sketch after this list).
  5. Upload Artifacts
    Uses actions/upload-artifact to store files for downstream use.
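
Step 4 can be approximated with a single ffmpeg invocation. This is a rough equivalent, not the exact command the workflow uses:

# Sample a few frames from the rendered video and turn them into a small,
# endlessly looping GIF (file names are placeholders).
ffmpeg -i gource.mp4 -t 10 -vf "fps=2,scale=320:-1:flags=lanczos" -loop 0 gource-thumbnail.gif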

☁ Uploading to AWS S3

In a second job:

  • AWS credentials are securely configured via aws-actions/configure-aws-credentials.
  • Files are uploaded using a commit-specific path.
  • Symlinks (gource-latest.mp4, gource-latest.gif) are updated to always point to the latest version (see the sketch below).
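
The upload itself comes down to a few aws s3 cp calls, roughly like this (the bucket name and key layout are assumptions, not the repository's actual configuration):

# Store the renders under a commit-specific prefix...
aws s3 cp gource.mp4 "s3://example-bucket/gource/${GITHUB_SHA}/gource.mp4"
aws s3 cp gource-thumbnail.gif "s3://example-bucket/gource/${GITHUB_SHA}/gource.gif"
# ...then refresh the stable "latest" keys so existing links keep working.
aws s3 cp "s3://example-bucket/gource/${GITHUB_SHA}/gource.mp4" s3://example-bucket/gource/gource-latest.mp4
aws s3 cp "s3://example-bucket/gource/${GITHUB_SHA}/gource.gif" s3://example-bucket/gource/gource-latest.gif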

šŸ“„ A Clean Summary for Humans

At the end, a GitHub Actions summary is generated, which includes:

  • A thumbnail preview
  • A direct link to the full video
  • Video file size
  • Commit metadata

This gives collaborators a quick overview, right in the Actions tab.


šŸ” Why This Matters

Compared to the 2013 setup:

2013 Bash Script | 2025 GitHub Actions Workflow
Manual setup via shell | Fully automated in CI/CD
Local only | Cloud-native with AWS S3
Xvfb workaround required | Headless and clean execution
Script needs maintenance | Modular, reusable, and versioned
No summaries | Markdown summary with links and preview

Automation has come a long way — and this workflow is a testament to that progress.


āœ… Final Thoughts

This Gource workflow is now a seamless part of my GitHub pipeline. It generates beautiful animations, hosts them reliably, and presents the results with minimal fuss. Whether triggered manually or automatically from a central workflow, it helps tell the story of a repository in a way that is both informative and visually engaging. šŸ“ŠāœØ

Would you like help setting this up in your own project? Let me know — I am happy to share.

May 28, 2025

This spring was filled with music, learning, and connection. I had the opportunity to participate in three wonderful music courses, each offering something unique—new styles, deeper technique, and a strong sense of community. Here is a look back at these inspiring experiences.


šŸŽ¶ 1. Fiddlers on the Move – Ghent (5–9 March)

Photo: Filip Verpoest

In early March, I joined Fiddlers on the Move in Ghent, a five-day course packed with workshops led by musicians from all over the world. Although I play the nyckelharpa, I deliberately chose workshops that were not nyckelharpa-specific. This gave me the challenge and joy of translating techniques from other string traditions to my instrument.

Here is a glimpse of the week:

  • Wednesday: Fiddle singing with Laura Cortese – singing while playing was new for me, and surprisingly fun.
  • Thursday: Klezmer violin / Fiddlers down the roof with Amit Weisberger – beautiful melodies and ornamentation with plenty of character.
  • Friday: Arabic music with Layth Sidiq – an introduction to maqams and rhythmic patterns that stretched my ears in the best way.
  • Saturday: Swedish violin jamsession classics with Mia Marine – a familiar style, but always a joy with Mia’s energy and musicality.
  • Sunday: Live looping strings with Joris Vanvinckenroye – playful creativity with loops, layering, and rhythm.

Each day brought something different, and I came home with a head full of ideas and melodies to explore further.


šŸŖ— 2. Workshopweekend Stichting Draailier & Doedelzak – Sint-Michielsgestel, NL (18–21 April)

Photo: Arne de Laat

In mid-April, I traveled to Sint-Michielsgestel in the Netherlands for the annual Workshopweekend organized by Stichting Draailier & Doedelzak. This year marked the foundation’s 40th anniversary, and the event was extended to four days, from Friday evening to Monday afternoon, at the beautiful location of De Zonnewende.

I joined the nyckelharpa workshop with Rasmus Brinck. One of the central themes we explored was the connection between playing and dancing polska—a topic close to my heart. I consider myself a dancer first and a musician second, so it was especially meaningful to deepen the musical understanding of how movement and melody shape one another.

The weekend offered a rich variety of other workshops as well, including hurdy-gurdy, bagpipes, diatonic accordion, singing, and ensemble playing. As always, the atmosphere was warm and welcoming. With structured workshops during the day and informal jam sessions, concerts, and bals in the evenings, it was a perfect blend of learning and celebration.


šŸ‡øšŸ‡Ŗ 3. Swedish Music for Strings – Ronse (2–4 May)

At the beginning of May, I took part in a three-day course in Ronse dedicated to Swedish string music. Although we could arrive on 1 May, teaching started the next day. The course was led by David Eriksson and organized by Amate Galli. About 20 musicians participated—two violinists, one cellist, and the rest of us on nyckelharpa.

The focus was on capturing the subtle groove and phrasing that make Swedish folk music so distinctive. It was a joy to be surrounded by such a rich soundscape and to play in harmony with others who share the same passion. The music stayed with me long after the course ended.


✨ Final Thoughts

Each of these courses gave me something different: new musical perspectives, renewed technical focus, and most importantly, the joy of making music with others. I am deeply grateful to all the teachers, organizers, and fellow participants who made these experiences so rewarding. I am already looking forward to the next musical adventure!

May 27, 2025

Four months ago, I tested 10 local vision LLMs and compared them against the top cloud models. Vision models can analyze images and describe their content, making them useful for alt-text generation.

The result? The local models missed important details or introduced hallucinations. So I switched to using cloud models, which produced better results but meant sacrificing privacy and offline capability.

Two weeks ago, Ollama released version 0.7.0 with improved support for vision models. They added support for three vision models I hadn't tested yet: Mistral 3.1, Qwen 2.5 VL and Gemma 3.

I decided to evaluate these models to see whether they've caught up to GPT-4 and Claude 3.5 in quality. Can local models now generate accurate and reliable alt-text?

Model | Provider | Release date | Model size
Gemma 3 (27B) | Google DeepMind | March 2025 | 27B
Qwen 2.5 VL (32B) | Alibaba | March 2025 | 32B
Mistral 3.1 (24B) | Mistral AI | March 2025 | 24B

Updating my alt-text script

For my earlier experiments, I created an open-source script that generates alt-text descriptions. The script is a Python wrapper around Simon Willison's llm tool, which provides a unified interface to LLMs. It supports models from Ollama, Hugging Face and various cloud providers.

To test the new models, I added 3 new entries to my script's models.yaml, which defines each model's prompt, temperature, and token settings. Once configured, generating alt-text is simple. Here is an example using the three new vision models:

$ ./caption.py test-images/image-1.jpg --model mistral-3.1-24b gemma3-27b qwen2.5vl-32b

Which outputs something like:

{
  "image": "test-images/image-1.jpg",
  "captions": {
    "mistral-3.1-24b": "A bustling intersection at night filled with pedestrians crossing in all directions.",
    "gemma3-27b": "A high-angle view shows a crowded Tokyo street filled with pedestrians and brightly lit advertising billboards at night.",
    "qwen2.5vl-32b": "A bustling city intersection at night, crowded with people crossing the street, surrounded by tall buildings with bright, colorful billboards and advertisements."
  }
}
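
To caption a whole directory in one go, a simple shell loop around the script does the job (the output file name is just an example):

# Hypothetical batch run: caption every test image with the same three
# local models and collect the output in one file for review.
for img in test-images/*.jpg; do
  ./caption.py "$img" --model mistral-3.1-24b gemma3-27b qwen2.5vl-32b
done > captions.txt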

Evaluating the models

To keep the results consistent, I used the same test images and the same evaluation method as in my earlier blog post. The detailed results are in this Google spreadsheet.

Each alt-text was scored from 0 to 5 based on three criteria: how well it identified the most important elements in the image, how effectively it captured the mood or atmosphere, and whether it avoided repetition, grammar issues or hallucinated details. I then converted each score into a letter grade from A to F.

For comparison, the cloud models received the following scores: GPT-4o earned an average of 4.8 out of 5 (grade A), and Claude 3.5 Sonnet received a perfect 5 out of 5 (grade A).

Test image 1: Shibuya Crossing, Tokyo

Aerial view of an intersection, capturing the evening commute with pedestrians, traffic and electronic billboards.
Model | Description | Grade
Mistral 3.1 (24B) | A bustling intersection at night filled with pedestrians crossing in all directions. | C
Gemma 3 (27B) | A high-angle view shows a crowded Tokyo street filled with pedestrians and brightly lit advertising billboards. | B
Qwen 2.5 VL (32B) | A bustling city intersection at night, crowded with people crossing the street, surrounded by brightly lit billboards and tall buildings. | B

None earned an A because they failed to identify this as Shibuya Crossing in Tokyo, which the cloud models did. Mistral also missed the billboards.

Test image 2: Isabella Stewart Gardner Museum, Boston

An empty picture frame in a room with ornate wallpaper, candles, old paintings, and antique furniture.
| Model | Description | Grade |
|---|---|---|
| Mistral 3.1 (24B) | An ornate wall features a large empty picture frame. | B |
| Gemma 3 (27B) | An empty, ornate gold frame hangs on a patterned green wall between two framed portraits and a candle sconce. | A |
| Qwen 2.5 VL (32B) | A vintage-style room features ornate wallpaper, a framed empty canvas, a lit candelabra, and a decorative vase on a table, with portraits on either side. | A |

The vision models in my previous post often mistook the empty frame for a framed painting. All three models in this test correctly identified it as empty. Gemma and Qwen captured valuable details about the scene, while Mistral's description felt sparse.

Test image 3: wakeboarding in Vermont, USA

Two men in swim shorts on the back of a boat watching another person wakeboarding behind the boat.
| Model | Description | Grade |
|---|---|---|
| Mistral 3.1 (24B) | Two shirtless men on a boat watch another person water skiing on a lake. | B |
| Gemma 3 (27B) | Two people on a boat watch a waterskier speeding across the lake on a sunny day. | B |
| Qwen 2.5 VL (32B) | Two shirtless men on a boat watch a person water skiing in the distance on a calm lake. | B |

All three described a wakeboarding scene as "water skiing", while the cloud models correctly identified it as wakeboarding.

Test image 4: hiking in the Dolomites, Italy

Santa Maddalena church in Funes.
| Model | Description | Grade |
|---|---|---|
| Mistral 3.1 (24B) | A wooden statue of a saint is mounted on a post with directional signs pointing to various locations. | C |
| Gemma 3 (27B) | A small wooden shrine with a statue of Mary stands beside a signpost indicating hiking trails in a grassy field. | B |
| Qwen 2.5 VL (32B) | A wooden shrine with a statue of a figure stands on a tree stump, surrounded by a scenic mountain landscape with directional signs in the foreground. | B |

Only Gemma recognized the statue as Mary. Both Mistral and Gemma missed the mountains in the background, which seems like an important omission.

Test image 5: backgammon by candlelight

A backgammon board on a wooden table, accompanied by candles that cast a warm glow.
| Model | Description | Grade |
|---|---|---|
| Mistral 3.1 (24B) | A lit candle and a glass of liquid are on a wooden table next to a wooden board game. | B |
| Gemma 3 (27B) | A lit candle and glass votive sit on a wooden table, creating a warm, inviting glow in a dimly lit space. | B |
| Qwen 2.5 VL (32B) | A cozy scene with a lit candle on a wooden table, next to a backgammon board and a glass of liquid, creating a warm and inviting atmosphere. | A |

Neither Mistral nor Gemma recognized the backgammon board. Only Qwen identified it correctly. Mistral also failed to capture the photo's mood.

Model accuracy

| Model | Repetitions | Hallucinations | Mood | Average score | Grade |
|---|---|---|---|---|---|
| Mistral 3.1 (24B) | Never | Never | Fair | 3.4/5 | C |
| Gemma 3 (27B) | Never | Never | Good | 4.2/5 | B |
| Qwen 2.5 VL (32B) | Never | Never | Good | 4.4/5 | B |

Qwen 2.5 VL performed best overall, with Gemma 3 not far behind.

Needless to say, these results are based on a small set of test images. And while I used a structured scoring system, the evaluation still involves subjective judgment. This is not a definitive ranking, but it's enough to draw some conclusions.

It was nice to see that all three LLMs avoided repetition and hallucinations, and generally captured the mood of the images.

Local models still make mistakes. All three described wakeboarding as "water skiing", and most failed to recognize the statue as Mary or to place the intersection in Japan. Cloud models get these details right, as I showed in my previous blog post.

Conclusion

I ran my original experiment four months ago, and at the time, none of the models I tested felt accurate enough for large-scale alt-text generation. Some, like Llama 3, showed promise but still fell short in overall quality.

Newer models like Qwen 2.5 VL and Gemma 3 have matched the performance I saw earlier with Llama 3. Both performed well in my latest test. They produced relevant, grounded descriptions without hallucinations or repetition, which earlier local models often struggled with.

Still, the quality is not yet at the level where I would trust these models to generate thousands of alt-texts without human review. They make more mistakes than GPT-4 or Claude 3.5.

My main question was: are local models now good enough for practical use? While Qwen 2.5 VL performed best overall, it still needs human review. I've started using it for small batches where manual checking is manageable. For large-scale, fully automated use, I continue using cloud models as they remain the most reliable option.

That said, local vision-language models continue to improve. My long-term goal is to return to a 100% local-first workflow that gives me more control and keeps my data private. While we're not there yet, these results show real progress.

My plan is to wait for the next generation of local vision models (or upgrade my hardware to run larger models). When those become available, I'll test them and report back.

May 23, 2025

Reducing the digital clutter of chats

I hate modern chats. They presuppose we are always online, always available to chat. They force us to see and think about them each time we get our eyes on one of our devices. Unlike mailboxes, they are never empty. We can’t even easily search through old messages (unlike the chat providers themselves, which use the logs to learn more about us). Chats are the epitome of the business idiot: they make you always busy but prevent you from thinking and achieving anything.

It is quite astonishing to realise that modern chat systems use 100 or 1000 times more resources (in size and computing power) than 30 years ago, that they are less convenient (no custom client, no search) and that they work against us (centralisation, surveillance, ads). But, yay, custom emojis!

Do not get me wrong: chats are useful! When you need an immediate interaction or a quick on-the-go message, chats are the best.

I needed to keep being able to chat while keeping the digital clutter to a minimum and preserving my own sanity. That’s how I came up with the following rules.

Rule 1: One chat to rule them all

One of the biggest problems of centralised chats is that you must be on many of them. I decided to make Signal my main chat and to remove others.

Signal was, for me, a good compromise: it respects my privacy, is open source and ad-free, while still having enough traction that I could convince others to join it.

Yes, Signal is centralised and has drawbacks like relying on some Google layers (which I worked around by using Molly-FOSS). I simply do not see XMPP, Matrix or SimpleX becoming popular enough in the short term. Wire and Threema had no advantages over Signal. I could not morally justify using Whatsapp or Telegram.

In 2022, when I decided to make Signal my main chat, I deleted all my accounts except Signal and Whatsapp and disabled every notification from Whatsapp, forcing myself to open it once a week to see if I had missed something important. People who really wanted to reach me quickly understood that it was better to use Signal. This worked so well that I forgot to open Whatsapp for a whole month, which was enough for Whatsapp to decide that my account was no longer active.

Not having Whatsapp is probably the best thing that has happened to me regarding chats. Suddenly, I was out of tens or hundreds of group chats. Yes, I missed lots of stuff. But, most importantly, I stopped fearing that I was missing it. Seriously, I have never missed having Whatsapp. Not once. Thanks, Meta, for removing my account!

While travelling in Europe, it is now standard for taxis and hotels to chat with you using Whatsapp. Not anymore for me. Guess what? It works just fine. In fact, I suspect it works even better, because people are forced either to do what we agreed during our call or to call me, which requires more energy and planning.

Rule 2: Mute, mute, mute!

Now that Signal is becoming more popular, some group chats are migrating to it. But I’ve learned my lesson: I’m muting them. This allows me to only see the messages when I really want to look at them. Don’t hesitate to mute vocal group chats and people with whom you don’t need day-to-day interaction.

I’m also leaving group chats that are not essential. The Whatsapp deletion taught me that nearly no group chat is truly essential.

Many times, I’ve had people sending me emails about what was told on a group chat because they knew I was not there. Had I been on that group, I would probably have missed the messages but nobody would have cared. If you really want to get in touch with me, send me an email!

Rule 3: No read receipts nor typing indicators

I was busy, walking down the street with my phone in hand for directions. A notification popped up with an important message. It was important but not urgent. I could not deal with it at that moment; I wanted to take my time. One part of my brain told me not to open the message because, if I did, the sender would see a "read receipt". He would see that I had read the message but would not receive any answer.

For him, that would probably translate into "he doesn’t care". I consciously avoided opening Signal until I was back home and could deal with the message.

That’s when I realised how invasive the "read receipt" was. I disabled it and never regretted that move. I read messages on my own schedule and reply when I want to. Nobody needs to know whether I’ve seen a message. Read receipts are wrong in every way.

Signal preferences showing read receipts and typing indicator disabled

Rule 4: Temporary discussions only

The artist Bruno Leyval, who did the awesome cover of my novel Bikepunk, is obsessed with deletion and disappearance. He set our Signal chat so that every message is deleted after a day. At first, I didn’t see the point.

Until I understood that this was not only about privacy, it also was about decluttering our mind, our memories.

Since then, I’ve set every chat in Signal to delete messages after one week.

Signal preferences showing disappearing messages set to one week

This might seem like nothing, but it changes everything. Suddenly, chats are no longer a long history of clutter. Suddenly, you treat chats as transient and save the things you want to keep. Remember that you can’t search in chats? That means chats are transient anyway. With most chats, your history is not saved and could be lost simply by dropping your phone on the floor. Is there something important sitting in a chat? Save it! But it should probably have been an email.

Embracing the transient nature of chat, and making it explicit, greatly reduces the clutter.

Conclusion

I know that most of you will say "That’s nice, Ploum, but I can’t do that because everybody is on XXX", where XXX is most often Whatsapp in my own circles. But this is wrong: you believe everybody is on XXX because you yourself use XXX as your main chat. When surveying my students this year, I discovered that nearly half of them were not on Whatsapp. Not for any principled reason, but because they never saw the need for it. In fact, they were spread over Messenger, Instagram, Snap, Whatsapp, Telegram and Discord. And they all believed that "everybody is where I am".

In the end, the only real choice to make is between being able to get immediately in touch with a lot of people or having room for your mental space. I choose the latter, you might prefer the former. That’s fine!

I still don’t like chat. I’m well aware that the centralised nature of Signal makes it a short-term solution. But I’m not looking for the best sustainable chat. I just want fewer chats in my life.

If you want to get in touch, send me an email!

I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

May 21, 2025

During the last MySQL & HeatWave Summit, Wim Coekaerts announced that a new optimizer is available and is already enabled in MySQL HeatWave. Let’s have a quick look at it and how to use it. The first step is to verify that Hypergraph is available: The statement won’t return any error if the Hypergraph Optimizer […]

Maintaining documentation for Ansible roles can be a tedious and easily neglected task. As roles evolve, variable names change, and new tasks are added, it is easy for the README.md files to fall out of sync. To prevent this and keep documentation continuously up to date, I wrote a GitHub Actions workflow that automatically generates and formats documentation for all Ansible roles in my repository. Even better: it writes its own commit messages using AI.

Let me walk you through why I created this workflow, how it works, and what problems it solves.


šŸ¤” Why Automate Role Documentation?

Ansible roles are modular, reusable components. Good roles include well-structured documentation—at the very least, variable descriptions, usage examples, and explanations of defaults.

However, writing documentation manually introduces several problems:

  • Inconsistency: Humans forget things. Updates to a role do not always get mirrored in its documentation.
  • Wasted time: Writing boilerplate documentation by hand is inefficient.
  • Error-prone: Manually copying variable names and descriptions invites typos and outdated content.

Enter ansible-doctor: a tool that analyzes roles and generates structured documentation automatically. Once I had that, it made perfect sense to automate its execution using GitHub Actions.


āš™ How the Workflow Works

Here is the high-level overview of what the workflow does:

  1. Triggers:
    • It can be run manually via workflow_dispatch.
    • It is also designed to be reusable in other workflows via workflow_call.
  2. Concurrency and Permissions:
    • Uses concurrency to ensure that only one documentation run per branch is active at a time.
    • Grants minimal permissions needed to write to the repository and generate OIDC tokens.
  3. Steps:
    • āœ… Check out the code.
    • šŸ Set up Python and install ansible-doctor.
    • šŸ“„ Generate documentation with ansible-doctor --recursive roles.
    • 🧼 Format the resulting Markdown using Prettier to ensure consistency.
    • šŸ¤– Configure Git with a bot identity.
    • šŸ” Detect whether any .md files changed.
    • 🧠 Generate a commit message using AI, powered by OpenRouter.ai and a small open-source model (mistralai/devstral-small:free).
    • šŸ’¾ Commit and push the changes if there are any.
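
Stripped to its essentials, such a workflow might look roughly like the sketch below. The step names, action versions, globs and commit logic are assumptions based on the list above, not a copy of the actual file:

name: Generate Ansible role docs

on:
  workflow_dispatch:
  workflow_call:

concurrency:
  group: docs-${{ github.ref }}   # one documentation run per branch
  cancel-in-progress: true

permissions:
  contents: write                 # push the regenerated README files
  id-token: write                 # OIDC token, as mentioned above

jobs:
  docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Generate documentation
        run: |
          pip install ansible-doctor
          ansible-doctor --recursive roles
      - name: Format Markdown
        run: npx prettier --write "roles/**/*.md"
      - name: Commit and push changes (if any)
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add -A
          git diff --cached --quiet || {
            git commit -m "📝 Update Ansible role documentation"
            git push
          }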

🧠 AI-Generated Commit Messages

Why use AI for commit messages?

  • I want my commits to be meaningful, concise, and nicely formatted.
  • The AI model is given a diff of the staged Markdown changes (up to 3000 characters) and asked to:
    • Keep it under 72 characters.
    • Start with an emoji.
    • Summarize the nature of the documentation update.

This is a small but elegant example of how LLMs can reduce repetitive work and make commits cleaner and more expressive.

Fallbacks are in place: if the AI fails to generate a message, the workflow defaults to a generic 📝 Update Ansible role documentation.
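
Under the hood, the AI step boils down to a single API call. Here is a minimal shell sketch, assuming OpenRouter's OpenAI-compatible chat completions endpoint and the model named above; the real workflow's prompt wording and error handling differ:

# Build a prompt from the staged Markdown diff, truncated to roughly 3000 characters.
DIFF=$(git diff --cached -- '*.md' | head -c 3000)

# Ask OpenRouter for a short, emoji-prefixed commit message.
MSG=$(curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$(jq -n --arg diff "$DIFF" '{
        model: "mistralai/devstral-small:free",
        messages: [{role: "user", content:
          ("Write a git commit message under 72 characters, starting with an emoji, summarizing this documentation diff:\n" + $diff)}]
      }')" | jq -r '.choices[0].message.content')

# Fall back to a generic message if the API returned nothing usable.
if [ -z "$MSG" ] || [ "$MSG" = "null" ]; then
  MSG="📝 Update Ansible role documentation"
fi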


šŸŒ A Universal Pattern for Automated Docs

Although this workflow is focused on Ansible, the underlying pattern is not specific to Ansible at all. You can apply the same approach to any programming language or ecosystem that supports documentation generation based on inline annotations, comments, or code structure.

The general steps are:

  1. Write documentation annotations in your code (e.g. JSDoc, Doxygen, Python docstrings, Rust doc comments, etc.).
  2. Run a documentation generator, such as ansible-doctor for Ansible roles, Sphinx for Python, or Doxygen for C and C++.
  3. Generate a commit message from the diff using an LLM.
  4. Commit and push the updated documentation.

This automation pattern works best in projects where:

  • Documentation is stored in version control.
  • Changes to documentation should be traceable.
  • Developers want to reduce the overhead of writing and committing docs manually.

šŸ” A Note on OpenRouter API Keys

The AI step relies on OpenRouter.ai to provide access to language models. To keep your API key secure, it is passed via secrets.OPENROUTER_API_KEY, which is required when calling this workflow. I recommend generating a dedicated, rate-limited key for GitHub Actions use.


🧪 Try It Yourself

If you are working with Ansible roles—or any codebase with structured documentation—and want to keep your docs fresh and AI-assisted, this workflow might be useful for you too. Feel free to copy and adapt it for your own projects. You can find the full source in my GitHub repository.

Let the robots do the boring work, so you can focus on writing better code.


šŸ’¬ Feedback?

If you have ideas to improve this workflow or want to share your own automation tricks, feel free to leave a comment or reach out on Mastodon: @amedee@lou.lt.

Happy automating!

May 19, 2025

MySQL Enterprise Monitor, aka MEM, retired in January 2025, after almost 20 years of exemplary service! What’s next? Of course, plenty of alternatives exist, open source, proprietary, and on the cloud. For MySQL customers, we provide two alternatives: This post focuses on the latter, as there is no apparent reason to deploy an Oracle Database […]

We landed late in Portland, slept off the jet lag, and had breakfast at a small coffee shop. The owner of the coffee shop challenged me to a game of chess. Any other day, I would have accepted. But after a year of waiting, not even Magnus Carlsen himself could delay us from meeting our van.

After more than a year of planning, designing, shipping gear, and running through every what-if, we were finally in Portland to pick up our new van. It felt surreal to be handed the keys after all that waiting.

The plan: a one-week "shakedown trip", testing our new van while exploring Oregon. We'd stay close to the builder for backup, yet sleep by the ocean, explore vineyards and hike desert trails.

Day 1–2: ocean air in Pacific City

After picking up the van and completing our onboarding, we drove just two hours west to Pacific City on the Oregon coast. I was nervous at first since I'd never driven anything this big, but it felt manageable within minutes.

For the first two nights, we stayed at Hart's Camp, also known as Cape Kiwanda RV Park. The campground had full hookups, a small grocery store, and a brewery just across the road. While the RV park felt more like a parking lot than a forest, it was exactly what we needed. Something easy and practical. A soft landing.

Two people carrying surfboards walk toward the beach with a large rock formation in the water. Camping in Pacific City, a small surfer town on the Oregon coast. The main landmark is Haystack Rock, also known as Chief Kiawanda. It looks like a gorilla's head with a rat tail. Once you see it, you can't unsee it.

That night, we cracked the windows and slept with the ocean air drifting in. People always ask how someone six foot four (or almost 2 meters) manages to sleep in a van. The truth? I slept better than I do in most hotel beds.

We spent most of the next day settling in. We organized, unpacked the gear we had shipped ahead, figured out what all the switches did, cleaned the dishes, and debated what to put in which cabinets.

Day 3–4: Wine tasting in Willamette Valley

From the coast, we headed inland into Oregon wine country. The drive to Champoeg State Park took about an hour and a half, winding through farmland and small towns.

Champoeg is a beautifully laid-out campground, quiet and easy to navigate, with wide sites and excellent showers. The surrounding area is a patchwork of vineyards, barns and backroads.

We visited several vineyards in the Willamette Valley. The Four Graces was our favorite. Their 2021 Windborn Pinot Noir stood out, and the inn next door looked tempting for a future non-camping trip. We also visited Ken Wright, Lemelson, and Dominio IV. We ended up buying 15 bottles of wine. One of the perks of van life: your wine cellar travels with you.

That evening, Vanessa cooked salmon in red curry, outside in the rain, over a wood fire. We ate inside the van, warm and dry, with music playing. We opened a 2023 Still Life Viognier from Dominio IV that we had liked so much during the tasting. There was something magical about being cozy in a small space while rain drums overhead.

Person stretching and smiling next to a laundry cart in front of dryers at a laundromat. Laundry day in a small town in Oregon. We're camping in a van and stopped to wash some of our clothes. Life on the road!

The next morning, we drove to McMinnville to do laundry. Laundry isn't exactly a bucket-list item for me, but Vanessa found it oddly satisfying. There was a coffee kiosk nearby, so we walked through the drive-thru on foot, standing between cars to order lattes while our clothes spun.

Day 5: Smith Rock and steep trails

Person sitting in the doorway of a parked camper van at a quiet, remote campground. Arrived at Skull Hollow Campground. No power or water, but it's quiet and the sky is full of stars. It feels like we're exactly where we need to be.

We left wine country behind and headed into Central Oregon, crossing over the mountains on a three-hour drive that was scenic the entire way. Pine forests, wide rivers and lakes came and went as we climbed and descended. We passed through small towns and stretches of open road that felt far from anywhere.

I kept wanting to stop to take photos, but didn't want to interrupt the rhythm of the road. Instead, we made mental notes about places to come back to someday.

That night we stayed at Skull Hollow Campground, a basic campground without hookups. Our first real test of off-grid capability. This was deliberate. We wanted to know how our solar panels and batteries would handle a full night of heating, and whether our water supply would last without refilling.

It was cold, down to 37°F, but the van handled it well. We kept the heat at 60°F and slept soundly under a heavy duvet. When we woke up, the solar panels were already soaking up sunlight.

Tall rock cliffs surround a winding river and hiking trail at Smith Rock State Park in Oregon. We hiked the Misery Ridge and River Trail in Smith Rock State Park. The steep switchbacks and rocky terrain made it a tough climb, but the panoramic views were worth it.

Smith Rock State Park was just fifteen minutes away. Vanessa wanted to drive the van somewhere more rural, and this was the perfect chance. She handled the van like a boss.

Day 6: Bend and our first HipCamp

We continued to Bend, just forty minutes from Skull Hollow. Bend is a small city in Central Oregon known for its outdoor lifestyle. We resupplied, filled our water tank and stopped by REI (think "camping supermarket").

We had dinner at Wild Rose, a Northern Thai restaurant that had been nominated for a James Beard award. The food was excellent. The service was not.

That night we stayed at our first HipCamp, a campsite on a working ranch with cattle just outside of Bend. A lone bull stood at the entrance, watching us with mild interest. We followed the long gravel driveway past grazing cows, getting our first real taste of ranch life.

Old rusted Ford truck parked on dirt, with faded paint and wooden flatbed in the back. Our van's neighbor for the night was a beautifully rusted Ford. Retired but still stealing the spotlight. Rusty steering wheel and exposed seat springs inside an old, abandoned truck with cracked windows. I can't help but wonder what stories this truck could tell.

The setup was simple: a grassy parking area, a 30-amp electrical outlet, a metal trash bin and a water pump with well water. We shared the space with an old Ford that had clearly been there longer than we planned to be. A pair of baby owls watched us from a tree. It was peaceful, with wide views and almost total quiet. Nothing fancy, but memorable.

Day 7: Camp Sherman and quiet rivers

A black camper van is parked in a forested campground surrounded by tall pine trees. Parked at Camp Sherman Campground, next to the Metolius River in Oregon, USA.

On our final full day, we drove about an hour and twenty minutes to Camp Sherman Campground, nestled along the Metolius River. The Metolius is a spring-fed river known for its crystal-clear waters and world-class fly fishing.

The campground is "first-come, first-serve", and we were lucky to find a site right by the water. This was our first time using an "Iron Ranger", the self-pay envelope system used in many public campgrounds. A refreshing throwback to simpler times.

We hiked a trail along the river, upstream through forests that had clearly burned in recent years. Signs along the path explained it was a prescribed burn area, which gave the charred trunks and new growth a different kind of meaning.

We watched a family of deer move through the trees at dusk very close to our van. They seemed as curious about us as we were about them.

Later that evening, we cooked, read, and talked. We sat by the fire, wrapped in the camping blankets we had picked up at REI. It was quiet in the way we needed it to be.

The next morning, it was time to move on. I had to get to Chicago for a Drupal Camp, even though neither of us felt ready to leave.

530 miles, one van and zero regrets

This loop, starting in Portland, heading to the coast, through wine country, over the mountains, and back again, turned out to be the perfect test run. Each stop offered something different, from ocean breezes to vineyard views to rugged high desert hikes. The drives were short enough to stay relaxed, ranging from 40 minutes to three hours. In total, we covered about 530 miles.

By the end of the trip, we had gone from full hookups to fully self-sufficient, using solar power and our onboard water supply. The van passed every test we threw at it. Now we knew we could take it anywhere.

May 17, 2025

It's been a while since I wrote about one of my favorite songs, but Counting Crows' "Round Here" is one that has always stuck with me.

I've listened to this song hundreds of times, and this non-standard version, where Adam Duritz stretches the lyrics and lets his emotions flow, hits even harder.

To me, it feels like a quiet cry about mental health. About someone feeling disconnected, uncertain of who they are, and not at home in their own life. There is something raw, honest, and deeply human in the way the song captures that struggle.

The song has only grown on me over time. I didn't fully understand or appreciate it in my twenties, but now that I'm in my forties, I've come to see more people around me carrying quiet struggles. If that is you, I hope you're taking care of yourself.

May 16, 2025

A little low-tech manifesto

This Saturday, May 17, I will be cycling to Massy with Tristan Nitot to talk about "low-tech" and sign copies of Bikepunk at the Parlons VƩlo festival.

Fair warning: what follows spoils part of what I will be saying in Massy on Saturday at noon. If you are coming, stop reading here and see you tomorrow!

What is low-tech?

The term low-tech intuitively conveys an opposition to technological excess ("high tech") while avoiding technophobic extremism. It is an exciting term, but one that I think needs spelling out, and for which I propose the following definition.

A technology is said to be "low-tech" if the people interacting with it both know how to understand how it works and are able to do so.

Knowing how to understand. Being able to understand. Two essential elements (and hard to tell apart for the Belgian that I am).

Knowing how to understand

Knowing how to understand a technology means being able to build a mental model of its inner workings.

Obviously, not everyone has the capacity to understand every technology. But it is possible to proceed by levels. Most drivers know that a petrol car burns fuel that explodes inside an engine, and that the explosion drives pistons that turn the wheels. The French name is a clue in itself: an "explosion engine"!

Even if I understand nothing more about how an engine works, I know for certain that there are people who understand it better, often in my immediate circle. The finer the understanding, the rarer those people become, but everyone can try to improve.

The technology is simple without being simplistic. This means its complexity can be grasped gradually, and that there are experts who grasp a given technology in its entirety.

By contrast, it is nowadays humanly impossible to understand a modern smartphone. Only a few experts in the world each master one particular aspect of the object: from the design of the 5G antenna to the software that automatically retouches photos to fast battery charging. And none of them masters the design of the compiler needed to make the whole thing run. Even a genius who spent their life taking smartphones apart would be utterly incapable of understanding what happens inside a device we all keep permanently either in a pocket or in front of our nose!

The vast majority of smartphone users have no mental model whatsoever of how it works. I am not talking about a wrong or simplistic model: no, there is none at all. The object is "magic". Why does it display one thing rather than another? Because it is "magic". And, as with magic, you are not supposed to try to understand.

Low-tech can be extremely complex, but the very existence of that complexity must be understandable and justified. Transparent complexity naturally encourages curious minds to ask questions.

Time to understand

Understanding a technology takes time. It implies a long relationship, an experience built up over a lifetime, shared and passed on.

By contrast, high-tech imposes constant renewal, constant updates, permanent changes of interface and functionality that reinforce the "magic" aspect and discourage those who try to build themselves a mental model.

Low-tech must therefore necessarily be durable. Long-lasting. It must be teachable and allow that teaching to be built up progressively.

This sometimes requires effort and difficulty. Not everything can always be gradual: at some point, you have to get on your bike to learn to keep your balance.

Being able to understand

Historically, it seems obvious that any technology could be understood. People interacting with a technology were forced to repair it, to adapt it, and therefore to understand it. Technology was essentially material, which meant it could be taken apart.

With software comes a new concept: hiding how things work. And while, historically, all software was open source, the invention of proprietary software makes it difficult, if not impossible, to understand a technology.

Proprietary software could only be invented thanks to the creation of a recent, and frankly absurd, concept called "intellectual property".

Having enabled the privatisation of knowledge in software, intellectual property was then extended to the material world. Suddenly, it becomes possible to forbid a person from even trying to understand the technology they use every day. Thanks to intellectual property, farmers suddenly find themselves forbidden from opening the hood of their own tractor.

Low-tech must be open. It must be possible to repair it, modify it, improve it and share it.

From user to consumer

Through ever-growing complexity, incessant change and the imposition of a strict "intellectual property" regime, users have been turned into consumers.

This is no accident. It is not an inevitable evolution of nature. It is a conscious choice. Every business school teaches future entrepreneurs to build themselves a captive market, to deprive their customers of as much freedom as possible, to build what the jargon calls a "moat" (the ditch that protects a castle) in order to increase "user retention".

The terms themselves become vague to reinforce this feeling of magic. For example, we no longer talk about transferring a .jpg file to a remote computer, but about "saving your memories in the cloud".

Marketers made us believe that by removing complicated words they would simplify technology. The opposite is obviously true. The appearance of simplicity is an extra layer of complexity that imprisons the user. Every technology requires learning. That learning should be encouraged.

For a low-tech approach and ethic

The low-tech ethic consists in putting ourselves back at the service of users by making it easier for them to understand their tools.

High-tech is not magic, it is sleight of hand. Rather than hiding the "tricks" behind artifice, low-tech seeks to show them and to foster a conscious use of technology.

This does not necessarily mean oversimplification.

Take the example of a washing machine. We all understand that a basic machine is a spinning drum into which water and soap are injected. That is very simple and low-tech.

One could argue that adding sensors and electronic controllers makes it possible to wash laundry more efficiently and more ecologically, by weighing it and adapting the spin speed to the type of fabric.

In a low-tech approach, an electronic box is added to the machine to do exactly that. If the box is removed or breaks down, the machine simply keeps working. The user can choose to unplug the box or replace it. They understand its purpose and its justification. They build a mental model in which the box merely presses the setting buttons at the right moment. And, above all, they do not have to send the whole machine to the scrapyard because the wifi chip no longer works and is no longer updated, which has bricked the firmware (wait, my washing machine has a wifi chip?).

For a low-tech community

A low-tech technology encourages users to understand it and make it their own, and gives them the opportunity to do so. It tries to remain stable over time and to become standardised. It does not try to hide its intrinsic complexity, on the principle that simplicity comes from transparency.

This understanding, this appropriation, can only happen through interaction. A low-tech technology will therefore, by its very nature, foster the creation of communities and human exchanges around that same technology.

To contribute to humanity and to communities, a low-tech technology must belong to everyone, must be part of the commons.

Which brings me to this definition, complementary to and equivalent with the first:

A technology is said to be "low-tech" if it exposes its complexity in a simple, open, transparent and durable way while belonging to the commons.

I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writing in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

May 15, 2025

MySQL provides the MySQL Community Edition, the Open-Source version. In addition, there is the Enterprise Edition for our Commercial customers and MySQL HeatWave, our managed database service (DBaaS) on the cloud (OCI, AWS, etc.). But do you know developers can freely use MySQL Enterprise for non-commercial use? The full range of MySQL Enterprise Edition features […]

May 14, 2025

After my last blog post about the gloriously pointless /dev/scream, a few people asked:

ā€œWasn’t /dev/null good enough?ā€

Fair question—but it misses a key point.

Let me explain: /dev/null and /dev/zero are not interchangeable. In fact, they are opposites in many ways. And to fully appreciate the joke behind /dev/scream, you need to understand where that scream is coming from—not where it ends up.


🌌 Black Holes and White Holes

To understand the difference, let us borrow a metaphor from cosmology.

  • /dev/null is like a black hole: it swallows everything. You can write data to it, but nothing ever comes out. Not even light. Not even your logs.
  • /dev/zero is like a white hole: it constantly emits data. In this case, an infinite stream of zero bytes (0x00). It produces, but does not accept.

So when I run:

dd if=/dev/zero of=/dev/null

I am pulling data out of the white hole, and sending it straight into the black hole. A perfectly balanced operation of cosmic futility.
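
If you want a bounded version of that command, one that finishes and reports how quickly the kernel shovels nothingness around, standard GNU dd options are enough; this is just a usage variation, not something from the original post:

# Copy 1 GiB of zeroes from the white hole into the black hole, then print throughput.
dd if=/dev/zero of=/dev/null bs=1M count=1024 status=progress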


šŸ“¦ What Are All These /dev/* Devices?

Let us break down the core players:

| Device | Can You Write To It? | Can You Read From It? | What You Read | Commonly Used For | Nickname / Metaphor |
|---|---|---|---|---|---|
| /dev/null | Yes | Yes | Instantly empty (EOF) | Discard console output of a program | Black hole šŸŒ‘ |
| /dev/zero | Yes | Yes | Endless zeroes (0x00) | Wiping drives, filling files, or allocating memory with known contents | White hole šŸŒ• |
| /dev/random | No | Yes | Random bytes from entropy pool | Secure wiping drives, generating random data | Quantum noise šŸŽ² |
| /dev/urandom | No | Yes | Pseudo-random bytes (faster, less secure) | Generating random data | Pseudo-random fountain šŸ”€ |
| /dev/one | Yes | Yes | Endless 0xFF bytes | Wiping drives, filling files, or allocating memory with known contents | The dark mirror of /dev/zero ☠ |
| /dev/scream | Yes | Yes | aHAAhhaHHAAHaAaAAAA… | Catharsis | Emotional white hole 😱 |

Note: /dev/one is not a standard part of Linux—it comes from a community kernel module, much like /dev/scream.


šŸ—£ Back to the Screaming

/dev/scream is a parody of /dev/zero—not /dev/null.

The point of /dev/scream was not to discard data. That is what /dev/null is for.

The point was to generate data, like /dev/zero or /dev/random, but instead of silent zeroes or cryptographic entropy, it gives you something more cathartic: an endless, chaotic scream.

aHAAhhaHHAAHaAaAAAAhhHhhAAaAAAhAaaAAAaHHAHhAaaaaAaHahAaAHaAAHaaHhAHhHaHaAaHAAHaAhhaHaAaAA

So when I wrote:

dd if=/dev/scream of=/dev/null

I was screaming into the void. The scream came from the custom device, and /dev/null politely absorbed it without complaint. Not a single bit screamed back. Like pulling screams out of a white hole and throwing them into a black hole. The ultimate cosmic catharsis.


🧪 Try Them Yourself

Want to experience the universe of /dev for yourself? Try these commands (press Ctrl+C to stop each):

# Silent, empty. Nothing comes out.
cat /dev/null

# Zero bytes forever. Very chill.
hexdump -C /dev/zero

# Random bytes from real entropy (may block).
hexdump -C /dev/random

# Random bytes, fast but less secure.
hexdump -C /dev/urandom

# If you have the /dev/one module:
hexdump -C /dev/one

# If you installed /dev/scream:
cat /dev/scream

šŸ’” TL;DR

  • /dev/null = Black hole: absorbs, never emits.
  • /dev/zero = White hole: emits zeroes, absorbs nothing.
  • /dev/random / /dev/urandom = Entropy sources: useful for cryptography.
  • /dev/one = Evil twin of /dev/zero: gives endless 0xFF bytes.
  • /dev/scream = Chaotic white hole: emits pure emotional entropy.

So no, /dev/null was not ā€œgood enoughā€ā€”it was not the right tool. The original post was not about where the data goes (of=/dev/null), but where it comes from (if=/dev/scream), just like /dev/zero. And when it comes from /dev/scream, you are tapping into something truly primal.

Because sometimes, in Linux as in life, you just need to scream into the void.

How the university is killing books (and intellectuals)

The Louvain-la-Neuve library must be saved

Threatened with eviction by the university, the public library of Louvain-la-Neuve is at risk of disappearing. It is urgent to sign the petition to try to save it.

But this is not an isolated event, and it is not an accident. It is merely a skirmish in the long war that the city, the university and consumer society are waging against books and, through them, against intellectualism.

The book, the intellectual's indispensable tool

One of the tasks I set my students every year before the exam is to read a book. Ideally fiction or an essay, but a non-technical book.

Their choice.

Of course, I suggest ideas related to my course. In particular "Little Brother" by Cory Doctorow, which is easy to read, gripping and entirely on topic. But the students are free to choose.

Every year, several students confide during the exam that they had not read a book since secondary school, but that, actually, it was really enjoyable and really made them think. That without me, they would have gone through their entire engineering degree without reading a single book other than textbooks.

Books, which force you to read over a long stretch of time, which force immersion, are the indispensable tool of the intellectual and the humanist. It is impossible to think without books. It is impossible to step back, to make new connections and to innovate without being immersed in the diversity of eras, places and human experiences that books offer. You can tread water in a field for years, even become competent, without reading. But deep understanding, real expertise, requires books.

Those who do not read books are condemned to settle for superficiality, to let themselves be manipulated, to obey blindly. And perhaps that is the goal.

I believe the university should not be producing obedient, employable little consultants, but humanist intellectuals. The university's primary mission runs through the diffusion, promotion and appropriation of the intellectual culture of the book.

Between humanism and real-estate profit, the university has chosen

But in Louvain-la-Neuve, the university seems to be turning into a mere real-estate agency. The city that grew up around the university over the past 50 years is being transformed to gradually offer only two things: food and clothes.

In 2021, the second-hand bookseller on the Place des Wallons, there for 40 years thanks to a historic lease, saw his landlord, the university, hit him with a dizzying rent increase. I saw him, eyes full of tears, boxing up the thousands of comic books in his stock to make way for... a waffle seller!

Then it was the turn of the city's second second-hand bookshop, a tiny shop whose walls were blackened with philosophy books, where we regulars would meet to fight over a few rare finds. The couple who ran it told me that, given the rent, also paid to the university, it was more profitable for them to become itinerant booksellers. "You are not going to like it!" the manager confided when I asked who would take over the space. A few weeks later, sure enough, a shop window selling handbags appeared!

As for the city's main bookshop, the historic Agora bookshop, it was bought by the Furet du Nord group, whose Belgian branch went bankrupt. It has to be said that the bookshop occupied an enormous space belonging partly to the property developer KlĆ©pierre and partly to the university. According to my sources, the monthly rent came to... €35,000!

From that bankruptcy I salvaged several bookcases that were being given away. The worker clearing out the shop told me, with a smirk, that the students were going to be happy with the change! He was not allowed to tell me what would replace the bookshop, but, he promised, they were going to be happy.

Indeed, the project turned out to be... a Luna Park! (Which, although finished, has not been allowed to open its doors, following local residents' fears about the noise such a place generates.)

But the university did not intend to stop there. Eager to reclaim premises with no commercial potential whatsoever, it also evicted the Cerfaux Lefort second-hand book centre. A petition to try to save it gathered 3,000 signatures. Without success.

Since it works, let's drive the point home!

For a few months, Louvain-la-Neuve, a university and intellectual city, found itself without a bookshop! Aware that this looked bad, the university offered decent terms to a motivated team to create the bookshop "La Page d'Aprùs" in a small space. The shop is small and therefore has to make choices (genre fiction, my favourite field, occupies less than half a table).

I was of course enthusiastic about the Page d'AprĆØs project and immediately became a loyal customer. I had not imagined the devious mindset of the property developer the university has become: its support for La Page d'AprĆØs (which is only very relative, the space is not free either) has become the stock excuse against the slightest criticism!

Because today it is the public library of Louvain-la-Neuve itself that is threatened in the very short term. The toy library and children's books section is already condemned to make way for an extension of the university restaurant. The rest of the library is on the chopping block. The university reckons it could get €100,000 a year in rent for the space and sees no reason to hand €100,000 to an institution that obviously could not pay such a sum. Let us say rather that the university no longer sees any value in this library, which it once ardently desired and only obtained thanks to an agreement signed in 1988, back when Louvain-la-Neuve was still just a young assemblage of lecture halls and student housing.

To the question "Can you imagine a university city without a library?" raised by many citizens, the answer of certain decision-makers is unambiguous: "We have La Page d'Aprùs." As if it were the same thing. As if it were enough. But, as some politicians unafraid of displaying their intellectual deficiency sometimes let slip: "Books are dead, the future is AI. And anyway, if need be, there is Amazon."

The university is offering to let the library keep a fraction of its current space, on condition that the refurbishment works are paid for... by the public library itself (with the result remaining the property of the university). From a library, the Louvain-la-Neuve branch would be turned into an "outpost" with a very small stock, where you could pick up the books you had ordered.

But that is to completely misunderstand the role of a library. A place where you can wander and make unlikely literary discoveries, discoveries encouraged by the staff's initiatives (highlighting little-known titles, drawing a random reading suggestion...). In the Louvain-la-Neuve library, I have come across volunteers helping adult immigrants choose children's books in order to learn French. I have seen my son spontaneously start reading the daily newspapers laid out for everyone.

A library is not a pickup point or a shop; a library is a living place!

The library must survive. It must be saved. (And sign the petition if you have not already done so.)

The gradual disappearance of an entire sector

Far from competing with one another, the various players in the book world reinforce and help each other. The best customers of one are often the best customers of another. A purchase on one side leads, by ricochet, to a purchase on the other. The public library of Louvain-la-Neuve is the biggest customer of the comic-book shop Slumberland (or the second biggest after me, my wallet whistles at me). The university could choose to take part in this ecosystem.

Slumberland, a mythical place towards which I direct my five daily prayers, occupies a KlƩpierre space. Because in Louvain-la-Neuve, everything belongs either to the university or to the KlƩpierre group, owner of the shopping centre. With Slumberland's lease coming up for renewal, they have just been notified of a sudden increase of more than 30%!

€15,000 a month. Being open 60 hours a week (which is enormous for a shop), that means more than one euro per minute of opening time. Just to pay its rent, Slumberland has to sell a comic book every 5 minutes! At that rate, my (numerous and recurring) purchases do not even cover the time I spend browsing in the shop!

These rents raise questions: how can a shop selling garish rags produced by children in basements in Asia earn enough to pay such sums, when the best book suppliers struggle to make ends meet? How is it that my neighbourhood grocery, there for 22 years, favouring organic and local products, packed to the brim with customers every day, suddenly has to close up shop? As in the United States, where people no longer say "have a coffee" but "grab a Starbucks", soon only the big chains will be left.

Faced with the hegemony of these monopolies, I believed the university was an ally. But it must be said that the model is rather that of Monaco: the only country in the world without a single bookshop!

What kind of society are academics building?

Rest assured, Slumberland will survive a little longer in Louvain-la-Neuve. The shop has found a cheaper space (because it is less well located) and will move. Its new landlord? The university, of course! The last bookish bastions of a city that was once an intellectual and humanist utopia, Slumberland and La Page d'AprĆØs will be allowed to survive until the day the real-estate managers who call themselves intellectuals decide it would be more profitable to sell a few more waffles, a few more handbags, or to dumb the students down a little further with a Luna Park.

The university has become a business. The commercial verdict is final: producing morons formatted for instagrammable consumption pays better than training intellectuals.

But this is not inevitable.

The future is what we decide to make of it. The university is not forced to become a mere property manager. We are the university; we can transform it.

I invite all university staff, professors, students, readers, intellectuals and humanists to act, to talk to those around them, to defend books by circulating them, lending them, encouraging people to read them, recommending them, sharing their opinions, and opening debates on the place of intellectuals in the city.

To preserve knowledge and culture, to protect humanism and intelligence from absurd short-term commodification, we have a duty to communicate, to share without restriction, to make our voices heard in every imaginable way.

I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writing in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

May 12, 2025

Pour une poignĆ©e de bits…

Toute l’infrastructure gigantesque d’Internet, tous ces milliers de cĆ¢bles sous-marins, ces milliards de serveurs clignotants ne servent aux humains qu’à Ć©changer des sĆ©ries de bits.

Nos tĆ©lĆ©phones produisent des bits qui sont envoyĆ©s, dupliquĆ©s, stockĆ©s et, parfois, arrivent sur d’autres tĆ©lĆ©phones. Souvent, ces bits ne sont utiles que pour quelques secondes Ć  peine. Parfois, ils ne le sont pas du tout.

Nous produisons trop de bits pour ĆŖtre capables de les consommer ou pour tout simplement en avoir envie.

Or, toute la promesse de l’IA, c’est d’automatiser cette gĆ©nĆ©ration de bits en faisant deux choses : enregistrer les sĆ©quences de bits existantes pour les analyser puis reproduire des sĆ©quences de bits nouvelles, mais « ressemblantes ».

L’IA, les LLMs, ce ne sont que Ƨa : des gĆ©nĆ©rateurs de bits.

Comme me le souffle trĆØs justement StĆ©phane "Alias" Gallay : la course Ć  l’IA, ce n’est finalement qu’un concours de bits.

Enregistrer les sƩquences de bits

Tous les producteurs d’IA doivent donc d’abord enregistrer autant de sĆ©quences de bits existantes que possible. Pour cette raison, le Web est en train de subir une attaque massive. Ces fournisseurs de crĆ©ateurs de bits pompent agressivement toutes les donnĆ©es qui passent Ć  leur portĆ©e. En continu. Ce qui met Ć  mal toute l’infrastructure du web.

Mais comment arrivent-ils Ć  faire cela ? Et bien une partie de la solution serait que ce soit votre tĆ©lĆ©phone qui le fasse. La sociĆ©tĆ© Infatica, met en effet Ć  disposition des dĆ©veloppeurs d’app Android et iPhone des morceaux de code Ć  intĆ©grer dans leurs apps contre paiement.

Ce que fait ce code ? Tout simplement, Ć  chaque fois que vous utilisez l’app, il donne l’accĆØs Ć  votre bande passante Ć  des clients. Clients qui peuvent donc faire les requĆŖtes de leur choix comme pomper autant de sites que possible. Cela, sans que l’utilisateur du tĆ©lĆ©phone en soi informĆ© le moins du monde.

Cela rend l’attaque impossible Ć  bloquer efficacement, car les requĆŖtes proviennent de n’importe où, n’importe quand.

Tout comme le spam, l’activitĆ© d’un virus informatique se fait dĆ©sormais Ć  visage dĆ©couvert, avec de vraies sociĆ©tĆ©s qui vendent leurs « services ». Et les geeks sont trop naĆÆfs : ils cherchent des logiciels malveillants qui exploitent des failles de sĆ©curitĆ© compliquĆ©es alors que tout se fait de maniĆØre transparente, Ć  ciel ouvert, mais avec ce qu’on appelle la "plausible deniability" grĆ¢ce Ć  des couches de services commerciaux. Il y a mĆŖme des sites avec des reviews et des Ć©toiles pour choisir son meilleur rĆ©seau de botnets pseudolĆ©gal.

Le dĆ©veloppeur de l’app Android dira que Ā« il ne savait pas que son app serait utilisĆ©e pour faire des choses nĆ©fastes ». Les fournisseurs de ce code et revendeurs diront Ā« on voulait surtout aider la recherche scientifique et le dĆ©veloppeur est censĆ© prĆ©venir l’utilisateur ». Le client final, qui lance ces attaques pour entrainer ses gĆ©nĆ©rateurs de bits dira Ā« je n’ai fait qu’utiliser un service commercial ».

En fait, c’est mĆŖme pire que cela : comme je l’ai dĆ©montrĆ© lorsque j’ai dĆ©tectĆ© la prĆ©sence d’un tracker Facebook dans l’application officielle de l’institut royal de mĆ©tĆ©orologie belge, il est probable que le maĆ®tre d’œuvre de l’application n’en sache lui-mĆŖme rien, car il aura utilisĆ© un sous-traitant pour dĆ©velopper l’app. Et le sous-traitant aura lui-mĆŖme crƩƩ l’app en question sur base d’un modĆØle existant (un template).

GrĆ¢ce Ć  ces myriades de couches, personne ne sait rien. Personne n’est responsable de rien. Et le web est en train de s’effondrer. AllĆ©gorie virtuelle du reste de la sociĆ©tĆ©.

Generating bit sequences

Once enough bit sequences have been recorded, the next step is to look for patterns in them in order to generate new, but ā€œsimilar-lookingā€, sequences. Technically, what is truly impressive about ChatGPT and friends is the scale at which they do what computer-science researchers have been doing for twenty years.

But if the result has to ā€œlook alikeā€, it must not look too much alike! After all, for decades we have had our ears drummed with ā€œplagiarismā€ and ā€œtheft of intellectual propertyā€. Dear me, ā€œpiratingā€ is bad.

Well, no: go ahead! Pirate my books! They are made for it, after all; they are under a free license. Because I want to be read. That is why I write. I do not know a single artist who has grown their audience by ā€œprotecting their intellectual propertyā€.

Have you ever considered piracy?

Apparently, pirating is bad.

Except when it is the AIs doing it. Otakar G. Hubschmann shows this very well in an enlightening experiment. He asks ChatGPT to generate images of a ā€œsuperhero who uses spiderwebs to get aroundā€, of a ā€œyoung wizard who goes to school with his friendsā€, or of an ā€œItalian plumber with a red capā€.

And the AI refuses. Because that would infringe a copyright. So, apologies to all the Italian plumbers who would like to wear a red cap: you are Nintendo’s intellectual property.

But where it gets even more mind-boggling is when he moves away from today’s biggest franchises. If he asks for a ā€œphoto of a woman fighting an alienā€, he gets… a picture of Sigourney Weaver. A picture of an archaeologist adventurer who wears a hat and uses a whip? He gets a photo of Harrison Ford.

As I was saying: just one series of bits resembling another.

Which teaches us just how little originality, none whatsoever, these AIs have. But above all, it shows that copyright truly is a censorship tool that only serves the very, very big players. Thanks to AI, it is now impossible to illustrate, or even imagine, a child wizard going to school, because that would be plagiarising Harry Potter (itself, in my opinion, a plagiarism of an Anthony Horowitz novel, but let’s move on…).

As IrĆ©nĆ©e RĆ©gnauld puts it, this amounts to pushing a normative use of technology to a very frightening point.

Yet to protect those franchises and that copyright, the same AIs do not hesitate to help themselves to pirate databases and to trash every small hosting service along the way.

The humans behind the bits

Worst of all, it is so fashionable to claim that your bits were generated automatically that, often, the work is actually done by humans disguised as automatic generators. Like that ā€œAIā€ shopping app which was, in reality, nothing but underpaid Filipino workers.

The Luddites understood it, Charlie Chaplin illustrated it in ā€œModern Timesā€, Arnold Schwarzenegger tried to warn us: we serve the machines we believe we designed to serve us. We are slaves to bit generators.

For the love of bits!

At my town’s newsagent, I discovered there was only one magazine on display devoted to Linux, but no fewer than five magazines devoted entirely to bit generators. With covers like ā€œGet more out of ChatGPTā€. As if one could use it ā€œbetterā€. And as if the content of those magazines were not itself generated.

It is so exhausting that I have resolved to stop reading articles about these bit generators, even when they look interesting. I will try to read less on the subject, and to talk about it less. After all, I think I have said everything I had to say in these two posts:

You are already assailed enough by bit generators and by bits talking about bit generators. I will try not to add too much to the pile and to return to my craft. Every series of bits I offer you is entirely shaped by hand, from one human to another. It is more expensive, rarer, and takes longer to read, but, I hope, it is of an altogether different quality.

Can you feel the love of the craft and the passion behind these bits, each of which carries a deep meaning and a real purpose? It is in order to pass them on and share them that I seek to preserve our infrastructure and our brains.

Happy reading, and enjoy your exchanges between humans!

I am Ploum and I have just published Bikepunk, an eco-cycling fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

May 11, 2025

Plushtodon

I decided to leave Twitter.

Yes, this has something to do with the change of ownership, the name change to X, …

There is only 1 X to me, and that’s X.org

Twitter has become a platform that doesn’t value #freedomofspeech anymore.

My account even got flagged as possible spam for ā€œfactcheckingā€ #fakenews

The main reason is that there is a better alternative in the form of the Fediverse. The #Fediverse is the network of platforms that federate over ActivityPub, the protocol that Mastodon uses.

It allows for a truly decentralised social media platform.

It allows organizations to set up their own Mastodon instance and take ownership and accountability for their content and accounts.

Mastodon is a nice platform; you will probably feel at home there.

People who follow me on Twitter can continue to follow me on Mastodon if they want.

https://mastodon.social/@stafwag

I’ll post this message a couple of times to Twitter before I close my Twitter account, so people can decide whether they want to follow me on Mastodon… or not ;-).

Have fun!

May 09, 2025

Before the MySQL & HeatWave Summit, we released MySQL 9.3, the latest Innovation Release. The event was terrific, and I had the chance to meet some of the MySQL contributors. As usual, we released bug fixes for 8.0 and 8.4 LTS, but I focus on the newest release in this post. We included patches and code […]

May 07, 2025

It started innocently enough. I was reading a thread about secure file deletion on Linux—a topic that has popped up in discussions for decades. You know the kind: “Is shred still reliable? Should I overwrite with random data or zeroes? What about SSDs and wear leveling?”

As I followed the thread, I came across a mention of /dev/zero, the classic Unix device that outputs an endless stream of null bytes (0x00). It is often used in scripts and system maintenance tasks like wiping partitions or creating empty files.

That led me to wonder: if there is /dev/zero, is there a /dev/one?

Turns out, not in the standard kernel—but someone did write a kernel module to simulate it. It outputs a continuous stream of 0xFF, which is essentially all bits set to one. It is a fun curiosity with some practical uses in testing or wiping data in a different pattern.
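
For example, here is roughly how /dev/zero is typically used, and how the /dev/one idea can be approximated in plain shell without the kernel module (the tr pipeline is only a userspace stand-in for what such a module would stream):

# Create a 10 MiB file filled with null bytes from /dev/zero
dd if=/dev/zero of=empty.img bs=1M count=10

# Approximate a /dev/one-style stream of 0xFF bytes and inspect the first 16 of them
tr '\0' '\377' < /dev/zero | head -c 16 | od -An -tx1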

But then came the real gem of the rabbit hole: /dev/scream.

Yes, it is exactly what it sounds like.

What is /dev/scream?

/dev/scream is a Linux kernel module that creates a character device which, when read, outputs a stream of text that mimics a chaotic, high-pitched scream. Think:

aHAAhhaHHAAHaAaAAAAhhHhhAAaAAAhAaaAAAaHHAHhAaaaaAaHahAaAHaAAHaaHhAHhHaHaAaHAAHaAhhaHaAaAA

It is completely useless… and completely delightful.

Originally written by @matlink, the module is a humorous take on the Unix philosophy: “Everything is a file”—even your existential dread. It turns your terminal into a primal outlet. Just run:

cat /dev/scream

And enjoy the textual equivalent of a scream into the void.

Why?

Why not?

Sometimes the joy of Linux is not about solving problems, but about exploring the weird and wonderful corners of its ecosystem. From /dev/null swallowing your output silently, to /dev/urandom serving up chaos, to /dev/scream venting it—all of these illustrate the creativity of the open source world.

Sure, shred and secure deletion are important. But so is remembering that your system is a playground.

Try it Yourself

If you want to give /dev/scream a go, here is how to install it:

⚠ Warning

This is a custom kernel module. It is not dangerous, but do not run it on production systems unless you know what you are doing.

Build and Load the Module

git clone https://github.com/matlink/dev_scream.git
cd dev_scream
make build
sudo make install
sudo make load
sudo insmod dev_scream.ko

Now read from the device:

cat /dev/scream

Or, if you are feeling truly poetic, try screaming into the void:

dd if=/dev/scream of=/dev/null

In space, nobody can hear you scream… but on Linux, /dev/scream is loud and clear—even if you pipe it straight into oblivion.

When you are done screaming:

sudo rmmod dev_scream

Final Thoughts

I started with secure deletion, and I ended up installing a kernel module that screams. This is the beauty of curiosity-driven learning in Linux: you never quite know where you will end up. And sometimes, after a long day, maybe all you need is to cat /dev/scream.

Let me know if you tried it—and whether your terminal feels a little lighter afterward.

May 04, 2025

Unbound

Unbound is a popular DNS resolver that has native DNS-over-TLS support.

Unbound and Stubby were among the first resolvers to implement DNS-over-TLS.

I wrote a few blog posts on how to use Stubby on GNU/Linux and FreeBSD.

The implementation status of DNS-over-TLS and other DNS privacy options is available at: https://dnsprivacy.org/.

See https://dnsprivacy.org/implementation_status/ for more details.

It’s less well known that it can also be used as an authoritative DNS server (i.e. a real DNS server). Since I discovered this feature and Unbound got native DNS-over-TLS support, I started to use it as my DNS server.

I created a docker container for it a couple of years back to use it as an authoritative DNS server.

I recently updated the container, the latest version (2.1.0) is available at: https://github.com/stafwag/docker-stafwag-unbound

ChangeLog

Version 2.1.0

Upgrade to debian:bookworm

  • Updated BASE_IMAGE to debian:bookworm
  • Add ARG DEBIAN_FRONTEND=noninteractive
  • Run unbound-control-setup to generate the default certificate
  • Documentation updated



docker-stafwag-unbound

Dockerfile to run unbound inside a docker container. The unbound daemon will run as the unbound user. The uid/gid is mapped to 5000153.

Installation

clone the git repo

$ git clone https://github.com/stafwag/docker-stafwag-unbound.git
$ cd docker-stafwag-unbound

Configuration

Port

The default DNS port is set to 5353; this port is mapped to the default port 53 with the docker command (see below). If you want to use another port, you can edit etc/unbound/unbound.conf.d/interface.conf.
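
That file typically contains something along these lines (a sketch based on standard unbound syntax; the file shipped in the repository is authoritative):

server:
  interface: 0.0.0.0
  port: 5353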

scripts/create_zone_config.sh helper script

The create_zone_config.sh helper script can help you create the zones.conf configuration file. It’s executed during the container build and creates zones.conf from the data files in etc/unbound/zones.

If you want to use a docker volume, or configmaps/persistent volumes on Kubernetes, you can use this script to generate the zones.conf from a zones data directory.

create_zone_config.sh has following arguments:

  • -f Default: /etc/unbound/unbound.conf.d/zones.conf The zones.conf file to create
  • -d Default: /etc/unbound/zones/ The zones data source files
  • -p Default: the realpath of zone files
  • -s Skip chown/chmod

Use unbound as an authoritative DNS server

To use unbound as an authoritative DNS server - a DNS server that hosts DNS zones - add your zone files to etc/unbound/zones/.

During the creation of the image scripts/create_zone_config.sh is executed to create the zones configuration file.

Alternatively, you can also use a docker volume to mount your zone files at /etc/unbound/zones/, plus a volume mount for the zones.conf configuration file.

You can use subdirectories. The zone file needs to have $ORIGIN set to your zone origin.

Use DNS-over-TLS

The default configuration uses quad9 to forward the DNS queries over TLS. If you want to use another vendor, or you want to query the root DNS servers directly, you can remove this file.
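
For reference, forwarding over TLS to quad9 in unbound is configured with a forward-zone along the lines of the sketch below (the exact configuration file in the container may differ):

server:
  tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

forward-zone:
  name: "."
  forward-tls-upstream: yes
  forward-addr: 9.9.9.9@853#dns.quad9.net
  forward-addr: 149.112.112.112@853#dns.quad9.net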

Build the image

$ docker build -t stafwag/unbound . 

To use a different BASE_IMAGE, you can use the --build-arg BASE_IMAGE=your_base_image.

$ docker build --build-arg BASE_IMAGE=stafwag/debian:bookworm -t stafwag/unbound .

Run

Recursive DNS server with DNS-over-TLS

Run

$ docker run -d --rm --name myunbound -p 127.0.0.1:53:5353 -p 127.0.0.1:53:5353/udp stafwag/unbound

Test

$ dig @127.0.0.1 www.wagemakers.be

Authoritative DNS server

If you want to use unbound as an authoritative DNS server, you can use the steps below.

Create a directory with your zone files:

[staf@vicky ~]$ mkdir -p ~/docker/volumes/unbound/zones/stafnet
[staf@vicky ~]$ 
[staf@vicky stafnet]$ cd ~/docker/volumes/unbound/zones/stafnet
[staf@vicky ~]$ 

Create the zone files

Zone files

stafnet.zone:

$TTL  86400 ; 24 hours
$ORIGIN stafnet.local.
@  1D  IN  SOA @  root (
            20200322001 ; serial
            3H ; refresh
            15 ; retry
            1w ; expire
            3h ; minimum
           )
@  1D  IN  NS @ 

stafmail IN A 10.10.10.10

stafnet-rev.zone:

$TTL    86400 ;
$ORIGIN 10.10.10.IN-ADDR.ARPA.
@       IN      SOA     stafnet.local. root.localhost.  (
                        20200322001; Serial
                        3h      ; Refresh
                        15      ; Retry
                        1w      ; Expire
                        3h )    ; Minimum
        IN      NS      localhost.
10      IN      PTR     stafmail.

Make sure that the volume directory and zone files have the correct permissions.

$ sudo chmod 750 ~/docker/volumes/unbound/zones/stafnet/
$ sudo chmod 640 ~/docker/volumes/unbound/zones/stafnet/*
$ sudo chown -R root:5000153 ~/docker/volumes/unbound/

Create the zones.conf configuration file.

[staf@vicky stafnet]$ cd ~/github/stafwag/docker-stafwag-unbound/
[staf@vicky docker-stafwag-unbound]$ 

The script will execute a chown and chmod on the generated zones.conf file and is executed with sudo for this reason.

[staf@vicky docker-stafwag-unbound]$ sudo scripts/create_zone_config.sh -f ~/docker/volumes/unbound/zones.conf -d ~/docker/volumes/unbound/zones/stafnet -p /etc/unbound/zones
Processing: /home/staf/docker/volumes/unbound/zones/stafnet/stafnet.zone
origin=stafnet.local
Processing: /home/staf/docker/volumes/unbound/zones/stafnet/stafnet-rev.zone
origin=1.168.192.IN-ADDR.ARPA
[staf@vicky docker-stafwag-unbound]$ 

Verify the generated zones.conf

[staf@vicky docker-stafwag-unbound]$ sudo cat ~/docker/volumes/unbound/zones.conf
auth-zone:
  name: stafnet.local
  zonefile: /etc/unbound/zones/stafnet.zone

auth-zone:
  name: 1.168.192.IN-ADDR.ARPA
  zonefile: /etc/unbound/zones/stafnet-rev.zone

[staf@vicky docker-stafwag-unbound]$ 

run the container

$ docker run --rm --name myunbound -v ~/docker/volumes/unbound/zones/stafnet:/etc/unbound/zones/ -v ~/docker/volumes/unbound/zones.conf:/etc/unbound/unbound.conf.d/zones.conf -p 127.0.0.1:53:5353 -p 127.0.0.1:53:5353/udp stafwag/unbound

Test

[staf@vicky ~]$ dig @127.0.0.1 soa stafnet.local

; <<>> DiG 9.16.1 <<>> @127.0.0.1 soa stafnet.local
; (1 server found)
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37184
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;stafnet.local.     IN  SOA

;; ANSWER SECTION:
stafnet.local.    86400 IN  SOA stafnet.local. root.stafnet.local. 3020452817 10800 15 604800 10800

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sun Mar 22 19:41:09 CET 2020
;; MSG SIZE  rcvd: 83

[staf@vicky ~]$ 

Test reverse lookup.

[staf@vicky ~]$ dig -x 10.10.10.10 @127.0.0.1

; <<>> DiG 9.16.21 <<>> -x 10.10.10.10 @127.0.0.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 36250
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;10.10.10.10.in-addr.arpa.	IN	PTR

;; ANSWER SECTION:
10.10.10.10.in-addr.arpa. 86400	IN	PTR	stafmail.

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Tue Oct 19 19:51:47 CEST 2021
;; MSG SIZE  rcvd: 75

[staf@vicky ~]$ 

Have fun!

April 30, 2025

If you are part of the Fediverse—on Mastodon, Pleroma, or any other ActivityPub-compatible platform—you can now follow this blog directly from your favorite platform.

Thanks to the excellent ActivityPub plugin for WordPress, each blog post I publish on amedee.be is now automatically shared in a way that federated social platforms can understand and display.

Follow me from Mastodon

If you are on Mastodon, you can follow this blog just like you would follow another person:

Search for: @amedee.be@amedee.be

Or click this link if your Mastodon instance supports it:
https://amedee.be/@amedee.be

New blog posts will appear in your timeline, and you can even reply to them from Mastodon. Your comments will appear as replies on the blog post page—Fediverse and WordPress users interacting seamlessly!

Why I enabled ActivityPub

I have been active on Mastodon for a while as @amedee@lou.lt, and I really enjoy the decentralized, open nature of the Fediverse. It is a refreshing change from the algorithm-driven social media platforms.

Adding ActivityPub support to my blog aligns perfectly with those values: open standards, decentralization, and full control over my own content.

This change was as simple as adding the activitypub plugin to my blog’s Ansible configuration on GitHub:

 blog_wp_plugins_install:
+  - activitypub
   - akismet
   - google-site-kit
   - health-check

Once deployed, GitHub Actions and Ansible took care of the rest.

What this means for you

If you already follow me on Mastodon (@amedee@lou.lt), nothing changes—you will still see the occasional personal post, boost, or comment.

But if you are more interested in my blog content—technical articles, tutorials, and occasional personal reflections—you might prefer following @amedee.be@amedee.be. It is an automated account that only shares blog posts.

This setup lets me keep content separate and organized, while still engaging with the broader Fediverse community.

Want to do the same for your blog?

Setting this up is easy:

  1. Make sure you are running WordPress version 6.4 or later.
  2. Install and activate the ActivityPub plugin (see the command-line example after this list).
  3. After activation, your author profile (and optionally, your blog itself) becomes followable via the Fediverse.
  4. Start publishing—and federate your writing with the world!
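
If you manage your WordPress install from the command line, step 2 above can also be done with WP-CLI (assuming WP-CLI is available on your server):

wp plugin install activitypub --activate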

April 27, 2025


While the code (if you call YAML ā€œcodeā€) is already more than 5 years old, I finally took the time to make a proper release of my test ā€œhelloā€ OCI container.

I use this container to demo a container build and how to deploy it with helm on a Kubernetes cluster. Some test tools (ping, DNS, curl, wget) are included to execute some tests on the deployed pod.

It also includes a Makefile to build the container and deploy it on a Red Hat OpenShift Local (formerly Red Hat CodeReady Containers) cluster.

To deploy the container with the included helm charts to OpenShift local (Code Ready Containers), execute make crc_deploy.

This will:

  1. Build the container image
  2. Login to the internal OpenShift registry
  3. Push the image to the internal OpenShift registry
  4. Deploy the helm chart in the tsthelm namespace; the helm chart will also create a route for the application.

I might include support for other Kubernetes distributions in the future when I find the time.

docker-stafwag-hello_nginx v1.0.0 is available at:

https://github.com/stafwag/docker-stafwag-hello_nginx

ChangeLog

v1.0.0 Initial stable release

  • Included dns utilities and documentation update by @stafwag in #3
  • Updated Run section by @stafwag in #4

Have fun!

April 25, 2025

Performance hack seen on a customer site: to fix the bad LCP (due to an animation in revslider), they load an inline (base64’ed) PNG image which, according to Firefox, is broken, and later in the rendering process hide and remove it. Even though that image is not *really* used, tools such as Google PageSpeed Insights pick it up as the LCP image and the score is ā€œin the greenā€. Not sure this is really…

Source

April 23, 2025

Managing multiple servers can be a daunting task, especially when striving for consistency and efficiency. To tackle this challenge, I developed a robust automation system using Ansible, GitHub Actions, and Vagrant. This setup not only streamlines server configuration but also ensures that deployments are repeatable and maintainable.

A Bit of History: How It All Started

This project began out of necessity. I was maintaining a handful of Ubuntu servers — one for email, another for a website, and a few for experiments — and I quickly realized that logging into each one to make manual changes was both tedious and error-prone. My first step toward automation was a collection of shell scripts. They worked, but as the infrastructure grew, they became hard to manage and lacked the modularity I needed.

That is when I discovered Ansible. I created the ansible-servers repository in early 2024 as a way to centralize and standardize my infrastructure automation. Initially, it only contained a basic playbook for setting up users and updating packages. But over time, it evolved to include multiple roles, structured inventories, and eventually CI/CD integration through GitHub Actions.

Every addition was born out of a real-world need. When I got tired of testing changes manually, I added Vagrant to simulate my environments locally. When I wanted to be sure my configurations stayed consistent after every push, I integrated GitHub Actions to automate deployments. When I noticed the repo growing, I introduced linting and security checks to maintain quality.

The repository has grown steadily and organically, each commit reflecting a small lesson learned or a new challenge overcome.

The Foundation: Ansible Playbooks

At the core of my automation strategy are Ansible playbooks, which define the desired state of my servers. These playbooks handle tasks such as installing necessary packages, configuring services, and setting up user accounts. By codifying these configurations, I can apply them consistently across different environments.

To manage these playbooks, I maintain a structured repository that includes:

  • Inventory Files: Located in the inventory directory, these YAML files specify the hosts and groups for deployment targets (a minimal sketch follows this list).
  • Roles: Under the roles directory, I define reusable components that encapsulate specific functionalities, such as setting up a web server or configuring a database.
  • Configuration File: The ansible.cfg file sets important defaults, like enabling fact caching and specifying the inventory path, to optimize Ansible’s behavior.
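
As an illustration, a minimal inventory of this kind could look like the sketch below (the group and host names are made up; the real files in the repository differ):

all:
  children:
    webservers:
      hosts:
        web01.example.com:
    mailservers:
      hosts:
        mail01.example.com: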

Seamless Deployments with GitHub Actions

To automate the deployment process, I leverage GitHub Actions. This integration allows me to trigger Ansible playbooks automatically upon code changes, ensuring that my servers are always up-to-date with the latest configurations.

One of the key workflows is Deploy to Production, which executes the main playbook against the production inventory. This workflow is defined in the ansible-deploy.yml file and is triggered on specific events, such as pushes to the main branch.
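
As a rough sketch, such a workflow could be structured as follows (the file layout, inventory path, and playbook name here are assumptions, not the actual contents of ansible-deploy.yml; secrets and SSH key handling are omitted):

name: Deploy to Production
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Ansible
        run: sudo apt-get update && sudo apt-get install -y ansible
      - name: Run playbook against the production inventory
        run: ansible-playbook -i inventory/production.yml site.yml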

Additionally, I have set up other workflows to maintain code quality and security:

  • Super-Linter: Automatically checks the codebase for syntax errors and adherence to best practices.
  • Codacy Security Scan: Analyzes the code for potential security vulnerabilities.
  • Dependabot Updates: Keeps dependencies up-to-date by automatically creating pull requests for new versions.

Local Testing with Vagrant

Before deploying changes to production, it is crucial to test them in a controlled environment. For this purpose, I use Vagrant to spin up virtual machines that mirror my production servers.

The deploy_to_staging.sh script automates this process by:

  1. Starting the Vagrant environment and provisioning it.
  2. Installing required Ansible roles specified in requirements.yml.
  3. Running the Ansible playbook against the staging inventory.

This approach allows me to validate changes in a safe environment before applying them to live servers.
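
In spirit, such a script boils down to a few commands like the minimal sketch below (the inventory path and playbook name are assumptions, not the actual deploy_to_staging.sh):

#!/bin/bash
set -euo pipefail

# 1. Start and provision the local Vagrant environment
vagrant up --provision

# 2. Install the Ansible roles the playbooks depend on
ansible-galaxy install -r requirements.yml

# 3. Run the playbook against the staging inventory
ansible-playbook -i inventory/staging.yml site.yml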

Embracing Open Source and Continuous Improvement

Transparency and collaboration are vital in the open-source community. By hosting my automation setup on GitHub, I invite others to review, suggest improvements, and adapt the configurations for their own use cases.

The repository is licensed under the MIT License, encouraging reuse and modification. Moreover, I actively monitor issues and welcome contributions to enhance the system further.


In summary, by combining Ansible, GitHub Actions, and Vagrant, I have created a powerful and flexible automation framework for managing my servers. This setup not only reduces manual effort but also increases reliability and scalability. I encourage others to explore this approach and adapt it to their own infrastructure needs. What began as a few basic scripts has now evolved into a reliable automation pipeline I rely on every day.

If you are managing servers and find yourself repeating the same configuration steps, I invite you to check out the ansible-servers repository on GitHub. Clone it, explore the structure, try it in your own environment — and if you have ideas or improvements, feel free to open a pull request or start a discussion. Automation has made a huge difference for me, and I hope it can do the same for you.


April 17, 2025

One of the most surprising moments at Drupal Dev Days Leuven? Getting a phone call from Drupal. Yes, really.

Marcus Johansson gave me a spontaneous demo of a Twilio-powered AI agent built for Drupal, which triggered a phone call right from within the Drupal interface. It was unexpected, fun, and a perfect example of the kind of creative energy in the room.

That moment reminded me why I love Drupal. People were building, sharing, and exploring what Drupal can do next. The energy was contagious.

From MCP (Model Context Protocol) modules to AI-powered search, I saw Drupal doing things I wouldn't have imagined two years ago. AI is no longer just an idea. It's already finding its way into Drupal in practical, thoughtful ways.

Doing a Q&A at Drupal Dev Days in Leuven. I loved the energy and great questions from the Drupal community. Ā© Paul Johnson

Outside of doing a Q&A session, I spent much of my time at Drupal Dev Days working on the next phase of Drupal's AI strategy. We have an early lead in AI, but we need to build on it. We will be sharing more on that in the coming month.

In the meantime, huge thanks to the organizers of Drupal Dev Days for making this event happen, and to Paul Johnson for the fantastic photo. I love that it shows so many happy faces.

Book signing at Trolls & VƩlo, and cycling magic

This Saturday, April 19, I will be in Mons at the Trolls & LƩgendes festival, signing books at the PVH stand.

The star of the table will undoubtedly be Sara Schneider, fantasy author of the saga of the children of Aliel, freshly crowned with the Prix SFFF Suisse 2024 for her superb novel ā€œPlace d’âmesā€ (which I have already told you about).

It is the first time I will be signing next to an author who has received a major award. I am not sure she will still let me address her informally.

Sara Schneider with her novel and her Prix SFFF Suisse 2024

In short, if Sara is coming to provide the legend, the festival’s name implies that trolls are needed to complete the picture. Hence the presence at the PVH table of Tirodem, Allius, and myself. Trolls? That, we know how to do!

Les belles mĆ©caniques de l’imaginaire

S’il y a des trolls et des lĆ©gendes, il y a aussi tout un cĆ“tĆ© Steampunk. Et quoi de plus Steampunk qu’un vĆ©lo ?

Ce qui fait la beautĆ© de la bicyclette, c’est sa sincĆ©ritĆ©. Elle ne cache rien, ses mouvements sont apparents, l’effort chez elle se voit et se comprend; elle proclame son but, elle dit qu’elle veut aller vite, silencieusement et lĆ©gĆØrement. Pourquoi la voiture automobile est-elle si vilaine et nous inspire-t-elle un sentiment de malaise ? Parce qu’elle dissimule ses organes comme une honte. On ne sait pas ce qu’elle veut. Elle semble inachevĆ©e.
– Voici des ailes, Maurice Leblanc

Le vĆ©lo, c’est l’aboutissement d’un transhumanisme humaniste rĆŖvĆ© par la science-fiction.

La bicyclette a rĆ©solu le problĆØme, qui remĆ©die Ć  notre lenteur et supprime la fatigue. L’homme maintenant est pourvu de tous ses moyens. La vapeur, l’électricitĆ© n’étaient que des progrĆØs servant Ć  son bien-ĆŖtre; la bicyclette est un perfectionnement de son corps mĆŖme, un achĆØvement pourrait-on dire. C’est une paire de jambes plus rapides qu’on lui offre. Lui et sa machine ne font qu’un, ce ne sont pas deux ĆŖtres diffĆ©rents comme l’homme et le cheval, deux instincts en opposition; non, c’est un seul ĆŖtre, un automate d’un seul morceau. Il n’y a pas un homme et une machine, il y a un homme plus vite.
– Voici des ailes, Maurice Leblanc

Un aboutissement technologique qui, paradoxalement, connecte avec la nature. Le vélo est une technologie respectueuse et utilisable par les korrigans, les fées, les elfes et toutes les peuplades qui souffrent de notre croissance technologique. Le vélo étend notre cerveau pour nous connecter à la nature, induit une transe chamanique dès que les pédales se mettent à tourner.

Nos rapports avec la nature sont bouleversĆ©s ! Imaginez deux hommes sur un grand chemin : l’un marche, l’autre roule; leur situation Ć  l’égard de la nature sera-t-elle la mĆŖme ? Oh ! non. L’un recevra d’elle de menues sensations de dĆ©tails, l’autre une vaste impression d’ensemble. ƀ pied, vous respirez le parfum de cette plante, vous admirez la nuance de cette fleur, vous entendez le chant de cet oiseau; Ć  bicyclette, vous respirez, vous admirez, vous entendez la nature elle-mĆŖme. C’est que le mouvement produit tend nos nerfs jusqu’à leur maximum d’intensitĆ© et nous dote d’une sensibilitĆ© inconnue jusqu’alors.
– Voici des ailes, Maurice Leblanc

Oui, le vĆ©lo a amplement sa place Ć  Trolls & LĆ©gendes, comme le dĆ©montrent ses extraits de « Voici des ailes » de Maurice Leblanc, roman Ć©crit… en 1898, quelques annĆ©es avant la crĆ©ation d’ArsĆØne Lupin !

Celebrating the Bikepunk universe

I too like to wax lyrical in celebration of the bicycle, as the excerpts that reviewers pick out of my novel Bikepunk demonstrate.

Chemical crap of a nuclear shitstorm of vomit-inducing permamuck!
— Bikepunk, Ploum

Yeah, okay, fine… It is a slightly different style. I am just trying to reach a slightly more modern audience, you know. And besides, we had agreed: ā€œnot that excerpt!ā€

Come on, as they say among cycling folk: keep it rolling, keep it rolling…

So, to celebrate the bicycle and the cycling imagination, I am offering a small surprise to anyone who shows up at the PVH stand this Saturday in a Bikepunk-themed costume (and if you let me know in advance, that is even better).

Because we are going to show those elves, those barbarians and those mages what real magic, real power is: pedals, two wheels and a handlebar!

See you Saturday, cyclotrolls!

I am Ploum and I have just published Bikepunk, an eco-cycling fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

April 16, 2025

Introduction

In my previous post, I shared the story of why I needed a new USB stick and how I used ChatGPT to write a benchmark script that could measure read performance across various methods. In this follow-up, I will dive into the technical details of how the script evolved—from a basic prototype into a robust and feature-rich tool—thanks to incremental refinements and some AI-assisted development.


Starting Simple: The First Version

The initial idea was simple: read a file using dd and measure the speed.

dd if=/media/amedee/Ventoy/ISO/ubuntu-24.10-desktop-amd64.iso \
   of=/dev/null bs=8k

That worked, but I quickly ran into limitations:

  • No progress indicator
  • Hardcoded file paths
  • No USB auto-detection
  • No cache flushing, leading to inflated results when repeating the measurement

With ChatGPT’s help, I started addressing each of these issues one by one.


Tools check

On a default Ubuntu installation, some tools are available by default, while others (especially benchmarking tools) usually need to be installed separately.

Tools used in the script:

Tool       Installed by default?      Needs require?
hdparm     āŒ Not installed            āœ… Yes
dd         āœ… Yes                      āŒ No
pv         āŒ Not installed            āœ… Yes
cat        āœ… Yes                      āŒ No
ioping     āŒ Not installed            āœ… Yes
fio        āŒ Not installed            āœ… Yes
lsblk      āœ… Yes (in util-linux)      āŒ No
awk        āœ… Yes (in gawk)            āŒ No
grep       āœ… Yes                      āŒ No
basename   āœ… Yes (in coreutils)       āŒ No
find       āœ… Yes                      āŒ No
sort       āœ… Yes                      āŒ No
stat       āœ… Yes                      āŒ No

This function ensures the system has all tools needed for benchmarking. It exits early if any tool is missing.

This was the initial version:

check_required_tools() {
  local required_tools=(dd pv hdparm fio ioping awk grep sed tr bc stat lsblk find sort)
  for tool in "${required_tools[@]}"; do
    if ! command -v "$tool" &>/dev/null; then
      echo "āŒ Required tool '$tool' is not installed."
      exit 1
    fi
  done
}

That’s already nice, but maybe I just want to run the script anyway if some of the tools are missing.

This is a more advanced version:

ALL_TOOLS=(hdparm dd pv ioping fio lsblk stat grep awk find sort basename column gnuplot)
MISSING_TOOLS=()

require() {
  if ! command -v "$1" >/dev/null; then
    return 1
  fi
  return 0
}

check_required_tools() {
  echo "šŸ” Checking required tools..."
  for tool in "${ALL_TOOLS[@]}"; do
    if ! require "$tool"; then
      MISSING_TOOLS+=("$tool")
    fi
  done

  if [[ ${#MISSING_TOOLS[@]} -gt 0 ]]; then
    echo "āš ļø  The following tools are missing: ${MISSING_TOOLS[*]}"
    echo "You can install them using: sudo apt install ${MISSING_TOOLS[*]}"
    if [[ -z "$FORCE_YES" ]]; then
      read -rp "Do you want to continue and skip tests that require them? (y/N): " yn
      case $yn in
        [Yy]*)
          echo "Continuing with limited tests..."
          ;;
        *)
          echo "Aborting. Please install the required tools."
          exit 1
          ;;
      esac
    else
      echo "Continuing with limited tests (auto-confirmed)..."
    fi
  else
    echo "āœ… All required tools are available."
  fi
}

Device Auto-Detection

One early challenge was identifying which device was the USB stick. I wanted the script to automatically detect a mounted USB device. My first version was clunky and error-prone.

detect_usb() {
  USB_DEVICE=$(lsblk -o NAME,TRAN,MOUNTPOINT -J | jq -r '.blockdevices[] | select(.tran=="usb") | .name' | head -n1)
  if [[ -z "$USB_DEVICE" ]]; then
    echo "āŒ No USB device detected."
    exit 1
  fi
  USB_PATH="/dev/$USB_DEVICE"
  MOUNT_PATH=$(lsblk -no MOUNTPOINT "$USB_PATH" | head -n1)
  if [[ -z "$MOUNT_PATH" ]]; then
    echo "āŒ USB device is not mounted."
    exit 1
  fi
  echo "āœ… Using USB device: $USB_PATH"
  echo "āœ… Mounted at: $MOUNT_PATH"
}

After a few iterations, we (ChatGPT and I) settled on parsing lsblk with filters on tran=usb and hotplug=1, and selecting the first mounted partition.

We also added a fallback prompt in case auto-detection failed.

detect_usb() {
  if [[ -n "$USB_DEVICE" ]]; then
    echo "šŸ“Ž Using provided USB device: $USB_DEVICE"
    MOUNT_PATH=$(lsblk -no MOUNTPOINT "$USB_DEVICE")
    return
  fi

  echo "šŸ” Detecting USB device..."
  USB_DEVICE=""
  while read -r dev tran hotplug type _; do
    if [[ "$tran" == "usb" && "$hotplug" == "1" && "$type" == "disk" ]]; then
      base="/dev/$dev"
      part=$(lsblk -nr -o NAME,MOUNTPOINT "$base" | awk '$2 != "" {print "/dev/"$1; exit}')
      if [[ -n "$part" ]]; then
        USB_DEVICE="$part"
        break
      fi
    fi
  done < <(lsblk -o NAME,TRAN,HOTPLUG,TYPE,MOUNTPOINT -nr)

  if [ -z "$USB_DEVICE" ]; then
    echo "āŒ No mounted USB partition found on any USB disk."
    lsblk -o NAME,TRAN,HOTPLUG,TYPE,SIZE,MOUNTPOINT -nr | grep part
    read -rp "Enter the USB device path manually (e.g., /dev/sdc1): " USB_DEVICE
  fi

  MOUNT_PATH=$(lsblk -no MOUNTPOINT "$USB_DEVICE")
  if [ -z "$MOUNT_PATH" ]; then
    echo "āŒ USB device is not mounted."
    exit 1
  fi

  echo "āœ… Using USB device: $USB_DEVICE"
  echo "āœ… Mounted at: $MOUNT_PATH"
}

Finding the Test File

To avoid hardcoding filenames, we implemented logic to search for the latest Ubuntu ISO on the USB stick.

find_ubuntu_iso() {
  # Function to find an Ubuntu ISO on the USB device
  find "$MOUNT_PATH" -type f -regextype posix-extended \
    -regex ".*/ubuntu-[0-9]{2}\.[0-9]{2}-desktop-amd64\\.iso" | sort -V | tail -n1
}

Later, we enhanced it to accept a user-provided file, and even verify that the file was located on the USB stick. If it was not, the script would gracefully fall back to the Ubuntu ISO search.

find_test_file() {
  if [[ -n "$TEST_FILE" ]]; then
    echo "šŸ“Ž Using provided test file: $(basename "$TEST_FILE")"
    
    # Check if the provided test file is on the USB device
    TEST_FILE_MOUNT_PATH=$(realpath "$TEST_FILE" | grep -oP "^$MOUNT_PATH")
    if [[ -z "$TEST_FILE_MOUNT_PATH" ]]; then
      echo "āŒ The provided test file is not located on the USB device."
      # Look for an Ubuntu ISO if it's not on the USB
      TEST_FILE=$(find_ubuntu_iso)
    fi
  else
    TEST_FILE=$(find_ubuntu_iso)
  fi

  if [ -z "$TEST_FILE" ]; then
    echo "āŒ No valid test file found."
    exit 1
  fi

  if [[ "$TEST_FILE" =~ ubuntu-[0-9]{2}\.[0-9]{2}-desktop-amd64\.iso ]]; then
    UBUNTU_VERSION=$(basename "$TEST_FILE" | grep -oP 'ubuntu-\d{2}\.\d{2}')
    echo "🧪 Selected Ubuntu version: $UBUNTU_VERSION"
  else
    echo "šŸ“Ž Selected test file: $(basename "$TEST_FILE")"
  fi
}

Read Methods and Speed Extraction

To get a comprehensive view, we added multiple methods:

  • hdparm (direct disk access)
  • dd (simple block read)
  • dd + pv (with progress bar)
  • cat + pv (alternative stream reader)
  • ioping (random access)
  • fio (customizable benchmark tool)
    if require hdparm; then
      drop_caches
      speed=$(sudo hdparm -t --direct "$USB_DEVICE" 2>/dev/null | extract_speed)
      mb=$(speed_to_mb "$speed")
      echo "${idx}. ${TEST_NAMES[$idx]}: $speed"
      TOTAL_MB[$idx]=$(echo "${TOTAL_MB[$idx]} + $mb" | bc)
      echo "${TEST_NAMES[$idx]},$run,$mb" >> "$CSVFILE"
    fi
    ((idx++))

    drop_caches
    speed=$(dd if="$TEST_FILE" of=/dev/null bs=8k 2>&1 |& extract_speed)
    mb=$(speed_to_mb "$speed")
    echo "${idx}. ${TEST_NAMES[$idx]}: $speed"
    TOTAL_MB[$idx]=$(echo "${TOTAL_MB[$idx]} + $mb" | bc)
    echo "${TEST_NAMES[$idx]},$run,$mb" >> "$CSVFILE"
    ((idx++))

    if require pv; then
      drop_caches
      FILESIZE=$(stat -c%s "$TEST_FILE")
      speed=$(dd if="$TEST_FILE" bs=8k status=none | pv -s "$FILESIZE" -f -X 2>&1 | extract_speed)
      mb=$(speed_to_mb "$speed")
      echo "${idx}. ${TEST_NAMES[$idx]}: $speed"
      TOTAL_MB[$idx]=$(echo "${TOTAL_MB[$idx]} + $mb" | bc)
      echo "${TEST_NAMES[$idx]},$run,$mb" >> "$CSVFILE"
    fi
    ((idx++))

    if require pv; then
      drop_caches
      speed=$(cat "$TEST_FILE" | pv -f -X 2>&1 | extract_speed)
      mb=$(speed_to_mb "$speed")
      echo "${idx}. ${TEST_NAMES[$idx]}: $speed"
      TOTAL_MB[$idx]=$(echo "${TOTAL_MB[$idx]} + $mb" | bc)
      echo "${TEST_NAMES[$idx]},$run,$mb" >> "$CSVFILE"
    fi
    ((idx++))

    if require ioping; then
      drop_caches
      speed=$(ioping -c 10 -A "$USB_DEVICE" 2>/dev/null | grep 'read' | extract_speed)
      mb=$(speed_to_mb "$speed")
      echo "${idx}. ${TEST_NAMES[$idx]}: $speed"
      TOTAL_MB[$idx]=$(echo "${TOTAL_MB[$idx]} + $mb" | bc)
      echo "${TEST_NAMES[$idx]},$run,$mb" >> "$CSVFILE"
    fi
    ((idx++))

    if require fio; then
      drop_caches
      speed=$(fio --name=readtest --filename="$TEST_FILE" --direct=1 --rw=read --bs=8k \
            --size=100M --ioengine=libaio --iodepth=16 --runtime=5s --time_based --readonly \
            --minimal 2>/dev/null | awk -F';' '{print $6" KB/s"}' | extract_speed)
      mb=$(speed_to_mb "$speed")
      echo "${idx}. ${TEST_NAMES[$idx]}: $speed"
      TOTAL_MB[$idx]=$(echo "${TOTAL_MB[$idx]} + $mb" | bc)
      echo "${TEST_NAMES[$idx]},$run,$mb" >> "$CSVFILE"
    fi

Parsing their outputs proved tricky. For example, pv outputs speed with or without spaces, and with different units. We created a robust extract_speed function with regex, and a speed_to_mb function that could handle both MB/s and MiB/s, with or without a space between value and unit.

extract_speed() {
  grep -oP '(?i)[\d.,]+\s*[KMG]i?B/s' | tail -1 | sed 's/,/./'
}

speed_to_mb() {
  if [[ "$1" =~ ([0-9.,]+)[[:space:]]*([a-zA-Z/]+) ]]; then
    value="${BASH_REMATCH[1]}"
    unit=$(echo "${BASH_REMATCH[2]}" | tr '[:upper:]' '[:lower:]')
  else
    echo "0"
    return
  fi

  case "$unit" in
    kb/s)   awk -v v="$value" 'BEGIN { printf "%.2f", v / 1000 }' ;;
    mb/s)   awk -v v="$value" 'BEGIN { printf "%.2f", v }' ;;
    gb/s)   awk -v v="$value" 'BEGIN { printf "%.2f", v * 1000 }' ;;
    kib/s)  awk -v v="$value" 'BEGIN { printf "%.2f", v / 1024 }' ;;
    mib/s)  awk -v v="$value" 'BEGIN { printf "%.2f", v }' ;;
    gib/s)  awk -v v="$value" 'BEGIN { printf "%.2f", v * 1024 }' ;;
    *) echo "0" ;;
  esac
}

Dropping Caches for Accurate Results

To prevent cached reads from skewing the results, each test run begins by dropping system caches using:

sync && echo 3 | sudo tee /proc/sys/vm/drop_caches

What it does:

Command                               Purpose
sync                                  Flushes all dirty (pending write) pages to disk
echo 3 > /proc/sys/vm/drop_caches     Clears page cache, dentries, and inodes from RAM

We wrapped this in a helper function and used it consistently.


Multiple Runs and Averaging

We made the script repeat each test N times (default: 3), collect results, compute averages, and display a summary at the end.

  echo "šŸ“Š Read-only USB benchmark started ($RUNS run(s))"
  echo "==================================="

  declare -A TEST_NAMES=(
    [1]="hdparm"
    [2]="dd"
    [3]="dd + pv"
    [4]="cat + pv"
    [5]="ioping"
    [6]="fio"
  )

  declare -A TOTAL_MB
  for i in {1..6}; do TOTAL_MB[$i]=0; done
  CSVFILE="usb-benchmark-$(date +%Y%m%d-%H%M%S).csv"
  echo "Test,Run,Speed (MB/s)" > "$CSVFILE"

  for ((run=1; run<=RUNS; run++)); do
    echo "ā–¶ Run $run"
    idx=1

  ### tests run here

  echo "šŸ“„ Summary of average results for $UBUNTU_VERSION:"
  echo "==================================="
  SUMMARY_TABLE=""
  for i in {1..6}; do
    if [[ ${TOTAL_MB[$i]} != 0 ]]; then
      avg=$(echo "scale=2; ${TOTAL_MB[$i]} / $RUNS" | bc)
      echo "${TEST_NAMES[$i]} average: $avg MB/s"
      RESULTS+=("${TEST_NAMES[$i]} average: $avg MB/s")
      SUMMARY_TABLE+="${TEST_NAMES[$i]},$avg\n"
    fi
  done

Output Formats

To make the results user-friendly, we added:

  • A clean table view
  • CSV export for spreadsheets
  • Log file for later reference
  if [[ "$VISUAL" == "table" || "$VISUAL" == "both" ]]; then
    echo -e "šŸ“‹ Table view:"
    echo -e "Test Method,Average MB/s\n$SUMMARY_TABLE" | column -t -s ','
  fi

  if [[ "$VISUAL" == "bar" || "$VISUAL" == "both" ]]; then
    if require gnuplot; then
      echo -e "$SUMMARY_TABLE" | awk -F',' '{print $1" "$2}' | \
      gnuplot -p -e "
        set terminal dumb;
        set title 'USB Read Benchmark Results ($UBUNTU_VERSION)';
        set xlabel 'Test Method';
        set ylabel 'MB/s';
        plot '-' using 2:xtic(1) with boxes notitle
      "
    fi
  fi

  LOGFILE="usb-benchmark-$(date +%Y%m%d-%H%M%S).log"
  {
    echo "Benchmark for USB device: $USB_DEVICE"
    echo "Mounted at: $MOUNT_PATH"
    echo "Ubuntu version: $UBUNTU_VERSION"
    echo "Test file: $TEST_FILE"
    echo "Timestamp: $(date)"
    echo "Number of runs: $RUNS"
    echo ""
    echo "Read speed averages:"
    for line in "${RESULTS[@]}"; do
      echo "$line"
    done
  } > "$LOGFILE"

  echo "šŸ“ Results saved to: $LOGFILE"
  echo "šŸ“ˆ CSV exported to: $CSVFILE"
  echo "==================================="

The Full Script

Here is the complete version of the script used to benchmark the read performance of a USB drive:

#!/bin/bash

# ==========================
# CONFIGURATION
# ==========================
RESULTS=()
USB_DEVICE=""
TEST_FILE=""
RUNS=1
VISUAL="none"
SUMMARY=0

# (Consider grouping related configuration into a config file or associative array if script expands)

# ==========================
# ARGUMENT PARSING
# ==========================
while [[ $# -gt 0 ]]; do
  case $1 in
    --device)
      USB_DEVICE="$2"
      shift 2
      ;;
    --file)
      TEST_FILE="$2"
      shift 2
      ;;
    --runs)
      RUNS="$2"
      shift 2
      ;;
    --visual)
      VISUAL="$2"
      shift 2
      ;;
    --summary)
      SUMMARY=1
      shift
      ;;
    --yes|--force)
      FORCE_YES=1
      shift
      ;;
    *)
      echo "Unknown option: $1"
      exit 1
      ;;
  esac
done

# ==========================
# TOOL CHECK
# ==========================
ALL_TOOLS=(hdparm dd pv ioping fio lsblk stat grep awk find sort basename column gnuplot)
MISSING_TOOLS=()

require() {
  if ! command -v "$1" >/dev/null; then
    return 1
  fi
  return 0
}

check_required_tools() {
  echo "šŸ” Checking required tools..."
  for tool in "${ALL_TOOLS[@]}"; do
    if ! require "$tool"; then
      MISSING_TOOLS+=("$tool")
    fi
  done

  if [[ ${#MISSING_TOOLS[@]} -gt 0 ]]; then
    echo "āš ļø  The following tools are missing: ${MISSING_TOOLS[*]}"
    echo "You can install them using: sudo apt install ${MISSING_TOOLS[*]}"
    if [[ -z "$FORCE_YES" ]]; then
      read -rp "Do you want to continue and skip tests that require them? (y/N): " yn
      case $yn in
        [Yy]*)
          echo "Continuing with limited tests..."
          ;;
        *)
          echo "Aborting. Please install the required tools."
          exit 1
          ;;
      esac
    else
      echo "Continuing with limited tests (auto-confirmed)..."
    fi
  else
    echo "āœ… All required tools are available."
  fi
}

# ==========================
# AUTO-DETECT USB DEVICE
# ==========================
detect_usb() {
  if [[ -n "$USB_DEVICE" ]]; then
    echo "šŸ“Ž Using provided USB device: $USB_DEVICE"
    MOUNT_PATH=$(lsblk -no MOUNTPOINT "$USB_DEVICE")
    return
  fi

  echo "šŸ” Detecting USB device..."
  USB_DEVICE=""
  while read -r dev tran hotplug type _; do
    if [[ "$tran" == "usb" && "$hotplug" == "1" && "$type" == "disk" ]]; then
      base="/dev/$dev"
      part=$(lsblk -nr -o NAME,MOUNTPOINT "$base" | awk '$2 != "" {print "/dev/"$1; exit}')
      if [[ -n "$part" ]]; then
        USB_DEVICE="$part"
        break
      fi
    fi
  done < <(lsblk -o NAME,TRAN,HOTPLUG,TYPE,MOUNTPOINT -nr)

  if [ -z "$USB_DEVICE" ]; then
    echo "āŒ No mounted USB partition found on any USB disk."
    lsblk -o NAME,TRAN,HOTPLUG,TYPE,SIZE,MOUNTPOINT -nr | grep part
    read -rp "Enter the USB device path manually (e.g., /dev/sdc1): " USB_DEVICE
  fi

  MOUNT_PATH=$(lsblk -no MOUNTPOINT "$USB_DEVICE")
  if [ -z "$MOUNT_PATH" ]; then
    echo "āŒ USB device is not mounted."
    exit 1
  fi

  echo "āœ… Using USB device: $USB_DEVICE"
  echo "āœ… Mounted at: $MOUNT_PATH"
}

# ==========================
# FIND TEST FILE
# ==========================
find_ubuntu_iso() {
  # Function to find an Ubuntu ISO on the USB device
  find "$MOUNT_PATH" -type f -regextype posix-extended \
    -regex ".*/ubuntu-[0-9]{2}\.[0-9]{2}-desktop-amd64\\.iso" | sort -V | tail -n1
}

find_test_file() {
  if [[ -n "$TEST_FILE" ]]; then
    echo "šŸ“Ž Using provided test file: $(basename "$TEST_FILE")"
    
    # Check if the provided test file is on the USB device
    TEST_FILE_MOUNT_PATH=$(realpath "$TEST_FILE" | grep -oP "^$MOUNT_PATH")
    if [[ -z "$TEST_FILE_MOUNT_PATH" ]]; then
      echo "āŒ The provided test file is not located on the USB device."
      # Look for an Ubuntu ISO if it's not on the USB
      TEST_FILE=$(find_ubuntu_iso)
    fi
  else
    TEST_FILE=$(find_ubuntu_iso)
  fi

  if [ -z "$TEST_FILE" ]; then
    echo "āŒ No valid test file found."
    exit 1
  fi

  if [[ "$TEST_FILE" =~ ubuntu-[0-9]{2}\.[0-9]{2}-desktop-amd64\.iso ]]; then
    UBUNTU_VERSION=$(basename "$TEST_FILE" | grep -oP 'ubuntu-\d{2}\.\d{2}')
    echo "🧪 Selected Ubuntu version: $UBUNTU_VERSION"
  else
    echo "šŸ“Ž Selected test file: $(basename "$TEST_FILE")"
  fi
}



# ==========================
# SPEED EXTRACTION
# ==========================
extract_speed() {
  grep -oP '(?i)[\d.,]+\s*[KMG]i?B/s' | tail -1 | sed 's/,/./'
}

speed_to_mb() {
  if [[ "$1" =~ ([0-9.,]+)[[:space:]]*([a-zA-Z/]+) ]]; then
    value="${BASH_REMATCH[1]}"
    unit=$(echo "${BASH_REMATCH[2]}" | tr '[:upper:]' '[:lower:]')
  else
    echo "0"
    return
  fi

  case "$unit" in
    kb/s)   awk -v v="$value" 'BEGIN { printf "%.2f", v / 1000 }' ;;
    mb/s)   awk -v v="$value" 'BEGIN { printf "%.2f", v }' ;;
    gb/s)   awk -v v="$value" 'BEGIN { printf "%.2f", v * 1000 }' ;;
    kib/s)  awk -v v="$value" 'BEGIN { printf "%.2f", v / 1024 }' ;;
    mib/s)  awk -v v="$value" 'BEGIN { printf "%.2f", v }' ;;
    gib/s)  awk -v v="$value" 'BEGIN { printf "%.2f", v * 1024 }' ;;
    *) echo "0" ;;
  esac
}

drop_caches() {
  echo "🧹 Dropping system caches..."
  if [[ $EUID -ne 0 ]]; then
    echo "  (requires sudo)"
  fi
  sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"
}

# ==========================
# RUN BENCHMARKS
# ==========================
run_benchmarks() {
  echo "šŸ“Š Read-only USB benchmark started ($RUNS run(s))"
  echo "==================================="

  declare -A TEST_NAMES=(
    [1]="hdparm"
    [2]="dd"
    [3]="dd + pv"
    [4]="cat + pv"
    [5]="ioping"
    [6]="fio"
  )

  declare -A TOTAL_MB
  for i in {1..6}; do TOTAL_MB[$i]=0; done
  CSVFILE="usb-benchmark-$(date +%Y%m%d-%H%M%S).csv"
  echo "Test,Run,Speed (MB/s)" > "$CSVFILE"

  for ((run=1; run<=RUNS; run++)); do
    echo "ā–¶ Run $run"
    idx=1

    if require hdparm; then
      drop_caches
      speed=$(sudo hdparm -t --direct "$USB_DEVICE" 2>/dev/null | extract_speed)
      mb=$(speed_to_mb "$speed")
      echo "${idx}. ${TEST_NAMES[$idx]}: $speed"
      TOTAL_MB[$idx]=$(echo "${TOTAL_MB[$idx]} + $mb" | bc)
      echo "${TEST_NAMES[$idx]},$run,$mb" >> "$CSVFILE"
    fi
    ((idx++))

    drop_caches
    speed=$(dd if="$TEST_FILE" of=/dev/null bs=8k 2>&1 |& extract_speed)
    mb=$(speed_to_mb "$speed")
    echo "${idx}. ${TEST_NAMES[$idx]}: $speed"
    TOTAL_MB[$idx]=$(echo "${TOTAL_MB[$idx]} + $mb" | bc)
    echo "${TEST_NAMES[$idx]},$run,$mb" >> "$CSVFILE"
    ((idx++))

    if require pv; then
      drop_caches
      FILESIZE=$(stat -c%s "$TEST_FILE")
      speed=$(dd if="$TEST_FILE" bs=8k status=none | pv -s "$FILESIZE" -f -X 2>&1 | extract_speed)
      mb=$(speed_to_mb "$speed")
      echo "${idx}. ${TEST_NAMES[$idx]}: $speed"
      TOTAL_MB[$idx]=$(echo "${TOTAL_MB[$idx]} + $mb" | bc)
      echo "${TEST_NAMES[$idx]},$run,$mb" >> "$CSVFILE"
    fi
    ((idx++))

    if require pv; then
      drop_caches
      speed=$(cat "$TEST_FILE" | pv -f -X 2>&1 | extract_speed)
      mb=$(speed_to_mb "$speed")
      echo "${idx}. ${TEST_NAMES[$idx]}: $speed"
      TOTAL_MB[$idx]=$(echo "${TOTAL_MB[$idx]} + $mb" | bc)
      echo "${TEST_NAMES[$idx]},$run,$mb" >> "$CSVFILE"
    fi
    ((idx++))

    if require ioping; then
      drop_caches
      speed=$(ioping -c 10 -A "$USB_DEVICE" 2>/dev/null | grep 'read' | extract_speed)
      mb=$(speed_to_mb "$speed")
      echo "${idx}. ${TEST_NAMES[$idx]}: $speed"
      TOTAL_MB[$idx]=$(echo "${TOTAL_MB[$idx]} + $mb" | bc)
      echo "${TEST_NAMES[$idx]},$run,$mb" >> "$CSVFILE"
    fi
    ((idx++))

    if require fio; then
      drop_caches
      speed=$(fio --name=readtest --filename="$TEST_FILE" --direct=1 --rw=read --bs=8k \
            --size=100M --ioengine=libaio --iodepth=16 --runtime=5s --time_based --readonly \
            --minimal 2>/dev/null | awk -F';' '{print $6" KB/s"}' | extract_speed)
      mb=$(speed_to_mb "$speed")
      echo "${idx}. ${TEST_NAMES[$idx]}: $speed"
      TOTAL_MB[$idx]=$(echo "${TOTAL_MB[$idx]} + $mb" | bc)
      echo "${TEST_NAMES[$idx]},$run,$mb" >> "$CSVFILE"
    fi
  done

  echo "šŸ“„ Summary of average results for $UBUNTU_VERSION:"
  echo "==================================="
  SUMMARY_TABLE=""
  for i in {1..6}; do
    if [[ ${TOTAL_MB[$i]} != 0 ]]; then
      avg=$(echo "scale=2; ${TOTAL_MB[$i]} / $RUNS" | bc)
      echo "${TEST_NAMES[$i]} average: $avg MB/s"
      RESULTS+=("${TEST_NAMES[$i]} average: $avg MB/s")
      SUMMARY_TABLE+="${TEST_NAMES[$i]},$avg\n"
    fi
  done

  if [[ "$VISUAL" == "table" || "$VISUAL" == "both" ]]; then
    echo -e "šŸ“‹ Table view:"
    echo -e "Test Method,Average MB/s\n$SUMMARY_TABLE" | column -t -s ','
  fi

  if [[ "$VISUAL" == "bar" || "$VISUAL" == "both" ]]; then
    if require gnuplot; then
      echo -e "$SUMMARY_TABLE" | awk -F',' '{print $1" "$2}' | \
      gnuplot -p -e "
        set terminal dumb;
        set title 'USB Read Benchmark Results ($UBUNTU_VERSION)';
        set xlabel 'Test Method';
        set ylabel 'MB/s';
        plot '-' using 2:xtic(1) with boxes notitle
      "
    fi
  fi

  LOGFILE="usb-benchmark-$(date +%Y%m%d-%H%M%S).log"
  {
    echo "Benchmark for USB device: $USB_DEVICE"
    echo "Mounted at: $MOUNT_PATH"
    echo "Ubuntu version: $UBUNTU_VERSION"
    echo "Test file: $TEST_FILE"
    echo "Timestamp: $(date)"
    echo "Number of runs: $RUNS"
    echo ""
    echo "Read speed averages:"
    for line in "${RESULTS[@]}"; do
      echo "$line"
    done
  } > "$LOGFILE"

  echo "šŸ“ Results saved to: $LOGFILE"
  echo "šŸ“ˆ CSV exported to: $CSVFILE"
  echo "==================================="
}

# ==========================
# MAIN
# ==========================
check_required_tools
detect_usb
find_test_file
run_benchmarks

You can also find the latest revision of this script as a GitHub Gist.


Lessons Learned

This script has grown from a simple one-liner into a reliable tool to test USB read performance. Working with ChatGPT sped up development significantly, especially for bash edge cases and regex. But more importantly, it helped guide the evolution of the script in a structured way, with clean modular functions and consistent formatting.


Conclusion

This has been a fun and educational project. Whether you are benchmarking your own USB drives or just want to learn more about shell scripting, I hope this walkthrough is helpful.

Next up? Maybe a graphical version, or benchmarking writes on a RAM disk to avoid damaging flash storage.

Stay tuned—and let me know if you use this script or improve it!

April 15, 2025

If you are testing MySQL with sysbench, here is an RPM version for Fedora 31 and OL 8 & 9, linked with the latest libmysql (libmysqlclient.so.24) from MySQL 9.3. This version of sysbench is built from the latest master branch on GitHub. I used version 1.1, but this is to differentiate it from the code […]

April 14, 2025

In search of lost attention

Instant messaging and politics

You have certainly seen it go by: an American journalist was mistakenly invited into a Signal chat in which very senior members of the US administration (including the vice-president) were discussing the top-secret organisation of a military strike in Yemen on March 15.

The reason for this mistake is that Trump’s spokesperson, Brian Hughes, had, during the election campaign, received an email from the journalist in question asking for clarification on another subject. Brian Hughes then copy/pasted the entire email, including the signature containing the journalist’s phone number, into an Apple iMessage to Mike Waltz, who would go on to become Trump’s national security adviser. Having received this number in a message from Brian Hughes, Mike Waltz then apparently saved it under the name Brian Hughes. When he later wanted to invite Brian Hughes into the Signal chat, Mike Waltz mistakenly invited the American journalist.

This anecdote teaches us several things:

First, Signal has become genuinely critical security infrastructure, including in the highest circles.

Second, ultra-strategic war discussions now take place… by chat. It is not hard to imagine each participant replying on autopilot, posting an emoji between two meetings or during a toilet break. And that is where the life and death of the rest of the world get decided: in the toilets and in meetings that have nothing to do with the subject!

The initial mistake stems from the fact that Mike Waltz most likely does not read his emails (otherwise the email would have been forwarded to him instead of being sent as a message) and that Brian Hughes is incapable of summarising a long text effectively (otherwise he would not have pasted the entire message).

Not only does Mike Waltz not read his emails, we can also suspect that he does not read messages that are too long: he did, after all, add a phone number found at the end of a message without taking the time to read and understand said message. In his defence, it seems possible that it was the iPhone’s ā€œartificial intelligenceā€ that automatically added this number to the contact.

I do not know whether this feature exists, but using a phone that can automatically decide to change its contacts’ numbers is rather frightening in itself. And very much in the style of Apple, whose marketing slogans I interpret as ā€œbuy, along with our products, the intelligence you lack, you morons!ā€.

Crise politique attentionnelle et surveillance gƩnƩralisƩe

La crise attentionnelle est rĆ©elle : nous sommes de moins en moins capables de nous concentrer et nous votons pour des gens qui le sont encore moins ! Un ami ayant Ć©tĆ© embauchĆ© pour participer Ć  une campagne Ć©lectorale en Belgique m’a racontĆ© avoir Ć©tĆ© abasourdi par l’addiction des politiciens les plus en vue aux rĆ©seaux sociaux. Ils sont en permanence rivĆ©s Ć  leurs Ć©crans Ć  comptabiliser les likes et les partages de leurs publications et, quand ils reƧoivent un dossier de plus de dix lignes, demandent un rĆ©sumĆ© ultra-succinct Ć  leurs conseillers.

Vos politiques ne comprennent rien Ć  rien. Ils font semblant. Et dĆ©sormais, ils demandent Ć  ChatGPT qui a l’avantage de ne pas dormir, contrairement aux conseillers humains. Les fameuses intelligences artificielles qui, justement, sont peut-ĆŖtre coupables d’avoir ajoutĆ© le numĆ©ro Ć  ce contact et d’avoir rĆ©digĆ© la politique fiscale de Trump.

Mais pourquoi utiliser Signal et pas une solution officielle qui empĆŖcherait ce genre de fuite ? Officiellement, il n’y aurait pas d’alternative aussi facile. Mais je vois une raison non officielle trĆØs simple : les personnes haut placĆ©es ont dĆ©sormais peur de leur propre infrastructure, car ils savent que tout est sauvegardĆ© et peut-ĆŖtre utilisĆ© contre eux lors d’une Ć©ventuelle enquĆŖte ou d’un procĆØs, mĆŖme des annĆ©es plus tard.

Trump a Ć©tĆ© Ć©lu la premiĆØre fois en faisant campagne sur le fait qu’Hillary Clinton avait utilisĆ© un serveur email personnel, ce qui lui permettait, selon Trump lui-mĆŖme, d’échapper Ć  la justice en ayant ses mails soustraits aux services de surveillance internes amĆ©ricains.

Même ceux qui mettent en place le système de surveillance généralisé en ont peur.

Educating for understanding

The last lesson I draw from this anecdote is, once again, about education: you can have the most secure cryptographic infrastructure there is, but if you are incompetent enough to invite just anyone into your chat, nothing can be done for you.

The biggest security hole is always between the chair and the keyboard; the only way to secure a system is to make sure the user is educated.

The best example remains self-driving cars: we are putting entire generations into Teslas that drive themselves 99% of the time. And when an accident happens, in the remaining 1%, we ask the driver: "But why didn't you react like a good driver?"

And the answer is very simple: "Because I have never driven in my life, I do not know what driving is, I never learned how to react when the system does not work properly."

You think I am exaggerating? Wait…

Getting hired thanks to AI

Eric Lu received the rƩsumƩ of a very promising candidate for a job at his startup. A rƩsumƩ that looked heavily keyword-optimised, but which was remarkably sharp on the technologies Eric uses. So he offered the candidate a video interview.

At first everything went very well, until the candidate started getting tangled up in his answers. "You say the SMS-sending service you worked on was saturated, but you describe the service as being used by a class of 30 people. How can 30 text messages saturate the service?" … er… "Can you tell me what user interface you built on top of what you say you implemented?" … er, I don't remember…

Eric then understood that the candidate was bluffing. The rƩsumƩ had been generated by ChatGPT. The candidate had prepared by simulating a job interview with ChatGPT and learning his answers by heart. He panics as soon as you step outside his script.

What is particularly unfortunate is that the candidate actually had a suitable profile. If he had been honest and upfront about his lack of experience, he could have been hired as a junior and gained the experience he was after. If he had spent his time reading technical explanations of the technologies involved rather than using ChatGPT, he could have convinced the employer of his motivation and curiosity. "I don't know much yet, but I am eager to learn."

But the saddest part of all this is that he sincerely believed it could work. He destroyed his reputation because it never even crossed his mind that, even if he had been hired, he would not have lasted two days in the job before looking like an idiot. He was dishonest because he was convinced that this was the right way to operate.

In short, he was a true Julius.

He "learned to drive a Tesla" by sitting in the seat and watching it loop around the neighbourhood a hundred times. Confident, he set off for another city and drove straight into the first plane tree.

Saving a generation

Smartphones, AI, advertising monopolies and social networks are all facets of the same problem: the will to make technology incomprehensible in order to enslave us commercially and keep our minds occupied.

I have written about how I think we should act to educate the next generation of adults:

But that is a parent's point of view. That is why I find the analysis by Thual, a young adult barely out of adolescence, so relevant. He can speak about all of this in the first person.

The great lesson I draw from it is that the generation coming after us is far from lost. Like every generation, it is eager to learn and to fight. We must have the humility to realise that my generation has failed completely. That we are destroying everything, that we are fascists addicted to Facebook and Candy Crush, driving around in SUVs.

We have no lessons to give them. We have a duty to help them, to put ourselves at their service by switching off the autopilot and burning the PowerPoint slides we are so proud of.

I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and in English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

April 13, 2025

Seven years ago, I wrote a post about a tiny experiment: publishing my phone's battery status to my website. The updates have quietly continued ever since, appearing at https://dri.es/status.

Every 20 minutes or so, my phone sends its battery level and charging state to a REST endpoint on my Drupal site. The exact timing depends on iOS background scheduling, which has a mind of its own.
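
To make the mechanism concrete, an update of this kind could be sent with something as simple as the request below. This is only an illustrative sketch: the endpoint path, JSON fields and authentication header are assumptions, not the actual API of my site.

# Hypothetical example: POST a battery reading to a Drupal REST endpoint.
curl -X POST "https://example.com/api/battery-status" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_TOKEN" \
  -d '{"level": 82, "charging": false}'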

For years, this lived quietly at https://dri.es/status. I never linked to it outside the original blog post, so it felt like a forgotten corner of my site. Still working, but mostly invisible.

Despite its low profile, people still mention it occasionally after all this time. This prompted me to bring it into the light.

I have now added a battery icon to my site's header. It's a dynamically generated SVG that displays my phone's current battery level and charging state.

It's a bit goofy, but that is what makes personal websites special. You get to experiment with it and make it yours.

April 10, 2025

On the use of smartphones and tablets by teenagers

Dear parents, dear teachers, dear educators,

As we all know, the smartphone has become an unavoidable part of our daily lives, connecting us permanently to the Internet, which before that remained confined to the computers on our desks. As we watch our children grow up, the question arises: when, how and why should we bring them into this world of permanent hyperconnection?

Adolescence is a critical phase of life during which the brain is particularly receptive and forms reflexes that remain ingrained for a lifetime. It is also the period during which peer pressure and the desire for social conformity are at their strongest. It is no coincidence that cigarette and alcohol producers explicitly target teenagers in the marketing of their products.

The smartphone is such a recent invention that we completely lack hindsight on the impact it may have during those growing years. Is it totally harmless, or will it be regarded in a few years the way tobacco is today? Nobody knows for sure. Our children are the guinea pigs of this technology.

It seems important to me to highlight a few key points, which are only some of the many issues studied in this field.

Attention and concentration

It is now well established that the smartphone greatly disrupts attention and concentration, including in adults. This is no accident: it is designed for that. Companies like Google and Meta (Facebook, WhatsApp, Instagram) are paid in proportion to the time we spend in front of the screen. Everything is optimised towards that end. The mere fact of having a phone nearby, even switched off, disrupts reasoning and noticeably lowers IQ test results.

The brain acquires the reflex of expecting notifications of new messages from this device, so its mere presence is a major handicap for every task that requires attention: reading, learning, thinking, calculating. Switching it off is not enough: it has to be put at a distance, if possible in another room!

It has been shown that the use of social networks like TikTok completely disrupts the sense of time and the formation of memory. We have all experienced it: we swear we spent 10 minutes on our smartphone when in reality almost an hour has gone by.

To memorise and learn, the brain needs downtime, emptiness, boredom and reflection. These necessary "dead" moments during journeys, in queues, in the solitude of a teenager's bedroom or even during a boring class have been supplanted by hyperconnection.

Social anxiety and sleep disruption

Even when we are not using it, we know the conversations go on. That important messages may be exchanged in our absence. This well-known feeling, called FOMO (Fear Of Missing Out), drives us to check our phone late into the night and as soon as we wake up. A worrying proportion of young people admit to waking up during the night to check their smartphone. Yet sleep quality is fundamental to learning and to the development of the brain.

Mental health

Recent findings show a strong correlation between the degree of social network use and symptoms of depression. The Western world seems to be hit by an epidemic of teenage depression, an epidemic whose timing matches exactly the arrival of the smartphone. Girls under 16 are the most affected group.

Harassment and predation

On social networks, it is trivial to create an account that is anonymous or that impersonates someone else (contrary to what is sometimes claimed in the media, you do not need to be a computer genius to type a fake name into a form). Sheltered by this anonymity, it is sometimes very tempting to make tasteless jokes, post insults, expose the secrets teenagers are so fond of, or even spread slander to settle playground scores. These behaviours have always been part of adolescence and belong to a normal, natural exploration of social relationships. However, the way social networks work greatly amplifies the impact of these actions while encouraging the impunity of those responsible. This can lead to serious consequences, far beyond what the participants initially imagine.

This pseudonymity is also a blessing for ill-intentioned people who pass themselves off as children and, after weeks of conversation, suggest to the child that they meet in real life, without telling the adults.

Instead of drawing educational social lessons from this, we call teenagers who make tasteless jokes "hackers", stigmatising the use of technology rather than the behaviour. The theme of sexual predators is held up to loudly demand technological control solutions. Solutions that the IT giants are delighted to sell us, playing on fear and stigmatising technology as well as those who have the misfortune of understanding it intuitively.

Fear and incomprehension become the central drivers for promoting a single educational value: blindly obeying what is incomprehensible and what must above all not be understood.

La fausse idĆ©e de l’apprentissage de l’informatique

Car il faut Ć  tout prix dĆ©construire le mythe de la « gĆ©nĆ©ration numĆ©rique ».

Contrairement Ć  ce qui est parfois exprimĆ©, l’utilisation d’un smartphone ou d’une tablette ne prĆ©pare en rien Ć  l’apprentissage de l’informatique. Les smartphones sont, au contraire, conƧus pour cacher la maniĆØre dont ils fonctionnent et sont majoritairement utilisĆ©s pour discuter et suivre des publications sponsorisĆ©es. Ils prĆ©parent Ć  l’informatique autant que lire un magazine people Ć  l’arriĆØre d’un taxi prĆ©pare Ć  devenir mĆ©canicien. Ce n’est pas parce que vous ĆŖtes assis dans une voiture que vous apprenez son fonctionnement.

Une dame de 87 ans se sert d’une tablette sans avoir Ć©tĆ© formĆ©e, mais il faudrait former les enfants Ć  l’école ? Une dame de 87 ans se sert d’une tablette sans avoir Ć©tĆ© formĆ©e, mais il faudrait former les enfants Ć  l’école ?

Former Ć  utiliser Word ou PowerPoint ? Les enfants doivent apprendre Ć  dĆ©couvrir les gĆ©nĆ©ralitĆ©s des logiciels, Ć  tester, Ć  « chipoter », pas Ć  reproduire Ć  l’aveugle un comportement propre Ć  un logiciel propriĆ©taire donnĆ© afin de les prĆ©parer Ć  devenir des clients captifs. Et que dire d’un PowerPoint qui force Ć  casser la textualitĆ©, la capacitĆ© d’écriture pour rĆ©duire des idĆ©es complexes sous forme de bullet points ? Former Ć  PowerPoint revient Ć  inviter ses Ć©lĆØves dans un fast-food sous prĆ©texte de leur apprendre Ć  cuisiner.

L’aspect propriĆ©taire et fermĆ© de ces logiciels est incroyablement pervers. Introduire Microsoft Windows, Google Android ou Apple iOS dans les classes, c’est forcer les Ć©tudiants Ć  fumer Ć  l’intĆ©rieur sans ouvrir les fenĆŖtres pour en faire de bons apnĆ©istes qui savent retenir leur souffle. C’est Ć  la fois dangereusement stupide et contre-productif.

De maniĆØre Ć©tonnante, c’est d’ailleurs dans les milieux de l’informatique professionnelle que l’on trouve le plus de personnes retournant aux « dumbphones », tĆ©lĆ©phones simples. Car, comme dit le proverbe « Quand on sait comment se prĆ©pare la saucisse, on perd l’envie d’en manger… »

Que faire ?

Le smartphone est omniprĆ©sent. Chaque gĆ©nĆ©ration transmet Ć  ses enfants ses propres peurs. S’il y a tant de discussions, de craintes, de volontĆ© « d’éducation », c’est avant tout parce que la gĆ©nĆ©ration des parents d’aujourd’hui est celle qui est le plus addict Ć  son smartphone, qui est la plus espionnĆ©e par les monopoles publicitaires. Nous avons peur de l’impact du smartphone sur nos enfants parce que nous nous rendons confusĆ©ment compte de ce qu’il nous inflige.

Mais les adolescents ne sont pas forcĆ©s d’être aussi naĆÆfs que nous face Ć  la technologie.

Commencer le plus tard possible

Les pĆ©diatres et les psychiatres recommandent de ne pas avoir une utilisation rĆ©guliĆØre du smartphone avant 15 ou 16 ans, le systĆØme nerveux et visuel Ć©tant encore trop sensible avant cela. Les adolescents eux-mĆŖmes, lorsqu’on les interroge, considĆØrent qu’ils ne devraient pas avoir de tĆ©lĆ©phone avant 12 ou 13 ans.

Si une limite d’âge n’est pas rĆ©aliste pour tout le monde, il semble important de retarder au maximum l’utilisation quotidienne et rĆ©guliĆØre du smartphone. Lorsque votre enfant devient autonome, privilĆ©giez un « dumbphone », un simple tĆ©lĆ©phone lui permettant de vous appeler et de vous envoyer des SMS. Votre enfant arguera, bien entendu, qu’il est le seul de sa bande Ć  ne pas avoir de smartphone. Nous avons tous Ć©tĆ© adolescents et utilisĆ© cet argument pour nous habiller avec le dernier jeans Ć  la mode.

Comme le signale Jonathan Haidt dans son livre « The Anxious Generation », il y a un besoin urgent de prendre des actions collectives. Nous offrons des tĆ©lĆ©phones de plus en plus tĆ“t Ć  nos enfants, car ils nous disent « Tout le monde en a sauf moi ». Nous cĆ©dons, sans le savoir, nous forƧons d’autres parents Ć  cĆ©der. Des expĆ©riences pilotes d’écoles « sans tĆ©lĆ©phone » montrent des rĆ©sultats immĆ©diats en termes de bien-ĆŖtre et de santĆ© mentale des enfants..

Parlez-en avec les autres parents. DĆ©veloppez des stratĆ©gies ensemble qui permettent de garder une utilisation raisonnable du smartphone tout en Ć©vitant l’exclusion du groupe, ce qui est la plus grande hantise de l’adolescent.

Discuss it with your child beforehand

Explain the problems linked to the smartphone to your child. Rather than making arbitrary decisions, consult them and discuss with them the best way for them to enter the connected world. Build a relationship of trust by teaching them never to trust what they read on the phone.

When in doubt, their reflex should be to talk it over with you.

Introduce the tool progressively

Do not let your child fend for themselves with a smartphone the moment your age limit is reached.

Well before that, show them how you use your own smartphone and your computer. Show them the same Wikipedia page on both devices, explaining that it is only one way of viewing content that lives on another computer.

When your child receives their own device, introduce it progressively by allowing its use only in specific cases. You can, for example, keep the phone and hand it over only when the child asks for it, for a limited time and a specific purpose. Do not immediately create accounts on every fashionable platform. Observe with them the reflexes they acquire, and talk about the permanent flood that WhatsApp groups are.

Talk about privacy

Remind your child that the goal of the monopolistic platforms is to spy on you permanently in order to resell your private life and bombard you with advertising. That everything said and posted on social networks, including photos, must be considered public; secrecy is only an illusion. One golden rule: do not post anything you would not be comfortable seeing displayed in large print on the school walls.

In Denmark, schools are no longer allowed to use Chromebooks, so as not to infringe on children's privacy. But do not believe that Android, Windows or iOS are any better in terms of privacy.

Not in the bedroom

Never let your child sleep with their phone. In the evening, the phone should be stored in a neutral place, out of reach. Likewise, do not leave the phone within reach while the child is doing homework. The same goes for tablets and laptops, which serve exactly the same functions. Ideally, screens should be avoided before school, so the day does not start in a state of attentional fatigue. Do not forget that the smartphone can carry disturbing, even shocking, yet strangely hypnotic messages and images. The effect of screen light on sleep quality is also an issue that is still poorly understood.

Keep the conversation going

There is software known as "parental controls". But no software will ever replace the presence of parents. Worse: the most resourceful children will very quickly find tricks to get around these limitations, or will be tempted to circumvent them simply because they are arbitrary. Rather than imposing electronic control, take the time to ask your children what they do on their phone, who they talk to, what is being said and which apps they use.

Using the Internet can also be very beneficial, allowing the child to learn about subjects outside the curriculum or to discover communities sharing interests different from those at school.

Just as you let your child join a sports club or the scouts while keeping them from hanging around the streets with a gang of thugs, you must keep an eye on who your children spend time with online. Far from school WhatsApp groups, your child can find online communities that share their interests, communities in which they can learn, discover and flourish if they are well guided.

Set the example, be the example!

Our children do not do what we tell them to do; they do what they see us do. Children who have seen their parents smoke are the most likely to become smokers themselves. The same goes for smartphones. If our child constantly sees us on our phone, they have no choice but to want to imitate us. One of the finest gifts you can give is therefore not to use your phone compulsively in your child's presence.

Yes, you must acknowledge and deal with your own addiction!

Plan periods when you put your phone in silent or airplane mode and store it out of the way. When you do pick it up, explain to your child what you are using it for.

In front of them, sit down and read a paper book. And no, reading on an iPad is not "the same thing".

By the way, if you are short of ideas, I can only recommend my latest novel: a thrilling adventure written on a typewriter, about bicycles, adolescence, the end of the world and smartphones switched off forever. Yes, advertising has even crept into this text, what a scandal!

Give them a taste for computing, not for being controlled

Don't shoot the messenger: the culprit is not "the screen" but the use we make of it. The computing monopolies try to make users addicted, to keep them prisoners so they can be bombarded with advertising and made to consume. That is where the responsibility lies.

Learning to program (which at first works very well without a screen), playing deep video games with complex or simply funny stories just to have a good time, chatting online with enthusiasts, devouring Wikipedia… Modern computing opens magnificent doors that it would be a shame to deny our children.

Instead of giving in to our own fears, anxieties and incomprehension, we must give our children the desire to take back control of computing and of our lives, a control we handed over a little too easily to the advertising monopolies in exchange for a glass rectangle displaying coloured icons.

A child is surprised not to find a book on her tablet any more; the teacher explains that companies have decided this book was not good for her.

Accepting imperfection

"I used to have principles; now I have children," as the saying goes. Being perfect is impossible. Whatever we do, our children will be confronted with toxic conversations and brain-dead cartoons, and that is quite normal. As parents, we do what we can, within our own realities.

Nobody is perfect. Least of all a parent.

The important thing is not to prevent our children from ever being in front of a screen at all costs, but to realise that a smartphone is absolutely not an educational tool, that it prepares them for nothing except becoming good passive consumers.

The only truly necessary learning is that of a critical mind in the use of computing tools.

And in that learning, children often have a lot to teach adults!

UPDATE June 2025: a large panel of experts has tried to establish a genuine scientific consensus. The result is that nobody disputes the harmful impact of the smartphone on teenagers' sleep, on attention and on mental health.

I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and in English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

April 09, 2025

Introduction

When I upgraded from an old 8GB USB stick to a shiny new 256GB one, I expected faster speeds and more convenience—especially for carrying around multiple bootable ISO files using Ventoy. With modern Linux distributions often exceeding 4GB per ISO, my old drive could barely hold a single image. But I quickly realized that storage space was only half the story—performance matters too.

Curious about how much of an upgrade I had actually made, I decided to benchmark the read speed of both USB sticks. Instead of hunting down benchmarking tools or manually comparing outputs, I turned to ChatGPT to help me craft a reliable, repeatable shell script that could automate the entire process. In this post, I’ll share how ChatGPT helped me go from an idea to a functional USB benchmark script, and what I learned along the way.


The Goal

I wanted to answer a few simple but important questions:

  • How much faster is my new USB stick compared to the old one?
  • Do different USB ports affect read speeds?
  • How can I automate these tests and compare the results?

But I also wanted a reusable script that would:

  • Detect the USB device automatically (see the sketch after this list)
  • Find or use a test file on the USB stick
  • Run several types of read benchmarks
  • Present the results clearly, with support for summary and CSV export
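
As a rough illustration of the auto-detection idea, here is a simplified sketch (not the exact logic of the final script) that uses lsblk to find a mounted partition on a USB-attached disk; the USB_DEVICE and MOUNT_PATH variable names mirror the ones used later in the script:

# Sketch: locate a USB-attached disk and a mounted partition on it.
for disk in $(lsblk -d -o NAME,TRAN -nr | awk '$2 == "usb" {print $1}'); do
  MOUNT_PATH=$(lsblk -o MOUNTPOINT -nr "/dev/$disk" | grep -m1 .)
  [[ -n "$MOUNT_PATH" ]] && USB_DEVICE="/dev/$disk" && break
done
echo "USB device: ${USB_DEVICE:-none}, mounted at: ${MOUNT_PATH:-n/a}"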

Getting Help from ChatGPT

I asked ChatGPT to help me write a shell script with these requirements. It guided me through:

  • Choosing benchmarking tools: hdparm, dd, pv, ioping, fio
  • Auto-detecting the mounted USB device
  • Handling different cases for user-provided test files or Ubuntu ISOs
  • Parsing and converting human-readable speed outputs
  • Displaying results in human-friendly tables and optional CSV export

We iterated over the script, addressing edge cases like:

  • USB devices not mounted
  • Multiple USB partitions
  • pv not showing output unless stderr was correctly handled
  • Formatting output consistently across tools

ChatGPT even helped optimize the code for readability, reduce duplication, and handle both space-separated and non-space-separated speed values like ā€œ18.6 MB/sā€ and ā€œ18.6MB/sā€.
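
To give a flavour of that last point, here is a minimal sketch (not the exact function from the script) of how such speed strings can be normalised to a plain number of MB/s:

# Sketch: normalise strings like "18.6 MB/s", "18.6MB/s" or "372 kB/s" to MB/s.
to_mb_per_s() {
  local value unit
  value=$(echo "$1" | grep -oE '[0-9]+([.][0-9]+)?' | head -n1)
  unit=$(echo "$1" | grep -oiE '[kmg]i?b/s' | head -n1)
  case "${unit,,}" in
    kb/s|kib/s) awk -v v="$value" 'BEGIN {printf "%.2f\n", v / 1024}' ;;
    gb/s|gib/s) awk -v v="$value" 'BEGIN {printf "%.2f\n", v * 1024}' ;;
    *)          echo "$value" ;;
  esac
}

to_mb_per_s "18.6MB/s"   # -> 18.6
to_mb_per_s "372 kB/s"   # -> 0.36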


Benchmark Results

With the script ready, I ran tests on three configurations:

1. Old 8GB USB Stick

hdparm      16.40 MB/s
dd          18.66 MB/s
dd + pv     17.80 MB/s
cat + pv    18.10 MB/s
ioping       4.44 MB/s
fio         93.99 MB/s

2. New 256GB USB Stick (Fast USB Port)

hdparm     372.01 MB/s
dd         327.33 MB/s
dd + pv    310.00 MB/s
cat + pv   347.00 MB/s
ioping       8.58 MB/s
fio        992.78 MB/s

3. New 256GB USB Stick (Slow USB Port)

hdparm      37.60 MB/s
dd          39.86 MB/s
dd + pv     38.13 MB/s
cat + pv    40.30 MB/s
ioping       6.88 MB/s
fio         73.52 MB/s

Observations

  • The old USB stick is not only limited in capacity but also very slow. It barely breaks 20 MB/s in most tests.
  • The new USB stick, when plugged into a fast USB 3.0 port, is significantly faster—over 10x the speed in most benchmarks.
  • Plugging the same new stick into a slower port dramatically reduces its performance—a good reminder to check where you plug it in.
  • Tools like hdparm, dd, and cat + pv give relatively consistent results. However, ioping and fio behave differently due to the way they access data—random access or block size differences can impact results (see the example right after this list).
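
To illustrate, here is the kind of random-read job fio can run; this is a hedged example for context, not the exact parameters used by my script:

# Sketch: a random-read fio job; random access and small blocks are why its
# numbers are not directly comparable with sequential tools like dd or hdparm.
fio --name=usb-randread --filename="$TEST_FILE" --readonly --direct=1 \
    --rw=randread --bs=4k --size=256M --runtime=30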

Also worth noting: the metal casing of the new USB stick gets warm after a few test runs, unlike the old plastic one.


Conclusion

Using ChatGPT to develop this benchmark script was like pair-programming with an always-available assistant. It accelerated development, helped troubleshoot weird edge cases, and made the script more polished than if I had done it alone.

If you want to test your own USB drives—or ensure you’re using the best port for speed—this benchmark script is a great tool to have in your kit. And if you’re looking to learn shell scripting, pairing with ChatGPT is an excellent way to level up.


Want the script?
I’ll share the full version of the script and instructions on how to use it in a follow-up post. Stay tuned!

April 08, 2025

The end of a world?

The end of our memories

We are being invaded by AI. Far more than you think.

Every time your phone takes a photo, what it displays is not reality but a "probable" reconstruction of what you want to see. That is why photos now look so beautiful, so vivid, so sharp: because they are not a reflection of reality but a reflection of what we want to see, of what we are most likely to find "beautiful". It is also why de-Googled systems take less beautiful photos: they do not benefit from Google's algorithms to enhance the photo in real time.

Hallucinations seem rare to our naive eyes because they are credible. We do not see them. But they are there. Like that bride-to-be trying on her dress in front of mirrors and discovering that every reflection is different.

I have managed to confuse the algorithms myself. On the left, the photo as I took it and as it appears in any photo viewer. On the right, the same photo displayed in Google Photos. For some hard-to-understand reason, the algorithm tries to reconstruct the photo and fails miserably.

A photo of my hand on the left and the same photo, completely distorted, on the right.

Yet these images, reconstructed by AI, are what our brains will remember. Our memories are literally being altered by AI.

The end of truth

Everything you think you are reading on LinkedIn has probably been generated by a bot. To give you an idea, on April 2 there were already bots bragging on that network about migrating from Offpunk to XKCDpunk.

Screenshot of LinkedIn showing a post by one Arthur Howell boasting about a blog post describing the migration from Offpunk to XKCDpunk.

The Offpunk-to-XKCDpunk transition was a hyper-specific April Fools' joke, understandable only by a handful of insiders. It took less than 24 hours for the topic to be picked up on LinkedIn.

No, really, you can switch LinkedIn off. Even your contacts' posts are probably largely AI-generated, prompted by an algorithmic nudge to post.

Three years ago, I warned that chatbots were generating content that was filling the web and serving as training data for the next generation of chatbots.

I spoke of a silent war. It is no longer so silent. Russia, in particular, uses this principle to flood the web with automatically generated articles repeating its propaganda.

The principle is simple: since chatbots work on statistics, if you publish a million articles describing the biological weapons experiments the Americans are running in Ukraine (which is false), the chatbot will treat that piece of text as statistically frequent and will be highly likely to serve it back to you.

And even if you do not use ChatGPT, your politicians and journalists do. They are even proud of it.

They hear ChatGPT braying in a field and turn it into a speech that will itself be picked up by ChatGPT. They are poisoning reality and, in doing so, changing it. They know perfectly well that they are lying. That is the point.

I used to think that using these tools was a somewhat stupid waste of time. In fact, it is also dangerous for others. You are probably wondering what the fuss is about the border tariffs Trump has just announced? Economists are scratching their heads. The geeks have understood: the whole tariff plan and its explanation seem to have been literally generated by a chatbot asked "how do I impose customs tariffs to reduce the deficit?".

The world is not run by Trump, it is run by ChatGPT. But where is the Sarah Connor who will unplug it?

Panel from Tintin, The Shooting Star.

The end of learning

Slack steals our attention, but it also steals our learning, by letting anyone interrupt, via private message, the senior developer who knows the answers because he built the system.

The ability to learn is precisely what phones and AI are stealing from us. As Hilarius Bookbinder, a philosophy professor at an American university, points out, the major generational difference he observes is that today's students feel no shame about simply emailing the professor to ask him to summarise what they need to know.

In his March journal, Thierry Crouzet makes a similar observation. When he announces that he is leaving Facebook, all he gets in response is "But why?". Even though he has been posting links on the subject for ages.

Chatbots themselves are not systems that can be learned. They are statistical, constantly changing. By using them, the only thing you acquire is the impression that learning is not possible. These systems are literally stealing our reflex to think and to learn.

As a result, without even wanting to search, part of the population now expects a personal, immediate, short, summarised answer. And if possible in video form.

The end of trust

Learning requires self-confidence. It is impossible to learn without the certainty that you are capable of learning. Conversely, once you acquire that certainty, almost anything can be learned.

A study by Microsoft researchers shows that the more self-confident you are, the less you trust chatbot answers. Conversely, if you have the slightest doubt, you suddenly trust the results you are given.

Because chatbots talk like CEOs, marketers or con artists: they simulate confidence in their own answers. People, even the most expert, who do not have the reflex to enter into conflict and question authority end up converting their confidence in themselves into confidence in a tool.

A random-generation tool owned by multinationals.

These companies are stealing our self-confidence. They are stealing our competence. They are stealing our most brilliant scientists.

And it is already doing damage in the field of "strategic intelligence" (that is, the secret services).

As well as in healthcare: doctors tend to put excessive trust in automatically generated diagnoses, notably for cancers. The most experienced doctors hold up better, but they remain vulnerable: they make mistakes they would never normally have made when the mistake is encouraged by an artificial assistant.

The end of knowledge

With chatbots, an idea as old as computing resurfaces: "What if we could tell the machine what we want without having to program it?"

It is the dream of that whole category of managers who see programmers merely as button-pushers who unfortunately have to be paid, but whom they would love to do without.

A dream which, it must be said, is completely stupid.

Because humans do not know what they want. Because speech is by its very essence imprecise. Because when we talk, we exchange sensations and intuitions, but we cannot be precise, rigorous, in short, scientific.

Humanity left the Middle Ages when the likes of Newton, Leibniz and Descartes began inventing a language of rational logic: mathematics. Just as, barely earlier, a precise language had been invented to describe music.

To be content with running a program you have described to a chatbot is to return intellectually to the Middle Ages.

But then, you still need to master a language. When you spend your school years asking a chatbot to summarise the books you were supposed to read, it is not even certain that you will manage to describe precisely what you want.

In fact, it is not even certain that we will still manage to think what we want. Or even to want at all. The ability to think and reason is strongly correlated with the ability to put things into words.

"What is well conceived is clearly expressed, and the words to say it come easily." (Boileau)

This is no longer a return to the Middle Ages, it is a return to the Stone Age.

Or to the future described in my (excellent) novel Printeurs: advertising injunctions that have taken the place of the will. (Yes, really, buy it! It is both thrilling and thought-provoking.)

Panel from Tintin, The Shooting Star.

The end of different voices

I criticise the need for answers in video form because the notion of reading matters. I realise that an incredible proportion of people, including academics, do not know how to "read". They can certainly decipher, but not truly read. And there is a very simple test to know whether you can read: if you find it easier to listen to a YouTube video of someone talking than to read the text yourself, you are probably deciphering. You are reading aloud inside your head in order to listen to yourself speak.

There are of course many contexts where video or voice has advantages, but when it comes, for example, to learning a series of commands and their parameters, video is unbearably inappropriate. Yet I have lost count of the students who recommend videos on the subject to me.

Because reading is not simply turning letters into sound. It is perceiving the meaning directly, allowing constant back-and-forth, pauses and quick skims in order to understand the text. Between a writer and a reader there is a communication, a telepathic communion, that makes oral exchange seem slow, inefficient, clumsy, even crude.

This exchange is not always ideal. A writer has a personal "voice" that does not suit everyone. I regularly come across blogs whose subject interests me, but I cannot subscribe because the blogger's "voice" does not suit me at all.

That is normal and even desirable. It is one of the reasons we need a multitude of voices. We need people who read and then write, who mix ideas and transform them so as to pass them on in their own voice.

The end of human relationships

In a shop queue, I overheard the person in front of me bragging about telling ChatGPT all about her love life and constantly asking it for advice on how to manage it.

As if the situation called for an answer from a computer rather than a conversation with another human being who understands, or has even lived through, the same problem.

After stealing our every last moment of solitude with the incessant notifications of our phones and the messages on social networks, AI is now going to steal our sociability.

We will no longer be connected to anything but the supplier, the Company.

On Gopher, szczezuja speaks of the other people posting on Gopher as friends.

Not everyone knows that they are my friends, but what else do you call someone you read regularly and of whose private life you know a little?

The end of the end…

The end of an era is always the beginning of another. Announcing the end means preparing a rebirth. Learning from our mistakes so we can rebuild, and build better.

That is perhaps what I enjoy so much on Gemini: the feeling of discovering and following unique, human "voices". I feel as if I am witnessing a micro-fraction of humanity breaking away from the rest and rebuilding something else. Reading what other humans have written simply because another human needed to write it, without expecting anything in return.

Do you remember "planets"? They are blog aggregators grouping the participants of a project into a single feed. The idea was historically launched by GNOME with planet.gnome.org (which still exists) before becoming widespread.

Well, bacardi55 is launching Planet Gemini FR, an aggregator of French-speaking Gemini capsules.

It is great, and perfect for anyone who wants to discover content on Gemini.

It is great for anyone who wants to read other humans who have nothing to sell you. In short, to discover the finest of the fine…

All images are illegally taken from HergƩ's masterpiece "The Shooting Star" (L'Ɖtoile mystƩrieuse). There is no reason chatbots should be the only ones allowed to pilfer.

I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!

Receive my writings in French and in English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.

April 06, 2025

cloud-init

I prepared a few update releases of some Ansible roles related to provisioning virtual machines with libvirt over the last weeks.

These are mainly cleanup releases that make sure everything works out of the box on different GNU/Linux distributions.

One ā€œbigā€ change is the removal of the dependency on the cloud-localds utility to provision virtual machines with cloud-init. This makes the roles usable on Linux distributions that don’t provide this utility.


Ansible-k3s-on-vms v1.2.0

An Ansible playbook to deploy virtual machines and deploy K3s.

https://github.com/stafwag/ansible-k3s-on-vms

ChangeLog

Added community.libvirt to requirements.yml

  • Added community.libvirt to requirements.yml
  • Added required Suse packages installation
  • Documentation update
  • This release removes the dependency on the cloud-localds utility. On distributions that don’t provide the cloud-localds utility, GNU xorriso is used.

stafwag.delegated_vm_install v2.0.3

An Ansible role to install a virtual machine with virt-install and cloud-init (Delegated).

https://github.com/stafwag/ansible-role-delegated_vm_install

ChangeLog

Gather facts on kvm hosts only once

  • Gather facts on kvm hosts only once
  • Corrected ansible-lint errors
  • Remove the cloud-localds requirement in README

stafwag.virt_install_vm v1.1.1

An Ansible role to install a libvirt virtual machine with virt-install and cloud-init. It is ā€œdesignedā€ to be flexible.

https://github.com/stafwag/ansible-role-virt_install_vm

ChangeLog

CleanUp Release

  • Corrected ansible-lint errors
  • Updated documentation
  • Avoid Ansible error during the VM status check

stafwag.libvirt v2.0.0

An Ansible role to install libvirt/KVM packages and enable the libvirtd service.

https://github.com/stafwag/ansible-role-libvirt

ChangeLog

Use general vars instead of tasks

  • Reorganized the role to use vars and the package module to install the packages
  • This version works starting from the Ansible version that is included in Ubuntu 24.x
  • Corrected ansible-lint errors

stafwag.cloud_localds v3.0.2

An Ansible role to create cloud-init config disk images.

https://github.com/stafwag/ansible-role-cloud_localds

ChangeLog

Execute installation tasks only once

  • Added run_once: true to execute the installation tasks only once.
  • Moved ā€œname: Set OS related variablesā€ to main, so the provider settings are available when the install phase isn’t executed, e.g. with --skip-tags install.
  • Added apply tags install to support --tags install

stafwag.qemu_img v2.3.2

An Ansible role to create qemu images.

https://github.com/stafwag/ansible-role-qemu_img

ChangeLog

Enable run_once on installation tasks

  • Execute installation tasks only once to allow parallel execution of roles on the same host, e.g. with delegate_to

Have fun!

April 03, 2025

Can AI actually help with real Drupal development? I wanted to find out.

This morning, I fired up Claude Code and pointed it at my personal Drupal site. In a 30-minute session, I asked it to help me build new features and modernize some of my code. I expected mixed results but was surprised by how useful it proved to be.

I didn't touch my IDE once or write a single line of code myself. Claude handled everything from creating a custom Drush command to refactoring constructors and converting Drupal annotations to PHP attributes.

If you're curious what AI-assisted Drupal development actually feels like, this video captures the experience.

April 01, 2025

Goodbye Offpunk, Welcome XKCDpunk!

For the last three years, I’ve been working on Offpunk, a command-line gemini and web browser.

While my initial goal was to browse the Geminisphere offline, the mission has slowly morphed into cleaning and un-enshittifying the modern web, offering users a minimalistic way of browsing any website with interesting content.

Focusing on essentials

From the start, it was clear that Offpunk would focus on essentials. If a website needs JavaScript to be read, it is considered non-essential.

It worked surprisingly well. In fact, on multiple occasions, I’ve discovered that some websites work better in Offpunk than in Firefox. I can comfortably read their content in the former, but not in the latter.

By default, Offpunk blocks domains deemed non-essential or too enshittified, like twitter, X, facebook, linkedin and tiktok. (Those are configurable, of course. Defaults are in offblocklist.py.)

Cleaning websites, blocking worst offenders. That’s good. But it is only a start.

It’s time to go further, to really cut out all the crap from the web.

And, honestly, besides XKCD comics, everything is crap on the modern web.

As an online technical discussion grows longer, the probability of a comparison with an existing XKCD comic approaches 1.
– XKCD’s law

If we know that we will end our discussion with an XKCD comic, why not cut all the fluff? Why don’t we go straight to the conclusion in a true minimalistic fashion?

Introducing XKCDpunk

That’s why I’m proud to announce that, starting with today’s release, Offpunk 2.7 will now be known as XKCDpunk 1.0.

XKCDpunk includes a new essential command "xkcd" which, as you guessed, takes an integer as a parameter and displays the relevant XKCD comic in your terminal, while caching it so it can be browsed offline.

Screenshot of XKCDpunk showing comic 626

Of course, this is only an early release. I need to clean a lot of code to remove everything not related to accessing xkcd.com. Every non-xkcd related domain will be added to offblocklist.py.

I also need to clean every occurrence of "Offpunk" to change the name. All of offpunk.net needs to be migrated to xkcd.net. Rome wasn’t built in a day.

Don’t hesitate to install an "offpunk" package, as that is what it will still be called in most distributions.

And report bugs on the xkcdpunk mailing list.

Goodbye Offpunk, welcome XKCDpunk!

I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by RSS. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

March 28, 2025

The candid naivety of geeks

I mean, come on!

Amazon recently announced that, from now on, everything you say to Alexa will be sent to their server.

What surprised me the most with this announcement is how it was met with surprise and harsh reactions. People felt betrayed.

I mean, come on!

Did you really think that Amazon was not listening to you before that? Did you really buy an Alexa trusting Amazon to "protect your privacy"?

Recently, I came across a comment on Hacker News where the poster defended Apple as protecting the privacy of its users because "they market their product as protecting our privacy".

I mean, once again, come on!

Did you really think that "marketing" is telling the truth? Are you a freshly debarked Thermian? (In case you missed it, this is a Galaxy Quest reference.)

The whole point of marketing is to lie, lie and lie again.

What is the purpose of that gadget?

The whole point of the Amazon Alexa tech stack is to send information to Amazon. That’s the main goal of the thing. The fact that it is sometimes useful to you is a direct consequence of it sending information to Amazon. Just like Facebook linking you with friends is a consequence of you giving your information to Meta. Usefulness is only a byproduct of privacy invasion.

Having a fine-grained setting enabling "do not send all information to Amazon please" is, at best, wishful thinking. We had the same in the browser ("do-not-track"). It didn’t work.

I’ve always been convinced that the tech geeks who bought an Amazon Alexa perfectly knew what they were doing. One of my friends has a Google Echo and justify it with "Google already knows everything about our family through our phones, so I’m trading only a bit more of our privacy for convenience". I don’t agree with him but, at the very least, it’s a logical opinion.

We all know that what can be done with a tool will eventually be done. And you should prepare for it. On a side note, I also suspect that the reason Amazon removed that setting is that they were already gathering too much data for the setting to be defensible in case of a complaint or an investigation in the future: "How did you manage to get that data while your product says it will not send data?"

But, once again, any tech person knows that pushing a button in an interface is not a proof of anything in the underlying software.

Please stop being naive about Apple

That’s also the point with Apple: Apple is such a big company that the right hand has no idea what the left hand is doing. Some privacy people work at Apple and do a good job. But their work is continuously diluted by the interests of quick and cheap production, marketing, releases, new features and gathering data for advertising purposes. Apple is not a privacy company and never has been: it is an opportunistic company which advertises privacy when it feels it could help sell more iPhones. But deep down, they absolutely don’t care, and they will absolutely trade away the (very little) privacy they offer if it means selling more.

Sometimes, geek naivety is embarrassingly stupid. Like "brand loyalty". Marketing lies to you. As a rule of thumb, the bigger the company, the bigger the lie. In tech, there’s no way for a big company not to lie, because marketers have no real understanding of what they are selling. Do you really think that the people who chose to advertise "privacy" at Apple have any strong knowledge about "privacy"? That they could even give you a definition of "privacy"?

I know that intelligent people go through great intellectual contortions to justify buying the latest overpriced, spying, shiny coloured screen with an apple logo. It looks like most humans actively seek to have their freedom restricted. Seirdy calls it "the domestication of users".

And that’s why I see Apple as a cult: most tech people cannot be reasoned about it.

You can’t find a technical solution to a lie

Bill Cole, a SpamAssassin contributor, recently posted on Mastodon that the whole DNS stack meant to protect us from spammers is not working:

spammers are more consistent at making SPF, DKIM, and DMARC correct than are legitimate senders.

It is, once again, a naive approach to spam. The whole stack was designed with the mindset "bad spammers will try to hide themselves". But what is actually happening in your inbox?

Most spam is not "black hat spam". It is what I call "white-collar spam": perfectly legitimate companies sending you emails from legitimate addresses. You slept in a hotel during a business trip? Now you will receive weekly emails about that hotel for the rest of your life. And it is the same for any shop, any outlet, anything you have done. Your inbox is filled with "white-collar" junk. And they know this perfectly well.

In Europe, we have a rule, the GDPR, which forbids businesses from keeping your data without your express consent. For several months, I ran an experiment and sent a legal threat in reply to every single white-collar spam I received. Guess what: they always replied that it was a mistake, that I had now been removed, that it should not have happened, that I had ticked the box (which was false, but how could I prove it?) or even, on one occasion, that they had restored a backup containing my email from before I unsubscribed (I had unsubscribed from that one 10 years earlier, which makes it very unlikely).

In short, they lied. All of them. All of them are spammers, and they lie, pretending they "thought you were interested".

In one notable case, they told me that they had erased all my data while, thanks to a cookie still on my laptop, I could see and use my account. Thirty days later, I was still logged in, and I figured out that they had simply changed my user id from "ploum" to "deleted_ploum" in the database, while telling me straight to my face that they had no information about me in their database.

Corporations are lying. You must treat every corporate word as a straight lie until proved otherwise.

But Ploum, if all marketing is a lie, why trust Signal?

If marketing can’t be trusted, why do I use Signal and Protonmail?

First of all, Signal is open source. And, yes, I’ve read some of the source code for features I was interested in. I’ve also read through some very deep audits of Signal’s source code.

I’m also trusting the people behind Signal. I’m trusting people who recommend Signal. I’m trusting the way Signal is built.

But most importantly, Signal’s sole reason to exist is to protect the privacy of its users. It’s not even a corporation and, yes, this is important.

Yes, they could lie in their marketing. Like Telegram did (and still does AFAIK). But this would undermine their sole reason to exist.

I’m not saying that Signal is perfect: I’m saying I trust that they themselves believe what they announce. For now.

What about Protonmail?

For the same reasons, Protonmail can, to some extent, be trusted. Technically, they can access most of their customers’ emails (because those emails arrive unencrypted at PM’s servers). But I trust Protonmail not to sell any data, because if there were ever any doubt that they did, the whole business would crumble. They have a strong commercial incentive to do everything they can to protect my data. I pay them for that. It’s not a "checkbox" they could remove, it’s their whole raison d’être.

This is also why I pay for Kagi as my search engine: their business incentive is to provide me with the best search results, with less slop and less advertising. As soon as they start doing some kind of advertising, I will stop paying them, and they know it. The same goes if Kagi becomes too AI-centric for my taste, like it did for Lori:

I don’t blindly trust companies. Paying them is not a commitment to obey them, au contraire. Every relationship with a commercial entity is, by essence, temporary. I pay for a service with strings attached. If the service degrades, if my conditions are not respected, I stop paying. If I’m not convinced they can be trusted, I stop paying them. I know I can pay and still be the product. If I have any doubt, I don’t pay. I try to find an alternative and migrate to it. Email being critical to me, I always have two accounts with two different trustworthy providers and an easy migration path (which boils down to changing my DNS config).

Fighting the Androidification

Cory Doctorow speaks a lot about enshittification, where users are more and more exploited. But one key component of a successful enshittification is what I call "Androidification".

Androidification is not about degrading the user experience. It’s about closing doors, removing special use cases, being less and less transparent. It’s about taking open source software and frog-boiling it into a fully closed, proprietary state while killing all the competition in the process.

Android was, at first, an Open Source project. With each release, it became more closed, more proprietary. As I explain in my "20 years of Linux on the Desktop" essay, I believe this has always been part of the plan. Besides the Linux kernel, Google was always careful not to include any GPL- or LGPL-licensed library in Android.

It took them 15 years, but they finally managed to kill the Android Open Source Project:

This is why I’m deeply concerned by Canonical’s motivation for switching Ubuntu’s coreutils to an MIT-licensed version.

This is why I’m deeply concerned that Protonmail quietly removed the issue tracker from its Protonmail Bridge GitHub page, making development completely opaque for what is an essential tool for technical Protonmail users.

I mean, commons!

This whole naivety is also why I’m deeply concerned by very smart tech people not understanding what "copyleft" is, why it is different from "open source", and why they should care.

Corporations are not your friend. They never were. They lie. The only possible relationship with them is an opportunistic one. And if you want to build commons that they cannot steal, you need strong copyleft.

But first, my fellow geeks, you need to lose your naivety.

I mean, come on, let’s build the commons!

I’m Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by RSS. I value privacy and never share your address.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

March 24, 2025

MySQL HeatWave integrates GenAI capabilities into MySQL on OCI. We have demonstrated how HeatWave GenAI can leverage RAG’s capability to utilize ingested documents (unstructured data) in LakeHouse and generate responses to specific questions or chats. See: The common theme here is the use of data stored in Object Storage (LakeHouse). I previously discussed how to […]

March 23, 2025

Modern SSAO in a modern run-time

Cover Image - SSAO with Image Based Lighting

Use.GPU 0.14 is out, so here's an update on my declarative/reactive rendering efforts.

The highlights in this release are:

  • dramatic inspector viewing upgrades
  • a modern ambient-occlusion (SSAO/GTAO) implementation
  • newly revised render pass infrastructure
  • expanded shader generation for bind groups
  • more use of generated WGSL struct types

SSAO with Image-Based Lighting

The main effect is that out-of-the-box, without any textures, Use.GPU no longer looks like early 2000s OpenGL. This is a problem every home-grown 3D effort runs into: how to make things look good without premium, high-quality models and pre-baking all the lights.

Use.GPU's reactive run-time continues to purr along well. Its main role is to enable doing at run-time what normally only happens at build time: dealing with shader permutations, assigning bindings, and so on. I'm quite proud of the line up of demos Use.GPU has now, for the sheer diversity of rendering techniques on display, including an example path tracer. The new inspector is the cherry on top.

Example mosaic

A lot of the effort continues to revolve around mitigating flaws in GPU API design, and offering something simpler. As such, the challenge here wasn't just implementing SSAO: the basic effect is pretty easy. Rather, it brings with it a few new requirements, such as temporal accumulation and reprojection, that put new demands on the rendering pipeline, which I still want to expose in a modular and flexible way. This refines the efforts I detailed previously for 0.8.

Good SSAO also requires deep integration in the lighting pipeline. Here there is tension between modularizing and ease-of-use. If there is only one way to assemble a particular set of components, then it should probably be provided as a prefab. As such, occlusion has to remain a first class concept, tho it can be provided in several ways. It's a good case study of pragmatism over purity.

In case you're wondering: WebGPU is still not readily available on every device, so Use.GPU remains niche, tho it already excels at in-house use for adventurous clients. At this point you can imagine me and the browser GPU teams eyeing each other awkwardly from across the room: I certainly do.

Inspector Gadget

The first thing to mention is the upgraded Use.GPU inspector. It already had a lot of quality-of-life features like highlighting, but the main issue was finding your way around the giant trees that Use.GPU now expands into.

Inspector without filtering

Old

Inspector with filtering

New

Inspector filter
Inspector with highlights

Highlights show data dependencies

The fix was filtering by type. This is very simple as a component already advertises its inspectability in a few pragmatic ways. Additionally, it uses the data dependency graph between components to identify relevant parents. This shows a surprisingly tidy overview with no additional manual tagging. For each demo, it really does show you the major parts first now.

If you've checked it out before, give it another try. The layered structure is now clearly visible, and often fits in one screen. The main split is how Live is used to reconcile different levels of representation: from data, to geometry, to renders, to dispatches. These points appear as different reconciler nodes, and can be toggled as a filter.

It's still the best way to see Live and Use.GPU in action. It can be tricky to grok that each line in the tree is really a plain function, calling other functions, as it's an execution trace you can inspect. It now does more to point you in the right direction, and auto-selects the most useful tabs by default.

The inspector is unfortunately far heavier than the GPU rendering itself, as it all relies on HTML and React to do its thing. At some point it's probably worth remaking it as a Live-native version, maybe as a 2D canvas with some virtualization. But in the meantime it's a dev tool, so the important thing is that it still works when nothing else does.

Most of the images of buffers in this post can be viewed live in the inspector, if you have a WebGPU capable browser.

SSAO

Screen-space AO is common now: using the rendered depth buffer, you estimate occlusion in a hemisphere around every point. I opted for Ground Truth AO (GTAO) as it estimates the correct visibility integral, as opposed to a more empirical 'crease darkening' technique. It also allows me to estimate bent normals along the way, i.e. the average unoccluded direction, for better environment lighting.
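For reference, the heart of GTAO is a small closed-form term per screen-space slice. The following is only an illustrative WGSL sketch of that arc integral, not Use.GPU's actual shader: h1 and h2 are the two horizon angles found along the slice, and n is the angle of the normal projected into the slice plane.

fn sliceVisibility(h1: f32, h2: f32, n: f32) -> f32 {
  // Cosine-weighted visibility of the unoccluded arc between the two horizons.
  let a1 = -cos(2.0 * h1 - n) + cos(n) + 2.0 * h1 * sin(n);
  let a2 = -cos(2.0 * h2 - n) + cos(n) + 2.0 * h2 * sin(n);
  return 0.25 * (a1 + a2);
}

Summing this over a few rotating slices per frame, and averaging the unoccluded directions, yields both the occlusion value and the bent normal.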

Hemisphere sampling

This image shows the debug viz in the demo. Each frame will sample one green ring around a hemisphere, spinning rapidly, and you can hold ALT to capture the sampling process for the pixel you're pointing at. It was invaluable to find sampling issues, and also makes it trivial to verify alignment in 3D. The shader calls printPoint(…) and printLine(…) in WGSL, which are provided by a print helper, and linked in the same way it links any other shader functions.

SSAO normal and occlusion samples

Bent normal and occlusion samples

SSAO is expensive, and typically done at half-res, with heavy blurring to hide the sampling noise. Mine is no different, though I did take care to handle odd-sized framebuffers correctly, with no unexpected sample misalignments.

It also has accumulation over time, as the shadows change slowly from frame to frame. This is done with temporal reprojection and motion vectors, at the cost of a little bit of ghosting. Moving the camera doesn't reset the ambient occlusion, as long as it's moving smoothly.
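In spirit, that accumulation is just an exponential moving average over the reprojected history. A minimal sketch with illustrative names (not the actual Use.GPU code), where valid means the reprojected history survived the rejection test:

fn accumulateSample(history: vec4<f32>, current: vec4<f32>, valid: bool) -> vec4<f32> {
  // Blend slowly into valid history; fall back to the fresh sample otherwise.
  let alpha = select(1.0, 0.125, valid);
  return mix(history, current, alpha);
}

The blend factor here is arbitrary: smaller values converge more slowly but smooth more aggressively.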

SSAO motion vectors

Motion vectors example

SSAO normal and occlusion accumulation

Accumulated samples

As Use.GPU doesn't render continuously, you can now use <Loop converge={N}> to decide how many extra frames you want to render after every visual change.

Reprojection requires access to the last frame's depth, normal and samples, and this is trivial to provide. Use.GPU has built-in transparent history for render targets and buffers. This allows for a classic front/back buffer flipping arrangement with zero effort (also, n > 2).

Depth history

You bind this as virtual sources, each accessing a fixed slot history[i], which will transparently cycle whenever you render to its target. Any reimagined GPU API should seriously consider buffer history as a first-class concept. All the modern techniques require it.

Interleaved Gradient Noise

IGN

Rather than use e.g. blue noise and hope the statistics work out, I chose a very precise sampling and blurring scheme. This uses interleaved gradient noise (IGN), and pre-filters samples in alternating 2x2 quads to help diffuse the speckles as quickly as possible. IGN is designed for 3x3 filters, so a more specifically tuned noise generator may work even better, but it's a decent V1.
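The noise itself is a one-liner. This is the standard interleaved gradient noise formula (Jimenez 2014), written out as WGSL for illustration, with pixel being the integer pixel coordinate cast to float:

fn interleavedGradientNoise(pixel: vec2<f32>) -> f32 {
  // Magic constants from the original IGN formulation.
  let magic = vec3<f32>(0.06711056, 0.00583715, 52.9829189);
  return fract(magic.z * fract(dot(pixel, magic.xy)));
}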

Reprojection often doubles as a cheap blur filter, creating free anti-aliasing under motion or jitter. I avoided this however, as the data being sampled includes the bent normals, and this would cause all edges to become rounded. Instead I use a precise bilateral filter based on depth and normal, aided by 3D motion vectors. This means it knows exactly what depth to expect in the last frame, and the reprojected samples remain fully aliased, which is a good thing here. The choice of 3D motion vectors is mainly a fun experiment; it may be an unnecessary luxury.
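The shape of such a bilateral weight is roughly as follows. This is a generic sketch with arbitrarily chosen falloff constants, not the exact weighting used here:

fn bilateralWeight(centerDepth: f32, centerNormal: vec3<f32>,
                   sampleDepth: f32, sampleNormal: vec3<f32>) -> f32 {
  // Reject samples whose depth or orientation disagrees with the center pixel.
  let depthWeight = exp(-abs(sampleDepth - centerDepth) * 32.0);
  let normalWeight = pow(max(dot(centerNormal, sampleNormal), 0.0), 8.0);
  return depthWeight * normalWeight;
}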

SSAO aliased accumulation

Detail of accumulated samples

The motion vectors are based only on the camera motion for now, though there is already the option of implementing custom motion shaders similar to e.g. Unity. For live data viz and procedural geometry, motion vectors may not even be well-defined. Luckily it doesn't matter much: it converges fast enough that artifacts are hard to spot.
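For camera-only motion, a motion vector is just the difference between projecting the same (static) world position with this frame's and last frame's matrices. A hedged sketch, returning a 3D delta in normalized device coordinates:

fn cameraMotionVector(world: vec4<f32>, viewProjection: mat4x4<f32>, previousViewProjection: mat4x4<f32>) -> vec3<f32> {
  // Project the same point with the current and previous view-projection matrices.
  let now = viewProjection * world;
  let before = previousViewProjection * world;
  return now.xyz / now.w - before.xyz / before.w;
}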

The final resolve can then do a bilateral upsample of these accumulated samples, using the original high-res normal and depth buffer:

SSAO upscaled and resolved samples

Upscaled and resolved samples, with overscan trimmed off

Because it's screen-space, the shadows disappear at the screen edges. To remedy this, I implemented a very precise form of overscan. It expands the framebuffer by a constant number of pixels, and expands the projectionMatrix to match. This border is then trimmed off when doing the final resolve. In principle this is pixel-exact, barring GPU quirks. These extra pixels don't go to waste either: they can get reprojected into the frame under motion, reducing visible noise significantly.

In theory this is very simple, as it's a direct scaling of [-1..1] XY clip space. In practice you have to make sure absolutely nothing visual depends on the exact X/Y range of your projectionMatrix, either in aspect ratio or in screen-space units. This required some cleanup on the inside, as Use.GPU has some pretty subtle scaling shaders for 2.5D and 3D points and lines. I imagine this is also why I haven't seen more people do this. But it's definitely worth it.
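The clip-space side of overscan amounts to a tiny adjustment of the projection matrix. A minimal sketch, assuming overscan is the fraction of extra size per axis and the render target is enlarged by the same factor (the exact form used here may differ):

fn overscanProjection(projection: mat4x4<f32>, overscan: f32) -> mat4x4<f32> {
  var m = projection;
  // Shrinking the X/Y scale widens the visible clip-space range, so the
  // enlarged framebuffer covers a border around the original view.
  m[0][0] = m[0][0] / (1.0 + overscan);
  m[1][1] = m[1][1] / (1.0 + overscan);
  return m;
}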

Overall I'm very satisfied with this. Improvements and tweaks can be made aplenty, some performance tuning needs to happen, but it looks great already. It also works in both forward and deferred mode. The shader source is here.

Render Buffers & Passes

The rendering API for passes reflects the way a user wants to think about it, as one logical step in producing a final image. Sub-passes such as shadows or SSAO aren't really separate here, as the correct render cannot be finished without them.

The main entry point here is the <Pass> component, representing such a logical render pass. It sits inside a view, like an <OrbitCamera>, and has some kind of pre-existing render context, like the visible canvas.

<Pass
  lights
  ssao={{ radius: 3, indirect: 0.5 }}
  overscan={0.05}
>
  ...
</Pass>

You can sequence multiple logical passes to add overlays with overlay: true, or even merge two scenes in 3D using the same Z-buffer.

Inside it's a declarative recipe that turns a few flags and options into the necessary arrangement of buffers and passes required. This uses the alt-Live syntax use(…) but you can pretend that's JSX:

const resources = [
  use(ViewBuffer, options),
  lights ? use(LightBuffer, options) : null,
  shadows ? use(ShadowBuffer, options) : null,
  picking ? use(PickingBuffer, options) : null,
  overscan ? use(OverscanBuffer, options) : null,
  ...(ssao ? [
    use(NormalBuffer, options),
    use(MotionBuffer, options),
  ] : []),
  ssao ? use(SSAOBuffer, options) : null,
];
const resolved = passes ?? [
  normals ? use(NormalPass, options) : null,
  motion ? use(MotionPass, options) : null,
  ssao ? use(SSAOPass, options) : null,
  shadows ? use(ShadowPass, options) : null,
  use(DEFAULT_PASS[viewType], options),
  picking ? use(PickingPass, options) : null,
  debug ? use(DebugPass, options) : null,
]

For example, the <SSAOBuffer> will spawn all the buffers necessary to do SSAO.

Notice what is absent here: the inputs and outputs. The render passes are wired up implicitly, because if you had to do it manually, there would only be one correct way. This is the purpose of separating the resources from the passes: it allows everything to be allocated once, up front, so that then the render passes can connect them into a suitable graph with a non-trivial but generally expected topology. They find each other using 'well-known names' like normal and motion, which is how it's done in practice anyway.

Mounted render passes

Render passes in the inspector

This reflects what I am starting to run into more and more: that decomposed systems have little value if everyone has to use them the same way. That can lead to a lot of code noise, and also tie users to unimportant details of the existing implementation. Hence the simple recipe.

But, if you want to sequence your own render exactly, nothing prevents you from using the render components à la carte: the main method of composition is mounting reactive components in Live, like everything else. Your passes work exactly the same as the built-in ones.

I make use of the dynamism of JS to e.g. not care what options are passed to the buffers and passes. The convention is that each should be namespaced so they don't collide. This provides real extensibility for custom use, while paving the cow paths that exist.

It's typical that buffers and passes come in matching pairs. However, one could swap out one variation of a <FooPass> for another, while reusing the same buffer type. Most <FooBuffer> implementations are themselves declarative recipes, with e.g. a <RenderTarget> or two, and perhaps an associated data binding. All the meat—i.e. the dispatches—is in the passes.

It's so declarative that there isn't much left inside <Renderer> itself. It maps logical calls into concrete ones by leveraging Live, and that's reflected entirely in what's there. It only gathers up some data it doesn't know details about, and helps ensure the sequence of compute before render before readback. This is a big clue that renderers really want to be reactive run-times instead.

Bind Group Soup

Use.GPU's initial design goal was "a unique shader for every draw call". This means its data binding fu has mostly been applied to local shader bindings. These apply only to one particular draw, and you bind the data to the shader at the same time as creating it.

This is the useShader hook. There is no separation where you first prepare the binding layout, and as such, you use it like a deferred function call, just like JSX.

// Prepare to call surfaceShader(matrix, ray, normal, size, ...)
const getSurface = useShader(surfaceShader, [
  matrix, ray, normal, size, insideRef, originRef,
  sdf, palette, pbr, ...sources
], defs);

Shader and pipeline reuse is handled via structural hashing behind the scenes: it's merely a happy benefit if two draw calls can reuse the same shader and pipeline, but absolutely not a problem if they don't. As batching is highly encouraged, and large data sets can be rendered as one, the number of draw calls tends to be low.

All local bindings are grouped in two bind groups, static and volatile. The latter allows for the transparent history feature, as well as just-in-time allocated atlases. Static bindings don't need to be 100% static, they just can't change during dispatch or rendering.

WebGPU only has four bind groups total. I previously used the other two for respectively the global view, and the concrete render pass, using up all the bind groups. This was wasteful but an unfortunate necessity, without an easy way to compose them at run-time.

Bind Group      #0      #1        #2        #3
Use.GPU 0.13    View    Pass      Static    Volatile
Use.GPU 0.14    Pass    Static    Volatile  Free

This has been fixed in 0.14, which frees up a bind group. It also means every render pass fully owns its own view. It can pick from a set of pre-provided ones (e.g. overscanned or not), or set a custom one, the same way it finds buffers and other bindings.

Having bind group 3 free also opens up the possibility of a more traditional sub-pipeline, as seen in a classic scene graph renderer. These can handle larger numbers of individual draw calls, all sharing the same shader template, but with different textures and parameters. My goal however is to avoid monomorphizing to this degree, unless it's absolutely necessary (e.g. with the lighting).

This required upgrading the shader linker. Given e.g. a static binding snippet such as:

use '@use-gpu/wgsl/use/types'::{ Light };

@export struct LightUniforms {
  count: u32,
  lights: array<Light>,
};

@group(PASS) @binding(1) var<storage> lightUniforms: LightUniforms;

...you can import it in Typescript like any other shader module, with the @binding as an attribute to be linked. The shader linker will understand struct types like LightUniforms with array<Light> fully now, and is able to produce e.g. a correct minimum binding size for types that cross module boundaries.

The ergonomics of useShader have been replicated here, so that useBindGroupLayout takes a set of these and prepares them into a single static bind group, managing e.g. the shader stages for you. To bind data to the bind group, a render pass delegates via useApplyPassBindGroup: this allows the source of the data to be modularized, instead of requiring every pass to know about every possible binding (e.g. lighting, shadows, SSAO, etc.). That is, while there is a separation between bind group layout and data binding, it's lazy: both are still defined in the same place.

SSAO on voxels

The binding system is flexible enough end-to-end that the SSAO can e.g. be applied to the voxel raytracer from @use-gpu/voxel with zero effort required, as it also uses the shaded technique (with per fragment depth). It has a getSurface(...) shader function that raytraces and returns a surface fragment. The SSAO sampler can just attach its occlusion information to it, by decorating it in WGSL.

WGSL Types

Worth noting, this all derives from previous work on auto-generated structs for data aggregation.

It's cool tech, but it's hard to show off, because it's completely invisible on the outside, and the shader code is all ugly autogenerated glue. There's a presentation up on the site that details it at the lower level, if you're curious.

The main reason I had aggregation initially was to work around the 8 storage buffers limit in WebGPU. The Plot API needed to auto-aggregate all the different attributes of shapes, with their given spread policies, based on what the user supplied.

This allows me to offer e.g. a bulk line drawing primitive where attributes don't waste precious bandwidth on repeated data. Each ends up grouped in structs, taking up only 1 storage buffer, depending on whether it is constant or varying, per instance or per vertex:

<Line
  // Two lines
  positions={[
    [[300, 50], [350, 150], [400, 50], [450, 150]],
    [[300, 150], [350, 250], [400, 150], [450, 250]],
  ]}
  // Of the same color and width
  color={'#40c000'}
  width={5}
/>

<Line
  // Two lines
  positions={[
    [[300, 250], [350, 350], [400, 250], [450, 350]],
    [[300, 350], [350, 450], [400, 350], [450, 450]],
  ]}
  // With color per line
  color={['#ffa040', '#7f40a0']}
  // And width per vertex
  widths={[[1, 2, 2, 1], [1, 2, 2, 1]]}
/>

This involves a comprehensive buffer interleaving and copying mechanism that has to satisfy all the alignment constraints. This then leverages @use-gpu/shader's structType(…) API to generate WGSL struct types at run-time. Given a list of attributes, it returns a virtual shader module with a real symbol table. This is materialized into shader code on demand, and can be exploded into individual accessor functions as well.

Hence data sources in Use.GPU can now have a format of T or array<T> with a WGSL shader module as the type parameter. I already had most of the pieces in place for this, but hadn't quite put it all together everywhere.

Using shader modules as the representation of types is very natural, as they carry all the WGSL attributes and GPU-only concepts. It goes far beyond what I had initially scoped for the linker, as it's all source-code-level, but it was worth it. The main limitation is that type inference only happens at link time, as binding shader modules together has to remain a fast and lazy op.

Native WGSL types are somewhat poorly aligned with the WebGPU API on the CPU side. A good chunk of @use-gpu/core is lookup tables with info about formats and types, as well as alignment and size, so it can all be resolved at run-time. There's something similar for bind group creation, where it has to translate between a few different ways of saying the same thing.

The types I expose instead are simple: TextureSource, StorageSource and LambdaSource. Everything you bind to a shader is either one of these, or a constant (by reference). They carry all the necessary metadata to derive a suitable binding and accessor.

That said, I cannot shield you from the limitations underneath. Texture formats can e.g. be renderable or not, filterable or not, writeable or not, and the specific mechanisms available to you vary. If this involves native depth buffers, you may need to use a full-screen render pass to copy data, instead of just calling copyTextureToTexture. I run into this too, and can only provide a few more convenience hooks.

I did come up with a neat way to genericize these copy shaders, using the existing WGSL type inference I had, souped up a bit. This uses simple selector functions to serve the role of reassembling types. It's finally given me a concrete way to make 'root shaders' (i.e. the entry points) generic enough to support all use. I may end up using something similar to handle the ordinary vertex and fragment entry points, which still have to be provided in various permutations.

* * *

Phew. Use.GPU is always a lot to go over. But its à la carte nature remains and that's great.

For in-house use it's already useful, especially if you need a decent GPU on a desktop anyway. I have been using it for some client work, and it seems to be making people happy. If you want to go off-road from there, you can.

It delivers on combining low-level shader code with its own stock components, without making you reinvent a lot of wheels.

Visit usegpu.live for more and to view demos in a WebGPU capable browser.

PS: I upgraded the aging build of Jekyll that was driving this blog, so if you see anything out of the ordinary, please let me know.

March 19, 2025

In the previous post, we saw how to deploy MySQL HeatWave on Amazon. Multicloud refers to the coordinated use of cloud services from multiple providers. In addition to our previous post, where we deployed MySQL HeatWave on Amazon, we will explore how to connect with another cloud service. Oracle has partnered with Microsoft to offer […]

March 15, 2025

When I searched for a new LoRaWAN indoor gateway, my primary criterion was that it should be capable of running open-source firmware. The ChirpStack Gateway OS firmware caught my attention. It's based on OpenWrt and has regular releases. Its recent 4.7.0 release added support for the Seeed SenseCAP M2 Multi-Platform Gateway, which seemed like an interesting and affordable option for a LoRaWAN gateway.

Unfortunately, this device wasn't available through my usual suppliers. However, TinyTronics did stock the SenseCAP M2 Data Only, which looked to me like exactly the same hardware but with different firmware to support the Helium LongFi Network. Ten minutes before their closing time on a Friday evening, I called their office to confirm whether I could use it as a LoRaWAN gateway on an arbitrary network. I was helped by a guy who was surprisingly friendly given the time of my call, and after a quick search he confirmed that it was indeed the same hardware. After that, I ordered the Helium variant of the gateway.

Upon its arrival, the first thing I did after connecting the antenna and powering it on was to search for the Backup/Flash Firmware entry in Luci's System menu, as explained in Seeed Studio's wiki page about flashing open-source firmware to the M2 Gateway. Unfortunately, the M2 Data Only seemed to have a locked-down version of OpenWrt's Luci interface, without the ability to flash other firmware. There was no SSH access either. I tried to flash the firmware via TFTP, but to no avail.

After these disappointing attempts, I submitted a support ticket to Seeed Studio, explaining my intention to install alternative firmware on the device, as I wasn't interested in the Helium functionality. I received a helpful response from a field application engineer with the high-level steps to do this, although I had to fill in some details myself. After I got stuck on a missing step, my follow-up query was promptly answered with the missing information and an apology for the incomplete instructions, and I finally succeeded in installing the ChirpStack Gateway OS on the SenseCAP M2 Data Only. Here are the detailed steps I followed.

Initial serial connection

Connect the gateway via USB and start a serial connection with a baud rate of 57600. I used GNU Screen for this purpose:

$ screen /dev/ttyUSB0 57600

When the U-Boot boot loader shows its options, press 0 for Load system code then write to Flash via Serial:

/images/sensecap-m2-uboot-menu.png

You'll then be prompted to switch the baud rate to 230400 and press ENTER. I terminated the screen session with Ctrl+a k and reconnected with the new baud rate:

$ screen /dev/ttyUSB0 230400

Sending the firmware with Kermit

Upon pressing ENTER, you'll see the message Ready for binary (kermit) download to 0x80100000 at 230400 bps.... I had never used the Kermit protocol before, but I installed ckermit and found the procedure in a Stack Overflow response to the question How to send boot files over uart. After some experimenting, I found that I needed to use the following commands:

 koan@nov:~/Downloads$ kermit
C-Kermit 10.0 pre-Beta.11, 06 Feb 2024, for Linux+SSL (64-bit)
 Copyright (C) 1985, 2024,
  Trustees of Columbia University in the City of New York.
  Open Source 3-clause BSD license since 2011.
Type ? or HELP for help.
(~/Downloads/) C-Kermit>set port /dev/ttyUSB0
(~/Downloads/) C-Kermit>set speed 230400
/dev/ttyUSB0, 230400 bps
(~/Downloads/) C-Kermit>set carrier-watch off
(~/Downloads/) C-Kermit>set flow-control none
(~/Downloads/) C-Kermit>set prefixing all
(~/Downloads/) C-Kermit>send openwrt.bin

The openwrt.bin file was the firmware image from Seeed's own LoRa_Gateway_OpenWRT firmware. I decided to install this instead of the ChirpStack Gateway OS because it was a smaller image and hence flashed more quickly (although still almost 8 minutes).

/images/sensecap-m2-kermit-send.png

After the file was sent successfully, I didn't see any output when reestablishing a serial connection. After I reported this to Seeed's field application engineer, he replied that the gateway should display a prompt requesting to switch the baud rate back to 57600.

Kermit can also function as a serial terminal, so I just stayed within the Kermit command line and entered the following commands:

(~/Downloads/) C-Kermit>set speed 57600
/dev/ttyUSB0, 57600 bps
(~/Downloads/) C-Kermit>connect
Connecting to /dev/ttyUSB0, speed 57600
 Escape character: Ctrl-\ (ASCII 28, FS): enabled
Type the escape character followed by C to get back,
or followed by ? to see other options.
----------------------------------------------------
## Total Size      = 0x00840325 = 8651557 Bytes
## Start Addr      = 0x80100000
## Switch baudrate to 57600 bps and press ESC ...

And indeed, there was the prompt. After pressing ESC, the transferred image was flashed.

Reboot into the new firmware

Upon rebooting, the device was now running Seeed's open-source LoRaWAN gateway operating system. Luci's menu now included a Backup/Flash Firmware entry in the System menu, enabling me to upload the ChirpStack Gateway OS image:

/images/sensecap-m2-openwrt-new-firmware.png

Before flashing the firmware image, I deselected the Keep settings and retain the current configuration option, as outlined in ChirpStack's documentation for installation on the SenseCAP M2:

/images/sensecap-m2-openwrt-flash.png

Thus, I now have open-source firmware running on my new LoRaWAN gateway, with regular updates in place.

March 06, 2025

Multicloud is a cloud adoption strategy that utilizes services from multiple cloud providers rather than relying on just one. This approach enables organizations to take advantage of the best services for specific tasks, enhances resilience, and helps reduce costs. Additionally, a multicloud strategy offers the flexibility necessary to meet regulatory requirements and increases options for […]

March 04, 2025

At the beginning of the year, we released MySQL 9.2, the latest Innovation Release. Sorry for the delay, but I was busy with the preFOSDEM MySQL Belgian Days and FOSDEM MySQL Belgium Days. Of course, we released bug fixes for 8.0 and 8.4 LTS, but in this post, I focus on the newest release. Within […]

January 31, 2025

Treasure hunters, we have an update! Unfortunately, some of our signs have been removed or stolen, but don’t worry—the hunt is still on! To ensure everyone can continue, we will be posting all signs online so you can still access the riddles and keep progressing. However, there is one exception: the 4th riddle must still be heard in person at Building H, as it includes an important radio message. Keep your eyes on our updates, stay determined, and don’t let a few missing signs stop you from cracking the code! Good luck, and see you at Infodesk K with…

January 29, 2025

Are you ready for a challenge? We’re hosting a treasure hunt at FOSDEM, where participants must solve six sequential riddles to uncover the final answer. Teamwork is allowed and encouraged, so gather your friends and put your problem-solving skills to the test! The six riddles are set up across different locations on campus. Your task is to find the correct locations, solve the riddles, and progress to the next step. No additional instructions will be given after this announcement, it’s up to you to navigate and decipher the clues! To keep things fair, no hints or tips will be given…

January 27, 2025

Core to the Digital Operational Resilience Act is the notion of a critical or important function. When a function is deemed critical or important, DORA expects the company or group to take precautions and measures to ensure the resilience of the company and the markets in which it is active.

But what exactly is a function? When do we consider it critical or important? Is there a differentiation between critical and important? Can an IT function be a critical or important function?

Defining functions

Let's start with the definition of a function. Surely that is defined in the documents, right? Right?

Eh... no. The DORA regulation does not seem to provide a definition for a function. It does, however, refer to the definition of critical function in the Bank Recovery and Resolution Directive (BRRD), aka Directive 2014/59/EU. That's one of the regulations that focus on resolution in case of severe disruptions, bankruptcy or other failures of banks at a national or European level. Delegated regulation EU 2016/778 further provides several definitions that inspired the DORA regulation as well.

In the latter document, we do find the definition of a function:

ā€˜function’ means a structured set of activities, services or operations that are delivered by the institution or group to third parties irrespective from the internal organisation of the institution;

Article 2, (2), of Delegated regulation 2016/778

So if you want to be blunt, you could state that an IT function which only supports its own group (as in, you're not insourcing IT from other companies) is not a function, and thus cannot be a "critical or important function" in DORA's viewpoint.

That is, unless you find that the definitions in previous regulations do not necessarily imply the same interpretation within DORA. After all, DORA does not amend the EU 2016/778 regulation. It amends EC 1060/2009, EU 2012/648, EU 2014/600 aka MiFIR, EU 2014/909 aka CSDR and EU 2016/1011 aka Benchmark Regulation. But none of these has a definition for 'function' at first sight.

So let's humor ourselves and move on. What is a critical function? Is that defined in DORA? Not really, sort-of. DORA has a definition for critical or important function, but let's first look at more distinct definitions.

In the BRRD regulation, this is defined as follows:

ā€˜critical functions’ means activities, services or operations the discontinuance of which is likely in one or more Member States, to lead to the disruption of services that are essential to the real economy or to disrupt financial stability due to the size, market share, external and internal interconnectedness, complexity or cross-border activities of an institution or group, with particular regard to the substitutability of those activities, services or operations;

Article 2, (35), of BRRD 2014/59

This extends the notion of a function with an evaluation of whether it is crucial for the real economy, especially if it were suddenly discontinued. The extension of the definition of function is also confirmed by guidance that the European Single Resolution Board published, namely that "the function is provided by an institution to third parties not affiliated to the institution or group".

The preamble of the Delegated regulation also mentions that its focus is at the safeguarding of the financial stability and the real economy. It gives examples of potential critical functions such as deposit taking, lending and loan services, payment, clearing, custody and settlement services, wholesale funding markets activities, and capital markets and investments activities.

Of course, your IT is supporting your company, and in the case of financial institutions, IT is a very big part of the company. Is IT then not involved in all of this?

It sure is...

Defining services

The Delegated regulation EU 2016/778 in its preamble already indicates that functions are supported by services:

Critical services should be the underlying operations, activities and services performed for one (dedicated services) or more business units or legal entities (shared services) within the group which are needed to provide one or more critical functions. Critical services can be performed by one or more entities (such as a separate legal entity or an internal unit) within the group (internal service) or be outsourced to an external provider (external service). A service should be considered critical where its disruption can present a serious impediment to, or completely prevent, the performance of critical functions as they are intrinsically linked to the critical functions that an institution performs for third parties. Their identification follows the identification of a critical function.

Preamble, (8), Delegated regulation 2016/778

IT within an organization is certainly offering services to one or more of the business units within that financial institution. Once the company has defined its critical functions (or for DORA, "critical or important functions"), it will need to create a mapping of all assets and services that are needed to realize those functions.

Out of that mapping, it is entirely possible that several IT services will be considered critical services. I'm involved in the infrastructure side of things myself, which often means shared services. The delegated regulation already points to it, and a somewhat older guideline from the Financial Stability Board has the following to say about critical shared services:

a critical shared service has the following elements: (i) an activity, function or service is performed by either an internal unit, a separate legal entity within the group or an external provider; (ii) that activity, function or service is performed for one or more business units or legal entities of the group; (iii) the sudden and disorderly failure or malfunction would lead to the collapse of or present a serious impediment to the performance of, critical functions.

FSB guidance on identification of critical functions and critical shared services

For IT organizations, it is thus most important to focus on the services they offer.

Definition of critical or important function

Within DORA, the definition of critical or important function is as follows:

(22) ā€˜critical or important function’ means a function, the disruption of which would materially impair the financial performance of a financial entity, or the soundness or continuity of its services and activities, or the discontinued, defective or failed performance of that function would materially impair the continuing compliance of a financial entity with the conditions and obligations of its authorisation, or with its other obligations under applicable financial services law;

Article 3, (22), DORA

If we compare this definition with the previous ones about critical functions, we notice that it is extended with an evaluation of the impact towards the company - rather than the market. I think it is safe to say that this is the or important part of the critical or important function: whereas a function is critical if its discontinuance has market impact, a function is important if its discontinuance causes material impairment towards the company itself.

Hence, we can consider a critical or important function as one with either market impact (critical) or company impact (important), while still being externally offered (function).

This broader definition means that DORA puts forward more expectations than previous regulation, which is one of the reasons DORA is so impactful for financial institutions.

Implications towards IT

From the above, I'd wager that IT itself is not a "critical or important function", but IT offers services which could be supporting critical or important functions. Hence, it is necessary that the company has a good mapping of the functions and their underlying services, operations and systems. From that mapping, we can then see if those underlying services are crucial for the function or not. If they are, then we should consider those as critical or important systems.

This mapping is mandated by DORA as well:

Financial entities shall identify all information assets and ICT assets, including those on remote sites, network resources and hardware equipment, and shall map those considered critical. They shall map the configuration of the information assets and ICT assets and the links and interdependencies between the different information assets and ICT assets.

Article 8, (4), DORA

as well as:

As part of the overall business continuity policy, financial entities shall conduct a business impact analysis (BIA) of their exposures to severe business disruptions. Under the BIA, financial entities shall assess the potential impact of severe business disruptions by means of quantitative and qualitative criteria, using internal and external data and scenario analysis, as appropriate. The BIA shall consider the criticality of identified and mapped business functions, support processes, third-party dependencies and information assets, and their interdependencies. Financial entities shall ensure that ICT assets and ICT services are designed and used in full alignment with the BIA, in particular with regard to adequately ensuring the redundancy of all critical components.

Article 11, paragraph 2, DORA

In more complex landscapes, it is entirely possible that the mapping is a multi-layered view with different types of systems or services in between, which could make the effort to identify services as critical or important quite challenging.

For instance, it could be that the IT organization has a service catalog, but that this catalog is too broadly defined to carry the critical-or-important indication. Making a more fine-grained service catalog will be necessary to properly evaluate the dependencies, but that also implies that your business (which has defined its critical or important functions) will need to indicate which fine-grained services it depends on, rather than the high-level ones.

In later posts, I'll probably dive deeper into this layered view.

Feedback? Comments? Don't hesitate to get in touch on Mastodon.

January 26, 2025

The regular FOSDEM lightning talk track isn't chaotic enough, so this year we're introducing Lightning Lightning Talks (now with added lightning!). Update: we've had a lot of proposals, so submissions are now closed! Thought of a last minute topic you want to share? Got your interesting talk rejected? Has something exciting happened in the last few weeks you want to talk about? Get that talk submitted to Lightning Lightning Talks! This is an experimental session taking place on Sunday afternoon (13:00 in k1105), containing non-stop lightning fast 5 minute talks. Submitted talks will be automatically presented by our Lightning…

January 17, 2025

As in previous years, some small rooms will be available for Birds of a Feather sessions. The concept is simple: Any project or community can reserve a timeslot (30 minutes or 1 hour) during which they have the room just to themselves. These rooms are intended for ad-hoc discussions, meet-ups or brainstorming sessions. They are not a replacement for a developer room and they are certainly not intended for talks. Schedules: BOF Track A, BOF Track B, BOF Track C. To apply for a BOF session, enter your proposal at https://fosdem.org/submit. Select any of the BOF tracks and mention in…

January 16, 2025

With FOSDEM just a few days away, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch at the weekend, food will be provided. Would you like to be part of the team that makes FOSDEM tick?…