Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

March 28, 2023

For this third article of the series dedicated to how a DBA can find the information they need with MySQL Database Service in Oracle Cloud Infrastructure, we will see how to find the error log.

When using MySQL DBaaS, the DBA doesn’t have direct access to the files on the filesystem. Fortunately, with MySQL 8.0, the error log is also available in Performance_Schema.

This is exactly where, when using MDS in OCI, you will find the same information that is present in the error log file:

select * from (select * from performance_schema.error_log order by logged desc limit 10) a order by logged\G
*************************** 1. row ***************************
    LOGGED: 2023-03-19 08:41:09.950266
      PRIO: System
      DATA: X Plugin ready for connections. Bind-address: '' port: 33060, socket: /var/run/mysqld/mysqlx.sock
*************************** 2. row ***************************
    LOGGED: 2023-03-19 08:41:09.950328
      PRIO: System
      DATA: /usr/sbin/mysqld: ready for connections. Version: '8.0.32-u1-cloud'  socket: '/var/run/mysqld/mysql.sock'  port: 3306  MySQL Enterprise - Cloud.
*************************** 3. row ***************************
    LOGGED: 2023-03-19 08:41:09.950342
      PRIO: System
      DATA: Admin interface ready for connections, address: ''  port: 7306
*************************** 4. row ***************************
    LOGGED: 2023-03-19 08:51:09.000200
      PRIO: Note
      DATA: DISK: mount point='/db', available=84.9G, total=99.9G, used=15.1%, low limit=4.0G, critical=2.0G, warnings=23.2G/13.6G/8.8G
*************************** 5. row ***************************
    LOGGED: 2023-03-19 10:49:18.394291
      PRIO: Warning
      DATA: IP address '' could not be resolved: Name or service not known
*************************** 6. row ***************************
    LOGGED: 2023-03-19 10:49:18.452995
      PRIO: Warning
      DATA: Can't set mandatory_role: There's no such authorization ID public@%.
*************************** 7. row ***************************
    LOGGED: 2023-03-19 10:52:13.818505
      PRIO: Note
      DATA: Plugin mysqlx reported: '2.1: Maximum number of authentication attempts reached, login failed.'
*************************** 8. row ***************************
    LOGGED: 2023-03-19 18:52:16.600274
      PRIO: Note
      DATA: Thread pool closed connection id 39 for `admin`@`%` after 28800.004878 seconds of inactivity. Attributes: priority:normal, type:normal, last active:2023-03-19T10:52:16.595189Z, expired:2023-03-19T18:52:16.595199Z (4868 microseconds ago)
*************************** 9. row ***************************
    LOGGED: 2023-03-19 18:52:16.600328
      PRIO: Note
      DATA: 'wait_timeout' period of 28800 seconds was exceeded for `admin`@`%`. The idle time since last command was too long.
*************************** 10. row ***************************
    LOGGED: 2023-03-20 13:47:28.843589
      PRIO: Warning
      DATA: IP address '' could not be resolved: Name or service not known
10 rows in set (0.0015 sec)

The example above lists the last 10 entries in the error log.

It’s possible to get statistics on the error log entries much more easily than by parsing the file with sed and awk:

select subsystem, count(*) 
  from performance_schema.error_log 
  group by subsystem order by subsystem;
+-----------+----------+
| subsystem | count(*) |
+-----------+----------+
| Health    |      112 |
| InnoDB    |     1106 |
| RAPID     |       51 |
| Repl      |        4 |
| Server    |      483 |
+-----------+----------+
5 rows in set (0.0018 sec)

select prio, count(*) 
  from performance_schema.error_log 
  group by prio order by prio;
+---------+----------+
| prio    | count(*) |
+---------+----------+
| System  |      105 |
| Error   |        2 |
| Warning |       50 |
| Note    |     1599 |
+---------+----------+
4 rows in set (0.0014 sec)

The error log provides a lot of information about how healthy your system is: the Health Monitor, InnoDB, replication, authentication failures, etc.

For example, we can see the disk usage (see the previous post) in the error_log table too:

select * from performance_schema.error_log where subsystem="Health" 
   and data like 'DISK:%' order by logged desc limit 4\G
*************************** 1. row ***************************
    LOGGED: 2023-03-19 08:51:09.000200
      PRIO: Note
      DATA: DISK: mount point='/db', available=84.9G, total=99.9G, used=15.1%, 
                  low limit=4.0G, critical=2.0G, warnings=23.2G/13.6G/8.8G
*************************** 2. row ***************************
    LOGGED: 2023-03-17 15:24:57.000133
      PRIO: Note
      DATA: DISK: mount point='/db', available=84.9G, total=99.9G, used=15.1%,
                  low limit=4.0G, critical=2.0G, warnings=23.2G/13.6G/8.8G
*************************** 3. row ***************************
    LOGGED: 2023-03-16 19:24:57.000122
      PRIO: Note
      DATA: DISK: mount point='/db', available=74.9G, total=99.9G, used=25.1%,
                  low limit=4.0G, critical=2.0G, warnings=23.2G/13.6G/8.8G
*************************** 4. row ***************************
    LOGGED: 2023-03-16 16:34:57.000175
      PRIO: Note
      DATA: DISK: mount point='/db', available=46.7G, total=99.9G, used=53.2%,
                  low limit=4.0G, critical=2.0G, warnings=23.2G/13.6G/8.8G

The log_error_verbosity variable is set to 3 in MySQL Database Service, meaning that errors, warnings, and informational messages are all logged.

These are the configuration settings related to the error log in MDS:

select * from performance_schema.global_variables
         where variable_name like 'log_error%';
+----------------------------+-------------------------------------------------------+
| VARIABLE_NAME              | VARIABLE_VALUE                                        |
+----------------------------+-------------------------------------------------------+
| log_error                  | /db/log/error.log                                     |
| log_error_services         | log_filter_internal; log_sink_internal; log_sink_json |
| log_error_suppression_list | MY-012111                                             |
| log_error_verbosity        | 3                                                     |
+----------------------------+-------------------------------------------------------+

In MySQL Database Service, we can also see that the error MY-012111 is not logged:

show global variables like 'log_error_sup%';
+----------------------------+-----------+
| Variable_name              | Value     |
+----------------------------+-----------+
| log_error_suppression_list | MY-012111 |
+----------------------------+-----------+

This error is related to MySQL trying to access a missing tablespace:

$ perror MY-012111
      Trying to access missing tablespace %lu

However, users cannot change any settings related to the error log, either with SET GLOBAL or by creating an MDS configuration in the OCI console.


In MDS you don’t have access to the error log file, but its content is available in Performance_Schema, where it is easier to parse using SQL.

It’s a really good source of information that I invite every user to inspect regularly.

March 27, 2023

Book signings at the Brussels Book Fair this Saturday, April 1st

This Saturday, April 1st, I will be signing my novel and my short story collection at the Brussels Book Fair.

Put like that, it’s not a very funny April Fools’ joke; the funnier part is that I will be at the Livre Suisse stand (stand 334). Yes, a Belgian pretending to be Swiss in order to sign books in Brussels — that’s exactly the kind of mess typical of my country. I will probably be unmasked when I pull out my bar of “real” (Belgian!) chocolate.

There are jokes, as Coluche said, that are funnier when told by a Swiss…

In short, see you from 1:30 pm to 3 pm and from 5 pm to 6:30 pm at stand 334 (Livre Suisse) in the Gare Maritime. It’s always a pleasure for me to meet readers, some of whom have been following me for years. It’s going to be great!

An engineer and writer, I explore the impact of technology on people. Subscribe to my writings in French by email or RSS. For my writings in English, subscribe to the English-language newsletter or the full RSS feed. Your address is never shared, and it is deleted when you unsubscribe.

To support me, buy my books (if possible from your local bookshop)! I have just published a short story collection that should make you laugh and think.

March 25, 2023

New booster in Autoptimize Pro 1.3: a third-party JS component that can significantly improve performance for visitors going from one page to another on your site, by preloading a page based on visitor behavior. Do take into account that it could increase the number of page requests, as the preloaded page might end up not being requested after all. More info on


March 24, 2023

In the time ahead I will be cool and detached. Because we are heading into war.

I fear that I will be just as cool and detached when this war reaches our regions. I hope that will not happen. I really hope so.

But I will try to stay cool and detached. That, and only that, gives me some certainty about myself: that what I tell, I tell in a way that I can tell it the way I want to tell it.

We have to realize that war means we can no longer simply tell things the way we want to tell them. That all our stories are told through the lens of what can be told. That this lens is no longer the lens of truth, but merely the lens of what can be told.

We are all heading in that direction.

I don’t think I am exaggerating. Though I do think I am early.

The eighty thousand soldiers currently being deployed by Ukraine at Bakhmut are a message to Russia before China launches its attempt at peace negotiations.

Conversely, Avdiivka is Russia’s move to make clear to Ukraine that Russia will take it should such negotiations fail.

In other words, the positions have been staked out. Russia will take Avdiivka while Ukraine tries to retake Bakhmut.

March 21, 2023

This article is the second in the new series dedicated to how a DBA can find the information they need with MySQL Database Service in Oracle Cloud Infrastructure.

The first article was dedicated to backups; this one is about disk space utilization.

This time we have two options to retrieve useful information related to disk space:

  1. Metrics
  2. Performance_Schema


In the OCI Web Console, there is a dedicated metric for the disk usage:

As with backups, we can create alarms on this metric to be informed before we reach the limit of the DB System’s capacity:

We will create two different alerts (see Scott’s article about alerts): the first one, a warning when disk space usage reaches 50%; the second, a critical alert when disk space utilization reaches 80%:

And if the system reaches 50% of disk capacity, we get the mail:


The MySQL DBA also has access to the disk space usage via the SQL interface, using Performance_Schema. In MySQL Database Service, Performance_Schema provides some extra tables that are part of the Health Monitor:

select * from performance_schema.health_block_device order by timestamp desc limit 10;
| xfs    | 2023-03-16 12:27:56 | 107317563392 |     89610473472 |       16.50 | /db         |
| xfs    | 2023-03-16 12:26:56 | 107317563392 |     89610489856 |       16.50 | /db         |
| xfs    | 2023-03-16 12:25:56 | 107317563392 |     89610485760 |       16.50 | /db         |
| xfs    | 2023-03-16 12:24:56 | 107317563392 |     89610485760 |       16.50 | /db         |
| xfs    | 2023-03-16 12:23:56 | 107317563392 |     89610485760 |       16.50 | /db         |
| xfs    | 2023-03-16 12:22:56 | 107317563392 |     89610489856 |       16.50 | /db         |
| xfs    | 2023-03-16 12:21:56 | 107317563392 |     89610489856 |       16.50 | /db         |
| xfs    | 2023-03-16 12:20:57 | 107317563392 |     89610485760 |       16.50 | /db         |
| xfs    | 2023-03-16 12:19:56 | 107317563392 |     89610485760 |       16.50 | /db         |
| xfs    | 2023-03-16 12:18:56 | 107317563392 |     89610485760 |       16.50 | /db         |
10 rows in set (0.0028 sec)

If you take a look at the other tables from the Health Monitor that are related to disk, you will see that those are created for the MDS operators.

Using performance_schema you can also find the size of your dataset and the space used on disk:

SELECT format_bytes(sum(data_length)) DATA_SIZE,
       format_bytes(sum(index_length)) INDEX_SIZE,
       format_bytes(sum(data_length+index_length)) TOTAL_SIZE,  
       format_bytes(sum(data_free)) DATA_FREE,
       format_bytes(sum(FILE_SIZE)) FILE_SIZE,
       format_bytes((sum(FILE_SIZE)/10 - (sum(data_length)/10 + 
                     sum(index_length)/10))*10) WASTED_SIZE
FROM information_schema.TABLES as t
JOIN information_schema.INNODB_TABLESPACES as it    
  ON = concat(table_schema,"/",table_name)
  ORDER BY (data_length + index_length);
| DATA_SIZE | INDEX_SIZE | TOTAL_SIZE | DATA_FREE | FILE_SIZE | WASTED_SIZE |
| 2.37 GiB  | 4.70 GiB   | 7.07 GiB   | 43.00 MiB | 7.75 GiB  | 694.17 MiB  |

But don’t forget that the disk also holds plenty of other files: redo logs, undo logs, binary logs, …


In the result of the previous SQL statement, we can see in the last column (WASTED_SIZE) that there is almost 700 MiB of wasted disk space. This column represents gaps in the tablespaces.
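As a quick sanity check, the WASTED_SIZE arithmetic can be reproduced from the rounded figures in the result above (a sketch with approximate values, not part of the original query):

```python
# Rough check of WASTED_SIZE = FILE_SIZE - (DATA_SIZE + INDEX_SIZE),
# using the rounded GiB figures printed above, so the result is approximate.
GiB = 1024 ** 3
MiB = 1024 ** 2

data_length = 2.37 * GiB    # DATA_SIZE
index_length = 4.70 * GiB   # INDEX_SIZE
file_size = 7.75 * GiB      # FILE_SIZE on disk

wasted = file_size - (data_length + index_length)
print(round(wasted / MiB))  # ~696 MiB, close to the 694.17 MiB reported
```

The small difference comes only from the rounding of the displayed GiB values.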

Let’s find out which tables are responsible and how to recover the space:

SELECT NAME, TABLE_ROWS, format_bytes(data_length) DATA_SIZE,
       format_bytes(index_length) INDEX_SIZE,     
       format_bytes(data_length+index_length) TOTAL_SIZE,
       format_bytes(data_free) DATA_FREE,
       format_bytes(FILE_SIZE) FILE_SIZE,
       format_bytes((FILE_SIZE/10 - (data_length/10 + 
                     index_length/10))*10) WASTED_SIZE
FROM information_schema.TABLES as t  
JOIN information_schema.INNODB_TABLESPACES as it
  ON = concat(table_schema,"/",table_name) 
  ORDER BY (data_length + index_length) desc LIMIT 10;
| NAME                        | TABLE_ROWS | DATA_SIZE  | INDEX_SIZE | TOTAL_SIZE | DATA_FREE  | FILE_SIZE  | WASTED_SIZE |
| airportdb/booking           |   54082619 | 2.11 GiB   | 4.62 GiB   | 6.74 GiB   | 4.00 MiB   | 7.34 GiB   | 615.03 MiB  |
| airportdb/weatherdata       |    4617585 | 215.80 MiB |    0 bytes | 215.80 MiB | 7.00 MiB   | 228.00 MiB | 12.20 MiB   |
| airportdb/flight            |     461286 | 25.55 MiB  | 73.64 MiB  | 99.19 MiB  | 4.00 MiB   | 108.00 MiB | 8.81 MiB    |
| airportdb/seat_sold         |     462241 | 11.52 MiB  |    0 bytes | 11.52 MiB  | 4.00 MiB   | 21.00 MiB  | 9.48 MiB    |
| airportdb/passengerdetails  |      35097 | 4.52 MiB   |    0 bytes | 4.52 MiB   | 4.00 MiB   | 12.00 MiB  | 7.48 MiB    |
| airportdb/passenger         |      36191 | 2.52 MiB   | 1.52 MiB   | 4.03 MiB   | 4.00 MiB   | 12.00 MiB  | 7.97 MiB    |
| airportdb/airplane_type     |        302 | 1.52 MiB   |    0 bytes | 1.52 MiB   | 4.00 MiB   | 9.00 MiB   | 7.48 MiB    |
| airportdb/airport_geo       |       9561 | 1.52 MiB   |    0 bytes | 1.52 MiB   | 4.00 MiB   | 11.00 MiB  | 9.48 MiB    |
| airportdb/flightschedule    |       9633 | 528.00 KiB | 736.00 KiB | 1.23 MiB   | 4.00 MiB   | 9.00 MiB   | 7.77 MiB    |
| airportdb/airport           |       9698 | 448.00 KiB | 656.00 KiB | 1.08 MiB   | 4.00 MiB   | 9.00 MiB   | 7.92 MiB    |

We can see that it is in the airportdb.booking table that we waste the most disk space. Optimizing that table (note that this is not an online operation!) will recover some of the wasted disk space:

optimize table airportdb.booking;
| Table             | Op       | Msg_type | Msg_text                                                          |
| airportdb.booking | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| airportdb.booking | optimize | status   | OK                                                                |
2 rows in set (14 min 45.5530 sec)

set information_schema_stats_expiry=0;

SELECT NAME, TABLE_ROWS, format_bytes(data_length) DATA_SIZE,
       format_bytes(index_length) INDEX_SIZE,     
       format_bytes(data_length+index_length) TOTAL_SIZE,
       format_bytes(data_free) DATA_FREE,
       format_bytes(FILE_SIZE) FILE_SIZE,
       format_bytes((FILE_SIZE/10 - (data_length/10 + 
                     index_length/10))*10) WASTED_SIZE
FROM information_schema.TABLES as t  
JOIN information_schema.INNODB_TABLESPACES as it
  ON = concat(table_schema,"/",table_name) 
  ORDER BY (data_length + index_length) desc LIMIT 10;
| NAME                        | TABLE_ROWS | DATA_SIZE  | INDEX_SIZE | TOTAL_SIZE | DATA_FREE  | FILE_SIZE  | WASTED_SIZE |
| airportdb/booking           |   54163810 | 2.59 GiB   | 2.72 GiB   | 5.31 GiB   | 4.00 MiB   | 5.37 GiB   | 63.06 MiB   |
| airportdb/weatherdata       |    4617585 | 215.80 MiB |    0 bytes | 215.80 MiB | 7.00 MiB   | 228.00 MiB | 12.20 MiB   |
| airportdb/flight            |     461286 | 25.55 MiB  | 73.64 MiB  | 99.19 MiB  | 4.00 MiB   | 108.00 MiB | 8.81 MiB    |
| airportdb/seat_sold         |     462241 | 11.52 MiB  |    0 bytes | 11.52 MiB  | 4.00 MiB   | 21.00 MiB  | 9.48 MiB    |
| airportdb/passengerdetails  |      35097 | 4.52 MiB   |    0 bytes | 4.52 MiB   | 4.00 MiB   | 12.00 MiB  | 7.48 MiB    |
| airportdb/passenger         |      36191 | 2.52 MiB   | 1.52 MiB   | 4.03 MiB   | 4.00 MiB   | 12.00 MiB  | 7.97 MiB    |
| airportdb/airplane_type     |        302 | 1.52 MiB   |    0 bytes | 1.52 MiB   | 4.00 MiB   | 9.00 MiB   | 7.48 MiB    |
| airportdb/airport_geo       |       9561 | 1.52 MiB   |    0 bytes | 1.52 MiB   | 4.00 MiB   | 11.00 MiB  | 9.48 MiB    |
| airportdb/flightschedule    |       9633 | 528.00 KiB | 736.00 KiB | 1.23 MiB   | 4.00 MiB   | 9.00 MiB   | 7.77 MiB    |
| airportdb/airport           |       9698 | 448.00 KiB | 656.00 KiB | 1.08 MiB   | 4.00 MiB   | 9.00 MiB   | 7.92 MiB    |
| airportdb/airplane          |       5583 | 224.00 KiB | 144.00 KiB | 368.00 KiB |    0 bytes | 448.00 KiB | 80.00 KiB   |
| airportdb/employee          |       1000 | 208.00 KiB | 48.00 KiB  | 256.00 KiB |    0 bytes | 336.00 KiB | 80.00 KiB   |
| airportdb/airline           |        113 | 16.00 KiB  | 32.00 KiB  | 48.00 KiB  |    0 bytes | 144.00 KiB | 96.00 KiB   |
| airportdb/flight_log        |          0 | 16.00 KiB  | 16.00 KiB  | 32.00 KiB  |    0 bytes | 128.00 KiB | 96.00 KiB   |
| sys/sys_config              |          6 | 16.00 KiB  |    0 bytes | 16.00 KiB  |    0 bytes | 112.00 KiB | 96.00 KiB   |
| airportdb/airport_reachable |          0 | 16.00 KiB  |    0 bytes | 16.00 KiB  |    0 bytes | 112.00 KiB | 96.00 KiB   |

We can see that we saved several hundred MiB.


Now you know how to find the information to monitor your disk space and be alerted directly via OCI’s alerting system or by using a third party tool.

By controlling the disk space usage, you know exactly when it’s time to expand the disk space of your DB system (or migrate to a bigger Shape).

March 20, 2023

One or two times a month I get the following question: Why don't you just use a Static Site Generator (SSG) for your blog?

Well, I'm not gonna lie, being the founder and project lead of Drupal definitely plays a role in why I use Drupal for my website. Me not using Drupal would be like Coca-Cola's CEO drinking Pepsi, a baker settling for supermarket bread, or a cabinet builder furnishing their home entirely with IKEA. People would be confused.

Of course, if I wanted to use a static site, I could. Drupal is frequently used as the content repository for Gatsby.js, Next.js, and many other frameworks.

The main reason I don't use a SSG is that I don't love their publishing workflow. It's slow. With Drupal, I can make edits, hit save, and immediately see the result. With a static site generator it becomes more complex. I have to commit Markdown to Git, rebuild my site, and push updates to a web server. I simply prefer the user-friendly authoring of Drupal.

A collage of screenshots featuring different static site generators’ websites, with prominent text emphasizing their marketing messaging on performance: ‘fast page loads’, ‘peak performance’, ‘unparalleled speed’, ‘full speed’, and more.

Proponents of static sites will be quick to point out that static sites are "much faster". Personally, I find that misleading. My Drupal-powered site is faster than most static sites, including the official websites of leading static site generators.

| Technology | URL tested | Page load time |
| Drupal     |            | seconds        |
| Gatsby.js  |            | seconds        |
| Next.js    |            | seconds        |
| Jekyll     |            | seconds        |
| Eleventy   |            | seconds        |
| Docusaurus |            | seconds        |
| Svelte Kit |            | seconds        |

In practice, most sites serve their content from a cache. As a result, we're mainly measuring (1) the caching mechanism, (2) last mile network performance and (3) client-side rendering. Of these three, client-side rendering impacts performance the most.

My site is the fastest because its HTML/CSS/JavaScript is the simplest and fastest to render. I don't use external web fonts, track visitors, or use a lot of JavaScript. Drupal also optimizes performance with lazy loading of images, CSS/JavaScript aggregation, and more.

In other words, the performance of a website depends more on the HTML, CSS, and JavaScript code and assets (images, video, fonts) than on the underlying technology used.

The way an asset is cached can also affect its performance. Using a reverse proxy cache, such as Varnish, is faster than caching through the filesystem. And using a global CDN yields even faster results. A CMS that uses a CDN for caching can provide better performance than a SSG that only stores assets on a filesystem.

To be clear, I'm not against SSGs. I can understand the use cases for them, and there are plenty of situations where they are a great choice.

In general, I believe that any asset that can be a static asset, should be a static asset. But I also believe that any dynamically generated asset that is cached effectively has become a static asset. A page that is created dynamically by a CMS and is cached efficiently is a static asset. Both a CMS and a SSG can generate static assets.
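The claim that a well-cached dynamic page effectively becomes a static asset can be sketched with a tiny in-memory cache; this is a stand-in for Varnish or a CDN edge, and the function names are purely illustrative:

```python
from functools import lru_cache

render_count = 0  # how many times the "CMS" actually built the page

@lru_cache(maxsize=None)
def render_page(path: str) -> str:
    # Stand-in for a CMS building a page dynamically (queries, templates, ...).
    global render_count
    render_count += 1
    return f"<html><body>Page for {path}</body></html>"

# Three requests for the same page: only the first one does real work;
# the other two are served from cache, just like a static file would be.
for _ in range(3):
    html = render_page("/blog/hello")

print(render_count)  # 1
```

After the first request, every subsequent hit is a cache read, which is exactly what a static file server does.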

In short, I simply prefer the authoring experience of a CMS, and I keep my site fast by keeping the generated HTML code lightweight and well-cached.

What really tips the scale for me is that I enjoy having a server-side request handler. Now, I know that this might sound like the nerdiest closing statement ever, but trust me: server-side request handlers bring the fun. Over the years they have enabled me to do fun and interesting things on my websites. I'm not stopping the fun anytime soon!

In this new series of articles we will explore the different sources of information available when using MySQL Database Service on OCI to effectively perform your daily DBA job.

Of course, there are far fewer things to take care of, like backups, upgrades, and operating system and hardware maintenance, …

But as a serious DBA, you want to know the status of all this and maintain some control.

Some information is available on OCI’s webconsole and some in Performance_Schema and Sys.

If you use MySQL Shell for Visual Studio Code, you have the possibility to see an overview of your server using the Performance Dashboard:

But today we will take a look at the backup, a very important responsibility of the DBA.

When you use MySQL Database Service on OCI, you can define the backup policy at the DB Instance’s creation. You can always modify it later:

In MDS, the backups are online backups using block volume snapshots. For more information, you can check the online manual.

What is important is to monitor the backups: check that they succeeded, and compare their size and execution time.

We can see that the recent backup is way smaller than the previous one. Let’s have a look at the details by clicking on it:

We can see that this is an incremental backup, so this is a normal and expected behavior.

In the Metrics section, we have some metrics related to backups that are useful for this purpose:

But don’t forget that as this is a snapshot, even if you reduce the size of your database, the full backup size won’t really shrink:

Disk space usage and total backup size.

We can see above that even if we have deleted a lot of data after a backup, the total backup size didn’t shrink.

But of course the next backup is incremental and its size is small:


You can also create an alert for when a backup doesn’t happen, but note that it only works if backups are enabled.

You can follow Scott’s article on how to create a new alarm.

This is the overview of the created alarm:

And if we don’t have a backup for one day, we receive an email in our mailbox like the one below:


You can now see when your backups are made, which type of backup they are and their size. You also know how to create an alarm related to your backups.

In the next article, we will check the disk space consumption of your MySQL DB System on OCI.

March 18, 2023

The starting point is the following:

  • A SPA tub (hot tub) that you want to keep at ~40 °C with an electric pump that heats the water. A SPA tub is not very expensive.
  • Solar panels that produce (much) more than enough energy for your household. This is expensive, but you have it for other reasons anyway.
  • A home battery. This is expensive, but you have it for other reasons anyway.
  • Lots of insulation for your tub (fortunately not expensive).
  • Feeding back to the grid earns very little, and with an old meter you cannot run it backwards (so you already have a digital meter).
    • So we had better use the energy ourselves.

First of all, insulate your SPA tub as much as possible. Also choose a SPA tub in dark colors, so that when the sun shines, as much heat as possible is absorbed.

The bottom must be insulated, for example by placing foam puzzle mats under your tub, and possibly other insulation materials as well. The thin layer of insulation that comes with the cheap SPA tubs is not enough.

There are, for example, insulation mats that are used under parquet floors. You cannot insulate too much; more is always better. The mats will also give the tub a softer bottom. Without them, you will lose about 10% to heating the ground.

You definitely also want an energy-saving cover for your SPA tub. Without that cover you will lose about 30% to heating the air. If you put your tub indoors, you instantly have a hefty electric heater for that room.

The initial fill of your tub is best done with warm tap water. Unless you heat that water electrically anyway, of course; then it makes little difference whether you let the tub’s pump do it or not.

Otherwise, the total energy required is rarely, if ever, achievable with the entire daily yield of your solar panels. Keep in mind that heating the water draws a constant 2 to 3 kW, and that at that rate you heat the full tub by about one degree per hour.

So about 8 hours of sun on your solar panels heats your tub by roughly 8 °C, maybe 10 °C. Perhaps a bit more when everything is very well insulated or when the tub is indoors? In other words, you either need several days, or you will have to keep heating through the night, leaving your home battery uncharged, so you end up buying electricity from the grid. We don’t want that.
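The numbers above can be checked with a back-of-the-envelope calculation; the tub volume (~1,000 litres) and the ~50% real-world heat loss are my assumptions, not figures from the post:

```python
# Back-of-the-envelope check of the "about 1 degree per hour" claim.
# Assumptions (mine, not the author's): a ~1,000 litre tub, a 2.5 kW heater,
# and real-world losses eating roughly half of the input heat.
SPECIFIC_HEAT_WATER = 4186      # J per kg per degree C
volume_l = 1000                 # litres of water (~1 kg each)
heater_kw = 2.5

energy_per_degc_kwh = volume_l * SPECIFIC_HEAT_WATER / 3.6e6
hours_per_degc_ideal = energy_per_degc_kwh / heater_kw
hours_per_degc_real = hours_per_degc_ideal / 0.5   # ~50% losses

print(round(energy_per_degc_kwh, 2))   # ~1.16 kWh to raise the water 1 degree
print(round(hours_per_degc_real, 1))   # ~0.9 h per degree, close to 1 degree/hour

# 8 hours of sun at 2.5 kW, with the same losses:
degc_per_day = 8 / hours_per_degc_real
print(round(degc_per_day))             # ~9 degrees, matching the 8-10 degree range
```

With a larger tub or higher losses, the figure drifts toward the one degree per hour the post describes.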

The startup cost is therefore a full tub of warm water. That is not nothing, so you want to avoid it. That is also why you must keep your filters properly clean (at least every three days). You should also use chlorine tablets and make sure the pH stays at 7.6. You don’t want to sit in dirty water, do you?

The idea is to look at the tub as a battery. Physics tells us that the heated water cools down just as slowly as it heats up: water retains heat well. That is why we pay so much attention to insulating the tub. That is how it becomes a battery.

You probably want to get into your tub around 9 or 10 pm. By then it has to be 40 °C. It is a SPA; it has to be nice and hot, right?

You don’t want to let the tub fall all the way back to ambient temperature (unless it is summer and 40 °C outside, but then you probably want colder water instead). So at night you need your home battery. After your evening use, you keep the tub at about 35 °C. Thanks to the insulation, your tub will drop from about 40 °C to 35 °C by around 6 am. This of course also depends on the ambient temperature at night. Without insulation, that already happens around 2 or 3 am, and your home battery will be completely used up.

Around 9 am you (sometimes) have sun again, so you can use your solar panels to win those 5 °C back. You also want to recharge part of your home battery, so that it can keep your SPA tub at temperature during the following night and in the evening when you want to use it.

Without a home battery, I don’t think it is possible to keep a SPA tub warm without buying electricity from the grid.

In other words: you had best run your washing machine and dryer when it rains, the day after your home battery was fully charged, and when you don’t want to use your tub in the rain anyway.

PS: White clouds mean a small energy yield (barely enough, even; here in March about 1.5 kW). Dark clouds mean nothing. Sunny obviously means a lot of energy (here in March sometimes 4 to 6 kW and more).

PS: Charging an electric car and keeping such a SPA tub warm, both on solar panels? I think you can forget that. Unless you have a very large roof plus a football field full of panels, and a home battery that costs more than an expensive luxury car.

March 17, 2023

Critical CSS (either through Autoptimize with your own Critical CSS account or through Autoptimize Pro, which includes Critical CSS) requires WordPress’ scheduling system in order to communicate with on a regular basis. In some cases this does not work, and you might see this notification in your WordPress dashboard. If this is the case, go through these steps to...


On behalf of Acquia I’m currently working on Drupal’s next big leap: Automatic Updates & Project Browser — both are “strategic initiatives”.

In November, I started helping out the team led by Ted Bowman that’s been working on it non-stop for well over 1.5 years (!): see d.o/project/automatic_updates. It’s an enormous undertaking, with many entirely new challenges — as this post will show.

For a sense of scale: more people of Acquia’s “DAT” Drupal Acceleration Team have been working on this project than the entire original DAT/OCTO team back in 2012!

The foundation for both will be the (API-only, no UI!) package_manager module, which builds on top of the php-tuf/composer-stager library. We’re currently working hard to get that module committed to Drupal core before 10.1.0-alpha1.

Over the last few weeks, we managed to solve almost all of the remaining alpha blockers (which block the core issue that will add package_manager to Drupal core as an alpha-experimental module). One of those was a random test failure on DrupalCI, whose failure frequency was increasing over time!

A rare random failure may be acceptable, but at this point, ~90% of test runs were failing on one or more of the dozens of Kernel tests … but always a different combination. Repeated investigations over the course of a month had not led us to the root cause. But now that the failure rate had reached new heights, we had to solve this. It brought the team’s productivity to a halt — imagine what damage this would have done to Drupal core’s progress!

Prior research, combined with the fact that the failure rate had suddenly gone up, left only one explanation: this had to be a bug (a race condition) in Composer itself, because we were now invoking many more composer commands during test execution.

Once we changed focus to composer itself, the root cause became obvious: Composer tries to ensure the temporary directory is writable and avoids conflicts by using microtime(). That function, confusingly, can return the time at microsecond resolution, but defaults to mere milliseconds — see for yourself.

With sufficiently high concurrency (up to 32 concurrent invocations on DrupalCI!), two composer commands could be executed on the exact same millisecond:

// Check system temp folder for usability as it can cause weird runtime issues otherwise
Silencer::call(static function () use ($io): void {
    $tempfile = sys_get_temp_dir() . '/temp-' . md5(microtime());
    if (!(file_put_contents($tempfile, __FILE__)
        && (file_get_contents($tempfile) === __FILE__)
        && unlink($tempfile)
        && !file_exists($tempfile))) {
        $io->writeError(sprintf('PHP temp directory (%s) does not exist or is not writable to Composer. Set sys_temp_dir in your php.ini', sys_get_temp_dir()));
    }
});

— src/Composer/Console/Application.php in Composer 2.5.4

We could switch to microtime(TRUE) for microseconds (reducing collision probability 1000-fold) or hrtime() (reducing it by a factor of a million). But it would be more effective to avoid collisions altogether. And that's possible: each Composer command always runs in its own process.

Simply changing sys_get_temp_dir() . '/temp-' . md5(microtime()) to sys_get_temp_dir() . '/temp-' . getmypid() . '-' . md5(microtime()) is sufficient to safeguard against collisions when using Composer in high-concurrency contexts.
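The same scheme can be sketched in Python (an illustration of the idea, not Composer's PHP; `unique_tempname` is a hypothetical helper): two processes may draw the same timestamp, but they can never share a PID at the same instant, so the combined name cannot collide across concurrent processes.

```python
import hashlib
import os
import tempfile
import time

def unique_tempname() -> str:
    """Build a temp file name that is unique across concurrent
    processes: the PID disambiguates processes that happen to
    draw the exact same timestamp."""
    stamp = hashlib.md5(str(time.time()).encode()).hexdigest()
    return os.path.join(tempfile.gettempdir(), f"temp-{os.getpid()}-{stamp}")
```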

So that single line change is what I proposed in a Composer PR a few days ago. Earlier today it was merged into the 2.5 branch — meaning it should ship in the next version!

Eventually we’ll be able to remove our work-around. But for now, this was one of the most interesting challenges along the way :)

Update 2023-03-26

Shipped in Composer 2.5.5 on March 21, 2023!

March 16, 2023

A graph showing the state of the Digital Experience Platforms market in 2023, with vendors plotted on a grid based on their ability to execute and completeness of vision; Acquia is placed in the 'Leaders' quadrant, indicating strong performance on both axes. (Gartner 2023 Magic Quadrant for DXP.)

For the fourth consecutive year, Acquia has been named a Leader in the Gartner Magic Quadrant for Digital Experience Platforms (DXP).

Market recognition from Gartner on our product vision is exciting, because it aligns with what customers and partners are looking for in an open, composable DXP.

Acquia's strengths lie in its ties to Drupal, its open architecture, and its ability to take advantage of APIs to integrate with third-party applications.

Last year, I covered in detail what it means to be a Composable DXP.

Mandatory disclaimer from Gartner

Gartner, Magic Quadrant for Digital Experience Platforms, Irina Guseva, John Field, Jim Murphy, Mike Lowndes, March 13, 2023.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Acquia.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Gartner is a registered trademark and service mark of Gartner and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

March 14, 2023

Autoptimize Pro has "Boosters" to delay JavaScript (especially interesting for external resources), CSS and HTML (which can be very impactful for long and/or complex pages). Up until today's release, the delay lasted either until user interaction OR a set timeout of 5 seconds, but now you can choose the delay time yourself, setting it to e.g. 20s or 3s or (and that's where things get a teeny bit shady) 0s...


On the importance of understanding what a licence is

We often hear that computer programs or online works are published under a licence. What does that mean? And why does it matter?

To simplify: in our societies, every exchange follows a contract. That contract may be implicit, but it exists. If I buy an apple at the market, the implicit contract is that after paying I receive my apple and can do with it whatever I want.

For so-called "rival" material goods, the sales contract usually implies a transfer of ownership of the good. But the contract sometimes has other clauses, such as warranties.

Things get trickier when the exchanged good is "non-rival", meaning the good can be copied or bought several times without affecting the buyers. In the case that concerns us, this typically means software or a digital work (a film, a book, music, …). Obviously, a digital purchase gives us no ownership of the work.

It should be noted that, for a long time, the non-rivalry of goods like music, books or films was camouflaged by the fact that the medium itself was a rival good. If I buy a paper book, I own it. But that does not give me the rights to its content! Digital media and the Internet have dispelled this confusion between the work and the medium.

To regulate all this, the purchase of a digital work or a computer program is, like any purchase, subject to a contract, a contract that stipulates the exact rights and obligations the buyer receives. A licence is nothing more than a template contract, a kind of standard contract model. This contract, and a good part of our society, rests on the presupposition that, just like a rival good, a non-rival good must have an owner. That is of course arbitrary, and I invite you to question this principle, which is too often taken for a law of nature.

It is important to note that each transaction comes with its own contract. It is possible to grant rights to one buyer and not to another. This is in fact the principle that makes the practice of dual licensing possible.

Rights and obligations defined by the licence

In our society, every work is, by default, under copyright. That is, the buyer can do nothing but consult the work and use it for personal purposes. Any other use, sharing or modification is forbidden by default.

At the other extreme is the public domain. Works in the public domain carry no particular rights: anyone may use, modify and redistribute them at will.

One of the major intellectual swindles of the copyright absolutists is to have made us believe there are no alternatives between these two extremes. Just as you either own the apple or you don't, the fiction is that you either own a work (hold the copyright) or have nothing at all, good only for looking at it. This is of course false.

If a licence is a wall of obligations the buyer must submit to, it is possible to take only some of its bricks. For example, you can grant the user every right except that of claiming authorship of a work. The BSD, MIT and Creative Commons BY licences, for example, require crediting the original author. But you may still modify and redistribute.

The CC BY-ND licence requires crediting the author but does not allow modifications. Such a work can still be redistributed.

An important point: when redistributing an existing work, you may change the licence, but only by adding constraints, adding bricks. I therefore have the right to take a work under a CC BY licence, modify it, and redistribute it under CC BY-ND. I obviously cannot remove bricks and do the opposite. In any redistribution, the new licence must be either equivalent or more restrictive.

The problem with this approach is that everything ends up more and more restricted, since users' rights can only be narrowed! This is exactly what happens at large companies like Google, Facebook or Apple, which take thousands of free open source programs and turn them into proprietary ones. A genuine plundering of the open source commons!

Copyleft, or forbidding the addition of bricks

This is where Richard Stallman's idea is a stroke of genius: by inventing the GPL licence, Richard Stallman in effect invented the brick "it is forbidden to add other bricks". You may modify and redistribute software under the GPL licence. But the modification must also be under the GPL.

This is also the idea behind the Share-Alike clause of the Creative Commons. A work published under a CC BY-SA licence (as my books at éditions PVH are) may be modified, redistributed and even resold. On condition that it always remains under a CC BY-SA or equivalent licence.

Ironically, "copyleft" designates the licences that prevent adding bricks and thus prevent privatising resources. They have often been presented as "contaminating" or even as "cancers" by Microsoft, Apple, Google or Facebook. These companies now present themselves as great defenders of open source. But they fight with all their might against copyleft and against the adoption of such licences in the open source world. The idea is to tell open source developers that if their software can be privatised, then the companies, magnanimously, might use it and perhaps, just perhaps, hire the developer or pay them peanuts.

The reality is of course as obvious as it looks: as long as they can add proprietary bricks to the licences, these monopolies can keep exploiting the commons that open source software represents. They benefit from an impressive amount of free or very cheap labour.

The fact that these morbid monopolies can continue this exploitation, and are even cheered on by the exploited developers, illustrates how fundamentally important it is to understand what a licence really is, and what choosing one licence over another implies.

An engineer and writer, I explore the impact of technology on humans. Subscribe to my writings in French by email or RSS. For my writings in English, subscribe to the English newsletter or the full RSS feed. Your address is never shared, and it is deleted when you unsubscribe.

To support me, buy my books (if possible from your local bookshop)! I have just published a collection of short stories that should make you laugh and think.

March 12, 2023

The solution for Silicon Valley Bank is for the Fed to simply let that bank go bankrupt. But also to take over everything from that bank, one to one. By whatever means: Decree? Sure. Law? Even better. Political agreement? Also fine. The army taking over at gunpoint? If need be, yes.

Afterwards it sells those papers to whoever is interested, or not. Because the Fed can also simply keep everything as it is. Without caring about the market in the slightest. That market too often thinks it really matters. It doesn't, not that much. Much less than it thinks.

What does have to happen is that Silicon Valley Bank goes bankrupt. That all its shareholders lose everything.

That resets things. That is good.

I don't think it matters whether I say this or not.

But the strategic importance lies with Avdiivka, not Bakhmut.

Russia is not taking Bakhmut yet, for three Sun Tzu-style reasons:

  • It keeps Ukraine busy sending reserve troops and other resources it therefore cannot deploy elsewhere
  • It keeps our Western media, like rabbits caught in the headlights, focused on precisely what does not matter
  • It allows the Russian command to let Wagner bleed dry. That is necessary because Prigozhin is busy building his reputation.

Sun Tzu's book explains this quite clearly: What the ancients called a clever fighter is one who not only wins, but excels in winning with ease. Hence his victories bring him neither reputation for wisdom nor credit for courage.

What does matter strategically is Avdiivka:

  • It lies right next to a major capital of the Donbas, Donetsk
  • It is further south, where all of Russia's interests and focus in this conflict clearly lie
  • It is the start of the northern flank for that south (such flanks need not be made only to the west and east)

But we Westerners drain ourselves with our own nonsense propaganda and know-it-all attitude. Let's listen to some more of Ursula's rhetoric. Surely she will have some military insights. Right?

We would do better to read and understand some Sun Tzu. Russia is looking more and more towards China, isn't it? I think their military command has also read the Chinese war literature closely.

I think our Western military command has read too little of it. Or that it is mainly busy enriching itself by cashing in on orders for the war industry. Which, by the way, is already necessary now. Of course. Massively, even.


Right, so now I can be cancelled for writing something that questions our own strategy and does not try to tear down absolutely everything Russia does. Because that is mandatory these days. Introspection is completely out. We are holy and good. In everything we do. Even when those things are total blunders. Because the enemy is evil. And so on. Because. Yeah, yeah.

That whole cancel culture around this is, by the way, also a gigantic strategic blunder of our own.

More teeth for the EU

At this moment I argue that the EU member states should militarise their mindset: that we no longer strive for European peace but for a position in which we are actively prepared to physically fight out an eventual conflict. With the corresponding military expenditure and developments.

In Kosovo, for example, we must make clear to Serbia that we are prepared to intervene seriously, across the border into their country, and if necessary even take Belgrade.

It is likely that Russia's strategy is to thin out the US's military capacity by bringing that conflict to a head. That is why we, the EU member states, must make clear to Serbia that we will do this. Not the US. That is why we must station our EU soldiers there. So that Serbia knows, loud and clear, that it would be fighting the rest of the entire European Union, up to and including the capture of its capital.

Of course we must also, once again, convict in our courts all of their leaders who commit any war crime whatsoever. Without exception, but first and foremost their prime minister and military commanders.

I would rather see it otherwise, but the conflict in Ukraine forces the EU member states to bare their teeth more, and to actually use them now.

March 09, 2023

Losing Signal

Warning to my friends: until further notice, consider that I am not receiving your Signal messages.

Update on March 13th: I've managed to get back on Signal by installing a beta version. The bug was acknowledged by the developers and fixed promptly. Which is nice! My reflections on using centralised services still apply. I should consider this a free warning, one which should prompt me to get back on XMPP or to investigate Matrix. But I'm really happy to know that, for the time being, Signal still cares about non-Google users.

Signal, the messaging system, published a blog post on how we are all different and how they are trying to adapt to those differences. Signal is for everyone, said the title. Ironically, that very same day, I lost access to my Signal account. We are all different, they said. Except me.

What is this difference? I'm not sure, but it seems that not having a standard Android phone with Google Play services plays a huge part.

How I lost access

I'm using a Hisense A5 Android phone. This is one of the very rare phones on the market with an eink screen. While it is not recommended for most users, I like my eink phone: I only need to charge it weekly, it's not distracting, and I don't want to use it most of the time. I feel that colour screens are very aggressive and stressful.

The Hisense A5 comes with proprietary crapware in Chinese and without Google Play Services. That's fine by me: I don't want Google services anyway, and I'm happy installing what I need from the Aurora Store and F-Droid. For the last three years, it has worked for me (with some quirks, of course). Signal worked fine, except for notifications that were sometimes delayed. I considered that a feature: my phone is in do-not-disturb mode all the time; I don't want to be interrupted.

On March 7th, I made a backup of my Signal messages and removed the application temporarily, as I wanted to quickly try some open source alternatives (signal-foss and molly). Those didn't work, so I reinstalled Signal and asked it to restore the backup.

Signal asked for my phone number, warned me that I had no Google Play Services, then re-asked for my number, then re-warned me. Then it asked me to prove I was human by solving a captcha.

I hate captchas. I consider the premise of captchas completely broken, stupid, and an insult to all people with disabilities. But these were the worst I had ever seen. I was asked to look at microscopic blurry pictures, obviously generated by AI, and to select only "fast cars" or "cows in their natural habitat" or "t-shirts for dogs" or "people playing soccer".

Now, I have a question for you. Is a car that looks like an old Saab fast enough? While a cow on the beach is probably not in its natural habitat, what about a cow between two trees? What if the t-shirts are not "for" dogs but have dogs on them? And what if the drawing on the t-shirt is a mix between a dog and a cat? What if there's a player holding a golf club but hitting a soccer ball? Even with a colour screen, I'm not sure I could answer. So imagine on an eink one…

Signal is for everyone, but you need to get past these idiocies first. It should be noted that I have very good eyesight. I cannot imagine what it is like for those with even minor disabilities.

Of course I tried to solve the captcha. But after each try, I was sent back to the "enter your phone number" step, followed by the "no Google services" warning, then… "too many attempts for this number, please wait for four hours before retrying".

I have no idea whether my answers were bad or whether there's a bug where the captcha assumes Google Play Services. I tried both the official APK version and the Google Play Store version (through Aurora); they all failed similarly. In three days, I managed twice to pass the captcha and receive an SMS with a confirmation code. But both times the code was rejected, which is incomprehensible. I also learned that I could only read the code from the notification, because opening the SMS app reset Signal to the "enter your number" step, before the captcha.

Centralisation is about rejection of differences

What is interesting about corporatish marketing blog posts is how they usually say the exact opposite of what they mean. Signal's blog post about differences is exactly that. They acknowledge that there is no way a single centralised authority could account for all the differences in the world. Then they proceed to say they will.

There's only one way for a centralised service to become universal: impose your vision as a new universal standard. Create a new norm and treat every divergence as dangerous dissidence. That's what Facebook and Google did, on purpose. Pretending to embrace differences is only a way to reject the differences you don't explicitly agree with.

Interestingly, Signal is only realising now that it has no choice but to do the same. At first, Signal was only a niche. A centralised niche is not a real problem because, by definition, your users share a common background. You adapt to them. But as soon as you outgrow your initial niche, you are forced to become the villain you fought earlier.

Moxie Marlinspike, Signal's founder, is a brilliant cryptographer. Because he was a cryptographer, he did what he found interesting. He completely rejected any idea of federation/decentralisation because it was not interesting to him. Because he thought he could solve the problems of the world with cryptography alone ("when you have a hammer…").

He must now face the fact that his decision has led to a situation where the world-freeing tool he built is publishing Facebookish blog posts about "differences" while locking out users who do not comply with his norm.

Like Larry Page and Sergey Brin before him, Moxie Marlinspike built the oppression tool he was initially trying to fight (we have to credit Bill Gates, Steve Jobs and Mark Zuckerberg for being creepy psychos craving power and money from the beginning. At least they didn't betray anything and kept following their own ideals).

That's the reason why email is still the only universal Internet communication tool. Why, despite its hurdles, federation is a thing. Because there is no way to understand, let alone accept, all variations. There's a world of difference between the Gmail interface and Neomutt. Yet one allows you to communicate with someone using the other. Centralisation is, by its very definition, finding the minority and telling them "you don't count". "Follow the norms we impose or disappear!"

It is really about Google’s services after all…

One problem I have with my Hisense A5 is that my banking application doesn't work on it; it expects Google Play Services.

To solve that issue, I keep an old Android phone in a drawer: no SIM card, a cracked screen, a faulty charging port and a bad battery. When the stack of bills to pay grows too tall, I plug that phone into the charger, fiddle with it until it starts, launch the banking app, pay the bills, and put the phone back in the drawer.

After fiddling for two days with Signal on my eink phone, I decided to try on that old phone. I installed Signal and asked to connect to my account. There was no captcha, no hassle. I immediately received the SMS with the code (on the Hisense eink phone) and could connect to my Signal account (losing all my history, as I hadn't transferred the backup).

At least that will allow me to tell my contacts that they should not contact me on Signal anymore. UPDATE: my Signal account was unexpectedly disconnected, telling me Signal was being used on another phone.

Signal automatically trusted a phone without a SIM card because it was somehow connected to Google. But it cannot trust a phone where it has been installed for the last three years and which is connected to the related phone number. Signal's vision of the world can thus be summarised as: "We fight for your privacy as long as you agree to be spied on by Google."

Centralisation is about losing hope

One thing I've learned about centralised Internet services is that you can abandon all hope of being helped.

There's no way Signal support could help me or answer me. The problem is deep in their beliefs, in the model of the world they maintain. They want to promote differences as long as those differences are split between Apple and Google. They probably have no power to make an exception for an account. They could only tell me that "my phone is not supported". To solve my problem, they would probably have to reconsider how the whole application is built.

Technically, this specific problem is new. Three years ago, I had no problem installing Signal on my phone and no captcha to solve. But once you sign up for a centralised service, you are tied to it for all future problems. That's the deal. I was similarly locked out of my WhatsApp account because I didn't accept a new contract and then forgot to open the app for several months (I was disconnected at the time).

That's what I like so much about federated protocols (email, the fediverse). I can choose a provider where I know I will have someone in front of me in case of a problem. Either because I'm a customer paying for the expensive tier with quick support (Protonmail), or because I trust the philosophy and donate appropriately (my Mastodon server is hosted by La Quadrature du Net, and I trust that team). I also know that I can easily migrate to another provider whenever I want (considering instead of protonmail).

As a chat tool, Signal is better than many others. But it's centralised. And sooner or later, a centralised service faces you with a choice: either you comply with a rule you don't agree with, or you lose everything.

With every centralised service, the question is not whether it will ever happen. The question is when.

Either you conform to the norm, or you are too different to have your existence acknowledged.

That's also why I've always fought for the right to be different, and why I've always been utterly frightened by "normalisation". Because I know nobody is immune. Think about it: I'm a white male, cis-gendered, married with children, with a good education, a good situation and no trauma, no disability. I'm mostly playing life on the "easy" setting.

I'm sure lots of reactions to this post will be about how I made mistakes by "trying signal-foss" or by "using a completely weird and non-standard phone".

That’s exactly the point I’m trying to prove.

I've suddenly been excluded from all conversations with my friends because I very slightly, but unacceptably, deviated from the norm.

Because, three years ago, I thought having a black and white screen on my own phone was more comfortable for my eyes.

As a writer and an engineer, I like to explore how technology impacts society. You can subscribe by email or by RSS. I value privacy and never share your address.

If you read French, you can support me by buying/sharing/reading my books and subscribing to my newsletter in French or RSS. I also develop Free Software.

March 06, 2023


The Ansible role stafwag.users is available at:

This release implements a shell parameter to define the shell for a user. See the GitHub issue for more details.
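A minimal playbook using the new parameter might look like this (a sketch: the host and shell values are illustrative, and the users variable layout follows the role's README below):

```yaml
- name: set the login shell for a user
  hosts: testhosts
  become: true
  vars:
    users:
      - name: test0001
        group: test0001
        shell: /bin/bash      # new shell parameter in this release
        state: present
  roles:
    - stafwag.users
```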


shell parameter

  • shell parameter added

Have fun!

Ansible Role: users

An Ansible role to manage users and user files (files in the home directory).



Role Variables

The following variables are set by the role.

  • root_group: The operating system root group. root by default. wheel on BSD systems.
  • sudo_group: The operating system sudo group. wheel by default. sudo on Debian systems.
  • users: Array of users to manage
    • name: name of the user.
    • group: primary group. if state is set to present the user primary group will be created. if state is set to absent the primary group will be removed.
    • uid: uid.
    • gid: gid.
    • groups: additional groups.
    • append: no (default) | yes. If yes, add the user to the groups specified in groups. If no, user will only be added to the groups specified in groups, removing them from all other groups.
    • state: absent present (default)
    • comment: user comment (GECOS)
    • home: Optionally set the user’s home directory.
    • password: Optionally set the user’s password to this crypted value.
    • password_lock: no yes lock password (ansible 2.6+)
    • shell: Optionally, the user shell
    • ssh_authorized_keys: Array of the user ssh authorized keys
      • key: The ssh public key
      • state: absent present (default) Whether the given key (with the given key_options) should or should not be in the file.
      • exclusive: no (default) yes. Whether to remove all other non-specified keys from the authorized_keys file.
      • key_options: A string of ssh key options to be prepended to the key in the authorized_keys file.
    • user_files: array of the user files to manage.
      • path: path in the user home directory. The home directory will be detected by getent_passwd
      • content: file content
      • state: absent present (default)
      • backup: no (default) yes. create a backup file.
      • dir_create: false (default) true.
      • dir_recurse: no (default) yes create the directory recursively.
      • mode: Default: ‘0600’. The permissions of the resulting file.
      • dir_mode: Default: ‘0700’. The permissions of the resulting directory.
      • owner: Name of the owner that should own the file/directory, as would be fed to chown.
      • group: Name of the group that should own the file/directory, as would be fed to chown.
    • user_lineinfiles: Array of user lineinfile.
      • path: path in the user home directory. The home directory will be detected by getent_passwd
      • regexp: The regular expression to look for in every line of the file.
      • line: The line to insert/replace into the file.
      • state: absent present (default)
      • backup: no (default) yes Create a backup
      • mode: Default: 600. The permissions of the resulting file.
      • dir_mode: Default: 700. The permissions of the resulting directory.
      • owner: Name of the owner that should own the file/directory, as would be fed to chown.
      • group: Name of the group that should own the file/directory, as would be fed to chown.
      • create: Default: no. Create the file if it does not exist.



Example Playbooks

Create user with authorized key

- name: add user & ssh_authorized_key
  hosts: testhosts
  become: true
  vars:
    users:
      - name: test0001
        group: test0001
        password: ""
        state: "present"
        ssh_authorized_keys:
          - key: ""
            key_options: "no-agent-forwarding"
  roles:
    - stafwag.users

Add user to the sudo group

- name: add user to the sudo group
  hosts: testhosts
  become: true
  vars:
    users:
      - name: test0001
        groups: ""
        append: true
  roles:
    - stafwag.users

Create .ssh/config.d/intern_config and include it in .ssh/config

- name: setup tyr ssh_config
  become: true
  hosts: tyr
  vars:
    users:
      - name: staf
        user_files:
          - name: ssh config
            path: .ssh/config
            dir_create: true
            state: present
          - name: ssh config.d/intern_config
            path: .ssh/config.d/intern_config
            content: ""
            dir_create: true
        user_lineinfiles:
          - name: include intern_config
            path: .ssh/config
            state: present
            regexp: "^include config.d/intern_config"
            line: "include config.d/intern_config"
  roles:
    - stafwag.users



Author Information

Created by Staf Wagemakers, email:, website:

Ansible Role: users

An ansible role to manage user and user files - files in the home directory -.



Role Variables

The following variables are set by the role.

  • root_group: The operating system root group. root by default. wheel on BSD systems.
  • sudo_group: The operating system sudo group. wheel by default. sudo on Debian systems.
  • users: Array of users to manage.
    • name: name of the user.
    • group: primary group. If state is set to present, the user's primary group will be created; if state is set to absent, the primary group will be removed.
    • uid: uid.
    • gid: gid.
    • groups: additional groups.
    • append: no (default) | yes. If yes, add the user to the groups specified in groups. If no, the user will be a member of only the groups specified in groups, and will be removed from all other groups.
    • state: absent | present (default).
    • comment: user comment (GECOS).
    • home: Optionally set the user's home directory.
    • password: Optionally set the user's password to this crypted value.
    • password_lock: no | yes. Lock the user's password (Ansible 2.6+).
    • shell: Optionally, the user's shell.
    • ssh_authorized_keys: Array of the user's ssh authorized keys.
      • key: The ssh public key.
      • state: absent | present (default). Whether the given key (with the given key_options) should or should not be in the file.
      • exclusive: no (default) | yes. Whether to remove all other non-specified keys from the authorized_keys file.
      • key_options: A string of ssh key options to be prepended to the key in the authorized_keys file.
    • user_files: Array of the user files to manage.
      • path: path in the user's home directory. The home directory is detected with getent_passwd.
      • content: file content.
      • state: absent | present (default).
      • backup: no (default) | yes. Create a backup file.
      • dir_create: false (default) | true. Create the parent directory.
      • dir_recurse: no (default) | yes. Create the directory recursively.
      • mode: Default: '0600'. The permissions of the resulting file.
      • dir_mode: Default: '0700'. The permissions of the resulting directory.
      • owner: Name of the owner that should own the file/directory, as would be fed to chown.
      • group: Name of the group that should own the file/directory, as would be fed to chown.
    • user_lineinfiles: Array of user lineinfile entries.
      • path: path in the user's home directory. The home directory is detected with getent_passwd.
      • regexp: The regular expression to look for in every line of the file.
      • line: The line to insert/replace into the file.
      • state: absent | present (default).
      • backup: no (default) | yes. Create a backup file.
      • mode: Default: '0600'. The permissions of the resulting file.
      • dir_mode: Default: '0700'. The permissions of the resulting directory.
      • owner: Name of the owner that should own the file/directory, as would be fed to chown.
      • group: Name of the group that should own the file/directory, as would be fed to chown.
      • create: Default: no. Create the file if it does not exist.



Example Playbooks

Create user with authorized key

- name: add user & ssh_authorized_key
  hosts: testhosts
  become: true
  vars:
    users:
      - name: test0001
        group: test0001
        password: ""
        state: "present"
        ssh_authorized_keys:
          - key: ""
            key_options: "no-agent-forwarding"
  roles:
    - stafwag.users

Add user to the sudo group

- name: add user to the sudo group
  hosts: testhosts
  become: true
  vars:
    users:
      - name: test0001
        groups: ""
        append: true
  roles:
    - stafwag.users

Create .ssh/config.d/intern_config and include it in .ssh/config

- name: setup tyr ssh_config
  become: true
  hosts: tyr
  vars:
    users:
      - name: staf
        user_files:
          - name: ssh config
            path: .ssh/config
            dir_create: true
            state: present
          - name: ssh config.d/intern_config
            path: .ssh/config.d/intern_config
            content: ""
            dir_create: true
        user_lineinfiles:
          - name: include intern_config
            path: .ssh/config
            state: present
            regexp: "^include config.d/intern_config"
            line: "include config.d/intern_config"
  roles:
    - stafwag.users



Author Information

Created by Staf Wagemakers, email:, website:

March 03, 2023

About Bluesky and Decentralisation

Jack Dorsey, Twitter co-founder, is trying to launch Bluesky, a "decentralised Twitter" and people are wondering how it compares to Mastodon.

I remember when Jack started to speak about "project bluesky" on Twitter, years ago. ActivityPub was a lot more niche then, and he ignored any message related to it. It definitely looked like NIH syndrome, as he could at least have started to discuss ActivityPub's pros and cons. I was myself heavily invested in decentralised protocols (from blockchain to ActivityPub). It was my job to keep an eye on everything decentralised, and I really tried to understand what BlueSky was about.

My feeling was, in the end, clear: Jack Dorsey wanted a "decentralised protocol" on which he had full power (aka "VC-style decentralisation" or "permissioned-blockchains").

You have to keep in mind that those who succeed in Silicon Valley know only one kind of thinking: raise money, get users, sell off. They can't grasp decentralisation other than as a nice marketing term to add to their product (and, as Ripple demonstrated during the crypto bubble, they are completely right when it comes to making tons of money with shitty tech that pretends to be decentralised while not being decentralised at all).

To my knowledge, BlueSky acknowledged ActivityPub's existence only very late, after the huge Mastodon burst caused by Elon Musk buying Twitter from Jack Dorsey. It's more of an "oh shit, we are not the first" kind of reaction.

But even without that history, it’s important to note that you don’t simply design a decentralised protocol behind closed doors then expect everybody to adopt it. You need to be transparent, to discuss in the open. People need to know who is in charge and why. They also need to know every single decision. Decentralisation cannot be done without being perfectly free and open source. That’s the very point of it.

If we don’t want to consider the hypothesis that "bluesky decentralisation" is simply cynical marketing fluff, I think we can safely assume that Jack Dorsey has hit his mental glass ceiling. He doesn’t get decentralisation. He doesn’t have the mental model to get it. He will probably never get it (he became a billionaire by "not getting it" so there’s no reason for him to change). The whole project is simply a billionaire throwing money at a few developers who tell him what he expects to hear in order to get paid. A very-rich-man’s hobby.

There’s no need to analyse the protocol or make guesses about the future. It’s a closed source beta application with invite-only membership. It is not decentralised. It cannot be decentralised.

As a writer and an engineer, I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your address.

If you read French, you can support me by buying/sharing/reading my books and subscribing to my newsletter in French or RSS. I also develop Free Software.

February 27, 2023

A Generative AI self-portrait by DALL·E: an artistic rendering of an endless expanse of servers stretching into the horizon. Via Wikimedia Commons.

I recently bought a Peloton bike as a Christmas gift for my wife. The Peloton was for our house in Belgium. Because Peloton does not deliver to Belgium yet, I had to find a way to transport one from Germany to Belgium. It was a bit of a challenge as the bike is quite large, and I wasn't sure if it would fit in the back of our car.

I tried measuring the trunk of my car, along with another Peloton. I wasn't positive if it would fit in the car. I tried Googling the answer but search engines aren't great at answering these types of questions today. Being both uncertain of the answer and too busy (okay, let's be real – lazy) to figure it out myself, I decided to ship the bike with a courier. When in doubt, outsource the problem.

To my surprise, when Microsoft launched their Bing and ChatGPT integration not long after my bike-delivery conundrum, one of their demos showed how ChatGPT can answer the question whether a package fits in the back of a car. I'll be damned! I could have saved money on a courier after all.

After watching the event, I asked ChatGPT, and it turns out the Peloton would have fit. That is, assuming we can trust the correctness of ChatGPT's answer.

A screenshot of ChatGPT answering the question: "Does a Peloton bike fit in the back of a Volkswagen California T6.1?"

What is interesting about the Peloton example is that it combines data from multiple websites. Combining data from multiple sources is often more helpful than the traditional search method, where the user has to do the aggregating and combining of information on their own.

Examples like this affirm my belief that AI tools are one of the next big leaps in the internet's progress.

AI disintermediates traditional search engines

Since its commercial debut in the early 90s, the internet has repeatedly upset the established order by slowly, but certainly, eliminating middlemen. Book stores, photo shops, travel agents, stock brokers, bank tellers and music stores are just a few examples of the kinds of intermediaries who have already been disrupted by their online counterparts.

A search engine acts as a middleman between you and the information you're seeking. It, too, will be disintermediated, and AI seems to be the best way of disintermediating it.

Many people have talked about how AI could even destroy Google. Personally, I think that is overly dramatic. Google will have to change and transform itself, and it's been doing that for years now. In the end, I believe Google will be just fine. AI disintermediates traditional search engines, but search engines obviously won't go away.

The Big Reverse of the Web marches on

The automatic combining of data from multiple websites is consistent with what I've called the Big Reverse of the Web, a slow but steady evolution towards a push-based web; a web where information comes to us versus the current search-dominant web. As I wrote in 2015:

I believe that for the web to reach its full potential, it will go through a massive re-architecture and re-platforming in the next decade. The current web is "pull-based", meaning we visit websites. The future of the web is "push-based", meaning the web will be coming to us. In the next 10 years, we will witness a transformation from a pull-based web to a push-based web. When this "Big Reverse" is complete, the web will disappear into the background much like our electricity or water supply.

Facebook was an early example of what a push-based experience looks like. Facebook "pushes" a stream of aggregated information designed to tell you what is happening with your friends and family; you no longer have to "pull" them or ask them individually how they are doing.

A similar dynamic happens when AI search engines give us the answers to our questions rather than redirecting us to a variety of different websites. I no longer have to "pull" the answer from these websites; it is "pushed" to me instead. Trying to figure out if a package fits in the back of my car is the perfect example of this.

Unlocking the short term potential of Generative AI for CMS

While it might take a while for AI search to work out some early kinks, in the near term, Generative AI will lead to an increasing amount of content being produced. It's bad news for the web, as a lot of that content will likely end up being spam. But it is also good news for CMSs, as there will be a lot more legitimate content to manage as well.

I was excited to see that Kevin Quillen from Velir created a number of Drupal integrations for ChatGPT. It allows us to experiment with how ChatGPT will influence CMSs like Drupal.

For example, the video below shows how the power of Generative AI can be used from within Drupal to help content creators generate fresh ideas and produce content that resonates with their audience.

Similarly, AI integrations can be used to translate content into different languages, suggest tags or taxonomy terms, help optimize content for search engines, summarize content, match your content's tone to an organizational standard, and much more.

The screenshot below shows how some of these use cases have been implemented in Drupal:

A screenshot of Drupal's editorial UI showing a few ChatGPT integrations in the sidebar: the ability to suggest similar titles, summarize content, and recommend taxonomy terms.

The Drupal modules behind the video and screenshot are Open Source: see the OpenAI project. Anyone can experiment with these modules and use them as a foundation for their own exploration. Sidenote: another example of how Open Source innovation wins every single time.

If you look at the source code of these modules, you can see that it is relatively easy to add AI capabilities to Drupal. ChatGPT's APIs make the integration process straightforward. Extrapolating from Drupal, I believe it is very likely that in the next year, every CMS will offer AI capabilities for creating and managing content.

In short, you can expect many text fields to become "AI-enhanced" in the next 18 months.
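To make the idea of an "AI-enhanced" field concrete, here is a minimal sketch of how a CMS integration might ask OpenAI's chat completions API to suggest taxonomy terms for an article. The endpoint and model name are the public OpenAI API ones; the prompt wording and the `build_tag_request` helper are illustrative, not taken from the Drupal modules mentioned above.

```python
import json
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"  # OpenAI chat endpoint

def build_tag_request(body_text, api_key, model="gpt-3.5-turbo"):
    """Build an HTTP request asking the model to suggest taxonomy terms.

    The prompt wording is illustrative; a real CMS module would tune it.
    """
    payload = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Suggest five comma-separated taxonomy terms for the article."},
            {"role": "user", "content": body_text},
        ],
        "temperature": 0.2,  # keep suggestions fairly deterministic
    }
    return urllib.request.Request(
        OPENAI_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Actually sending the request requires a real API key and network access:
# with urllib.request.urlopen(build_tag_request(article, key)) as resp:
#     terms = json.load(resp)["choices"][0]["message"]["content"]
```

The heavy lifting is one authenticated POST with a JSON body, which is why such integrations are easy to add to any CMS that can make an HTTP call.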

Boost your website's visibility by optimizing for AI crawlers

Another short-term change is that marketers will seek to better promote their content to AI bots, just like they currently do with search engines.

I don't believe AI optimization to be very different from Search Engine Optimization (SEO). Like search engines, AI bots will have to put a lot of emphasis on trust, authority, relevance, and the understandability of content. It will remain essential to have high-quality content.

Right now, in AI search engines, attribution is a problem. It's often impossible to know where content is sourced, and as a result, to trust AI bots. I hope that more AI bots will provide attribution in the future.

I also expect that more websites will explicitly license their content, and specify the ways that search engines, crawlers, and chatbots can use, remix, and adopt their content.

The HTML code for an image on my blog. Schema.org metadata is used to programmatically specify that my photo is licensed under Creative Commons BY-NC 4.0. This license encourages others to copy, remix, and redistribute my photos, as long as it is for noncommercial purposes and appropriate credit is given.

As can be seen from the screenshot above, I specify a license for all 10,000+ photos on my site. I make them available under Creative Commons. The license is specified in the HTML code, and can be programmatically extracted by a crawler. I do something very similar for my blog posts.
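A crawler only needs a few lines of code to pick such a license out of the markup. This is a hedged sketch, assuming microdata-style markup with an `itemprop="license"` attribute (one of several ways schema.org licensing can be expressed); the `SAMPLE` snippet is invented for illustration, not copied from the blog's actual markup.

```python
from html.parser import HTMLParser

# Invented example of schema.org ImageObject markup with a license property.
SAMPLE = """
<figure itemscope itemtype="https://schema.org/ImageObject">
  <img itemprop="contentUrl" src="/photos/example.jpg" alt="Example photo">
  <a itemprop="license"
     href="https://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC 4.0</a>
</figure>
"""

class LicenseExtractor(HTMLParser):
    """Collect the href/content values of elements marked itemprop="license"."""

    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if attrs.get("itemprop") == "license":
            url = attrs.get("href") or attrs.get("content")
            if url:
                self.licenses.append(url)

def extract_licenses(html):
    """Return every license URL declared in the given HTML."""
    parser = LicenseExtractor()
    parser.feed(html)
    return parser.licenses
```

Feeding `SAMPLE` to `extract_licenses` yields the Creative Commons URL, which a crawler could then match against the conditions it is willing to honor.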

By licensing my content under Creative Commons, I'm giving tools like ChatGPT permission to use my content, as long as they follow the license conditions. I don't believe ChatGPT uses that information today, but they could, and probably should, in the future.

If a website has high-quality content, and AI tools give credit to their sources, this can result in organic traffic back to the website.

All things considered, my base case is that AI bots will become an increasingly important channel for digital experience delivery, and that websites will be the main source of input for chatbots. I suspect that websites will only need to make small, incremental changes to optimize their content for AI tools.

Predicting the longer term impact of AI tools on websites

Longer term, AI tools will likely bring significant changes to digital marketing and content management.

I predict that over time, AI bots will not only provide factual information, but also communicate with emotions and personality, providing more human-like interactions than websites.

Compared to traditional websites, AI bots will be better at marketing, sales and customer success.

Unlike humans, AI bots will possess perfect product knowledge, speak many languages, and – this is the kicker – have a keen ability to identify what emotional levers to pull. They will be able to appeal to customers' motivations, whether it's greed, pride, frustration, fear, altruism, or envy.

The downside is that AI bots will also become more "skilled" at spreading misinformation, or might be able to cause emotional distress in a way that traditional websites don't. There is undeniably a dark side to AI bots.

My more speculative and long-term case is that AI chatbots will become the most effective channel for lead generation and conversion, surpassing websites in importance when it comes to digital marketing.

Without proper regulations and policies, that evolution will be tumultuous at best, and dangerous at worst. As I've been shouting from the rooftops since 2015 now: "When algorithms rule our lives, who should rule them?". I continue to believe that algorithms with significant effects on society require regulation and policies, just like the Food and Drug Administration (FDA) in the U.S. or the European Medicines Agency (EMA) in Europe oversee the food and drug industry.

The impact of AI on website development

Of course, the advantages of Generative AI extend beyond content creation and content delivery. The advantages also include software development, such as writing code (46% of all new code on GitHub is generated by GitHub's Copilot), identifying security vulnerabilities (ChatGPT finds two times as many security vulnerabilities as a professional software security scanner), and more. The impact of AI on software development is a complex topic that warrants a separate blog post. In the meantime, here is a video demonstrating how to use ChatGPT to build a Drupal module.

The risks and challenges of Generative AI

Even though I'm optimistic about the potential of AI, I would be remiss if I failed to discuss some of the potential challenges associated with it.

Although Generative AI is really good at some tasks, like writing a sincere letter to my wife asking her to bake my favorite cookies, it still has serious issues. Some of these issues include, but are not limited to:

  • Legal concerns – Copyrighted works have been involuntarily included in training datasets. As a result, many consider Generative AI a high-tech form of plagiarism. Microsoft, GitHub, and OpenAI are already facing a class action lawsuit for allegedly violating copyright law. The ownership and protection of content generated by AI is unclear, including whether AI tools can be considered "creators" of original content for copyright law purposes. Technologists, lawyers, and policymakers will need to work together to develop appropriate legal frameworks for the use of AI.
  • Misinformation concerns – AI systems often "hallucinate", or make up facts, which could exacerbate the web's misinformation problem. One of the most interesting analogies I've seen comes from The New Yorker, which describes ChatGPT as a blurry JPEG of all of the text on the web. Just as a JPEG file loses some of the quality and integrity of the original, ChatGPT summarizes and approximates text on the web.
  • Bias concerns – AI systems can have gender and racial biases. It is widely acknowledged that a significant proportion of the content available on the web is generated by white males residing in western countries. Consequently, ChatGPT's training data and outputs are prone to reflecting this demographic bias. Biases are troubling and can even be dangerous, especially considering the potential societal impact of these technologies.

The above issues related to legal authorship, misinformation, and bias have also given rise to a host of ethical concerns.

My personal strategy

Disruptive changes can be polarizing: they come with some real downsides, while bringing new opportunities.

I believe there is no stopping AI. In my opinion, it's better to embrace change and focus on moving forward productively, rather than resisting it. Iterative improvements to both these algorithms and to our legal frameworks will hopefully address concerns over time.

In the past, the internet was fraught with risk, and to a large extent, it still is. However, productivity and efficiency improvements almost always outweigh risk.

While some individuals and organizations advocate against the use of AI altogether, my personal strategy is to proceed with caution. My strategy is two-fold: (1) focus on experimenting with AI rather than day-to-day usage, and (2) highlight the challenges with AI so that people can make their own choices. The previous section of this blog post tried to do that.

I also expect that organizations will use their own data to train their custom AI bots. This would eliminate many concerns, and let organizations take advantage of AI for applications like marketing and customer success. Simon Willison shows that in a couple of hours of work, he was able to train his own model based on his website content. Time permitting, I'd like to experiment with that myself.


I'm intrigued, wary, and inspired as to where AI will take the web in the days, months, and years to come.

In the near term, Generative AI will alter how we create content. I expect integrations into CMSs will be simple and numerous, and that websites will only have to make small changes to optimize their content for AI tools.

Longer term, AI will change the way in which we interact with the web and how the web interacts with us. AI tools will steadily alter the relative importance of websites, and potentially even surpass websites in importance when it comes to digital marketing.

Exciting times, but let's move forward with caution!

The fourth edition of the MySQL Cookbook: Solutions for Database Developers and Administrators is a huge book, 938 pages!

And the least we can say is that you get what you pay for!

This book is an excellent resource for anyone working with MySQL, whether you’re a beginner or a seasoned developer. The book provides a comprehensive collection of recipes that address various aspects of database management using MySQL.

Sveta and Alkin did an excellent job regrouping tips collected over many years of operating MySQL and helping users through support.

The book provides a list of solutions to the problems that every DBA faces regularly.

As MySQL is improving fast with the MySQL 8.0 release cycle, the authors also help readers keep up with the changes that make a DBA's life easier.

The book is full of examples that will help beginners and advanced MySQL users and I really think the chapters about JSON and MySQL Shell will be very helpful to all readers.

So many topics are covered. Here are a few examples of recipes you can find in the book:

  • Using the Admin API to Automate Replication Management
  • Finding Mismatches Between Tables
  • Checking Password Strength
  • Converting JSON into Relational Format
  • Creating JSON from Relational Format
  • Filling Test Data Using Python’s Data Science Modules

And much more, from High Availability to Backups!
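To give a flavour of one of those recipes: the book solves JSON-to-relational conversion with MySQL's own JSON functions, but the underlying idea can be sketched in a few lines of Python (the `json_to_rows` helper is hypothetical, not code from the book).

```python
import json

def json_to_rows(doc_json, columns):
    """Flatten a JSON array of objects into tuples, one per row.

    Mimics the idea behind MySQL's JSON_TABLE: each requested column
    name is looked up in every object; missing keys become None (NULL).
    """
    return [
        tuple(obj.get(col) for col in columns)
        for obj in json.loads(doc_json)
    ]

doc = '[{"name": "Sveta", "role": "author"}, {"name": "Alkin"}]'
rows = json_to_rows(doc, ["name", "role"])
# rows == [("Sveta", "author"), ("Alkin", None)]
```

In MySQL itself the same shape is produced server-side, so the result can be joined against ordinary tables.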

Whether you’re looking to optimize queries, implement replication, or manage security, you’ll find a recipe that addresses your needs.

Overall, the MySQL Cookbook, 4th Edition, is an excellent resource for anyone working with MySQL. This book is definitely a valuable addition to any MySQL user’s library.

"Welcome to pre-9/11 New York City, when the world was unaware of the profound political and cultural shifts about to occur, and an entire generation was thirsty for more than the post–alternative pop rock plaguing MTV. In the cafés, clubs, and bars of the Lower East Side there convened a group of outsiders and misfits full of ambition and rock star dreams."

Music was the main reason I wanted to move to New York - I wanted to walk the same streets that the Yeah Yeah Yeahs, the National, Interpol, the Walkmen, the Antlers and Sonic Youth were walking. In my mind they'd meet up and have drinks with each other at the same bars, live close to each other, and I'd just run into them all the time myself. I'm not sure that romantic version of New York ever existed. Paul Banks used to live on a corner between where I live and where my kids go to school now, but that was two decades ago (though for a while, we shared a hairdresser). On one of my first visits to New York before moving here, I had a great chat with Thurston Moore at a café right before taking the taxi back to the airport. And that's as close as I got to living my dream.

But now the documentary "Meet me in the Bathroom" (based on the book of the same name) shows that version of New York that only existed for a brief moment in time.

"Meet Me In The Bathroom — inspired by Lizzy Goodman’s book of the same name — chronicles the last great romantic age of rock ’n’ roll through the lens of era-defining bands."

Read the book, watch the documentary (available on Google Play among other platforms), or listen to the Spotify playlist Meet Me in the Bathroom: Every Song From The Book In Chronological Order. For bonus points, listen to Losing My Edge (every band mentioned in the LCD Soundsystem song in the order they're mentioned)

Taken from The Playlist - a curated perspective on the intersection of form and content (subscribe, discuss)


February 24, 2023

Someone uploaded an amateur recording of an entire (?) Jeff Buckley solo concert at the Sin-é from July 1993 (one year before Grace was released). A gem!


February 22, 2023

We need to talk about your Github addiction

Listen my fellow geeks in code, we need to have a serious conversation about Github.

At first, Github was only a convenient way to host a git repository and to collaborate with others. But, as always with monopolies, once you are trapped by convenience and the network effect, the shitification process starts, squeezing as much money and data out of you as possible.

First of all, let’s remember that Github is a fully proprietary service. Using it to host the development of free software makes no sense if you value freedom. It is not as if we lack alternatives (Sourcehut, Codeberg, Gitlab, etc.). It should be noted that those alternatives usually offer a better workflow and a better git integration than Github. They usually make more sense but, I agree, it might be hard to change ten years of suboptimal habits imposed by the Github workflow.

One thing that has always annoyed me with Github is the "fun factor": emojis appearing automatically in messages I’m trying to post, intrusive notifications about badges and followers I’ve earned. Annoying, to say the least. (Am I the only one using ":" in a sentence without wanting to make an emoji?)

But I discovered that Github is now pushing even further in that direction: a feed full of random projects and people I don’t care about, notifications to get me to "discover" new projects and "follow" new people. They don’t even try to pretend to be a professional platform anymore. It’s a pure attention-grabbing, personal-data-extorting social network. To add insult to injury, we now know that everything published on Github mostly serves as training data for Microsoft AI engines.

Developers are now raw meat encouraged to get stars, followers and commit counters, doing the most stupid things in the most appealing way to get… visibility! Yeah! Engagement! Followers! Audience!

Good code is written when people are focused, thinking hard about a problem while having the time to grasp the big picture. Modern Github seems to be purposely built as a tool to avoid people thinking about what they do and discourage them from writing anything but a new JavaScript framework.

There’s no way I can morally keep an account on Github. I’ve migrated all of my own projects to Sourcehut (where I’ve a paid account) or to my university self-hosted gitlab.

But there are so many projects I care about still on Github. So many important pieces of free software. So many small projects where I might send an occasional bug report or even a patch. As an anecdote, on at least two different occasions, I didn’t send a patch I had crafted for a small project because I didn’t know how to send it by mail and was not in the mood to deal with the Github workflow at that particular time.

By keeping your project on Github, you are encouraging new developers to sign up there, to create their own project there. Most importantly, you support the idea that all geeks/developers are somehow on Github, that it is a badge of pride to be there.

If you care about even one of software freedom, privacy, focus, or a sane market without monopolies, or if you simply believe we don’t need even more bullshit in our lives, you should move your projects off Github and advocate a similar migration to projects you care about. Thanks to git decentralisation, you could even provide an alternative/backup while keeping Github for a while.

If you don’t have any idea where to go, that should be a red light in your brain about monopoly abuses. If you are a professional developer and using anything other than Github seems hard, it should be a triple red light warning.

And I’m not saying that because grumpy-old-beard-me wants to escape those instagramesque emojis. Well, not only that but, indeed, I don’t wanna know the next innovative engagement-fostering feature. Thanks.

The best time to leave Github was before it was acquired by Microsoft. The second-best time is now. Sooner or later, you will be forced out of Github like we, oldies, were forced out of Sourceforge. Better to leave while you are still free to do it on your own terms…


February 20, 2023

"I have hands but I am doing all I can to have daily independence so I can’t be ‘all hands’ at work. I can share ideas, show up, and ask for help when I physically can’t use my hands. Belonging means folks with one hand, no hand and limited hands are valued in the workplace." - Dr. Akilah Cadet

If you've been wondering why over the past few months you're seeing a lot more "All Teams" meetings on your calendar, it's because language is ever evolving with the times, and people are becoming more aware and replacing ableist language.

Read more:

If your team still has "all hands" meetings, propose a more creative name, or default to "all teams" instead. Take some time to familiarize yourself with other ableist language and alternatives.

Taken from The Playlist - a curated perspective on the intersection of form and content (subscribe, discuss)


Une boucle d’inspiration

Parodie d’une expérience biologique improbable, les tasses s’empilaient dans un coin du bureau, chacune contenant un sachet de thé ayant atteint un degré différent de décomposition, de moisissure.

D’une gorgée sèche, l’auteur aspira le restant de la tasse encore tiède qu’il tenait à la main avant de l’empiler machinalement sur les cadavres de ses prédécesseuses. Nerveusement, il jouait avec une mèche de sa barbe, tentant d’ignorer l’écran de son ordinateur sur lequel clignotaient des messages.

« Rappel : on a besoin du texte de ta nouvelle pour aujourd’hui »

« Urgent : nouvelle aujourd’hui chez imprimeur »

« Urgent : appel téléphonique maintenant ? »

Il se retourna avec sa chaise de bureau et regarda par la fenêtre. Le fil était donc cassé ? Lui qui, depuis l’adolescence, croyait disposer d’un vivier infini d’histoires était pour la première fois de sa vie paralysé par la page blanche. Il n’y arrivait plus.

Un léger grattement se fit entendre à la porte. Il grogna.

— Quoi ?

— Tu n’irais pas prendre un peu l’air mon chéri ? Tu as une mine épouvantable.

— Je travaille, je dois terminer cette nouvelle.

— Et ça avance ?

Il détourna son regard en haussant les épaules

— Je suis juste calé sur le dernier passage. J’ai bientôt fini.

Elle n’insista pas et se retira en fermant la porte. L’auteur regarda sa montre. Pour remplir son obligation, il devait désormais produire une page par quart d’heure. Dans peu de temps, ce serait une page toutes les dix minutes.

Il y a à peine une grosse semaine, il se sentait à l’aise avec l’échéance. « Une page par jour, c’est faisable ! » avait-il pensé.

Mais rien. Le vide. Il avait passé ces dernières semaines obnubilé par les œuvres produites par des algorithmes, jouant avec les demandes, partageant et admirant les résultats les plus absurdes sur les réseaux sociaux.

Il avait d’ailleurs fait le vœu de ne jamais s’aider de tels outils. Après tout, il était écrivain. Il était un artisan fier de son travail.

Par contre, il pourrait… Mais oui !

Lançant son navigateur, il se rendit sur la page de son générateur d’images préféré et se mit à taper.

« Je suis écrivain de science-fiction. Voici en lien mon recueil de nouvelles précédent. Génère l’illustration d’une de mes nouvelles inédites. »

Il attendit quelques secondes.

Une image s’afficha. Celle d’un homme au visage passablement banal assis devant un laptop. Il tenait une tasse de thé et, en y prêtant attention, sa main droite avait au moins sept doigts. Son dos était légèrement tordu selon une courbe peu réaliste. L’écran de l’ordinateur était étrangement pentagonal.

L’auteur soupira. Ce n’est pas ce qu’il avait espéré. Son téléphone sonna. Il le mit en mode avion. Sa femme vint frapper à la porte de son bureau.

— It's your editor, asking why you're not answering, she said, holding her own phone to her ear.

— Tell him I'll call him back in an hour!

She relayed the message then, covering the speaker:

— He's giving you half an hour.

— Fine!

Half an hour. Three minutes per page. He who considered himself productive when he wrote one full page a day.

He sighed. He had sworn never to… No! It couldn't be! But he had no choice.

A new tab in the browser. His trembling fingers began to type on the keyboard. The site's address auto-completed a little too easily, the way a bartender calls you by your first name and asks "the usual?", meaning to be friendly but only underlining how much too often you visit his establishment.

— Generate an unpublished short story for me, in the style of the ones in my main collection.

— Hello. I am an AI assistant. This is an explicit request for artistic creation. I am willing to generate this story, but it will then be subject to copyright and my creators will have to be notified. Shall I continue?

— No.

The author thought for a moment. He dragged and dropped the previously generated image onto the browser page.

— In this image, a writer is typing a short story.

— Yes, that is what the image shows. It is a beautiful image.

— It is a science fiction story.

— All right, I like science fiction.

— I would like you to give me the text of the story this writer is typing.

The page took a few seconds to load, then words began to appear on the screen.

"Like a parody of some improbable biology experiment, cups were piling up in a corner of the desk, each containing a tea bag at a different stage of decomposition and mold. With one dry gulp, the author drained what remained of the still-warm cup in his hand before mechanically stacking it on the corpses of its predecessors. Nervously, he toyed with a strand of his beard, trying to ignore his computer screen, on which messages were blinking."

This new story being new, it is not part of my first collection, « Stagiaire au spatioport Omega 3000 et autres joyeusetés que nous réserve le futur », which is now available in all good bookshops. If it sells well, my publisher will surely ask me for a second collection, into which this one could slip. You can probably see where I am going with this… I might as well wink at a blind bat!

An engineer and writer, I explore the impact of technology on people. Subscribe to my French writings by email or RSS. For my English writings, subscribe to the English newsletter or to the full RSS feed. Your address is never shared, and it is deleted when you unsubscribe.

To support me, buy my books (from your local bookshop if possible)! I have just published a short story collection that should make you laugh and think.

February 17, 2023

In Western Europe (and beyond), for centuries, at least from the Middle Ages until about 1940, there was one authority for people to believe: the Catholic Church.

When a peasant was in doubt, they would ask a priest and get a definitive answer. Doubt was gone. The end.


Somewhere in the beginning of the 20th century, the power of the church started to diminish, and many new 'authorities', like science, labour unions, or money, started taking over.

Today there is no single authority, no single institution that people believe. There is mainly distrust of anything that claims to be an authority. In other words, most questions remain unanswered. Today, the peasant has no priest to take away his doubt.

Enter AI. Millions of people are talking to ChatGPT and are using it to answer questions. And I wonder: What if people start believing the AI? What if this becomes the new authority?

If you think this is far-fetched, then you have not played enough with ChatGPT; you have not tweaked your questions. It knows a whole lot of things, it's a far better writer than me, a better programmer, a better problem solver, and it can learn a hundred million times faster than me.

The motto in the next couple of years will be "When in doubt, ask the AI!".

(This post is written by me by the way, not by ChatGPT.)

February 16, 2023

At my bookseller's…

My short story collection « Stagiaire au spatioport Omega 3000 et autres joyeusetés que nous réserve le futur » is now, like my novel « Printeurs », available in all good bookshops in France, Switzerland, and Belgium.

Some of you have witnessed this first-hand and have very kindly sent me, by email or on Mastodon, photos of my books on the display tables of your favorite dealers. An initiative that made me incredibly happy! So much so that I invite you to keep it up and, why not, to do the same for other authors you like, mentioning them and adding the hashtag #chezmonlibraire.

The importance of the bookseller

Many of us, and I am among the worst, let ourselves be lured by the sirens of the all-online, of the dematerialization of services. Some of you tried to sound the alarm from the very beginning. It has to be said that they were amply right: it was a trap! Now that we are prisoners of the all-powerful Amazon monopoly, delivery workers are subjected to infernal schedules while the quality of our libraries declines dangerously. Far from recommending anything to us, the algorithms mostly push us toward useless purchases, relying on other algorithms that write fake reviews. All of it to make us buy books that are, more and more, written by algorithms.

This is the phenomenon of enshittification, indispensable to the neo-monopolies: after attracting users by financing services at a loss with investors' money, it is time to cash in by making the lives of the captive users as miserable as possible.

On Amazon, this means recommending the products that bring Amazon the most money. Notably self-published books, often generated artificially.

The idea is simple: when a topic suddenly becomes fashionable, blockchains for example, ask an algorithm to write a book on the subject and publish it directly on Amazon using its print-on-demand capabilities. The book will only be printed when it is actually ordered. After the era of fake news comes the era of fake books. Note that we did not have to wait for algorithms to get this kind of book: unscrupulous publishers have always known how to exploit the misery of writers, paying them a pittance to churn out books with enticing titles but empty content.

Faced with this proliferation, this abundance of information, a new era opens before us: the era of the filter. We need to build filters that protect us from permanent informational and sensory aggression.

These filters exist. They are human.

For books, they are called booksellers and librarians.

For a highly sensitive person like me, allergic to shopping malls, bookshops and secondhand bookstores are oases of calm and happiness in the middle of our cities. I like to rummage around and listen to recommendations. My wallet is less enthusiastic but, on these occasions, it no longer gets a say.

I cannot stand most of the popular music spat out by connected speakers in parks and streets or by radios in shops, so I recharge in the silence of rustling paper. And, go figure, when a secondhand bookstore does play music, it is always good, excellent music!

To support this blog, go to your local bookshop!

My town has seen two secondhand bookstores disappear in quick succession (replaced by a food shop and a handbag seller), along with its main bookshop. This loss made me understand the importance and the fragility of small book businesses (I have, in fact, told my wife that the day Slumberland, my comics dealer, closes, we are moving elsewhere).

If you want to support this blog and my work, I ask one thing of you: order my book, within your means, from a bookshop, an independent one if possible.

Not only will you be supporting my work, you will also be supporting your bookseller, and you may discover unexpected books. In doing so, you will draw the bookseller's attention to my books, which may lead them to recommend these to others.

For me, supporting your brain, your thinkers, and your creators happens #chezmonlibraire.

And when that is not possible, I invite you to favor independent online bookshops.

The hidden track

I perfectly understand those who prefer the electronic version. It is my medium of choice for fast-paced novels like Printeurs. The paper book nevertheless remains a beautiful object to give as a gift.

And, not that I want to pique your curiosity, but buyers of the paper version of « Stagiaire au spatioport… » (yes, even I find the title too long to type) are in for a surprise! Because, to my knowledge, and if Wikipedia is to be believed, it would be the first book to feature a hidden track!

Let me reassure electronic readers: the hidden track is in your version too. It just isn't hidden, which is less fun.

PS: The illustration was kindly sent to me by a reader from his neighborhood bookshop. If you have sent me photos like this on Mastodon, could you repost them with the tag #chezmonlibraire? I am discovering that it is impossible to find messages again on Mastodon if you have not bookmarked them…


February 15, 2023

(Illustration: an old film roll featuring an ostrich, purple like Nostr's mascot, running across every frame.)

I recently discovered Nostr, a decentralized social network that I find exciting and promising.

Technically, Nostr is a protocol, not a social network. However, developers can use the Nostr protocol to create a variety of applications, including social networks.

Nostr has been around for a few years, but in December 2022, Jack Dorsey, the co-founder and former CEO of Twitter, announced a donation of 14 bitcoins, valued at approximately $250,000, to @fiatjaf, the anonymous founder of Nostr.

Nostr stands for Notes and Other Stuff Transmitted by Relays. At its core, it is a system to exchange signed messages. The basic architecture can be explained in three bullets:

  • Every Nostr user is identified by a public key.
  • Users send messages to and retrieve messages from servers. These servers are called relays.
  • Messages are called events. Users sign events with their private key. Events can be social media posts, private messages, chess moves, etc.
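To make the "signed events" idea concrete, here is a minimal Python sketch of how an event id is formed per my reading of NIP-01: the id is the SHA-256 of a canonical JSON serialization of the event's fields. The actual Schnorr signature over this id requires a secp256k1 library and is omitted; the pubkey value below is a hypothetical placeholder.

```python
import hashlib
import json

def nostr_event_id(pubkey: str, created_at: int, kind: int, tags: list, content: str) -> str:
    """Compute a Nostr event id: the SHA-256 of the canonical JSON array
    [0, pubkey, created_at, kind, tags, content], serialized without whitespace."""
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),  # compact form: no spaces between tokens
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Kind 1 is a plain text note.
event_id = nostr_event_id(
    pubkey="a" * 64,        # hypothetical 32-byte hex public key
    created_at=1676500000,  # unix timestamp
    kind=1,
    tags=[],
    content="Hello, Nostr!",
)
print(event_id)  # a 64-character hex digest
```

Relays can recompute this hash from the event's fields, which is what makes events verifiable independently of which relay stored them.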

I reviewed the Nostr protocol and found it to be straightforward to understand. The basic Nostr protocol seems simple enough to implement in a day. This is a quality I appreciate in protocols. It is why I love RSS, for example.

While the core Nostr protocol is simple, it is very extensible. It is extended using NIPs, which stands for Nostr Implementation Possibilities. NIPs can add new fields and features to Nostr messages or events. For example, NIP-2 adds usernames and contact lists (followers), NIP-8 adds mentions, NIP-36 adds support for content warnings, etc.

Joining the Nostr social network

Despite Nostr being just a few years old, there are a number of clients. I decided on Damus, a Twitter-like Nostr client for iOS. (Nostr's Damus is a clever pun on Nostradamus, the French astrologer.)

You don't need to create a traditional account to sign up. You just use a public and private key. You can use these keys to use the platform anonymously. Unlike with proprietary social networks, you don't need an email address or phone number to register.

If you want, you can choose to verify your identity. Verifying your identity links your public key to a public profile. I verified my identity using NIP-05, though other options exist. The NIP-05 verification process involved creating a static file on my website. It verifies that I own the name @Dries and the public key npub176xpl3dl0agjt7vjeccw6v5grlx8f9mhc75aazwvvqfjvq5al8uszj5asu.
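For illustration, the check a client performs after fetching the domain's /.well-known/nostr.json document can be sketched like this (based on my reading of NIP-05; the helper name and example document are my own, and real documents map names to hex keys, not npub strings):

```python
def nip05_matches(document: dict, name: str, pubkey_hex: str) -> bool:
    """Check a fetched /.well-known/nostr.json document: NIP-05 maps a
    local name to a lowercase hex public key under the "names" key."""
    return document.get("names", {}).get(name, "").lower() == pubkey_hex.lower()

# Hypothetical document served by a website:
doc = {"names": {"dries": "b" * 64}}

print(nip05_matches(doc, "dries", "B" * 64))  # True
print(nip05_matches(doc, "mallory", "b" * 64))  # False
```

If the key listed under the claimed name matches the event author's key, the client displays the name as verified.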

Nostr versus ActivityPub

Recently, Elon Musk became the world's richest troll, and many people left Twitter for Mastodon. Mastodon is a decentralized social media platform built on the ActivityPub protocol. I wanted to compare ActivityPub with Nostr, as Nostr offers many of the same promises.

Before I do, I want to stress that I am not an expert in either ActivityPub or Nostr. I have read both specifications, but I have not implemented a client myself. However, I do have a basic understanding of the differences between the two.

I also want to emphasize that both Nostr and ActivityPub are commendable for their efforts in addressing the problems encountered by traditional centralized social media platforms. I'm grateful for both.

ActivityPub has been around for longer, and is more mature, but by comparison, there is a lot more to like about Nostr:

  • Nostr is more decentralized — Nostr uses a public key to identify users, while ActivityPub utilizes a more conventional user account system. ActivityPub user accounts are based on domain names, which can be controlled by third-party entities. Nostr's identification system is more decentralized, as it does not rely on domain names controlled by outside parties.
  • Nostr is easier to use — Decentralized networks are notoriously tough to use. To gain mass adoption, the user experience of decentralized social networks needs to match and exceed that of proprietary social networks. Both Nostr and Mastodon have user experience problems that stem from being decentralized applications. That said, I found Nostr easier to use, and I believe it is because the Nostr architecture is simpler.
    • Migrating to a different Mastodon server can be challenging, as your username is tied to the domain name of the current Mastodon server. However, this is not a problem in Nostr, as users are identified using a unique public key rather than a domain name.
    • Nostr doesn't currently offer the ability to edit or delete messages easily. While there is an API available to delete a message from a relay, it requires contacting each relay that holds a copy of your message to request its deletion, which can be challenging in practice.
  • Nostr makes it easier to select your preferred content policies — Each Mastodon server or Nostr relay can have its own content policy. For example, you could have a Nostr relay that only lets verified users publish, does not allow content that has anything to do with violence, and conforms to the local laws of Belgium. Being able to seamlessly switch servers or relays is very valuable because it allows users to choose a Mastodon server or Nostr relay that they align with. Unfortunately, migrating to a different Mastodon server, to opt into a different content policy, can be a challenging task.
  • Nostr is easier to develop for — The Nostr protocol is easier to implement than the ActivityPub protocol, and appears more extensible.
  • Nostr has Zaps, which is potentially game-changing — ActivityPub lacks an equivalent of Zaps, which could make it harder to address funding issues and combat spam. More on that in the next section.

Lastly, both protocols likely suffer from problems unique to decentralized architectures. For example, when you post a link to your site, most clients will try to render a preview card of that link. That preview card can contain an image, the title of the page, and a description. To create preview cards, the page is fetched and its HTML is parsed, looking for Open Graph tags. Because of the distributed nature of both Nostr and Mastodon, this can cause a site to get hammered with requests.


Social networks are overrun with spam and bots. Ads are everywhere. Platform owners profit from content creators, and content creators themselves don't make money. The world needs some breakthrough in this regard, and Nostr's Zap-support might offer solutions.

A Zap is essentially a micropayment made using Bitcoin's Lightning network. Although Nostr itself does not use blockchain technology, it enables each message or event to contain a "Zap request" or "Zap invoice" (receipt). In other words, Nostr has optional blockchain integration for micropayment support.

The implementation of this protocol extension can be found in NIP-57, which was finalized last week. As a brand new development, the potential of Zap-support has yet to be fully explored. But it is not hard to see how micropayments could be used to reward content creators, fund relay upkeep, or decrease spam on social media platforms. With micropayments supported at the protocol level, trying out and implementing new solutions has become simpler than ever before.

One potential solution is for receivers to require 200 satoshi (approximately $0.05) to receive a message from someone outside of their network. This would make spamming less economically attractive to marketers. Another option is for relays to charge their users a monthly fee, which could be used to maintain a block-list or content policy.

Personally, I am a big fan of rewarding content creators, financing contributions, and implementing anti-spam techniques. It aligns with my interest in public good governance and sustainability.

For the record, I have mixed feelings about blockchains. I've HODL'd Bitcoin since 2013 and Ethereum since 2017. On one hand, I appreciate the opportunities and innovation they offer, but on the other hand, I am deeply concerned about their energy consumption and contribution to climate change.

It's worth noting that the Lightning network is much more energy efficient than Bitcoin itself. Lightning operates on top of the Bitcoin network. The main Bitcoin blockchain, known as a layer 1 blockchain, is very energy inefficient and handles fewer than 10 transactions per second. In contrast, the Lightning network, known as a layer 2 network, uses far less energy and can potentially handle millions of transactions per second on top of the Bitcoin network.

So, yes, Zap support is an important development to pay attention to. Even though it's brand new, I believe that in five years, we'll look back and agree that Zap support was a game-changer.


"Notes and Other Stuff, Transmitted by Relays" seems like a promising idea, even at this early stage. It is definitely something to keep an eye on. While for me it was love at first sight, I'm not sure how it will evolve. I am interested in exploring it further, and if time permits, I plan to create some basic integration with my own Drupal site.

Also posted on IndieNews.

Modern AI and the end of privacy

When you think about it, the gigacorps currently developing consumer-facing AI chatbots are also the companies spying most heavily on our private lives.

Well, it's obvious, because every single company now tries to spy on you as much as it can, gathering so much data that it can't even handle it (only last week, I asked to be removed from some shop databases and received a reply telling me that everything had been erased, yet I'm still receiving daily spam from them). Companies have so much data, duplicated across many backups, that they don't even know what to do with it.

And that data, sooner or later, will be used to train AIs. In fact, it already has been, for years: look no further than Gmail's reply suggestions.

The first consequence is that AI chatbots will quickly start to argue with you, insult you or, why not, send you dick pics. Those are, after all, a huge part of written human communication.

But the terrifying part is probably that there's no way to prevent leaks. Anybody using a trained chatbot will quickly find ways to ask whether Alice and Bob were exchanging emails and what they were about. Whether Eve was sick or not.

Worst of all, most of it will probably be hallucinations: false data invented by the AI itself. But a few clickbait stories featuring real information leakage will be enough to cast a doubt that any answer by an AI "might be true".

Despite many warnings, we have handed total control of our lives to a few monopolies. Even if you were careful, public data about you is probably enough to "sound mostly true". Most of your emails ended up in a Gmail or Outlook inbox even if you don't use those services yourself.

In my latest book, the short story "Le jour où la transparence se fit" is about the brutal and sudden disappearance of privacy. I'm glad the book is now in stores because, in a few months, it will probably not sound like science fiction any more…

As a writer and an engineer, I like to explore how technology impacts society. You can subscribe by email or by RSS. I value privacy and never share your address.

If you read French, you can support me by buying/sharing/reading my books and subscribing to my newsletter in French or RSS. I also develop Free Software.

February 14, 2023

I was watching the video for You Can Call Me Al, Paul Simon's universally known song. In it, I recognized an actor playing a kind of servant to lead singer Chevy Chase.

That servant reminded me of Jonathan Holslag. Yes, really. Seriously.

I just wanted to mention it. You may decide for yourselves who Al (Chevy) is and who Betty (Jonathan) is. But suppose they were the US and the EU?! Even if they weren't cast in that order!?

Nan na na nah. Nan na na nah.

I can call you Betty, Jonathan. And Betty, when you call me, you can call me Al.

I consider myself a Paul Simon generalist, trying to see the big picture without losing sight of the details.

February 13, 2023

My friend and #WordPress performance guru Bariş is from Turkey and is asking for help from the WordPress ecosystem. Share as widely as possible!


The number one question I've gotten in the past week has been "how can I support people affected by the layoff?"

First, some general advice:

  • For everyone, remember to "comfort in, dump out" - vent your frustrations to people less affected, support those who are more affected than you.
  • To support Xooglers, don't ask "how can I help?" as that places more burden on them. Offer specific ways you can help them: "Can I write you a LinkedIn recommendation? Can I connect you with this person I know at this company who's hiring?". People are affected disproportionately, and if you want to prioritize your help, consider starting with the people on a visa who are now on a tight deadline to find a new sponsor or face leaving the country.
  • To support your colleagues still here, remember we're not all having the same experience. In particular, people outside of the US will be in limbo for weeks or months to come. People can be anywhere on a spectrum of "long time Googler, first mass layoff" to "I've had to go through worse". Don't assume, lead with curiosity, and listen.

Some resources people shared you might find helpful:

Taken from The Playlist - a curated perspective on the intersection of form and content (subscribe, discuss)


February 11, 2023

On Humans and Machines

In the ultimate form of marketing-capitalism, companies try to transform human workers into replaceable working machines and ask them to produce machines that should sound like they are humans.

To achieve that, they build machines that learn from humans.

Humans, meanwhile, believe that, in order to be successful, they need to act like machines acting like humans. That's because success is defined by counters created by the machines. And the machines themselves are now learning from machines that act like humans instead of learning from humans.

So, in the end, we have humans acting like "machines learning from machines acting like humans" built by humans acting like machines.

That makes "being human" really confusing. Luckily, I don't need to think about what "being a machine" means.


February 09, 2023

I published the following diary on the SANS Internet Storm Center: "A Backdoor with Smart Screenshot Capability":

Today, everything is “smart” or “intelligent”. We have smartphones, smart cars, smart doorbells, etc. Being “smart” means performing actions depending on the context, the environment, or user actions.

For a while, backdoors and trojans have implemented screenshot capabilities. From an attacker's point of view, it's interesting to "see" what's displayed on the victim's computer. Taking a screenshot in Python is as easy as this… [Read more]

The post [SANS ISC] A Backdoor with Smart Screenshot Capability appeared first on /dev/random.

When it comes to home automation, people often end up with devices supporting the Zigbee or Z-Wave protocols, but those devices are relatively expensive. When I was looking for a way to keep an eye on the temperature at home a few years ago, I bought a bunch of cheap temperature and humidity sensors emitting radio signals in the unlicensed ISM (Industrial, Scientific, and Medical) frequency bands instead. Thanks to Benjamin Larsson's rtl_433 and, more recently, NorthernMan54's rtl_433_ESP and Florian Robert's OpenMQTTGateway, I was able to integrate their measurements easily into my home-automation system.

I wrote an article describing these projects and what you need to integrate them into your home-automation system such as Home Assistant: Using low-cost wireless sensors in the unlicensed bands. The article also describes how I'm migrating from a Raspberry Pi-based setup with an RTL-SDR dongle running rtl_433 (as described in my home automation book Control Your Home with Raspberry Pi) towards a more distributed setup with multiple LILYGO boards with a 433 MHz receiver, running the OpenMQTTGateway firmware, around the house. They have less range than the RTL-SDR dongle, but they are cheap, and they all send their decoded sensor values to the same MQTT broker, so the result is the same as having a single receiver with a longer range.
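For readers wiring this up themselves: decoded sensor readings arrive as JSON messages over MQTT. A small sketch of extracting the useful fields from one such message (the field names follow rtl_433's typical output, e.g. temperature_C and battery_ok, but vary per sensor model; the example payload is my own illustration):

```python
import json

def parse_rtl433_message(payload: str) -> dict:
    """Extract the fields a home-automation system typically needs from a
    JSON message as published by rtl_433 / OpenMQTTGateway."""
    data = json.loads(payload)
    return {
        "model": data.get("model"),
        "id": data.get("id"),
        "temperature_C": data.get("temperature_C"),
        "humidity": data.get("humidity"),
        "battery_ok": data.get("battery_ok"),
    }

# Hypothetical payload, modeled after typical rtl_433 output:
msg = '{"model":"Nexus-TH","id":42,"temperature_C":21.3,"humidity":55,"battery_ok":1}'
print(parse_rtl433_message(msg)["temperature_C"])  # 21.3
```

In practice you would subscribe to the gateway's MQTT topics (e.g. with paho-mqtt) and feed each message through a parser like this before handing the values to Home Assistant or your own automation logic.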


The article doesn't go into detail about rtl_433_ESP, but this is a fairly recent and interesting development. While rtl_433 implements signal demodulation in software, rtl_433_ESP uses the transceiver chipset (SX127X on the LILYGO LoRa32 V2.1_1.6.1 433MHz board) to do this. This makes the ESP32 implementation more limited in the signals it can receive, because the transceiver only supports a single modulation scheme at a time. As NorthernMan54 had a lot of devices with OOK (on-off keying) modulation at the time he started the port and none with FSK (frequency-shift keying) modulation, OOK devices are currently the only ones supported. More specifically, rtl_433_ESP supports rtl_433's Pulse Position Modulation (OOK_PPM), Pulse Width Modulation (OOK_PWM) and Pulse Manchester Zero Bit (OOK_PULSE_MANCHESTER_ZEROBIT) demodulation modules. This limits the available device decoders to 81 of the 234 decoders of rtl_433.

There are other microcontroller implementations of decoders for 433 MHz sensors, but the work NorthernMan54 has done building on rtl_433's code base and making it easy to use it with OpenMQTTGateway is impressive. NorthernMan54 told me he created some scripts that should help automate the port and keep his code synchronized with the roughly annual release cycle of rtl_433.

February 06, 2023

"Even the most junior SRE on call starts having director authority. [..] There is a power in that relationship that SRE does have when they think something is in danger. And it's a power we have to be careful not to misuse. But it's important, because that's our job."

Macey is the guest on Episode 1 of the SRE Prodcast, Google's podcast about Site Reliability Engineering. She goes in-depth on some of the core tenets of SRE, including risk, on-call, toil, design involvement, and more. (As a side note, I'm reasonably certain that I'm not the entertaining Belgian who was causing her team's failure loops, but I'm too afraid to ask.)

The whole series is worth a listen but, just like the podcast itself, start with Macey's advice.

"My definition of toil: toil is boring or repetitive work that does not gain you a permanent improvement."


February 04, 2023

"if you’re going to open your mouth, ask yourself if what you are about to say is likely to provide comfort and support. If it isn’t, don’t say it. Don’t, for example, give advice."

Susan Silk's Ring Theory is a helpful model to navigate what not to say during times of grief and traumatic events.

Picture a center ring, and inside it the people most affected by what's going on. Picture a larger circle around it, with inside it the people closest to those in the center. Repeat outwards.

The person in the center ring can say anything they want to anyone, anywhere.
Everyone else can say those things too, but only to people in the larger outside rings. Otherwise, you support and comfort.

Now, consider where in this diagram you are, and where the people you are talking to are.

"Comfort IN, dump OUT."

This model applies in other situations - for example, managers are better off complaining to their own managers or peers, while supporting their own reports and absorbing their complaints with empathy and compassion.


January 29, 2023


I use KVM and cloud-init to provision virtual machines on my home network. I migrated all my services to Raspberry Pis running GNU/Linux and FreeBSD to save power.

I first wanted to use Terraform, but the libvirt Terraform provider wasn't compatible with arm64 (at least not at that time).

So I started to create a few Ansible roles to provision the virtual machines.

delegated_vm_install is a wrapper around these roles to provision the virtual machine in a delegated way. It allows you to specify the Linux/libvirt KVM host as part of the virtual machine definition.


delegated_vm_install 1.1.0

  • update_ssh_known_hosts directive added, allowing the ssh host key to be updated after the virtual machine is installed.
  • Documentation updated.
  • Debug code added.

Have fun!

Delegated VM install

An Ansible role to install a libvirt virtual machine with virt-install and cloud-init.

This role is designed to delegate the install to a libvirt hypervisor.


The role is a wrapper around the following roles:

Install the required roles with

$ ansible-galaxy install -r requirements.yml

This will install the latest default branch releases.

Or follow the installation instruction for each role on Ansible Galaxy.


Ansible galaxy

The role is available on Ansible Galaxy.

To install the role from Ansible Galaxy, execute the command below. This will install the role with its dependencies.

ansible-galaxy install stafwag.delegated_vm_install

Source Code

If you want to use the source code directly, clone the role repository:

$ git clone

and put it into the role search path.

Supported GNU/Linux Distributions

It should work on most GNU/Linux distributions. cloud-localds is required. cloud-localds was available on CentOS/RedHat 7, but not on RedHat 8. You’ll need to install it manually to use the role on CentOS/RedHat 8.

  • Archlinux
  • Debian
  • CentOS 7
  • RedHat 7
  • Ubuntu

Role Variables and templates


Virtual Machine specific

  • vm_ip_address: Required. The virtual machine ip address.
  • vm_kvm_host: Required. The hypervisor host to which the role will delegate the installation.


The delegated_vm_install hash holds the defaults used to set up the virtual machine. If you need to overwrite them for a virtual machine or host group, you can use the delegated_vm_install_overwrite hash. Both hashes are merged; if a variable appears in both, the value from delegated_vm_install_overwrite is used.

  • delegated_vm_install:
    • security: Optional
      • file:

        The file permissions of the created images.

        • owner: Default: root
        • group: Default: kvm
        • mode: Default: 660
      • dir:

        The directory permissions of the created images.

        • owner: Default: root
        • group: Default: root
        • mode: Default: 0751
    • post:

      Post actions on the created virtual machines. By default, the post actions are only executed when the virtual machine is created by the role. If the virtual machine already exists the post actions are skipped, unless always is set to true.

      The ssh host key is not updated by default. If you set update_ssh_known_hosts to true, the ssh host key of the virtual machine is updated to ansible_host in ${HOME}/.ssh/known_hosts on the ansible host.

      • pause: Optional. Default: seconds: 10. Time to wait before executing the post actions.
      • update_etc_hosts: Optional. Default: true
      • ensure_running: Optional. Default: true
      • package_update: Optional. Default: true
      • reboot_after_update: Optional. Default: true
      • always: Optional. Default: false
      • update_ssh_known_hosts: Optional. Default: false
    • vm:

      The virtual machine settings.

      • template: Optional. Default: templates/vms/debian_vm_template.yml. The virtual machine template.
      • hostname: Optional. The vm hostname.
      • path: Optional. Default: /var/lib/libvirt/images/
      • boot_disk: Required. src: The path to the installation image.
      • size: Optional. Default: 100G
      • disks: Optional array with stafwag.qemu_img disks.
      • default_user:
        • name: The name of the default user.
        • passwd: The password hash. Default: !!. Set it to false to lock the user.
        • ssh_authorized_keys: The ssh keys. Default: ""
      • dns_nameservers: Optional.
      • dns_search: Optional. Default: ''
      • interface: Optional. Default: enp1s0
      • gateway: Required. The vm gateway.
      • reboot: Optional. Default: false. Reboot the vm after cloud-init has completed.
      • poweroff: Optional. Default: true. Power off the vm after cloud-init has completed.
      • wait: Optional. Default: -1. The wait argument passed to virt-install. A negative value waits until the vm is shut down; 0 executes the virt-install command and disconnects.
      • commands: Optional. Default: []. List of commands to be executed during the cloud-init phase.
  • delegated_vm_install_overwrite:

    The overwrite hash. This allows you to overwrite the settings above, which can be useful for having global settings and overwriting values for a specific virtual machine.
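As an illustration of the merge behaviour, group-level defaults and a host-level overwrite could be combined like this (a sketch; the memory and cpus values are invented for the example):

```yaml
# group_vars: defaults for all virtual machines (example values)
delegated_vm_install:
  vm:
    memory: 2048
    cpus: 2

# host_vars for one vm: only the overridden key needs to be set
delegated_vm_install_overwrite:
  vm:
    cpus: 4   # this value wins over the group default above
```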

Return variables

  • vm:
    • hostname: The virtual machine hostname.
    • path: The virtual machine path.
    • default_user: The default user hash.
    • dns_nameservers: Optional.
    • dns_search: Optional. Default: ''
    • interface: Optional. Default: enp1s0
    • ip_address: The vm ip address.
    • gateway: The vm gateway.
    • qemu_img: An array of qemu_img disks.
    • virt_install_import.disks: Disk string array. Used by

Example playbook


Create an inventory.

  • inventory_vms.yml
    ansible_user: staf
    ansible_python_interpreter: /usr/bin/python3
        path: /var/lib/libvirt/images/k3s/
          size: 50G
          - name: "_zfsdata.qcow2"
            size: 20G
            owner: root
        memory: 4096
        cpus: 4
          passwd: false
            - ""
      vm_ip_address: ""
      vm_kvm_host: vicky
      vm_ip_address: ""
      vm_kvm_host: vicky
      vm_ip_address: ""
      vm_kvm_host: vicky


  • create_vms.yml
- name: Setup the virtual machine
  hosts: k3s_cluster
  become: true
  gather_facts: false
  roles:
    - role: stafwag.delegated_vm_install
      vars:
        ansible_ssh_common_args: '-o StrictHostKeyChecking=no'

Run the playbook

$ ansible-playbook -i inventory_vms.yml create_vms.yml

Have fun!

January 28, 2023

Today I wanted to program an ESP32 development board, the ESP-Pico-Kit v4, but when I connected it to my computer's USB port, the serial connection didn't appear in Linux. Suspecting a hardware issue, I tried another ESP32 board, the ESP32-DevKitC v4, but this didn't appear either, so then I tried another one, a NodeMCU ESP8266 board, which had the same problem. Time to investigate...

The dmesg output looked suspicious:

[14965.786079] usb 1-1: new full-speed USB device number 5 using xhci_hcd
[14965.939902] usb 1-1: New USB device found, idVendor=10c4, idProduct=ea60, bcdDevice= 1.00
[14965.939915] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[14965.939920] usb 1-1: Product: CP2102 USB to UART Bridge Controller
[14965.939925] usb 1-1: Manufacturer: Silicon Labs
[14965.939929] usb 1-1: SerialNumber: 0001
[14966.023629] usbcore: registered new interface driver usbserial_generic
[14966.023646] usbserial: USB Serial support registered for generic
[14966.026835] usbcore: registered new interface driver cp210x
[14966.026849] usbserial: USB Serial support registered for cp210x
[14966.026881] cp210x 1-1:1.0: cp210x converter detected
[14966.031460] usb 1-1: cp210x converter now attached to ttyUSB0
[14966.090714] input: PC Speaker as /devices/platform/pcspkr/input/input18
[14966.613388] input: BRLTTY 6.4 Linux Screen Driver Keyboard as /devices/virtual/input/input19
[14966.752131] usb 1-1: usbfs: interface 0 claimed by cp210x while 'brltty' sets config #1
[14966.753382] cp210x ttyUSB0: cp210x converter now disconnected from ttyUSB0
[14966.754671] cp210x 1-1:1.0: device disconnected

So the ESP32 board, with a Silicon Labs CP2102 USB-to-UART controller chip, was recognized and attached to the /dev/ttyUSB0 device, as it normally should be. But then suddenly the brltty command intervened and disconnected the serial device.

I looked up what brltty does, and apparently it is a system daemon that provides console access for blind people using a braille display. When looking into the contents of the package on my Ubuntu 22.04 system (with dpkg -L brltty), I saw a udev rules file, so I grepped for the product ID of my USB device in the file:

$ grep ea60 /lib/udev/rules.d/85-brltty.rules
ENV{PRODUCT}=="10c4/ea60/*", ATTRS{manufacturer}=="Silicon Labs", ENV{BRLTTY_BRAILLE_DRIVER}="sk", GOTO="brltty_usb_run"

Looking at the context, this file shows:

# Device: 10C4:EA60
# Generic Identifier
# Vendor: Cygnal Integrated Products, Inc.
# Product: CP210x UART Bridge / myAVR mySmartUSB light
# BrailleMemo [Pocket]
# Seika [Braille Display]
ENV{PRODUCT}=="10c4/ea60/*", ATTRS{manufacturer}=="Silicon Labs", ENV{BRLTTY_BRAILLE_DRIVER}="sk", GOTO="brltty_usb_run"

So apparently there's a braille display with the same CP210x USB-to-UART controller that a lot of microcontroller development boards have. And because this udev rule claims the interface for the brltty daemon, UART communication with all these development boards isn't possible anymore.

As I'm not using these braille displays, the fix for me was easy: just find the systemd unit that loads these rules, mask it, and stop it.

$ systemctl list-units | grep brltty
brltty-udev.service loaded active running Braille Device Support
$ sudo systemctl mask brltty-udev.service
Created symlink /etc/systemd/system/brltty-udev.service → /dev/null.
$ sudo systemctl stop brltty-udev.service

After this, I was able to use the serial interface again on all my development boards.

January 26, 2023

The latest MySQL release has been published on January 17th, 2023. MySQL 8.0.32 contains some new features and bug fixes. As usual, it also contains contributions from our great MySQL Community.

I would like to thank all contributors on behalf of the entire Oracle MySQL team!

MySQL 8.0.32 contains patches from Facebook/Meta, Alexander Reinert, Luke Weber, Vilnis Termanis, Naoki Someya, Maxim Masiutin, Casa Zhang from Tencent, Jared Lundell, Zhe Huang, Rahul Malik from Percona, Andrey Turbanov, Dimitry Kudryavtsev, Marcelo Altmann from Percona, Sander van de Graaf, Kamil Holubicki from Percona, Laurynas Biveinis, Seongman Yang, Yamasaki Tadashi, Octavio Valle, Zhao Rong, Henning Pöttker, Gabrielle Gervasi and Nico Pay.

Here is the list of the above contributions and related bugs. We can see that for this release our connectors received several contributions, always a good sign of their increasing popularity.

We can also notice the return of a major contributor: Laurynas Biveinis!


Connector / NET

  • #74392 – Support to use (Memory-)Stream for bulk loading data – Alexander Reinert
  • #108837 – Fix unloading issues – Gabriele Gervasi

Connector / Python

  • #81572 – Allow MySQLCursorPrepared.execute() to accept %(key_name)s in operations and dic – Luke Weber
  • #81573 – Add MySQLCursorPreparedDict option – Luke Weber
  • #82366 – Warning behaviour improvements – Vilnis Termanis
  • #89345 – Reduce callproc roundtrip time – Vilnis Termanis
  • #90862 – C extension – Fix multiple reference leaks – Vilnis Termanis
  • #96280 – prepared cursor failed to fetch/decode result of varbinary columns – Naoki Someya
  • #103488 – Stubs (.pyi) for type definition for connection and cursor objects – Maxim Masiutin
  • #108076 – If extra init_command options are given for the Django connector, load them – Sander van de Graaf
  • #108733 – python connector will return a date object when time is 00:00:00 – Zhao Rong

Connector / J

  • #104954 – MysqlDataSource fails to URL encode database name when constructing JDBC URL – Jared Lundell
  • #106252 – Connector/J client hangs after prepare & execute process with old version server – Zhe Huang
  • #106981 – Remove superfluous use of boxing – Andrey Turbanov
  • #108414 – Malformed packet generation for COM_STMT_EXECUTE – Seongman Yang
  • #108419 – Recognize “ON DUPLICATE KEY UPDATE” in “INSERT SET” Statement – Yamasaki Tadashi
  • #108814 – Fix name of relocation POM file – Henning Pöttker

Connector / C++

  • #108652 – Moving a local object in a return statement prevents copy elision – Octavio Valle

Clients & API

  • C API (client library) Fix sha256_password_auth_client_nonblocking – Facebook
  • #105761 – mysqldump makes a non-consistent backup with the --single-transaction option – Marcelo Altmann
  • #108861 – Fix typo in dba.upgradeMetadata() error message (Shell AdminAPI) – Nico Pay


  • Fix race between binlog sender heartbeat timeout – Facebook

InnoDB and Clone

  • #106616 (private) – 8.0 upgrade (from 5.6) crashes with Assertion failure – Rahul Malik
  • #107854 (private) – Assertion failure: dict0mem.h – Marcelo Altmann
  • #108111 – Garbled UTF characters in SHOW ENGINE INNODB STATUS – Kamil Holubicki
  • #108317 – clone_os_copy_file_to_buf partial read handling completely broken – Laurynas Biveinis


  • #104934 (private) – Statement crash – Casa Zhang
  • #107633 – Fixing a type-o, should be “truncate” – Dimitry Kudryavtsev

If you have patches and you also want to be part of the MySQL contributors, it’s easy: you can send pull requests via MySQL’s GitHub repositories or submit your patches on Bugs MySQL (signing the Oracle Contributor Agreement is required).

Thank you again to all our contributors!

January 25, 2023

Twenty years ago… I decided to start a blog to share my thoughts! That’s why I called it “/dev/random”. What was the Internet like twenty years ago? Well, there were good things and bad ones…

Over the years, the blog content evolved, and I wrote a lot of technical stuff related to my job, experiences, tools, etc. Then I had the opportunity to attend a lot of security conferences and started to write wrap-ups. With COVID, fewer conferences and no more reviews. For the last few months, I’ve mainly been writing diaries for the Internet Storm Center; therefore, I publish less private stuff here and just relay the content published on the ISC website. If you have been reading my stuff for a long time (or even if you are a newcomer), thank you very much!

A few stats about the site:

  • 2056 articles
  • 20593 pictures
  • 5538 unique visitors to the RSS feed in the last 30 days
  • 85000 hits/day on average (bots & attacks included?)

I know that these numbers might seem low to many of you, but I’m proud of them!

The post This Blog Has 20 Years! appeared first on /dev/random.

I published the following diary on “A First Malicious OneNote Document“:

Attackers are always trying to find new ways to deliver malware to victims. They recently started sending Microsoft OneNote files in massive phishing campaigns. OneNote files (with the extension “.one”) are handled automatically by computers that have the Microsoft Office suite installed. Yesterday, my honeypot caught a first sample. This is a good opportunity to have a look at these files. The file, called “”, was delivered as an attachment to a classic phishing email… [Read more]

The post [SANS ISC] A First Malicious OneNote Document appeared first on /dev/random.

January 24, 2023

Phew, getting PHP 8.2 working in my #wordpress plugin’s Travis tests required quite a bit of trial & error in travis.yaml, but 8 commits later I won. The solution was in these extra lines to ensure libonig5 was installed...
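For reference, the kind of addition that pulls in libonig5 on newer Ubuntu Travis images looks roughly like this (a sketch; the exact lines the author used are not shown in the post):

```yaml
# Hypothetical .travis.yml fragment: install the Oniguruma runtime
# library that PHP 8.2's mbstring extension links against.
addons:
  apt:
    packages:
      - libonig5
```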


January 23, 2023

Let's free culture to cultivate freedom

This talk was given on 19 November 2022 in Toulouse as part of Capitole du Libre.
The text is my working script and does not include the many improvisations and digressions inherent to every One Ploum Show.
Warning! This conréfence is not a conréfence about cyclimse. Thank you for your understanding.

Who among you caught that reference to "La classe américaine"? It's a pleasure to be here. I'm happy to see you. We're going to eat chips. What? That's all the reaction I get when I tell you we're going to eat chips?

Seriously, I am very happy to be here among you. I feel in my element. I have moved through industry, startups, academia and even a bit of finance. But it is only among free-software people that I feel at home. Because we share the same culture. Because we agree that Vim is far better than Emacs. (No, not the tomatoes!)

That is what culture is: references that let us understand each other, that express a certain complicity. One of the highlights of my marriage was showing "La cité de la peur" to my wife. She didn't love the film. Meh. But we extended our shared vocabulary.

— I'm hungry! I'm hungry! I'm hungry!

— Can we drop the formalities? You're exhausting!

("Yes, but I'm still hungry," someone in the audience replies)

Culture is exactly that: an extension of vocabulary. Any programmers in the room? Well, the language (French, at the moment) is the programming language. Culture is the libraries. Language and library. The library is culture. Words are wonderful!

To express ourselves, to communicate, to relate, in short to be human, culture is indispensable. When two cultures are too different, it is easy to regard the other as inhuman, as an enemy. Culture, and the sharing of it, is what makes us human.

Yet culture is in danger. It is threatened, hunted down, forbidden. Replaced by a standardized substitute.

Broadening your culture means enlarging your vocabulary, refining your understanding of the world. Culture is the foundation of how we see the world. Lending a book you love is an act of love, of intimacy. It is literally baring yourself and saying: "I would like us to understand each other more deeply." It's wonderful!

But how long will that remain legal? Or even technically possible? Once an author dies, their work disappears for 70 years, because for the vast majority of works it is not profitable to reprint them and pay royalties to the descendants. So we kill the culture along with the author.

Transmission, however, is essential. Culture feeds itself, evolves and transforms through interactions and exchanges. But those interactions are now monitored, monetized, spied upon. As a result, they are fake, rigged, inhuman. Twitter and LinkedIn accounts are mostly fakes. Facebook likes are bought by the shovelful. The visits to your website are bots. TikTok and YouTube content is increasingly generated automatically. The news in the major media? Underpaid journalists (no, even less than that) competing with algorithms to see which content will bring in the most clicks. Newsrooms are now equipped with screens displaying, in real time, the clicks on each piece of content. The journalists' job? Optimizing that. Even open-source code is now generated with GitHub Copilot. These algorithms feed on content in order to generate more of it. Do you see the loop? The "while True"?

For millennia, our brains were faster than our means of communication. We learned, we reflected. For the first time in the history of information, our brain is now the bottleneck. It is the slowest element in the chain! It can no longer absorb everything. It gorges itself and chokes!

When we are online, we feed this enormous monster that lives on our data, our attention, our time, our clicks. We are literally the exploited flesh of the film The Matrix. Except that in The Matrix the bodies are fed and housed in their cocoons, whereas we work and pay for the right to be exploited by this gigantic paperclip factory.

Do you know the story of the paperclip factory? It is a concept invented by the researcher Nick Bostrom in a paper entitled "Ethical Issues in Advanced Artificial Intelligence". The idea is that if you create an artificial intelligence and ask it to make as many paperclips as possible, as fast as possible, it will quickly arrange to eliminate the humans who might slow it down, then turn the entire planet into a mountain of paperclips, keeping only enough resources to colonize other planets in order to turn them into paperclips as well.

In a 2018 talk, the science-fiction author Charlie Stross showed that there is no need to wait for very advanced artificial intelligence for the problem to arise. A corporation is, in essence, a paperclip factory: an entity whose one and only goal is to generate money, even if it destroys its creators, humanity and the planet along the way.

The concept is perfectly illustrated by that magnificent scene in John Steinbeck's "The Grapes of Wrath" in which a farmer confronts a representative of the bank that is expropriating him from his land. He wants to go and kill whoever is responsible for the expropriation. The banker tells him: "The bank has a will that we must obey, even if all of us are opposed to its actions." In short, a paperclip factory.

The paperclip factory makes us spend, turns us into zombies. Have you ever seen a zombie? I have. Whenever I ring my bicycle bell at people holding out a phone at the end of their arm. They are in a virtual world. They have even delegated their sense of hearing to Apple, with those earbuds that never come out and that can pass real-world sound into the ear. With Apple as the intermediary. As in The Matrix, people live in a virtual world. It just started with hearing, instead of the big headsets over the eyes that we used to imagine.

To escape the factory, to avoid being turned into paperclips, we must create, maintain and occupy spaces reserved for humans. Not algorithms. Not corporations. Humans. And put down that smartphone, which literally makes you lose 20 IQ points. That is not a joke: when we say that companies feed on our brain time, it is literal. We literally lose the equivalent of 20 IQ points simply by having a phone nearby. The mere sound of a notification distracts a driver as much as not watching the road for about ten seconds. These machines make us stupid and they kill us! That is not a figure of speech.

Have you noticed how the dehumanization of work increasingly forces us to act like automatons, like algorithms? Fritz Lang's Metropolis and Charlie Chaplin's Modern Times denounced the industrialization that turned our bodies into tools in the service of the machine. A hundred years later, it is exactly the same with brains. We are reshaping them to serve algorithms. Algorithms which, for their part, pretend to pass as humans. We are merging man and machine in a way that is not pretty to watch.

What makes us human is our diversity, our difference from one individual to another, but also from one moment to another. What kind of jerk seriously thinks that because you once sent an email to a company, five years later you want to be spammed every day with their newsletter? I am not making this up; it happened to me recently. Humans evolve, and human culture must be diverse. Like food. Who thinks that eating at McDonald's every day to the point of vomiting is a good idea? So why do we accept doing it to our brains?

For me, the archetype of the industrialization and standardization of culture is the superhero. Culture is reduced to a fight between Marvel and DC exegetes. That is not innocent. Have you ever thought about what a superhero represents? He is literally a billionaire with innate superpowers. He is superior to the people. He is also their only hope. He is sometimes unfairly misunderstood, because he is good, even when he wipes out an entire city and its inhabitants. That is just collateral damage. The people's only right is to shut up. It is literally the image billionaires have of themselves. By comparison, in the 1990s the fashion was disaster movies. The Earth was in danger, and normal humans (the emphasis was on their normality, on the fact that their marriage was struggling, that they were white, or that they were Will Smith) joined forces to perform heroic deeds and save the Earth from an enemy that stood for pollution. The heroes of Jurassic Park? Normal kids and somewhat overwhelmed scientists. Today, the normal human's only right is to shut up and wait for a billionaire to come and protect him. Without a billionaire, normal humans are forced to fight the other normals, because over the last 20 years the billionaires have taught us that collaboration is dead. They have taught us to see every human being as an enemy, a potential competitor, and to try to grab what we can before the final destruction. It is called survivalism.

This worldview we owe to the monopolization of culture. To monoculture. But there is worse! Independent culture has become illegal, immoral. People apologize for pirating, for sharing. Because of one of the biggest intellectual scams there is: intellectual property. A fairly recent catch-all concept into which we throw patents, trade secrets, copyrights, trademarks…

The intellect is a non-rival good. If I share an idea, that makes two ideas. Or 300. The more it is shared, the more culture grows. Preventing sharing means killing culture. The paperclip factories have even managed to convince some artists that their fans are their enemies! That preventing the spread of culture is a good thing. That the fact that they are starving is not due to the monopolies, but to fans sharing their works. Spotify pays artists a tenth of a cent per play, yet pirates are supposedly responsible for the impoverishment of artists. To earn the equivalent of one CD sale, each song on the album would have to be listened to a thousand times on Spotify!

Free culture tried to respond with licenses: GPL, Creative Commons. But we are too nice. Fuck the licenses! Share culture! Spread it! If you do it wholeheartedly, share it between human beings. Boycott Amazon and try to discover the local, independent artists around you. Share them. Spread them. Write reviews, film parodies. Do you know JCFrog and his videos? Well, that is exactly what human culture is. It's wonderful. It's brilliant.

Stop saying "I just want to empty my head with some dumb series." We don't empty our heads. We fill them. With industrial junk or with local artisanal organic produce, take your pick. Make references. The other day on Mastodon I saw someone describe their ride on the Paris metro: "I feel like I'm in Printeurs!" That is the finest compliment you can pay an author. Thank you to that person!

In Printeurs, everything is advertising. That is no accident. Have you noticed how everything looks like an advertisement nowadays? How every film, every music video adopts its codes? How every YouTube video has only one goal left: getting you to subscribe. Making paperclips.

Organic, free culture is not second-rate culture. It simply is not standard. And that is precisely its point.

To exist, free culture needs free platforms. Proprietary platforms were designed by marketing, for marketing. To sell cigarettes and alcohol to 10-year-old kids (that is the definition of marketing; it is simply their job to pretend they do something else. As Bill Hicks said, if you work in marketing, "please kill yourself"). Once we smoke, marketing tries to convince us that it is our freedom, and to make us forget that we pollute, so that we lose even more freedom and pollute even more. Just as the alcoholic drinks to forget that he is an alcoholic, we consume to forget that we consume. The mere fact of being on a marketing platform therefore forces us to do marketing. Personal branding. Engagement. KPIs. Promoting free culture on Facebook is like driving an SUV to a climate march. Yes, but I have an electric bike in the trunk, I'm green! Yes, but Facebook and Insta are where everyone is! No, that is where some people are. On Facebook, unsurprisingly, you only find people who are… on Facebook. Billions of people are not there, for very diverse reasons. The simplest and most convincing way to fight these platforms is quite simply not to be on them.

Free platforms exist. A simple blog, for instance. But they need things to say, stories to tell. The word "free" all by itself tells a story. A story that can be frightening, uncomfortable. So we tried to depoliticize free software, to call it "open source", to strip it of its history. The result is in your pocket. An Android phone runs on an open-source Linux. Yet it is the worst instrument of deprivation of freedom there is. It spies on you, floods you with advertising, deprives you of all control. RMS was right: by renaming free software "open source", we gave up on freedom.

The lesson is that technology cannot be neutral. It is political par excellence. Refusing to tell stories so as not to be political means leaving the floor to the other stories, to advertising. It means pretending, as Thatcher and Reagan used to say, that there is no alternative. I used to say all this while keeping my own Facebook account. It seemed indispensable to me. I struggled to delete it, to deprive myself of what I believed was an essential tool. The second the account was deleted, the veil lifted. It became obvious to me that the opposite was true: to exist as a creator, it was deleting my account that was indispensable.

I claimed I didn't use it, but the mere knowledge that several thousand followers were attached to my name gave me an illusion of success. Even though my posts had no impact (or very rarely), I wrote them for Facebook or for Twitter. One day I caught myself thinking in tweets in the shower. I dried off and deleted my Twitter account, frightened. I was doing nothing but producing paperclips while encouraging you to do the same. My mere presence on a network allowed others to justify theirs. Their presence justifying mine… I was immersed in the writings of Jaron Lanier and Cal Newport when I realized that neither of them had the slightest presence on a proprietary social network. I read them, I admire their thinking. They exist. They are not on social networks. That was a great inspiration to me…

Il faut casser le « pas le choix » ou « TINA (There’Is No Alternative) ». Il y’a 8 milliards d’alternatives. Nous les créons tous les jours, ensemble. Notre rôle n’est pas d’aller convaincre le monde entier de passer à autre chose, mais de créer des multitudes de cocons de culture humaine, d’être prêts à accueillir ceux qui sont dégoutés de leur macdo quotidien, ceux qui, à leur rythme, se lassent d’être exploités et soumis à des algorithmes publicitaires. Il suffit de voir ce qui se passe entre Twitter et Mastodon.

Ces plateformes libres, cette culture libre, il n’y a que nous qui pouvons les préparer, les développer, les faire exister, les partager.

À ceux qui disent que la priorité est la lutter contre le réchauffement climatique, je réponds que la priorité est à la création de plateformes, techniques et intellectuelles, permettant la lutte contre le réchauffement climatique. On ne peut pas être écolo dans un monde financé par la publicité. Il faut penser des alternatives, les inventer. Créer des histoires pour sauver la planète. Une nouvelle forme de culture. Une permaculture !

Mon outil à moi, c’est ma machine à écrire. Elle me libère. Je l’appelle ma « machine à penser ». À vous d’inventer vos propres outils. (oui, même Emacs…) Des outils indispensables pour inventer et partager votre nouvelle culture, ce mélange de code et d’histoires à raconter qui peut sauver l’humanité avant que nous soyons tous transformés en attache-trombones !

Thank you!

And don't forget to subscribe to my channel.

By the way, let me take advantage of this conference against advertising to advertise my new book. Is that free culture? The book is already freely available. But that's not all! My publisher has announced that the entire Ludomire collection (in which my books are published) will move to a CC By-SA license in 2023.

Photo: David Revoy, Ploum, Pouhiou and Gee signing books at the Capitole du Libre in Toulouse on 19 November 2022.

An engineer and writer, I explore the impact of technology on people. Subscribe to my writings in French by email or RSS. For my writings in English, subscribe to the English-language newsletter or the full RSS feed. Your address is never shared, and it is deleted when you unsubscribe.

To support me, buy my books (if possible from your local bookshop)! I have just published a collection of short stories that should make you laugh and think.

January 17, 2023

As you know, before FOSDEM, the MySQL Community Team organizes a 2 days conference in Brussels.

During these preFOSDEM MySQL Days, attendees will have the opportunity to learn more about MySQL from Oracle’s MySQL teams and selected professionals from the MySQL community.

Here is the agenda:

Thursday 2nd February

Registration starts at 09:00 AM.

09:30 AM – 10:00 AM: Welcome to preFOSDEM MySQL Days (lefred, Oracle)
10:00 AM – 11:00 AM: What's new in MySQL until 8.0.32 (Kenny Gryp, Oracle)
11:00 AM – 11:30 AM: Coffee Break
11:30 AM – 12:00 PM: Ideal Techniques for making Major MySQL Upgrades easier! (Arunjith Aravidan, Percona)
12:10 PM – 12:40 PM: Introduction to MySQL Shell for VS Code (Kenny Gryp, Oracle)
12:40 PM – 01:40 PM: Lunch Break
01:40 PM – 02:10 PM: All about MySQL Table Cache (Dmitry Lenev, Percona)
02:20 PM – 02:50 PM: MySQL 8.0 – Dynamic InnoDB Redo Log (lefred, Oracle)
03:00 PM – 03:30 PM: MySQL HeatWave for OLTP – Overview and what's new (Luis Soares, Oracle)
03:30 PM – 04:00 PM: Coffee Break
04:00 PM – 04:30 PM: MySQL HeatWave for OLAP – What is possible today (and what is not) (Castern Thalheimer, Oracle)
04:40 PM – 05:10 PM: How Vitess Achieves consensus with replicated MySQL (Deepthi Sigireddi, Vitess)
05:10 PM – 05:30 PM: End of the day

Friday 3rd February

Registration starts at 09:00 AM.

09:30 AM – 10:20 AM: MySQL High Availability and Disaster Recovery (Miguel Araújo & Kenny Gryp, Oracle)
10:30 AM – 11:00 AM: Migrate from MySQL Galera Base Solution to MySQL Group Replication (Marco Tusa, Percona)
11:00 AM – 11:30 AM: Coffee Break
11:30 AM – 12:00 PM: What's new in ProxySQL 2.5 (René Cannaò, ProxySQL)
12:10 PM – 12:40 PM: MySQL HeatWave Lakehouse Demo (Castern Thalheimer, Oracle)
12:40 PM – 01:40 PM: Lunch Break
01:40 PM – 02:10 PM: What we can know with Performance Schema in MySQL 8.0 (Vinicius Grippa, Percona)
02:20 PM – 03:20 PM: Migration to MySQL Group Replication at thoughts & experiences (Simon J)
03:30 PM – 04:00 PM: Coffee Break
04:00 PM – 04:30 PM: 11 Reasons to Migrate to MySQL 8 (Peter Zaitsev, Percona)
04:40 PM – 05:10 PM: Observability and operability of replication in MySQL 8.0 (Luis Soares, Oracle)
05:30 PM – 06:00 PM: End of the day++ (awards??)
06:00 PM – 07:00 PM: Free Time and Q&A
07:00 PM – late: MySQL Community Dinner (Party-Squad)

Please register for this free, in-person conference; only people with a ticket will be allowed to attend.

After the conference, the Belgian Party-Squad will host the traditional MySQL Community Dinner. Don’t hesitate to register for this amazing event, a unique opportunity to chat with MySQL developers and MySQL professionals over Belgian food and beers.

The Community Dinner must also be ordered separately here:

And finally, Sunday afternoon, don’t miss FOSDEM MySQL & Friends Devroom !

See you soon in Belgium !

January 15, 2023

With FOSDEM just around the corner, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers makes FOSDEM happen and makes it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, with the buildup (starting Friday at noon), heralding during the conference, and cleanup (on Sunday evening). No need to worry about missing lunch: food will be provided. Would you like to be part of the team that makes FOSDEM tick? Sign up here!

I have developed quite a few Python projects over the past few years. Every time you start a new Python project, you have to set up a whole machinery if you want to do it robustly: set up tests and linting, write documentation, configure packaging, and glue it all together with pre-commit hooks and continuous integration.

Until recently I did all of that manually every time, but a few months ago I went looking for a more scalable approach. There are various project generators and project templates for Python. After an extensive comparison I settled on PyScaffold. It lets you easily generate a Git repository template for a Python project that already applies all kinds of best practices around documentation, tests, packaging, dependencies, code style, and so on.

I have since created two Python projects with PyScaffold and am working on a third. The lessons I learned from this, and an overview of everything PyScaffold can set up for you, are described in a background article for Tweakers: Gelikte code met PyScaffold: Python-projecten ontwikkelen als een pro.

Other developers may well make a different choice. For instance, PyScaffold does not use Poetry, a modern tool for managing dependencies and publishing Python packages. But I was not using Poetry myself yet, and PyScaffold turned out to match my style of Python development quite well. Perhaps next year I would choose differently.

In the end, it does not matter much which one you use. Every project generator or project template makes its own choices for the underlying tools, but they all achieve the same thing: time saved when setting up a new project, and a project structure that applies all kinds of best practices, giving you a good foundation for developing robust software. In any case, I will not be starting a new Python project without a project generator or project template anytime soon.

January 12, 2023


Charlie Mackesy - The boy, the mole, the fox and the horse (2019)

Insofar as you can read this book, I read this book. This book allows you to read one page and then ponder it for a week. It's a story, but at the same time it is not so much a story as a psychological insight into humans. I will probably open this book again several times this year to discover even more about the meaning of a single page.

Brian W. Kernighan - Understanding the digital world (2021)

Kernighan is famous for co-authoring "The C Programming Language" with Dennis Ritchie in 1978, and I have always looked up to him.

In this book he gives an excellent overview of computers and networks, including an easy-to-read introduction to programming, cryptography and (digital) privacy. I would not advise this book for IT nerds, since it is way too simple. It is, though, as good an overview as is possible in 260 pages.

David Kushner - Masters of Doom (2003)

Well this was an excellent read! Enjoyable, intriguing, educational and probably only for fans of Doom or of John Carmack.

The book tells the story of the two Johns who created the DooM game in the early nineties. David Kushner interviewed a lot of people to get a complete picture of their youth, their first meeting, the development of Commander Keen, Wolfenstein 3D and of course DooM, and also many games after that, like Quake and Heretic.

This book is a nice combo with Sid Meier's Memoir.

Sid Meier's Memoir (2020)

I wanted to link to my tiny review of this book, which I read in 2021, but it turns out I have not written anything about it yet.

The story of the creation of the civilization series of games is a really good read, though probably only if you lived in this era and played some of the early Civilization games. Or earlier 'versions' like Empire or Empire Deluxe, which are mentioned in this book for serving as inspiration for the first Civilization game.

I like the 4X turn-based system of gaming, too bad there are almost no other good games using this (Chess comes close though).

January 10, 2023

At the beginning of each year, I publish a retrospective that looks back on the prior year at Acquia. I write these both to reflect on the year, and to keep a record of things that happen at the company. I've been doing this for the past 14 years.

If you'd like to read my previous retrospectives, you can find them here: 2021, 2020, 2019, 2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009.

Reflecting on my retrospectives

As Acquia has grown, it has become increasingly challenging for me to share updates on the company's progress in a way that feels authentic and meaningful.

First, I sometimes feel there are fewer exciting milestones to report on. When a company is small and in startup mode, each year tends to bring significant changes and important milestones. As a company transitions to steady growth and stability, the rate of change tends to slow down, or at least, feel less newsworthy. I believe this to be a natural evolution in the lifecycle of a company and not a negative development.

Second, while it was once exciting to share revenue and headcount milestones, it now feels somewhat self-indulgent and potentially off-putting. In light of the current global challenges, I want to be more mindful of the difficulties that others may be experiencing.

I've also been surprised and humbled by the thorough reading of my retrospectives by a wide range of audiences, including customers, partners, investors, bankers, competitors, former Acquians, and industry analysts.

Because of that I have come to realize that my retrospectives have gradually transformed from a celebration of rapid business milestones to something more akin to a shareholder letter — a way for a company to provide insight into its operations, progress, and future plans. As a business student, I have always enjoyed reading the annual shareholder letters of great companies. Some of my favorite shareholder letters offer a glimpse into the leadership's thinking and priorities.

As I continue to share my thoughts and progress through these annual retrospectives, I aim to strike a balance between transparency, enthusiasm, and sensitivity to the current environment. It is my hope that my updates provide valuable insights and serve as historical markers on the state of Acquia each year. Thank you for taking the time to read them.

Acquia business momentum

When writing my 2022 update, it's impossible to ignore the macroeconomic situation.

The year 2022 was marked by inflation, tighter monetary conditions, and conflict in Europe. This caused asset price bubbles to deflate. While this led to lower valuations and earnings multiples for technology companies, corporate earnings generally held up relatively well in comparison.

Looking ahead to 2023, Jerome Powell and the Federal Reserve are expected to keep their foot on the brake. They will keep interest rates high to tame inflation. Higher interest rates translate to slower growth and margin compression.

In anticipation of lower growth and margin compression, many companies reduced their forecasts for 2023 and adjusted their spending. This has resulted in layoffs across the technology sector.

I have great empathy for those who have lost their jobs or businesses as a result. It is a difficult and unsettling time, and my thoughts are with those affected.

I share these basic macroeconomic trends because they played out at Acquia as well. Like many companies, we adjusted our spending throughout the year. This allowed us to exit 2022 with the highest gross margins and EBITDA margins in our history. If the economy slows down in 2023, Acquia is well-positioned to weather the storm.

That said, we did continue our 16-year, unbroken revenue growth streak in 2022. We saw a healthy increase in our annual recurring revenue (ARR). We signed up many new customers, but also renewed existing customers at record-high rates.

I attribute Acquia's growth to the fact that demand for our products hasn't slowed. Digital is an essential part of most organizations' business strategies. Acquia's products are a "must have" investment even in a tough economy, rather than a "nice to have".

In short, we ended the year in our strongest financial position yet – both at the top line and the bottom line. It feels good to be a growing, profitable company in the current economic climate. I'm grateful that Acquia is thriving.

Can you clarify exactly what Acquia does?

A screenshot of a slide with Acquia's vision and mission statement. Acquia's vision and mission statement has been unchanged for the last 8+ years.

Frequently, I am asked: "Can you clarify exactly what Acquia does?" Often, people have only a general understanding of our business. They vaguely know it has something to do with Drupal.

In this retrospective, I thought it would be helpful to illustrate what we do through examples of our work with customers in 2022.

Our core business is assisting organizations in building, designing, and maintaining their online presence. We specialize in serving customers with high-traffic websites, many websites, or demanding security and compliance requirements. Our platform scales to hundreds of millions of page views and has the best security of any Drupal hosting platform in the world (e.g. ISO-27001, PCI-DSS, SOC1, SOC2, IRAP, CSA STAR, FedRAMP, etc).

For example, we worked closely with Bayer throughout 2022. Bayer Consumer Health had 400+ sites on a proprietary platform coming to the end of its life, and needed to replatform those sites in a very tight timeframe. Bayer rebuilt these sites in Drupal and migrated its operations to Acquia Cloud in the span of 1.5 years. At the peak of the project, Bayer migrated 100 sites a month. Bayer reduced time to market by 40%, with new sites taking two to three weeks to launch. Organic traffic increased from 4% to 57%. Visit durations have increased 13% and the bounce rate has dropped 19%. As a result of the replatform, Bayer has realized a $15 million efficiency in IT and third-party costs over three years.

For many organizations, the true challenge in 2023 is not redesigning or replatforming their websites (using the latest JavaScript framework), or even maintaining their websites. The more impactful business challenge is to create better, more personalized customer experiences. Organizations want to deliver highly relevant customer experiences not only on their websites, but across other channels such as mobile, email, and social.

The issue that organizations face is that they don't have a deep enough understanding of their customers to make experiences highly relevant. While they may collect a large amount of data, they struggle to use it. Customer data is often scattered across various systems: websites, commerce platforms, email marketing tools, in-store point of sale systems, call center software, and more. These systems are often siloed and do not allow for a comprehensive understanding of the customer.

This is a second area of focus for Acquia, and one that has been growing in importance. Acquia helps these organizations integrate all of these systems and create a unified profile for each customer. By breaking down the data silos and establishing a single, reliable source of information about each customer, we can better understand a customer's unique needs and preferences, leading to significantly improved customer experiences.

In 2022, outdoor retailer Sun & Ski Sports used Acquia's Customer Data Platform (CDP) to drive personalized customer experiences across a variety of digital channels. Acquia CDP unified their customer data into a single source of truth, and helped Sun & Ski Sports recognize subtle but impactful trends in their customer data that they couldn't identify before. The result is a 1,500% increase in response rate for direct mail efforts, a 1,100% improvement in incremental net profit on direct mail pieces, a 200% increase in paid social clickthrough rates, and a 25% reduction in cost per click for paid social efforts.

A third area of focus for Acquia is helping our customers manage their digital content more broadly, beyond just their website content. For example, Autodesk undertook a global rebrand. Their goal was to go from a house of brands to a branded house. This meant Autodesk had to update many digital assets such as images and videos across websites, newsletters, presentations, and even printed materials. Using Acquia DAM, Autodesk redesigned the process of packaging, uploading, and releasing their digital assets to their 14,000 global employees, partners, and the public.

A diagram that shows how content, data, experience orchestration, and experience delivery need to be unified across different systems such as content management systems, email marketing platforms, commerce solutions and more. Experiences need to be personalized and orchestrated across all digital channels. To deliver the best digital experiences, user profile data needs to be unified across all systems. To make experience delivery easy, all content and digital assets need to be reusable across channels. Last but not least, digital experiences need to be delivered fast, securely, and reliably across all digital touchpoints.

When people ask me what Acquia does, they usually know we do something with Drupal. But they often don't realize how much more we do. Initially known exclusively for our Drupal-in-the-cloud offering, we have since expanded to provide a comprehensive Open Digital Experience Platform. This shift has allowed us to offer a more comprehensive solution to help our customers be digital winners.

Customer validation

At our customer and partner conference, Acquia Engage, we shared the stage with some of our customers, including ABInBev, Academy Mortgage, the American Medical Association, Autodesk, Bayer, Burn Boot Camp, Japan Airlines, Johnson Outdoors, MARS, McCormick, Novavax, Omron, Pearl Musical Instruments, Phillips 66, Sony Pictures, Stanley Black & Decker, and Sun & Ski Sports. Listening to their success stories was a highlight of the year for me.

Acquia customer logos, including Cigna, Novartis, Lowe's, Barnes & Noble, Best Buy, Panasonic and more. Some of Acquia's customers in 2022: Cigna, Novartis, Lowe's, Barnes & Noble, Best Buy, Panasonic and more.

I'm also particularly proud of the fact that in 2022, our customers recognized us across two of the most popular B2B software review sites, TrustRadius and G2. Acquia earned Top Rated designation on TrustRadius four times over, and was named a Leader by G2 users in 20 areas.

G2 and TrustRadius badges recognize Acquia products based on functionality and market presence. Customers and partners rated our products very highly on G2 and TrustRadius.

The results prove that customers want what we're building, which is a great feeling. I'm proud of what our team has accomplished.

Analyst validation

We also received validation on our product strategy from some of the industry's top analyst firms, including "Leader" placements in The Forrester Wave: Digital Asset Management For Customer Experience, Q1 2022 Report, as well as in the 2022 Gartner Magic Quadrant for Digital Experience Platforms.

In addition, Acquia advanced to become a "Leader" in the Omdia Digital Experience Management Universe Report, was named an "Outperformer" in the GigaOm Radar Report for DXPs, and was included on Constellation Shortlists across seven categories, including DXP, CMS, CDP, and DAM.

Analyst reports are important to our customers and prospects because these firms take into account customer feedback, a deep understanding of the market, and each vendor's product strategy and roadmap.

Executing on our product vision

In October, I published a Composable Digital Experience Manifesto, a comprehensive 3,500-word update on Acquia's product strategy.

In this manifesto, I outline the growing demand for agility, flexibility, and speed in the enterprise sector and propose a solution in the form of Composable Digital Experience Platforms (DXPs).

Composable DXPs allow organizations to quickly assemble solutions from pre-existing building blocks, often using low-code or no-code interfaces. This enables organizations to adapt to changing business needs.

In my manifesto, I outline six principles that are essential for a DXP to be considered "composable", and I explain how Acquia is actively investing in research and development in alignment with these principles.

A summary of Acquia's current R&D investments in relation to these principles includes:

  • Component discovery and management tools. This includes tools to bundle components in pre-packaged business capabilities, CI/CD pipelines for continuous integration of components, and more.
  • Low-code / no-code tooling to speed up experience building. No-code tools also enable all business stakeholders to participate in experience creation and delivery.
  • Headless and hybrid content management so content can be delivered across many channels.
  • A unified data layer on which content personalization and journey orchestration tools operate.
  • Machine learning-based automation tools to tailor and orchestrate experiences to people's preferences.
  • Services that streamline the management and sharing of content across an organization.
  • Services to manage a global portfolio of diverse sites and deliver experiences at scale.

For a more detailed explanation of these concepts, I encourage you to read my manifesto at

While there are six principles, there are four fundamental blocks to creating and delivering exceptional digital customer experiences: (1) managing content and digital assets, (2) managing user data, (3) personalized experience orchestration using machine learning, and (4) multi-experience composition and delivery across digital touchpoints. In the next sections, I'll talk a bit more about each of these four.

A diagram that shows how user data and content can be used to deliver the best next experience.

Managing content and digital assets

Given the central role of content in any digital experience, content management is a core capability of any DXP. Our content management capabilities are centered around Drupal and Acquia DAM.

Drupal has long been known for its ability to effectively manage and deliver content across various channels and customer touchpoints. In 2022, Drupal made significant strides with the release of Drupal 10, a major milestone that has taken 22 years to achieve.

The Drupal 10 release was covered by numerous articles on the web so I won't repeat the main Drupal 10 advancements or features in my retrospective. I do want to highlight Acquia's contributions to its success. Acquia was founded with the goal of supporting Drupal and, 16 years later, we continue to fulfill that mission as the largest corporate contributor to Drupal 10. I am proud of the work we have done to support the Drupal community and the release of Drupal 10.

I also wanted to provide an update on Acquia DAM. In 2021, Acquia acquired Widen, a digital asset management platform (DAM), and rebranded it as Acquia DAM. In 2022, we integrated it into both Drupal and Acquia's products.

Providing our customers with strong content management capabilities is essential to help them create great digital experiences. The integration of Widen marks an important step towards achieving our vision and implementing our strategy.

The combination of Drupal and Acquia DAM allows us to assist our customers in managing a wide range of digital content, including website content, images, videos, PDFs, and product information.

Managing user data

We have been investing in experience personalization since 2013, and over the past eight years, the market has continued to move toward data-driven, personalized experiences. IDC predicts that the Customer Data Platform (CDP) market will grow about 20% per year, reaching $3.2 billion by 2025.

This growth is driven by significant market changes, such as the end of browser support for third-party cookies and the necessity for marketers to create personal customer experiences that are compliant with privacy regulations. The use of first-party data has become increasingly crucial. Acquia CDP is helping our customers with exactly this, and it's the perfect tool for this moment in time.

In order to deliver great customer experiences, it is essential that our customers have a single, accurate source of truth for customer data and complete user profiles. To achieve this, we made the decision to replatform our marketing products on Snowflake throughout 2022, a major architectural undertaking.

With Snowflake serving as our shared data layer, our customers now have easy and direct access to all of their customer data. This allows them to leverage that data across multiple applications within the Acquia product portfolio and connect to other relevant applications as part of their complete digital experience platform (DXP) solutions. I believe that our shared data layer sets us apart in the market and gives us a competitive advantage.

Personalized experience orchestration using machine learning

In 2022, we also focused on establishing integrations between Acquia CDP and other systems, and we added to the extensive set of machine learning models available for use with Acquia CDP. These improvements provide non-technical marketers, or "citizen data scientists", with more options for understanding and leveraging their customer data.

Our machine learning models allow our customers to use predictive marketing to improve and automate their marketing and customer engagement efforts. In 2022, we delivered over 1 trillion machine learning predictions. These predictions help our customers identify the best time and channel for engagement, as well as the most effective content to use, and more.

One of the primary reasons that companies purchase CDPs is to respect their customers' privacy preferences. A major benefit of a CDP is that it helps customers comply with regulations such as GDPR and CCPA. In June, we enhanced Acquia CDP to make it even easier for our customers to comply with subject data requests and privacy laws.

Multi-experience composition and delivery across digital touchpoints

According to a Gartner report from Q4 2022, global end-user spending on public cloud services is expected to increase by 20.7% in 2023, reaching a total of $591.8 billion.

The growth of Open Source software has paralleled the adoption of cloud technology. In 2022, developers initiated 52 million new Open Source projects on GitHub and made over 413 million contributions to existing Open Source projects. 90% of companies report using Open Source code in some manner.

Drupal is one of the most widely used Open Source projects globally and Acquia operates a large cloud platform. Acquia has been a pioneer in promoting both Open Source software and cloud technology. The combination of Open Source and cloud has proven to be particularly powerful in enabling innovation and digital transformation.

At Acquia, we strongly believe that experience creation and delivery will continue to move to Open Source and the cloud for many more years. The three largest public cloud vendors — Amazon, Microsoft, and Google — all reported annual growth between 25% and 40% in Q3 of 2022 (Q4 results are not public yet).

The past few years, our cloud platform has undergone a major technical upgrade called Acquia Cloud Next, modernizing Acquia Cloud using a cloud-native, Kubernetes-based architecture. In 2022, we made significant progress on Acquia Cloud Next, with many customers transitioning to the platform. They have experienced exceptional levels of performance, self-healing, and dynamic scaling for their large websites thanks to Acquia Cloud Next.

In 2022, we also introduced a new product called Acquia Code Studio. In partnership with GitLab, we offer a fully managed continuous integration/continuous delivery (CI/CD) pipeline optimized for Drupal. I have been using Acquia Code Studio myself and have documented my experience in deploying Drupal sites with Acquia Code Studio. In my more than 20 years of working with Drupal, I believe that Acquia Code Studio offers the best Drupal development and deployment workflow I have encountered.


A key part of delivering on our vision is acquisitions. One disappointment for me was that we were unable to complete any acquisitions in 2022, despite our eagerness to do so. We looked at various potential acquisition targets, but ultimately didn't move forward with any. We remain open to and enthusiastic about acquiring other companies to become part of the Acquia family. This will continue to be a key focus for me in 2023.

The number one trend to watch: AI tools

The ever-growing amount of content on the internet shows no signs of slowing down. With the advent of "AI creation tools" like ChatGPT (for text creation) and DALL·E 2 (for image creation), the volume of content will only increase at an accelerated rate.

It is clear that generative AI will be increasingly adopted by marketers, developers, and other creative professionals. It will transform the way we work, conduct research, and create content and code. The impact of this technology on various industries and fields is likely to be significant.

Initially, AI tools like ChatGPT and DALL·E may present opportunities for Drupal and Acquia DAM. As the volume of content continues to increase, the ability to effectively manage and leverage that content becomes even more important. Drupal and Acquia DAM specialize in the management of content after it has been created, rather than in the creation process itself. As such, these tools may complement our offerings and help our customers better handle the growing volume of content. Those in the business of creating content, rather than managing content, are likely to face some disruption in the years ahead.

In the future, ChatGPT and similar AI tools may pose a threat to traditional websites as they are able to summarize information from various websites, rather than directing users to those websites. This shift could alter the relative importance of search engines, websites, SEO optimization, and more.

At Acquia, we will need to consider how these AI tools fit into our product strategy and be mindful of the ethical implications of their use. I will be closely monitoring this trend and plan to write more about it in the future.

Give back more

In the spirit of our "Give Back More" values, the Acquia team sponsored 200 kids in the annual Wonderfund Gift Drive, contributing gifts both in person and online. We also donated to The Pedigree Foundation and UNICEF. Acquians also participated in collective volunteerism such as the Urmi Project (India), Camp Mohawk (UK), and at Cradles to Crayons (Boston).

Acquia also launched its own Environmental, Social, and Governance (ESG) stewardship as a company and joined the Vista Climate Pledge. This is important, not just for me personally, but also for many of our customers, as it aligns with their values and expectations for socially responsible companies.


In some respects, 2022 was more "normal" than the previous few years, as I had the opportunity to reconnect with colleagues, Open Source contributors, and customers at various in-person events and meetings.

However, in other ways, 2022 was difficult due to the state of the economy and global conflict.

While 2022 was not without its challenges, Acquia had a very successful year. I'm grateful to be in the position that we are in.

Of course, none of our results would be possible without the hard work of the Acquia team, our customers, our partners, the Drupal and Mautic communities, and more. Thank you!

I am not sure what 2023 will bring but I wish you success and happiness in the new year.

January 02, 2023

Due to its decentralized nature, Mastodon can put a significant extra load on your site if someone posts a link to it. Every Mastodon instance where the post is seen (which can be 1, but also 100 or 1000 or …) will request not only the page but subsequently also the oEmbed JSON object, to be able to show a preview in Mastodon. The page requests should not be an issue, as you surely have page caching...


January 01, 2023

December 26, 2022

For me, 2022 was the year of Bluetooth. [1] With all the talk about Matter, the one protocol to connect all home automation devices, this can sound strange. However, it will take years for Matter to become adopted, and in the meantime Bluetooth devices are everywhere.

I wrote a book about Bluetooth

Elektor International Media published my book Develop your own Bluetooth Low Energy Applications for Raspberry Pi, ESP32 and nRF52 with Python, Arduino and Zephyr this year. Why did I decide to write a book about Bluetooth? It comes down to a unique combination of accessibility, ubiquity and easy basics of the technology.

Bluetooth Low Energy (BLE) is one of the most accessible wireless communication standards. There's no cost to access the official BLE specifications. Moreover, BLE chips are cheap, and the available development boards (based on an nRF5 or ESP32) and Raspberry Pis are quite affordable. [2] This means you can just start with BLE programming at minimal cost.

On the software side, BLE is similarly accessible. Many development platforms, most of them open source, offer an API (application programming interface) to assist you in developing your own BLE applications. The real-time operating system Zephyr is a powerful platform to develop BLE applications for Nordic Semiconductor nRF5 or equivalent SoCs, with Python and Bleak it's easy to decode BLE advertisements, and NimBLE-Arduino makes it possible to create powerful firmware such as OpenMQTTGateway for the ESP32.

Another important factor is that BLE radio chips are ubiquitous. You can find them in smartphones, tablets, and laptops. This means that all those devices can talk to your BLE sensors or lightbulbs. Most manufacturers create mobile apps to control their BLE devices, which you can reverse engineer (as I explain in one of the chapters of my book).

You can also find BLE radios in many single-board computers, such as the Raspberry Pi, and in popular microcontroller platforms such as the ESP32. This makes it quite easy for you to create your own gateways for BLE devices. And platforms such as the Nordic Semiconductor nRF5 series of microcontrollers with BLE radio even make it possible to create your own battery-powered BLE devices.

Last but not least, while Bluetooth Low Energy is a complex technology with a comprehensive specification, getting started with the basics is relatively easy. I hope my book contributes to this by explaining the necessary groundwork and showing the right examples to create your own BLE applications.


I contributed to the Theengs project

I have been using OpenMQTTGateway at home for some time, which is a gateway for various wireless protocols that you can install on an ESP32 or other devices. This year OpenMQTTGateway spun out their BLE decoder to a separate project, Theengs Decoder. This is an efficient, portable and lightweight C++ library for BLE payload decoding.

I contributed to the Theengs Decoder project with some decoders for the following devices:

The Theengs project also created a gateway that you can run on a Linux machine such as a Raspberry Pi, Theengs Gateway. This leverages the same Theengs Decoder as OpenMQTTGateway. I quickly adopted this solution as an alternative to bt-mqtt-gateway (which I contributed to with RuuviTag support earlier), and I started contributing to the project. Amongst others, I:

I also started Theengs Explorer under the Theengs umbrella. This is a text user interface to discover BLE devices and show their raw advertisement data and the data as decoded by Theengs Decoder. This project is still in early development, because I wrote this using a pre-0.2 release of Textual, and I still have to rewrite it.


I wrote some Python packages for Bluetooth

Outside the Theengs project, I created two Python packages related to Bluetooth this year. After I struggled with updating the Theengs Explorer code base to the new Textual 0.2 release, I decided to start from scratch with a 'simple' Bluetooth Low Energy scanner. This became HumBLE Explorer, which is a cross-platform, command-line and human-friendly Bluetooth Low Energy scanner, looking like this:


Textual is quite neat. It lets you create interactive applications for the terminal, with widgets such as checkboxes and input fields, a CSS-like layout language, and even mouse support. Moreover, it runs on Linux, Windows and macOS. Although I'm personally only using Linux at home, I find it important that my applications are cross-platform. That's also the reason why this application, as well as all my Bluetooth work with Python, is based on Bleak, which supports the same three operating systems.

Now that HumBLE Explorer is working, I'll revisit Theengs Explorer soon, and update it to the new Textual version with the knowledge that I gained.

A second Bluetooth project that I have been working on this year, even before HumBLE Explorer or Theengs Explorer, is bluetooth-numbers. It's a Python package with a wide set of numbers related to Bluetooth, so Python projects can easily use these numbers. The goal of this project is to provide a shared resource so various Python projects that deal with Bluetooth don't have to replicate this effort by rolling their own database and keeping it updated.

Luckily Nordic Semiconductor already maintains the Bluetooth Numbers Database for Company IDs, Service UUIDs, Characteristic UUIDs and Descriptor UUIDs. My bluetooth-numbers package started as a Python wrapper around this project, by generating Python modules with these data. In the meantime, I extended the package with some SDO Service UUIDs and Member Service UUIDs I extracted from the Bluetooth Assigned Numbers document (but which I'll probably upstream to the Bluetooth Numbers Database), as well as the IEEE database of OUIs for prefixes of Bluetooth addresses.

So you now can install the package from PyPI with pip:

pip install bluetooth-numbers

Then you can get the description of a company ID in your Python code:

>>> from bluetooth_numbers import company
>>> company[0x0499]
'Ruuvi Innovations Ltd.'

Get the description of a service UUID:

>>> from bluetooth_numbers import service
>>> from uuid import UUID
>>> service[0x180F]
'Battery Service'
>>> service[UUID("6E400001-B5A3-F393-E0A9-E50E24DCCA9E")]
'Nordic UART Service'

Get the description of a characteristic UUID:

>>> from bluetooth_numbers import characteristic
>>> from uuid import UUID
>>> characteristic[0x2A37]
'Heart Rate Measurement'
>>> characteristic[UUID("6E400002-B5A3-F393-E0A9-E50E24DCCA9E")]
'UART RX Characteristic'

Get the description of a descriptor UUID:

>>> from bluetooth_numbers import descriptor
>>> descriptor[0x2901]
'Characteristic User Descriptor'

Get the description of an OUI:

>>> from bluetooth_numbers import oui
>>> oui["58:2D:34"]
'Qingping Electronics (Suzhou) Co., Ltd'
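As background for the lookups above: the 16-bit service and characteristic IDs are shorthand for full 128-bit UUIDs built on the Bluetooth Base UUID (0000xxxx-0000-1000-8000-00805F9B34FB). The following stdlib-only sketch shows that expansion; the `expand_16bit` helper is purely illustrative and not part of the bluetooth-numbers API:

```python
from uuid import UUID

# Bluetooth Base UUID with the 16-bit assigned number slotted in
BASE_UUID_FMT = "0000{:04x}-0000-1000-8000-00805f9b34fb"

def expand_16bit(short_id: int) -> UUID:
    """Expand a 16-bit Bluetooth assigned number to its full 128-bit UUID."""
    if not 0 <= short_id <= 0xFFFF:
        raise ValueError("expected a 16-bit value")
    return UUID(BASE_UUID_FMT.format(short_id))

print(expand_16bit(0x180F))  # Battery Service → 0000180f-0000-1000-8000-00805f9b34fb
```

This is why indexing `service` or `characteristic` works with either a bare 16-bit integer or a full `UUID` object: both ultimately name the same 128-bit identifier.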

I'm using bluetooth-numbers in HumBLE Explorer and Theengs Explorer to show human-readable descriptions of these numbers in the interface. I hope that other Python projects related to Bluetooth will adopt the package too, to prevent everyone from having to keep their own numbers database updated.

I'm keeping the package updated with its various sources, and it has a test suite with 100% code coverage. Contributions are welcome. The API documentation shows how to use it.

Bluetooth developments in Home Assistant and ESPHome

Although I'm personally advocating an MQTT-based approach to home automation, I'm a big fan of Home Assistant and ESPHome because they share my vision of home automation and make it rather easy to use. Both open-source home automation projects had some big improvements in their Bluetooth support this year.

When my book Getting Started with ESPHome: Develop your own custom home automation devices was published last year, BLE support in ESPHome was still quite limited. It only supported reading BLE advertisements, but not connecting to BLE devices. Support for using ESPHome as a BLE client was only added after the book was published.

The biggest BLE addition in 2022 was Bluetooth Proxy in ESPHome 2022.8. This allows you to use your ESP32 devices with ESPHome firmware as BLE extenders for Home Assistant. Each ESPHome Bluetooth proxy device forwards the BLE advertisements it receives to your Home Assistant installation. By strategically placing some devices in various places at home, you can expand the Bluetooth range of your devices this way. [3]

Starting from ESPHome 2022.9, the Bluetooth proxy also supports active connections: it lets Home Assistant connect to devices that are out of reach of your home automation gateway, as long as one of your Bluetooth proxy devices is in reach of the device you want to connect to.

This feature was joined by a brand new Bluetooth integration in Home Assistant 2022.8, with automatic discovery of new devices and the ability to push device updates. Home Assistant 2022.9 then added support for ESPHome's Bluetooth proxies, and Home Assistant 2022.10 extended this support to active connections.

Another interesting initiative coming from the Home Assistant project is BTHome, a new open standard for broadcasting sensor data over BLE. Raphael Baron's open-source soil moisture sensor b-parasite already adopted the BTHome standard, as did the custom firmware ATC MiThermometer for various Xiaomi temperature sensors. With the BTHome integration in Home Assistant 2022.9, devices advertising in this format will be automatically discovered in your Home Assistant installation.

BTHome's data format is documented in detail, with various data types supported. There's even support for encryption, using AES in CCM mode with a pre-shared key. The bthome-ble project implements a parser for BTHome payloads you can use in your own Python projects. I applaud the initiative to create an open standard for BLE sensor advertisements, and I hope that many open-source devices will adopt BTHome. I will definitely use the format if I create a BLE broadcaster instead of coming up with my own data format.
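To give a feel for how simple the format is, here is a minimal, illustrative parser for unencrypted BTHome v1 measurement objects. It assumes the layout I describe in the comments (format bits plus length in the first byte, then the object id, then a little-endian value), handles only plain integer values, and leaves scaling factors (e.g. 0.01 for temperature) to the caller; the `parse_bthome_v1` name is mine, and the bthome-ble project is the complete implementation:

```python
def parse_bthome_v1(payload: bytes) -> list[tuple[int, int]]:
    """Parse (object_id, raw_value) pairs from a BTHome v1 service-data payload.

    Assumed layout per measurement object:
      byte 0: top 3 bits = value format (0 = uint, 1 = sint),
              low 5 bits = length of the rest of the object
              (type byte + value bytes)
      byte 1: object id (e.g. 0x02 = temperature, factor 0.01)
      bytes 2..: little-endian value
    """
    measurements = []
    i = 0
    while i < len(payload):
        obj_len = payload[i] & 0x1F          # type byte + value bytes
        signed = (payload[i] >> 5) == 1      # sint vs uint
        obj_id = payload[i + 1]
        raw = int.from_bytes(payload[i + 2:i + 1 + obj_len], "little", signed=signed)
        measurements.append((obj_id, raw))
        i += 1 + obj_len                     # skip to the next object
    return measurements

# 0x23 0x02 0xC4 0x09 → temperature object (0x02), raw 2500, i.e. 25.00 °C
print(parse_bthome_v1(bytes([0x23, 0x02, 0xC4, 0x09])))
```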

I also found it interesting to see that Home Assistant decided to move to Bleak as their BLE library. They even sponsored the lead developer David Lechner to implement passive scanning in the Linux backend. This benefits the broader open-source community and allowed me to add passive scanning support to Theengs Gateway, Theengs Explorer and HumBLE Explorer. With Home Assistant as a big user of Bleak, we'll surely see it improving even more. And Bleak is already the best Python library for BLE...


What's next for BLE in 2023?

I expect that Bluetooth will still remain an important technology in 2023 and further, because I don't see anything changing about the unique combination of accessibility, ubiquity and easy basics. So I will keep contributing to the Theengs project and developing my own Bluetooth projects.

I still have a couple of BLE sensors at home that aren't supported yet by Theengs Decoder, and I'd like to change that! If you have a device that isn't on the list of supported devices, why don't you try adding a decoder? You don't need to be a developer to do this, as the decoders are specifications of the advertisement format in a JSON file.


When I write "Bluetooth", I mean "Bluetooth Low Energy", which is a radical departure from the original Bluetooth standard, now called Classic Bluetooth. Bluetooth Low Energy has been part of the Bluetooth standard since Bluetooth 4.0 (2010).


At least before the Raspberry Pis started getting out of stock because of supply chain issues.


Theengs Gateway also supports this type of expanding BLE range, with various ESP32 devices running OpenMQTTGateway scattered around your home. This is called MQTTtoMQTT decoding.

December 24, 2022

The blog has a new layout. Some of the most important changes:

  • Much smaller logo. The logo was taking up waaaaay too much space.
  • The thumbnails have a shadow on the main page.
  • I hope that the font is easier to read. I might tweak this later.
  • Less clutter in the sidebar!
  • The social links have moved to the Contact page.
  • The top menu is rearranged a bit.
  • The blog archive displays the full article, not just an excerpt.
  • Infinite scroll! I don’t know yet if I like it, I might change it later.
  • The blog archive has 2 columns. Again, I’m not sure about this, might change it later. Feedback is welcome, leave a comment!
  • The most recent post is displayed full width.
  • On individual posts the thumbnail image is now the background of the title.
  • I’m still not entirely happy that the author is shown at the bottom of each blog post. I’m the only author here, so that’s useless, but I have not yet found how to remove that. EDIT: fixed with some extra CSS. Thanks for the tip, Frank!

Do you have any suggestions or comments on the new layout?

December 23, 2022

Navigated to OpenAI, an artificial intelligence (AI) research platform:

I asked OpenAI to write a letter to my wife begging her to make orange cookies.

Twenty-four hours later:

A baking sheet holds freshly baked orange cookies.

What an incredible time to be alive. Happy holidays!

December 22, 2022

 It took a while, but here are three books I've read.

Kevin Mitnick - The Art of Invisibility (2017)

I have been interested in Kevin Mitnick since his hacking exploits in the 1980s. He is mentioned in several books that I read in the past.

This book is about all the personal data one leaves behind when using the internet (or devices connected to the internet). It is a rather simple guide on how to increase your privacy or how to aim to become almost invisible on the internet.

I would not recommend this book for nerds or for IT security experts; the book is probably not aimed at those people anyway. People who know close to nothing about computers/networks could benefit from this guide.

Robert M. Pirsig - Zen and the art of motorcycle maintenance (1974)

Now this is an interesting read. It is not about motorcycles; in fact, it is perfectly fine if you replace the word 'motorcycle' with 'smartphone', 'computer', or 'internet privacy and tracking' in the first half of the book. There are some very insightful comments about how people react in different ways to technology.

The main part of this book, though, is not the motorcycle trip with his son through the United States, but his alter ego named Phaedrus. The writer was diagnosed with catatonic schizophrenia and received electroshock treatment. In the book he remembers fragments of this Phaedrus personality (with an IQ of 170).

The second half is a tough read. I often needed to reread several sentences to grasp what he was saying. (Which was never the case in the Mitnick book above.)

Prusa 3D printing handbook

I finally bought a 3D printer and would recommend it to anyone who has lots of time. Yes, this is not yet ready for end users who only want a 'File - Print' option on the computer. But a whole new world opens up; at least for me it's a new world with many possibilities and adventures.

It's a small booklet, but I mention it anyway because it is really good, as are all the follow-up guides on


Acquia Cloud was first launched in 2009 as Acquia Hosting. Acquia was one of the earliest adopters of AWS. At the time, AWS had only 3 services: EC2, S3, and SimpleDB.

A lot has changed since 2009, which led us to re-architect Acquia Cloud starting in late 2019. This effort, labeled "Acquia Cloud Next" (ACN), became the largest technology overhaul in Acquia's history.

In 2013, four years after the launch of Acquia Cloud, Docker emerged. Docker popularized a lightweight container runtime, and a simple way to package, distribute and deploy applications.

Docker was built on a variety of Linux kernel developments, including "cgroups", "user namespaces" and "Linux containers":

  1. In 2006, Paul Menage (Google) contributed generic process containers to the Linux kernel, which was later renamed control groups, or cgroups.
  2. In 2008, Eric W. Biederman (Red Hat) introduced user namespaces. User namespaces allow a Linux process to have its own set of users, and in particular, allow root privileges inside process containers.
  3. In 2008, IBM created the Linux Containers Project (LCP), a set of tools on top of cgroups and user namespaces.

Docker's focus was to deploy containers on a single machine. When organizations started to adopt Docker across a large number of machines, the need for a "container orchestrator" became clear.

Long before Docker was born, in the early 2000s, Google famously built its search engine on commodity hardware. Where competitors used expensive enterprise-grade hardware, Google realized that they could scale faster on cheap hardware running Linux. This worked as long as their software was able to cope with hardware failures. The key to building fault-tolerant software was the use of containers. To do so, Google not only contributed to the development of cgroups and user namespaces, but they also built an in-house, proprietary container orchestrator called Borg.

When Docker exploded in popularity, engineers involved in the Borg project branched off to develop Kubernetes. Google open sourced Kubernetes in 2014, and in the years following, Kubernetes grew to become the leading container management system in the world.

Back to Acquia. By the end of 2019, Acquia Cloud's infrastructure was delivering around 35 billion page views a month (excluding CDN). Our infrastructure had grown to tens of thousands of EC2 instances spread across many AWS regions. We supported some of the highest trafficked events in the world, including coverage of the Olympics, the Australian Open, the Mueller report, and more.

Throughout 2019, we rolled out many "under the hood" improvements to Acquia Cloud. Thanks to these, our customers' sites saw performance improvements anywhere from 30% to 60%, at no cost to them.

That said, it became harder and harder to make improvements to the existing platform. Because of our scale, it could take weeks to roll out improvements to our fleet of EC2 instances. It was around that time that we set out to re-architect Acquia Cloud from scratch.

Acquia's journey to ACN started prior to Kubernetes and Docker becoming mainstream. Our initial approach was based on cgroups and Linux containers. But as Kubernetes and Docker established themselves in the market, it became clear we had to pivot. We decided to design ACN from the ground up to be a cloud-native, Kubernetes-native platform.

In March of 2021, after a year and a half of development, my little blog was the first site to move to ACN. Getting my site live in production was a fun rallying point for our team. Even more so because my site was also the first site to launch on the original Acquia Hosting platform.

I never blogged about ACN because I wanted to wait until enough customer sites had upgraded. Fast forward another year and a half, and a large number of customers are running on ACN. We now have some of our highest traffic customers running on ACN. I can say without a doubt that ACN offers the highest levels of performance, self-healing, and dynamic scaling that Acquia customers have relied on.

ACN continuously monitors application performance, detects failures, reroutes traffic, and scales websites automatically without human assistance. ACN can handle billions of pageviews, gracefully deals with massive traffic spikes, all without manual intervention or architectural changes. Best of all, we can roll out new features in minutes or hours instead of weeks.

There is no better way to visualize this than by sharing a chart:

The "web transaction times" of a large Fortune 2000 customer that upgraded their main website to Acquia Cloud Next. You can see that PHP (blue area), MySQL (dark yellow area) and Memcached (light yellow) became both much faster and less volatile after migrating to Acquia Cloud Next. (The graph is generated by New Relic. New Relic defines the "web transaction time" as the time between when the application receives a HTTP request and when a HTTP response is sent.)

Customers on Acquia Cloud Next get:

  1. Much faster page performance and web transaction times (see chart above)
  2. 5x faster databases compared to traditional MySQL server deployments
  3. Faster dynamic auto-scaling and faster self-healing
  4. Improved resource isolation - Nginx, Memcached, Cron, and other services all run in dedicated pods

To achieve these results, we worked closely with our partner, AWS. We pushed the boundaries of certain AWS services, including Amazon Elastic File System (EFS), Amazon Elastic Kubernetes Service (EKS), and Amazon Aurora. For example, AWS had to make changes to EKS to ensure that they could meet the scale at which we were growing. After 15 years of working with AWS, we continue to be impressed by AWS' willingness to partner with us and keep up with our demand.

In the process, AWS made upstream Kubernetes contributions to overcome some of our scaling challenges. These helped improve the speed and stability of Kubernetes. We certainly like that AWS shares our values and commitments to Open Source.

Last but not least, I'd be remiss not to give a big shoutout to Acquia's product, architect, and engineering teams. Re-architecting a platform with tens of thousands of EC2 instances running large-scale, mission-critical websites is no small feat.

Our team continued to find creative and state-of-the-art ways to build the best possible platform for Drupal. For a glimpse of that, take a look at this presentation we gave at Kubecon 2022. We learned that by switching our scaling metric from Kubernetes' built-in CPU utilization to a custom metric, we could reduce the churn on our clusters by ~1,000%.

Looking back at ACN's journey over the past 3+ years, I'm incredibly proud of how far we have come.

December 21, 2022


I use ArchLinux on my desktop workstation. For the root filesystem, I use btrfs with luks disk encryption and wrote a blog post about it.

My important data is on OpenZFS.

I’ll migrate my desktop to ArchLinux with OpenZFS in RAIDZ configuration as the root filesystem.

To make installation easier, I decided to create a custom ArchLinux boot image with linux-lts and OpenZFS support.

You’ll find my journey to create the boot ISO below. All actions are executed on an ArchLinux host system (already using OpenZFS).


Create a work directory

I created a separate ZFS dataset for the installation on the host system.

[staf@frija archlinux_raidz]$ sudo zfs create <your_zfs_pool>/<data_set>/home/staf/iso
[staf@frija archlinux_raidz]$ sudo chown staf:staf /home/staf/iso/
[staf@frija archlinux_raidz]$ 
[staf@frija archlinux_raidz]$ cd /home/staf/iso/
[staf@frija iso]$ 

Install archiso

Install the archiso package.

[staf@frija archlinux_raidz]$ sudo pacman -Sy archiso

Import the ArchZFS GPG public key

The archiso script uses the GPG public keys from the “host” system. If you aren’t using the ArchZFS repository on your host, you need to import its GPG public key.

curl -L |  pacman-key -a -
pacman-key --lsign-key $(curl -L
curl -L > /etc/pacman.d/mirrorlist-archzfs

Create iso image

Copy the config

Copy the default configuration.

[staf@frija iso]$ cp -r /usr/share/archiso/configs/releng/* ~/iso
[staf@frija iso]$ 

Update the packages file

[staf@frija iso]$ vi packages.x86_64 

We’ll use dkms to build the OpenZFS kernel module, so add the packages required to build the module.


Update pacman.conf

Update the pacman.conf in the work directory (~/iso) to include the ArchZFS repository.

[staf@frija iso]$ vi pacman.conf
Server =$repo/$arch

Update boot configuration


Update the grub config and add the linux-lts entries.

[staf@frija iso]$ cd grub/
[staf@frija grub]$ ls
[staf@frija grub]$ vi grub.cfg 
menuentry "Arch Linux LTS install medium (x86_64, UEFI)" --class arch --class gnu-linux --class gnu --class os --id 'archlinux' {
    set gfxpayload=keep
    search --no-floppy --set=root --label %ARCHISO_LABEL%
    linux /%INSTALL_DIR%/boot/x86_64/vmlinuz-linux-lts archisobasedir=%INSTALL_DIR% archisolabel=%ARCHISO_LABEL%
    initrd /%INSTALL_DIR%/boot/intel-ucode.img /%INSTALL_DIR%/boot/amd-ucode.img /%INSTALL_DIR%/boot/x86_64/initramfs-linux-lts.img
}

menuentry "Arch Linux LTS install medium with speakup screen reader (x86_64, UEFI)" --hotkey s --class arch --class gnu-linux --class gnu --class os --id 'archlinux-accessibility' {
    set gfxpayload=keep
    search --no-floppy --set=root --label %ARCHISO_LABEL%
    linux /%INSTALL_DIR%/boot/x86_64/vmlinuz-linux-lts archisobasedir=%INSTALL_DIR% archisolabel=%ARCHISO_LABEL% accessibility=on
    initrd /%INSTALL_DIR%/boot/intel-ucode.img /%INSTALL_DIR%/boot/amd-ucode.img /%INSTALL_DIR%/boot/x86_64/initramfs-linux-lts.img
}


In practice grub will be used, but for some reason I ended up updating the UEFI configuration too :-)

Copy the default efi boot entry.

[staf@frija iso]$ cp ./efiboot/loader/entries/01-archiso-x86_64-linux.conf ./efiboot/loader/entries/03-archiso-x86_64-linux-lts.conf
[staf@frija iso]$ 
title    Arch Linux LTS install medium (x86_64, UEFI)
sort-key 03
linux    /%INSTALL_DIR%/boot/x86_64/vmlinuz-linux-lts
initrd   /%INSTALL_DIR%/boot/intel-ucode.img
initrd   /%INSTALL_DIR%/boot/amd-ucode.img
initrd   /%INSTALL_DIR%/boot/x86_64/initramfs-linux-lts.img
options  archisobasedir=%INSTALL_DIR% archisolabel=%ARCHISO_LABEL%

Build the iso image

[staf@frija iso]$ mkarchiso -v -o out .
[mkarchiso] ERROR: mkarchiso must be run as root.
[staf@frija iso]$ sudo  mkarchiso -v -o out .
[sudo] password for staf: 
[mkarchiso] INFO: Validating options...
[mkarchiso] INFO: Done!
[mkarchiso] INFO: mkarchiso configuration settings
[mkarchiso] INFO:              Architecture:   x86_64
[mkarchiso] INFO:         Working directory:   /home/staf/iso/work
[mkarchiso] INFO:    Installation directory:   arch
[mkarchiso] INFO:                Build date:   2022-12-14T20:24+0100
[mkarchiso] INFO:          Output directory:   /home/staf/iso/out
[mkarchiso] INFO:        Current build mode:   iso
[mkarchiso] INFO:               Build modes:   iso
[mkarchiso] INFO:                   GPG key:   None
[mkarchiso] INFO:                GPG signer:   None
[mkarchiso] INFO: Code signing certificates:   None
[mkarchiso] INFO:                   Profile:   /home/staf/iso
[mkarchiso] INFO: Pacman configuration file:   /home/staf/iso/pacman.conf
[mkarchiso] INFO:           Image file name:   archlinux-2022.12.14-x86_64.iso

Have fun!


December 19, 2022

A large sign that spells out 'Acquia' in individually lit letters. Behind the sign, people are engaged in a lively conversation.

In the early days, Acquia was one of the fastest-growing companies in the US. Like most startups, we'd raise money, convert that money into growth, raise money again, etc. In total, we raised nearly $190 million in seven rounds of funding.

At some point, all companies that take this approach have to become self-sustainable. Acquia wasn't any different.

When Acquia did a CEO search in 2017, we had just started that transformation. We hired Mike Sullivan as our CEO to help us grow, while quickly becoming financially independent at the same time.

When Mike told me he decided to leave Acquia at the end of the year, I was sad, but not completely surprised. While there is always more work to do, Mike has accomplished the mission we had set out for him: we continued our growth and became self-sustained. Mike is leaving us in the strongest financial position we have ever been in.

Mike will be succeeded by Steve Reny, who has been Acquia's Chief Operating Officer (COO) for 4.5 years. Steve has been guiding all aspects of Acquia's customer success, professional services, global support, security, and operations. And before joining Acquia in 2018, Steve held executive leadership positions at other companies, including as CEO, COO, CFO, head of sales, and head of corporate development.

Everyone at Acquia knows and loves Steve. This CEO transition is natural, planned, and minimally disruptive.

I have a deep appreciation for everything Mike has done for Acquia. And I'm excited for Steve. Not everyone gets to lead one of the most prominent Boston technology companies. As for me, I continue in my role as CTO, and look forward to partnering with Steve.

I believe strongly in Acquia's mission, our purpose, and our opportunity. I have a deep-rooted belief in the critical importance of the web and digital experiences. It's how we communicate, how we stay in touch with loved ones across the world, how we collaborate, how we do business, how we learn, how we bank, and more.

Because of the web's importance to society, we need to help ensure its well-being and long-term health. I think a lot about helping to build a web that I want my children to grow up with. We need to make sure the web is open, accessible, inclusive, safe, energy efficient, pro-privacy, and compliant. Acquia and Drupal both have an important part to play in that.