Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

April 09, 2021

I published the following diary on “No Python Interpreter? This Simple RAT Installs Its Own Copy“:

For a while now, I've been keeping an eye on malicious Python code targeting Windows environments. While Python is becoming more and more popular, attackers face a major issue: Python is not installed by default on most Windows operating systems. It is, however, often found on the machines of developers, system/network administrators, or security teams. As the proverb says, “you are never better served than by yourself”: I found a simple Python backdoor that installs its own copy of the Python interpreter… [Read more]

The post [SANS ISC] No Python Interpreter? This Simple RAT Installs Its Own Copy appeared first on /dev/random.

April 08, 2021

I published the following diary on “Simple Powershell Ransomware Creating a 7Z Archive of your Files“:

While some ransomware families are based on PE files with complex features, it’s easy to write quick-and-dirty ransomware in other languages like Powershell. I found this sample while hunting. I’m pretty confident that this script is a proof-of-concept or still under development, because it does not contain all the required components and includes some debugging information… [Read more]

The post [SANS ISC] Simple Powershell Ransomware Creating a 7Z Archive of your Files appeared first on /dev/random.

April 07, 2021

In an interview, Bill Gates was asked how hard it was for him to learn to delegate. Bill answered by describing how he had to change his mental model, going from writing code himself to letting go in order to optimize for impact.

Yeah, scaling [Microsoft] was a huge challenge. At first I wrote all the code. Then I hired all the people that wrote the code and I looked at the code. Then, eventually, there was code that I didn't look at and people that I didn't hire. And of course the average quality per person is going down, but the ability to have big impact is going up. [...] A large company is imperfect in many ways, and yet it's the way to get out to the entire world. — Bill Gates

You can listen to the entire interview, but the excerpt above captures the key point.

This idea of "having to let go to optimize for impact" really resonates with me. I've gone through this transition in the Drupal community and at Acquia. I've even written about it on a few occasions [1, 2].

April 06, 2021

screenshot of the per page/post Autoptimize settings

I’m in the process of adding a per page/post option to disable Autoptimization.

In the current state of this work in progress, one can disable Autoptimize entirely for a post/page, or disable just JS optimization, as you can see in the screenshot.

Now my question to you, Autoptimize user: which other options from the list below _have_ to go in that metabox, keeping in mind that the list should be between 3 and 5 items long?

  • CSS optimization (which includes Critical CSS)
  • Critical CSS usage/ Inline & defer CSS
  • HTML optimization
  • Image optimization
  • Image Lazyload
  • Google Font optimization
  • Preload (from “extra” tab)
  • Preconnect (from “extra” tab)
  • Async (from “extra” tab)

The Adafruit nRF52 bootloader is a USB-enabled CDC/DFU/UF2 bootloader for nRF52 boards. An advantage compared to Nordic Semiconductor's default bootloader is that you can just drag and drop your application firmware from your operating system's file explorer, without having to install any programming tools. For nRF52840 boards, you hold the reset button while sliding the USB connector into the USB port of your computer, or you tap the reset button twice within 500 ms. The bootloader then starts in DFU (device firmware upgrade) mode and behaves like a removable flash drive.

This device shows three virtual files:

  • INFO_UF2.TXT: contains information about the bootloader build and the board on which it's running
  • INDEX.HTM: redirects to a page that contains an IDE or other information about the board
  • CURRENT.UF2: the contents of the entire flash storage of the device

Flashing the device with new firmware is as easy as copying a UF2 file to the drive. After the file is copied, the drive is unmounted and the new firmware is running on the board. 1
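On Linux, that copy can be as simple as the sketch below; the mount point shown is an assumption on my part, as it depends on your board's drive name and your automounter:

```python
import shutil
from pathlib import Path

def flash_uf2(firmware: str, dfu_drive: str) -> Path:
    """Copy a UF2 file onto the mounted DFU drive; the bootloader flashes it
    and unmounts the drive by itself once the copy completes."""
    dest = Path(dfu_drive) / Path(firmware).name
    shutil.copy(firmware, dest)
    return dest

# Example (hypothetical paths):
# flash_uf2("app.uf2", "/media/koan/NRF52BOOT")
```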

However, the bootloader itself can't be upgraded like this. Today I had some problems with the April USB Dongle 52840 2: the UF2 file transfer was interrupted before it finished. The dmesg command showed a call trace and some errors. As a result, the device was useless: I couldn't put any new firmware on it.

I was puzzled, but then I looked at the bootloader's version in INFO_UF2.TXT, and this was quite old: a 0.2.x version from 2018. I hoped that upgrading the bootloader would solve the problem.

Upgrading the Adafruit nRF52 bootloader is quite easy:

  1. Download the latest release of the bootloader. For the April USB Dongle 52840 and other devices based on Nordic Semiconductor's nRF52840 Dongle 3, the firmware file to download is the one for the bootloader and SoftDevice wireless protocol stack. PCA10059 is the official name of Nordic Semiconductor's nRF52840 Dongle.

  2. Unpack the ZIP file. You need the file (yes, another ZIP file) in it.

  3. Install adafruit-nrfutil: pip3 install --user adafruit-nrfutil.

  4. Connect the board to your computer's USB port and flash the new bootloader package:

$ adafruit-nrfutil dfu serial --package --port /dev/ttyACM0
Upgrading target on /dev/ttyACM0 with DFU package /home/koan/
Flow control is disabled, Dual bank, Touch disabled
Activating new firmware
Activating new firmware
Device programmed.

After this, the INFO_UF2.TXT file reports: UF2 Bootloader 0.5.0, lib/nrfx (v2.0.0), lib/tinyusb (0.9.0-22-g7cdeed54), lib/uf2 (remotes/origin/configupdate-9-gadbb8c7).

Luckily the upgraded bootloader solved my problem: I was able to flash the board with new UF2 application firmware.


1. The UF2 format for firmware has become quite popular in recent years. For instance, the Raspberry Pi Pico also has a bootloader that accepts UF2 files.


2. If you're looking for an nRF52840 device with a longer range than similar devices with a PCB-based antenna, I can definitely recommend the April USB Dongle 52840: in my experiments with the dongle as a Bluetooth Low Energy and 802.15.4/Zigbee sniffer, the external antenna makes a big difference.


3. Another interesting nRF52840 board is the nRF52840 MDK USB Dongle from makerdiary. This is essentially a Nordic Semiconductor nRF52840 Dongle with the Adafruit nRF52 bootloader, sold in a case.
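Since the UF2 format keeps coming up: a UF2 file is just a sequence of fixed 512-byte blocks, each with a 32-byte little-endian header (magic numbers, target address, payload size, block index), a 476-byte data area, and a closing magic. A minimal sketch of packing and unpacking one block, using the field layout from the public UF2 specification:

```python
import struct

# Magic numbers from the UF2 specification.
MAGIC_START0 = 0x0A324655  # "UF2\n"
MAGIC_START1 = 0x9E5D5157
MAGIC_END = 0x0AB16F30

def pack_uf2_block(payload: bytes, target_addr: int,
                   block_no: int, num_blocks: int, flags: int = 0) -> bytes:
    """Pack one 512-byte UF2 block: 32-byte header, 476-byte data area, 4-byte footer."""
    assert len(payload) <= 476
    header = struct.pack("<8I", MAGIC_START0, MAGIC_START1, flags,
                         target_addr, len(payload), block_no, num_blocks, 0)
    data = payload.ljust(476, b"\x00")  # pad the data area to its fixed size
    return header + data + struct.pack("<I", MAGIC_END)

def unpack_uf2_block(block: bytes):
    """Return (target_addr, payload) from a 512-byte UF2 block, validating the magics."""
    assert len(block) == 512
    m0, m1, _flags, addr, size, _no, _total, _family = struct.unpack_from("<8I", block, 0)
    (m_end,) = struct.unpack_from("<I", block, 508)
    assert (m0, m1, m_end) == (MAGIC_START0, MAGIC_START1, MAGIC_END)
    return addr, block[32:32 + size]
```

This is why flashing can be a plain file copy: the bootloader only has to validate each fixed-size block and write its payload to the embedded target address.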

April 03, 2021

This morning I finally pushed Autoptimize 2.8.2 out of the gates, a relatively minor release with miscellaneous small improvements and bugfixes. Only it proved not so minor, as it broke some sites after the update, so here’s a quick postmortem.


  • 7h33 CEST: I pushed out 2.8.2
  • 7h56 CEST: first forum post about a Fatal PHP error due to wp-content/plugins/autoptimize/classes/external/php/ao-minify-html.php missing
  • 7h58 CEST: second forum post confirming issue
  • 8h01 CEST: responded to both forum posts asking if file was indeed missing on filesystem
  • 8h04 CEST: I changed the “stable version” back to 2.8.1 to stop 2.8.2 from being pushed out.
  • 8h07 CEST: forum post replies confirming the file was indeed missing from the filesystem
  • 8h15 CEST: I pushed out 2.8.3 with the fix
  • 8h22 CEST: confirmed fixed by first user
  • 8h26 CEST: confirmed fixed by second user

Root cause analysis

One of the improvements was changing the classname of the HTML minifier to avoid W3 Total Cache’s HTML minifier being used. To that end, not only were small changes made to the HTML minifier code, but the file was also renamed from minify-html.php to ao-minify-html.php. The file itself was present on my local filesystem, but I did *not* svn add it, so it was never propagated to the SVN server. As a result it was missing from the 2.8.2 zip file, causing the PHP fatal “require(): Failed opening required” errors.


Every svn ci has to be preceded by an svn stat, always. I’ve updated my “go live” procedure to reflect that.
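Beyond svn stat, a small pre-release sanity check can catch this class of bug mechanically: scan the PHP files that are about to ship for require/include statements with relative paths and verify that every target actually exists in the tree. A minimal sketch (the regex and the __DIR__-relative convention are my assumptions for illustration, not Autoptimize's actual build tooling):

```python
import re
from pathlib import Path

# Matches require/include statements with a quoted __DIR__-relative path, e.g.
# require(__DIR__ . '/external/php/ao-minify-html.php');
REQUIRE_RE = re.compile(
    r"(?:require|include)(?:_once)?\s*\(\s*__DIR__\s*\.\s*['\"]/([^'\"]+)['\"]"
)

def missing_requires(plugin_dir: str) -> list:
    """Return (source file, required path) pairs whose target is absent on disk."""
    missing = []
    for php in Path(plugin_dir).rglob("*.php"):
        for match in REQUIRE_RE.finditer(php.read_text(errors="ignore")):
            target = php.parent / match.group(1)
            if not target.exists():
                missing.append((str(php), str(target)))
    return missing
```

Running such a check against the exported release tree (not the working copy) would have flagged the missing ao-minify-html.php before the zip ever reached users.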

Additionally, I strongly advise against automatic updates for Autoptimize (and I don’t auto-update any plugin myself), not only because of major f-ups like mine today, but also because any change to how (auto-)optimization works needs to be tested for regressions. And if you have a site that generates money somehow, you really should have a staging site (which can auto-update) to test updates on before applying them in production.

April 02, 2021

I published the following diary on “C2 Activity: Sandboxes or Real Victims?“:

In my last diary, I mentioned that I was able to access screenshots exfiltrated by the malware sample. During the first analysis, there were approximately 460 JPEG files available. I continued to keep an eye on the host and the number increased slightly, but not by much. My diary’s conclusion was that the malware looked popular given the number of screenshots, but wait… Are we sure that all those screenshots are from real victims? I executed the malware in my sandbox, and other automated analysis tools were probably also used to detonate the malware in a sandbox. This question popped up in my mind: how can we get an idea of the ratio of automated tools vs. real victims? [Read more]

The post [SANS ISC] C2 Activity: Sandboxes or Real Victims? appeared first on /dev/random.

March 31, 2021

I published the following diary on “Quick Analysis of a Modular InfoStealer“:

This morning, an interesting phishing email landed in my spam trap. The mail was redacted in Spanish and, as usual, asked the recipient to urgently process the attached document. The filename was “AVISO.001” (This extension is used by multi-volume archives). The archive contained a PE file with a very long name: AVISO11504122921827776385010767000154304736120425314155656824545860211706529881523930427.exe (SHA256:ff834f404b977a475ef56f1fa81cf91f0ac7e07b8d44e0c224861a3287f47c8c). The file is unknown on VT at this time so I performed a quick analysis… [Read more]

The post [SANS ISC] Quick Analysis of a Modular InfoStealer appeared first on /dev/random.

March 29, 2021

I published the following diary on “Jumping into Shellcode“:

Malware analysis is exciting because you never know what you will find. In previous diaries, I already explained why it’s important to have a look at groups of interesting Windows API calls to detect certain behaviors. The classic example is code injection. Usually, it is based on something like this:

1. You allocate some memory
2. You get a shellcode (downloaded, extracted from a specific location like a section, a resource, …)
3. You copy the shellcode in the newly allocated memory region
4. You create a new thread to execute it.

[Read more]

The post [SANS ISC] Jumping into Shellcode appeared first on /dev/random.

March 28, 2021

rpi4 with disk

In my last blog post, we set up a FreeBSD virtual machine with QEMU. I switched from the EDK2 (UEFI) firmware to U-Boot, because the EDK2 firmware had issues with multiple CPUs in the virtual machines.

In this blog post, we’ll continue with the network setup, install the virtual machine from a CD-ROM image, and configure the virtual machine to start during the Pi’s start-up.

Network Setup


Bridge setup

The network interface on my Raspberry Pi is configured in a bridge. I had already used this bridge setup for a virtual machine with libvirtd.

The bridge is configured with network-manager. I don’t recall exactly how I created it; it was probably with nmtui or nmcli.

Creating a bridge with nmtui is straightforward, so I won’t cover it in this how-to.

I use Manjaro on my Raspberry Pi. Manjaro is based on Arch Linux, and the ArchLinux wiki has a nice article on how to set up a bridge.


Create a bridge.conf file in /etc/qemu/ to allow the bridge in QEMU.

# cat /etc/qemu/bridge.conf 
allow eth0-bridge


When you use a firewall that drops all packets by default - as you should - you probably want to set up a firewall rule that allows all traffic on the physical interface of the bridge.

iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT

I use a simple firewall script that is based on the Debian firewall wiki.

As always with a firewall, make sure that you log the dropped packets. It’ll make debugging much easier.

You’ll find my iptables firewall rules below.

iptables -F

# Default policy to drop 'everything' but our output to internet
iptables -P FORWARD DROP
iptables -P INPUT   DROP
iptables -P OUTPUT  ACCEPT

# Create the chains that will log the dropped packets
iptables -N LOGGING_INPUT
iptables -N LOGGING_FORWARD
iptables -N LOGGING_OUTPUT

# Allow established connections (the responses to our outgoing traffic)
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow local programs that use loopback (Unix sockets)
iptables -A INPUT -i lo -j ACCEPT

# Allow incoming SSH/SCP connections to this machine
# (you can also restrict this to a source address or network definition)
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT

# Allow bridged traffic to and from the virtual machines
iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT

# Send everything that is about to be dropped to the logging chains
iptables -A INPUT -j LOGGING_INPUT
iptables -A FORWARD -j LOGGING_FORWARD

iptables -A LOGGING_INPUT -m limit --limit 2/min -j LOG --log-prefix "IPTables-Input-Dropped: " --log-level 4
iptables -A LOGGING_FORWARD -m limit --limit 2/min -j LOG --log-prefix "IPTables-Forward-Dropped: " --log-level 4
# LOGGING_OUTPUT only takes effect if you change the OUTPUT policy to DROP
iptables -A LOGGING_OUTPUT -m limit --limit 2/min -j LOG --log-prefix "IPTables-Output-Dropped: " --log-level 4



To boot the virtual machine with networking enabled, you can add -net nic -net bridge,br=<your-bridge> to the qemu-system-aarch64 command. My bridge is called eth0-bridge.

As a test, I booted the virtual machine with the FreeBSD virtual machine image.

qemu-system-aarch64 -M virt -m 4096M -cpu host,pmu=off --enable-kvm \
        -smp 2 -nographic -bios /usr/local/u-boot/u-boot.bin \
        -hda /home/staf/Downloads/freebsd/FreeBSD-13.0-RC2-arm64-aarch64.qcow2 \
        -boot order=d -net nic -net bridge,br=eth0-bridge

This creates a tap interface that is assigned to the virtual machine. The FreeBSD virtual machine image is configured to get an IP address over DHCP.

Install FreeBSD from a CD-ROM image

Download the FreeBSD ARM64 “Installer Image” from the FreeBSD website.

Create a disk image for the virtual machine.

$ qemu-img create -f qcow2 myfreebsd.qcow2 50G
Formatting 'myfreebsd.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=53687091200 lazy_refcounts=off refcount_bits=16
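If you ever want to sanity-check such an image from a script: a qcow2 file starts with a fixed big-endian magic ("QFI\xfb") followed by a version field. A minimal sketch of reading them (based on the public qcow2 header layout; nothing in the install itself needs this):

```python
import struct

QCOW2_MAGIC = 0x514649FB  # the bytes "QFI\xfb"

def qcow2_version(path: str) -> int:
    """Return the qcow2 version of an image, or raise ValueError if it isn't qcow2."""
    with open(path, "rb") as f:
        header = f.read(8)  # magic (4 bytes) + version (4 bytes), both big-endian
    if len(header) < 8:
        raise ValueError("file too short to be a qcow2 image")
    magic, version = struct.unpack(">II", header)
    if magic != QCOW2_MAGIC:
        raise ValueError("not a qcow2 image")
    return version
```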

Boot the virtual machine with the “Installer Image” and the created qcow2 image.

$ qemu-system-aarch64 -M virt -m 4096M -cpu host,pmu=on --enable-kvm \
        -smp 2 -nographic -bios /usr/local/u-boot/u-boot.bin \
        -cdrom /home/staf/Downloads/freebsd/iso/FreeBSD-13.0-RC3-arm64-aarch64-dvd1.iso \
        -boot order=c \
        -hda myfreebsd.qcow2 \
        -net nic -net bridge,br=eth0-bridge

The installation continues as a normal FreeBSD install.

|  ______               ____   _____ _____  
  |  ____|             |  _ \ / ____|  __ \ 
  | |___ _ __ ___  ___ | |_) | (___ | |  | |
  |  ___| '__/ _ \/ _ \|  _ < \___ \| |  | |
  | |   | | |  __/  __/| |_) |____) | |__| |
  | |   | | |    |    ||     |      |      |
  |_|   |_|  \___|\___||____/|_____/|_____/      ```                        `
                                                s` `.....---.......--.```   -/
 +-----------Welcome to FreeBSD------------+    +o   .--`         /y:`      +.
 |                                         |     yo`:.            :o      `+-
 |  1. Boot Multi user [Enter]             |      y/               -/`   -o/
 |  2. Boot Single user                    |     .-                  ::/sy+:.
 |  3. Escape to loader prompt             |     /                     `--  /
 |  4. Reboot                              |    `:                          :`
 |  5. Cons: Video                         |    `:                          :`
 |                                         |     /                          /
 |  Options:                               |     .-                        -.
 |  6. Kernel: default/kernel (1 of 1)     |      --                      -.
 |  7. Boot Options                        |       `:`                  `:`
 |                                         |         .--             `--.
 |                                         |            .---.....----.
   Autoboot in 5 seconds, hit [Enter] to boot or any other key to stop   

Choose your terminal type; I used xterm. Tip: if your screen gets mixed up during the installation, you can use [CTRL][L] to redraw it.

Starting local daemons:
Welcome to FreeBSD!

Please choose the appropriate terminal type for your system.
Common console types are:
   ansi     Standard ANSI terminal
   vt100    VT100 or compatible terminal
   xterm    xterm terminal emulator (or compatible)
   cons25w  cons25w terminal

Console type [vt100]: 

Continue with the FreeBSD installation…

When you reboot your freshly installed FreeBSD system, you can interrupt QEMU with the [CTRL][a] [x] key combination. To see the other options, use [CTRL][a] [h].

qemu-system-aarch64 -M virt -m 4096M -cpu host --enable-kvm \
        -smp 2 -nographic -bios /usr/local/u-boot/u-boot.bin \
        -boot order=c \
        -hda myfreebsd.qcow2 \
        -net nic -net bridge,br=eth0-bridge

The first boot will fail because we are using U-Boot as the BIOS: the EFI boot filesystem doesn’t exist.

Log on to the system.

Automatic file system check failed; help!
ERROR: ABORTING BOOT (sending SIGTERM to parent)!
1970-01-01T01:00:02.912420+01:00 - init 1 - - /bin/sh on /etc/rc terminated abnormally, going to single user mode
Enter root password, or ^D to go multi-user
Enter full pathname of shell or RETURN for /bin/sh: 
root@:/ # 

Verify which filesystem failed to mount.

root@:/ # mount -a
mount_msdosfs: /dev/vtbd1p1: No such file or directory
root@:/ # 

The root filesystem is read-only. Remount it in read-write mode with mount -u /.

root@:/ # mount -u /
root@:/ #

Edit /etc/fstab

root@:/ # vi /etc/fstab

Add a # before the /boot/efi mount point. I wouldn’t remove the line: it might be useful to be able to re-enable it if you want to switch to a UEFI BIOS later.

# Device                Mountpoint      FStype  Options         Dump    Pass#
# /dev/vtbd1p1          /boot/efi       msdosfs rw              2       2
/dev/mirror/swap                none    swap    sw              0       0

And reboot your system.

root@:/ # sync
root@:/ # reboot


To implement the auto-start of the QEMU virtual machine, I mainly followed the QEMU article on the ArchLinux wiki.

Systemd service

Create the systemd service.

# vi /etc/systemd/system/qemu@.service
[Unit]
Description=QEMU virtual machine

[Service]
Environment="haltcmd=kill -INT $MAINPID"
EnvironmentFile=/etc/conf.d/qemu.d/%i
ExecStart=/usr/bin/qemu-system-aarch64 -M virt -name %i --enable-kvm -cpu host -nographic $args
ExecStop=/usr/bin/bash -c ${haltcmd}
ExecStop=/usr/bin/bash -c 'while nc localhost 7100; do sleep 1; done'

[Install]
WantedBy=multi-user.target


Create QEMU config

Create the qemu.d config directory.

# mkdir -p /etc/conf.d/qemu.d/

Create the definition for the virtual machine.

# vi /etc/conf.d/qemu.d/myfreebsd
vmport=7001
args="-bios /usr/local/u-boot/u-boot.bin -hda /var/lib/qemu/images/rataplan/myfreebsd.qcow2 -boot order=c -net nic -net bridge,br=eth0-bridge -serial telnet:localhost:$vmport,server,nowait,nodelay"
haltcmd="ssh powermanager@myfreebsd sudo poweroff"
[root@minerva ~]# systemctl daemon-reload
[root@minerva ~]# 
[root@minerva ~]# systemctl start qemu@myfreebsd
[root@minerva ~]# 


FreeBSD on pi screen

We have two options to execute a poweroff. The first one is ACPI: QEMU has a “monitor” interface that allows you to execute a “system_powerdown” command, which requests a poweroff through ACPI.

Your guest operating system needs to support it. FreeBSD has good ACPI support built into the kernel, but I don’t know how stable it is on ARM64, and we’re also using U-Boot.

The other option is to execute the poweroff command over ssh with sudo. Since I didn’t get ACPI working, I configured it with ssh.
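For reference, the monitor-based approach can also be scripted. The sketch below assumes you exposed the monitor on a UNIX socket with something like -monitor unix:/run/qemu-myfreebsd.sock,server,nowait (the socket path and naming are my own, not part of the setup above):

```python
import socket

def qemu_monitor_command(sock_path: str, command: str) -> None:
    """Send one command to a QEMU monitor exposed as a UNIX socket."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall((command + "\n").encode())

# ACPI shutdown request, equivalent to typing it at the (qemu) prompt:
# qemu_monitor_command("/run/qemu-myfreebsd.sock", "system_powerdown")
```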

Setup ssh

Generate an ssh key

I normally store my ssh keys on a smartcard-hsm and use an ssh-agent. As a test, I will just use an ssh key on the host filesystem.

I’ll migrate it when I move my Raspberry Pi into my home production environment. :-)

Generate an ssh key on the QEMU host system.

# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/
The key fingerprint is:
The key's randomart image is:

Install sudo

To execute the poweroff command we’ll use sudo, so install it on the FreeBSD guest. The FreeBSD package manager, pkg, will be bootstrapped the first time you execute it.

# pkg install -y sudo
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
	sudo: 1.9.5p2

Number of packages to be installed: 1

The process will require 4 MiB more space.
890 KiB to be downloaded.
[1/1] Fetching sudo-1.9.5p2.txz: 100%  890 KiB 911.0kB/s    00:01    
Checking integrity... done (0 conflicting)
[1/1] Installing sudo-1.9.5p2...
[1/1] Extracting sudo-1.9.5p2: 100%

Create the powermanager user

Create the powermanager user with the adduser command.

# adduser
Username: powermanager
Full name: powermanager
Uid (Leave empty for default): 
Login group [powermanager]: 
Login group is powermanager. Invite powermanager into other groups? []: 
Login class [default]: 
Shell (sh csh tcsh bash rbash nologin) [sh]: 
Home directory [/home/powermanager]: 
Home directory permissions (Leave empty for default): 
Use password-based authentication? [yes]: no
Lock out the account after creation? [no]: 
Username   : powermanager
Password   : <disabled>
Full Name  : powermanager
Uid        : 1002
Class      : 
Groups     : powermanager 
Home       : /home/powermanager
Home Mode  : 
Shell      : /bin/sh
Locked     : no
OK? (yes/no): yes
adduser: INFO: Successfully added (powermanager) to the user database.
Add another user? (yes/no): no
root@rataplan:~ # 

Configure sudo

Create /usr/local/etc/sudoers.d/powermanager

# visudo -f /usr/local/etc/sudoers.d/powermanager

with the permission to execute the poweroff command without a password.

powermanager ALL=(ALL) NOPASSWD:/sbin/poweroff


Create the authorized_keys file for the powermanager user.

Create the .ssh directory in the home directory of the powermanager user.

# cd /home/powermanager/
# umask 027
# mkdir .ssh

Create the authorized_keys file. It is less well known that you can also restrict access in the authorized_keys file; we’ll restrict access to the IP address of the Linux hypervisor system.

from="",no-X11-forwarding ssh-rsa <snip>
root@rataplan:/home/powermanager # chown -R root:powermanager .ssh
root@rataplan:/home/powermanager # 


Log on to the FreeBSD virtual machine with the created ssh key and try to execute the poweroff command.

# ssh powermanager@myfreebsd
The authenticity of host 'myfreebsd (' can't be established.
ED25519 key fingerprint is SHA256:R7tmX7In9D21H3hj2JiwJJVwcoQvoIR5BgJjuKgY3CI.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'myfreebsd' (ED25519) to the list of known hosts.
FreeBSD 13.0-RC3 (GENERIC) #0 releng/13.0-n244696-8f731a397ad: Fri Mar 19 03:36:50 UTC 2021

Welcome to FreeBSD!

Release Notes, Errata:
Security Advisories:
FreeBSD Handbook:
Questions List:
FreeBSD Forums:

Documents installed with the system are in the /usr/local/share/doc/freebsd/
directory, or can be installed later with:  pkg install en-freebsd-doc
For other languages, replace "en" with a language code like de or fr.

Show the version of FreeBSD installed:  freebsd-version ; uname -a
Please include that output and any error messages when posting questions.
Introduction to manual pages:  man man
FreeBSD directory layout:      man hier

To change this login announcement, see motd(5).
Nice bash prompt: PS1='(\[$(tput md)\]\t <\w>\[$(tput me)\]) $(echo $?) \$ '
		-- Mathieu <>
powermanager@rataplan:~ $ 
$ sudo poweroff
Shutdown NOW!
poweroff: [pid 43082]
powermanager@rataplan:~ $                                                                                
*** FINAL System shutdown message from powermanager@rataplan ***             

System going down IMMEDIATELY                                                  


System shutdown time has arrived
Connection to myfreebsd closed by remote host.
Connection to myfreebsd closed.
[root@minerva ~]# 

Final Test

Make sure that your guest system is running and is configured to start at system start-up.

[root@minerva ~]# systemctl enable qemu@myfreebsd
Created symlink /etc/systemd/system/ → /etc/systemd/system/qemu@.service.
[root@minerva ~]# systemctl start qemu@myfreebsd
[root@minerva ~]# 

Verify that the system is running with systemctl status.

[root@minerva ~]# systemctl status qemu@myfreebsd
● qemu@myfreebsd.service - QEMU virtual machine
     Loaded: loaded (/etc/systemd/system/qemu@.service; enabled; vendor preset: disabled)
     Active: active (running) since Sun 2021-03-21 20:24:10 CET; 2min 39s ago
   Main PID: 43360 (qemu-system-aar)
      Tasks: 5 (limit: 8536)
     CGroup: /system.slice/system-qemu.slice/qemu@myfreebsd.service
             └─43360 /usr/bin/qemu-system-aarch64 -M virt -name myfreebsd --enable-kvm -cpu host -nographic -m 4096 -smp 2 -bios /usr/local/u-boot/u-b>

Mar 21 20:24:10 minerva systemd[1]: Started QEMU virtual machine.
Mar 21 20:24:10 minerva qemu-system-aarch64[43360]: QEMU 5.2.0 monitor - type 'help' for more information

In one window, log on to your FreeBSD guest console with telnet.

$ telnet localhost 7001

On the QEMU Linux host, execute:

# systemctl stop qemu@myfreebsd

The FreeBSD guest should power down…

Have fun!


March 26, 2021


The Boy Who Cried Leopard

Recently there's been a new dust up about Richard Stallman and the Free Software Foundation. For those of you just tuning in: an open letter demands that the entire board of the Free-as-in-speech Software Foundation resign, because of past statements and opinions by the radical inventor of free-as-in-speech software.

It's pushed on social media, by various People of Clout. People start sharing their own stories which are somehow meant to prove the power grab is justified because Stallman is horrible. There's also a counter letter, which I and many others have signed. It's all very productive.

The whole situation is remarkable to me. The undersigned claim to detest Stallman, for being an uncompromising libertarian who holds unsavory and immoral views—or at least a caricature of them. Yet they seem incredibly invested in taking over an organization he founded to explicitly defend his personal ideals. You'd think people who are so into guilt by association would prefer to not be associated with any of it.

It's even more remarkable when you notice the backdrop for the previous dust up involving Stallman: MIT and Jeffrey Epstein. Cos what it looked like to me was that a bunch of people suddenly all had their hands in a very dubious funding cookie jar. At the same time, they decided it was very important to use someone as a scapegoat to pin evil opinions on about sex and consent. You gotta wonder.

What I really want to talk about though is a pattern of behavior that keeps recurring.

Please be patient I have autism - Blue hat


Consider this story.

T. Tweeter describes the pain of being sat next to Stallman on a grounded plane for 90 minutes. Stallman complains to the flight attendant and becomes irate. Eventually the narrator "takes one for the team" by striking up a conversation with him, lest the entire flight be cancelled, after ignoring him for 45 minutes. Very empathetic. Stallman sees this as an opportunity to criticize his choice of headphones, that they are a symbol of digital oppression.

The intended take-away, I assume, is that Stallman is immature and lacks the social graces to deal with a difficult situation. He takes out his stress on the people around him, who can't do anything about it, making it worse for everyone. He is single-mindedly focused on his own interests.

That doesn't sound very pleasant.

Though as someone on the spectrum, I can read this situation quite differently.

Planes are uncomfortable for anyone: you are stuck in a tin can, in an uncomfortable seat, next to people you can't get away from. For autists, this is extra bad: they often have difficulty tuning out their environment. This can be experienced as an actual assault of painful sounds, smells and so on. Spending several hours on a plane is Nightmare mode for some of us, and noise-cancelling can be a life saver.

The fact that the plane was grounded is also extremely pertinent: autism is often paired with OCD, and a grounded plane represents a schedule that was made but then disrupted. An expectation was set of orderly events, and then this expectation was violated, with no definite end in sight. This can be unbearable for those with a certain predisposition.

The combination of the two is extra bad, because the way autists generally deal with stressful situations is through planning and preparation: they anticipate the various obstacles and harms they might encounter, and preventatively try to mitigate them. If things go wrong despite all this, because of the actions of others, this can register as negligent and rude. The person on the spectrum is trying their best to avoid harm, to avoid foreseeable problems that will result in pain, but their efforts are in vain or actively frustrated.

Worse, if they complain, they will be seen as arrogant and entitled, because what was plainly obvious to them is rarely understood by others. It puts them in a damned-if-you-do, damned-if-you-don't situation. Annoy people by pointing out their mistakes, or stay silent and be forced to live painfully through their slow, unfolding consequences. Ripping off the band-aid is sometimes necessary, and can have remarkable results.

I'm not defending Stallman's behavior, I'm just explaining what it likely looked like from the other side. The part about the headphones is also pertinent, because to someone like Stallman, being able to talk about his interests is, by definition, a good time. It comes from an inability to understand that others have fundamentally different priorities of what is enjoyable. He sincerely believes the person is making a bad choice because he foresees that some technological limitation will eventually deny them a fundamental expressive right.

What is most remarkable is that Stallman's detractors consider themselves exquisitely empathetic. Yet they seem unable to grasp this from his perspective, even if they find it unreasonable. They assume he is being willfully unbearable in a bearable situation, rather than simply having an unbearable experience, as valid or invalid as theirs.

Japanese Tapas aka Izakaya

The Izakaya Clown Car

I have my own story that hits similar notes. At a local conference, I booked a dinner reservation for a group. Because of an error by the restaurant, it almost fell through, but we managed to sort it all out with a different location. It was all very chaotic.

My invitation was very clear: there are no extra seats available. The guest list was locked in. This was an extremely popular place. So you can imagine how I felt when, day of, more people show up than agreed.

"Well, there's a few people here with their spouse... we couldn't just tell them not to come."

Here's how my sperg brain answered that:

"Yes you can. In fact, those are exactly the people who can go off have dinner on their own without being alone."

Most people don't want to be the one to say "no, you can't come," even if there is a perfectly good reason for it. I am not that guy.

You see, I know conferences. I know the pattern of wandering in the vicinity of the event as part of a hungry group. The chances of finding dinner any time soon shrink with every new person who tags along. This is the exact thing my dinner plans were meant to avoid. Sorry, that's just how it is. Don't blame me for knowing you better than you do. Bystander group dynamics are predictable and tedious.

We ended up squished around too small a table, with visibly exasperated staff, in a place that until then I had been a regular and welcomed customer at. At a location that normally didn't even do reservations but had been forced to accept out of a Japanese sense of franchise honor. And me a nervous wreck for about the first half of it, at least until the sake kicked in.

I'm sure some thought I was the asshole, too spergy to just "have a good time". This is the problem with people: if the assholishness is sufficiently distributed, everyone can claim individually it's not a big deal, even when all the crap flows downhill towards one person. Out of sight, out of mind.

That dinner ended up getting paid for with a Google credit card, btw. I suspect there's a lesson about valley privilege in there. Just saying.

Git rebase

Rebase Richard Stallman

Anyway so, when faced with a stressful and unexpected situation, Stallman freaks out.

Now let's look at the Medium post Remove Richard Stallman from the last dust up:

I’m writing this because I’m too angry to work.

I’m writing this because at 11AM on Wednesday, September 11th 2019, my friend sent me an email that was sent to an MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) mailing list.

This email came from Richard Stallman, a prominent computer scientist.

A single email sent you into a rage, you say?

I was shocked. I continued talking to my friend, a female graduate student in CSAIL, about everything, trying to get the full email thread (I wasn’t on the mailing list). I even started emailing reporters — local and national, news sites, newspapers, radio stations. I couldn’t stop thinking about it. During my 45-minute drive home, when I normally listen to podcasts or music, I just sat in complete silence.

And then you couldn't stop talking about it. You dumped it all on a friend and turned them into your personal backchannel? You reached out to multiple reporters? Your normal routine was completely thrown off?

So I told my friends that I would just write a story myself. I’d planned to do it after work today; instead, because I can’t possibly focus, I’m working on it now.

The problems are so obvious.

Why do we wait until it becomes bad and public and unbearable and people like me have to write posts like this?

Why do we ponder the low enrollment of female and minority graduate students at MIT with one hand and endorse shitty men in science with the other? Not only endorse them — we invite them to our campus where they will brush shoulders with those same female and minority students.

There's a thing that's extremely obvious to her, that she finds unbearable. She is very frustrated that others aren't automatically on board. She hates the idea of being around them and even hints that it is unpleasant to touch them.

She's doing the exact same thing Stallman was on the plane. What's more, she is using all the autistic registers to describe her discomfort: bottled up emotions, OCD, disruption of routine, sensory discomfort, and so on.

It's also similar to my story of being squished around a restaurant table. The big difference is: nobody is forcing her to do anything. This is all just about an email somebody forwarded to her, from a list she's not even on.

What's really, really funny is the next part:

There is nothing I have seen a man in tech do that a woman could not. What’s more, the woman would probably be less egotistical and more team-oriented about it.

Like freaking out in public but pretending you're doing it for the children. Did you know that they say that autism tends to manifest differently in women than in men? And that men tend to have a systems-focus while women tend to have a people-focus? That female autists tend to be more verbally fluent and hence often go unnoticed for years? You say you are an MIT robotics engineer with a fondness for writing?

There is nothing I have seen a woman in tech do that a man could not. What’s more, the man would probably be less egotistical and more team-oriented about it.

Doesn't sound so pleasant anymore, does it? This is the "Women are Wonderful" effect in the wild: making patently sexist statements is okay if they make women sound good.

There is no single person that is so deserving of praise their comments deprecating others should be allowed to slide. Particularly when those comments are excuses about rape, assault, and child sex trafficking.

Notice that the person who openly denigrated "shitty men in science" in bulk earlier claims it is wholly unacceptable to deprecate others, while misrepresenting them as endorsing horrific crimes wholesale.

Let the sperg who is a buddha cast the first stone.

Stoning Scene - The Life of Brian

Dogs and Cats

It's easy to conclude the above represents enormous, total, widespread hypocrisy. But there's a subtle distinction that threatens to get lost.

Stallman was unexpectedly stuck on a plane. I was unexpectedly forced to choose between going hungry or having an extremely uncomfortable dinner.

But nobody was forced to listen to Stallman having a discussion on a private mailing list.

Why do we wait until it becomes bad and public and unbearable and people like me have to write posts like this?

If anyone is being willfully unbearable, it is people who pretend this distinction does not matter. That every knee must bend regardless of who and when and where.

I've thought a lot about what exactly it is that social media is and does. Why it is seemingly so pernicious.

One conclusion is that it is a perfect environment for social predators, especially those with cluster B disorders such as narcissism and borderline. The platforms reward attention-seeking, and thrive on gossip and hearsay. Users trade publicly in reputation rather than facts. The lack of logic in seizing control of an organization when you detest its founders' ideas makes this clear: it's not about the principles, but about grabbing power and funding.

Social media also encourages these behaviors even for those not predisposed to it, simply through monkey-see-monkey-do. The notion of activists as "social script kiddies" is particularly relevant here: people might not realize it, but they are often acting out thinly disguised scripts for emotional abuse, even cult indoctrination. Just fill in the blanks and let it rip. Worse is that it also forces opponents to adopt a systematic way of countering it: zero tolerance for such shenanigans anywhere, classement verticale, into the trash it goes.

But I think there's something else too, and it ties back to one of the oldest stories in the book: The Boy Who Cried Wolf.

The villagers in the story are misled to believe there is an imminent threat. This captures their attention, sending them on a pointless wolf hunt. This happens so often, they conclude there is no danger. When a wolf finally does show up, they don't believe it, and people get eaten.

Social media does something similar, because it creates a global village. But it's not quite the same.

Everyone who subscribes to it is constantly being yelled at that there are wolves everywhere. Many take it seriously, think about it, and join an Anti-Wolf Coalition. Some even go out and hunt. But usually there aren't any real wolves in their neighborhood. So they spend their energy obsessing for no reason. People become afraid to go out at night, worried they might get eaten. Eventually even ordinary accidents are interpreted as wolf attacks. Owning a dog stops being popular, especially if you have children.

Then one day, a leopard shows up. A boy spots the creature at night, but it is difficult to see, so when he describes it, it sounds just like a cat. "Cats are harmless!" the villagers say. "They're nothing like wolves!"

And the leopard ate very well.

Ten years ago, I reflected on the fact that -- by that time -- I had been in Debian for just over ten years. This year, in early February, I've passed the twenty year milestone. As I'm turning 43 this year, I will have been in Debian for half my life in about three years. Scary thought, that.

In the past ten years, not much has changed, and yet at the same time, much has. I became involved in the Debian video team; I stepped down from the m68k port; and my organizing of the Debian devroom at FOSDEM resulted in me eventually joining the FOSDEM orga team, where I eventually ended up also doing video. As part of my video work, I wrote SReview, for which, in these COVID-19 times, I have had to spend much of my spare time writing new code and/or fixing bugs.

I was a candidate for the position of DPL one more time, without being elected. I was also a candidate for the technical committee a few times, also without success.

I also added a few packages to the list of packages that I maintain for Debian; most obviously this includes SReview, but there's also things like extrepo and policy-rcd-declarative, both fairly recent packages that I hope will improve Debian as a whole in the longer term.

On a more personal level, at one DebConf I met a wonderful girl with whom I have now just celebrated my first wedding anniversary. Before that could happen, I had to move to South Africa two years ago. Moving is an involved process at the best of times; moving to a different continent altogether is even more so. As it would have been complicated and involved to remain a business owner of a Belgian business while living 9500km away from the country, I sold my shares to my (now ex) business partner; it turned the page on a 15-year chapter of my life, something I could not do without mixed feelings one way or the other.

The things I do in Debian have changed over the past twenty years. At one point I was the maintainer of the second-highest number of packages in the project, back when I maintained the Linux Gazette packages; I've been an m68k porter; I've been an AM, and briefly even an NM frontdesk member; I've been a DPL candidate three times, and a TC candidate twice.

At the turn of my first decade of being a Debian Developer, I noted that people started to recognize my name, and that I started to be one of the Debian Developers who had been with the project longer than most. This has, obviously, not changed. New in the "I'm getting old" department is the fact that during the last DebConf, I noticed for the first time that there was a speaker who had been alive for less time than I had been a Debian Developer. I'm assuming these types of things will continue happening in the next decade, and that the future will bring more of these kinds of changes that will make me feel older as I and the project mature more.

I'm looking forward to it. Here's to you, Debian; may you continue to influence my life, in good ways and in bad (but hopefully mostly good), as well as continue to inspire me to improve the world, as you have over the past twenty years!

Today, the U.S. Congress and big tech companies continued the debate about Section 230 of the 1996 Communications Decency Act.

Put simply, Section 230 provides websites immunity from liability from third-party content. This internet legislation is a double-edged sword. On the one hand it has allowed the dangerous spread of misinformation on social media. On the other hand it has helped the internet thrive.

If I write something untrue and damaging about you on Facebook, you might be able to sue me, but you can't sue Facebook. As a result, social media companies don't really care what is said on their platforms. Their immunity is a big reason why fake news, hate speech and misinformation have been able to spread uncontrollably.

At the same time, Section 230 makes it possible for bloggers to host comments from their readers, for Open Source communities to work together online, and for YouTubers to share videos. Section 230 enables people to share, innovate and collaborate. It has empowered a lot of good.

President Biden has suggested revoking Section 230. Other policy makers would like to reform Section 230. Either revoking or modifying Section 230 could have a big impact on any organization that hosts online content.

Hosting companies could be impacted, but also bloggers and Open Source communities. Having to police all content could quickly become unsustainable, especially for individuals and small organizations. People publish so much new content every day!

As Katie Jordan, the Director of Public Policy and Technology for the Internet Society, said: "If cloud providers get wrapped up in this conversation about pulling back intermediary liability protection, then by default, they're going to have to reduce privacy and security practices because they'll have to look at the content they're storing for you, to know if they're breaking the law."

A wholesale repeal of Section 230 seems too far reaching to me. It could cause more harm than good. A careful reform seems more appropriate.

Instead of being so focused on Section 230, I'd start by regulating search and social media algorithms. Hosting content is one thing, but recommending content to millions of people is another. When search and social media companies reach billions of people, their content recommendation algorithms can sway public sentiment, introduce bias or rapidly spread misinformation. We should start there.

I've said in the past that we need an FDA for large-scale algorithms that impact society. Just as the FDA ensures that pharmaceutical companies aren't lying about the claims they make about their drugs, there should be a similar regulator for large-scale software algorithms. For example, we need some level of guarantee that companies like Google, Twitter and Facebook won't intentionally (or unintentionally) manipulate search results to shape public opinion.

March 25, 2021

Thousands of Open Source and Free Software advocates are outraged at the Free Software Foundation (FSF), myself included.

In 2019, Richard Stallman was forced out of the FSF, the organization he started. This came after he described Jeffrey Epstein's sex-trafficking victims as "entirely willing". This week, Stallman announced that he has been reappointed.

The news that Stallman is back came as a shock to me. I feel very strongly that he needs to be removed from leadership roles. There is no room for his misogynistic and other problematic behavior.

And I'm not alone. Almost two thousand Free Software advocates have signed an open letter seeking the removal of Richard Stallman and the entire FSF's Board of Directors.

While I want Stallman removed, I'm withholding my judgment of the FSF's Board of Directors a bit longer. A few reasons:

  • I don't understand how Stallman was able to return. It doesn't make any sense to me.
  • The FSF's Board of Directors has remained silent throughout this outrage. To the best of my knowledge, no official statement has been made. I want to know what they have to say.
  • Last but not least, Stallman announced his own return, and it seems like there was an element of surprise.

I don't have private information about what is going on at the FSF, but I do have a lot of experience working as a Board Member.

A Board of Directors can't always move fast or communicate openly in the moment. Depending on what is going on, they may have to take legal steps, or carefully sequence their actions to protect the organization or any people involved. Open communication has to wait sometimes.

This news is so wild that I have to believe they are working through a very difficult situation. If so, the Board of Directors' silence does not necessarily mean that they support Stallman. It might mean that they are not able to communicate yet.

My ask to the FSF:

  1. Remove Stallman as soon as you can.
  2. Explain how and why Stallman was reappointed.
  3. Commit to bringing in new leadership.

If the FSF can work through this quickly and do the right thing, there might be a turning point to rebuild the FSF into something new and better. The Free Software movement deserves quality leadership. Given that the FSF governs the licenses of many software projects, that is something to hope for. It's worth holding my judgment on the Board of Directors a bit longer.

March 19, 2021

The Sparcstation IPC that I had owned since around 1995 died. It had sat in a cupboard for 15 years, so it may have been dead for a long time already. When I tried to power it on, it did absolutely nothing.

I knew about early mini-ITX mods using the IPC/IPX case, like one from 2002, but nostalgia for being able to boot Linux/Sparc on this IPC kept me from doing my own mod. With the original hardware dead (probably just the PSU, actually), this changed everything. A bit of research showed other Sparcstation mini-ITX mods, some with the larger sparc4/5/10/20 cases, and one very interesting mod of an IPC. Michael used an industrial Commell LV-671 motherboard. Commell has gone through more than 30 variants of that board in the meantime, and has just released an updated Tiger Lake version: the LV-6712 carrying the Intel i7-1185G7E. It's a full-height mini-ITX board, but the IPC case should have enough z-axis space. The challenge: update the 25 MHz 32-bit Sparcstation IPC to a 1.2-4.4 GHz 64-bit Intel i7 workstation, go from 48 MB RAM to 32 GB (max 64 GB), from 10Mbps ethernet to 1Gbps (and even 2.5Gbps!), and from a SCSI-I HDD to an NVMe SSD!

First step: strip the contents of the IPC case. The original 200MB SCSI hard drive was replaced by a 3GB SCSI drive soon after I got the IPC. Now I dismantled the drive to show my kids what a hard drive looks like on the inside. I removed the remains from the lunchbox case.

The LV-6712 can be powered with 12V DC power, which means we can forego the need for a full ATX power supply. Michael built a 12V power supply into the original power supply housing, and I decided to follow his suggestion. Cutting away a bit of the outer casing of a TracoPower TXH 060-112 AC/DC, I was able to fit it in, and even keep the original passthrough power connector.

The original case fan was powered directly from the IPC PSU, but the new motherboard has a PWM case fan header. Fortunately the case fan is a standard 60x60x25mm one. I found a PWM-capable replacement, the Noctua NF-A6x25. I expect it to be less noisy too, with 30 years of engineering progress over the original Mitsubishi fan.

A lot of material needed to be cut and sanded away to put the mini-ITX motherboard as close to the case side wall as possible. Our case fan and the PSU housing, mounted in the case ceiling, come down very close to the two serial ports on the motherboard.

Then I positioned the IO shield against the back and tried a couple of configurations. I settled on the final location and cut the hole for the entire width of the IO shield. I didn't cut to the full height, because I needed the remaining plastic for structural integrity (since the metal plate originally supporting the back wall has gone!).

I cut the original external SCSI connector from the IPC motherboard to fill the hole it originally occupied.  Three hex motherboard spacers were placed in holes drilled in the plastic floor, the fourth sits in one of the original rubber spacers that supported the IPC motherboard. I used two more spacers to support the IO shield.

This supports the motherboard at a height that allows access to all IO ports. 

With the motherboard jumpered to AT mode, it boots when the power comes on. As an alternative, the "always power on after power restore" option in the BIOS also works. While it is technically possible to put 3.5 and/or 2.5 inch SATA drives in the ceiling bracket, I currently have enough space on the NVMe SSD. I installed the provided power breakout cable, but it sits unused in the case.

I bought a USB pin header to dual USB 2.0 adapter, but can't use it for two reasons: the key of the 9-pin connector is on pin 10, but the adapter expects it on pin 9. Worse, the pin header is at the edge of the motherboard, sitting against the side of the case, and the adapter extends about 1 cm in that direction. Plan B: a 4-pin USB pin header to single USB 2.0 port adapter, on a 20 cm cable. I plugged in a USB Bluetooth 5.0 adapter and left it inside the case. 

Currently Ubuntu 20.10 supports the integrated Xe graphics of this board, so that's what I will be running until an LTS distribution picks up Xe support. The IPC deserves enterprise workstation-grade stability!

Further modifications after some usage: 

I configured the "down" cTDP profile in the BIOS, which reduces the base frequency and TDP of the CPU (12W, versus 28W on the standard "nominal" profile; the third profile is 15W TDP). That should help with power draw and heat production, as well as noise levels.

The included CPU fan is rather loud for my ears, but unfortunately the included heatsink only has 50mm fan holes, a size not commonly available. Alternative heatsinks for the mobile FCBGA1449 socket are also hard to find. The original fan (50x50x10mm) keeps the package at around 45 degrees at 5700 rpm, with occasional jumps to 50 and 60 degrees while showing a full-screen 1440p YouTube video.

A Noctua 40x40x20mm fan maintains a comparable temperature at under 4000 rpm. Full load does chase the temperature up, with the Noctua not able to bring it down completely. Unfortunately, a 60x60x25mm fan (same model as the case fan) doesn't fit under the steel drive cage. Even without the drive cage, the case top requires a gentle push to close. That force can't be good for the heatsink nor the motherboard underneath it. On the other hand, it keeps the cpu cool at under 2000 rpm (idle) and it maintains a reasonable CPU temperature below 3000 rpm even with load. As soon as I can find a silent 50x50x20mm or 60x60x20mm PWM fan, I'll get it. In hindsight, mounting the motherboard as low as possible (and sanding away more material at the back for the I/O shield) would have been better. Two or three mm would make a difference. I could cut away a part of the drive cage, and maybe also thin away the underside of the PSU case as an alternative. Fortunately the PSU case is perforated, so the fan can suck in air even when the PSU sits directly on top of it.

I've even considered installing some heatpipes to transfer the heat of the heatsink to a radiator next to the board (there is a good 5 or 6 cm of lateral space) and put a fan on that radiator. The main reason I'm not going to do that (yet) is the risk of damaging a component that I can't replace...

I published the following diary on “ Used As a Simple C2 Channel“:

With the growing threat of ransomware attacks, there are other malicious activities that get less attention today but remain active. Think about crypto-miners. Yes, attackers continue to mine Monero on compromised systems. I spotted an interesting shell script that installs and runs a crypto-miner (SHA256:00e2ddca696426d9cad992662284d1f28b9ecd44ed7c1be39789417c1ea9a5f2). The script looks to be a classic one, but there are some interesting behaviors that I’d like to share… [Read more]

The post [SANS ISC] Used As a Simple C2 Channel appeared first on /dev/random.

March 18, 2021

I published the following diary on “Simple Python Keylogger“:

A keylogger is one of the core features implemented by many malware to exfiltrate interesting data and learn about the victim. Besides the fact that interesting keystrokes can reveal sensitive information (usernames, passwords, IP addresses, hostnames, …), just by having a look at the text typed on the keyboard, the attacker can profile his target and estimate if it’s a juicy one or not. 

To follow up on yesterday's diary: Microsoft Windows provides API calls to implement a keylogger; calls like GetKeyState() and GetAsyncKeyState() help to determine if a particular key is pressed. But can attackers implement a keylogger in other languages… [Read more]

The post [SANS ISC] Simple Python Keylogger appeared first on /dev/random.

March 17, 2021

I published the following diary on “Defenders, Know Your Operating System Like Attackers Do!“:

Not a technical diary today but more a reflection… When I’m teaching FOR610, I always remind students to “RTFM” or “Read the F… Manual”. I mean to not hesitate to have a look at the Microsoft documentation when they meet an API call for the first time or if they are not sure about the expected parameters.

Many attackers have a very deep knowledge of how targeted operating systems behave, what controls are in place, and which features could be (ab)used by malicious code. When you’re analyzing malware samples, it’s very important to quickly spot interesting blocks of code (by learning which interesting OS features they use). A classic example is the API call VirtualAllocEx(), which allocates a region of memory within the virtual address space of a specified process… [Read more]

The post [SANS ISC] Defenders, Know Your Operating System Like Attackers Do! appeared first on /dev/random.

March 16, 2021

I use my own domain name for email. Unfortunately, I received a few emails about my domain being used for phishing attacks. It's not the first time, but hopefully it will be the last!

I finally added SPF, DKIM and DMARC protection to my domains:

  • Sender Policy Framework (SPF) restricts which servers can send emails using my domain name. To enable it, all I had to do was add a DNS record to my domain. The new DNS record specifies a list of authorized hostnames. Servers that receive an email from me verify the hostname of the outgoing mail server against the approved list in my SPF DNS record.
  • Domain Keys Identified Mail (DKIM) uses public-key cryptography to make sure that my email isn't tampered with. My mail server keeps a private cryptographic key. When I send an email, my mail server uses the private key to embed a digital signature into my emails. In turn, your mail server/client validates the signature using the corresponding public key. The public key is made available through a DNS record associated with my domain name. (I use Google Workspace as my mail server, and it was easy to enable DKIM.)
  • Domain-based Message Authentication, Reporting and Conformance (DMARC) allows me to specify how emails that fail the SPF or DKIM test should be handled. I can set policies to reject or quarantine spoofed emails.
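For illustration, the three DNS records look roughly like this. These are generic placeholder values, not my actual records: the SPF value is the Google Workspace default, the DKIM selector and public key are stand-ins, and the DMARC policy is just one possible choice.

```
example.com.                   IN TXT "v=spf1 include:_spf.google.com ~all"
google._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."
_dmarc.example.com.            IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```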
A screenshot of my email headers showing that SPF, DKIM, and DMARC all pass
In Gmail, click 'Show Original' to see what email protection is in place.

Many data breaches and financial losses start with a phishing email. If you use your own domain name for email, take five minutes to check how well you're protected. You can use one of the many checkers. Here are some of the checkers I used:

Owning your own domain name can be a bit of work, but it's also super interesting. I enjoyed learning about SPF, DKIM and DMARC.
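To make the SPF step concrete, here is a toy sketch of the check a receiving server performs. This is illustrative only: a real evaluator (RFC 7208) also resolves include:, a: and mx: mechanisms through DNS and honors the qualifier on each mechanism, while this version only looks at literal ip4: ranges.

```python
# Toy SPF check: does the sender's IP match an ip4: mechanism in the record?
# Real SPF evaluation (RFC 7208) also needs DNS lookups for include:/a:/mx:
# and qualifier handling (+, -, ~, ?); this sketch skips all of that.
import ipaddress

def spf_allows(record: str, sender_ip: str) -> bool:
    for mechanism in record.split():
        if mechanism.startswith("ip4:"):
            network = ipaddress.ip_network(mechanism[4:], strict=False)
            if ipaddress.ip_address(sender_ip) in network:
                return True
    return False

record = "v=spf1 ip4:192.0.2.0/24 include:_spf.google.com ~all"
print(spf_allows(record, "192.0.2.10"))    # sender inside the authorized range: True
print(spf_allows(record, "198.51.100.7"))  # sender outside the range: False
```

The ~all at the end is what tells receivers to treat anything that didn't match an earlier mechanism as a softfail, which is the signal DMARC policies then act on.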

March 15, 2021

Last year, the American research lab OpenAI trained a neural network that can write text that looks almost human. The language model uses deep learning with no fewer than 175 billion parameters. GPT-3, as the model is called, was introduced in May 2020. It is the third in a series of language models named Generative Pre-trained Transformer.

So what kind of texts was GPT-3 trained on? A filtered version of the Common Crawl web archive, another dataset of web pages called WebText2, the contents of a great many books (the Books1 and Books2 datasets), and the contents of Wikipedia. In other words, pretty much everything you can find on the internet: about 450 gigabytes of input in total. Notably, this means GPT-3 was trained not only on human texts, but also on computer code such as CSS, JSX and Python. As a result, GPT-3 can generate texts ranging from poetry to prose, news reports and computer programs.

Last year, OpenAI gave a few hundred developers access to a beta version of the GPT-3 API, securing itself several months of media attention in the process. The feat that got GPT-3 into the news the most was its ability to make up entire news articles. You provide a title and subtitle, and the model writes a short article of about 200 words on that topic. At first glance these articles turn out to be surprisingly coherent, but they often also contain clearly wrong information or striking repetitions of sentences.

Last year I wrote an article about GPT-3 for PC-Active in my column Denkwerk: "Een computer die als een mens schrijft" (A computer that writes like a human). It can now also be read online. In it you will find plenty of examples of what goes wrong with GPT-3.

My conclusion:

GPT-3 generates text that, on a superficial reading, cannot be distinguished from text written by a human. At the same time, this language model has no understanding whatsoever of what it writes. That is a dangerous combination: the text may contain nonsense, described so convincingly that unsuspecting readers simply believe it. But really, that is not so different from what we already have: plenty of people write nonsense too. GPT-3 is simply a summary of the texts of millions of people.

In short, GPT-3 is a child of its time.

Contact Form 7: trouble right here in volcano city!

Given the major change in Contact Form 7's frontend JavaScript and the problems it causes when optimizing the JS or caching the page after the update, the question I get asked frequently is what alternatives there are to CF7.

So here is a very quick rundown of 3 such alternatives:

  1. Gravity Forms: premium-only, visual form builder, very flexible, big ecosystem (lots of 3rd party plugins & integrations)
  2. Formidable Forms: has a free Light version, drag & drop interface for building forms, very flexible (we currently use this ourselves), lots of integrations but a smaller ecosystem.
  3. HTML Forms: free plugin from the team that also develops “Koko Analytics” (which I now use on all my sites) and “Mailchimp for WordPress”, with a premium addon for extra features. Similar to Contact Form 7: no frills, very light on JS, so great for performance.

My advice: try HTML Forms if you have rather standard contact-form-like forms and you’re not looking for something fancy (which CF7 is not either); try Formidable if you need drag & drop form building or if you (will) need more flexibility/integrations.

March 14, 2021


I got a Raspberry Pi 4 a couple of months back and started to use it to run virtual machines.

This works great for GNU/Linux distributions but FreeBSD as a virtual machine didn’t work for me. When I tried to install FreeBSD or import a virtual machine image, FreeBSD wasn’t able to mount the root filesystem and ended with an “error 19”.

On the FreeBSD wiki, there are a few articles on how to use ARM64 FreeBSD with QEMU directly.

You'll find my journey of getting a FreeBSD virtual machine running below.

I use Manjaro on my Raspberry PI, but the same setup will work with other GNU/Linux distributions.

Import VM image

Download the VM image

FreeBSD cloud images are available for the aarch64 (ARM64) and x86 (AMD64, i386) architectures.

Download the latest VM image of FreeBSD you’d like to use.


To be able to boot the image we need a firmware image (BIOS); there are two options: EDK2 (UEFI) or u-boot. The QEMU source comes with UEFI firmware images, but for some reason Arch Linux doesn't include them in the standard QEMU package. The edk2-avmf AUR package provides the required firmware for virtual systems on ARM64.


Boot the virtual machine with UEFI

As a test, I booted the release candidate of the upcoming FreeBSD 13 release. This worked fine with a single CPU.

$ qemu-system-aarch64 -M virt -m 4096M -cpu host,pmu=off --enable-kvm \
 	-nographic -bios /usr/share/edk2/aarch64/QEMU_EFI.fd \
 	-hda  /home/staf/Downloads/freebsd/FreeBSD-13.0-RC2-arm64-aarch64.qcow2 \
        -boot order=c


When I tried to enable more than 1 CPU with -smp 2 or -smp cores=2,sockets=1, the system hung during startup…

qemu-system-aarch64 -M virt -m 4096M -cpu host,pmu=off --enable-kvm -smp cores=2,sockets=1 \
        -nographic -bios /usr/share/edk2/aarch64/QEMU_EFI.fd \
        -hda  /home/staf/Downloads/freebsd/FreeBSD-13.0-RC2-arm64-aarch64.qcow2 \
        -boot order=d

I want to use more than 1 CPU core for my FreeBSD virtual system to run FreeBSD jails.

U-boot to the rescue

The other firmware that we can use is U-boot. U-boot is a commonly used BIOS on ARM64, found on a lot of single-board computers…

I didn’t find a U-boot package for Manjaro/ArchLinux for QEMU.

Compile u-boot

Clone the git repo.

$ git clone
Cloning into 'u-boot'...
warning: redirecting to
remote: Enumerating objects: 767065, done.
remote: Counting objects: 100% (767065/767065), done.
remote: Compressing objects: 100% (117586/117586), done.
remote: Total 767065 (delta 639963), reused 766651 (delta 639562), pack-reused 0
Receiving objects: 100% (767065/767065), 150.47 MiB | 1.94 MiB/s, done.
Resolving deltas: 100% (639963/639963), done.
Updating files: 100% (17747/17747), done.

Go into the u-boot directory.

$ cd u-boot/
[staf@minerva u-boot]$ 

Configure u-boot for QEMU on ARM64.

$ make qemu_arm64_defconfig
# configuration written to .config
[staf@minerva u-boot]$ 


$ make
scripts/kconfig/conf  --syncconfig Kconfig
  UPD     include/config.h
  CFG     u-boot.cfg
  GEN     include/
  GEN     include/
  CC      examples/standalone/hello_world.o
  CC      examples/standalone/stubs.o
  LD      examples/standalone/libstubs.o
  LD      examples/standalone/hello_world
  OBJCOPY examples/standalone/hello_world.srec
  OBJCOPY examples/standalone/hello_world.bin
  LD      u-boot
  OBJCOPY u-boot.srec
  OBJCOPY u-boot-nodtb.bin
  RELOC   u-boot-nodtb.bin
  COPY    u-boot.bin
  SYM     u-boot.sym
  CFGCHK  u-boot.cfg

Copy the u-boot.bin to /usr/local.

Create /usr/local/u-boot.

$ sudo mkdir /usr/local/u-boot

Copy u-boot.bin to /usr/local/u-boot/.

$ sudo cp u-boot.bin /usr/local/u-boot/
$ ls -l /usr/local/u-boot/
total 732
-rw-r--r-- 1 root root 749072 Mar 14 14:43 u-boot.bin

Boot FreeBSD with U-boot

FreeBSD now boots fine with 2 CPUs.

qemu-system-aarch64 -M virt -m 4096M -cpu host,pmu=off --enable-kvm \
        -smp 2 -nographic -bios /usr/local/u-boot/u-boot.bin \
        -hda /home/staf/Downloads/freebsd/FreeBSD-13.0-RC2-arm64-aarch64.qcow2
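For convenience, the long invocation can live in a small launcher script. A sketch (the firmware and image paths are the ones from this post — adjust them to your setup); it only prints the command so you can review it, swap echo for exec to actually boot:

```shell
#!/bin/sh
# build the QEMU command line for the u-boot based FreeBSD VM
UBOOT=${UBOOT:-/usr/local/u-boot/u-boot.bin}
IMAGE=${IMAGE:-$HOME/Downloads/freebsd/FreeBSD-13.0-RC2-arm64-aarch64.qcow2}

CMD="qemu-system-aarch64 -M virt -m 4096M -cpu host,pmu=off --enable-kvm \
 -smp 2 -nographic -bios $UBOOT -hda $IMAGE"

# print it for review; replace 'echo' with 'exec' to actually start the VM
echo "$CMD"
```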

In an upcoming blog post, I’ll go over the network setup and how to install FreeBSD from a CD-ROM image.

Have fun!


March 09, 2021

So many things going on these days; it’s already shaping up to be a pretty crazy year, in the good sense. Pretty much as I predicted at the start of the year, though it must be said that 2020 didn’t exactly raise the bar much. Pretty easy to clear that hurdle.

But that’s for another day. For now, here’s some interesting things I’ve been reading recently, in no particular order / theme:

Modules, monoliths, and microservices

Pretty common sense way of looking at this whole discussion. I’ve seen both ends of the spectrum and as always the right answer is: it depends. Inform yourself and choose wisely.

There certainly isn’t a solution that works for everyone, in every situation.

You need to be able to run your system

So much truth in this one. It requires a bit of investment, but it’s one of those things that act as a force multiplier: it speeds up developers, giving you faster development, more head-space to build a solid product and more time to focus on what actually matters.

Just consider the inverse: if you make their day jobs as cumbersome and frustrating as possible, how do you expect your development team to perform?

Any project I’ve helped roll this way of working out has benefited massively, so I recommend it each and every time. Talk to me if you need help with this.

Breaking down and fixing Kubernetes

As an ops person, I’m a big fan of this kind of fire drill, where you deliberately damage a system and then try to fix it. Doing this as an exercise, when things aren’t on fire, gives you so much more confidence when things do break down for real.

Comments | More on | @rubenv on Twitter

March 07, 2021

OpenVAS dashboard thumbnail

In my previous blog post, I described how to install OpenVAS; in this blog post we will configure and execute a security scan with OpenVAS.

OpenVAS documentation is available on the website of Greenbone, the OpenVAS developer:

Log on to the Greenbone Manager assistant.

Security info

Security information is an important part of a security scanner. It describes how we can detect security issues on our network/systems.


It’s always a good idea to update your security data regularly. Execute the gvm-feed-update script; this will use greenbone-feed-sync as the _gvm user to update the GVMD_DATA, SCAP and CERT data.

$ sudo gvm-feed-update
[sudo] password for staf: 
[>] Updating OpenVAS feeds
[*] Updating: NVT
Greenbone community feed server -
This service is hosted by Greenbone Networks -
receiving incremental file list
             13 100%   12.70kB/s    0:00:00 (xfr#1, to-chk=0/1)

sent 43 bytes  received 115 bytes  105.33 bytes/sec
total size is 13  speedup is 0.08
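Under the hood, gvm-feed-update drives the individual Greenbone sync tools. A hedged sketch of running them by hand (command names are from the GVM 20.08-era tooling used in this post; the loop skips any tool that isn’t installed):

```shell
# run the individual feed syncs as the _gvm user, skipping missing tools
for sync in "greenbone-nvt-sync" \
            "greenbone-feed-sync --type GVMD_DATA" \
            "greenbone-feed-sync --type SCAP" \
            "greenbone-feed-sync --type CERT"; do
    set -- $sync                # split "command args" into words
    if command -v "$1" >/dev/null 2>&1; then
        sudo -u _gvm $sync
    else
        echo "not installed: $1"
    fi
done
```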


OpenVAS SecInfo

You can review the security data at the [SecInfo] tab in Greenbone Manager.

Documentation is available at the greenbone website:

OpenVAS uses the following security information:

  • NVT (Network Vulnerability Tests)
    Tests that detect vulnerabilities on the targets.
  • CVE (Common Vulnerabilities and Exposures)
    Most people know this term; it provides a standard way of publishing security vulnerability information.
  • CPE (Common Platform Enumeration)
    CPE is less known; it is a standard way of describing a device, system, or piece of software for security information, e.g. OpenSSH running on GNU/Linux.
  • OVAL (Open Vulnerability Assessment Language)
    This is a list that vendors publish describing their software and its vulnerabilities. Security scanners like OpenVAS can use this data to detect outdated software.

    Other open-source security tools that can use this information are OpenSCAP and ovaldi.

    This is a standard developed by NIST as part of SCAP (the Security Content Automation Protocol).


    Most Linux distributions publish OVAL data.

    One distribution that doesn’t publish OVAL data is CentOS. Red Hat publishes OVAL data for RHEL but not for CentOS. You can adapt the Red Hat OVAL data for CentOS (add the CentOS CPE to the Red Hat OVAL), but this is not officially certified. Also, with the move to CentOS Stream, this will not be possible anymore…

  • CERT-Bund Advisories
    CERT-Bund Advisories are published by the CERT-Bund.

  • DFN-CERT Advisories
    DFN-CERT advisories are published by the DFN-CERT.

Scan Configs

OpenVAS Scan Configs

Under the tab [Configuration] [Scan Configs] you can configure the scan config that you can use to execute a scan on the target.

The default scan config can’t be updated. To clone a scan config you can press the “clone sheep” button. This allows you to update it, or to review the SCAP data that is used.

OpenVAS Scan edit

If you want to scan only GNU/Linux systems, for example, you can create a custom profile with only the Linux distributions that you use; this will speed up the scan. Keep in mind that OpenVAS needs to have access to the system to detect outdated software with the “Local Security Checks” of an authenticated scan.

Authentication can be configured under [Configuration] [Credentials]. We won’t cover authenticated scans in this blog post, but the setup should be self-explanatory.

As always, be careful when creating backdoors on your network, whether to manage it or, as in this case, to review its security. It’s important to protect the system that hosts the keys/passwords.


OpenVAS New target

To create a new target go to [Configuration] [Targets] and click on the [New Target] icon. This will open the “New Target” window.


You can configure your target in this window: fill in the IP address, etc. If you know that your target has a firewall running, you can set “Alive Test” to “Consider Alive”.

When you have configured credentials for your target, you can select them in the “Credentials for authenticated checks” section.

Press [Save] to save the Target.


Configure the scan

To configure the scan go to the [Scans] tab and select [Tasks].

OpenVAS tasks 001

Click on the [New Task] icon; this will give two options, “New Task” and “New Container Task”.

OpenVAS tasks 002

A “Container Task” is used to import reports from other Greenbone Security Managers. A normal task will execute the scan on the Target.

We’ll set up a regular scan, so select “New Task”.

This will open the “New Task” window.

OpenVAS has two types of built-in scanners:

OpenVAS tasks 003

  • OpenVAS Default Scanner
    This will execute a security scan on the Target.

  • CVE Scanner
    This scanner is used to make a forecast of possible security risks based on information about the Target gathered by previous scans (like the OpenVAS Scanner). It takes the CPE data (information about installed software, etc.) and forecasts possible security risks using the CVE information found at [SecInfo].

With “Alterable Task” we can specify whether the task can be updated later. If we allow the task to be alterable, the reports of its scans become more difficult to compare with previous scans.

At “Scan Config” we select the desired scan configuration; select [Full and fast].

Click on the [Start Scan] icon to start the scan.


Depending on the scan configuration and the number of hosts in your scan task configuration this will take some time.

OpenVAS Dashboard


To view the report of a scan you can click on the Reports column next to the scan. At the [Scans] tab you can select Reports, Results, or Vulnerabilities.

At the Dashboards tab you get a nice overview of the Scan and the results. It’s also possible to create custom dashboards.

Have fun!

March 06, 2021

I published the following diary on “Spotting the Red Team on VirusTotal!“:

Many security researchers like to use the VirusTotal platform. The provided services are amazing: You can immediately have a clear overview of the dangerousness level of a file but… VirusTotal remains a cloud service. It means that, once you uploaded a file to scan it, you have to consider it as “lost” and available to a lot of (good or bad) people! In the SANS FOR610 training (“Reverse Engineering Malware”), we insist on the fact that you should avoid uploading a file to VT!  The best practice is to compute the file hash then search for it to see if someone else already uploaded the same sample. If you’re the first to upload a file, its creator can be notified about the upload and learn that he has been detected. Don’t be fooled: attackers have also access to VirusTotal and monitor activity around their malware! Note that I mention VirusTotal because it is very popular but is not the only service providing repositories of malicious files, they are plenty of alternative services to scan and store malicious files… [Read more]

The post [SANS ISC] Spotting the Red Team on VirusTotal! appeared first on /dev/random.

March 05, 2021

I feel like I was in a street fight and got punched in the stomach.

It started earlier this week with a stitch in my side when I was doing the dishes. By the time I finished, the stitch had turned into stomach cramps. I brushed it off as my stomach being upset due to jet lag. After all, we had just come back from Europe two days prior.

As the pain increased, I wondered if I had pulled a muscle. Maybe I had been sitting with poor posture in my 8 hours of Zoom meetings that day? I tried to stretch it out so I could carry on with my evening. That didn't work.

An hour after that initial stitch in my side, I couldn't explain the pain away any longer. No matter how I sat or laid down, it felt like someone stabbed me in the back with a knife. The next hour I tried to find a less painful position, without luck. I twisted and turned on the bed while my body shivered uncontrollably.

I was scared. This was a pain I didn't recognize and more intense than any pain I had ever experienced. By 9pm I asked Vanessa to drive me to the Emergency Room. Once at the ER, they had me on morphine within 45 minutes.

Selfie of Dries on the hospital bed
Emergency room selfie.

After 5 and a half hours at the ER, the doctors concluded I had a kidney stone. An X-ray revealed that it was 4mm in size, and I was told it should pass on its own. They sent me home around 3am with some oxycodone (opioids) to help manage the pain throughout the night.

I spent the rest of this week in bed on drugs. Today, a few days later, my kidney feels 'bruised'. The 'sharp pain' is replaced by an 'aching pain' — like I got punched in the stomach rather than stabbed with a knife. While not a great feeling, it's a much better feeling.

Unfortunately, I don't believe the stone has passed. I'm afraid that the pain will come roaring back.

This is my first kidney stone, and hopefully my last. Some close friends recommended I name my stone "Cobblestone", in honor of my Belgian roots. A good laugh helps the healing.

Each year, millions of people suffer from kidney stones and I'm sure mine isn't the worst. I'm not looking for pity. However, I hope my write up will help someone else; either by helping them recognize symptoms, or by providing some comfort while searching the internet from the ER.

I published the following diary on “Spam Farm Spotted in the Wild:

If there is a place where you can always find juicy information, it’s your spam folder! Yes, I like spam and I don’t delete my spam before having a look at it for hunting purposes. Besides emails flagged as spam, NDR or “Non-Delivery Receipt” messages also deserve some attention. One of our readers (thanks to him!) reported yesterday how he found a “spam farm” based on bounced emails. By default, SMTP is a completely open protocol. Everybody can send an email pretending to be Elon Musk or Joe Biden! That’s why security controls like SPF or DKIM can be implemented to prevent spoofed emails from being sent from anywhere. If these controls are not implemented, you may be the victim of spam campaigns that abuse your domain name or identity. The “good” point (if we can say this) is that all NDR messages will bounce to the official mail server that you manage. That’s what happened with our reader, he saw many bounced messages for unknown email addresses… [Read more]

The post [ISC SANS] Spam Farm Spotted in the Wild appeared first on /dev/random.

March 04, 2021

I published the following diary on “From VBS, PowerShell, C Sharp, Process Hollowing to RAT“:

VBS files are interesting to deliver malicious content to a victim’s computer because they look like simple text files. I found an interesting sample that behaves like a dropper. But it looks also like Russian dolls seeing all the techniques used to drop a RAT at the end. The file hash is 8697dc74d7c07583f24488926fc6e117975f8a9f014972073d19a5e62d248ead and has a VT score of 12/59. It was delivered by email under the name “Procurement – Attached RFQ 202102.vbs”. If you filter attachments based on the MIME type, this file won’t be detected as suspicious… [Read more]

The post [SANS ISC] From VBS, PowerShell, C Sharp, Process Hollowing to RAT appeared first on /dev/random.

Having (had) lazy eyes myself I cannot help but sympathize with this band of Aussie youngsters. And “Where is my Brain??” is some crazy psychedelic motorik-beat (with a fill) driven piece of genius.

OK, that might be somewhat of an exaggeration, but look & listen carefully, headphones on and the volume turned to 11!

YouTube Video
Watch this video on YouTube.

March 02, 2021

If you follow me, you probably already know that I’m a big fan of OSSEC. I would like to thank 44Con for accepting my next training! If you are interested in learning cool stuff about OSSEC and how to integrate it with third-party tools/sources, this one is for you!

OSSEC is sometimes described as a low-cost log management solution but it has many interesting features that, when combined with external sources of information, may help in hunting for suspicious activity occurring on your servers and end-points. Its agent-based architecture allows the automation of many tasks performed during incident investigations.

During this training, you will learn the basics of OSSEC and its components, how to deploy it and quickly get results. The second part will focus on the deployment of specific rules to catch suspicious activities. From an input point of view, we will see how easy it is to learn new log formats to increase the detection scope and, from an output point of view, how we can generate alerts by interconnecting OSSEC with other tools like MISP, TheHive, or an ELK Stack / Splunk / … and add more contextual content with OSINT feeds. Finally, we will use the “Active-Response” feature to deploy useful scripts and improve your response capabilities.

The training is scheduled for September 14-15 2021, fully online. No need to travel, to book a hotel room… Just a browser and an SSH client are required to attend the training!

Interested? Book your seat here.

The post Next OSSEC Training Scheduled @ 44Con appeared first on /dev/random.

February 28, 2021


OpenVAS is an open-source security scanner. It started as a fork of Nessus, which went from an open-source project to a closed-source scanner.

I always prefer open-source software; for security tools, even more so… It’s nice to be able to see/audit where the security data comes from, instead of the “magic” used by closed-source software.

To scan for missing patches on your systems there are faster/better tools available that can be integrated into your build pipeline more easily. But OpenVAS is still a very nice network security scanner. Relying on one security tool is also not a “best security practice”.

Kali GNU/Linux has become the default Linux distribution for security auditing and pen testing, so it’s nice to have OpenVAS installed on your Kali GNU/Linux setup. If you just want to have OpenVAS available, there is also a (virtual) appliance available from the OpenVAS developers (Greenbone).

You’ll find my journey of installing OpenVAS on Kali GNU/Linux below.


Update packages

It’s always a good idea to start with an update of your system.

Update the repository database with apt update.

staf@kali:~$ sudo apt update
Hit:1 kali-rolling InRelease
Reading package lists... Done
Building dependency tree       
Reading state information... Done
All packages are up to date.

Run apt dist-upgrade to upgrade your packages.

staf@kali:~$ sudo apt dist-upgrade
[sudo] password for staf: 
Sorry, try again.
[sudo] password for staf: 
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Make sure that haveged is running

During the setup, OpenVAS will create an encryption key. To create this key, it’s important to have enough random data available; I had an issue creating this key in the past (back in 2015). For this reason, I always verify that the haveged daemon is running on my system when I install OpenVAS.

staf@kali:~$ ps aux | grep -i have
root         547  0.3  0.1   8088  4852 ?        Ss   10:00   0:01 /usr/sbin/haveged --Foreground --verbose=1 -w 1024
staf        4823  0.0  0.0   6204   836 pts/1    S+   10:10   0:00 grep -i have

Install Openvas

Install OpenVAS with apt install openvas.

staf@kali:~$ sudo apt install openvas
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  doc-base dvisvgm fonts-lmodern fonts-texgyre gnutls-bin
  greenbone-security-assistant greenbone-security-assistant-common
  texlive-plain-generic tipa tk tk8.6 xdg-utils
0 upgraded, 64 newly installed, 0 to remove and 0 not upgraded.
Need to get 141 MB of archives.
After this operation, 451 MB of additional disk space will be used.
Do you want to continue? [Y/n] 


OpenVAS comes with its own redis service on Kali GNU/Linux. This redis service is configured to work correctly with OpenVAS. You can check its status with:

systemctl status redis-server@openvas.service

Run gvm-setup

The openvas-setup script has been renamed to gvm-setup, for branding reasons: GVM stands for Greenbone Vulnerability Manager. As long as the software remains open source, I don’t care.

gvm-setup will set up the PostgreSQL database, create the admin user and download/import all the SCAP data.

└─# gvm-setup 
Creating openvas-scanner's certificate files

[>] Creating database
sent 45,218 bytes  received 323,087 bytes  245,536.67 bytes/sec
total size is 73,604,011  speedup is 199.85
[*] Checking Default scanner
OpenVAS  /var/run/ospd/ospd.sock  0  OpenVAS Default
[>] Checking for admin user
[*] Creating admin user
User created with password '*****'.


The gvm-setup script will display the password for the admin user at the end. If you forgot to write it down, you can reset the admin password with the gvmd command as the _gvm user. Unfortunately, you need to pass the password as an argument, so it’s recommended to use a shell without a history, or to clear the history (or both) after the password update.

# su - _gvm -s /bin/sh -c "gvmd --user=admin --new-password mypasswd; history -c"
# history -c


You can verify your installation with gvm-check-setup.

$ sudo gvm-check-setup
[sudo] password for staf: 
We'll all be murdered in our beds!
[sudo] password for staf: 
gvm-check-setup 20.8.0
  Test completeness and readiness of GVM-20.8.0
Step 1: Checking OpenVAS (Scanner)... 
        OK: OpenVAS Scanner is present in version 20.8.1.
        OK: Server CA Certificate is present as /var/lib/gvm/CA/servercert.pem.
Checking permissions of /var/lib/openvas/gnupg/*
        OK: _gvm owns all files in /var/lib/openvas/gnupg
        OK: redis-server is present.
        OK: scanner (db_address setting) is configured properly using the redis-server socket: /var/run/redis-openvas/redis-server.sock
        OK: redis-server is running and listening on socket: /var/run/redis-openvas/redis-server.sock.
        OK: redis-server configuration is OK and redis-server is running.
        OK: _gvm owns all files in /var/lib/openvas/plugins
        OK: NVT collection in /var/lib/openvas/plugins contains 65370 NVTs.
Checking that the obsolete redis database has been removed
        OK: No old Redis DB
        OK: ospd-OpenVAS is present in version 20.8.1.
Step 2: Checking GVMD Manager ... 
        OK: GVM Manager (gvmd) is present in version 20.08.1.
Step 3: Checking Certificates ... 
        OK: GVM client certificate is valid and present as /var/lib/gvm/CA/clientcert.pem.
        OK: Your GVM certificate infrastructure passed validation.
Step 4: Checking data ... 
        OK: SCAP data found in /var/lib/gvm/scap-data.
        OK: CERT data found in /var/lib/gvm/cert-data.
Step 5: Checking Postgresql DB and user ... 
        OK: Postgresql version and default port are OK.
 gvmd      | _gvm     | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
        OK: At least one user exists.
Step 6: Checking Greenbone Security Assistant (GSA) ... 
Oops, secure memory pool already initialized
        OK: Greenbone Security Assistant is present in version 20.08.1~git.
Step 7: Checking if GVM services are up and running ... 
        OK: ospd-openvas service is active.
        OK: gvmd service is active.
        OK: greenbone-security-assistant service is active.
Step 8: Checking few other requirements...
        OK: nmap is present in version 20.08.1~git.
        OK: ssh-keygen found, LSC credential generation for GNU/Linux targets is likely to work.
        WARNING: Could not find makensis binary, LSC credential package generation for Microsoft Windows targets will not work.
        SUGGEST: Install nsis.
        OK: xsltproc found.
        WARNING: Your password policy is empty.
        SUGGEST: Edit the /etc/gvm/pwpolicy.conf file to set a password policy.

It seems like your GVM-20.8.0 installation is OK.

Keep your SCAP data up to date

It’s important for a security scanner to keep its security data up to date. A security scanner can only know which software packages have vulnerabilities, or how to check for network exploits, when it gets the security data from somewhere. For this reason, vendors publish security data, with OVAL - Open Vulnerability and Assessment Language - for example. This way security scanners can use this data to check systems/networks for security issues.

To sync the security feeds on OpenVAS you can use the gvm-feed-update command, this will fetch the security data from Greenbone.

$ sudo gvm-feed-update
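To keep the feeds fresh without thinking about it, the sync can be scheduled. A minimal cron fragment (the file path and schedule are my own choice, not from this post):

```shell
# /etc/cron.d/gvm-feed-update — weekly feed sync, Sunday 03:00
0 3 * * 0 root /usr/bin/gvm-feed-update >> /var/log/gvm-feed-update.log 2>&1
```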

Start the openvas services

There is a gvm-start script; this will start the required services and open a web browser at the OpenVAS login URL. This script needs to be executed as root.

For this reason, I just enable/start the required systemd services.

$ sudo systemctl start gvmd ospd-openvas
$ sudo systemctl enable gvmd ospd-openvas
Created symlink /etc/systemd/system/ → /lib/systemd/system/gvmd.service.
Created symlink /etc/systemd/system/ → /lib/systemd/system/ospd-openvas.service.
$ sudo systemctl enable greenbone-security-assistant

Created symlink /etc/systemd/system/gsad.service → /lib/systemd/system/greenbone-security-assistant.service.
Created symlink /etc/systemd/system/ → /lib/systemd/system/greenbone-security-assistant.service.

First login

gsa login

If you rebooted your system or just started the services, you might need a few minutes to let the services startup.

Have fun!

February 26, 2021

In memory of Gilberte De Windt, who passed away in February 2021

At the end of February 2020, I decided on a whim to call a number found in the phone book: that of the sculptor Gilberte De Windt.

My wife and I had met her at her exhibitions. We had fallen in love with her statues as much as with her personality. This elderly lady, frail in body but incredibly agile in mind, had charmed us with the finesse of her art. We had hit it off and talked at length.

On the phone, straight away, I told her that we wished to acquire one of her works. With incredible kindness, she invited us to come visit her studio.

We spent a fascinating afternoon in the company of her husband, Guy Berbé, a renowned painter. While my wife discussed painting with Guy in his incredible studio, I talked inspiration, meditation and creation with Gilberte. By sheer coincidence, we were both reading the same book by Steven Laureys, « La méditation, c’est bon pour le cerveau ». Curious, I tried to draw on Gilberte’s techniques to learn to sculpt words the way she sculpts matter.

The rapport between our two couples was immediate and we agreed to see each other regularly. My wife and I were hesitating between two sculptures and, to be honest, the budget made us shudder a little. It was a purely irrational crush, an economic heresy.

Two weeks later, the lockdown began. The children were quickly pulled out of school and our priorities were turned upside down.

Yet that meeting haunted me. I dreamed about it. I wondered how Gilberte and Guy were doing. I realized that visiting them was no longer imaginable in those times of lockdown. It pained me, because we had promised to come back. I also became aware that, while Gilberte’s mind was brilliant, her body was not immortal. A premonition haunted me.

It was with shock that I discovered, almost a year to the day after our shared afternoon, a message announcing her death. A year that, like many, I didn’t see pass. A year that flew away, taking Gilberte with it. I look tenderly at the photo of her posing next to our son’s favorite statue. My thoughts go to Guy, her husband. I dare not admit it, but I am sad. Who am I to claim sadness, I who only met them a few times?

While her death was natural, in the order of things, I cannot help thinking of this lady who, as she told it herself, led several very different lives. She only took up sculpture after retiring from teaching! Through her statues, she will forever pass on a movement, a finesse, an energy to the generations to come.

Selfishly, I curse this pandemic for preventing me from spending more time with Gilberte, from getting to know her better. I am happy about that luminous afternoon in her house, her studio. It is an imperishable memory. I would so much have liked to meet her sooner.

I regret not having been able to buy one of her statues. Secretly, I dreamed of finding a marvelous setting for it at home, of inviting Gilberte over to show her, to return her hospitality and let her discover my writing studio adorned with her work. To explain to her that she had taught me that a manuscript is like one of her clay sculptures: a raw material that must then go through a whole process, which she described to us in detail, before becoming the bronze statue that is the finished book.

My writing studio does not exist yet and I have no statue by Gilberte. All I have left is her memory.

Deep down, I have the rare fortune of having met her and of keeping with me the breath of inspiration she gave me. When I feel I am getting too old to be creative, when I realize that the talented young artists of the moment are younger than I am, I often think back to her experience, to the admiration I felt when she confided in me how important it was for her to keep learning every day, when I understood the energy she put into a creation.

Maybe that’s why I so wanted a statue by Gilberte near my typewriter. Because her characteristic slender figures remind me of the looks we exchanged in her studio, because they anchor me in the desire for material creation that she had sublimated and that too often escapes me. Because in a single afternoon at her home, she had a notable influence on my vision of creation.

Thank you, Gilberte, and good luck with the next of your many lives, the ones that appear each time someone’s gaze falls on one of your many works.

Farewell, artist!

I am @ploum, engineer and writer. Subscribe by email or RSS so you don’t miss a post (max 2 per week). I am convinced that Printeurs, my latest science-fiction novel, will captivate you. Ordering my books is the best way to support me and to help me spread my ideas!


This text is published under the CC-By BE license.

I've now been "living at work" working from home for almost one year.

Before the pandemic, I often spent three hours a day commuting. Now, I'm using these three hours to spend more time with family, become more successful at my job, and work out more. For those reasons, I prefer not to return to an office.

Many aspects of work function much better when people are face-to-face. In addition, I miss the in-person interactions with my colleagues and the camaraderie that comes from working together in person. For those reasons, I can't wait to go back to the office.

Given that I really like both, my personal preference is for work to be "hybrid". Do individual work from home, but go into an office for collaborative work.

Not everyone experiences the same advantages and disadvantages. My personal preference isn't necessarily best for everyone. As an employer, I've found that the pandemic has helped me better understand how different people's life routines can be. For some, working from home has a negative impact on motivation, productivity and mental health. For others, it has been very positive and more productive.

I wouldn't be surprised if 1/3 would prefer not to return to an office, 1/3 would like to go back to the old normal, and the remaining 1/3 would like a hybrid approach where they can work from home a few days a week.

In the coming months, employers will be revisiting their "Work From Home" policies. In turn, employees will need to decide if they can align and readjust to their employer's updated policy. There are a lot of complexities to think through, both for employers and employees alike.

In the end, people will be more thoughtful about the workplace arrangement that best fits their life. On one hand, that is healthy. On the other hand, many people don't have the privilege to choose. The privileged will likely get more privileged (myself included).

I hope that we approach this workplace transformation with an open mind, empathy, and equity. It's important to consider how both corporate policies and individual choices impact people.

I discussed the Composable Commerce trend with Kelly Goetsch, the Chief Product Officer of commercetools. Composable architectures allow you to build the best possible commerce solution with the best possible shopping experience.

Acquia was named a Leader in The Forrester Wave for Agile Content Management Systems, Q1 2021.

Acquia is shown as a Leader together with Adobe and Optimizely.

This research replaces Forrester's Wave on Web Content Management Systems. The focus is now on "agile content management" instead of "web content management". This change makes sense given the way people consume content today. Because consumers shift between channels when researching a brand or product, organizations need a back end that can support different end points (e.g. web, mobile, kiosks, voice assistants, etc).

The analysts note: "The [Acquia] platform shined in developer and practitioner tooling, with superior capabilities in front-end components and backend extensibility of the platform."

February 25, 2021

Due to a recent major change in Contact Form 7’s frontend JavaScript, Autoptimize users might have to add wp-includes/js/dist to the comma-separated JS optimization exclusion list (or in some cases even wp-includes/js).

It is nice that CF7 gets rid of the jQuery dependency, but I’m not sure replacing it with a significant amount of extra WordPress blocks JS files was such a good idea.

Update: additionally, the change also introduces nonces (random password-like strings as hidden elements in the form), which can spell serious trouble when using page caching plugins.

February 19, 2021

I published the following diary on “Dynamic Data Exchange (DDE) is Back in the Wild?”:

DDE or “Dynamic Data Exchange” is a Microsoft technology for interprocess communication used in early versions of Windows and OS/2. DDE allows programs to manipulate objects provided by other programs, and respond to user actions affecting those objects. For a while, DDE was partially replaced by Object Linking and Embedding (OLE) but it’s still available in the latest versions of the Microsoft operating system for backward compatibility reasons. If fashion is known to be in a state of perpetual renewal, we could say the same about the cybersecurity landscape. Yesterday, I spotted a malicious Word document that abused this DDE technology… [Read more]

The post [SANS ISC] Dynamic Data Exchange (DDE) is Back in the Wild? appeared first on /dev/random.

myMail is a popular (10M+ downloads!) alternative email client for mobile devices. Available for iOS and Android, it is a powerful email client compatible with most of the mail providers (POP3/IMAP, Gmail, Yahoo!, Outlook, and even ActiveSync). Recently, I was involved in an incident that was related to a malicious usage of myMail. I had a closer look at the application, how it works and found something weird…

[Note: I tested the Android version of this app, iOS was not tested and I don’t know if it behaves in the same way]

I installed the application on a lab Android device. This device is configured to use a BurpSuite instance to log and inspect all HTTP traffic. As usual in the Android ecosystem, many permissions are requested by the application; besides the classic ones (access to contacts, Internet, storage, …), these look more suspicious:

  • android.permission.REQUEST_INSTALL_PACKAGES: allows an application to request installing packages.
  • android.permission.WRITE_CONTACTS: allows an application to change contacts.

Once you have installed the application on your device, it’s time for the basic configuration. This is a pretty straightforward process: you enter your credentials (email address + password) and the application takes care of everything for you (if your email platform supports autodiscovery services).

Now that you have entered your email and password, you can consider them lost, because myMail sends them to the backend infrastructure:

GET /cgi-bin/auth?Password=XXXXXXXX&mobile=1&mob_json=1&simple=1&useragent=android& HTTP/1.1
User-Agent: mobmail android
Host: aj-https[.]my[.]com
Connection: close
Accept-Encoding: gzip, deflate

Indeed, the application does not talk directly to your mail server but uses the cloud infrastructure to take care of all email-related communications. The mobile app is just a client for the API and all traffic happens over HTTPS. This technique is used to provide the “push services”:

Once configured, the app will poll the myMail cloud for messages:

GET /api/v1/messages/status?prefetch=1&sort=%7B%22type%22%3A%22id%22%2C%20%22order%22%3A%22desc%22%7D&snippet_limit=183&last_modified=1613681670&folder=0&offset=0&limit=20& HTTP/1.1
Accept-Encoding: gzip, deflate
User-Agent: mobmail android
Host: aj-https[.]my[.]com
Connection: close

Because all the traffic is passing through the cloud, your attached files are also stored on their infrastructure. Here is an example of file download:

GET /cgi-bin/readmsg/cacert.cer?rid=354637854942502553472181455302888008800&&id=16130393950000000005;0;0&notype=1&notype=1&mp=android&mmp=mail&DeviceID=9ff695be583a7eb72048dbe5e321189a&client=mobile&playservices=11400000&os=Android&os_version=4.2.2& HTTP/1.1
Accept-Encoding: gzip, deflate
User-Agent: mobmail android
Host: af[.]attachmy[.]com
Connection: close

But the worst was yet to come…

Once I had completed my investigation, I removed my test account from the app. Before that, I performed a proper logout:

GET /cgi-bin/logout?mp=android&mmp=mail&DeviceID=9ff695be583a7eb72048dbe5e321189a&client=mobile&playservices=11400000&os=Android&os_version=4.2.2& HTTP/1.1
Accept-Encoding: gzip, deflate
User-Agent: mobmail android
Host: aj-https[.]my[.]com
Connection: close

While checking my mail server logs later, I saw this:

Feb 18 21:27:48 marge dovecot: imap-login: Login: user=, method=PLAIN, rip=, lip=, mpid=5140, TLS, session=<0JtFK6K7+7C5HrCo>
Feb 18 21:27:49 marge dovecot: imap(johndoe): Logged out in=275 out=1145

The cloud infrastructure is still connecting to the mail account at regular intervals! Connections are coming from two /24 networks:


The next step was to kill the app, still running on the phone (but now unconfigured). Same effect: polling was (and still is) ongoing.

Once authenticated, the cloud infrastructure keeps a session open with the mail server to fetch new emails. In my lab, it was a simple IMAP server:

root@marge:/var/tmp# netstat -anp | egrep "185\.30\.[76]*"
tcp 0 0 ESTABLISHED 27234/dovecot/imap-
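To hunt for this behavior in your own logs, the check boils down to extracting the remote IP (rip=) from dovecot imap-login lines and matching it against the offending ranges. A minimal sketch, with two placeholder /24 networks standing in for the real ranges (which are redacted above):

```python
import ipaddress
import re

# Placeholder ranges: substitute the two /24 networks you actually observed.
SUSPECT_NETS = [ipaddress.ip_network("203.0.113.0/24"),
                ipaddress.ip_network("198.51.100.0/24")]

# Matches the rip=<ip> field in dovecot imap-login log lines.
RIP_RE = re.compile(r"rip=(\d{1,3}(?:\.\d{1,3}){3})")

def from_suspect_net(log_line: str) -> bool:
    """True if an imap-login line shows a remote IP inside a suspect /24."""
    match = RIP_RE.search(log_line)
    if not match:
        return False
    remote_ip = ipaddress.ip_address(match.group(1))
    return any(remote_ip in net for net in SUSPECT_NETS)

line = "imap-login: Login: user=<johndoe>, method=PLAIN, rip=203.0.113.42, lip=10.0.0.5"
print(from_suspect_net(line))  # True for the placeholder range
```

Run over your mail logs, any hit after the account was removed from the phone confirms the cloud is still polling on your behalf.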

Let’s switch caps now and try to think like an attacker. myMail can also be very interesting because you can test credentials belonging to your target without any connection coming directly from your own network or devices! The application can be used as a “proxy”. In the case I was involved in, one of the users detected as a myMail user had never installed the application! This is perfect for monitoring or stealing data from your victim’s mailbox without leaving artifacts.

Conclusion: from a user-experience point of view, myMail is a nice application for handling multiple mailboxes, especially when you don’t have a built-in push notification service. But if you use it in a corporate environment, you should keep in mind that they have ALL your data in their hands! By the way, the vendor is an international subsidiary of the company that operates the well-known mail service of the same name. The application code is also full of Java classes like “ru/mail/mailapp/” with references to URLs:

  • hxxps://e[.]mail[.]ru/fines/documents?openStatus=1
  • hxxps://clicker[.]bk[.]ru/api/v1/clickerproxy/mobile
  • hxxps://aj-https[.]mail[.]ru/api/v1/gibdd/gmap
  • hxxps://help[.]mail[.]ru/mail/account/login/qr#noaccount
  • hxxps://account[.]mail[.]ru/login?opener=androidapp
  • hxxps://e[.]mail[.]ru/payment/center
  • hxxps://m[.]calendar[.]
  • hxxps://touch[.]calendar[.]
  • hxxps://touch[.]calendar[.]mail[.]ru/create
  • hxxps://todo[.]mail[.]ru/?source=webview
  • hxxps://
  • hxxps://iframe[.]imgsmail[.]ru/pkgs/amp.viewer/2.3.1/iframe.html
  • hxxps://amproxy[.]imgsmail[.]ru/
  • hxxps://help[.]mail[.]ru/mailhelp/bonus/offline/ua?utm_source=mail_app&utm_medium=android
  • hxxps://help[.]mail[.]ru/mail-help/bonus/offline/support?utm_source=mail_app&utm_medium=android

The post myMail Manages Your Mailbox… in a Strange Way! appeared first on /dev/random.

February 18, 2021

Up until now Autoptimize, when performing image optimization, has relied on JS-based lazy-loading (with the great lazysizes component) to differentiate between browsers that support different image formats (AVIF, WebP, and JPEG as fallback).

As JS-based lazy-loading is going out of fashion though (native lazy-loading is supported by more browsers, and WordPress has out-of-the-box support for it too), it is time to start working on <picture> output in Autoptimize to serve “next-gen image formats”, where different <source> tags offer the AVIF and WebP files and the <img> tag (which includes the loading=”lazy” attribute) offers JPEG as fallback.
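The intended markup then looks something like this (filenames are illustrative; the exact attributes Autoptimize emits may differ): the browser picks the first <source> whose type it supports and falls back to the <img> otherwise.

```html
<picture>
  <!-- Served only to browsers that advertise support for these formats -->
  <source srcset="example.avif" type="image/avif">
  <source srcset="example.webp" type="image/webp">
  <!-- JPEG fallback, natively lazy-loaded -->
  <img src="example.jpg" loading="lazy" alt="Example image">
</picture>
```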

For now that functionality lives in a separate “power-up”, available on GitHub. If you have Image Optimization active in Autoptimize (and are on the beta version; Autoptimize 2.8.1 is missing a filter the power-up needs, so download and install the beta first), you can download the plugin from GitHub and give it a go. All feedback is welcome!

Facebook social decay
© Andrei Lacatusu

In response to a proposed law that requires technology companies to pay Australian publishers for linking to their news articles, Facebook made the sudden decision to restrict people and publishers from sharing news in Australia.

Facebook has struggled to silence misinformation for years, but it succeeded in silencing quality news in days. In both cases, Facebook's fast-and-loose approach continues to hurt millions of people ...

Social media platforms, news publishers, governments and internet users have been stuck in an inadequate equilibrium for years. The silver lining is that conflict is often necessary for driving positive changes.

My preferred outcome is that Australians "unfriend" Facebook and switch back to reading real news websites.

February 17, 2021

From the harmfulness of electromagnetic waves to organic food and pedophile networks, from COVID crisis policy to vaccine distribution: what if the conspiracies were real? Real, but not quite the way we imagine them.

The electromagnetic wave conspiracy

When I find myself facing someone who tells me about the harmfulness of electromagnetic waves, I first ask whether they know what such a wave physically is. In every case I have encountered, the person admits total ignorance.

An electromagnetic wave is nothing but a stream of particles, called photons, traveling while vibrating at a certain frequency. Within a certain frequency range, photons become visible. We call that… light. There are other frequencies we cannot see: infrared, ultraviolet and, of course, radio waves.

Radio waves are so hard to detect that we have to build particularly sophisticated antennas to pick them up. Antennas that equip our phones.

Electromagnetic waves can be absorbed. The energy of their vibration then turns into heat. To convince yourself, just take a walk under the biggest electromagnetic source at our disposal: the sun. The waves emitted by the sun warm you up. In too large doses, they can even burn you. That is the famous "sunburn". It is also the principle used by your microwave oven, which sends waves at a frequency whose energy transfers particularly well to water. That is why the oven itself stays cold: it only heats the water.

Electromagnetic waves that carry a very large amount of energy can knock an electron off the atom they hit. That atom is ionized. If too many atoms of our DNA are ionized, the DNA can no longer be repaired, and this can induce cancers. Of course, it takes long, repeated exposure to a very powerful source.

The sun, for example, is responsible for many skin cancers. Or X-rays, used for medical radiography. The advantage of very-high-energy waves is that they interact with the first thing they touch, so they are easily stopped. That is why there are small lead-lined rubber curtains on the X-ray belts at airports. These protections mainly serve to shield the employees who would otherwise be permanently exposed to X-rays. For the traveler who only passes through twice a year, it matters far less.
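The ionizing-versus-non-ionizing distinction above comes down to simple arithmetic on photon energy, E = h·f, compared with the roughly 10 eV needed to ionize an atom. A minimal sketch (the frequencies are illustrative round numbers):

```python
# Photon energy E = h * f, compared with the ~10 eV ionization threshold.
PLANCK = 6.626e-34   # Planck constant, J*s
EV = 1.602e-19       # joules per electron-volt

def photon_energy_ev(frequency_hz: float) -> float:
    """Energy of a single photon at the given frequency, in eV."""
    return PLANCK * frequency_hz / EV

for label, freq in [("GSM (900 MHz)", 9e8),
                    ("Visible light (~540 THz)", 5.4e14),
                    ("X-ray (~1 EHz)", 1e18)]:
    energy = photon_energy_ev(freq)
    verdict = "ionizing" if energy > 10 else "non-ionizing"
    print(f"{label}: {energy:.2e} eV ({verdict})")
```

A GSM photon carries on the order of 10⁻⁶ eV, millions of times below the ionization threshold; only the X-ray photon crosses it. That is the physical reason a phone antenna can, at most, heat tissue, but cannot ionize DNA the way X-rays do.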

In that sense, GSM antennas are a bit like lighthouses. They emit electromagnetic rays in the same way. Only the frequency differs.

If a lighthouse can dazzle you, or even burn you if you get within a few centimeters, nobody would suggest that exposure to a lighthouse causes cancer or is harmful. The same goes for your Wi-Fi router: it emits no more energy than your halogen bulb.

Worrying about the impact of electromagnetic waves therefore seems absurd. Even if we were to discover that certain very specific frequencies can have a harmful effect, we have been bathed in electromagnetic waves since the dawn of humanity. It is therefore reasonable to think that any currently unknown impact, if such an impact exists, is anecdotal.

And yet, I think the "anti-wave" crowd is right.

The waves are harmful. Not because they are waves, but because of the use we make of them. Today, we are permanently hyperconnected. Our phones buzz with unwanted notifications we don't know how to disable. Our homes are full of little blinking lights telling us the network is up or the tablet is charging. When I sleep in a hotel room, I have to dismantle the television to reach the router hidden behind it and unplug it. Not because of the waves, but because I cannot stand those green blinking lights in the dark, topped off by the unbearable glowing red eye of the TV on standby.

How can you not be stressed by the idea of the millions of bits passing through us at every moment to notify our neighbor at the restaurant that a new YouTube video is available? How can you sleep knowing all that activity is flowing through you? Experiments have shown that electromagnetic hypersensitivity is very real. That people genuinely suffer from it. But that it is not caused by the presence of electromagnetic waves. It is caused by the belief that electromagnetic waves are present.

The anti-wave crowd intuitively perceived the problem. Before assigning it to a cause that is not under their control.

Generally speaking, all conspiracy theories are constructions built on a very real problem. A problem to which an absurd or exaggerated artificial cause has been attached, a cause that symbolizes and personifies the problem so as to give the impression of understanding it.

That is why proving the absurdity of a conspiracy theory does not work. The conspiracy usually really does exist. But it is far too simple, too banal. Which creates a feeling of powerlessness. By giving it a name, we create an identified enemy and the possibility of acting, of actively fighting it.

The deep-state conspiracy

According to legend, Lady Carcas freed the city of Carcassonne, which Charlemagne had been besieging for five years. With the population starving, Lady Carcas had the idea of taking the city's last pig, feeding it the last sack of wheat, and throwing it from the top of the ramparts onto the attackers. They reasoned that if the city could afford to throw away a wheat-fed pig, it must still have plenty of resources, and it was better to lift the siege. Charlemagne never asked himself how the city could still have so many resources after five years of siege. As the troops withdrew, Lady Carcas rang the city's bells, which is how the town got its name: "Carcas sonne" (Carcas rings)!

Most conspiracy theories run into a fundamental problem: their reality would require thousands of specialists from extremely different fields working in total secrecy within an incredibly perfect and efficient organization that never makes the slightest mistake. Yet you only have to open your eyes to see that as soon as three people work together, inefficiency is the rule.

To convince yourself, just watch spy movies. The story is always the same: an ultra-secret counter-espionage agency fights an ultra-secret espionage organization whose goal is to expose the counter-espionage agency, which therefore engages in counter-counter-espionage. This is particularly striking in the "Mission: Impossible" films or in the series Alias. With a little distance, you realize that none of these organizations serve any purpose at all. Even screenwriters, specialists in fiction, cannot come up with ideas to justify the existence of such organizations. So a terrorist who wants to set off a nuclear bomb is artificially parachuted in, to lightly disguise the vacuity of the plot.

The reality of intelligence services is quite different. Civil servants who, to justify their budget and their many jobs, go as far as inventing plots (something that also comes up in Mission: Impossible). Unlike Tom Cruise, all-powerful billionaires and spies are humans who eat, sleep, defecate and scratch their hemorrhoids. They make errors of judgment and get carried away by their ideology and their feeling of omnipotence.

And yes, they try to further their own interests, even illegally or immorally. That mostly consists of trying to convince the world to buy their junk (marketing), committing insider trading on the stock exchanges, and financing political lobbying so that the laws favor them. There lie the real conspiracies, the real scandals: they require the complicity of only a few people, they require no particular skill or technology and, most of the time, they are not even secret at all!

Most of the Cold War's secret innovations were nothing but hoaxes meant to frighten the other side: death rays, mind-control rays, extraterrestrial contacts. The real innovations, for that matter, were anything but secret: the nuclear bomb, the space race, computing and the beginnings of the Internet. Like Lady Carcas's pig, everything was entirely public, and the only truly secret things were those that did not exist, in an attempt at informational intoxication.

In some cases, intelligence research did lead to a few rare real advances. That was the case, for example, of Clifford Cocks, who invented asymmetric cryptography in 1973 for the British intelligence services. Unfortunately, this purely theoretical invention could not be put into practice without development work that Cocks could not carry out alone. It was therefore shelved, until the concept was rediscovered on the other side of the Atlantic three years later by Diffie, Hellman and Merkle, who published it and laid the foundations of a new science: computer cryptography. Once again, history shows that nothing is really possible in secrecy and isolation. The myth of the lone scientist-entrepreneur works in the novels of Ayn Rand (when he is a good guy) and Ian Fleming (when he is a bad guy), not in reality.

The notion of a "deep state", of secret elites making the decisions, is more reassuring than the truth: yes, our leaders are corrupt, but quite simply corrupt like any humans, favoring their petty personal interests over the general interest. All while making mistakes and trying to justify to themselves that their profit serves the general interest (like trickle-down economics, or the idea that wealth is deserved). Conspiracies exist, but they are small, petty, and not particularly secret.

The vaccine conspiracy

The idea of a vaccine containing chips to surveil us, or of chemtrails to control our minds (technologies that seem completely impossible in the current state of our knowledge, and that would therefore be particularly difficult to develop outside the scientific community, in total secrecy), helps us forget that our phones already surveil us very well and provide more data than governments can exploit, that television numbs us perfectly, and that we chose to use them: nobody ever forced us.

Likewise, anti-vaxxers rightly point out that the pharmaceutical oligopoly has an obvious commercial interest in keeping us as sick as possible so that we consume as many drugs as possible. That through patents, the pharmaceutical industry privatizes enormous public budgets and turns them into juicy profits, sometimes to the detriment of our health. But it is hard to do without medicine. So it is easier to attack vaccines: drugs whose administration is impressive (an injection) and which have, in the very short term, a harmful effect (fever or a sore lump). Worse, you never perceive a vaccine's usefulness. If a vaccine works, you will spend your whole life thinking it was unnecessary… and that you were the victim of a conspiracy.

The vaccine, probably humanity's finest invention in terms of comfort and life expectancy, thus very unjustly serves as a banner for the intuited conflict of interest and the (real) rapacity of the pharmaceutical industry. Most drugs are far less effective than they claim and are sold through massive marketing. The mere fact that pharmacy storefronts have been turned into giant advertising billboards is a scandal in itself. Vaccines are perhaps the safest, most beneficial and most closely monitored exception. But they are also, intuitively, the easiest to criticize.

And such criticism is sometimes necessary: since vaccines are not very profitable (you only take them once in your life), the pharmaceutical industry tries to have them developed with public money through universities, then pockets all the profits by selling them at a high price to the very states… that financed their development! The University of Oxford had, in fact, announced its wish to place its COVID vaccine in the public domain, on the open-source model, before backtracking under, it seems, pressure from the Bill Gates foundation. A conspiracy which, without calling the vaccine's quality into question, seems perfectly plausible and realistic to me. You would almost think that absurd conspiracies like 5G chips in vaccines are invented on purpose to discredit the slightest criticism and divert us from the real issues. Note that the Gates foundation plays a major positive role in the eradication of polio. Nothing is ever perfectly black or white. The world is complex.

The pedophile-network conspiracy

To build a good conspiracy theory, then, you just take real suffering and blend it into a seductive, shocking story. One example among many is the persistence of theories about highly sophisticated pedophile networks for the elites. Sometimes spiced up with Satanism and cannibalism for decor.

Pedophilia really is a problem in our society. Alas, it is mostly present within families themselves. Children are most often abused by a parent or a trusted relative (as priests often were). But imagining that an uncle or a father could rape a child of his own family is so dreadful that we shift the blame onto the ultra-rich. Ultra-rich who only add fuel to the fire by sometimes displaying a sexuality unleashed by a feeling of impunity, a feeling exacerbated by a sexist rape culture that sometimes genuinely leads to pedophilia, as in the Weinstein and Polanski affairs.

The trauma of the Dutroux affair in Belgium can be partly explained by how hard it is to accept that a completely sick nobody could simply abduct girls in his van and hide them in his cellar. That his name was indeed on the list of suspects, but that the police's slowness in unmasking him is mostly explained by the blind application of the administrative procedures in force at the time, procedures slowed down by power struggles within the hierarchy (which, incidentally, led to a complete overhaul of the Belgian police). There is a certain comfort in imagining that the crime was not just a series of bad luck and administrative pettiness, but the will of an all-powerful organization reaching all the way up to the royal family.

The conspiracies of international Jewry and QAnon

Conspiracy theories are generally the illustration of a justified loss of trust in the guardians of morality and authority. They flourish most in times of deep distress. The economic misery of the 1930s, right after the stock-market crash, pushed forward the centuries-old theory of the Jewish cabal, with the consequences we know in Germany. I cannot resist recommending Umberto Eco's excellent "The Prague Cemetery" for a fictionalized illustration of that cabal.

The 2008 financial crisis is no exception to the rule. From its ashes were born Donald Trump and QAnon, who, from a historical point of view, have no originality whatsoever. Everything seems lifted, almost to the letter, from the conspiracy theories of the past.

Absurd theses, but, once again, with an intuition of a very real problem: the existence of the finance industry. How is it that an industry which seems to produce nothing concrete for ordinary citizens, which generates billions, which seems to make every one of its members a millionaire, how is it that this industry with its incomprehensible practices receives so much government money during a crisis it created itself? How is it that, in what presents itself as a democracy, the main factor for reaching power is wealth? How is it that all our best brains from engineering, science and administration schools are recruited into finance?

On this subject, I recommend Fabrice Luchini's magnificent speech (prepared, but never delivered) in the film "Alice et le maire". A film that very realistically depicts the underside of politics: stressed people chaining meetings together, with no time left to think. How could organizations whose long-term vision extends no further than the next election seriously mount large-scale conspiracies?

Le complot de la malbouffe

Les théories du complot ne peuvent que diviser. Les intuitifs savent qu’elles représentent un problème réel. Les rationnels peuvent démontrer qu’elles sont absurdes et en viennent à nier l’existence du problème initial. Les deux camps ne peuvent donc plus se parler. Les comportements sensés et absurdes se mélangent.

Entrez dans un magasin de nourriture bio et vous serez abasourdi par le fatras de concepts dont une simple boîte de conserve peut se revendiquer.

Votre boîte est « bio ». Cela signifie qu’elle a reçu un label comme quoi elle utilisait une quantité limitée de certains pesticides.

La démarche est rationnelle. Si la nocivité des pesticides sur l’humain n’est pas toujours démontrée, elle l’est sur le vivant. L’absorption des pesticides par le corps a été démontrée et l’hypothèse que ces pesticides puissent avoir un impact sur la santé est sérieusement étudiée.

Votre boîte est également dans un emballage « écologique ». Cela semble intuitif, mais, malheureusement, la culture biologique produit énormément plus de CO2 que la culture avec pesticide. Ceci dit, les pesticides ont également un impact environnemental non négligeable, même si ce n’est pas du CO2.

L’aliment est également garanti sans OGM. Là, cela devient plus étrange. La nature produit en effet des OGM en permanence. C’est même le principe de l’évolution. Les OGM pourraient donc être particulièrement bénéfiques, par exemple en étant plus nutritifs. Rejeter les OGM, c’est rejeter le principe du bouturage, vieux comme l’agriculture. Mais le rejet des OGM est, encore une fois, le symptôme d’un réel problème, à savoir la volonté d’apposer une propriété intellectuelle sur les semences, procédé monopolistique dangereux. La lutte anti-OGM n’est pas tant contre le principe de l’OGM lui-même (la plupart des anti-OGM ne savent d’ailleurs pas ce qu’est un OGM) qu’une défiance envers ceux qui prétendent manipuler la nourriture sans vouloir nous dire comment ni nous permettre de le faire nous-mêmes. La défiance envers l’industrie qui pratique l’OGM  est pertinente. La défiance envers le principe même de l’OGM ne l’est sans doute pas.

Enfin, il arrive que votre nourriture (ou vos produits de beauté, s’ils sont de la marque Weleda) soit issue des principes de la biodynamie. La biodynamie est un concept inventé par Rudolf Steiner, un illuminé notoire qui a décidé de réinventer la philosophie, les sciences, la médecine, l’éducation et la religion en se basant uniquement sur son intuition. Il n’y connaissait strictement rien en agriculture, mais a un jour improvisé une conférence devant une centaine d’amis, dont seule une minorité d’agriculteurs, sur la meilleure manière de cultiver. Cette conférence a été retranscrite par une sténographe, mais Steiner lui-même a dit plusieurs fois qu’il n’avait pas relu cette transcription et que sa conférence avait pour objectif d’être orale, pas écrite. Que la transcription devait comporter énormément d’erreurs. Il mourra peu après sans jamais relire ni même mentionner le terme « biodynamie » qui sera inventé par après.

And yet this erroneous transcript of a lecture improvised by a non-farmer with a passion for occultism and magic serves today as the reference for an entire industry. The rules run along the lines of: “This plant must be planted when Mars is visible in the sky, because its flowers are red and Mars is red. And dead rats must be spread in the compost on full-moon nights, because that's how it's done.” Every book or farmer claiming to follow biodynamics today does only one thing: repeat the ramblings, devoid of any empirical substance, found in the erroneous transcript of a single lecture by a crank. In short, the very definition of theology. However, if you strip away the esoteric part, you find the foundations of organic farming. Like any religion, biodynamics is thus far from being entirely wrong. Quite simply because, statistically, being entirely wrong is as improbable as being entirely right, and because, as Kahneman points out, intuition is often correct. But not always. Which is its big problem.

So, by buying organic food, which I personally do, I am most often mixing the sensible, the not-quite-sensible and the totally absurd.

All this because of a very real intuitive problem: we now have enough comfort to be picky about our food, and it must be said that we eat garbage. Through sugar and saturated fats, food producers seek only to make us addicted at the lowest cost, in utter contempt of our health. Food is manipulated to look pretty in stores, at the expense of its composition. For decades, intellectual scams, sometimes promoted by our governments, have served industrial interests (for example, drinking milk to strengthen bones, or the food pyramid, a principle with no scientific basis whatsoever). So the conspiracy really does exist!

The conspiracy of the conspiracy theorists

We sense it, so we try to protect our health, to reduce our cancers, by shielding ourselves from electromagnetic waves and eating organic. Which, objectively, could have a positive impact. A very small one, but it's not impossible.

But do you know what has a major impact on our health?

Cigarettes, car exhaust, alcohol. Remove those three, two of which are within your immediate reach, and it will have a million times more effect than eating organic and putting your phone in airplane mode at night. For maximum effect, also cut down on red meat, an established carcinogen, and do 30 minutes of exercise a day.

There they are, the conspiracies against your health. They are staring us in the face. It's the tobacco lobby that makes it legal to smoke in public, stinking up everyone around. It's the car lobby that sells you SUVs while making you curse the traffic jams, and that kills reckless young adults driving at full speed. It's the alcohol lobby that publishes op-eds against the “Tournée Minérale” (Belgium's alcohol-free month) and subsidizes student societies. It's Facebook and Google grabbing your entire private life and putting monopolistic mechanisms in place that make them unavoidable.

We can all fight these conspiracies that directly threaten our physical and mental integrity. The biggest causes of avoidable mortality, excluding suicide, can be summed up as alcohol, tobacco and cars.

But it is very hard to give up your cigarette, your car and your Facebook account. So we post against vaccines, against GMOs and against 5G. We protest against what we cannot really change. Even if it means endangering ourselves by smoking “organic” weed, drinking artisanally distilled spirits and refusing vaccines for our children. All while proclaiming it loud and clear on Facebook.

By dint of questioning authority, we turn to sources of authority with no legitimacy whatsoever, but which make us feel good. We claim not to want to be manipulated, then throw ourselves into the clutches of the commercial interests of gurus, shamans and sellers of water-energizing jugs. Under the pretext of refusing to obey, we end up doing exactly the opposite of what the authorities say, without realizing that this makes us even easier to manipulate, like the child who always says no and who is told: “Whatever you do, don't eat your soup!”

If you think some field or other is corrupt, from the food industry to scientific research, you are probably right. But it is not the field itself that must be fought, it is the corruption. The organic food industry, the cannabis industry, the energy-crystal business and the astrological anti-cancer coaching networks are every bit as corrupt, as is green politics. They contain a fraction of honest people diluted in a population seeking only to empty your wallet.

The hardest thing to accept is that, no, the truth is not being hidden from us. It is there, in plain sight of anyone willing to see it. There is nothing secret, nothing mysterious. Average intelligence stays the same whatever the level of wealth or political power. But this reality is hard to accept, because it offers no ready-made answer, because it offers no certainty, only probabilities, and because it very often contradicts our convictions and our past actions. And because, even if the conspiracy is most often invented or exaggerated, the suffering that results from it is very real.

Going further: the Covid conspiracy and other reading

« Vaincre les épidémies », par Didier Pittet et Thierry Crouzet.

Inventor of the hydroalcoholic hand gel we now all use every day, Didier Pittet is a world-renowned Swiss specialist in infectious diseases and epidemics. In this book, he recounts his discovery of Covid, his comparison with other epidemics (H1N1, avian flu), and his experience of becoming the reference expert for Macron, who sent a private jet to bring him to a meeting at the Élysée. The book thus perfectly illustrates the viewpoint of someone at the very highest level of power where COVID is concerned. On the menu: incompetence at every level of decision-making, political conflicts affecting decisions that should be purely scientific, and not-always-effective attempts to steer public opinion “in the right direction” through marketing. In COVID as everywhere else, the conspiracies are very real, but so small, so human, so petty…

Didier Pittet has just been made Doctor honoris causa of the university where I teach Open Source. Something I applaud because, with the formula of his hydroalcoholic gel, he is a pioneer of Open Source in the field of health.

Thierry Crouzet returns to the need to create an Open Source vaccine.

Which is unfortunately not the case, as I have recounted, because of the Bill Gates foundation.

In his speech, the Belgian MP François De Smet tries to find a middle ground between anti-Covid measures and civil liberties. Far from crying conspiracy in either direction, he argues for a reasonable balance. That has become rare enough to deserve mention. In the same vein, he had denounced the procedures surrounding the anti-Covid vaccine market while campaigning for more transparency. A politician after my own heart. He probably won't get many votes. In fact, nobody but me seems interested in him.

Bad Science, a book and a column reviewing the scientific scams of the pharmaceutical industry, from Big Pharma to the bio/independent laboratories supplying “alternative” food supplements (I haven't read the book; I'm relying on Cory Doctorow's review).

“The Prague Cemetery”, by Umberto Eco. With his usual verve, Eco plunges us into the life of a forger forced to fabricate, from scratch, the evidence of a conspiracy. Delightful.

An account of the incompetence of the British secret services

A very long testimony on how conspiracy theories manipulate us, and on the parallel between “alternative” dieting, religions and political conspiracies.

Photo by Markus Spiske on Unsplash

I am @ploum, engineer and writer. Subscribe by email or RSS so you don't miss a single post (max 2 per week). I am convinced that Printeurs, my latest science-fiction novel, will fascinate you. Ordering my books is the best way to support me and help me spread my ideas!


This text is published under the CC-By BE license.

Cory Doctorow is one of the most prolific bloggers in the world, capable of publishing multiple great posts a day. He recently documented his writing and publishing process. It's fascinating.

Over the last 20 years, Cory built a huge, personal database of thoughts, articles and links. He explains how his database simplifies and supports his writing process:

The memex I've created by thinking about and then describing every interesting thing I've encountered is hugely important for how I understand the world. It's the raw material of every novel, article, story and speech I write.

Inspired by Cory, I brought back the Notes section on my site. I will use Notes to document articles or ideas that grab my attention, but that I'm not ready to write a longer blog post about. I'll build my own memex with the goal of becoming a better writer.

February 13, 2021

Elementor 3.1 came with a “refactored YouTube source to use YouTube API in Video widget”, which resulted in WP YouTube Lyte not being able to lyten up the YouTubes any more. The code snippet below (consider it beta) hooks into Elementor to fix this regression:

add_filter( 'elementor/frontend/widget/should_render', function( $should_render, $elementor_element ) {
	if ( function_exists( 'lyte_preparse' ) && 'Elementor\Widget_Video' === get_class( $elementor_element ) ) {
		$pas = get_private_property( $elementor_element, 'parsed_active_settings' );
		if ( ! empty( $pas['youtube_url'] ) ) {
			// Output the Lyte-ified video and tell Elementor not to render its own.
			echo lyte_preparse( $pas['youtube_url'] );
			return false;
		}
	}
	return $should_render;
}, 10, 2 );

// from
function get_private_property( object $object, string $property ) {
    // Casting an object to an array exposes private properties under
    // mangled keys (class-name prefixed), so match on the key's suffix.
    $array = (array) $object;
    $propertyLength = strlen( $property );
    foreach ( $array as $key => $value ) {
        if ( substr( $key, -$propertyLength ) === $property ) {
            return $value;
        }
    }
    return null;
}
Hat tip to Koen for his assistance in digging into Elementor, much appreciated!

February 12, 2021

I published the following diary on “Agent Tesla Dropped Through Automatic Click in Microsoft Help File”:

Attackers have plenty of resources to infect our systems. If some files may look suspicious because the extension is less common (like .xsl files), others look really safe and make the victim confident to open it. I spotted a phishing campaign that delivers a fake invoice. The attached file is a classic ZIP archive but it contains a .chm file: a Microsoft compiled HTML Help file. The file is named “INV00620224400.chm” (sha256:af9fe480abc56cf1e1354eb243ec9f5bee9cac0d75df38249d1c64236132ceab) and has a current VT score of 27/59. If you open this file, you will get a normal help file (.chm extension is handled by the c:\windows\hh.exe tool)… [Read more]

The post [SANS ISC] Agent Tesla Dropped Through Automatic Click in Microsoft Help File appeared first on /dev/random.

February 11, 2021

(In case you don’t know rakudo-pkg: it’s a project to provide native packages and relocatable builds of Rakudo for Linux distributions.)

The downward spiral of Travis since it was taken over by a private equity firm (there is a pattern here) triggered a long-overdue refactoring of the rakudo-pkg workflow. Until now the packages were created on Travis because it supported running Linux containers, so we could tailor the build for each distribution/release. At almost the same time, JFrog announced it was sunsetting Bintray and other services popular with FOSS projects. The deb and rpm repos needed a new home.

Oh well, Travis and Bintray will be missed and certainly deserve our thanks for their years of service. It was difficult to imagine a better time to implement a few ideas from the good old TODO list.

From Travis to Github Actions

Github Actions is way faster than post-P.E. Travis. Builds that took a few hours (we build around 25 distro/version combinations) are now done in 10 to 20 minutes. Not surprisingly, this meant not only learning a completely new tool, but also a complete rewrite of the rakudo-pkg workflow.

Running on Github also means that every Github user can fork the repo, including the rakudo-pkg Github workflows. One of the advantages of a rewrite is implementing new functionality, like the new rakudo-pkg feature that allows everyone to test upstream MoarVM/NQP/Rakudo and zef commits/releases:

devbuild workflow

From Bintray to Cloudsmith with Alpine repositories

Cloudsmith has a huge advantage over Bintray: it supports Alpine Linux repositories, making it a great addition to the rpm and deb repositories we already offer. The packages in the old repositories were GPG-signed by Bintray. From now on, rakudo-pkg packages will be signed with their own key on Cloudsmith. Check the project's README for instructions on how to change your configuration. So far the experience with Cloudsmith has been stellar. Their support is impressively responsive and they solve issues immediately, sometimes while we were still chatting about them.

From fpm to nfpm

rakudo-pkg created packages with the venerable fpm, written in Ruby. While the tool is extremely capable, I spent most of each release getting fpm to build on new versions of distributions: the Ruby gems ecosystem looks like a moving target. Enter nfpm, inspired by fpm but with the promise of a single binary. Written in Go, it's easier for me to understand how it works and to send fixes upstream if needed. The devs are also very responsive and fixed one issue I encountered extremely fast.


Feel free to open issues or send PRs if I missed something or if you have ideas to improve rakudo-pkg.

February 10, 2021

All talks have been recorded and will be made available as soon as the presenter reviews their talk. For FOSDEM 2021, we make a mix of the original recording submitted by the presenter and the Q&A happening in the main room. To make sure that the result is up to our quality standards, we request that the original presenter review the video so the result contains no errors or mistakes. This can take a while. We expect that it will take some days to some weeks for all videos to become available. See our…

February 09, 2021

Cover Image

Lies, damned lies, and social media

Rhetoric about the dangers of the internet is at a feverish high. The new buzzword these days is "disinformation." Funnily enough, nobody actually produces it themselves: it's only other people who do so.

Arguably the internet already came up with a better word for it: an "infohazard." This is information that is inherently harmful and destructive to anyone who hears or sees it. Infohazards are said to be profoundly unsettling and, in horror stories, possibly terminal: a maddening song stuck in your head; a realization so devastating it saps the will to live; a curse that happens to anyone who learns of it. You know, like the videotape from The Ring.

Words of power are nothing new. Neither is the concept of magic: weaving language into spells, blessing or harming specific people, or even compelling the universe itself to obey. Like much human mythology, it's wrong on the surface, but correct in spirit, at least more than is convenient. Luckily most actual infohazards are pretty mundane and individually often harmless, being just dumb memes. The problem is when magical thinking becomes the norm and forms a self-reinforcing system.

This is a weird place to start for sure, but I think it's a useful one. Because that is the concern, right, that people are being bewitched by the internet?

A witch offering a poison apple


Last year the following was making the rounds, about polarization and extremism on Facebook, and their efforts to curb it. Citing work by Facebook researcher and sociologist Monica Lee:

The high number of extremist groups was concerning, the presentation says. Worse was Facebook’s realization that its algorithms were responsible for their growth. The 2016 presentation states that “64% of all extremist group joins are due to our recommendation tools” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms: “Our recommendation systems grow the problem.”

It's an extremely tweetable stat and quote, so naturally they first tell you how to feel about it. The article is rather long and dry, so we all know many people who shared it didn't read it in full. The current headline says that Facebook "shut down efforts to make the site less divisive." If you look at the URL, you can see it was originally titled: "Facebook knows it encourages division, top executives nixed solutions." So pot, meet kettle, but that's an aside.

While they acknowledge that some people believe social media has little to do with it, they immediately drown out the entire notion by referencing the American election and never mention this viewpoint again.

Don't get me wrong, I do think Facebook has problems, which is why I'm not on it. Optimizing mainly for time spent on the site is a textbook case of Goodhart's law: it's no surprise that it goes wrong. But a) that's not even a big-tech-specific problem and b) that's not what most people actually want changed when they cite this. What they say is that Facebook needs to limit "harmful content": they don't want Facebook to interfere less, they want it to interfere more.

This is in fact a very common pattern in all "disinformation" discourse: they tell you that what you are seeing is specifically abnormal and/or harmful, without any basis to actually justify limiting the conclusion. Like that anything about this is Facebook-specific. It's just that it's an easier sell to pressure one platform or channel at a time. It's not about what, but about who and whom.

There's also an assumption being snuck in. If a recommendation algorithm suggests you join a group, does the operator of that algorithm have a moral responsibility for your subsequent actions? If you agree with this, it seems Facebook should never recommend you join an extremist group, because extremism is bad. That sounds admirable, but is also not very achievable. It also hinges on what exactly is and isn't extreme, which is highly subjective.

Harmful content is for example "racist, conspiracy-minded and pro-Russian." So I doubt that they would consider e.g. the Jussie Smollett hoax or the hunt for the Covington Kid as harmful, seeing as they were sanctioned by both media outlets and prominent politicians. Despite being textbook cases of mass hysteria.

Calls for racially motivated action which result in violence, arson and anarchy under the Black Lives Matter flag also do not seem to count in practice. Facebook in fact placed BLM banners on official projects for most of 2020, as did others. The people who say we need to be deprogrammed seem to be doing most of the actual activism in the first place.

Facebook's React urging people to donate to Black Lives Matter

Facebook using its open-source projects to urge people to donate to political causes in an election year.

Like most big social media companies, Facebook's claims of political impartiality ring hollow on both an American and international level. Plus, if you actually read the whole article, the take-away is that they have put an inordinate amount of time, effort and process into this issue. They're just not very good at it.

But it doesn't really matter because in practice, people will share one number, with no actual comparison or context, from a source we can't see ourselves. This serves to convince you to let a specific group of people have specific powers with extremely wide reach, with no end in sight, for the good of all.



It's worth pondering the stat. What should the number be?

First, some control. If 64% of members of extremist groups joined due to a machine recommendation, then what is that number for non-extremist groups? It would be useful to have some kind of reference. It would also be useful to compare to other platforms, to know whether Facebook is particularly different here. This is not rocket science.

Second, you should be sure about what this number is specifically measuring. It only tracks how effective recommendations are relative to other Facebook methods of discovering extremist groups, like search or likes. This is very different from which recommended groups people actually join. Confusing the two is how the trick works.

The two different conditional probabilities

The difference between P(Recommended | (Joined & Extremist)) and P((Joined & Extremist) | Recommended).

They're talking about the thing on the left, not on the right.
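To make the confusion concrete, here is a small sketch with entirely made-up counts (the 64% figure is from the article; the totals are hypothetical assumptions for illustration, not Facebook data). The same raw count yields two wildly different percentages depending on which conditional you compute:

```python
# Hypothetical counts, purely for illustration -- not real Facebook data.
recommended_and_joined_extremist = 64    # extremist-group joins that came via a recommendation
joined_extremist_total = 100             # all extremist-group joins
recommendations_shown_total = 1_000_000  # all recommendations shown (assumed)

# The stat quoted in the article: of the people who joined an extremist
# group, what fraction arrived via a recommendation?
p_rec_given_joined = recommended_and_joined_extremist / joined_extremist_total

# The very different question readers assume it answers: of all the
# recommendations shown, what fraction produced an extremist-group join?
p_joined_given_rec = recommended_and_joined_extremist / recommendations_shown_total

print(f"P(Recommended | Joined & Extremist) = {p_rec_given_joined:.2%}")  # 64.00%
print(f"P(Joined & Extremist | Recommended) = {p_joined_given_rec:.4%}")  # 0.0064%
```

The first number only compares recommendations against other discovery methods; the second, which the headline implies, cannot be computed from the quote at all without the denominator.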

If ~0% of group joins were due to a recommendation, that would mean the recommendation algorithm is so bad nobody uses it. It's always wrong about who is interested in something. You wouldn't even see this if e.g. right-wing extremism was being shown only to left-wing extremists, or vice versa, because both camps pay enormous attention to each other. You would basically only need to recommend extremist groups to 100% apolitical people. Ironically, people would interpret that as the worst possible radicalization machine.

They do say Facebook had the idea of "[tweaking] recommendation algorithms to suggest a wider range of Facebook groups than people would ordinarily encounter," but seem to ignore that this implies exposing the middle to more of the extremes.

If ~100% of group joins were due to a recommendation, then that would imply the algorithm is so overwhelmingly good that it eclipses all other forms of discovery on the site, at least for extremist groups. Hence it could only be recommending them to people who are extremists or definitely want to hang out with them. This would be based on a person's strong prior interests, so the algorithm wouldn't be causing extremism either.

The value of this number doesn't really matter. The higher it is, the worse it sounds on paper: the recommendation engine seems to be doing more of the driving, even though the absolute numbers likely shrink. But the lower it is, the less relevant the recommendations must be, creating more complaints about them. It's somewhere in the middle, so nobody can actually say.

The popularity of extremist groups has nothing to do with what percentage of their members join due to a recommendation. You need to know the other stats: how often are certain recommendations made, and actually acted upon? How does it differ from topic to topic? If a particular category of recommendations is removed, do people seek it out in other ways?

The only acceptable value, per the censors, is if it becomes ~0% because Facebook stops recommending extremist groups altogether. That's why this really is just a very fancy call for censorship, using a lone statistic to create a sense of moral urgency. If a person had a choice of extremist and non-extremist recommendations, but deliberately chose the more extreme one... wouldn't that make a pretty strong case that the algorithm explicitly isn't responsible and actually just a scapegoat?

The article tells us that "[divisive] groups were disproportionately influenced by a subset of hyperactive users," so it seems to me that personal recommendations have a much higher success rate than machine recommendations. In that case, your problem isn't an algorithm, it's that everyone and their dog has a stake in manipulating this, and they do. They even brag about it in Time magazine.

There's a question early on in the article: "Does its platform aggravate polarization and tribal behavior? The answer it found, in some cases, was yes." Another way of putting that is: "The answer in most cases was no."


The notion that groups are dominated by hyperactive influencers does track with my experience. I once worked in social media advertising, and let me tell you the open industry secret: going viral is a lie.

The romantic idea is that some undiscovered Justin Bieber-like wunderkind posts something brilliant online, in a little unknown corner. A kind soul discovers it, and it starts being shared from person to person. It's a bottom-up, grass-roots phenomenon, growing in magnitude, measured by the virality coefficient: just like COVID, if it's >1 then it spreads exponentially. This is the sort of thing mediocre marketing firms will explain to you with various diagrams.

Going viral from one person to many

The reality is very different. Bottom-up virality near 1 is almost unheard of, because that would imply that every person who sees it also shares it with someone else. We just don't do this. For something to go viral, it must be broadcast to a large enough group each time, so that at least one person decides to repost it to another big group.

Thus, going viral is not about bottom-up sharing at all: it's about content riding the top-down lightning between broadcasters and audiences. These channels build up their audiences glacially by comparison, one person at a time. One pebble does not shift the landscape. It also means the type of content that can go viral is constrained by the existing network. Even group chats work this way: you have to be invited in first.
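A toy model makes the arithmetic of this point visible (all parameters here are illustrative assumptions, not measurements): person-to-person resharing is a geometric series, so any coefficient below 1 converges to a tiny total, while a single broadcaster reaches its whole audience in one step.

```python
def bottom_up_reach(r: float, seed: float = 1.0, generations: int = 50) -> float:
    """Expected total views when each viewer reshares with probability r.

    Each generation is r times the size of the previous one, so this is
    just a truncated geometric series.
    """
    total, current = 0.0, seed
    for _ in range(generations):
        total += current
        current *= r
    return total

# Person-to-person sharing with a coefficient of 0.1: the chain fizzles out.
organic = bottom_up_reach(0.1)       # ~1.11 total views per original poster

# A coefficient of 1.05 really is exponential -- but almost unheard of.
exponential = bottom_up_reach(1.05)  # ~209 views after 50 generations

# A single broadcaster with a 100,000-person audience dwarfs both in one step.
broadcast = 100_000

print(f"{organic:.2f} vs {exponential:.0f} vs {broadcast}")
```

Under these assumed numbers, the broadcast channel beats sub-critical organic sharing by five orders of magnitude, which is the sense in which virality rides the top-down lightning rather than growing from the grass roots.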

This should break any illusions of the internet as a flat open space: rather it is accumulated infrastructure to route attention. Everyone knows you need sufficient eyeballs to be able to sell ad space, but somehow, translated into the world of social media, this basic insight was lost. When the billboard is a person or a personality, people forget. Because they seem so accessible.

In the conflict between big tech and old media, you often hear the lament that "tech people don't like to think about the moral implications of what they build." My pithy answer to that is "we learned it from you" but a more accurate answer is "yes, we do actually, quite a lot, and our code doesn't even have bylines." Though I can't actually consider myself "big tech" in any meaningful way. In this house we hack.

I can agree that large parts of social media are basically just cesspits of astroturfing and mass-hypnosis. But once you factor in who is broadcasting what to whom, and at what scale, the lines of causation look very different from the usually cited suspects.

While there are indeed weird niche groups online and offline, it is the crazy non-niche groups we should be more concerned about. Who is shouting the loudest, to the most people? Why, the traditional news outlets. The ones that 2020 revealed to be far less capable, objective and in-the-know than they pretend to be. The ones who chastised public gatherings as irresponsible only when it was the wrong group of people doing so.

So don't just question what they say and how they say it. Ask yourself what other stories they could've written, and why they did not.


What's especially frustrating is that the class of people who are supposed to be experts on media and communication have themselves been bubbled inside dogmatic social sciences and their media outposts. If you ask these people to tell you about radicalization on the internet, you are likely to hear a very detailed, incomplete, mostly wrong summary of pertinent events.

Much of this can be attributed to what I mentioned earlier: telling you how to feel about something before they tell you the details. Sometimes this is done explicitly, but often this is done by way of Russell Conjugation: "I am being targeted, you are getting pushback, they are being held accountable." The phrasing tells you whether something is happening to a person you should root for, be neutral about, or dislike. Given a pre-existing framework of oppressors and oppressed, they just snap to grid, and repetition does the rest.

Sometimes it's blatant, like when the NYT rewrote a fairly neutral article about Ellen Pao into nakedly partisan hero worship a few years ago.

But most people don't even realize they're doing it. When called upon they will insist "it's totally different," even if it's not. It's judging things by their subjective implications, not by what they objectively are. Once the initial impression of the players is set, future events are shoehorned to fit. The charge of whataboutism is a convenient excuse to not have to think about it. This is how you end up with people sincerely believing Jordan Peterson is an evil transphobe rather than a compelled speech objector dragged through the mud.

Information that disproves the narrative is swept under the rug, with plenty of scare quotes so you don't go check. If a reporter embarrasses themselves by asking incessantly leading questions, the story will shift to how mean and negative the response is, instead of actually correcting the misconceptions they just tried to dupe an entire nation with.

The magnitude of a particular event is also not decided by how many people participated in it, but rather, by how many people heard about it. This is epitomized by the story format that "the X community is mad about Y" based on 4 angry tweets or screencaps. It wasn't a thing until they decided to make it a thing, and now it is definitely a thing.

The idea of the media as an active actor in this process is a concept they are quite resistant to. Because if it isn't a story until they write the story, that means they are not actually reporting broadly on what's happening. They're just the same as anyone else.

Part of the problem is that what passes for news about the internet is mostly just gossip. Even if it is done in good faith, it is very hard for an individual to effectively see the full extent of an online phenomenon. In practice they rarely try, and when they do, the data gathering and analysis is usually amateur at best, lacking any reasonable control or perspective.

You will often be shown a sinister looking network graph, proving how corrupting influences are spreading around. What they don't tell you is that this is what the entire internet looks like everywhere, and e.g. a knitting community likely looks exactly the same as an alt-right influencer network. Each graph is just a particular subset of a larger thing, and you have to think about what's not shown as well as what is.

It's an even more profound mistake to think that we can all agree on which links should be cut and which should be amplified. Just because we can observe it, doesn't mean we know how to improve it. In fact, the biggest problem here is that so few people can decide what so many can't see. That's what makes them want to fight over it.

The idea that any of this is achievable in a moral way is itself a big red flag: it requires you to be sufficiently bubbled inside one ideology to even consider doing so. It is fundamentally illiberal.

It really is quite stunning. By and large, today the people who shout the loudest about disinformation, and the need to correct it, are themselves living in an enormous house of cards. It is built on bad thinking, distorted facts and sometimes, straight up gaslighting. They have forced themselves on companies, schools and governments, using critical theory to beat others into submission. They use the threat of cancellation as the stick, amplified eagerly by clickbait farms... but it's Facebook's fault. "They need to be held accountable."

These advocates only know how to mouth other people's incantations; they don't actually live by them.

Here's the thing about secret police: studies find that it's mostly underachievers who get the job and stick with it, because they know that in a more merit-driven system they would be lower on the totem pole.

If the world is messed up, it's because we gave power to people who don't know wtf they're supposed to do with it.

February 07, 2021

The Linux Professional Institute (LPI) has fixed the conference rate exam registration, and it is now available for anyone who participated in FOSDEM and is interested in taking advantage of it. As in previous years, the Linux Professional Institute will again offer a discount to examination candidates at FOSDEM 2021. The level 1, level 2 and level 3 exams are offered at a discount of nearly 50%. See our certification page for more information.

February 06, 2021

If “Load WebP or AVIF in supported browsers?” is on, .png files with transparency will lose that transparency in browsers that support AVIF, due to a recent technical change in Shortpixel’s AVIF toolchain.

Shortpixel is looking at alternative solutions, but until then you can work around it by either:

  • adding .png to Autoptimize’s lazyload exclusion field
  • or using the code snippet below to disable AVIF images:

add_filter( 'autoptimize_filter_imgopt_do_avif', '__return_false');

February 05, 2021

An important part of our programme: the T-shirts and hoodies. They are (again) available and, contrary to normal practice, you won't have to stand in line to get them and they won't run out! For the online edition, we partnered with a print-on-demand provider, which means that they won't run out at Saturday noon, as they normally do. Visitors from the EU can go to, offering shipping to the EU. If you want to ship to a destination outside of the EU, go to. If you can't find your country in either of the shops, send an…

The M5Stack Core is a modular, stackable ESP32 board with a 2-inch LCD screen and three buttons, all in a package that doesn't look bad in your living room. ESPHome is a system to configure your ESP8266/ESP32 with YAML files to connect them to your home automation system (MQTT or Home Assistant). As of ESPHome 1.16.0, released this week, ESPHome supports the M5Stack Core's ILI9341 display.

What this means is that you can now set up your own display device without having to solder (thanks to the M5Stack Core's all-in-one package) and without having to program (thanks to ESPHome's configuration-based approach), and let it talk to your home automation system, including advanced functionality such as over-the-air (OTA) updates. This is really bringing do-it-yourself home automation to the masses.

For an example, have a look at my ESPHome configuration for the M5Stack PM2.5 Air Quality Kit. I wrote it two months ago for the dev version of ESPHome, and I can confirm that this now just works with the 1.16.0 release. The result looks like this:


Using the display

The display uses the SPI bus, so you define it like this with the right clock, MOSI and MISO pins:

spi:
  clk_pin: 18
  mosi_pin: 23
  miso_pin: 19

Then define a font: 1

# Download Roboto font from
font:
  - file: "fonts/Roboto-Medium.ttf"
    id: font_roboto_medium22
    size: 22
    glyphs: '!"%()+,-_.:°0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz/³µ'

Then you can define the display and add a lambda expression to show something on the screen:

display:
  - platform: ili9341
    id: m5stack_display
    model: M5Stack
    cs_pin: 14
    dc_pin: 27
    led_pin: 32
    reset_pin: 33
    rotation: 0
    lambda: |-
      Color RED(1,0,0);
      Color BLUE(0,0,1);
      Color WHITE(1,1,1);
      it.rectangle(0,  0, it.get_width(), it.get_height(), BLUE);
      it.rectangle(0, 22, it.get_width(), it.get_height(), BLUE);
      it.print(it.get_width() / 2, 11, id(font_roboto_medium22), RED, TextAlign::CENTER, "Particulate matter");

Using the buttons

The M5Stack Core's buttons were already supported: they can simply be used as binary sensors. For instance, this is how you use the middle button to toggle the display's backlight:

# Button to toggle the display backlight
binary_sensor:
  - platform: gpio
    id: M5_BtnB
    pin:
      number: 38
      inverted: true
    on_press:
      then:
        - switch.toggle: backlight

# GPIO pin of the display backlight
switch:
  - platform: gpio
    pin: 32
    name: "Backlight"
    id: backlight
    restore_mode: ALWAYS_ON

The M5Stack Core has three buttons. From left to right those are button A (GPIO39), button B (GPIO38) and button C (GPIO37). You can use them all like in the code above. As an exercise, I have reimplemented Homepoint's interface in ESPHome. With the left and right buttons I cycle through the pages showing sensor values of my home, and with the middle button I toggle the display backlight.
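As a rough sketch of how such page cycling can be configured (the page IDs and lambdas here are hypothetical placeholders, not my actual configuration), ESPHome's display pages can be combined with the left and right buttons using the display.page.show_previous and display.page.show_next actions:

```yaml
display:
  - platform: ili9341
    id: m5stack_display
    model: M5Stack
    cs_pin: 14
    dc_pin: 27
    led_pin: 32
    reset_pin: 33
    pages:
      - id: page_co2
        lambda: |-
          it.print(0, 0, id(font_roboto_medium22), "CO2");
      - id: page_pm25
        lambda: |-
          it.print(0, 0, id(font_roboto_medium22), "PM2.5");

binary_sensor:
  - platform: gpio
    id: M5_BtnA  # left button: previous page
    pin:
      number: 39
      inverted: true
    on_press:
      then:
        - display.page.show_previous: m5stack_display
  - platform: gpio
    id: M5_BtnC  # right button: next page
    pin:
      number: 37
      inverted: true
    on_press:
      then:
        - display.page.show_next: m5stack_display
```

Each page gets its own rendering lambda, and the buttons simply step through them.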


To be able to use fonts in ESPHome, you need to install Pillow with pip3 install pillow.

I’m a fan of the Nanoleaf light panels! I use them in my office all the time. They provide a great daylight colour while I’m in a Webex or training, they react to my music or give a relaxing atmosphere (when you need to concentrate on important stuff). Years ago, when I was working on Solaris systems, I often used the “snoop” command (the Solaris version of tcpdump) with the “-a” parameter to play a sound when some packets matched my filter. Who remembers this feature? Today, it’s the same: I’m often running a tcpdump command with a complex BPF filter, expecting to grab some interesting packets. Why not use my Nanoleaf panels to react to specific flows?

The Nanoleaf controller is connected to the IoT network and can be interfaced with any home automation platform (“Hey Siri, turn on the lights in my office“). But it is also reachable through an API. A Python library is available for this purpose. Let’s try it!

Python 3.8.5 (default, Jul 28 2020, 12:59:40)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from nanoleafapi import Nanoleaf
>>> nl = Nanoleaf("")
>>> nl.get_power()
>>> nl.get_current_effect()

When you need to capture packets with Python, the best library to use is scapy! The script is simple: It sniffs some traffic and, for each received packet, it performs an action on the light panels (if it matches the requirements). The script I wrote works in two “modes”:

  • With random colours – The first time a port is seen, it is assigned to a panel and a random colour is generated.
  • With fixed colours assigned by port – Easier to track the activity on a specific port (e.g. you’re expecting an incoming connection on port 8000 and turn a panel red).
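Both modes boil down to the same bookkeeping. Here is a plain-Python sketch of that logic (class and method names are hypothetical, this is not the actual script):

```python
import random

class FlowTracker:
    """Tracks seen (src, dst, dport) flows and assigns a colour per port.

    A sketch of the logic described above: fixed colours can be supplied
    per port; otherwise a random RGB colour is generated the first time
    a port is seen.
    """

    def __init__(self, fixed_colors=None):
        self.fixed_colors = fixed_colors or {}   # e.g. {8000: (255, 0, 0)}
        self.port_colors = {}                    # port -> (r, g, b)
        self.seen_flows = set()                  # (src, dst, dport)

    def color_for_port(self, port):
        if port in self.fixed_colors:
            return self.fixed_colors[port]
        if port not in self.port_colors:
            # First time this port is seen: generate a random colour.
            self.port_colors[port] = tuple(random.randint(0, 255) for _ in range(3))
        return self.port_colors[port]

    def process(self, src, dst, dport):
        """Return the colour to display for a new flow, or None if already seen."""
        flow = (src, dst, dport)
        if flow in self.seen_flows:
            return None          # avoid useless API calls
        self.seen_flows.add(flow)
        return self.color_for_port(dport)
```

In the real script, each packet sniffed by scapy would be fed into such a tracker, and any non-None colour pushed to a panel through the nanoleafapi library.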

To reduce the calls to the API, the script keeps a list of seen network flows and reacts only to new ones. The syntax is pretty simple:

$ ./ -h
Usage: [options]

  --version             show program's version number and exit
  -h, --help            show this help message and exit
  -i INTERFACE, --interface=INTERFACE
                        Interface (default: "lo")
  -f BPF_FILTER, --filter=BPF_FILTER
                        BPF Filter (default: "ip")
  -c COUNT, --count=COUNT
                        Packets to capture (default: no limit)
  -H HOST, --host=HOST  Nanoleaf Controller IP/FQDN
  -C COLORS, --colors=COLORS
                        Color for protocols (port1=(r,g,b)/port2=(r,g,b)/…)
                        (default: random)
  -v, --verbose Verbose output

Here is a quick video to demonstrate the script:

The good point is that you can sniff at any location in your network and make the light panels react right in front of you.

The script is available on my Github repository. As usual, the code is working for me but it has not been intensively tested. Bugs are always ahead! 😉

The post Network Flows Visualization With Nanoleaf Light Panels appeared first on /dev/random.

I published the following diary on “VBA Macro Trying to Alter the Application Menus”:

Who remembers the worm Melissa? It started to spread in March 1999! In information security, it looks like speaking about prehistory but I spotted a VBA macro that tried to use the same defensive technique as Melissa. Yes, back in 1999, attackers already tried to use techniques to defeat users’ protections. The sample macro has a low VT score (7/44) (SHA256:386e1a60011ff0a818adff8c638005ec5015930c1b35d06cacc11f3ab53725d0)… [Read more]

The post [SANS ISC] VBA Macro Trying to Alter the Application Menus appeared first on /dev/random.

February 04, 2021

FOSDEM 2021 is only hours away, so now is the time to check the schedule and mark all talks and stands that interest you! For a preview of how everything looks and works, go to the virtual conference floor and click on any of the links (they all work!). Talks and stands can be found on the schedule (look for the S building to find the stands). All of them will have several links: a live stream (called Video only), a live stream with the Q&A visible (called Video with Q&A) and a dedicated chatroom (called Join…

How to create the long-lasting computer that will save your attention, your wallet, your creativity, your soul and the planet. Killing monopolies will only be a byproduct.

Each time I look at my Hermes Rocket typewriter (on the left in the picture), I’m astonished by the fact that the thing looks pretty modern and, after a bit of cleaning, works like a charm. The device is 75 years old and is a very complex piece of technology with more than 2000 moving parts. It’s still one of the best tools to focus on writing. Well, not really. I prefer the younger Lettera 32, which is barely 50 years old (on the right in the picture).

Typewriters are incredibly complex and precise pieces of machinery. At their peak, in the decades around World War II, we built them so well that, today, we don’t need to build any typewriters anymore. We simply have enough of them on earth. You may object that it’s because nobody uses them anymore. That’s not true. Lots of writers keep using them, they became trendy in the 2010s and, to escape surveillance, some secret services have even gone back to using them. It’s a niche market, but a real one.

Let that idea sink in: we basically built enough typewriters for the world in less than a century. If we want more typewriters, the solution is not to build more but to find them in attics and restore them. For most typewriters, restoration is only a matter of taking the time to do it. There are no complex skills or tools involved. Even the most difficult operations can be learned alone, by simple trial and error. The whole theory needed to understand a typewriter is the typewriter itself.

By contrast, we have to change our laptops every three or four years. Our phones every couple of years. And all other pieces of equipment (chargers, routers, modems, printers, …) need to be changed regularly.

Even with proper maintenance, they simply fade out. They are no longer compatible with their environment. It’s impossible for one person alone to understand perfectly what they are doing, let alone repair them. Batteries wear out. Screens crack. Processors become obsolete. Software becomes insecure when it doesn’t crash or refuse to launch.

It’s not that you changed anything in your habits. You still basically communicate with people, look for information, watch videos. But today your work is on Slack. Which requires a modern CPU to load the interface of what is basically a slick IRC. Your videoconference software uses a new codec which requires a new processor. And a new wifi router. Your mail client is now 64 bits only. If you don’t upgrade, you are left out in the cold.

Of course, computers are not typewriters. They do a lot more than typewriters.

But could we imagine a computer built like a typewriter? A computer that could stay with you for your lifetime and get passed to your children?

Could we build a computer designed to last at least fifty years?

Well, given how we use the resources of our planet, the question is not if we could or not. We need to do it, no matter what.

So, how could we build a computer to last fifty years? That’s what I want to explain in this essay. In my notes, I’m referring to this object as the #ForeverComputer. You may find a better name. It’s not really important. It’s not the kind of object that will have a yearly keynote to present the shiny new model and ads everywhere telling us how revolutionary it is.

Focusing on timeless use cases

There’s no way we can predict what will be the next video codec or the next wifi standard. There’s no point in trying to do it. We can’t even guess what kind of online activity will be trendy in the next two years.

Rather than trying to do it all, we could instead focus on building a machine that does timeless activities and does them well. My typewriter from 1944 is still typing. It is still doing something I find useful. Instead of trying to create a generic gaming station/Netflix watching computer, let’s accept a few constraints.

The machine will be built to communicate in written form. It means writing and reading. That already covers a lot of use cases. Writing documents. Writing emails. Reading mails, documents, ebooks. Searching the network for information. Reading blogs and newsletters and newsgroups.

It doesn’t seem much but, if you think about it, it’s already a lot. Lots of people would be happy to have a computer that does only that. Of course, graphic designers, movie makers and gamers would not be happy with such a computer. That’s not the point. It’s just that we don’t need a full-fledged machine all the time. Dedicated and powerful workstations would still exist but could be shared, or renewed less often, if everybody had access to their own writing and reading device.

By constraining the use cases, we create lots of design opportunities.


The goal of the 50-year computer is not to be tiny, ultra-portable and ultra-powerful. Instead, it should be sturdy and resilient.

Back in the typewriter’s day, a 5 kg machine was considered ultraportable. Being used to a 900 g MacBook and feeling that my 1.1 kg Thinkpad was bulky, I could not imagine carrying anything heavier. But, as I started to write on a Freewrite (pictured between my typewriters), I realised something important. If we want to create long-lasting objects, the objects need to be able to create a connection with us.

A heavier and well-designed object feels different. You don’t always have it with you just in case. You don’t throw it in your bag without thinking about it. It is not there to relieve you from your boredom. Instead, moving the object is a commitment. A conscious act that says you need it. You feel it in your hands, you feel the weight. You are telling the object: « I need you. You have a purpose. » When such a commitment is made, the purpose is rarely « scroll an endless stream of cat videos ». Having a purpose makes it harder to throw the object away because a shiny new version has been released. It also helps draw the line between the times when you are using the object and the times you are not.

Besides sturdiness, one main objective of the ForeverComputer would be to use as little electricity as possible. Batteries should be easily swappable.

In order to stay relevant for the next 50 years, the computer needs to be made of easily replaceable parts. Inspirations are the Fairphone and the MNT Reform laptop. The specifications of all the parts need to be open source so anybody can produce them, repair them or even invent alternatives. The parts could be separated into a few logical blocks: the computing unit, which includes a motherboard, CPU and RAM; the powering unit, aka the battery; the screen; the keyboard; the networking unit; the sound unit; and the storage unit. All of this comes in a case.

Of course, each block could be made of separate components that could be fixed, but making clear logical blocks with defined interfaces allows for easier compatibility.

The body requires special attention because it will be the essence of the object. As with the Ship of Theseus, the computer may stay the same even if you replace every part. But the enclosing case is special. As long as you keep the original case, the feeling toward the object will be that nothing has changed.

Instead of being mass-produced in China, ForeverComputers could be built locally, from open source blueprints. Manufacturers could bring their own skills and their own experience to the game. We could go as far as linking each ForeverComputer to a system like Mattereum where modifications and repairs would be listed. Each computer would thus be unique, with a history of ownership.

As with the Fairphone, the computer should be built with materials that are as ethical as possible. If you want to create a connection with an object, if you want to give it a soul, that object should be as respectful of your ethical principles as possible.

Opinionated choices

As we made the choice to mostly use the computer for written interaction, it makes sense, given the current state of the technology, to use an e-ink screen. E-ink screens save a lot of power. This could make all the difference between a device that you need to recharge every night, replacing the battery every two years, and a device that basically sits idle for days, sometimes weeks, and that you recharge once in a while. Or that you never need to recharge if, for example, the external protective case comes with solar panels or an emergency crank.

E-ink is currently harder to use with mice and pointing devices. But we may build the computer without any pointing device. Geeks and programmers know the benefits of keyboard-oriented workflows. They are efficient but hard to learn.

With dedicated software, this problem could be addressed smartly. The Freewrite has a dedicated part of the screen, mostly used for text statistics or displaying the time. The concept could be extended to display available commands. Most people are ready to learn how to use their tools. But, by changing the interface all the time with unexpected upgrades, by asking designers to innovate instead of being useful, we forbid any long-term learning, treating users as idiots instead of empowering them.

Can we create a text-oriented user interface with a gradual learning curve? For a device that should last fifty years, it makes sense. By essence, such a device should reveal itself, unlocking its powers gradually. Careful design will not be about « targeting a given customer segment » but « making it useful to humans who took the time to learn it ».

Of course, one could imagine replacing the input block to get a keyboard with a pointing device, like the famous Thinkpad red dot. Or a USB mouse could be connected. Or the screen could be a touchscreen. But what if we tried to see how far we could go without those?

E-ink and no pointing device would kill the endless scrolling, forcing us to think of the user interface as a textual tool that should be efficient and serve the user, even if it requires some learning. Tools need to be learned and cared for. If you don’t need to learn it, if you don’t need to care for it, then it’s probably not a tool. You are not using it, you are the one being used.

Of course, this doesn’t mean that every user should learn to program in order to be able to use it. A good durable interface requires some learning but doesn’t require any complex mental model. You understand intuitively how a typewriter works. You may have to learn some more complex features like tabulations. But you don’t need to understand how the inside mechanism works to bring the paper forward with each key press.

Offline first

Our current devices expect to be online all the time. If you are disconnected for whatever reason, you will see plenty of notifications, plenty of errors. In 2020, macOS users infamously discovered that their OS was sending lots of information to Apple’s servers because, for a few hours, those servers were not responding, resulting in an epidemic of bugs and errors. At the same time, simply trying to use my laptop offline allowed me to spot a bug in the Regolith Linux distribution. Expecting to be online, a small applet was trying to reconnect furiously, using all the available CPU. The bug had never been caught before because very few users go offline for an extended period of time (it should be noted that it was fixed in the hours following my initial report; open source is great).

This permanent connectivity has a deep effect on our attention and on the way we use computers. By default, the computer notifies us all the time with sounds and popups. Disabling those requires heavy configuration and sometimes hacks. On macOS, for example, you can’t enable Do Not Disturb mode permanently. By design, not being disturbed is something that should be rare. The hack I used was to configure the mode to be set automatically between 3AM and 2AM.

When you are online, your brain knows that something might be happening, even without notification. There might be a new email waiting for you. A new something on a random website. It’s there, right on your computer. Just move the current window out of the way and you may have something that you are craving: newness. You don’t have to think. As soon as you hit some hard thought, your fingers will probably spontaneously find a diversion.

But this permanent connectivity is a choice. We can design a computer to be offline first. Once connected, it will synchronise everything that needs to be: mails will be sent and received, news and podcasts will be downloaded from your favourite websites and RSS feeds, files will be backed up, some websites or Gemini pods could even be downloaded to a given depth. This would be something conscious. The state of your sync will be displayed full screen. By default, you would not be allowed to use the computer while it is online. You would verify that all the sync is finished, then take the computer back offline. Of course, the full screen could be bypassed but you would need to consciously do it. Being online would not be the mindless default.

This offline first design would also have a profound impact on the hardware. It means that, by default, the networking block could be wired. All you need is a simple RJ-45 plug.

We don’t know how wifi protocols will change. There is a good chance that today’s wifi will not be supported by tomorrow’s routers, or only as a fallback alternative. But chances are that RJ-45 will stay for at least a few decades. And if not RJ-45, a simple adaptor could be printed.

Wifi has other problems: it’s a power hog. It needs to constantly scan in the background. It is unreliable and complex. If you want to briefly connect to wifi, you need to enable wifi, wait for the background scan, choose the network to connect to, cross your fingers that it is not some random access point that wants to spy on your data, enter the password. Wait. Re-enter that password because you probably wrote a zero instead of an O. Wait. It looks to be connected. Is it? Are the files synchronised? Why was the connection interrupted? Am I out of range? Are the walls too thick?

By contrast, all of this could be achieved by plugging in an RJ-45 cable. Is there a small green or orange light? Yes? Then the cable is plugged in correctly, problem solved. This also adds to the consciousness of connection. You need to walk to a router and physically connect the cable. It feels like loading the tank with information.

Of course, the open source design means that anybody could produce a wifi or 5G network card that you could plug in a ForeverComputer. But, as with pointing devices, it is worth trying to see how far we could go without it.

Introducing peer-to-peer connectivity

The Offline First paradigm leads to a new era of connectivity: physical peer-to-peer. Instead of connecting to a central server, you could connect two random computers with a simple cable.

During this connection, both computers will tell each other what they need and, if by any chance they can answer one of those needs, they will. They could also transmit encrypted messages for other users, like bottles thrown into the sea. If you ever happen to meet Alice, please give her this message.

Peer-to-peer connectivity implies strong cryptography. Private information should be encrypted with no other metadata than the recipient. The computer connecting to you has no idea if you are the original sender or just one node in the transmission chain. Public information should be signed, so you are sure that it comes from a user you trust.

This also means that our big hard disks would be used fully. Instead of sitting on a lot of empty disk space, your storage will act as a carrier for others. When full, it will smartly erase older and probably less important stuff.
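To make the carrier idea concrete, here is a toy sketch (all names are hypothetical, and this is nowhere near a real delay-tolerant network implementation) of nodes exchanging opaque, recipient-addressed bundles and erasing the oldest ones when full:

```python
from collections import OrderedDict

class RelayNode:
    """Toy sketch of the store-and-forward exchange described above.

    Bundles are opaque (assumed already encrypted) blobs addressed to a
    recipient ID, the only metadata a carrier sees.
    """

    def __init__(self, node_id, capacity=100):
        self.node_id = node_id
        self.capacity = capacity
        self.store = OrderedDict()   # bundle_id -> (recipient, blob)

    def add_bundle(self, bundle_id, recipient, blob):
        if bundle_id in self.store:
            return
        # When full, erase the oldest bundles first.
        while len(self.store) >= self.capacity:
            self.store.popitem(last=False)
        self.store[bundle_id] = (recipient, blob)

    def deliver(self):
        """Pop and return the bundles addressed to this node."""
        mine = {bid: blob for bid, (rcpt, blob) in self.store.items()
                if rcpt == self.node_id}
        for bid in mine:
            del self.store[bid]
        return mine

    def exchange(self, other):
        """Simulate plugging two computers together: swap stored bundles."""
        for bid, (rcpt, blob) in list(self.store.items()):
            other.add_bundle(bid, rcpt, blob)
        for bid, (rcpt, blob) in list(other.store.items()):
            self.add_bundle(bid, rcpt, blob)
```

A bundle hops from Alice to Bob through any carrier that happens to meet both, without either end knowing the route in advance.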

In order to use my laptop offline, I downloaded Wikipedia, with pictures, using the software Kiwix. It only takes 30 GB of my hard drive and I’m able to have Wikipedia with me all the time. I only miss a towel to be a true galactic hitchhiker.

In this model, big centralised servers only serve as gateways to make things happen faster. They are not required anymore. If a central gateway disappears, it’s not a big deal.

But it’s not only about Wikipedia. Protocols like IPFS may allow us to build a whole peer-to-peer and serverless Internet. In some rural areas of the planet where broadband is not easily available, such Delay Tolerant Networks (DTNs) are already working and extensively used, including to browse the web.


It goes without saying that, in order to build a computer that could be used for the next 50 years, every piece of software should be open source.

Open source means that bugs and security issues can be fixed long after the company that wrote the code has disappeared. Once again, look at typewriters. Most companies have disappeared or have been transformed beyond any recognition (try to bring your IBM Selectric back to an IBM dealer and ask for a repair, just to see the look on their faces. And, yes, your IBM Selectric is probably exactly 50 years old). But typewriters are still a thing because you don’t need a company to fix them for you. All you need is a bit of time, dexterity and knowledge. For missing parts, other typewriters, sometimes from other brands, can be scavenged.

For a fifty-year computer to hit the market, we need an operating system. This is the easiest part, as the best operating systems out there are already open source. We also need a user interface dedicated to our particular needs. This is hard work but doable.

The peer-to-peer, offline-first networking is probably the most challenging part. As said previously, essential pieces like IPFS already exist. But everything needs to be glued together with a good user interface.

Of course, it might make sense to rely on some centralised servers first. For example, building on Debian and managing to get all dedicated features uploaded as part of the Official Debian repository already offers some long-term guarantees.

The main point is to switch our psychological stance about technological projects. Let’s scrap the Silicon Valley mentality of staying stealthy and then suddenly trying to grab as much market share as possible in order to hire more developers.

The very fact that I’m writing this in public is a commitment to the spirit of the project. If we ever manage to build a computer which is usable in 50 years and I’m involved, I want it highlighted that, since the first description, everything was done in the open and free.

More about the vision

A computer built to last 50 years is not about market shares. It’s not about building a brand, raising money from VC and being bought by a monopoly. It’s not about creating a unicorn or even a good business.

It’s all about creating a tool to help humanity survive. It’s all about taking the best of 8 billion brains to create this tool instead of hiring a few programmers.

Of course, we all need to pay bills. A company might be a good vehicle to create the computer or at least parts of it. There’s nothing wrong with a company. In fact, I think that a company is currently the best option. But, since the very beginning, everything should be built by considering that the product should outlast the company.

Which means that customers will buy a tool. An object. It will be theirs. They could do whatever they want with it afterward.

It seems obvious but, nowadays, nearly every high-tech item we have is not owned by us. We rent them. We depend on the company to use them. We are not allowed to do what we want. We are even forced to do things we don’t want, such as upgrading software at an inappropriate time, sending data about ourselves, hosting software we don’t use that can’t be removed, or using proprietary clouds.

When you think about it, the computer built to last 50 years is trying to address the excessive consumption of devices, to fight monopolies, to claim back our attention, our time and our privacy, and to free us from abusive industries.

Isn’t that a lot for a single device? No, because those problems are all different facets of the same problem. You can’t fight them separately. You can’t fight on their own grounds. The only hope? Changing the ground. Changing the rules of the game.

The ForeverComputer is not a replacement. It will not be better than your MacBook or your android tablet. It will not be cheaper. It will be different. It will be an alternative. It will allow you to use your time on a computer differently.

It doesn’t need to replace everything else to win. It just needs to exist. To provide a safe place. Mastodon will never replace Twitter. Linux desktop never replaced Windows. But they are huge successes because they exist.

We can dream. If the concept becomes popular enough, some businesses might try to become compatible with that niche market. Some popular websites or services may try to become available on a device which is offline most of the time, which doesn’t have a pointer by default and which has only an e-ink screen.

Of course, those businesses would need to find something other than advertising, click rates and views to earn money. That’s the whole point. Each opportunity to replace an advertising job (which includes all the Google and Facebook employees) with an honest way to earn money is a step toward destroying our planet a bit less.

Building the first layers

There’s a fine equilibrium at play when an innovation tries to change our relationship with technology. In order to succeed, you need technologies, a product and content. Most technologists try to build technologies first, then products on top of them, then wait for content. It either fails or becomes a niche thingy. To succeed, there should be a game of back and forth between those steps. People should gradually use the new products without realising it.

The ForeverComputer that I described here would never gain real traction if released today. It would be incompatible with too much of the content we consume every day.

The first very small step I imagined is building some content that will already be compatible later. Not being a hardware guy (I’m a writer with a software background), it’s also the easiest step I could take today myself.

I call this first step WriteOnly. It doesn’t exist yet but is a lot more realistic than the ForeverComputer.

WriteOnly, as I imagine it, is a minimalist publishing tool for writers. The goal is simple: write markdown text files on your computer. Keep them. And let WriteOnly publish them. The readers will choose how they read you. They can read it on a website like a blog, receive your text by email or RSS if they subscribe, or they can choose to read you through Gemini or DAT or IPFS. They may receive a notification through a social network or through the fediverse. It doesn’t matter to you. You should not care about it, just write. Your text files are your writing.

Features are minimal. No comments. No tracking. No statistics. Pictures are dithered in greyscale by default (a format that allows them to be incredibly light while staying informative and sharper than full-colour pictures when displayed on an e-ink screen).
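Greyscale dithering of this kind is classically done with Floyd–Steinberg error diffusion, which rounds each pixel to black or white and spreads the rounding error to its neighbours. A minimal sketch in plain Python (an illustration of the standard algorithm, not any actual WriteOnly code):

```python
def floyd_steinberg(pixels):
    """Dither a grayscale image (rows of 0-255 ints) to pure black/white.

    Each pixel is rounded to 0 or 255 and the rounding error is diffused
    to the not-yet-visited neighbours with the classic 7/16, 3/16, 5/16,
    1/16 weights, preserving the perceived brightness of each region.
    """
    h, w = len(pixels), len(pixels[0])
    img = [row[:] for row in pixels]  # work on a copy
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 255 if old >= 128 else 0
            img[y][x] = new
            err = old - new
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16      # right
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16  # below-left
                img[y + 1][x] += err * 5 / 16          # below
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16  # below-right
    return img
```

The resulting image is one bit per pixel, which is why dithered pictures compress so well while remaining crisp on an e-ink display.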

The goal of WriteOnly is to stop writers worrying about where to post a particular piece. It’s also a fight against censorship and cultural conformity. Writers should not try to write to please the readers of a particular platform according to the metrics of that platform’s moguls. They should connect with their inner selves and write, launching words into the ether.

We never know what will be the impact of our words. We should set our writing free instead of reducing it to a marketing tool to sell stuff or ourselves.

The benefit of a platform like WriteOnly is that adding a new method of publishing would automatically add all the existing content to it. The end goal is to have your writing available to everyone without being hosted anywhere. It could be through IPFS, DAT or any new blockchain protocol. We don’t know yet but we can already work on WriteOnly as an open source platform.

We can also already work on the ForeverComputer. There will probably be different flavours. Some may fail. Some may reinvent personal computing as we know it.

At the very least, I know what I want tomorrow.

I want an open source, sustainable, decentralised, offline-first and durable computer.

I want a computer built to last 50 years and sit on my desk next to my typewriter.

I want a ForeverComputer.

Make it happen

As I said, I’m a software guy. I’m unlikely to make a ForeverComputer happen alone. But I still have a lot of ideas on how to do it. I also want to focus on WriteOnly first. If you think you could help make it a reality and want to invest in this project, contact me on lionel at

If you would like to use a ForeverComputer or WriteOnly, you can either follow this blog (which is mostly in French) or subscribe here to a dedicated mailing list. I will not sell those emails, I will not share them and I will not use them for anything other than telling you about the project when it becomes reality. In fact, there’s a good chance that no mail will ever be sent to that dedicated mailing list. And to make things harder, you will have to confirm your email address by clicking on a link in a confirmation mail written in French.


Further Reads

« The Future of Stuff », by Vinay Gupta. A short, must-read book about our relationship with objects and manufacturing.

« The Typewriter Revolution », by Richard Polt. A complete book and guide about the philosophy behind typewriters in the 21st century. Who is using them, why and how to use one yourself in an era of permanent connectivity.

NinjaTrappeur built a home-made digital typewriter with an e-ink screen in a wooden case:

Another DIY project with an e-ink screen and a solar panel included:

SL is using an old and experimental operating system (Plan9) which allows him to do only what he wants (mails, simple web browsing and programming).

Two artists living off the grid on a sail boat and connecting only rarely.

« If somebody would produce a simple typewriter, an electronic typewriter that was silent, that I could use on airplanes, that would show me a screen of 8 1/2 by 11, like a regular page, and I could store it and print it out as a manuscript, I would buy one in a second! » (Harlan Ellison, SF writer and Star Trek screenwriter)

LowTech magazine has an excellent article about low-tech Internet, including Delay Tolerant Networks.

Another LowTech magazine article about the impact typewriters and computers had on office work.

UPDATE 6th Feb 2020 : Completely forgot about Scuttlebutt, which is an offline-first, p2p social network. It does exactly what I’m describing here to communicate.

A good very short introduction about it on BoingBoing :

UPDATE 8th Feb 2020 : The excellent « Tales from the Dork Web » has an issue on The 100 Year Computer which is strikingly similar to this piece.

I also add this attempt at an offline-first protocol, the Pigeon protocol:

And another e-ink DIY typewriter :

UPDATE 15th Feb 2020 : Designer Micah Daigle has proposed the concept of the Prose, an e-ink/distraction free laptop.

I am @ploum, engineer and writer. Subscribe by email or RSS so you don’t miss any post (max 2 per week). I’m convinced that Printeurs, my latest science-fiction novel, will fascinate you. Ordering my books is the best way to support me and to help me spread my ideas!


This text is published under the CC-By BE license.

February 02, 2021

I published the following diary on “New Example of XSL Script Processing aka ‘Mitre T1220’”:

Last week, Brad posted a diary about TA551. A few days later, one of our readers submitted another sample belonging to the same campaign. Brad had a look at the traffic, so I decided to have a look at the macro, not because the code is heavily obfuscated but because the data are spread across different locations in the Word document… [Read more]

The post [SANS ISC] New Example of XSL Script Processing aka “Mitre T1220” appeared first on /dev/random.

February 01, 2021

FOSDEM is all about talks, so we're going to delve a bit deeper in that topic. FOSDEM 2021 Online will happen through, our main website. Each talk is in a room, and each room maps to a virtual room on, combining video and Q&A. We'll add a link to each room, but the virtual room name is the same as the room name minus the bit before the dot. So you can find the M.misc room at You do not need an account to view the talks in the virtual room. Find more details on the…

January 31, 2021

FOSDEM is in 6 days, so now is a good time to check whether you can visit the conference. We're going to walk through all of the bits and pieces so you are fully prepared on Saturday and Sunday between 10am and 6pm CET. FOSDEM 2021 Online will happen through, our main website. Each talk is in a room, and each room maps to a virtual room on, combining video and Q&A. We'll add a link to each room, but the virtual room name is the same as the room name minus the bit before the dot.…