Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

January 22, 2022

Obsidian’s Gems of the Year 2021 nomination has been a great source of cool ideas to add tweaks to my Obsidian setup.

In particular, Quick Capture (mac/iOS) and Inbox Processing was a great gem to uncover as I try and implement the weekly review stage of my Second Brain/PARA setup!

I noticed that the archive/move script was a little slow, taking several seconds to open up the dialog for selecting a folder, breaking my flow. I checked the code and noticed it built a set of folders recursively.

I simplified the code for my use case, removing the archive folder path and instead using the file explorer’s built-in move dialog (which is much faster) plus a callback to advance to the next file.

The resulting gist is Obsidian: Archive current file and then open next file in folder (Templater script) · GitHub

I’m sure it could be improved further if I understood the execution, variable scope, and callback model better, but this is good enough for me!

I get very little coding time these days, and I hate working in an environment I haven’t had a chance to really master yet. It’s all trial and error through editing a javascript file in a markdown editor with no syntax highlighting. But it’s still a nice feeling when you can go in and out of a code base in a few hours and scratch the itch you had.


January 21, 2022

x230

I already run coreboot on my Lenovo W500 with FreeBSD. When I bought a Lenovo x230 for a nice price, I decided to install coreboot on it as well. After reading a lot of online documentation, I settled on the skulls coreboot distribution. The skulls project has nice documentation on how to install it.

To replace the BIOS with coreboot you will need to disassemble the laptop and attach a clip to the BIOS chip to flash it.

During the installation, I followed the links below:

As my installation notes might be useful to other people, I decided to turn them into this blog post.

Update the x230 BIOS

The first step is to update the BIOS and the EC firmware to the latest stable version. You cannot update the EC firmware once coreboot is installed, unless you restore the original BIOS first.

I downloaded the BIOS Update (Bootable CD) from:

https://pcsupport.lenovo.com/be/en/products/laptops-and-netbooks/thinkpad-x-series-laptops/thinkpad-x230/downloads/ds029187

and updated the BIOS using an external USB CD-ROM drive.

Requirements

To flash the BIOS you’ll need a Raspberry Pi and a SOIC 8 test clip.

Prepare the Raspberry Pi

I used a Raspberry Pi 1 Model B to flash coreboot.

Install Raspberry Pi OS

Download the latest 32-bit Raspberry Pi OS image from:

https://www.raspberrypi.com/software/operating-systems/
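We'll use flashrom to talk to the flash chip. flashrom is packaged in Raspberry Pi OS, so after booting the image you can install it with apt (assuming the default repositories):

pi@raspberrypi:~ $ sudo apt install flashrom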

Enable the SPI port

The SPI port isn't enabled by default on Raspberry Pi OS, so we'll need to enable it.

Open /boot/config.txt in your favourite text editor.

root@raspberrypi:/boot# cd /boot
root@raspberrypi:/boot# vi config.txt
# Uncomment some or all of these to enable the optional hardware interfaces
#dtparam=i2c_arm=on
#dtparam=i2s=on
dtparam=spi=on

And reboot your Raspberry Pi.

root@raspberrypi:/boot# reboot
root@raspberrypi:/boot# Connection to pi1 closed by remote host.
Connection to pi1 closed.
[staf@vicky ~]$ 
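As a quick sanity check (not strictly necessary, but cheap to do), verify that the SPI device nodes exist after the reboot; with dtparam=spi=on you should see /dev/spidev0.0 and /dev/spidev0.1:

pi@raspberrypi:~ $ ls /dev/spidev*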

Flashing

Open the Laptop

Open your laptop and pull the protective film to get access to the two BIOS chips.

The blog post from Chuck Nemeth, https://www.chucknemeth.com/laptop/lenovo-x230/flash-lenovo-x230-coreboot, has some nice pictures of this step.

Wiring

I used the wiring diagram from the skulls project:

https://github.com/merge/skulls/blob/master/x230/README.md

Pin  Clip (25xx signal)  Raspberry Pi (physical pin)
1    CS                  24
2    MISO                21
3    not used            not used
4    GND                 25
5    MOSI                19
6    CLK                 23
7    not used            not used
8    3.3V                not connected (see below)

I didn't connect the 3.3V line: the 3.3V rail on the Raspberry Pi isn't stable enough.

You can use a separate power supply instead.

Another trick is to connect the network cable and the power supply to the x230; this way you get a stable 3.3V connection.

I used the latter method.

Test

Test the connection to your flash chip with flashrom. I started the test at a low speed, specifying spispeed=512, to get the connection established.

Sometimes it helps to execute the flashrom command twice.

pi@raspberrypi:~ $ flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=512
flashrom v1.2 on Linux 5.10.63+ (armv6l)
flashrom is free software, get the source code at https://flashrom.org

Using clock_gettime for delay loops (clk_id: 1, resolution: 1ns).
Found Macronix flash chip "MX25L6405" (8192 kB, SPI) on linux_spi.
Found Macronix flash chip "MX25L6405D" (8192 kB, SPI) on linux_spi.
Found Macronix flash chip "MX25L6406E/MX25L6408E" (8192 kB, SPI) on linux_spi.
Found Macronix flash chip "MX25L6436E/MX25L6445E/MX25L6465E/MX25L6473E/MX25L6473F" (8192 kB, SPI) on linux_spi.
Multiple flash chip definitions match the detected chip(s): "MX25L6405", "MX25L6405D", "MX25L6406E/MX25L6408E", "MX25L6436E/MX25L6445E/MX25L6465E/MX25L6473E/MX25L6473F"
Please specify which chip definition to use with the -c <chipname> option.
pi@raspberrypi:~ $ 

When the connection is stable you can try it without the spispeed setting.

pi@raspberrypi:~ $ flashrom -p linux_spi:dev=/dev/spidev0.0
flashrom v1.2 on Linux 5.10.63+ (armv6l)
flashrom is free software, get the source code at https://flashrom.org

Using clock_gettime for delay loops (clk_id: 1, resolution: 1ns).
Using default 2000kHz clock. Use 'spispeed' parameter to override.
Found Macronix flash chip "MX25L6405" (8192 kB, SPI) on linux_spi.
Found Macronix flash chip "MX25L6405D" (8192 kB, SPI) on linux_spi.
Found Macronix flash chip "MX25L6406E/MX25L6408E" (8192 kB, SPI) on linux_spi.
Found Macronix flash chip "MX25L6436E/MX25L6445E/MX25L6465E/MX25L6473E/MX25L6473F" (8192 kB, SPI) on linux_spi.
Multiple flash chip definitions match the detected chip(s): "MX25L6405", "MX25L6405D", "MX25L6406E/MX25L6408E", "MX25L6436E/MX25L6445E/MX25L6465E/MX25L6473E/MX25L6473F"
Please specify which chip definition to use with the -c <chipname> option.
pi@raspberrypi:~ $ 
root@raspberrypi:~/x230# flashrom -c "MX25L6406E/MX25L6408E" -p linux_spi:dev=/dev/spidev0.0,spispeed=512 
flashrom v1.2 on Linux 5.10.63+ (armv6l)
flashrom is free software, get the source code at https://flashrom.org

Using clock_gettime for delay loops (clk_id: 1, resolution: 1ns).
Found Macronix flash chip "MX25L6406E/MX25L6408E" (8192 kB, SPI) on linux_spi.
No operations were specified.
root@raspberrypi:~/x230# 

Backup

The x230 has two BIOS chips.

The top chip is 4MB, the bottom one is 8MB.

The skulls installation scripts will back up the existing BIOS images, but I also created a backup manually. It's also a nice test to verify that the connection is stable.

Get the chip types

It's recommended to verify the BIOS chip types with a magnifying loupe, but I couldn't read the markings on my laptop's chips.

The Lenovo x230 uses the following chip types:

  • bottom ROM: MX25L6406E/MX25L6408E
  • top ROM: MX25L3206E/MX25L3208E

bottom rom

The bottom ROM is “MX25L6406E/MX25L6408E” on the x230.

Read the ROM 3 times.

pi@raspberrypi:~/x230 $ flashrom -c "MX25L6406E/MX25L6408E" -p linux_spi:dev=/dev/spidev0.0 -r bottom_1.rom
flashrom v1.2 on Linux 5.10.63+ (armv6l)
flashrom is free software, get the source code at https://flashrom.org

Using clock_gettime for delay loops (clk_id: 1, resolution: 1ns).
Using default 2000kHz clock. Use 'spispeed' parameter to override.
Found Macronix flash chip "MX25L6406E/MX25L6408E" (8192 kB, SPI) on linux_spi.
Reading flash... done.
pi@raspberrypi:~/x230 $ 
pi@raspberrypi:~/x230 $ flashrom -c "MX25L6406E/MX25L6408E" -p linux_spi:dev=/dev/spidev0.0 -r bottom_2.rom
pi@raspberrypi:~/x230 $ flashrom -c "MX25L6406E/MX25L6408E" -p linux_spi:dev=/dev/spidev0.0 -r bottom_3.rom

And compare the hashes.

pi@raspberrypi:~/x230 $ sha256sum bottom*.rom
593b7ebad463d16ee7474f743883db86dd57c841c36136fe87374151f829d663  bottom_1.rom
593b7ebad463d16ee7474f743883db86dd57c841c36136fe87374151f829d663  bottom_2.rom
593b7ebad463d16ee7474f743883db86dd57c841c36136fe87374151f829d663  bottom_3.rom

top rom

Read the top ROM three times.

pi@raspberrypi:~/x230 $ flashrom -c "MX25L3206E/MX25L3208E" -p linux_spi:dev=/dev/spidev0.0 -r top_1.rom
flashrom v1.2 on Linux 5.10.63+ (armv6l)
flashrom is free software, get the source code at https://flashrom.org

Using clock_gettime for delay loops (clk_id: 1, resolution: 1ns).
Using default 2000kHz clock. Use 'spispeed' parameter to override.
Found Macronix flash chip "MX25L3206E/MX25L3208E" (4096 kB, SPI) on linux_spi.
Reading flash... done.
$ flashrom -c "MX25L3206E/MX25L3208E" -p linux_spi:dev=/dev/spidev0.0 -r top_2.rom
$ flashrom -c "MX25L3206E/MX25L3208E" -p linux_spi:dev=/dev/spidev0.0 -r top_3.rom

And compare the hashes.

pi@raspberrypi:~/x230 $ sha256sum top*.rom
3ab6eafe675817ab9955e7bd4a0f003098c46cfe4016d98184f7c199ebae874a  top_1.rom
3ab6eafe675817ab9955e7bd4a0f003098c46cfe4016d98184f7c199ebae874a  top_2.rom
3ab6eafe675817ab9955e7bd4a0f003098c46cfe4016d98184f7c199ebae874a  top_3.rom
pi@raspberrypi:~/x230 $ 

Copy

Copy the backup ROMs to a safe location.

[staf@vicky ~]$ cd backup/
[staf@vicky backup]$ cd x230/
[staf@vicky x230]$ ls
bottom_1.rom  bottom_2.rom  bottom_3.rom
[staf@vicky x230]$ scp pi@pi1:~/x230/* .
pi@pi1's password: 
bottom_1.rom                                  100% 8192KB   2.9MB/s   00:02    
bottom_2.rom                                  100% 8192KB   2.9MB/s   00:02    
bottom_3.rom                                  100% 8192KB   3.0MB/s   00:02    
top_1.rom                                     100% 4096KB   2.8MB/s   00:01    
top_2.rom                                     100% 4096KB   2.9MB/s   00:01    
top_3.rom                                     100% 4096KB   2.9MB/s   00:01    
[staf@vicky x230]$ 

Flash skulls

Download the skulls project.

Log on to the Raspberry Pi.

[staf@vicky ~]$ ssh pi@pi1
Received disconnect from 192.168.1.23 port 22:2: Too many authentication failures
Disconnected from 192.168.1.23 port 22
[staf@vicky ~]$ ssh pi@pi1
pi@pi1's password: 
Linux raspberrypi 5.10.63+ #1488 Thu Nov 18 16:14:04 GMT 2021 armv6l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sun Jan 16 09:46:51 2022 from 192.168.1.10
pi@raspberrypi:~ $ 

Create a directory to download the skulls release.

pi@raspberrypi:~ $ mkdir skull
pi@raspberrypi:~ $ cd skull

Download the latest skulls release.

$ wget https://github.com/merge/skulls/releases/download/1.0.4/skulls-1.0.4.tar.xz
$ wget https://github.com/merge/skulls/releases/download/1.0.4/skulls-1.0.4.tar.xz.asc
pi@raspberrypi:~/skull $ gpg --verify skulls-1.0.4.tar.xz.asc
gpg: directory '/home/pi/.gnupg' created
gpg: keybox '/home/pi/.gnupg/pubring.kbx' created
gpg: assuming signed data in 'skulls-1.0.4.tar.xz'
gpg: Signature made Thu 16 Dec 2021 12:23:03 GMT
gpg:                using RSA key 15339E3B5F19D8688519D268C7BCBE1E66F0DB3C
gpg: Can't check signature: No public key
pi@raspberrypi:~/skull $ 

The tarball is signed with the “15339E3B5F19D8688519D268C7BCBE1E66F0DB3C” GPG key, which is the public key of Martin Kepplinger, the main developer of the skulls project.
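gpg reports "Can't check signature: No public key" until that key has been imported. One way to fetch it, assuming the key is published on a public keyserver such as keyserver.ubuntu.com, is:

pi@raspberrypi:~/skull $ gpg --keyserver hkps://keyserver.ubuntu.com --recv-keys 15339E3B5F19D8688519D268C7BCBE1E66F0DB3C
pi@raspberrypi:~/skull $ gpg --verify skulls-1.0.4.tar.xz.asc skulls-1.0.4.tar.xz

After the import, gpg should report a good signature from that key (with a warning that the key is not certified by a trusted signature, unless you explicitly mark it as trusted).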

Extract the tarball.

pi@raspberrypi:~/skull $ tar xvf skulls-1.0.4.tar.xz

Go to the extracted directory.

pi@raspberrypi:~/skull $ cd skulls-1.0.4/
pi@raspberrypi:~/skull/skulls-1.0.4 $ 

Flash the bottom rom (8MB)

Verify that you have a stable connection.

pi@raspberrypi:~/skull/skulls-1.0.4 $ flashrom -p linux_spi:dev=/dev/spidev0.0
flashrom v1.2 on Linux 5.10.63+ (armv6l)
flashrom is free software, get the source code at https://flashrom.org

Using clock_gettime for delay loops (clk_id: 1, resolution: 1ns).
Using default 2000kHz clock. Use 'spispeed' parameter to override.
Found Macronix flash chip "MX25L6405" (8192 kB, SPI) on linux_spi.
Found Macronix flash chip "MX25L6405D" (8192 kB, SPI) on linux_spi.
Found Macronix flash chip "MX25L6406E/MX25L6408E" (8192 kB, SPI) on linux_spi.
Found Macronix flash chip "MX25L6436E/MX25L6445E/MX25L6465E/MX25L6473E/MX25L6473F" (8192 kB, SPI) on linux_spi.
Multiple flash chip definitions match the detected chip(s): "MX25L6405", "MX25L6405D", "MX25L6406E/MX25L6408E", "MX25L6436E/MX25L6445E/MX25L6465E/MX25L6473E/MX25L6473F"
Please specify which chip definition to use with the -c <chipname> option.
pi@raspberrypi:~/skull/skulls-1.0.4 $ 

The top chip is 4MB, the bottom one is 8MB.

Execute the external_install_bottom.sh script. With the -m option it will also run me_cleaner to clean the Intel Management Engine.

pi@raspberrypi:~/skull/skulls-1.0.4 $ sudo ./external_install_bottom.sh -m -k /home/pi/x230/skulls_backup_bottom.rom

Select your flashing device.

Skulls

Please select the hardware you use:
1) Raspberry Pi
2) CH341A
3) Exit
Please select the hardware flasher: 1 
Ok. Run this on a Rasperry Pi.
trying to detect the chip...
Detected MX25L6406E/MX25L6408E.
make: Entering directory '/home/pi/skull/skulls-1.0.4/util/ifdtool'
gcc -O2 -g -Wall -Wextra -Wmissing-prototypes -Werror -I../commonlib/include -c -o ifdtool.o ifdtool.c
gcc -o ifdtool ifdtool.o 
Intel ME will be cleaned.
<snip>

Using clock_gettime for delay loops (clk_id: 1, resolution: 1ns).
Found Macronix flash chip "MX25L6406E/MX25L6408E" (8192 kB, SPI) on linux_spi.
Reading old flash chip contents... done.
Erasing and writing flash chip... Erase/write done.
Verifying flash... VERIFIED.
DONE
pi@raspberrypi:~/skull/skulls-1.0.4 $ 

Flash the top (4MB) chip

Power off the Raspberry Pi and connect the clip to the top chip.

pi@raspberrypi:~/skull/skulls-1.0.4 $ sudo poweroff
pi@raspberrypi:~/skull/skulls-1.0.4 $ Connection to pi1 closed by remote host.
Connection to pi1 closed.
[staf@vicky ~]$ 

Verify that you have a stable connection.

pi@raspberrypi:~ $ flashrom -p linux_spi:dev=/dev/spidev0.0
flashrom v1.2 on Linux 5.10.63+ (armv6l)
flashrom is free software, get the source code at https://flashrom.org

Using clock_gettime for delay loops (clk_id: 1, resolution: 1ns).
Using default 2000kHz clock. Use 'spispeed' parameter to override.
Found Macronix flash chip "MX25L3205(A)" (4096 kB, SPI) on linux_spi.
Found Macronix flash chip "MX25L3205D/MX25L3208D" (4096 kB, SPI) on linux_spi.
Found Macronix flash chip "MX25L3206E/MX25L3208E" (4096 kB, SPI) on linux_spi.
Found Macronix flash chip "MX25L3273E" (4096 kB, SPI) on linux_spi.
Multiple flash chip definitions match the detected chip(s): "MX25L3205(A)", "MX25L3205D/MX25L3208D", "MX25L3206E/MX25L3208E", "MX25L3273E"
Please specify which chip definition to use with the -c <chipname> option.
pi@raspberrypi:~ $ 

Go to the skulls directory.

pi@raspberrypi:~ $ cd skull/
pi@raspberrypi:~/skull $ ls
skulls-1.0.4  skulls-1.0.4.tar.xz  skulls-1.0.4.tar.xz.asc
pi@raspberrypi:~/skull $ cd skulls-1.0.4/

And execute the external_install_top.sh script.

pi@raspberrypi:~/skull/skulls-1.0.4 $ sudo ./external_install_top.sh -b x230 -k /home/pi/x230/skulls_top_backup.rom

Select the BIOS that you want to flash.

1) ./x230_coreboot_seabios_free_74d2218cc7_top.rom
2) ./x230_coreboot_seabios_74d2218cc7_top.rom
3) Quit
Please select a file to flash or start with the -i option to use a different one: 1

Select your flashing device.

Please select the hardware you use:
1) Raspberry Pi
2) CH341A
3) Quit
Please select the hardware flasher: 1

Wait for the flashing to complete, then try to boot your system.

Updating

After you've installed coreboot on the x230, you can update the BIOS from the command line. In order to flash the BIOS internally, you'll need to update the kernel boot arguments.

Update grub config

Edit your grub configuration.

# vi /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="iomem=relaxed"
root@x230:/home/staf/github/merge/skulls# /usr/sbin/update-grub
Generating grub configuration file ...
Found background image: /usr/share/images/desktop-base/desktop-grub.png
Found linux image: /boot/vmlinuz-5.10.0-10-amd64
Found initrd image: /boot/initrd.img-5.10.0-10-amd64
done
root@x230:/home/staf/github/merge/skulls# 
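The new kernel parameter only takes effect after a reboot. Before flashing, it doesn't hurt to verify that it is active (this check is my own addition, not part of the skulls instructions):

staf@x230:~$ grep -o iomem=relaxed /proc/cmdline

If nothing is printed, flashrom's internal programmer will typically not be able to access the flash chip.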

Flash

Download the latest skulls release from https://github.com/merge/skulls/.

$  wget https://github.com/merge/skulls/releases/download/1.0.4/skulls-1.0.4.tar.xz

Download the signature file.

staf@x230:~/tmp$ wget https://github.com/merge/skulls/releases/download/1.0.4/skulls-1.0.4.tar.xz.asc

Verify the signature.

staf@x230:~/tmp$ gpg --verify skulls-1.0.4.tar.xz.asc
gpg: keybox '/home/staf/.gnupg/pubring.kbx' created
gpg: assuming signed data in 'skulls-1.0.4.tar.xz'
gpg: Signature made Thu 16 Dec 2021 01:23:03 PM CET
gpg:                using RSA key 15339E3B5F19D8688519D268C7BCBE1E66F0DB3C
gpg: Can't check signature: No public key
staf@x230:~/tmp$

Extract the tar archive.

$ tar xvf skulls-1.0.4.tar.xz

Go to the directory.

$ cd skulls-1.0.4/

Execute the ./skulls.sh script.

staf@x230:~/tmp/skulls-1.0.4$ ./skulls.sh -b x230 -U
You are using the latest version of Skulls
staf@x230:~/tmp/skulls-1.0.4$ 

Have fun!

Links

A regularly reported issue for Autoptimize + Elementor users is that JavaScript optimization breaks the “Edit with Elementor” button in the front-end admin bar. The easiest workaround is to disable the “also optimize for logged in administrators/editors” option, but the code snippet below is more surgical, as it only disables JS optimization, leaving e.g. CSS & image optimization, which do not impact “Edit...

Source

I published the following diary on isc.sans.edu: “Obscure Wininet.dll Feature?“:

The Internet Storm Center relies on a group of Handlers who are volunteers and offer some free time to the community besides our daily job. Sometimes, we share information between us about an incident or a problem that we are facing and ask for help. Indeed, why not request some help from fellow Handlers with broad experience? Yesterday, Bojan was involved in an incident with a customer and came back to us with this question… [Read more]

The post [SANS ISC] Obscure Wininet.dll Feature? appeared first on /dev/random.

January 20, 2022

I published the following diary on isc.sans.edu: “RedLine Stealer Delivered Through FTP“:

Here is a piece of malicious Python script that injects a RedLine stealer into its own process. Process injection is a common attacker’s technique these days (for a long time already). The difference, in this case, is that the payload is delivered through FTP! It’s pretty unusual because FTP is today less and less used for multiple reasons (lack of encryption by default, complex to filter with those passive/active modes). Support for FTP has even been disabled by default in Chrome starting with version 95! But FTP remains a common protocol in the IoT/Linux landscape with malware families like Mirai. My honeypots still collect a lot of Mirai samples on FTP servers. I don’t understand why the attacker chose this protocol because, in most corporate environments, FTP is not allowed by default (and should definitely not be!)… [Read more]

The post [SANS ISC] RedLine Stealer Delivered Through FTP appeared first on /dev/random.

January 18, 2022


The known unknown knowns we lost

When people think of George Orwell's 1984, what usually comes to mind is the orwellianism: a society in the grip of a dictatorial, oppressive regime which rewrote history daily as if it was a casual matter.

Not me though. For whatever reason, since reading it as a teenager, what has stuck was something different and more specific. Namely that as time went on, the quality of all goods, services and tools that people relied on got unquestionably worse. In the story, this happened slowly enough that many people didn't notice. Even if they did, there was little they could do about it, because this degradation happened across the board, and the population had no choice but to settle for the only available options.

I think about this a lot, because these days, I see it everywhere around me. What's more, if you talk and listen to seniors, you will realize they see even more of it, and it's not just nostalgia. Do you know what you don't know?

rotisserie chicken

Chickens roost and sleep in trees

A Chicken in Every Pot

From before I was born, my parents have grown their own vegetables. We also had chickens to provide us with more eggs than we usually knew what to do with. The first dish I ever cooked was an omelette, and in our family, Friday was Egg Day, where everyone would fry their own, any way they liked.

As a result, I remain very picky about the eggs I buy. A fresh egg from a truly free range chicken has an unmistakeable quality: the yolk is rich and deep orange. Nothing like factory-farmed cage eggs, whose yolks are bright yellow, flavorless and quite frankly, unappetizing. Another thing that stands out is how long our eggs would keep in the fridge. Aside from the freshness, this is because an egg naturally has a coating to protect it, when it comes out of the chicken. By washing them aggressively, you destroy this coating, increasing spoilage.

The same goes for the chickens themselves. I learned at an early age what it looks like to chop a chicken's head off with a machete. I also learned that chicken is supposed to be a flavorful meat with a distinct taste. The idea that other things would "taste like chicken" seems preposterous from this point of view. Rather, it's that most of the chicken we eat simply does not taste like chicken anymore. Industrial chickens are raised in entirely artificial circumstances, unhealthy and constrained, and this has a noticeable effect on the development and taste of the animal.

Here's another thing. These days when I fry a piece of store-bought meat, even when it's not frozen, the pan usually fills up with a layer of water after a minute. I have to pour it out, so I can properly brown it at high temperature and avoid steaming it. That's because a lot of meat is now bulked up with water, so it weighs more at the point of sale. This is not normal. If the only exposure you have to meat is the kind that comes in a styrofoam tray wrapped in plastic, you are missing out, and not even realizing it.

tomatoes of all kinds
san marzano canned tomatoes

For vegetables and fruit, there is a similar degradation. Take tomatoes, which naturally bruise easily. In order to make them more suitable for transport, industrial tomatoes have mainly been selected for toughness. This again correlates to more water content. But as a side effect, most tomatoes simply don't taste like proper tomatoes anymore. The flavor that most people now associate with e.g. sun-dried, heirloom tomatoes, is simply what tomatoes used to taste like. Rather than buying them fresh, you are often better off buying canned Italian Roma tomatoes, which didn't suffer quite the same fate. Italians know their tomatoes, even if they are non-native to the country and continent.

For berries, it's the same story. Our yard had several bushes, with blueberries and red berries, and my mom would make jam out of them every year. But on a good day we would just eat them straight from the bush. I can tell you, the ones I buy in the store simply don't taste as good.

There is another angle to this too: preparation. Driven by the desire to serve more customers more quickly, industrial cooks prefer dishes that are easy to assemble and quick to make. But many traditional dishes involve letting stews and sauces simmer for hours at a time in a single pot, developing deep flavors over time. This is simply not compatible with rapid, mass production. It implies that you need to prepare it all ahead of time, in sufficient quantities. When was the last time you ordered something at a chain, and were told they had run out for the day?

Hence these days, growing your own food, raising your own animals, and cooking your own meals is not just a choice about self-sufficiency. It's a choice to favor artisanal methods over mass-scale production, which strongly affects the result. It's a choice to favor varieties for taste rather than what packages, transports and sells easily. To favor methods that are more labor intensive, but which build upon decades, even centuries of experience.

It also echoes a time when the availability of particular foods was incredibly seasonal, and building up preserves for winter was a necessity. People often had to learn to make do with basic, unglamorous ingredients, and they succeeded anyway. Add to this the fact that many countries suffered severe shortages during World War II, which is traceable in the local cuisine, and you end up with a huge amount of accumulated knowledge about food that we're slowly but surely losing.

1950s living room
1950s vision of the future: everything in plastic

Life in Plastic

It's difficult now to imagine a world without plastic. The first true plastic, bakelite, was developed in 1907. Since then, chemistry has delivered countless synthetic materials. But it would take over half a century for plastic to become truly common-place. With our oceans now full of floating micro-plastics, affecting the food chain, this seems to have been a dubious choice.

1950s kitchens

When I look at pictures of households from the 1950s, one thing that stands out to me is the materials. There is far more wood, metal, glass and fabric than there is plastic. These are all heavier materials, but also, tougher. When they did use plastic, the designs often look far bulkier than a modern equivalent. What's also absent is faux-materials: there's no plastic that's been painted glossy to look like metal, or particle board made to look like real wood, or acrylic substituting for real glass.

The problem is simple: when exposed to the UV rays in sunlight, plastic will degrade and discolor. When exposed to strain and tension, tough plastic will crack instead of flex. Hence, when you replace a metal or wooden frame with a plastic one, a product's lifespan will suffer. When it breaks, you can't simply manufacture a replacement using an ordinary tool shop either. Without a 3D printer and highly detailed measurements, you're usually out of luck, because you need one highly specific, molded part, which is typically attached not via explicit screws, but simply held in place via glue or tension. This tension will guarantee that such a part will fail sooner than later.

In fact, I have this exact problem with my freezer. The outside of the door is hooked up to the inside with 4 plastic brackets, each covering a metal piece. The metal is fine. But one plastic piece has already cracked from repeated opening, and probably the temperature shifts haven't helped either. The best thing I could do is glue it back on, because it's practically impossible to obtain the exact replacement I need. Whoever designed this, they did not plan for it to be used more than a few years. For an essential household appliance, this is shameful. And yet it is normal.

Products simply used to have a much longer lifespan. They were built to last and were expected to last. When you bought an appliance, even a small one, it was an investment. Whatever gains were made by producing something that is lighter and easier to transport were undone by the fact that you will now be transporting and disposing of 2 or 3 of them in the same time you used to only need just one.

This is also a difference that you can only notice in the long term. In the short term, people will prefer the cheaper product, even if it's more expensive eventually. Hence, the long-lasting products are pushed out of the market, replaced with imitations that seem more modern and less resource intensive, but which are in fact the exact opposite.

The only way to counter this is if there are sufficient craftsmen and experts around who provide sufficient demand for the "real" thing. If those craftsmen retire without passing on their knowledge, the degradation sets in. Even if the knowledge is passed on, it's worthless if the tools and parts those craftsmen depend on disappear or lose their luster.

This isn't limited to plastic either. Even parts that are made out of metal can be produced in good or bad ways. When cheap alloys replace expensive ones, when tolerances are slowly eroded away down to zero, the result is undeniably inferior. Yet it's difficult to tell without a detailed breakdown of the manufacturing process.

A striking example comes in the form of the Dubai Lamp. These are LED lamps, made specifically for the Dubai market, through an exclusive deal. They're identical in design to the normal ones, except the Dubai Lamp has far more LED filaments: it's designed to be underpowered instead of running close to tolerance. As a result, these lamps last much longer instead of burning out quickly.

Invisible Software

Luckily, the real world still provides plenty of sanity checks. The above is relatively easy to explain, because it can be stated in terms of our primary senses. If food tastes different, if a product feels shoddy and breaks more quickly, it's easy to notice, if you know what to look for.

But one domain where this does not apply at all is software. The reason is simple: software operates so quickly, it's beyond our normal ability to fathom. The primary goal of interactive software is to provide seamless experiences that deliberately hide many layers of complexity. As long as it feels fast enough, it is fast enough, even if it's actually enormously wasteful.

What's more, there's a perverse incentive for software developers here. At a glance, software developers are the most productive when they use the fastest computers: they spend the least amount of time waiting for code to be validated and compiled. In fact, when Apple released the new M1, which was at least 50% faster than the previous generation—sometimes far more—many companies rushed out and bought new laptops for their entire staff, as if it was a no-brainer.

However this has a terrible knock-on effect. If a developer has a machine that's faster than the vast majority of their users, then they will be completely misinformed what the typical experience actually is. They may not even notice performance problems, because a delay is small enough on their machine so as to be unobtrusive. This is made worse by the fact that most developers work in artificial environments, on reduced data sets. They will rarely reach the full complexity of a real world workload, unless they specifically set up tests for that purpose, informed by a detailed understanding of their users' needs.

On a slower machine, in a more complicated scenario, performance will inevitably suffer. For this reason, I make it a point to do all my development on a machine that is several years out of date. It guarantees that if it's fast enough for me, it will be fast enough for everyone. It means I can usually spot problems with my own eyes, instead of needing detailed profiling and analysis to even realize.

This is obvious, yet very few people in our industry do so. They instead prefer to have the latest shiny toys, even if it only provides a temporary illusion of being faster.

Apple Powerbook G4 Titanium (2001)

Dysfunctional Cloud

Where this problem really gets bad is with cloud-based services. The experience you get depends on the speed of your internet connection. Most developers will do their work entirely on their own machine, in a zero-latency environment, which no actual end-user can experience. The way the software is developed prevents everyday problems from being noticed until it's too late, by design.

Only in a highly connected urban environment, with fiber-to-the-door, and very little latency to the data center, will a user experience anything remotely closely to that. In that case, cloud-based software can provide an extremely quick and snappy experience that rivals local software. If not, it's completely different.

There is another huge catch. Implicit in the notion of cloud-based software is that most of the processing happens on the server. This means that if you wish to support twice as many users, you need twice as much infrastructure, to handle twice as many requests. For traditional off-line software, this simply does not apply: every user brings their own computer to the table, and provides their own CPU, memory and storage capacity for what they need. No matter how you structure it, software that can work off-line will always be cheaper to scale to a large user base in the long run.

From this point of view, cloud-based software is a trap in design space. It looks attractive at the start, and it makes it easy to on-board users seamlessly. It also provides ample control to the creator, which can be turned into artificial scarcity, and be monetized. But once it takes off, you are committed to never-ending investments, which grow linearly with the size of your user-base.

This means a cloud-based developer will have a very strong incentive to minimize the amount of resources any individual user can consume, limiting what they can do.

An obvious example is when you compare the experience of online e-mail vs offline e-mail. When using an online email client, you are typically limited to viewing one page of your inbox at a time, showing maybe 50 emails. If you need to find older messages, the primary way of doing so is via search; this search functionality has to be implemented on the server, indexed ahead of time, with little to no customization. There is also a functionality gap between the email itself and the attachments: the latter have to be downloaded and accessed separately.

In an offline email client, you simply have an endless inbox, which you can scroll through at will. You can search it whenever you want, even when not connected. And all the attachments are already there, and can be indexed by the OS' search mechanism. Even a cheap computer these days has ample resources to store and index decades worth of email and files.

Mozilla Thunderbird with integrated RSS

The New News

To illustrate the problems with monetization, you need only look at the average news site. To provide a source of income, they harvest data from their visitors, posting clickbait to attract them. But driven by GDPR and similar privacy laws, they now all have cookie dialogs, which make visiting such a site a miserable experience. As long as you keep rejecting cookies, you will keep having to reject cookies. Once you agree, you can no longer revoke consent. The geniuses who drafted such laws did not anticipate the obvious exception of letting sites set a single, non-identifiable "no" cookie, which would apply in perpetuity. Or likely they did, but it was lobbied out of consideration.

That's not all. In the early days of GDPR, these dialogs used to provide you with an actual choice, even if they did so reluctantly. But nowadays, even that has gone out of the window. Through the ridiculous concept of "legitimate interest", many now require you to explicitly object to fingerprinting and tracking, on a second panel which is buried. Simply clicking "Disagree" is not sufficient, because that button still means you agree to being "legitimately" tracked, for all the same purposes they used to need cookies for, including ad personalization. Fully objecting means manually unselecting half a dozen options with every visit, sometimes more.

Illegitimate interest

The worst part is the excuse used to justify this: that newspapers have to make their money somehow. Yet this is a sham, because to my knowledge, no news site out there turns off the tracking for paying subscribers. You can pay to remove ads, but you can't pay to remove tracking. Why would they, when it's leaving money on the table, and fully legal? The resulting data sets are simply more valuable the more comprehensive they are.

In a different world, most people would do most of their reading via a subscription mechanism such as RSS. A social media client would be an aggregator that builds a feed from a variety of sources. Tracking users' interests would be difficult, because the act of reading is handled by local software.

Of course we can expect that in such a world, news sites would still try to use tracking pixels and other dubious tricks, but, as we have seen with email, remote images can be blocked, and it would at least give users a fighting chance to keep some of their privacy.

People whose documents were removed from Google Docs

* * *

The conclusion seems obvious to me: the same kind of incentives that made industrial food what it is, and industrial manufacturing what it is, have made industrial software worse for everyone. And whereas web browsing used to be exactly that, browsing, it now means an active process where you are being tagged and tracked by software that spans a large chunk of the web, which makes the entire experience unquestionably worse.

The analogy is even stronger, because the news now seems equally bland and tasteless as the tomatoes most of us buy. The lore of RSS and distributed protocols has mostly been lost, and many software developers do not have the skills necessary to make off-line software a success in a connected world. Indeed, very few even bother to try.

It has all happened gradually, just like in 1984, and each individual has little power to stop it, except through their own choices.

Under the guise of progress, we tend to assume that changes are for the better, that the economy drives processes towards greater efficiency and prosperity. Unfortunately it's a fairy tale, a story contradicted by experience and lore, and something we can all feel in our bones.

The solution is to adopt a long-term perspective, to weigh choices over time instead of for convenience, and to think very carefully about what you give up. When you let others control the terms of engagement, don't be surprised if under the cover of polite every-day business, they absolutely screw you over.

January 17, 2022

In my previous post, I explained how I recently set up backups for my home server to be synced using Amazon's services. I received a (correct) comment on that by Iustin Pop which pointed out that while it is reasonably cheap to upload data into Amazon's offering, the reverse -- extracting data -- is not as cheap.

He is right, in that extracting data from S3 Glacier Deep Archive costs over an order of magnitude more than it costs to store it there on a monthly basis -- in my case, I expect to have to pay somewhere in the vicinity of 300-400 USD for a full restore. However, I do not consider this to be a major problem, as these backups are only to fulfill the rarer of the two types of backups cases.

There are two reasons why you should have backups.

The first is the most common one: "oops, I shouldn't have deleted that file". This happens reasonably often; people will occasionally delete or edit a file that they did not mean to, and then they will want to recover their data. At my first job, a significant part of my job was to handle recovery requests from users who had accidentally deleted a file that they still needed.

Ideally, backups to handle this type of situation are easily accessible to end users, and are performed reasonably frequently. A system that automatically creates and deletes filesystem snapshots (such as the zfsnap script for ZFS snapshots, which I use on my server) works well. The crucial bit here is to ensure that it is easier to copy an older version of a file than it is to start again from scratch -- if a user must file a support request that may or may not be answered within a day or so, it is likely they will not do so for a file they were working on for only half a day, which means they lose half a day of work in such a case. If, on the other hand, they can just go into the snapshots directory themselves and it takes them all of two minutes to copy their file, then they will also do that for files they only created half an hour ago, so they don't even lose half an hour of work and can get right back to it. This means that backup strategies to mitigate the "oops I lost a file" case ideally do not involve off-site file storage, and instead are performed online.
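As a rough sketch of what that self-service recovery looks like with ZFS snapshots (the dataset name tank/home and the snapshot name are made up for the example; zfsnap uses its own naming scheme):

$ zfs list -t snapshot -r tank/home
$ ls /tank/home/.zfs/snapshot/
$ cp /tank/home/.zfs/snapshot/2022-01-16_00.00.00--1m/report.odt ~/report.odt

Every ZFS dataset exposes its snapshots read-only under the hidden .zfs/snapshot directory, so copying back an older version of a file really is a two-minute job.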

The second case is the much rarer one, but (when required) has the much bigger impact: "oops, the building burned down". Variants of this can involve things like lightning strikes, thieves, earthquakes, and the like; in all cases, the point is that you want to be able to recover all your files, even if every piece of equipment you own is no longer usable.

That being the case, you will first need to replace that equipment, which is not going to be cheap, and it is also not going to be an overnight thing. In order to still be useful after you have lost all your equipment, these backups must also be stored off-site, and should preferably be offline backups, too. Since replacing your equipment is going to cost you time and money, it's fine if restoring the backups is going to take a while -- you can't really restore from backup any time soon anyway. And since you will lose a number of days of content that you can't create when you can only fall back on your off-site backups, it's fine if you also lose a few days of content that you will have to re-create.

All in all, the two types of backups have opposing requirements: "oops I lost a file" backups should be performed often and should be easily available; "oops I lost my building" backups should not be easily available, and are ideally done less often, so you don't pay a high amount of money for storage of your off-sites.

In my opinion, if you have good "lost my file" backups, then it's also fine if recovering from your off-site backups is a bit more expensive. You don't expect to ever have to pay for these; you may end up in a situation where you don't have a choice, and then you'll be happy that the option is there, but as long as you can reasonably pay for the worst-case scenario of a full restore, it's not a case you should be worried about much.

As such, and given that a full restore from Amazon Storage Gateway is going to be somewhere between 300 and 400 USD for my case -- a price I can afford, although it's not something I want to pay every day -- I don't think it's a major issue that extracting data is significantly more expensive than uploading data.

But of course, this is something everyone should consider for themselves...

This time a much shorter post, as I've been asked to share this information recently and found that it, by itself, is already useful enough to publish. It is a conceptual data model for IT services.

The IT model, and why it is useful

A conceptual data model for IT services supports several IT processes, with a strong focus on asset management and configuration management. Many IT vendors that have solutions active within those processes will have their own data model in place, but I often feel that their models have room for improvement.

Some of these models are too fine-grained, others are limited to server infrastructure. And while most applications allow for further customization, I feel that an IT architect should have a conceptual model in mind for their actions and projects.

The conceptual data model that I'm currently working on looks as follows:

An IT CDM, first version

My intention is to update this CDM with new insights as I capture them. It is not my intention to develop the data model further into a physical data model, but perhaps in the long term I could turn it into a logical one (describing what the attributes are of each concept).

Feedback? Comments? Don't hesitate to drop me an email, or join the discussion on Twitter.

January 16, 2022

I have a home server.

Initially conceived and sized so I could digitize my (rather sizeable) DVD collection, I started using it for other things; I added a few play VMs on it, started using it as a destination for the deja-dup-based backups of my laptop and the Time Machine-based ones of the various Macs in the house, and used it as the primary location of all the photos I've taken with my cameras over the years (currently taking up somewhere around 500G) as well as those taken at our wedding (another 100G). To add to that, I've copied the data that my wife had on various older laptops and external hard drives onto this home server as well, so that we don't lose the data should something happen to one or more of these bits of older hardware.

Needless to say, the server was running full, so a few months ago I replaced the 4x2T hard drives that I originally put in the server with 4x6T ones, and there was much rejoicing.

But then I started considering what I was doing. Originally, the intent was for the server to contain DVD rips of my collection; if I were to lose the server, I could always re-rip the collection and recover that way (unless something happened that caused me to lose both at the same time, of course, but I consider that sufficiently unlikely that I don't want to worry about it). Much of the new data on the server, however, cannot be recovered like that; if the server dies, I lose my photos forever, with no way of recovering them. Obviously that can't be okay.

So I started looking at options to create backups of my data, preferably in ways that make it easily doable for me to automate the backups -- because backups that have to be initiated are backups that will be forgotten, and backups that are forgotten are backups that don't exist. So let's not try that.

When I was still self-employed in Belgium and running a consultancy business, I sold a number of lower-end tape libraries for which I then configured bacula, and I preferred a solution that would be similar to that without costing an arm and a leg. I did have a look at a few second-hand tape libraries, but even second hand these are still way outside what I can budget for this kind of thing, so that was out too.

After looking at a few solutions that seemed very hackish and would require quite a bit of handholding (which I don't think is a good idea), I remembered that a few years ago, I had a look at the Amazon Storage Gateway for a customer. This gateway provides a virtual tape library with 10 drives and 3200 slots (half of which are import/export slots) over iSCSI. The idea is that you install the VM on a local machine, you connect it to your Amazon account, you connect your backup software to it over iSCSI, and then it syncs the data that you write to Amazon S3, with the ability to archive data to S3 Glacier or S3 Glacier Deep Archive. I didn't end up using it at the time because it required a VMWare virtualization infrastructure (which I'm not interested in), but I found out that these days, they also provide VM images for Linux KVM-based virtual machines (amongst others), so that changes things significantly.
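Attaching such a virtual tape library from a Linux host is ordinary open-iscsi work; roughly (the portal address and target name below are just placeholders), it looks like this:

$ sudo iscsiadm -m discovery -t sendtargets -p 192.0.2.10
$ sudo iscsiadm -m node --targetname iqn.1997-05.com.amazon:sgw-example-mediachanger -p 192.0.2.10 --login

After the login, the medium changer and the tape drives show up as regular SCSI devices that bacula can be pointed at.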

After making a few calculations, I figured out that for the amount of data that I would need to back up, I would require a monthly budget of somewhere between 10 and 20 USD if the bulk of the data would be on S3 Glacier Deep Archive. This is well within my means, so I gave it a try.

The VM's technical requirements state that you need to assign four vCPUs and 16GiB of RAM, which just so happens to be the exact amount of RAM and CPU that my physical home server has. Obviously we can't do that. I tried getting away with 4GiB and 2 vCPUs, but that didn't work; the backup failed out after about 500G out of 2T had been written, due to the VM running out of resources. On the VM's console I found complaints that it required more memory, and I saw it mention something in the vicinity of 7GiB instead, so I decided to try again, this time with 8GiB of RAM rather than 4. This worked, and the backup was successful.

As far as bacula is concerned, the tape library is just a (very big...) normal tape library, and I got data throughput of about 30M/s while the VM's upload buffer hadn't run full yet, with things slowing down to pretty much my Internet line speed when it had. With those speeds, Bacula finished the backup successfully in "1 day 6 hours 43 mins 45 secs", although the storage gateway was still uploading things to S3 Glacier for a few hours after that.

All in all, this seems like a viable backup solution for large(r) amounts of data, although I haven't yet tried to perform a restore.

January 14, 2022

January 10

The end of the day arrives. I have answered my emails and looked up what I needed to. Instead of reading online, I was forced to finish some tasks. I know there will be nothing new on my computer. No need to check it before going to sleep. No need to check it as soon as I get up. In the morning, over my tea, I am starting to get into the habit of answering the last emails in my inbox before my next synchronization.

Today, I missed a conference call.

I had turned my phone on this morning, but I had left it on silent.

I have little choice if I don't want to be disturbed by the almost daily calls from the infamous « Bureau des énergies », an incomprehensible kind of phone scam that respects no rule and no law, changing its number every time and hanging up as soon as you ask for the name of the company involved or ask not to be called again. That constant spam has, on its own, made my phone unbearable unless it is on silent.

There are also the instant messengers. Above all, there are the instant messengers. I use Signal, but you probably know WhatsApp, Telegram, Messenger, Viber… In principle they are all similar (Signal having the advantage of being encrypted and of not spying on its users, unlike the others. A fundamental difference.).

The spontaneous immediacy of these tools has given email a formal character it more or less did not have at first. But it is true that, to send an email, you have to structure an idea, give it a beginning and an end, and make clear what is expected of the person on the other side. Instant messengers, by contrast, let you share with others what writers call a « stream of consciousness », an endless scroll that unrolls as you think, without really knowing where you are going. There is no longer any barrier to sharing, no anticipation. The message is sent before its sender has even had time to think about what they are writing. « I happen to be in your street, fancy a drink? » « Oops, never mind, I'd forgotten I have an appointment » « Some other time then, it would be nice to see each other » « By the way, I hope you're doing well ».

We have the endless scroll, but we are not Jack Kerouac. Many instant conversations are in fact sad soliloquies desperately waiting for external validation, validation in the form of replies, because not replying is often perceived as rude. This behaviour is encouraged by the platforms, from the incredibly intrusive read receipt (which I advise you to disable) to features built into some apps, such as Snap, which displays as a reward the number of consecutive days you have been in contact with a correspondent. When the teenage daughter of a friend left for scout camp, where mobile phones were forbidden, she entrusted her phone to her father, instructing him to send a message, once a day, to a predefined list of contacts. So as not to break the streak! « And above all, Dad, don't forget. It would be so embarrassing in front of my friends! »

Others admit to me that they read their messages from their phone's notifications so that the app does not mark the message as « read » for the sender. A way to buy a little time before being forced to reply.

Through our phones, we are drowning in multiple shared streams of consciousness, at the risk of losing our own consciousness, our own individuality. Political news shows it well enough: we clump together, we lose our free will, our own judgement. We delegate it to countless group chats, usually created for a very specific purpose (a trip, an event…), but which systematically drift into rambling discussions, shared rumours, funny pictures, lost cat and dog notices, self-promotion for a flea market, the opening of a distant cousin's shop, or a book.

Unlike email, which has known and still knows these problems, it is not possible to filter the messages. It is not possible to read and deal with them at a time of your choosing, or to consider a conversation closed. In every culture, the end of a conversation, spoken or written, is marked by an elaborate social closing ritual. « Yours sincerely! », « I really have to go, see you! », « It was a pleasure », and so on. These formulas are fundamentally useful: they let each participant move on to something else, change context. They are also the last moment to exchange critical information. It is once you are standing up to leave the meeting, or on the doorstep with your coat already on, that hearts open up, that things come out and get said. Unfortunately, such closings are generally non-existent in group chats. Never being finished, instant conversations are omnipresent, at any hour of the day or night. Notifications leap out at you as you pick up your phone to pay in a shop, to check your calendar or to make a call. Even on silent, most phones light up and illuminate the room when a message arrives. Once the brain has seen that there is a message, it is impossible to escape it, not to be distracted for at least a few seconds. The only solution, apart from not having a messaging app at all, is to put your phone in airplane mode to give yourself a few hours of respite. To render the phone inoperative.

Around me, I see people hunched over their phones in the street, in their homes, in their families. Their fingers tap out messages as they walk down the pavement, as they eat, as they hold their children by the hand. Sometimes they hold the phone horizontally in front of their mouth to record an audio message that will not always be listened to. Instead of watching the sunset, they photograph it and send it off immediately to comment on it with others. Or they share a selfie of a family moment.

As if a moment not shared online no longer existed. As if biological memory alone were no longer enough.

We are losing consciousness and memory. We have outsourced both to the servers of large technology companies whose only goal is to show us as many advertisements as possible.

If the choice were individual, it would not matter much. But the choice is global, societal. The only way not to suffer a permanent bombardment of information is to cut yourself off from the world completely, to be totally unreachable. The mere technical possibility of contacting someone turns most questions into vital emergencies (and I am the first to be guilty of this kind of behaviour): « at the shop, should I pick up more bread? » or « are you coming to the party tonight or not? Need to know right now to order the caterer ».

With being reachable everywhere, all the time, as the norm, changing, moving or cancelling an appointment has become acceptable, trivialised behaviour. « Where are you? » « On my way! » « Actually we're in front of the bowling alley, not the cinema » « OK, I'll be there in 5 minutes ». As a consequence, it is no longer possible to foresee, to plan, to organise your day. Everything can be changed, sometimes even after the planned start of the event. The decision to attend an event or not is postponed, pending other potential invitations for that same moment.

We are interacting all the time, always between two decisions, between two messages. Instant messaging forces us to be permanently on the alert. Non-virtual reality is nothing but a forced pause between two notifications.

It is no coincidence that, in the West, the popularity of meditation has followed the growth curve of the phone. To meditate is to give yourself 10, 20 or 30 minutes of mental silence per day. A few minutes without solicitations: it is so little…

It is so little, and it is worrying, because throughout human history intellectuals have always been immersed in that permanent mental silence. Solicitations were the exception. Once at home, intellectuals had no other resources than to think and to consult their library. Most discoveries, works and human progress were achieved because their authors had time and mental space at their disposal (which is, incidentally, why most of them had private incomes from birth or, like Voltaire, acquired one with the explicit aim of devoting themselves to their art). Human progress was built on the pain of solitary boredom. Like any pain, like any effort, we try to erase it. To forbid it.

If we lose our consciousness and our memory, and we destroy the spaces for reflection, where will the next great ideas come from, the ones we so sorely lack?

Recevez les billets par mail ou par RSS. Max 2 billets par semaine, rien d’autre. Adresse email jamais partagée et définitivement effacée lors du désabonnement. Dernier livre paru : Printeurs, thriller cyberpunk. Pour soutenir l’auteur, lisez, offrez et partagez des livres.

Ce texte est publié sous la licence CC-By BE.

January 13, 2022

In a perfect world, using infrastructure or technology services would be seamless, without impact, without risks. It would auto-update, tailor to the user needs, detect when new features are necessary, adapt, etc. But while this is undoubtedly what vendors are saying their product delivers, the truth is way, waaaay different.

Managing infrastructure services implies that the company or organization needs to organize itself to deal with all aspects of supporting a service. What are these aspects? Well, let's go through those that are top-of-mind for me...

January 12, 2022

An important part of our programme: the T-shirts and hoodies. They are (again) available, and contrary to normal practices, you won't have to stand in line to get them and they won't run out! For the online edition, we partnered with a print-on-demand provider, which means that they won't run out at Saturday noon, as they normally do. Two batches will be produced, one before FOSDEM and one after. The order cut-off dates are Sunday 16th January 2022 and Sunday 13th February 2022. Visitors from the EU can go to treasure.fosdem.org, offering shipping to the EU. If you want…

January 11, 2022

January 7, 2021

At university, I had an electronics professor for whom teaching us for two hours without smoking was a terrible ordeal. Throughout the lecture he would handle his lighter, fiddle with it absent-mindedly or use it as an example.

"It's a bit like this lighter!"

At the end of the lecture, we would hold him back several times to ask questions. He visibly took great pleasure in answering us. But part of his mind was already elsewhere. On top of the lighter, he was preparing his cigarette, sometimes bringing it to his lips while talking to us.

After a week of disconnection, I think I am starting to understand him.

A week during which I synchronized my computer only once a day. A week during which part of my mind kept reminding me that, originally, I had imagined doing two synchronizations per day (one in the morning to receive emails, one in the evening to send them).

A week during which I realized how many small daily actions we perform online without thinking. Bills to pay. A scanner to install whose manual is online. A software library to install for my projects. An administrative document to obtain from the ministry's website. It literally never stops. A parcel was on its way to me, with no urgency whatsoever. Synchronizing my email one morning, I discovered… 10 emails about that parcel. The parcel had left the warehouse. The parcel was in the courier's hands. The parcel might be slightly delayed. The parcel would finally be delivered today. Getting these emails all at once opened my eyes to the absurdity of our consumption of the Internet and of email. As the Jevons paradox illustrates, when a resource becomes easier to access, we increase its use disproportionately, to the point where the benefit of this new ease becomes nil, or even negative.

I had allowed myself one planned, scheduled connection to modify the infrastructure of my gemlog (my blog on Gemini). Technical changes to make on a remote server. It turned out that my task was not very clear and that nothing worked the way I wanted. After 28 minutes, I realized that I was compulsively searching for solutions online. So I stopped. Same story with an unpaid invoice from my email provider, Protonmail, which was threatening to suspend my account. I tried to pay urgently, but none of my credit cards worked (the bank's confirmation popup closed automatically and the transaction was canceled every time). 26 minutes lost. In both cases, by disconnecting, I was able to come back to the problem several hours later knowing exactly what I had to do. Had I stayed connected, I would probably have solved the problem in an hour or two while consulting a million other things in parallel. It would have annoyed me, but I would never have been able to say with certainty how much time I had spent on it. Multitasking allows us to put up with administrative frustrations. That is a problem, because these frustrations have become the norm.

To learn from these failures, I have imposed a new rule on myself: barring a clearly defined emergency, I limit myself to two connections per week. These connections are prepared in advance with the exact list of websites to visit and, for each one, the exact task to accomplish. If I have to connect urgently for a given task, I may only carry out that specific task, without getting ahead on non-urgent ones. If a task does not go as planned, it is immediately abandoned, to be reconsidered later. In a few days, the task list for my next connection has already grown to about ten lines: order a technical book not available in bookstores, unsubscribe from several newsletters, make my annual donation to certain open source projects, look for technical examples for integrating several pieces of software (mutt, abook, notmuch for those who know them) because I cannot manage it with the documentation I have, etc.

Just like my professor playing with his lighter, I find myself mechanically consulting this list, reading it and rereading it while anticipating the moment when I will finally connect. This rereading has a positive effect: I realize that some items are not clear. Others, added impulsively, are not strictly necessary. I delete them. I even hesitate to allow myself searches as broad as "find technical examples of integration between several pieces of software". I would rather have a reference book. After two days of mulling it over, I realize that I have an offline copy of part of the Gemini network, a network likely to discuss such technical topics. A search through the list of Gemini files confirms it. Rather than searching somewhat at random on the web, I will first try to exploit the wealth of information I already have on my computer. A few minutes later, I have to face the facts: it works! I found exactly the information I was looking for, posted in 2019 on a gemlog. 3 minimal lines of code that are everything I wanted. 3 lines of code that I fully understood and assimilated before adapting them. The complete opposite of my online behavior, which consists of opening 10 different solutions, copy-pasting them without understanding, and testing them one after the other.

Why be so strict with myself? Because this disconnection is hard. My mind wanders endlessly back to the online world I have left. What is happening there? What are the reactions to my blog posts? What is new in this or that project? The daily connection and its avalanche of emails feels like a hit of my favorite drug. I eagerly read the reaction emails from my readers (even though I have consciously chosen to reply to them only very rarely). Once the emails, the RSS feeds and the gemlogs are read, silence falls. I know that nothing more will arrive on my computer until the next day. It is both a relief and terribly distressing.

So I write in my journal. Sometimes in English, to publish on my gemlog and describe my technical questions. Writing it down, knowing that I will not get an answer, gives me some distance, a different view of things. I get up from my chair more often. I consider a task finished sooner: if I do not have the information needed to continue, there is no point in racking my brain.

Paradoxically, I read less. I spend more time on my computer. I explore manpages (manual pages). I curse at Devhelp, the documentation tool I use for programming in Python. I dive into my own notes. I reread my own journal. I read and reread the replies I sent to certain emails. I procrastinate on my projects as much as ever. I tell myself that this disconnection was a really stupid idea. I am starting to feel the withdrawal…

You do not break more than twenty years of addiction that easily…

Receive the posts by email or by RSS. At most 2 posts per week, nothing else. Your email address is never shared and is permanently deleted when you unsubscribe. Latest book published: Printeurs, a cyberpunk thriller. To support the author, read, give and share books.

This text is published under the CC-By BE license.

At the beginning of every year, I like to publish a retrospective that looks back at the last 12 months at Acquia. I write these retrospectives because I like to keep a record of the changes that happen at the company. It also helps me to reflect.

If you'd like to read my previous retrospectives, you can find them here: 2020, 2019, 2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009. This year marks the thirteenth retrospective. All combined, it would be an 85-page document that provides a comprehensive overview of Acquia's trajectory.

Our momentum continued in 2021

We continued our unbroken, 15-year revenue growth streak.

We now have around 1,500 employees, up from around 1,100 a year earlier. Acquia was named a "Best Place to Work" in both the UK and Boston, where our headquarters is located. Interesting tidbit: almost 60,000 people applied for a job at Acquia last year.

Acquia's product adoption continued to grow as well:

  • Our Drupal Cloud platform served nearly 600 billion HTTP requests in 2021 (1.6 billion requests a day).
  • Our Customer Data Platform delivered over 1 trillion Machine Learning predictions in 2021 (2.9 billion predictions a day). This is an almost 200% increase compared to last year, driven by both customer growth and Acquia releasing additional machine learning capabilities.
  • Acquia's Campaign Studio saw a 166% increase in emails sent.
Some of Acquia's customers in 2021: Sony, J&J, Leica, Universal, AMD, Nokia, Danone, Qualcomm, GE, Lids, Nestle, Lonely Planet, Cheesecake Factory, Moody's, and more.

We continued to support nearly a quarter of the Fortune 100 organizations. Some customer highlights from last year include:

  • J. Crew: The iconic retailer uses Acquia CDP to improve its marketing campaigns. They saw double-digit lifts on order value, conversion rates, open rates, and click rates.
  • Fannie Mae: With wildfires wreaking havoc in the U.S., Fannie Mae wanted to provide wildfire relief information to homeowners and renters living in some of the most impacted areas. With Acquia DXP, 38% of visitors clicked on personalized content, helping them stay informed about critical resources.
  • Bayer: In the midst of the pandemic, pharmaceutical company Bayer needed to nurture its relationships with healthcare professionals. Bayer launched a new training website with 320+ training activities, with key performance indicators exceeding targets.

Last but not least, we continued to champion diversity and inclusion efforts. We organized opportunities for education and discussion around important observances like Black History Month and Asian American and Pacific Islander Heritage Month. We celebrated Pride, like we always do. We have added both Juneteenth and Indigenous People's Day as official Acquia-observed holidays in the U.S.

Executing on our product vision

The best way to learn about all the 2021 innovations is to watch my Acquia Engage 2021 keynote:

There are too many innovations to write about, but there are two product highlights I'd like to call out.

First, we released "Acquia Cloud Next" in Q1, a rewrite of our existing cloud platform. We support some of the highest-trafficked sites in the world, including coverage of the Olympics, the World Cup and the Australian Open. Our platform scales to hundreds of millions of page views and has the best security of any platform in the world (e.g. ISO-27001, PCI-DSS, SOC1, SOC2, IRAP, CSA STAR, etc).

Why rewrite our platform, you may ask? Because we found a way to deliver faster dynamic auto-scaling, further improve site isolation, deliver 5x faster database throughput, and make our infrastructure more self-healing.

Twelve years ago when we first launched Acquia Cloud, my personal site was its first production user. Once again, my site was the first production website to run on our new Acquia Cloud Next platform. It's been a happy website.

We have been migrating existing customers to Acquia Cloud Next throughout the year. As we exit 2021, the pace of migrations continues to accelerate. It's a major accomplishment for our team.

The second highlight is our acquisition of Widen, Acquia's largest acquisition to date. Widen is an integrated Digital Asset Management (DAM) and Product Information Management (PIM) platform. Content is at the heart of any digital experience. We acquired Widen, so our customers can create better content, more easily. Widen is off to an incredible start, beating our expectations.

Acquia received a leadership position from Forrester, Gartner and IDC.

We received validation on our product strategy from some of the industry's best analyst firms. We received "Leader" placements in IDC MarketScape: Worldwide Content Management Systems for Persuasive Digital Experiences, Gartner Magic Quadrant for Digital Experience Platforms, and the Forrester Agile CMS Wave. Acquia was also placed on the Constellation ShortList for Digital Experience Platforms and was named a midsize enterprise customers' choice by Gartner Peer Insights for Digital Experience Platforms.

I'm proud of these analyst results because their opinions are based on both customer interviews and a deep understanding of the competitive landscape. It means our customers love the open, composable Digital Experience Platform (DXP) that we are building for them.

Giving back to Open Source

Drupal celebrated its 20th birthday in 2021. The Drupal community continues to march toward a Drupal 10 launch in 2022, while bringing important improvements to Drupal 9.

In 2021, Drupal received contributions from almost 7,500 individuals and over 1,000 organizations. You can read more about these trends in my 2021 "Who sponsors Drupal development?" report.

Acquia also continued to invest in Mautic, the Open Source marketing automation company that we acquired in 2019. Contributions to Open Source Mautic are up more than 40% compared to 2020. While Mautic is still a lot smaller than Drupal, it's great to see its steady growth.

I'm proud that Acquia is the top contributor to both Drupal and Mautic in 2021. I started Acquia to help grow Drupal. Fifteen years later, we are still very committed to that.

Acquia also became one of the founding members of the PHP Foundation, supporting its launch with a $25,000 donation. Both Drupal and Mautic are PHP applications. It's important to support the projects that we depend on.

Personal growth and development

Last year was another busy year for me. We grew the R&D team at Acquia by nearly 30%. By the end of 2021, the R&D team was over 450 individuals strong.

To help us scale our operations, we decided to organize the company into three internal business units, each with its own General Manager. The business units are Drupal Cloud, Marketing Cloud and Content Cloud (Widen).

We eased into this new organizational structure in the second half of the year.

Our VP of Product for Marketing Cloud left Acquia early in the year, so I stepped in and ran the Marketing Cloud team. In July, we hired Mark Picone as our first General Manager, responsible for Marketing Cloud.

The acquisition of Widen helped us establish the second business unit. Matthew Gonnering, the former CEO of Widen, became its General Manager.

With two General Managers in place, we were missing one for Drupal Cloud. In Q4, Jim Shaw, a 10+ year Acquia veteran, transitioned into the role of General Manager of Drupal Cloud.

By December of 2021, the business unit structure had more or less settled in: Mark and Matthew were ramped up, and Jim started in his new role as well. The General Managers report to Mike Sullivan (Acquia's CEO) and myself, and some of the teams that reported to me now report to the General Managers.

I'm excited about this change because for the past few years, 90% of my time has been internally focused and very operational. With the new business unit structure, I can be more externally facing again.

While it's early days, the General Managers have already taken some operational work off my plate – from driving weekly program meetings, to approving expenses and budgets, to tracking progress on hiring, and more.

I start 2022 with the ability to focus a bit more on strategic work including vision, product portfolio management, thought-leadership, acquisitions, and more. And if we can get past COVID, I'm excited to start traveling again. I can't wait to attend conferences, meet customers, spend time with the Drupal and Mautic community, and visit our offices around the world.

Thank you

Looking back at 2021, I'm reminded of how lucky I am to work with an amazing team. While it's hard not to be frustrated by the pandemic's ongoing disruption, I feel fortunate for the position both Acquia and Drupal are in today. Here is to good health and continued prosperity in 2022.

January 07, 2022

I published the following diary on isc.sans.edu: “Custom Python RAT Builder“:

This week I already wrote a diary about “code reuse” in the malware landscape, but attackers also have plenty of tools to generate new samples on the fly. When you receive a malicious Word document, it has not been prepared by hand; it has almost certainly been generated automatically, unless you're a “nice” target for attackers and the victim of some kind of “APT”. The keyword here is “automation”. If defenders try to automate as much as possible, so do attackers… [Read more]

The post [SANS ISC] Custom Python RAT Builder appeared first on /dev/random.

January 06, 2022

The country in crisis: all of the country's editors-in-chief are writing opinion pieces! Thousands of women went to court.

Meanwhile: on 31 March 2020, in the Jeanty case, Belgium was condemned by the European Court of Human Rights for the inhumane treatment of a detainee with psychological problems. Article 3 of the ECHR was deemed to have been violated because the man had not received the required medical care while in pre-trial detention (link).

Nobody cares.

I refer to a blog post of mine from 2008: Moral indulgence.

I published the following diary on isc.sans.edu: “Malicious Python Script Targeting Chinese People“:

This week I found a lot of interesting scripts, as this is my fourth diary in a row! I spotted a Python script that targets Chinese people. The script has a very low VT score (2/56) (SHA256:aaec7f4829445c89237694a654a731ee5a52fae9486b1d2bce5767d1ec30c7fb). It shows how attackers can restrict their attack surface to certain regions, countries or people… [Read more]

The post [SANS ISC] Malicious Python Script Targeting Chinese People appeared first on /dev/random.

January 05, 2022

During the winter break/holidays, I treated myself to a new bass, and I mentioned this to one of my friends, who had also treated himself to a new guitar. As the pandemic is still ongoing, he decided to just quickly record himself (a video shot), sent me the link and asked me to do the same.

Then came the simple problem to solve: while I have two nice Fender amplifiers (Mustang LT and Rumble LT) that are natively recognized by the Linux kernel on CentOS Stream 8 as valid input sources, I also wanted to combine that with a backing track (something playing on my computer, basically a YouTube stream) and record it easily with the simple Cheese video recording app present by default in GNOME.

So I had a look at PulseAudio to see whether it was easily possible to combine the monitor device (basically the sound coming from your PC/speakers when you play something) with my amplifier as a separate input, and then record that in one shot as a new stream/input that Cheese would transparently use (Cheese lets you pick a webcam but nothing with respect to the sound/microphone/input device).

Here is the solution (a minimal hand-run sketch of these three steps follows right after this list; the reusable wrapper script comes further below):

  • creating a new sink with the module-null-sink pulseaudio module
  • adding some inputs (basically the main audio .monitor device and my amplifier) to that sink with the module-loopback pulseaudio module
  • creating then a "fake" stream that can be used as input device (like a microphone) using the module-remap-source
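
Run by hand, those three steps boil down to four pactl calls. This is only a minimal sketch, reusing the sink and device names from the guitar branch of the wrapper script below; your own source names will differ (see the pacmd output just after this):

pactl load-module module-null-sink sink_name=monitor-and-amp sink_properties=device.description=Source-monitor-amp
pactl load-module module-loopback source=alsa_output.usb-Lenovo_ThinkPad_Thunderbolt_3_Dock_USB_Audio_000000000000-00.analog-stereo.monitor sink_dont_move=true sink=monitor-and-amp
pactl load-module module-loopback source=alsa_input.usb-FMIC_Mustang_LT_25_00000000001A-02.analog-stereo sink_dont_move=true sink=monitor-and-amp
pactl load-module module-remap-source source_name=mustang-combined master=monitor-and-amp.monitor source_properties=device.description=mustang-combined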

For example, when my guitar amplifier is connected over USB, it shows up like this:

pacmd list-sources | egrep '(^\s+name: .*)|(^\s+device.description = .*)'

    name: <alsa_output.usb-Lenovo_ThinkPad_Thunderbolt_3_Dock_USB_Audio_000000000000-00.analog-stereo.monitor>
        device.description = "Monitor of ThinkPad Thunderbolt 3 Dock USB Audio Analog Stereo"
    name: <alsa_input.usb-Lenovo_ThinkPad_Thunderbolt_3_Dock_USB_Audio_000000000000-00.mono-fallback>
        device.description = "ThinkPad Thunderbolt 3 Dock USB Audio Mono"
    name: <alsa_input.usb-046d_HD_Pro_Webcam_C920_F4525F9F-02.analog-stereo>
        device.description = "HD Pro Webcam C920 Analog Stereo"
    name: <alsa_input.usb-MICE_MICROPHONE_USB_MICROPHONE_201308-00.mono-fallback>
        device.description = "Blue Snowball Mono"
    name: <alsa_output.pci-0000_00_1f.3.analog-stereo.monitor>
        device.description = "Monitor of Built-in Audio Analog Stereo"
    name: <alsa_input.pci-0000_00_1f.3.analog-stereo>
        device.description = "Built-in Audio Analog Stereo"
    name: <alsa_input.usb-FMIC_Mustang_LT_25_00000000001A-02.analog-stereo>
        device.description = "Mustang LT 25 Analog Stereo"

Now that we have the full names, we can use a simple bash wrapper script to create the new input, based on the bass/guitar amp preference. Here is the script:

#!/bin/bash

# This little bash wrapper will just combine monitor and existing source from fender amplifier
# and create a virtual input that can be selected a default input for recording


f_log() {
   echo "[+] $0 -> $*"
}

function usage () {
cat << EOF

You need to call this script like this : $0 (-r) -i <input>
  -r : reset pulseaudio to default and so removes virtual input
  -i : external amplifier to combine with source monitor [required param, values: (guitar|bass)]

EOF
}

while getopts "hri:" option
do
  case ${option} in
    h)
      usage
      exit
      ;;
    r)
     action=reset
      ;;
    i)
     amplifier_model=${OPTARG}
     ;;
    ?)
      usage
      exit
      ;;
  esac
done

# Checking first if we just need to reset pulseaudio
if [ "${action}" == "reset" ] ; then
   f_log "Resetting pulseaudio to defaults ..."
   pactl unload-module module-loopback
   pactl unload-module module-null-sink
   sleep 2
   pulseaudio -k
   exit 
fi

# Parsing amplifier input to combine and exit if not specified
# One can use the following commands to know which sources are available
# pacmd list-sources | egrep '(^\s+name: .*)|(^\s+device.description = .*)'

if [ "${amplifier_model}" == "guitar" ] ; then
  f_log "Fender Mustang amplifier selected"
  source_device="alsa_input.usb-FMIC_Mustang_LT_25_00000000001A-02.analog-stereo"
  sink_name="monitor-and-amp"
  fake_input_name="mustang-combined"
elif [ "${amplifier_model}" == "bass" ] ; then 
  f_log "Fender Rumbler Amplifier selected"
  source_device="alsa_input.usb-FMIC_Fender_LT_USB_Audio_Streaming_00000000001A-00.analog-stereo"
  sink_name="monitor-and-bassamp"
  fake_input_name="rumble-combined"
else
  usage
  exit 1
fi

# Now let's do the real work
# Common
monitor_device="alsa_output.usb-Lenovo_ThinkPad_Thunderbolt_3_Dock_USB_Audio_000000000000-00.analog-stereo.monitor"

f_log "Adding new sink [${sink_name}]"
pactl load-module module-null-sink sink_name=${sink_name} sink_properties=device.description=Source-monitor-amp
sleep 5
f_log "Adding monitor device [${monitor_device}] to created sink [${sink_name}]"
pactl load-module module-loopback source=${monitor_device} sink_dont_move=true sink=${sink_name}
sleep 5
f_log "Adding external amplifier [${source_device}] to created sink [${sink_name}]"
pactl load-module module-loopback source=${source_device} sink_dont_move=true sink=${sink_name}

# Create fake input combining all sinks 
f_log "Creating now new virtual input [${fake_input_name}] to be used as input for recording"
sleep 5
pactl load-module module-remap-source source_name=${fake_input_name} master=${sink_name}.monitor source_properties=device.description=${fake_input_name}

Now that we have a script, I can simply call it like this, for example for my guitar amp:

 ./pulse-audio-amp-combine -i guitar
[+] ./pulse-audio-amp-combine -> Fender Mustang amplifier selected
[+] ./pulse-audio-amp-combine -> Adding new sink [monitor-and-amp]
26
[+] ./pulse-audio-amp-combine -> Adding monitor device [alsa_output.usb-Lenovo_ThinkPad_Thunderbolt_3_Dock_USB_Audio_000000000000-00.analog-stereo.monitor] to created sink [monitor-and-amp]
27
[+] ./pulse-audio-amp-combine -> Adding external amplifier [alsa_input.usb-FMIC_Mustang_LT_25_00000000001A-02.analog-stereo] to created sink [monitor-and-amp]
28
[+] ./pulse-audio-amp-combine -> Creating now new virtual input [mustang-combined] to be used as input for recording
29

And it then appears as a new input that I can select as the default under GNOME:

(screenshot: gnome-settings, selecting the new virtual input device)
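
If you prefer to verify from a terminal instead of the GNOME settings panel, re-running the same pacmd command from earlier should now also list the virtual source created by the script (named mustang-combined or rumble-combined, depending on the selected amp):

pacmd list-sources | egrep '(^\s+name: .*)|(^\s+device.description = .*)'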

I also rebuilt/installed the pavucontrol application, which can be handy to visualize all the streams; you can also control the volume in the Recording tab:

(screenshot: pavucontrol, Recording tab)

You can then keep the level of the audio you're playing on the laptop lower (for example a backing track found on YouTube, but anything played on the laptop goes to the monitor device). YMMV, so do a quick test first with your other input (my amp + instrument in my case).

Once done, you can use any app like Audacity or Cheese to just record. It is probably easier and faster than the more complex (though more professional) setups around JACK. As said, the goal is just to quickly record something and combine streams/sinks together, nothing like a full DAW system :-)
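
And when the recording session is over, the -r option of the same wrapper script tears everything down again (it unloads the module-loopback and module-null-sink modules and kills the pulseaudio daemon so that it comes back with the default setup):

./pulse-audio-amp-combine -r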

A spectre is haunting my country. Taking advantage of the panic caused by an epidemic, it worms its way into people's minds, corrupts families, destroys friendships, breaks up households. That spectre is intolerance, blind hatred. An insidious form of fascism.

From the shores of old Europe, we watched, half amused, half worried, as American society polarized and divided itself under Trump's reign. A decay accelerated by new media habits, against which we believed ourselves immune, proud of being the continent of the Enlightenment. But we had to admit that the immunity was not absolute. The contagion has reached our lands.

Strolling down the street after having held out a QR code granting me access to a fenced-off outdoor area, I hear nothing but it. Accounts of family dinners or meals with friends pile up. The subject is omnipresent, unavoidable, fanatical: "So, are you vaccinated?"

Around me, I hear of people who can no longer see their friends because those friends are vaccinated. Others because they are not. We have all been suffering for two years now. We have lost loved ones, jobs, opportunities, energy. Starved of media visibility, some politicians have chosen the easy and simplistic path of the scapegoat. It will be the unvaccinated.

I am myself vaccinated against COVID. While the economic interests of the pharmaceutical groups strike me as particularly unhealthy, not to say mafia-like, I think the vaccine is technically a magnificent invention and an essential tool in the fight against the pandemic.

But I am not a doctor. I cannot judge whether or not it makes sense for a given individual to be vaccinated. I know that some unvaccinated people have reasons I find particularly absurd, bad, even dangerous. I know that others simply reacted very badly to the first dose and are medically unfit to receive a second. I do not claim to know every case, much less to be able to judge them.

As a friend, also vaccinated, recently told me: "In a few months, 85% of the population has been vaccinated with a brand-new vaccine. That is beyond anything we could have hoped for. There will always be holdouts; it is illusory to expect to do much better, and it is not certain it would change much anyway."

I am afraid of what my country is becoming. I am afraid because I now sometimes have to prove my status to enter spaces that are nonetheless public and outdoors. Because this process is part of a complex digital registration system whose potential for abuse leaps out at me, given my training. I am afraid because politicians are exploiting the crisis by stoking hatred of those who do not have this pass, whatever the reason. A situation I find hard to describe as anything other than fascism. A fascism I watch grow and spread while being on the right side of it. After all, I am white, male, heterosexual and vaccinated.

While I trust the vaccine, I worry about the political tool it has become. For, in their blind anger that the media delight in, some politicians have lost sight of the goal they had initially set themselves: managing an epidemic. The task being complex, attention focused on one means among others: vaccinating. Increasing the share of vaccinated people. Not by making the vaccine compulsory, but by progressively increasing the discomfort of the unvaccinated, by stoking hatred towards them. A hatred that part of the unvaccinated returns in kind, by refusing to speak to vaccinated people. A stigmatization that forces the last waverers to pick a side, with many deciding once and for all not to get vaccinated so as not to "give in to arbitrariness". If these behaviors seem irrational, they are nonetheless a logical and predictable emotional reaction.

This polarization, this spotlighting of the extremes, is purely political and goes against all scientific logic. It risks creating deep and lasting wounds in a society that did not need this added to its burdens. I predict that the provax/antivax divide will gradually absorb every societal issue: public versus privatized healthcare, immigration versus anti-immigration, left versus right… Too bad for those who would like moderation, subtlety or a diversity of opinions.

By creating zones requiring an access code, we have created a false sense of security. Basic preventive measures are being neglected. Yet every specialist proclaims that no vaccine is ever 100% effective. Worse: since these access codes are trivially copied or forged, they have no effect on dishonest unvaccinated people and only stigmatize the hesitant acting in good faith. This obvious fact long made me believe that we would never end up with such an absurd and dangerous system. I naively thought that an effective system would be too complex and, in any case, antidemocratic. I had never imagined that effectiveness did not matter, because the denial of democracy was precisely the device's main feature.

Would you have imagined, only six months ago, having to present your phone to access a fenced-off Christmas market? Would you have imagined that society could be split in two over the choice of a private and, as the vaccination summons itself states, voluntary medical act? Would you have accepted being registered by QR code?

Fascism has always thrived on crises, anxiety, uncertainty. It never arrives with fanfare, but insidiously, nibbling away at one freedom after another, month after month and, each time, for an indisputable, rational reason. The road that has been open for the past two years was a royal one. In hindsight, it was also predictable.

I am not an epidemiologist. I know nothing about healthcare. I am therefore not qualified to judge the seriousness of the health situation.

I can, however, observe several clues. The football stadiums look packed to bursting on the covers of the sports magazines at my newsagent's. The shopping centers have, as far as I know, not emptied all year. In my town's shopping center, people were cheerfully jostling each other before Christmas in shops none of which could be considered a basic necessity, or even a secondary one. Without a pass. Because shopping centers are, like places of worship, sacred. The nightclubs were, at least for a while, open. Without the firm and personal opposition of the mayor of the municipality where the event was to take place, the Tomorrowland festival would have gone ahead, having obtained the blessing of national politicians. A festival that draws tens of thousands of people from all over the world into close quarters under the influence of psychotropic substances. Because the planes, too, are as full as ever. Tourists still go on holiday to the other side of the world. Including to tourist destinations where vaccination coverage is almost nil for lack of means.

Several scientific studies I have read establish a correlation and a causal link between blood vitamin D levels and the severity of COVID. One of these studies, which to my knowledge has not been refuted, even ventured to extrapolate a vitamin D level above which the disease is no longer fatal. Of course, these results come wrapped in all the necessary scientific uncertainty (I admit I checked the statistical calculations and found no error in them, but I am out of practice and cannot judge their medical validity). The fact remains that the vast majority of the population of my country is vitamin D deficient, and that pharmacies are full of food supplements proven to raise that level. A prophylactic measure that could prove particularly effective would therefore be: "take vitamin D and go outside for an hour a day, even when the weather is grey". At no point has this idea even been suggested by our leaders. On the contrary, the aim seems to be to keep people indoors, under control.

In my own area of professional expertise, I observe that every effort to produce an open source vaccine is immediately crushed with wads of banknotes and immoral contracts. The big pharmaceutical groups are therefore more afraid for their wallets than for global health, turning the poorest countries into veritable culture broths tasked with producing the next variant instead of their own vaccine.

More than a decade ago, I remember playing a little video game in which you had to create a virus that would wipe out the planet. The difficulty was that once the virus had been identified, governments would close borders and airports. Clearly we are far, very far, from such a situation. Unlike in the spring of 2020, when caution prevailed, it is hard, for someone who does not follow the media, to imagine that we are still in a genuinely dangerous epidemic. I have indeed the now very rare flaw of trying to see the local reality with my own eyes rather than through links specially selected by Facebook to radicalize me, through a long Whatsapp chain itself originating from Facebook, or through media whose goal has become to generate clicks on Facebook (which includes publicly funded media).

I am not claiming that the epidemic is not dangerous; I am not competent to say so. I am merely claiming that the politicians are not genuinely alarmed, because they are not taking any genuinely effective measures. They settle for what is known, in the jargon, as "security theatre". Taking useless but spectacular measures, as the soldiers in our cities were and as the fenced-off Christmas markets are. It is worth remembering that the goal of "security theatre" is not to increase security, but to create a political sentiment, a reminder that security is under threat, in order to strengthen cohesion against the enemy, to create a psychosis. That, and only that, is what our soldiers were for, carrying heavy weapons of war in our streets, fortunately without magazines, as the Geneva Convention forbids it. That is what the QR code we must hold out is for: creating a psychosis and a herd mentality.

This epidemic really is deadly. That is undeniable. I know of several deaths in my wider circle. This epidemic must be managed. But good management also means measuring the effects of each measure. According to the WHO, tobacco kills more than 8 million people worldwide every year, of whom 1.2 million never smoked. Air pollution alone kills 600,000 people a year in Europe. Global warming threatens our societies in their entirety. Yet we take no measures. It seems neither urgent nor essential. By comparison, the WHO says COVID has killed 5 million people in two years. Perhaps that figure is underestimated. And without a vaccine, the toll would certainly have been much higher. The order of magnitude nonetheless remains similar, and the disproportion between the nonchalance on one side and the total panic on the other leaps out at me. Banning tobacco today would immediately save many more lives, especially among the young, than vaccinating against COVID those who are not yet vaccinated. And at lower cost.

In my country, the vast majority of COVID victims appear to be over 65, and mostly over 85 (according to covidata.be). While every death is an ordeal for family and friends, a death in what we call "old age" is a natural, inescapable fact. Does the scale of the deaths in my country not come, at least in part, from the incredible imbalance of our age pyramid and from our propensity to prolong life at all costs, often at the expense of its quality? Is the overcrowding of hospitals due solely to the exceptional scale of COVID, or to budgetary under-sizing? I remember hearing regularly about hospital saturation even outside this pandemic. Are we not cynically taking advantage of the crisis to offload the political responsibility that is the funding of healthcare?

Let us not forget that, besides being numerous, the old vote. Politically, it is therefore preferable to protect that electorate, even if it means sacrificing a segment of the population that does not vote. The children, say. By closing schools, by disrupting their schooling. As a preventive measure, the schools will close a week early. The children will be enrolled… in holiday camps (COVID does not spread at camps?). Unlike shopping centers, schools are not an essential service. For a simple reason: closing them costs nothing. The teachers are, thankfully, still paid. Looking after the children falls to the parents. Perhaps it is due to my own little bubble, but to the question "May we sacrifice the life expectancy of our elders so that children can go to school?", every old person I know answers in unison: "Yes!"

But in politics, if a measure has an effect, its amplitude must be increased. If it produces no effect, that is because it is not being applied hard enough, so its amplitude must be increased. The number of vaccinated people is not rising fast enough? What could we do to get a nice rising curve? Vaccinate the children! Yet children have only a very small risk of COVID-related complications. The WHO considers that the cost of vaccinating children outweighs the benefits. My children's pediatrician, who has given them the traditional set of childhood vaccines, strongly advises against the mRNA vaccine for young children and teenagers after seeing numerous undesirable side effects. Perhaps that is anecdotal? The fact remains that since my children's lives are clearly not in danger, I prefer to spare them an unnecessary medical act. Which is also what those who would be the first beneficiaries of the vaccine argue: the grandparents.

Not content with instilling hatred and fascist practices into our daily lives, the cold electoral calculations and political cowardice of our leaders take the liberty of mortgaging the nation's future. My son will have grown up without ever seeing the faces of his nursery-school teachers. He is part of the lucky minority who, when not at school, have parents and grandparents available to stimulate him intellectually. Those whose parents are unavailable or do not speak French well will, as with every crisis, pay the full price.

Many of the arguments I have heard for not getting vaccinated seem to me, today and with the meager information I have, stupid, even dangerous. But they are certainly no more so than the tolerance we have for tobacco. Or for excessive drinking (a morbid behavior we too often call "partying"). Let whoever has never behaved in a way others find stupid throw the first beer at me…

If the shopping centers and football stadiums were closed, if enclosed spaces were off-limits, if a genuine crisis-management policy were put in place with immediate economic aid for the affected sectors, then I think compulsory vaccination would have to be considered, at least for the most at-risk professions. The vaccine could be the individual's choice from among those recognized by the WHO, rather than the country's choice based on the commercial agreements it has signed (the wife of one of my friends, vaccinated in her country of origin, cannot enter Belgian territory because her vaccine, although recognized by the WHO, is not considered valid). The WHO would, moreover, have a moral responsibility to provide an open source formula for the vaccine so that every country could produce it. Though compulsory, this vaccination would remain entirely private, between the individual, the state and the family doctor. In Belgium, that is how the polio vaccine works, and nobody asks whether the children their own children play with are vaccinated against polio (despite a few cheaters, polio is on the way to eradication thanks to the vaccine).

Clearly, we are far from a genuine crisis situation. I must conclude that the epidemic, while not benign, is not (yet?) the scourge that will decimate humanity. That economic interests still outweigh those of health. And that while the vaccination campaign was a necessity, the antidemocratic measures imposing a "pass" are merely decisions taken because they had the advantage of being "easy". Of committing to no real responsibility. Of costing nothing.

Of costing nothing except a radical division of our society and a slide of our values towards those of fascism.

Of costing nothing except being incredibly difficult to rescind. Who will dare take the responsibility of abolishing this "pass", of declaring the epidemic under control if COVID were to become a recurring form of flu? The example of the soldiers in our streets after attacks that caused, in Europe, a few dozen deaths proves that it is easy to curtail freedoms, but politically impossible to restore them. How many years are we prepared to live holding out a QR code on every street corner? How many booster doses are we prepared to inject, how many diseases are we prepared to consider part of our "pass"? I chose to get vaccinated against COVID. I think it was a very good choice. But nothing guarantees that the pass will not soon require a medical act I do not want.

I am terrified by the society the COVID crisis is generating. I am terrified by the speed with which we are sacrificing our most fundamental freedoms, such as the freedom to move around or to decide what happens to our own bodies.

But perhaps this is, once again, mere electoral calculation. For nothing is easier to control and manipulate than a torn society with restricted freedoms, a system in which, in power as in opposition, only the voices of the extremists remain.

Receive the posts by email or by RSS. At most 2 posts per week, nothing else. Your email address is never shared and is permanently deleted when you unsubscribe. Latest book published: Printeurs, a cyberpunk thriller. To support the author, read, give and share books.

This text is published under the CC-By BE license.

I published the following diary on isc.sans.edu: “Code Reuse In the Malware Landscape“:

Code re-use is classic behavior for many developers and this looks legit: Why reinvent the wheel if you can find some pieces of code that do what you are trying to achieve? If you publish a nice piece of code on platforms like GitHub, there are chances that your project will be used and sometimes forked by other developers who will add features, fix issues, etc. That’s the magic of the Internet. But attackers are also looking for interesting code to borrow on GitHub. A few weeks ago, I wrote a diary about Excel Add-In’s used to distribute malware… [Read more]

The post [SANS ISC] Code Reuse In the Malware Landscape appeared first on /dev/random.

January 04, 2022

I published the following diary on isc.sans.edu: “A Simple Batch File That Blocks People“:

I found another script that performs malicious actions. It’s a simple batch file (.bat) that is not obfuscated but it has a very low VT score (1/53). The file hash is cc8ae359b629bc40ec6151ddffae21ec8cbfbcf7ca7bda9b3d9687ca05b1d584. The file is detected by only one antivirus that triggered on the “shutdown.exe” located at the end of the script! Why is this script annoying people? Because it uses the BlockInput() API call through a PowerShell one-liner… [Read more]

The post [SANS ISC] A Simple Batch File That Blocks People appeared first on /dev/random.

This page started as an email that I sent to my kids in 2013:

Dear Axl and Stan,

I'm writing this e-mail on the plane from Boston to San Francisco.  Sadly, I don't get to spend a lot of time parenting you right now, so I'm writing you this long e-mail instead.  It provides a list of things I wish I had known when I was 21.

You are still too young to read, but I hope you will read and re-read this e-mail when you're older.  Keep a copy handy.  Needless to say, I'm here to help you in person as well.

I wish I could promise you that life is going to be easy.  I can't.  However, I can promise you that it is really worth it, especially if you live by the following principles.

I love you,

Dad

I've maintained this list ever since. My last edits were in January 2022.

Principles to live by daily

  • Exercise your brain continuously: keep it busy. Play chess or other strategy games. Write a journal. Keep your brain buzzing.
  • Travel as much as you can. My first trip to India blew my mind and changed me forever. Let's go anywhere together, especially if it gives us an opportunity to learn something new.
  • Food: Learn, experiment, try out, taste all different types of foods. I find it to be one of the greatest things in life.
  • Learn about finances. Even if it sounds boring, or not applicable immediately to you, learn about finances. To make money, you need to understand money. It's why I've talked to you about investing since a very early age.
  • Make exercise part of your weekly routine. I'm still not great at this myself, but I've seen the benefit. Being busy is a poor excuse.
  • Don't spend more than you earn. Start saving now. Get into the habit of saving, even if it is only ten dollars/euro every week. Try to build up 6 months of living expenses in a savings account, and invest the rest in high-quality companies or index funds.
  • Embrace your emotions. Laugh when you can and allow yourself to cry when you have to. Sing out loud. Dance in the kitchen while doing the dishes. Laugh at stupid jokes until your stomach hurts. Cry. Crying doesn't mean that you are weak. Since birth, it has always been a sign that you are alive.
  • Appreciate music. Listen to as many different genres as possible. Music has a healing function. If you're curious, here is a list with some of my favorite music. Most of these songs helped me in life.
  • Read as much as you can. I love reading biographies, business books and academic articles. More things will make sense to you when you read often.
  • Love the outdoors. The more you are out and away from your desk, the greater the chance of enjoying life. Get a good hammock or a camper van. One big enough for more than one person.
  • Don't take things personally. The hurtful things people say nearly always have far more to do with their own unhappiness than anything else. I've been dealing with criticism for many years; it gets easier over time, but can still hurt.
  • Seek to understand. Don't make assumptions. Don't assume you know what someone is thinking or why they're acting like they are. Ask and you'll nearly always find out that your assumption was wrong.
  • Always do your best. Your best on one day may not be as good as your best on another day, but always work hard. Celebrate results and outcomes, but not the hard work itself.
  • Work hard but never hide in your work. When unhappy, working more will never change the outcome. I've made this mistake many times.
  • Go on holiday with your friends. You'll remember these holidays forever. I still remember every holiday with friends.
  • Take your time. There will be a lot of decisions and opportunities in your life. When the decision is irreversible, give yourself time and space to think it through. When the decision is easily reversible, don't overthink it.
  • Spend at least one year living in a foreign country. It will change the way you look at things and make you better at everything else you'll do in life.
  • It is 100% okay to be different. We need more diversity, not less.
  • Learn to say "No". The earlier in life you master this, the better off you will be. Hesitate or be too courteous to say "no", and you can end up burdening yourself. I was really bad at this in my twenties and early thirties, but have come a long way.
  • Deliver on your word. They say your word is worth more than your weight in gold. It is true.
  • Be ambitious but realistic. Keep away from those that try to belittle your ambitions. Small people will do that. The really great people will make you believe that you too can become great.
  • Success doesn't come overnight. We tend to greatly overestimate what we can achieve in the next 10 months and greatly underestimate what we can achieve in the next 10 years.
  • Accept that life will f*cking suck at times. Life is not always easy. Fight for what you care about and don't give up. Things will often seem impossible until they are done. In the end, the hard experiences will make your life meaningful. Sometimes things fall apart so better things can fall together. It often gets worse before it gets better.
  • Your life will not turn out the way you expect it to, and in the end that is a good thing. I didn't know when I was 21 that I was going to start a company in Boston. If your future turns out exactly the way you plan, that means you're living the plan of a 21-year-old, and that should give you pause.
  • Don't settle. In everything you do, keep your standards high. When it comes to the important things in life, the details are not the details.
  • Be real. Don't fake. When talking or writing, try to tell the deepest truth -- don't hedge with a partial truth.
  • Don't be afraid of life being difficult and scary. In fact, do what scares you. Take chances. It is the best way to grow as a person.
  • If you have kids yourself, work hard to give them a life that was at least as good as yours -- if not better.
  • Focus on what you can control. Don't worry about what you can't control.
  • Remember that you only have one life. Waste it wisely.
  • Apologize when you should. I hope you live a life that you are proud of, and that if you find that you are not, that you have the strength to apologize and start over again. Be good men.
  • Find out about your parents. They are way more interesting than you think. :)

January 03, 2022

I published the following diary on isc.sans.edu: “McAfee Phishing Campaign with a Nice Fake Scan“:

I spotted this interesting phishing campaign that (ab)uses the McAfee antivirus to make people scared. It starts with a classic email that notifies the targeted user that a McAfee subscription expired… [Read more]

The post [SANS ISC] McAfee Phishing Campaign with a Nice Fake Scan appeared first on /dev/random.

Third day of my "disconnection". It obsesses me, clutters my mind. Hard to think about anything else. But what, at bottom, is a disconnection? Is it really possible to disconnect completely? Do I read the emails asking me whether I read my emails? I remain a human being, a social animal living in a world where the Internet connection is omnipresent.

The term "disconnection" is therefore arbitrary and personal to each of us. A company director who puts his phone in airplane mode for a weekend talks about disconnection. Many retreats offer disconnection under the name "digital detox". But as soon as it is a matter of going beyond a few days, it becomes essential to put a protocol in place, to formalize the rules that will determine whether or not we are respecting our disconnection.

The pioneer in this field is Thierry Crouzet who, in 2012, recounted in his book "J'ai débranché" the six months he spent without using the Internet, at a time when the addiction was still confined to a few geeks. A total disconnection? Not quite! Whenever using the Internet became unavoidable, Thierry would beg his wife to carry out the necessary online actions for him. Every disconnection involves putting codified escape hatches in place in order to survive in a connected world.

Disconnections can range from the most extreme, like that of Robert Hassan, who recounts in "Uncontained" the five weeks he spent on a container ship with no connection and minimal interaction with a handful of crew members. At the other end, my 2018 disconnection consisted of blocking news sites and social networks in my browser for three months. The writer Cory Doctorow takes "email holidays" during which every new email received is deleted. The sender gets an automatic reply asking them to resend their email after a given date. If the request is urgent, the sender must contact Cory's mother. Her contact information is not provided: anyone who does not know how to reach his mother is not supposed to need to contact him urgently.

While these experiments are particularly useful for opening one's eyes and taking time to reflect, they all have one thing in common: they are not sustainable. Once the disconnection period is over, the old habits gradually reclaim their place. It is insidious, because reconnection is generally perceived with a certain disgust. The disconnection period is idealized. It becomes an Eden, a comfortable holiday resort, but one incompatible with the demands of the modern world. The newly reconnected person is convinced he has changed. At first he is parsimonious, reconnecting gently. At the first period of stress, new harmful reflexes take hold.

Ten years after his disconnection, Thierry still gets worked up on Facebook and admits to "caving in", without any compelling reason, for a new macbook. Robert Hassan explains that he plunged back into his frantic daily life. I myself only deleted my social media accounts at the prospect of this disconnection; the one from three years ago had long since been forgotten.

In designing this disconnection, I set myself the goal of making it a sustainable one. Of establishing a protocol that should be able to last, including after this test year. The goal is not to take a holiday, but to establish a new way of working.

The reflex tour

When I am connected to the Internet, I mechanically perform what I call my "tour": a series of sites to visit to check that there is nothing urgent, nothing important, nothing new online. This concept of a "tour" is shared by many people in my situation. It starts, for example, with checking your mailbox, then a news site, then Facebook, LinkedIn, Twitter and a few online chat rooms. It can include things as varied as football results, a forum or stock prices.

At every stop, there are potentially new things to read, messages to answer. If the tour is long enough, it can be started again as soon as it is finished, new messages or replies having arrived in the meantime. It turns into a loop. Many people who use the Internet daily have this kind of tour. It is not always conscious, not always in the same order. One could even say that any connection to the Internet without a clearly defined prior objective serves only to perform some variant of this reflexive, almost atavistic "tour". For some, the tour has only one stop: Facebook or Instagram.

An anecdote illustrates the frightening force of habit behind a tour. In 2014, the colleague sitting next to me in the open space where I worked showed me a meme, a comical image particularly suited to the situation we were facing at that moment. I laughed. I asked him where he had found it.

— On 9gag. (pronounced "nine gag")

— On what?

— What? You don't know 9gag?

Faced with my dumbfounded look, he explained the principle of the site to me: a simple page displaying the funny images that have received the most votes. The site is so popular that the content is refreshed almost constantly.

Amused, I opened the page in my browser. I very quickly got into the habit of checking it whenever time seemed to drag or I was faced with a slightly difficult task. I browsed the funny images and shared them with my colleague. I never bookmarked the site, but my browser very quickly understood where I wanted to go whenever I typed a simple "9" in the address bar. Without my ever deciding it, 9gag had forced its way into my tour. A reflex stop, unavoidable.

After only a few months, I realized how much time I was wasting and how absurd the site was. I no longer wanted to go there. Despite that, I would sometimes find 9gag open on my screen. It scared me. I discovered a Kickstarter project for a bracelet that delivers electric shocks as soon as you access a "forbidden" site, proof that I am far from the only one in this kind of situation. My colleagues and I joked about buying one. Without resorting to that extreme, I simply blocked 9gag in my browser. A block that has remained in place ever since.

9gag has now been blocked on my computer for seven years. Seven years without accessing it, and I don't miss it at all. It was nothing but worthless, pointless amusement. And yet, seven years later, when I am faced with an intellectual difficulty or a slightly tedious administrative task in front of an open web browser, my fingers mechanically type a "9" in the address bar. Sometimes I don't even remember where that digit comes from. I have to think to understand why a search for the digit "9" is on my screen. My fingers have no such scruples. They type "9", seven years after my last visit to a site containing that character.

do_the_internet.sh

During my 2018 disconnection, I tried hard to improve my tour. To shrink it and remove its most morbid elements. One of my observations was my ability to find alternatives. As soon as my tour became "too short", I would discover a new source of distraction. The highest-quality sources were the most dangerous: the quality of what I was reading let me justify the time spent, and the automatism was assimilated very quickly. For me, doing my tour mixes a clever dose of procrastination, a desire to discover new things and the fear of missing what is being said or done, the famous FOMO (Fear of Missing Out).

On the gemlog of Solderpunk, the creator of the Gemini protocol (a gemlog is the equivalent of a blog on the Gemini network), I read the idea of a "do_the_internet.sh" script. A script is a small, simple program consisting of a set of commands that the computer carries out automatically. The point of a script is to automate a sequence of commands that are always the same. I then had a revelation: rather than struggling to improve my tour, I would automate it.

By a happy coincidence, on the same day I read an article on script design suggesting that not every step has to be automated. A good script can contain an instruction explicitly asking a human to perform a task, blending the concepts of automation and checklist.

So I created a file called "do_the_internet.sh" in which I listed what I wanted to accomplish online every day. At first, this file contained no code at all, just simple sentences like "check my email" or "check my calendar". Over the weeks, I managed to automate some of the tasks. At each step of my daily tour, I asked myself: "Could I automate this? Is it really worth it?"

The final goal quickly became apparent: once this script is fully automated, I can simply run it once a day without missing anything. Provided, of course, that I consider it "finished". So I set myself a symbolic and obvious start date: the first of January. That left me a few weeks to finalize my disconnection protocol.

The day-to-day of my disconnection

For this 2022 disconnection, I can therefore, once a day, synchronize my computer by running my do_the_internet.sh script. This synchronization is done by plugging a cable into my computer on the landing of the staircase at home. I have to physically move to get to it. While the computer is synchronizing, I cannot use it; everything is automatic.

The script first synchronizes my emails. They are downloaded to my computer so I can read them offline. All the emails I have written are sent. The mails I have archived on my computer are archived online. Sending emails offline is a very satisfying experience. Once the send command is issued, there is nothing left for me to do, nothing to think about. The mail will leave, without any action on my part, at the next synchronization. On the other hand, if I have the slightest doubt, even several hours later, I can go and cancel my email. This artificial latency prevents the famous "ping-pong" where several emails and replies are sent and collide within a few minutes. I am forced to think about what I write, forced to take the time to read what I receive.

Secondly, my script synchronizes the RSS feeds of the blogs I read. This is done in text mode only, without images. It lets me keep in touch with the people I like or find interesting. Most of the blogs I follow only post very sporadically; I tend to remove feeds as soon as they exceed a few posts per week. In addition to the feeds, having email also gives me the option of subscribing to newsletters. In practice, I am only subscribed to lobste.rs, a very technical site that allows filters to be set up. I have been so drastic that I only receive articles within a very narrow and rare field of interest.

One of my goals during this year of disconnection is to think about decentralization and Internet protocols. For this reason, I didn't want to lose touch with the Gemini network. So I set about modifying the AV-98 browser so that it would let me browse the Gemini network offline. This modification grew and, with the agreement of Solderpunk, the creator of AV-98, I decided to turn it into a separate piece of software (a fork). So I wrote "Offpunk", an offline browser. For now it focuses on the Gemini network, but I intend to make it evolve. The principle of Offpunk is to synchronize in order to download the content that could be useful, and then to be usable entirely offline. If the user tries to access content that is not available offline, it is marked to be downloaded during the next connection. The third step of my script is therefore to ask Offpunk to download whatever needs to be downloaded from the Gemini network.

Email, RSS and Gemini give me access to the outside world while being literally disconnected. If a web page seems really worth reading and I don't have access to the content, my system automatically prepares an email that will be sent to the forlater.email service. To any email containing one or more addresses, this service replies with an email containing the full text of the web page in question. I can therefore browse the web using email, with a 24-hour delay between the moment I decide to read the content of a link and the moment I actually read it.

My phone is a Hisense A5, a phone with a black-and-white e-ink screen and without Google services. I have uninstalled every web browser and every email client. When it is in my office, my phone is in airplane mode. WhatsApp has been "frozen", meaning it will not receive messages (but I have to keep it for certain potential family emergencies). I keep Signal for the immediate interactions needed to manage family life. Apart from Signal, the phone is mainly useful for mapping applications: OSMAnd, Organic Maps and Google Maps (which works without a Google account). The latter is useful for checking the opening hours and phone numbers of shops. I consider these mapping applications to be excellent services that cause no distraction whatsoever, so I can keep them without worry.

Finally, I have on my computer a copy of Wikipedia, accessible thanks to the Kiwix and Webarchives software.

The exceptions (or tolerated cheating)

Unfortunately, it is not realistic to handle everything online through email. I have identified several actions that remain necessary and can only be done interactively: administrative and banking matters, online purchases (from books unavailable at the bookshop to bike equipment), preparing trips and bike raids (Komoot), as well as managing open source projects (bug reports, merges on GitHub). Not to mention the unavoidable online meetings on Teams or Jitsi. Paradoxically, I have not yet automated posting to my blog either (but I'm working on it).

To do all this without having to beg my wife, like Thierry, I had to resign myself to allowing some direct access to the Internet. These accesses are "mindful". This means that before each connection to the Internet, I am required to write down the objectives of that connection and describe, as precisely as possible, the tasks I am going to accomplish online. I keep in a folder the emails that can only be acted upon online, and I note in a file the mandatory tasks to be done online.

Once a connection has been decided on, I write down in a special file the date, the time and the reason for the connection. I start a stopwatch. At that point, and only once the stopwatch is running, I get up and go fetch the cable on the landing. I plug in my computer and carry out the planned tasks. I cannot deviate from them. If the task is imprecise (like doing a search or investigating a topic), I give myself a time limit fixed in advance and start a timer. During these connections, I am not allowed to run my synchronization. Each connection is therefore clearly identified, conscious, purposeful.

Once the cable is unplugged and put back in its place, I can stop the stopwatch and record, in my file, the time spent, rounded up to the next minute. It is January 3rd and, in 2022, I can state that I have spent exactly 5 minutes online, the time it took me to post the blog entry of January 1st. During those 5 minutes, I had to fight the urge to surreptitiously open a browser tab to a site other than my blog. I held firm.

This system is also an opportunity to save money. Giving in to impulsive online purchases becomes much more complicated. My credit card should appreciate it.

An evolving system and a shared experience

The primary objective of my disconnection is not to become a purist, let alone a puritan, but to find a new, sustainable balance. It therefore seems obvious to me that, under the constraints of real life, my rules will evolve. That I will have to find compromises or solutions. That I will discover things about myself, about those around me, about those I will lose touch with and those, on the contrary, I will grow closer to.

Beyond the introspection, I also want to share the technical side that I have been refining since 2018. How to improve the quantity and quality of the emails you receive and send. How to take back control of your online presence and your data. This disconnection is a global, holistic experience that I want to share with you on this blog, in this book being written. A way of connecting with you, whether you are reading me in 2022 or in a distant future. Haven't writing and reading been, for 5,000 years, the purest expressions of intellectual connection across physical disconnection?

If you are a publisher or a literary agent and the subject interests you, don't hesitate to send me an email at agent at ploum.eu. If you have reading or tools to recommend, use reaction at ploum.eu. Emails take a little longer than usual to reach me, but they always arrive in port.

Links

My famous synchronization script:

=> https://github.com/ploum/offlinetools

The Offpunk software (formerly called AV-98-offline):

=> https://tildegit.org/ploum/AV-98-offline

Receive the posts by email or by RSS. At most 2 posts per week, nothing else. Your email address is never shared and is permanently deleted when you unsubscribe. Latest published book: Printeurs, a cyberpunk thriller. To support the author, read, give and share books.

This text is published under the CC-By BE license.

January 02, 2022

January 01, 2022

The fireworks echo through our neighbourhood. After kissing my wife and wishing her a happy 2022, I went to my office to put into practice a resolution taken and prepared almost two months ago. I hesitated for a second. For two months the idea has been germinating, I have been waiting impatiently for this moment, turning the practical details over and over in my head. And yet, at the fateful moment, a part of me tries to convince me that I don't really need this. That it can wait. That I could be more gradual.

The feeling of being an addict hits me head on. Until now, I had always believed it was all a matter of choice. That I could stop whenever I wanted. Yet, even after two months of enthusiastic preparation, my unconscious tries to negotiate until the very last moment.

To leave it no room for manoeuvre, I act without looking at my screen. Heart pounding, I yank out the RJ-45 cable connected to my computer and carry it out of my office.

2022 will be a disconnected year. A year away from the web.

It is in anticipation of this moment that I disabled the wifi on my laptop and have been using only the wired connection for several months. It is in anticipation of this moment that I configured my computer and spent my last few weeks coding.

Since my first website twenty-four years ago, the web has made me who I am. I have given it tens of thousands of hours of my life, and in exchange it has given me ideas, encounters, careers, opportunities I would never have dared dream of. Yet, over the past several years, a diffuse feeling has settled in. The balance has subtly tipped. The web takes more from me. It affects my mood, my health, my productivity. It gives me less. Less and less. And with declining quality.

The quality, the reflection, are nevertheless within arm's reach, on the shelves of the bookcases scattered throughout my house. Unfortunately, after a few pages, my eyes turn mechanically towards the flickering screen that calls out to me.

An idea sprouts. I open my editor to write down its first stammerings. But behind the editor lurks, sneaky, an ever-running web browser, ready to rush into the slightest hesitation, to interpret the slightest quiver of my fingers on the keyboard as an invitation to wander online. Faced with a difficult sentence, I escape, I click mechanically from link to link, looking for The Important Piece of News, the Oh-So-Interesting Article, only to find that my initial idea has dried up.

I tear myself away from this languid procrastination with a supreme effort of will, I immerse myself in what English speakers call the "flow", only to be interrupted by an update notification for some software I use. The calendar shows me a reminder for the birthday of a vague acquaintance or for an event I had actually declined. My mailbox starts blinking. Hadn't I disabled those notifications? Even though it is on silent, my phone lights up in my field of vision because yet another message has piled up in one of those WhatsApp or Signal groups to which I have, at one point or another, been added.

Ding dong! Someone is at the door. The delivery man brings a package I didn't even remember ordering. Instead of working on my important tasks, yet another idea had come to me; I had investigated the equipment needed to implement it and had even placed an order when faced with a tempting offer. While I was at it, I had ordered the list of books my bookseller cannot obtain within a decent time frame. Even though the order was placed all at once and without any urgency, Amazon will deliver the items in dribs and drabs, each book requiring a stop of the van and a ring of the doorbell.

The children come home from school. I have made no progress on the tasks I had set myself. On the other hand, millions of fragments of ideas have sprouted in the serendipity of the web, filling me with new objectives, new projects I should accomplish if only the web left me the time. My task list has therefore only grown, and with it my frustration.

In 2022, it's decided, I'm disconnecting. I'm throwing away a quarter century of reflexes, of conditioning. I want to learn to love my computer again, to stop seeing it as an enemy, to stop being afraid of it.

But is it realistic? So many things are done on the web these days. Disconnecting from the web also means doing without a whole part of the Internet that has become as indispensable as electricity or running water.

For me, connectivity is unavoidable, both professionally and by passion. But I can make it minimal, controlled. Conscious. Efficient.

That is what I have been preparing for over the past two months. That is what I am about to put into production now that the firecrackers have fallen silent, though not yet the echoes of the party.

A disconnected 2022.

A year that I will document in the form of a book published on this blog, of which you are reading the first chapter (and for which I am looking for a publisher or a literary agent, including for an English version).

Having put the final full stop to this chapter, my first reflex is to reward myself for the effort with a few minutes of surfing the web. My fingers reach out, but my conscious mind realizes that this desire cannot be satisfied. My browser displays a connection error.

It is 2022. That's it! I am disconnected.

Receive the posts by email or by RSS. At most 2 posts per week, nothing else. Your email address is never shared and is permanently deleted when you unsubscribe. Latest published book: Printeurs, a cyberpunk thriller. To support the author, read, give and share books.

This text is published under the CC-By BE license.

December 30, 2021

My father, a retired mechanical engineer whose technical skills, knowledge and passion are a big inspiration to me, always told his colleagues never to quick-fix a problem, but to look for the root cause instead. This obviously is true for software as well, and remembering this good advice while walking the dogs yesterday evening stopped me from...

Source

No, not Diphtheria, Tetanus, and Pertussis (vaccine), but Development, Test, Acceptance, and Production (DTAP): different environments that, together with a well-functioning release management process, provide a way to achieve higher quality and reduce risk in production. DTAP is an important cornerstone of a larger infrastructure architecture, as it provides environments that are tailored to the needs of many stakeholders.

December 28, 2021

Ten years ago, I joined Kiva, a platform for crowdfunding loans for the unbanked.

In those ten years, I made 500+ loans. I'd like to think I've helped 500+ people improve their lives.

To be clear: this is a form of charitable giving, and not a financial investment. I don't earn any interest from these loans and occasionally lose some or all of my principal.

More than 1.5 billion people around the world are unbanked. 500 loans is still a drop in the bucket.

To celebrate my 10-year Kiva membership, I added more funds to Kiva. I plan to make at least 1,500 loans over the next 10 years — three times as many as in my first ten.

If you have the means, and you want to help build towards a financially inclusive world, consider joining Kiva.

December 26, 2021

When you create your own sensor devices with ESPHome, you generally let them send their sensor measurements to a home automation gateway such as Home Assistant or an MQTT broker. Then you can visualize these data in a central dashboard.

However, sometimes you want to visualize those data locally on the device itself, on a display. For instance, this year I created an air quality monitor with an ESPHome configuration. The first version just had an RGB LED for local feedback about the measured air quality, but recently I created a second version built around the LilyGO TTGO T-Display ESP32, which has a built-in 1.14 inch display. This way I can display the CO₂ and particulate matter concentrations and the temperature, humidity and pressure locally, which is way more actionable.

ESPHome 2021.10 has a new type of component that is interesting for this purpose: a graph. For instance, this is how I defined the graphs for CO₂ and particulate matter concentrations:

graph:
  - id: co2_graph
    sensor: co2_value
    duration: 1h
    min_value: 400
    max_value: 2000
    width: ${graph_width}
    height: ${graph_height}
    border: false
  - id: pm_graph
    duration: 1h
    min_value: 0
    width: ${graph_width}
    height: ${graph_height}
    border: false
    traces:
      - sensor: pm2_5_value
        color: color_yellow
      - sensor: pm10_value
        color: color_green

Note that the second graph shows multiple sensor values, each in their own colour. There are a lot more options possible, for instance for grids, borders and line types.

Then in your display component you can show these graphs:

display:
  - platform: st7789v
    id: ttgo_tdisplay
    backlight_pin: GPIO4
    cs_pin: GPIO5
    dc_pin: GPIO16
    reset_pin: GPIO23
    rotation: 270
    pages:
      - id: page1
        lambda: |-
          it.graph(0, 60, id(co2_graph));
      - id: page2
        lambda: |-
          it.graph(0, 60, id(pm_graph));

This draws both graphs, each on their own page, at position x=0, y=60. With a header, a frame and some extra information, this is how it looks on the T-Display:

/images/esphome-graph-co2.png

You can find the full configuration in the example file t-display_example.yaml of the GitHub repository of the project.

December 21, 2021

Velociraptor is a great DFIR tool that is becoming more and more popular amongst incident handlers. Velociraptor works with agents that are deployed on endpoints. Once installed, the agent automatically “phones home” and keeps a connection with the server… exactly like malware with its C2 server, but this time for the good guys and not the bad. Because I heard a lot of positive stories about Velociraptor, I decided to learn about the tool, then deploy my own server and use it for investigations. There are two approaches: in the first one, you proactively deploy the agent on every endpoint in your organization and you’re ready to investigate future incidents. The other one is a real incident response context: agents are deployed after the initial compromise to collect evidence and keep an eye on the infrastructure for a shorter period. I’m using Velociraptor in the second approach. The agent is ready to be downloaded and deployed via an MSI package (full automation is possible through a GPO).

Getting started with Velociraptor can sometimes be a bit rocky. To interact with agents, you must use a specific query language called VQL. It looks like SQL but, once you understand the basics, it’s really powerful. For example, the following query returns the DNS requests performed by an agent:

SELECT System.TimeStamp AS Timestamp,
  EventData.QueryName AS Query,
  EventData.QueryType AS Type,
  EventData.QueryResults AS Answer
  FROM watch_etw(guid="{1C95126E-7EEA-49A9-A3FE-A378B03DDB4D}")
  WHERE System.ID = 3020

From a DFIR perspective, Velociraptor does the job. You can easily access the memory (take a memory image), the filesystems, the registry, the Windows event logs and much more… Let’s imagine you’re working on an incident and you discover on patient zero that the malware dropped a file in %APPDATA%. You can easily write a VQL query and search for this file on all your connected agents to discover more victims. It’s also possible to quarantine hosts where the suspicious file has been found. My goal here is not to explain Velociraptor in depth; if you’re interested, have a look at the documentation.

Another nice feature of Velociraptor is the ability to deploy and run third-party tools on agents (and collect the results, of course). One of the tools I like to use to search for artefacts is the Loki IOC scanner. Loki is the small brother of a very powerful tool called Thor. Because Thor is a commercial product, I focused on Loki for this blog post. The idea is to use Velociraptor as an orchestrator to launch a scan on a suspicious endpoint:

  • Download a Loki package
  • Extract files
  • Run Loki in “upgrade mode” to fetch the latest IOC & YARA rules
  • Run Loki
  • Send results back to Velociraptor

Once you select the target, you can configure some basic parameters before launching the scan:

Then you can follow the ongoing scan:

While I was writing this artifact, I found that someone else had the same project. I like the idea of uploading the results back to Velociraptor for further processing, thanks to Eduardo! The biggest difference between our scripts is the way Loki is deployed. When Loki is installed, the first operation it performs is to download the latest IOCs & rules. Eduardo took another approach: he prepared a standalone Loki archive which already contains the required data. The good point: no Internet connectivity is required by the agent, everything is downloaded from the Velociraptor server. I decided to keep the “online” approach and let Loki fetch some files from GitHub.com. It means that I don’t have to maintain a local package, and every time Loki is launched it will have the latest rules, but… Internet connectivity is required.

Here is the command executed on the endpoint:

cmd.exe /c cd C:\Program Files\Velociraptor\Tools\tmp61649993\loki && \
    loki-upgrader.exe --nolog && \
    loki.exe --noindicator -p C:\ -l \
    C:\Program Files\Velociraptor\Tools\tmp61649993\loki\win10vm-loki.csv \
    --csv --dontwait

Once the scan is completed, results are available in Velociraptor for review:

Note that, by default, Velociraptor uses a timeout of 600 seconds to allow artifacts to complete successfully. Loki can be time-consuming, so don’t forget to increase this timeout! After many tests, I found that the best value for me was 14400 seconds (4 hours), but usually 2 hours are more than enough.

If you’re interested in testing the artifact, it’s in my GitHub repository. This is a simple example of a third-party tool integration, but it could give you (funny) ideas!

The post Velociraptor & Loki appeared first on /dev/random.

I published the following diary on isc.sans.edu: “More Undetected PowerShell Dropper“:

Last week, I published a diary about a PowerShell backdoor running below the radar with a VT score of 0! This time, it’s a dropper with multiple obfuscation techniques in place. It is also important to mention that the injection technique used is similar to Jan’s diary posted yesterday, but I decided to review it because it has, here again, a null VT score… [Read more]

The post [SANS ISC] More Undetected PowerShell Dropper appeared first on /dev/random.

December 19, 2021

We are a few years further. A few years in which we all tried to make a difference.

I’m incredibly proud of my part in QTBUG-61928. At the time I thought I could never convince the Qt development team to change their APIs. They did, and today in Qt6 it’s all very much part of the package.

I want to thank Thiago and others. But I also think it’s a team effort. It might not be because of just me. But I still feel a little bit proud of having pushed this team just enough to make the changes.

I am now looking at a new Qt bug report. This time it’s about int64_t. I think that QModelIndex should fully support it. Again, I think a lot, and I have a lot of opinions. But anyway, I filed QTBUG-99312 for this.

December 17, 2021

Last night we once again threw ourselves at fries from the local chip stand, and during that feast my Fitbit buzzed approvingly. At the end of the ride, my not-actually-that-smart watch calculated that I had been burning fat for 36 minutes and had used up 285 calories in the process. Good, right?

Source

December 15, 2021

Cover Image

The essay "Who goes Nazi" (1941) by Dorothy Thompson is a commonly cited classic. Through a fictional dinner party, we are introduced to various characters and personalities. Thompson analyzes whether they would or wouldn't make particularly good nazis.

Supposedly it comes down to this:

"Those who haven't anything in them to tell them what they like and what they don't—whether it is breeding, or happiness, or wisdom, or a code, however old-fashioned or however modern, go Nazi."

I have no doubt she was a keen social observer, that much is clear from the text. But I can't help but notice a big blind spot here.

If you're the kind of person to read and share this essay, satisfied about what it says about you and the world... what does that imply? Maybe that you needed someone else to tell you that? That you prefer to say it in their words rather than your own? Or even that you didn't have your own convictions sorted until then?

In other words, it seems "people who share Who goes Nazi?" is also a category of people who easily go nazi. What's more, in order to become an expert on what makes a particularly good nazi at a proto-nazi party, you have to be the kind of person who attends a lot of those parties in the first place.

So instead of two spidermen pointing at each other, let's ask a simpler question: who doesn't go nazi?

There's a pretty easy answer.

Brass Tacks

I bring this up because it's been impossible to miss lately: many people don't seem capable of recognizing totalitarianism unless it is polite enough to wear a swastika on its arm.

"Who doesn't go nazi" is anyone who is currently speaking up or protesting against lockdowns, curfews, QR-codes, mandatory vaccination, quarantine camps or similar. These are the people who, when a proto-fascist situation starts to develop, don't play along, or stand on the sidelines, but actually refuse to stay quiet. You can be pretty sure those people will not go nazi. It's everyone else you have to worry about.

I've gone to protest twice here already, and each time the crowd has been joyful, enormous and incredibly diverse. Not just left and right, white, brown and black. But upper and lower class. Christian or muslim. These were not anti-vax protests, and no wild riots either. Most people were there to oppose the QR-code, the harsh measures and the incompetent, lying politicians.

I go to represent myself, nobody else, but I've never felt any sense of embarrassment or shame to share a street with these people. On the whole they're fun, friendly and conscientious.

This opposition includes public servants like firemen, and also health care workers. Those last ones in particular have a very understandable grievance. They were heroes just a year ago, but today, they are threatened with job loss unless they get jabbed. In an already understaffed medical system, with an aged population. To make them undergo a medical procedure for which the manufacturer is not liable, and for which the governmental contracts have been kept secret.

A manufacturer paid with public money, in an industry with a proven track record of messing up human lives on enormous scales, and a history of trying to hide it.

The Real COVID Challenge

The reason we have to go along with all this, we are told, is solidarity. The need to look out for each other. Well, I find solidarity nowhere to be seen.

Because in many countries, a minority of people is being actively excluded from society and social life. In some places even cut off from buying groceries, even going outside. There is no limit to how many times they can be harassed and fined for their non-compliance.

At the same time, tons of people, who undoubtedly see themselves as empathetic and sensitive, are going out, acting like nothing's wrong. Some are even proudly saying the government should crack down harder, and make life truly miserable for those dirty vaccine refusers, until they comply.

To these people, I offer you the true COVID challenge. The pro-social, solidary thing to do is obvious: join them. Go out without your QR code, just once, for one afternoon or evening. See what happens.

Learn how it feels to have other citizens turn you away into the winter cold. Experience the drain of going door to door, wondering if the next one will be the one to let you have a simple drink or meal in peace. Maybe bring some QR'd friends along, so you can truly get into the role of being the 5th wheel nobody wants. Force everyone to sit outside with your mere presence. Ask them to buy and order things for you, like you're a teenager again.

Because that's what you want to inflict on other people every single hour of every single day for the rest of their free lives. Simply because they do not feel confident in a new medical treatment. Because let's face it: nobody knows if it's safe long term, if it failed to do what was promised after just 6 months. Why would you still believe anyone who claims otherwise?

And why, oh why, are the pillars of society dead set on shaming and punishing all the folks who weren't gullible enough? Shouldn't they be looking inward? Have they no shame?

Judas

There was recently a remarkable court judgement in the Netherlands. Thierry Baudet, of the Dutch Forum for Democracy, was forced to delete the following 4 tweets, which were judged to be unacceptably offensive (translated from Dutch):

"Deeply touched by the newsletter by @mauricedehond this morning. He's so right: the situation now is comparable to the '30s and '40s. The unvaccinated are the new jews, and the excluders who look away are the new nazi's and NSBers. There, I said it."

"Irony supreme! Ex-concentration camp Buchenwald is appying 2G policy [proof of recovery or vaccination] for an exhibit on... excluding people. How is it POSSIBLE to still not see how history is repeating?"

"Ask yourself: is this the country you want to live in? Where children who are "unvaccinated" can't go the Santa Claus parade? And have to be towelled off outside after swimming lessons? If not: RESIST! Don't participate in this apartheid, this exclusion! #FvD"

"Dear Jewish organizations:
1) The War does not belong to you but to us all.
2) Nobody compared the "holocaust" to the #apartheidspass, it was about the '30s
3) For 50 years, the "left" has done nothing but invoke the War
4) Look around you, what is happening NOW before our eyes!"

When people get outraged over supposedly offensive speech, often the person complaining isn't actually the one being insulted. Rather, they are taking offense on behalf of another party. When words are deemed hurtful, someone has a specific type of person in mind to whom those words are hurtful.

But in this case, Jewish organizations have gotten seriously offended over things some Jews are also saying, and doing so specifically as Jews. So who are these organizations actually representing?

Based on their behavior, it's as if they think nie wieder purely means that the Jewish people should never be persecuted again, as opposed to no group of people, of whatever ethnicity or conviction. That it inherently hurts the prospects of Jews to compare their historical plight to anyone else. It would seem they are taking an ethno-nationalist stance rather than a human rights stance. It ought to be painfully embarrassing for them, and it's not surprising they lash out. That doesn't make them right.

You can observe the same dynamic going on with the public and corona. When people are derisively labelled "anti-vaxxers" and selfish "hyperindividualists", the charge is that they are hurting society by helping spread the virus to the weaker members of society. But the people making the accusations seem to feel safe and confident enough to go out themselves and go party. Even though they can spread it too, and they are the majority of the population. In some places over 90% of adults. Who is being selfish?

The "unclean" are now actually stuck at home in many places, locked out of society. How are they still supposed to be driving anything now? It's absurd.

In fact, it seems to be the politicians and their royal advisors who are the hyperindividualists, deciding policy for millions. They never got consent to do so, and there is clearly no accountability for promises made. In some cases, they were literally never even elected.

* * *

It's all entirely backwards. It's not the unvaccinated who should feel ashamed, it's anyone who didn't speak up when an actual scapegoat underclass was created. When comparisons are judged not by their accuracy and implications, but by the emotional immaturity of anyone who might be listening.

They are now stuck with faith-based scientism, where matters are settled by unquestionable virologists and the PR departments of Pfizer and Moderna. But PR can't fix disasters, it can only pretend they didn't happen.

Know that the minute the tide turns, the loudest will immediately pretend to have believed so all along, to try and save face.

So stop blaming the scapegoats. It's not only stupid, it's inhumane. People like me will be here to remind you of that for the rest of time. Better get used to it.

I published the following diary on isc.sans.edu: “Simple but Undetected PowerShell Backdoor“:

For a while, most security people agree on the fact that antivirus products are not enough for effective protection against malicious code. If they can block many threats, some of them remain undetected by classic technologies. Here is another example with a simple but effective PowerShell backdoor that I spotted yesterday. The file has been uploaded on VT (SHA256:4cd82b6cbd3e20fc8a9a0aa630d2a866d32cfb60e76b032191fda5d48720a64d) and received a score of … 0/57… [Read more]

The post [SANS ISC] Simple but Undetected PowerShell Backdoor appeared first on /dev/random.

December 13, 2021

This is how I “Bloom”… in mourning mode. Again. And again. And again. And again. And again. And again. Guess what keeps me alive… (Naima Joris, Facebook) This is not just a cover of a beautiful song (which I consider one of the best ever by Radiohead), this is so full of emotion, so real, so painful. Naima Joris is a great artist!

Source

December 11, 2021

We now invite proposals for presentations. FOSDEM offers open source and free software developers a place to meet, share ideas and collaborate. Renowned for being highly developer-oriented, the event brings together some 8000+ geeks from all over the world. The twenty-second edition will take place online on Saturday 5th and Sunday 6th February 2022, using the same underlying technology as the twenty-first edition. We will be running a variety of streams including Developer Rooms, Main Tracks and Lightning Talks. All proposals for presentations should be submitted using Pentabarf: https://fosdem.org/submit. If you already created an account in the system…
Cover Image

Hassle free GLSL

I've been working on a new library to compose GLSL shaders. This is part of a side project to come up with a composable and incremental way of driving WebGPU and GPUs in general.

#pragma import { getColor } from 'path/to/color'

void main() {
  gl_FragColor = getColor();
}

The problem seems banal: linking together code in a pretty simple language. In theory this is a textbook computer science problem: parse the code, link the symbols, synthesize new program, done. But in practice it's very different. Explaining why feels itself like an undertaking.

From the inside, GPU programming can seem perfectly sensible. But from the outside, it's impenetrable and ridiculously arcane. It's so bad I made fun of it.

This might seem odd, given the existence of tools like ShaderToy: clearly GPUs are programmable, and there are several shader languages to choose from. Why is this not enough?

Well in fact, being able to render text on a GPU is still enough of a feat that someone has literally made a career out of it. There's a data point.

Another data point is that for almost every major engine out there, adopting it is virtually indistinguishable from forking it. That is to say, if you wish to make all but the most minor changes, you are either stuck at one version, or you have to continuously port your changes to keep up. There is very little shared cross-engine abstraction, even as the underlying native APIs remain stable over years.

When these points are raised, the usual responses are highly technical. GPUs aren't stack machines for instance, so there is no real recursion. This limits what you can do. There are also legacy reasons for certain features. Sometimes, performance and parallelism demands that some things cannot be exposed to software. But I think that's missing the forest for the trees. There's something else going on entirely. Much easier to fix.

a puzzle

Just Out of Reach

Let's take a trivial shader:

vec4 getColor(vec2 xy) {
  return vec4(xy, 0.0, 1.0);
}

void main() {
  vec2 xy = gl_FragIndex * vec2(0.001, 0.001);
  gl_FragColor = getColor(xy);
}

This produces an XY color gradient.

In shaders, the main function doesn't return anything. The input and output are implicit, via global gl_… registers.

Conceptually a shader is just a function that runs for every item in a list (i.e. vertex or pixel), like so:

// On the GPU
for (let i = 0; i < n; ++i) {
  // Run shader for every (i) and store result
  result[i] = shader(i);
}

But the for loop is not in the shader, it's in the hardware, just out of reach. This shouldn't be a problem because it's such simple code: that's the entire idea of a shader, that it's a parallel map().

If you want to pass data into a shader, the specific method depends on the access pattern. If the value is constant for the entire loop, it's a uniform. If the value is mapped 1-to-1 to list elements, it's an attribute.

In GLSL:

// Constant
layout (set = 0, binding = 0) uniform UniformType {
  vec4 color;
  float size;
} UniformName;
// 1-to-1
layout(location = 0) in vec4 color;
layout(location = 1) in float size;

Uniforms and attributes have different syntax, and each has its own position system that requires assigning numeric indices. The syntax for attributes is also how you pass data between two connected shader stages.

But all this really comes down to is whether you're passing color or colors[i] to the shader in the implicit for loop:

for (let i = 0; i < n; ++i) {
  // Run shader for every (i) and store result (uniforms)
  result[i] = shader(i, color, size);
}
for (let i = 0; i < n; ++i) {
  // Run shader for every (i) and store result (attributes)
  result[i] = shader(i, colors[i], sizes[i]);
}

If you want the shader to be able to access all colors and sizes at once, then this can be done via a buffer:

layout (std430, set = 0, binding = 0) readonly buffer ColorBufferType {
  vec4 colors[];
} ColorBuffer;

layout (std430, set = 0, binding = 1) readonly buffer SizeBufferType {
  vec4 sizes[];
} SizeBuffer;

You can only have one variable length array per buffer, so here it has to be two buffers and two bindings. Unlike the single uniform block earlier. Otherwise you have to hardcode a MAX_NUMBER_OF_ELEMENTS of some kind.

Attributes and uniforms actually have subtly different type systems for the values, differing just enough to be annoying. The choice of uniform, attribute or buffer also requires 100% different code on the CPU side, both to set it all up, and to use it for a particular call. Their buffers are of a different type, you use them with a different method, and there are different constraints on size and alignment.
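To see what that means in practice, here is a minimal WebGPU sketch in TypeScript (assuming @webgpu/types; the helper names are mine and the layouts are illustrative, not taken from any particular engine). The same vec4 color needs three different CPU-side code paths depending on how the shader wants to read it:

function makeUniformColor(device: GPUDevice, color: Float32Array): GPUBuffer {
  // Constant for the whole draw call: a small UNIFORM buffer,
  // later attached through a bind group entry.
  const buffer = device.createBuffer({
    size: 16, // one vec4<f32>
    usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
  });
  device.queue.writeBuffer(buffer, 0, color);
  return buffer;
}

function makeAttributeColors(device: GPUDevice, colors: Float32Array): GPUBuffer {
  // One value per vertex: a VERTEX buffer, described again in the
  // pipeline's vertex state and set with setVertexBuffer().
  const buffer = device.createBuffer({
    size: colors.byteLength,
    usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
  });
  device.queue.writeBuffer(buffer, 0, colors);
  return buffer;
}

function makeStorageColors(device: GPUDevice, colors: Float32Array): GPUBuffer {
  // Random access: a STORAGE buffer, bound like the uniform but with a
  // different usage flag, binding type and alignment rules.
  const buffer = device.createBuffer({
    size: colors.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
  });
  device.queue.writeBuffer(buffer, 0, colors);
  return buffer;
}

Three functions, and three matching pipeline descriptions, for what is conceptually a single argument.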

Only, it gets worse. Like CPU registers, bindings are a precious commodity on a GPU. But unlike CPU registers, typical tools do not help you whatsoever in managing or hiding this. You will be numbering your bind groups all by yourself. What's more, if you have both a vertex and a fragment shader, which is extremely normal, then you must produce a single list of bindings for both, across the two different programs.

And even then the above is all an oversimplification.

It's actually pretty crazy. If you want to make a shader of some type (A, B, C, D) => E, then you need to handroll a unique, bespoke definition for each particular A, B, C and D, factoring in a neighboring function that might run. This is based mainly on the access pattern for the underlying data: constant, element-wise or random, which forcibly determines all sorts of other unrelated things.

No other programming environment I know of makes it this difficult to call a plain old function: you have to manually triage and pre-approve the arguments on both the inside and outside, ahead of time. We normally just automate this on both ends, either compile or run-time.

It helps to understand why bindings exist. The idea is that most programs will simply set up a fixed set of calls ahead of time that they need to make, sharing much of their data. If you group them by kind, that means you can execute them in batches without needing to rebind most of the arguments. This is supposed to be highly efficient.

Though in practice, shader permutations do in fact reach high counts, and the original assumption is actually pretty flawed. Even a modicum of ability to modularize the complexity would work wonders here.

The shader from before could just be written to end in a pure function which is exported:

// ...
#pragma export
vec4 main(vec2 xy) {
  return getColor(xy * vec2(0.001, 0.001));
}

Using plain old functions and return values is not only simpler, but also lets you compose this module. This main can be called from somewhere else. It can be used by a new function vec2 => vec4 that you could substitute for it.

The crucial insight is that the rigid bureaucracy of shader bindings is just a very complicated calling convention for a function. It overcomplicates even the most basic programs, and throws composability out with the bathwater. The fact that there is a special set of globals for input/output, with a special way to specify 1-to-1 attributes, was a design mistake in the shader language.

It's not actually necessary to group the contents of a shader with the rules about how to apply that shader. You don't want to write shader code that strictly limits how it can be called. You want anyone to be able to call it any way they might possibly like.

So let's fix it.

Reinvent The Wheel

There is a perfectly fine solution for this already.

If you have a function, i.e. a shader, and some data, i.e. arguments, and you want to represent both together in a program... then you make a closure. This is just the same function with some of its variables bound to storage.

For each of the bindings above (uniform, attribute, buffer), we can define a function getColor that accesses it:

vec4 getColor(int index) {
  // uniform - constant
  return UniformName.color;
}
vec4 getColor(int index) {
  // attribute - 1 to 1
  return color;
}
vec4 getColor(int index) {
  // buffer - random access
  return ColorBuffer.colors[index];
}

Any other shader can define this as a function prototype without a body, e.g.:

vec4 getColor(int index);

You can then link both together. This is super easy when functions just have inputs and outputs. The syntax is trivial.

If it seems like I am stating the obvious here, I can tell you, I've seen a lot of shader code in the wild and virtually nobody takes this route.

The API of such a linker could be:

link : (module: string, links: Record<string, string>) => string

Given some main shader code, and some named snippets of code, link them together into new code. This generates exactly the right shader to access exactly the right data, without much fuss.
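As a rough sketch of that API, here is a toy, string-level implementation plus a usage example. It only handles the simple case shown above, a prototype without a body being replaced by a snippet; a real linker would parse the GLSL rather than lean on a regex:

const GLSL_TYPE = '(?:void|bool|int|uint|float|[iu]?vec[234]|mat[234])';

const link = (module: string, links: Record<string, string>): string => {
  let code = module;
  for (const [name, snippet] of Object.entries(links)) {
    // Drop the unbound prototype, e.g. "vec4 getColor(int index);"
    const proto = new RegExp(`^\\s*${GLSL_TYPE}\\s+${name}\\s*\\([^)]*\\)\\s*;\\s*$`, 'm');
    code = code.replace(proto, '');
    // Prepend the implementation so it is defined before its first use.
    code = `${snippet}\n${code}`;
  }
  return code;
};

// Usage: link a buffer-backed getColor into a main shader.
const getColorFromBuffer = `
layout (std430, set = 0, binding = 0) readonly buffer ColorBufferType {
  vec4 colors[];
} ColorBuffer;

vec4 getColor(int index) {
  return ColorBuffer.colors[index];
}
`;

const mainShader = `
vec4 getColor(int index);

void main() {
  gl_FragColor = getColor(0);
}
`;

const fragmentSource = link(mainShader, { getColor: getColorFromBuffer });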

But this isn't a closure, because this still just makes a code string. It doesn't actually include the data itself.

To do that, we need some kind of type T that represents shader modules at run-time. Then you can define a bind operation that accepts and returns the module type T:

bind : (module: T, links: Record<string, T>) => T

This lets you e.g. express something like:

let dataSource: T = makeSource(buffer);
let boundShader: T = bind(shader, {getColor: dataSource});

Here buffer is a GPU buffer, and dataSource is a virtual shader module, created ad-hoc and bound to that buffer. This can be made to work for any type of data source. When the bound shader is linked, it can produce the final manifest of all bindings inside, which can be used to set up and make the call.

That's a lot of handwaving, but believe me, the actual details are incredibly dull. Point is this:

If you get this to work end-to-end, you effectively get shader closures as first-class values in your program. You also end up with the calling convention that shaders probably should have had: the 1-to-1 and 1-to-N nature of data is expressed seamlessly through the normal types of the language you're in: is it an array or not? is it a buffer? Okay, thanks.

In practice you can also deal with array-of-struct to struct-of-arrays transformations of source data, or apply mathbox-like number emitters. Either way, somebody fills a source buffer, and tells a shader closure to read from it. That's it. That's the trick.

Shader closures can even represent things like materials too. Either as getters for properties, or as bound filters that directly work on values. It's just code + data, which can be run on a GPU.

When you combine this with a .glsl module system, and a loader that lets you import .glsl symbols directly into your CPU code, the effect is quite magical. Suddenly the gap between CPU and GPU feels like a tiny crack instead of the canyon it actually is. The problem was always just getting at your own data, which was not actually supposed to be your job. It was supposed to tag along.

Here is for example how I actually bind position, color, size, mask and texture to a simple quad shader, to turn it into an anti-aliased SDF point renderer:

import { getQuadVertex } from '@use-gpu/glsl/instance/vertex/quad.glsl';
import { getMaskedFragment } from '@use-gpu/glsl/mask/masked.glsl';
  
const vertexBindings = makeShaderBindings(VERTEX_BINDINGS, [
  props.positions ?? props.position ?? props.getPosition,
  props.colors ?? props.color ?? props.getColor,
  props.sizes ?? props.size ?? props.getSize,
]);

const fragmentBindings = makeShaderBindings(FRAGMENT_BINDINGS, [
  (mode !== RenderPassMode.Debug) ? props.getMask : null,
  props.getTexture,
]);

const getVertex = bindBundle(
  getQuadVertex,
  bindingsToLinks(vertexBindings)
);
const getFragment = bindBundle(
  getMaskedFragment,
  bindingsToLinks(fragmentBindings)
);

getVertex and getFragment are two new shader closures that I can then link to a general purpose main() stub.

I do not need to care one iota about the difference between passing a buffer, a constant, or a whole 'nother chunk of shader, for any of my attributes. The props only have different names so it can typecheck. The API just composes, and will even fill in default values for nulls, just like it should.

a puzzle

GP(GP(GP(GPU)))

What's neat is that you can make access patterns themselves a first-class value, which you can compose.

Consider the shader:

T getValue(int index);
int getIndex(int index);

T getIndexedValue(int i) {
  int index = getIndex(i);
  return getValue(index);
}

This represents using an index buffer to read from a value buffer. This is something normally done by the hardware's vertex pipeline. But you can just express it as a shader module.

When you bind it to two data sources getValue and getIndex, you get a closure int => T that works as a new data source.
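A sketch of what that looks like from the CPU side, reusing the bind and makeSource operations from earlier (the module paths, helper names and buffers here are hypothetical):

import { bind, makeSource } from './shader-closures';            // hypothetical helpers
import { getIndexedValue } from './glsl/get-indexed-value.glsl'; // the GLSL above

declare const valueBuffer: GPUBuffer; // the values, e.g. vec4s
declare const indexBuffer: GPUBuffer; // the indices into them

const getValue = makeSource(valueBuffer);
const getIndex = makeSource(indexBuffer);

// An int => T closure that reads values through the index buffer,
// usable anywhere a plain data source is expected.
const indexedSource = bind(getIndexedValue, { getValue, getIndex });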

You can use similar patterns to construct virtual geometry generators, which start from one vertexIndex and produce complex output. No vertex buffers needed. This also lets you do recursive tricks, like using a line shader to make a wireframe of the geometry produced by your line shader. All with vanilla GLSL.

By composing higher-order shader functions, it actually becomes trivial to emulate all sorts of native GPU behavior yourself, without much boilerplate at all. Giving shaders a dead-end main function was simply a mistake. Everything done to work around that since has made it worse. void main() is just where currently one decent type system ends and an awful one begins, nothing more.

In fact, it is tempting to just put all your data into a few giant buffers, and use pointers into that. This already exists and is called "bindless rendering". But this doesn't remove all the boilerplate, it just simplifies it. Now instead of an assortment of native bindings, you mainly use them to pass around ints to buffers or images, and layer your own structs on top somehow.

This is a textbook case of the inner platform effect: when faced with an incomplete or limited API, eventually you will build a copy of it on top, which is more capable. This means the official API is so unproductive that adopting it actually has a negative effect. It would probably be a good idea to redesign it.

In my case, I want to construct and call any shader I want at run-time. Arbitrary composition is the entire point. This implies that when I want to go make a GPU call, I need to generate and link a new program, based on the specific types and access patterns of values being passed in. These may come from other shader closures, generated by remote parts of my app. I need to make sure that any subsequent draws that use that shader have the correct bindings ready to go, with all associated data loaded. Which may itself change. I would like all this to be declarative and reactive.

If you're a graphics dev, this is likely a horrible proposition. Each engine is its own unique snowflake, but they tend to have one thing in common: the only reason that the CPU side and the GPU side are in agreement is because someone explicitly spent lots of time making it so.

This is why getting past drawing a black screen is a rite of passage for GPU devs. It means you finally matched up all the places you needed to repeat yourself in your code, and kept it all working long enough to fix all the other bugs.

The idea of changing a bunch of those places simultaneously, especially at run-time, without missing a spot, is not enticing to most I bet. This is also why many games still require you to go back to the main screen to change certain settings. Only a clean restart is safe.

So let's work with that. If only a clean restart is safe, then the program should always behave exactly as if it had been restarted from scratch. As far as I know, nobody has been crazy enough to try and do all their graphics that way. But you can.

One way of doing that is with a memoized effect system. Mine is somewhere halfway between discount ZIO and discount React. The "effect" part ensures predictable execution, while the "memo" part ensures no redundant re-execution. It takes a while to figure out how to organize a basic WebGPU/Vulkan-like pipeline this way, but you basically just stare at the data dependencies for a very long time and keep untangling. It's just plain old code.

The main result is that changes are tracked only as granularly as needed. It becomes easy to ensure that even when a shader needs to be recompiled, you are still only recompiling 1 shader. You are not throwing away all other associated resources, state or caches, and the app does not need to do much work to integrate the new shader into subsequent calls immediately. That is, if you switch a binding to another of the same type, you can keep using the same shader.
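As a very loose sketch of the "memo" half in plain TypeScript (my own illustration, not the actual run-time): a resource is rebuilt only when the dependencies it was keyed on actually change, so swapping one binding invalidates one pipeline and nothing else.

// Minimal dependency-keyed memo cell, in the spirit of React's useMemo.
function memoCell<T>(build: (...deps: unknown[]) => T) {
  let lastDeps: unknown[] | null = null;
  let lastValue!: T;
  return (...deps: unknown[]): T => {
    const changed =
      lastDeps === null ||
      deps.length !== lastDeps.length ||
      deps.some((d, i) => d !== lastDeps![i]);
    if (changed) {
      lastValue = build(...deps);
      lastDeps = deps;
    }
    return lastValue;
  };
}

// Stand-in for an expensive shader/pipeline build (hypothetical).
const compilePipeline = (source: unknown) => ({ source, builtAt: Date.now() });
const getPipeline = memoCell(compilePipeline);

getPipeline('main_a'); // builds
getPipeline('main_a'); // same dependency: cached, nothing recompiled
getPipeline('main_b'); // only now is a new pipeline built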

The key thing is that I don't intend to make thousands of draw calls this way either. I just want to make a couple dozen of exactly the draw calls I need, preferably today, not next week. It's a radically different use case from what game engines need, which is what the current industry APIs are really mostly tailored for.

The best part is that the memoization is in no way limited to shaders. In fact, in this architecture, it always knows when it doesn't need to re-render, when nothing could have changed. Code doesn't actually run if that's the case. This is illustrated above by only having the points move around if the camera changes. For interactive graphics outside of games, this is actually a killer feature, yet it's something that's usually solved entirely ad-hoc.

One unanticipated side-effect is that when you add an inspector tool to a memoized effect system, you also get an inspector for every piece of significant state in your entire app.

On the spectrum of retained vs immediate mode, this perfectly hits that React-like sweet spot where it feels like immediate mode 90% of the time, even if it is retaining a lot behind the scenes. I highly recommend it, and it's not even finished yet.

* * *

A while ago I said something about "React VR except with Lisp instead of tears when you look inside". This is starting to feel a lot like that.

In the code, it looks absolutely nothing like any OO-style library I've seen for doing the same, which is a very good sign. Superficially it looks similar, except it's as if you removed all code except the constructors from every class, and somehow, everything still keeps on working. It contains a fraction of the bookkeeping, and instead has a bunch of dependencies attached to hooks. There is not a single isDirty flag anywhere, and it's all driven by plain old functions, either Typescript or GLSL.

The effect system allows the run-time to do all the necessary orchestration, while leaving the specifics up to "user space". This does involve version counters on the inside, but only as part of automated change detection. The difference with a dirty flag might seem like splitting hairs, but consider this: you can write a linter for a hook missing a dependency, but you can't write a linter for code missing a dirty flag somewhere. I know which one I want.

Right now this is still just a mediocre rendering demo. But from another perspective, this is a pretty insane simplification. In a handful of reactive components, you can get a proof-of-concept for something like Deck.GL or MapBox, in a fraction of the code it takes those frameworks. Without a bulky library in between that shields you from the actual goodies.

December 10, 2021

I published the following diary on isc.sans.edu: “Python Shellcode Injection From JSON Data“:

My hunting rules detected a nice piece of Python code. It's interesting to see how the code is simple, not deeply obfuscated, and with a very low VT score: 2/56! I see more and more malicious Python code targeting Windows environments. Thanks to the ctypes library, Python is able to use any native API calls provided by DLLs.

The script is very simple, so here is the full code… [Read more]

The post [SANS ISC] Python Shellcode Injection From JSON Data appeared first on /dev/random.

December 04, 2021

When I heard this on PBB (I think, but it might have been WorldWide FM as well) I thought it was a nice hat tip to the spiritual jazz of the sixties & seventies, but it turns out it's a cover/remake of a tune that was originally performed by saxophonist Albert Ayler (mostly known as a free-jazz saxophonist and protégé of John Coltrane) and written (and sung) by his partner Mary Maria Parks back in 1969.

Source

December 03, 2021

I published the following diary on isc.sans.edu: “The UPX Packer Will Never Die!“:

Today, many malware samples that you can find in the wild are "packed". The process of packing an executable file is not new and does not mean that it is de-facto malicious. Many developers decide to pack their software to protect the code. But why is malware so often packed? Because packing slows down the malware analyst's job and defeats many static analysis tools. The advantages of packed malware (from an attacker's point of view) are (amongst others)… [Read more]

The post [SANS ISC] The UPX Packer Will Never Die! appeared first on /dev/random.

December 01, 2021

I published the following diary on isc.sans.edu: “Info-Stealer Using webhook.site to Exfiltrate Data“:

We already reported multiple times that, when you offer an online (cloud) service, there is a good chance that it will be abused for malicious purposes. I spotted an info-stealer that exfiltrates data through webhook.site. Today, many Python scripts use Discord as a C2 communication channel. This time, something different that looks definitely less suspicious… [Read more]

The post [SANS ISC] Info-Stealer Using webhook.site to Exfiltrate Data appeared first on /dev/random.

November 30, 2021

I've been using Tor for so long that I can't remember when I started! The main reasons to use it are to access some websites while preserving my anonymity (after all, that's the main purpose of Tor) but also to access dangerous resources like command & control servers or sites delivering malicious content. The last reason is to perform scans and assessments of web services. This is often necessary when the tested server adds your IP address to a temporary block list. I know how annoying it can be because I do the same for my own web servers… But, even when you use Tor, your current IP address (the Tor exit node's) can be blocked as well. So you have to "renew" it (a bit like DHCP, a rough comparison).

I have a Tor proxy running in my lab, accessible from any host/application connected to the same VLAN. Today, most tools have an option to use a (SOCKS) proxy.

BurpSuite is my favorite tool to test web applications and it can be configured to perform all requests through Tor. This prevents me from being blocked by the web application until… the Tor exit node becomes blocked too! I faced this situation recently. To bypass this issue, you could try to renew the Tor circuit at regular intervals to use another exit node, but a new circuit does not always give you a new IP address! For optimization reasons, Tor will select the best path for you based on some parameters (I never really checked the magic behind that). But, at least, your IP address should change from time to time.

To renew the Tor circuit, you can restart the process or send it a SIGHUP signal. A less aggressive way is to use the control port (listening on port 9051 by default). If you do this, don't forget to configure a password to restrict access:

$ tor --hash-password MySuperStrongPW
16:FA53BE7AAFA42E726068794B0408F6BACCC3165413B940071BF3E78494

Save this password in the /etc/tor/torrc file:

ControlPort 9051
HashedControlPassword 16:FA53BE7AAFA42E726068794B0408F6BACCC3165413B940071BF3E78494

Now, you can renew your circuit. Automate this in a cronjob:

*/3 * * * * printf 'authenticate "MySuperStrongPW"\r\nsignal newnym\r\n' | nc 127.0.0.1 9051
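If you prefer a small script over nc, the same two control-port commands can be sent over a plain TCP socket. Here is a minimal sketch in TypeScript (Node.js), assuming the control port and password configured above:

import { createConnection } from 'node:net';

// Send AUTHENTICATE + SIGNAL NEWNYM to the Tor control port.
function renewTorCircuit(password: string): void {
  const socket = createConnection({ host: '127.0.0.1', port: 9051 });
  socket.on('connect', () => {
    socket.write(`AUTHENTICATE "${password}"\r\nSIGNAL NEWNYM\r\nQUIT\r\n`);
  });
  socket.on('data', (chunk) => process.stdout.write(chunk.toString())); // expect "250 OK" replies
  socket.on('error', (err) => console.error('Control port error:', err.message));
}

renewTorCircuit('MySuperStrongPW');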

To improve the IP address renewal, I changed my lab setup and now use a small project found on GitHub: TorIpChanger. This tool makes sure that you really get a new IP address. You can run it in a Docker container:

# git clone https://github.com/DusanMadar/TorIpChanger.git
# cd TorIpChanger
# docker-compose up -d

Note: Don’t forget to change the default password in the docker-compose.yml.

TorIpChanger starts a small daemon listening on port TCP/8080. You can renew your Tor IP address with this command:

$ curl http://localhost:8080/changeip/
{"error":"","newIp":"185.220.101.144"}

The returned JSON may look a bit strange, but "error":"" means that everything is fine. I tested it in a loop with a sleep of 60 seconds:

$ while true; do curl http://localhost:8080/changeip/; sleep 60; done
{"error":"","newIp":"185.100.87.129"}
{"error":"","newIp":"107.189.12.240"}
{"error":"","newIp":"5.199.143.202"}
{"error":"","newIp":"107.189.7.175"}
{"error":"","newIp":"101.100.146.147"}
{"error":"","newIp":"141.95.18.225"}
{"error":"","newIp":"217.79.178.53"}
{"error":"","newIp":"103.28.52.93"}
{"error":"","newIp":"198.144.121.43"}
{"error":"","newIp":"92.35.70.172"}
{"error":"","newIp":"185.220.101.52"}
{"error":"","newIp":"185.31.175.213"}
{"error":"","newIp":"185.220.101.145"}
{"error":"","newIp":"185.220.102.240"}
{"error":"","newIp":"192.42.116.17"}
{"error":"","newIp":"185.220.101.187"}
{"error":"","newIp":"185.220.101.167"}
{"error":"","newIp":"185.220.102.251"}
{"error":"","newIp":"176.10.104.240"}
{"error":"","newIp":"185.220.102.241"}
{"error":"","newIp":"5.255.97.170"}
{"error":"","newIp":"23.236.146.162"}
{"error":"","newIp":"51.15.235.211"}
{"error":"","newIp":"107.189.14.76"}
{"error":"","newIp":"18.27.197.252"}
{"error":"","newIp":"128.31.0.13"}
{"error":"","newIp":"109.70.100.19"}

And, like above, you can automate it via a cronjob.
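The same check is also easy to script: call the endpoint, parse the JSON, and treat only "error":"" as a success. A quick sketch in TypeScript (using the fetch available in recent Node.js versions):

// Ask TorIpChanger for a fresh exit node and report the result.
async function changeTorIp(): Promise<string> {
  const res = await fetch('http://localhost:8080/changeip/');
  const { error, newIp } = (await res.json()) as { error: string; newIp: string };
  if (error !== '') {
    throw new Error(`TorIpChanger failed: ${error}`);
  }
  return newIp;
}

changeTorIp()
  .then((ip) => console.log(`New Tor exit IP: ${ip}`))
  .catch((err) => console.error(err));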

To use Tor, the approach is now different: a TorProxy is used, listening for requests on TCP/8118. You can reconfigure BurpSuite to use this proxy:

To prevent active scanning from failing too often (if the current IP address is already blocked before the renewal), I optimized the following values. This gave me pretty good results and did not break any scan:

A final note about this technique: if the web server you are testing implements sessions based on extra parameters like your IP address, it will of course fail or have side effects. Besides some timeouts (when the IP is renewed), I did not experience any issues, except the fact that Tor sometimes remains very slow. But that's another story!

The post Tor IP Renewal For The Win appeared first on /dev/random.

November 29, 2021

We are pleased to announce the developer rooms that will be organised at FOSDEM 2022. Developer rooms are assigned to self-organising groups to work together on open source projects, to discuss topics relevant to a broader subset of the community, etc. The individual developer room organisers will issue their calls for participation in the next few days. The list below will be updated accordingly. Developer rooms announced so far (each with its own CfP): Ada, Apache OpenOffice, BSD, Collaboration and Content Management, Computer Aided Modeling and Design, Conference Organisation, Containers, Continuous Integration and Continuous Deployment, Dart and …

November 28, 2021

A vote has been proposed in Debian to change the formal procedure in Debian by which General Resolutions (our name for "votes") are proposed. The original proposal is based on a text by Russ Allberry, which changes a number of rules to be less ambiguous and, frankly, less weird.

One thing Russ' proposal does, however, which I am absolutely not in agreement with, is to add an absolute hard time limit after three weeks. That is, in the proposed procedure, the discussion time will be two weeks initially (unless the Debian Project Leader chooses to reduce it, which they can do by up to one week), and it will be extended if more options are added to the ballot; but after three weeks, no matter where the discussion stands, the discussion period ends and Russ' proposed procedure forces us to go to a vote, unless all proposers of ballot options agree to withdraw their option.

I believe this is a big mistake. I think any procedure we come up with should allow for the possibility that we may end up in a situation where everyone agrees that extending the discussion time by a short period is a good idea, without necessarily resetting the whole discussion time to another two weeks (modulo a decision by the DPL).

At the same time, any procedure we come up with should try to avoid the possibility of process abuse by people who would rather delay a vote ad infinitum than to see it voted upon. A hard time limit certainly does that; but I believe it causes more problems than it solves.

I think instead that it is necessary for any procedure to allow for the discussion time to be extended as long as a strong enough consensus exists that this would be beneficial.

As such, I have proposed an amendment to Russ' proposal (a full version of my proposed constitution can be seen on salsa) that hopefully solves these issues in a novel way: it allows anyone to request an extension to the discussion time, which then needs to be sponsored according to the same rules as a new ballot option. If the time extension is successfully created, those who supported the extension can then also no longer propose any new ones. Additionally, after 4 weeks, the proposed procedure allows anyone to object, so that 4 weeks is probably the practical limit -- although the possibility exists if enough support exists to extend the discussion time (or not enough to end it). The full rules involve slightly more than that (I don't like to put too much formal language in a blog post), but they're not too complicated, I think.

That proposal has received a number of seconds, but after a week it hasn't yet reached the constitutional requirement for the option to be on the ballot.

So, I guess this is a public request for more support to my proposal. If you're a Debian Developer and you agree with me that my proposed procedure is better than the alternative, please step forward and let yourself be heard.

Thanks!

With Apple's Find My app you can locate a lost iPhone, iPad, Mac(Book), Apple Watch or AirPods, as well as an AirTag. This works because Apple devices broadcast Bluetooth signals, which are picked up by other devices and then forwarded to Apple. This happens with respect for privacy: those Bluetooth signals contain neither your identity nor any other personal data. Other Apple users, and even Apple itself, therefore never get to see where your devices are.

Researchers at the Secure Mobile Networking Lab of TU Darmstadt in Germany reconstructed how Apple's Find My protocol works as part of their Open Wireless Link project. Based on this, they also built their own framework to track Bluetooth devices through the Find My network: OpenHaystack.

In the article Stuur sensordata via Zoek mijn-netwerk van Apple in Computer!Totaal, I describe how to install the OpenHaystack firmware on an ESP32 development board and then trace the board's location anywhere in the world through Apple's Find My network. This works as long as there are enough Apple devices near the board.

Arbitrary sensor data

Personally, my interest in OpenHaystack only really started when security researcher Fabian Bräunlein of the Berlin-based company Positive Security found a way to send arbitrary data over Apple's Find My network. He called the approach Send My.

Positive Security developed firmware for the ESP32 that turns the board into a modem for the Find My network. Over a serial connection from your computer to the ESP32 you type in messages, and the ESP32 encodes them as public keys that it broadcasts over Bluetooth Low Energy. You can then read the decoded messages in the accompanying DataFetcher application on your Mac.

The blog post in which Positive Security announced their hack already mentions using the technique for small sensors. These could then send out their sensor data even in environments without access to mobile internet, as long as there are Apple devices nearby. That way you can build cheap sensors that last a long time on a single battery.

But Positive Security's code was written for an ESP32, which still consumes quite a lot of energy and isn't very compact. So I ported the technique to other hardware: the RuuviTag. This is a small sensor board with sensors for temperature, humidity, air pressure and motion, which sends its sensor data over Bluetooth Low Energy. With the standard firmware the device lasts several years on a CR2477 battery. My idea was to write my own firmware and at least broadcast the RuuviTag's temperature over the Find My network using Positive Security's technique.

To program the RuuviTag you also need a RuuviTag Development Kit. That's why I also tested the same code on a cheaper solution: an nRF52840 dongle from Nordic Semiconductor, to which I connected a breakout board with a BME280 sensor over I²C.

/images/send-my-nrf52840.jpg

Zephyr firmware

The code uses the Zephyr real-time operating system, which has a fully open source BLE stack and excellent support for the nRF52 chips. I published the code in two parts: a Zephyr module that implements OpenHaystack, and Send My Sensor, which uses that module to broadcast the BME280's temperature readings.

Warning

This is a proof of concept that continuously broadcasts a static key over Bluetooth, with a fixed modem ID. Anyone who knows this modem ID can therefore read your sensor data from anywhere in the world. No power management has been implemented either.

In my experience, this way of relaying data is not reliable in an environment with few Apple devices. Only when the signals from your sensor board are picked up by numerous devices and forwarded to Apple can you read the sensor data. And if even a single bit of a byte is not received, the corresponding character is unreadable. That looks like this in Positive Security's macOS application DataFetcher:

/images/send-my-sensor.png

For example, I kept the RuuviTag with this firmware in my trouser pocket while sitting in a crowded restaurant. Afterwards I could just about deduce that it had been between 29 and 31 degrees in my pocket. All in all, this technique is still a prototype, but it does show that with some creativity you can do very fun things with Apple's Find My network.

November 27, 2021

A beautiful novel about mentally wounded people, fundamental loneliness and the attempts to escape it. But it won't cheer you up …

Source

November 26, 2021

We rode in a converted German missile launcher over Langjökull, Iceland's second-largest glacier.

The glacier is massive: 50 kilometers long, 20 kilometers wide, and the ice is up to 580 meters thick.

A converted German missile launcher in front of the entrance of the glacier.

Through a small entrance, we descended into a man-made tunnel. We walked through various tunnels and caves, and experienced blue ice deep inside the glacier.

One of the tunnels in the glacier, lit with Christmas lights.

November 22, 2021

There is a common misconception that large open source projects are well-funded. In practice, many rely on a small group of maintainers.

The PHP programming language is one of them. Despite being used by 75%+ of the web, PHP only has a few full-time contributors.

That is why the PHP Foundation is launching. Its mission: "The PHP Foundation will be a non-profit organization whose mission is to ensure the long life and prosperity of the PHP language."

Acquia is proud to support the PHP Foundation by contributing $25,000 to the foundation, alongside Automattic, JetBrains, Laravel and others. Our donations will help fund the development of PHP.

PHP is vital to the functioning of governments, schools, non-profits, private companies, public companies, and much more. If your organization relies on PHP, I'd encourage you to make a contribution.

Large open source projects like PHP need meaningful, long-term support. I remain very passionate about how to make Open Source production more sustainable, more fair, more egalitarian, and more cooperative. It will be interesting to see how the PHP Foundation develops.

This weekend I created a topics page for my site. It aspires to the simplicity of printed works.

It's a great way to re-discover old blog posts. Quite a few of them brought a smile to my face.

November 21, 2021

My dad loves working on old cars, and my kids have always been fascinated by it. Great memories!

Stan sitting behind the steering wheel of Opa's Citroën 2CV.
Opa showing Stan how to use the kludge of his Citroën 2CV.
Stan and Opa in a Citroën 2CV.

About 2 weeks ago, OSMC happened, the Open Source Monitoring Conference, a conference that normally takes place every year and of which I am a habitual visitor and speaker. The conference takes place in Nuremberg, Germany and runs for 3 days: Tuesday, Wednesday and Thursday. Like most conferences, OSMC 2020 didn't happen, but the 2021 edition was able to run. It was a close call, as the situation in Germany changed over the weekend before the conference. The complete conference was "3G" safe, and while we did need to wear masks for a lot of the conference, it was still doable, and there was a safe zone where you didn't need to wear the mask all of the time. There was also enough opportunity for hallway track discussions and extracurricular activities.

On day one of the conference, the opening was about the logistics of the conference and how we're all happy to be back at an in-person event. Next I learned about Merlin and Naemon, a good presentation where the demo failed, but still interesting. After that I saw Feu talk about Contributing to Open Source; while I already contribute to open source in many ways, it is still good to hear others' opinions, and this presentation was good and reminds us that not only code makes up open source. Any contribution is welcome, from documentation to hosting events, spreading the word and being a good user. After lunch I saw the ignites: Lennart presented his Icinga-Installer more as a lightning talk, then Bram did his Overengineering your personal website, which with every run adds more complexity. Finally Kris did his Dashboard as Code, which is a nice tool; the tool that runs the ignites seemed to have an NTP issue and didn't adhere to the 15-seconds-per-slide setting, but all went well. I then saw Monitoring Open Infrastructure, a talk I had already seen from Marcelo, so I switched to Bram's talk, Gamification of Observability, in which he advises to train for outages like firemen do. We have never actually tried this ourselves, and I have always been able to convince customers to create check lists and have emergency lists, similar to what pilots have, where an outage means grabbing that check-list. While we do not advise customers to do this as a general practice, simulations are advised. Bram's presentation highlights why training is vital, and not just for operators. I then saw the presentation on Thola, a tool which I have played with, but not in full detail; the presentation gave a good overview of the tool and how to extend it, a very nice piece of software. And to close the first day, the usual Current State of Icinga, where Bernd presented 2 years of Icinga development in one hour. It was very interesting, with a lot of German jokes.

On day two, I opened with Monitoring Open Source Hardware, in which I spoke about open source hardware, the choices available today, and the projects in the works, like OPAL, an open source firmware for POWER systems, and LibreBMC, an open source hardware project. And then how monitoring can be used from the inside out, letting us get more information with less overhead. After my talk, I saw Open Source Application Performance, in which a large Java stack gets monitored using open source tooling. I then saw Philipp's talk about Observability, in which he reminds us that tools are just tools; similar to using Linux, where the distribution you favour doesn't make a difference, or to DevOps, where the tooling you chose doesn't mean anything unless you use it correctly. After which I saw Still directing the director, in which the Icinga Director, Icinga Business Processes, and Ansible are glued together. Then Kris gave his Observability will not fix your broken Monitoring, in which he spoke about the misconceptions of a hype. The hype of observability, that it will solve problems like monitoring, alerting, or culture, does not exist; there is no magic. If you want to achieve real observability, you need more than just traditional monitoring tools, logging tools, and alerting tools that can work together; you need to use those tools to achieve better insights, which you then use to better understand and improve your infrastructure. Last I saw the Icinga for Windows presentation, an update of the work around Icinga2 on Windows.

Overall the whole conference went smoothly and the talks were good; the number of attendees was lower than in a normal year. We all enjoyed the whole conference and it was good to be at an in-person conference again.

November 20, 2021

Nowadays it is impossible to ignore open source, or even to prevent it from being active within the enterprise world. Even if a company only wants to use commercially backed solutions, many - if not most - of these are built with, and are using, open source software.

However, open source is more than just a code sourcing possibility. By having a good statement within the company on how it wants to deal with open source, what it wants to support, etc. engineers and developers can have a better understanding of what they can do to support their business further.

In many cases, companies will draft up an open source policy, and in this post I want to share some practices I've learned on how to draft such a policy.

November 19, 2021

When I have a little bit of time, I enjoy working on my website. I sand it down, polish it, or smoothen a rough edge.

Just this week I added a search feature to my blog posts page.

I often have to find links to old blog posts. To do so, I navigate to https://dri.es/blog, which provides a long list of every blog post I've ever written. From there, I type ⌘-F to use my browser's built-in search.

I like that my blog posts page is one long list that is easy to search. What I don't like is that when there is more than one match, iterating through the results causes the search experience to jump around.

The new search feature smoothens that rough edge. Instead of jumping around, it filters the list. It creates a nice overview. Try it out, and you'll see that it makes a big difference.

Amazing what 30 lines of vanilla JavaScript code can do!
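For the curious, the idea fits in a few lines. This is only a sketch of the approach in TypeScript (the element IDs and markup are made up, not Dries' actual code): filter the long list of links as you type instead of jumping between matches.

// Assumes a hypothetical <input id="search"> and <ul id="posts"> with one <li> per post.
const input = document.getElementById('search') as HTMLInputElement;
const items = Array.from(document.querySelectorAll<HTMLLIElement>('#posts li'));

input.addEventListener('input', () => {
  const query = input.value.trim().toLowerCase();
  for (const item of items) {
    const match = item.textContent?.toLowerCase().includes(query) ?? false;
    item.style.display = match ? '' : 'none'; // filter the list instead of scrolling to matches
  }
});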

November 18, 2021

Over 20 years have passed since I first saw Magnolia, which was not only a beautiful movie, but also a testament to what a great songwriter Aimee Mann is. "I see You" is a song from her new album "Queens of the Summer Hotel" and it is just as great!

Source

November 17, 2021

Security professionals are heavy users of virtualization: it is a key component of our labs. Many of us are also fans of Macbook laptops. But since Apple started to roll out its new computers with M1 processors, we are facing a major issue… The M1 is an ARM-based chipset and this architecture has a huge impact on virtualization… Let's be clear: today, there is no way to easily run a classic (Intel) Windows guest on an M1-based Macbook! We see here and there blog posts that explain how to install an ARM version of Windows 11 on a new Macbook but it remains impractical to run your best tools on it. How can we deal with this?

My current Macbook Pro is one year old and pretty powerful (64GB RAM and 2TB of storage). I don't have plans to change it in the coming months but who knows 😉 When the time for a change comes, there will be no alternative (because I love Macbooks) and I'll switch to an M1 setup. That's why I decided to prepare for the future and change the way I'm working. I'm teaching the SANS FOR610 training and we use a malware analysis lab based on two virtual machines: one Windows and one Linux (based on REMnux).

The idea is to get rid of the virtual machines on my Macbook and run them on a light device that I could bring with me when travelling. Let’s review the hardware I chose. My first idea was to use an Intel NUC but it was difficult to find one with multiple NICs onboard. After some research, I found the following MiniPC on Amazon:

MiniPC picture

The hardware specifications are more than enough to run a hypervisor:

  • Intel CPU with all virtualization features
  • 2 x NICs (1GBits & 2.5Gbits)
  • Wireless
  • Enough USB ports
  • 16GB memory
  • 512GB SSD
  • HDMI w/4K support (ok, less interesting for virtualization)

It's possible to extend the memory by replacing the modules, and a free slot is present to host an extra SSD!

My first choice was to use ESXi (the free version) but I faced a problem with the network chipsets. The 1Gbits port is based on a Realtek chipset and the 2.5Gbits one on an Intel chipset. I was able to generate a customized ESXi 6.7 image with the Realtek driver but not the Intel one. The Intel driver is available with ESXi 7.0 but… not the Realtek one! After testing multiple images, I gave up and decided to switch to something else: the perfect candidate was Proxmox! This hypervisor had already been mentioned multiple times in my entourage. Based on Debian, this distribution offers a complete virtualization solution and it was able to detect and use all three NICs:

root@pve0:~# ip a|grep s0
2: enp1s0: mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
3: enp2s0: mtu 1500 qdisc noop state DOWN group default qlen 1000
4: wlp3s0: mtu 1500 qdisc noop state DOWN group default qlen 1000

I won't describe the installation of Proxmox, it's pretty straightforward: create a bootable USB drive, boot it and follow the wizard. The configuration is very simple: no cluster, nothing special. Once the setup is ready, the hypervisor is able to boot automatically without a keyboard and a screen. My network setup is the following: the 1Gbits NIC is dedicated to management and has a fixed IP address. It will be available on my home network and, when traveling, I'll just need a crossover cable between my Macbook and the MiniPC. The 2.5Gbits NIC is dedicated to guests that need to be connected to the Internet.

Network schema

The lab used by students during the FOR610 class must be disconnected from the Internet and any other network for security reasons: we use it to analyze pieces of malware. The first thing to do is to create a new isolated network. In /etc/network/interfaces, add the following lines and restart the network:

auto vmbr1
iface vmbr1 inet static
address 10.0.0.1/24
bridge_ports none
bridge_stp off
bridge_fd 0

Then, I installed the two guests. Because SANS supports only the VMware hypervisor, the virtual machines are provided as VMware guests. The first step is to convert the disk images from VMDK to QCOW2. Because I don’t like to install specific tools on my Macbook, I’m a big fan of Docker containers. You can use a simple container that offers the qemu toolbox and directly convert the image:

$ docker run -v $(pwd):/data --rm heinedej/docker-qemu-utils \
      qemu-img convert \
               -f vmdk /data/REMnux-disk.vmdk \
               -O qcow2 /data/REMnux-disk.qcow2

Once the conversion is completed, transfer the .qcow2 file (use scp) into /root/imported-disks/ on your Proxmox host.

It's now time to create the two guests. Start with a standard config (assign resources like cores and memory depending on your future usage). Be sure to select the right bridge (the one created just above) for isolation. You will have to create a disk but we will delete it later; just create a disk of a few gigabytes.

My REMnux guest looks like this:

REMnux config

Note: I had to change the Display driver from “default” to “VMware compatible” to be able to boot the guest. Same for the SCSI controller.

And my REMWorkstation guest:

REMworkstation config

Once the guests are created, we must import the converted disks into the existing VMs. SSH to the Proxmox host and attach the disk images to the newly created guests:

$ cd /root/imported-disks
$ qm importdisk <vm-id> <disk>.qcow2 local-lvm

Detach (and delete) the original disk created during the initial configuration, change the boot order and boot the guests. The last step is to configure the network to allow network connectivity between them. Configure a fixed IP address on REMnux and on REMworkstation. Usually, I use the bridge network + VM ID: 10.0.0.100 & 10.0.0.101. Don’t forget to configure the REMnux IP address as DNS server and default gateway on REMworkstation!

Last steps:

  • Fine tune your hosts
  • Create your initial snapshot
  • Enable auto-start of both guests

Happy reversing!

Note: This setup can also be deployed in a cloud environment or on a colocation server.

The post Portable Malware Analyzis Lab appeared first on /dev/random.

November 10, 2021

Just when I was starting to get a good old-fashioned cold I heard this on the radio while in the car. It didn’t stop me from going into hibernation for a couple of days, but man what a great tune!

Source

I published the following diary on isc.sans.edu: “Shadow IT Makes People More Vulnerable to Phishing“:

Shadow IT is a real problem in many organizations. Behind this term, we speak about pieces of hardware or software that are installed by users without the approval of the IT department. In many cases, shadow IT is used because internal IT teams are not able to provide tools in time. Think about a user who needs to safely exchange files with partners and no tool is available. A change request will be created to deploy one but, with the lack of (time|money|resources), the project will take time. Unfortunately, the user needs the tool now, so an alternative path will be used like a cloud file sharing service… [Read more]

The post [SANS ISC] Shadow IT Makes People More Vulnerable to Phishing appeared first on /dev/random.