Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

June 17, 2021

Bluetooth Low Energy (BLE) devices have two ways to transfer data:

  • Connectionless: This is called "broadcasting" or "advertising": the device advertises its existence, its capabilities and some data, and every BLE device in the neighbourhood can pick this up.

  • With a connection: This requires the client device to connect to the server device and ask for the data.

Until recently, ESPHome only supported reading BLE data without a connection. In my book, I gave some examples of how to use this functionality. I also referred to pull request #1177 by Ben Buxton, which was only merged in ESPHome 1.18.0, after the book was published.

In this blog post, I'll show you how to read the battery level and button presses from a Gigaset keeper Bluetooth tracker, as well as the heart rate from a heart rate sensor, for instance the one in your fitness tracker.


These examples only work for ESP32 boards. The ESP8266 doesn't have Bluetooth Low Energy, and external BLE modules aren't supported.

Setting up a BLE client

If you want your ESPHome device to connect to another device using BLE, you first need to add a ble_client component, which requires an esp32_ble_tracker component. For instance:


esp32_ble_tracker:

ble_client:
  - mac_address: FF:EE:DD:CC:BB:AA
    id: gigaset_keeper

Just specify the BLE MAC address of the device you want to connect to, and give it an ID.


If you don't know the device's MAC address, just add the esp32_ble_tracker component and upload the firmware to your ESPHome device. It will start scanning for BLE devices and will show the devices it finds, with their name and MAC address. You can also scan with the mobile app nRF Connect for Android or iOS.

Reading a one-byte characteristic

After connecting to the device, you can create a BLE Client sensor to read one-byte characteristics such as the battery level from your device.

Each BLE server (the device you connect to) has various services and characteristics. The Bluetooth Special Interest Group has published a lot of standard services and their characteristics in their Specifications List. So if you want to read a device's battery level, consult the Battery Service 1.0 document. You see that it defines one characteristic, Battery Level, which "returns the current battery level as a percentage from 0% to 100%; 0% represents a battery that is fully discharged, 100% represents a battery that is fully charged."

Each service and characteristic has a UUID. The standard services and characteristics are defined in the Bluetooth SIG's Assigned Numbers specifications. These are 16-bit numbers. For instance, the Battery service has UUID 0x180f and the Battery Level characteristic has UUID 0x2a19. To read this in ESPHome, you add a sensor of the ble_client platform:

sensor:
  - platform: ble_client
    ble_client_id: gigaset_keeper
    name: "Gigaset keeper battery level"
    service_uuid: '180f'
    characteristic_uuid: '2a19'
    notify: true
    icon: 'mdi:battery'
    accuracy_decimals: 0
    unit_of_measurement: '%'

Make sure to refer in ble_client_id to the ID of the BLE client you defined before.

If you compile this, your ESPHome device connects to your Bluetooth tracker and subscribes to notifications for the battery level. 1

The Gigaset keeper has another one-byte characteristic that's easy to read, but it's not a standard one: the number of clicks on the button. By exploring the characteristics with nRF Connect, clicking the button and reading their values, you'll find the right one. You can then read this in ESPHome with:

sensor:
  - platform: ble_client
    ble_client_id: gigaset_keeper
    name: "Gigaset keeper button clicks"
    service_uuid: '6d696368-616c-206f-6c65-737a637a796b'
    characteristic_uuid: '66696c69-7020-726f-6d61-6e6f77736b69'
    notify: true
    accuracy_decimals: 0

Every time you click the button on the Gigaset keeper, you'll get an update with the click count. Of course you're probably not interested in the number of clicks, but just in the event that a click happens. You can act on the click with an on_value automation in the sensor, which can for instance toggle a switch in Home Assistant. This way you can use your Bluetooth tracker as a wireless button for anything you want.
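You can wire this up directly in the sensor. A sketch of such an automation (assuming the ESPHome device is connected to Home Assistant over the native API; the entity ID `switch.my_switch` is a placeholder):

```yaml
  - platform: ble_client
    ble_client_id: gigaset_keeper
    name: "Gigaset keeper button clicks"
    service_uuid: '6d696368-616c-206f-6c65-737a637a796b'
    characteristic_uuid: '66696c69-7020-726f-6d61-6e6f77736b69'
    notify: true
    accuracy_decimals: 0
    on_value:
      then:
        # Toggle a (placeholder) switch in Home Assistant on every click
        - homeassistant.service:
            service: switch.toggle
            data:
              entity_id: switch.my_switch
```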

Reading arbitrary characteristic values

In ESPHome 1.18 you could only read the first byte of a characteristic, and it would be converted to a float number. In ESPHome 1.19, David Kiliani contributed a nice pull request (#1851) that allows you to add a lambda function to parse the raw data. All received bytes of the characteristic are passed to the lambda as a variable x of type std::vector<uint8_t>. The function has to return a single float value.

You can use this for example to read the value of a heart rate sensor. If you read the specification of the Heart Rate Service, you'll see that the Heart Rate Measurement characteristic is more complex than just a number. There's a byte with some flags, the next one or two bytes is the heart rate, and then come some other bytes. So if you have access to the full raw bytes of the characteristic, you can read the heart rate like this:

sensor:
  - platform: ble_client
    ble_client_id: heart_rate_monitor
    id: heart_rate_measurement
    name: "${node_name} Heart rate measurement"
    service_uuid: '180d'  # Heart Rate Service
    characteristic_uuid: '2a37'  # Heart Rate Measurement
    notify: true
    lambda: |-
      uint16_t heart_rate_measurement = x[1];
      if (x[0] & 1) {
          heart_rate_measurement += (x[2] << 8);
      }
      return (float)heart_rate_measurement;
    icon: 'mdi:heart'
    unit_of_measurement: 'bpm'

Note how the heart rate is in the second byte (x[1]), and if the rightmost bit of x[0] is set, the third byte holds the most significant byte of the 16-bit heart rate value.
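The same parsing logic is easy to check outside ESPHome. A minimal Python sketch of the flag handling (just an illustration, not part of any ESPHome code):

```python
def parse_heart_rate(x: bytes) -> int:
    """Parse the BLE Heart Rate Measurement characteristic value.

    x[0] holds the flags byte; if its rightmost bit is set, the heart
    rate is a 16-bit little-endian value in x[1] and x[2], otherwise
    it's the single byte x[1].
    """
    heart_rate = x[1]
    if x[0] & 1:
        heart_rate += x[2] << 8
    return heart_rate

print(parse_heart_rate(bytes([0x00, 72])))          # 8-bit value: 72
print(parse_heart_rate(bytes([0x01, 0x2C, 0x01])))  # 16-bit value: 300
```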

However, this way you can still only return numbers. What if you want to read a string? For instance, the device name is accessible in characteristic 0x2a00. Luckily, there's a trick. First define a template text sensor:

text_sensor:
  - platform: template
    name: "${node_name} heart rate sensor name"
    id: heart_rate_sensor_name

And then define a BLE client sensor that accesses the raw bytes of the Device Name characteristic, converts it to a string and publishes it to the template text sensor:

sensor:
  - platform: ble_client
    ble_client_id: heart_rate_monitor
    id: device_name
    service_uuid: '1800'  # Generic Access Profile
    characteristic_uuid: '2a00'  # Device Name
    lambda: |-
      std::string data_string(x.begin(), x.end());
      id(heart_rate_sensor_name).publish_state(data_string);
      return (float)x.size();

The only weirdness is that you need to return a float in the lambda, because it's a sensor that's expected to return a float value.
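The byte-to-string conversion itself is trivial, as a quick Python sketch shows (the example bytes are hypothetical, spelling out a made-up device name):

```python
# Hypothetical raw bytes of the Device Name characteristic (0x2a00)
x = bytes([0x50, 0x6F, 0x6C, 0x61, 0x72])

device_name = x.decode("ascii")  # what gets published to the text sensor
print(device_name)               # Polar

# The ESPHome lambda still has to return a float; the length will do
print(float(len(x)))             # 5.0
```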

After someone on the Home Assistant forum asked how he could read the heart rate of his BLE heart rate monitor, I implemented all this in a project that displays the heart rate and device name of a BLE heart rate sensor on an M5Stack Core or LilyGO TTGO T-Display ESP32, my two go-to ESP32 boards. I published the source code on GitHub as koenvervloesem/ESPHome-Heart-Rate-Display.

This looks like this:


This is just one example of the many new possibilities with Bluetooth Low Energy in ESPHome 1.19.


Subscribing for notifications (with notify: true) uses less energy than continuously polling for a new value. With a notification, the server only pushes an update when the characteristic value changes.

June 15, 2021

PlatformIO supports the Digispark USB development board, a compact board with the ATtiny85 AVR microcontroller. The canonical example code that lets the built-in LED blink looks like this:

digispark_blink_platformio/main.c (Source)

/* Atmel AVR native blink example for the Digispark
 * Copyright (C) 2021 Koen Vervloesem (
 * SPDX-License-Identifier: MIT
 */
#include <avr/io.h>
#include <util/delay.h>

// Digispark built-in LED
// Note: on some models the LED is connected to PB0
#define PIN_LED PB1
#define DELAY_MS 1000

int main(void) {
  // Initialize LED pin as output
  DDRB |= (1 << PIN_LED);

  while (1) {
    // Toggle the LED and wait
    PORTB ^= (1 << PIN_LED);
    _delay_ms(DELAY_MS);
  }

  return 0;
}


This doesn't use the Arduino framework, but directly uses the native AVR registers.
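The register writes are plain bit manipulation: OR with a mask sets the bit that configures the pin as an output, and XOR flips it to toggle the LED. A Python sketch of the same masking (register values here are simulated, of course):

```python
PIN_LED = 1  # PB1

ddrb = 0b00000000      # data direction register: all pins start as inputs
ddrb |= 1 << PIN_LED   # set bit 1: PB1 becomes an output
assert ddrb == 0b00000010

portb = 0b00000000     # output register: LED off
portb ^= 1 << PIN_LED  # XOR toggles the LED on...
assert portb == 0b00000010
portb ^= 1 << PIN_LED  # ...and off again
assert portb == 0b00000000
```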

If you've bought your Digispark recently, it comes with a newer version of the Micronucleus bootloader, which isn't supported by the older micronucleus command bundled with PlatformIO.

If you've already upgraded the bootloader on your Digispark, you also have the newest version of the micronucleus command. So the only thing you need to do is make PlatformIO use this version when uploading your code. You can do this with the following platformio.ini:

[env:digispark-tiny]
platform = atmelavr
board = digispark-tiny
upload_protocol = custom
upload_command = micronucleus --run .pio/build/digispark-tiny/firmware.hex

The platform and board options are just the configuration options the PlatformIO documentation lists for the Digispark USB. By setting the upload_protocol to custom, you can supply your own upload command, and the micronucleus in this command refers to the one you've installed globally with sudo make install in /usr/local/bin/micronucleus.

After this, you can just build the code and upload it:

koan@tux:~/digispark_blink_platformio$ pio run -t upload
Processing digispark-tiny (platform: atmelavr; board: digispark-tiny)
Verbose mode can be enabled via `-v, --verbose` option
PLATFORM: Atmel AVR (3.1.0) > Digispark USB
HARDWARE: ATTINY85 16MHz, 512B RAM, 5.87KB Flash
DEBUG: Current (simavr) On-board (simavr)
 - tool-avrdude 1.60300.200527 (6.3.0)
 - toolchain-atmelavr 1.50400.190710 (5.4.0)
LDF: Library Dependency Finder ->
LDF Modes: Finder ~ chain, Compatibility ~ soft
Found 0 compatible libraries
Scanning dependencies...
No dependencies
Building in release mode
Checking size .pio/build/digispark-tiny/firmware.elf
Advanced Memory Usage is available via "PlatformIO Home > Project Inspect"
RAM:   [          ]   0.0% (used 0 bytes from 512 bytes)
Flash: [          ]   1.4% (used 82 bytes from 6012 bytes)
Configuring upload protocol...
CURRENT: upload_protocol = custom
Uploading .pio/build/digispark-tiny/firmware.hex
> Please plug in the device ...
> Device is found!
connecting: 33% complete
> Device has firmware version 2.5
> Device signature: 0x1e930b
> Available space for user applications: 6650 bytes
> Suggested sleep time between sending pages: 7ms
> Whole page count: 104  page size: 64
> Erase function sleep duration: 728ms
parsing: 50% complete
> Erasing the memory ...
erasing: 66% complete
> Starting to upload ...
writing: 83% complete
> Starting the user app ...
running: 100% complete
>> Micronucleus done. Thank you!
====================================== [SUCCESS] Took 3.99 seconds ======================================

I've created a GitHub project with this configuration, the example code, a Makefile and a GitHub Action to automatically check and build the code: koenvervloesem/digispark_blink_platformio. This can be used as a template for your own AVR code for the Digispark with PlatformIO.

June 14, 2021

The Digispark USB development board has an ATtiny85 microcontroller. (source: Digistump)

Recently I have been playing with the Digispark development board with the ATtiny85 AVR microcontroller. The cool thing is that it's only 2 by 2 cm big and it plugs right into a USB port.

But when I flashed the canonical blink example to the microcontroller, I noticed that the program didn't blink continuously with the fixed interval I had set up. For the first five seconds after plugging in the Digispark, the LED blinked in a fast heartbeat pattern (which seems to be normal: this is the bootloader staying in programming mode for five seconds), then my program slowly blinked for a few seconds, then it was five seconds of heartbeat pattern again, then slower blinks again, and so on. It seemed the microcontroller was stuck in a continuous boot loop.

Thinking about the previous time I solved a microcontroller problem by upgrading the bootloader, I decided to try the same approach here. The Digispark is sold with the Micronucleus bootloader, but the boards I bought 1 apparently had an older version. Upgrading the bootloader is easy on Ubuntu.

First build the micronucleus command-line tool:

git clone
cd micronucleus/commandline
sudo apt install libusb-dev
sudo make install

The Micronucleus project offers firmware files to upgrade older versions, so then upgrading the existing firmware was as easy as:

cd ..
micronucleus --run firmware/upgrades/upgrade-t85_default.hex

Then insert the Digispark to write the firmware.

After flashing the blink example, I didn't see the boot loop this time. And the Micronucleus developers apparently got rid of the annoying heartbeat pattern in the newer version.


Only after I received the 5-pack did I realize that I had bought a Chinese clone. So it's possible that the original product sold by Digistump doesn't have the issue I'm describing here.

My current role within the company I work for is “domain architect”, part of the enterprise architects team. The domain I am accountable for is “infrastructure”, which can be seen as a very broad one. Now, I had been maintaining an overview of our IT services before I reached that role, mainly out of a deep interest in the subject, as well as to further optimize my efficiency.

Becoming a domain architect allows me to use the insights I’ve since gathered to try and give appropriate advice, but it also now requires me to maintain a domain architecture. This structure is going to be the starting point of it, although it is not the be-all and end-all of what I would consider a domain architecture.

A single picture doesn’t say it all

To start off my overview, I needed to structure the hundreds of technology services that I want to keep an eye on in a way that lets me quickly find them again, as well as present to other stakeholders what infrastructure services are about. This structure, while not perfect, is shown in the figure below. Occasionally, I move one or more service groups left or right, but the main intention is just to have a structure available.

Overview of the IT services

Figures like these often come about in mapping exercises, or capability models. A capability model that I recently skimmed through is the IF4IT Enterprise Capability Model. I stumbled upon this model after searching for some reference architectures on approaching IT services, including a paper titled IT Services Reference Catalog by Nelson Gama, Maria do Mar Rosa, and Miguel Mira da Silva.

Capability models, or service overviews like the one I presented, do not fit each and every organization well. When comparing the view I maintain with others (or the different capability and service references out there), I notice two main distinctions: grouping, and granularity.

  • Certain capabilities might be grouped one way in one reference, and use a totally different grouping in another. A database system might be part of a “Databases” group in one, a “Data Management” group in another, or even “Information Management” in a third. Often, this grouping also reveals the granularity that the author wants to pursue.
    Grouping allows for keeping things organized and easier to explain, but has no structural importance. Of course, a well-chosen grouping also allows you to tie principles and other architectural concerns to the groups themselves, rather than to individual services. But that still falls under the explainability part.

  • The granularity is more about how specific a grouping is. In the example above, “Information Management” is the most coarse-grained grouping, whereas “Databases” might be a very specific one. Granularity can convey more insights in the importance of services, although it can also be due to historical reasons, or because an overview started from one specific service and expanded later. In that case, it is very likely that the specific service and its dependencies are more elaborately documented.

In the figure I maintain, the grouping is often based both on the extensiveness of a group (if a group contains far too many services, I might want to see if I can split up the group) as well as historical and organizational choices. For instance, if the organization has a clear split between network oriented teams and server oriented teams, then the overview will most likely convey the same message, as we want to have the overview interpretable by many stakeholders - and those stakeholders are often aware of the organizational distinctions.

Services versus solutions

I try to keep track of the evolutions of services and solutions within this overview. Now, the definition of a “service” versus “solution” does warrant a bit more explanation, as it can have multiple meanings. I even use “service” for different purposes depending on the context.

For domain architecture, I consider an “infrastructure service” as a product that realizes or supports an IT capability. It is strongly product oriented (even when it is internally developed, or a cloud service, or an appliance) and makes a distinction between products that are often very closely related. For instance, Oracle DB is an infrastructure service, as is the Oracle Enterprise Manager. The Oracle DB is a product that realizes a “relational database” capability, whereas OEM is a “central infrastructure management” capability.

The reason I create distinct notes for these is that they have different life cycles, might have different responsible teams in the organization, different setups, etc. Hence, I generally do not consider components (parts of products) separately, although there are definitely cases where it makes sense to treat certain components separately from the products in which they are provided.

All of the several hundred infrastructure services that the company uses are documented in this overview.

Alongside these infrastructure services I also maintain a solution overview. The grouping is exactly the same as the infrastructure services, but the intention of solutions is more from a full internal offering point of view.

Within solution architectures, I tend to focus on the choices that the company has made and the design that follows from them. Many solutions are considered ‘platforms’ on top of which internal development, third-party hosting or other capabilities are provided. Within the solution, I describe how the various infrastructure services interact and work together to make the solution a reality.

A good example here is the mainframe platform. Whereas the mainframe itself is definitely an infrastructure service, how we internally organize the workload and the runtimes (such as the LPARs), how it integrates with the security services, database services, enterprise job scheduling, etc. is documented in the solution.

Not all my domain though

Not all services and solutions that I track are part of ‘my’ domain though. For instance, at my company, we make a distinction between the infrastructure-that-is-mostly-hosting, and infrastructure-that-is-mostly-workplace. My focus is on the ‘mostly hosting’ orientation, whereas I have a colleague domain architect responsible for the ‘mostly workplace’ side of things.

But that’s just about accountability. Not knowing how the other solutions and services function, how they are set up, etc. would make my job harder, so tracking their progress and architecture is effort that pays off.

In a later post, I’ll explain what I document about services and solutions and how I do it, when I have some time to spare.

June 11, 2021

I just upped Autoptimize 2.9 beta to version 4, which is likely to be the last version before the official 2.9 release (eta end June/ early July).

Main new features:

You can download the beta from GitHub (do disable 2.8.x before activating 2.9-beta-4) and you can log any issues/bugs over at

Looking forward to your feedback!

I published the following diary on “Keeping an Eye on Dangerous Python Modules“:

With Python getting more and more popular, especially on Microsoft Operating systems, it’s common to find malicious Python scripts today. I already covered some of them in previous diaries. I like this language because it is very powerful: You can automate boring tasks in a few lines. It can be used for offensive as well as defensive purposes, and… it has a lot of 3rd party “modules” or libraries that extend its capabilities. For example, if you would like to use Python for forensics purposes, you can easily access the registry and extract data… [Read more]

The post [SANS ISC] Keeping an Eye on Dangerous Python Modules appeared first on /dev/random.

June 09, 2021

At my workplace, I jokingly refer to the three extra layers on top of the OSI network model as a way to describe the difficulties of discussions or cases. These three additional layers are Financial Layer, Politics Layer and Religion Layer, and the idea is that the higher up you go, the more challenging discussions will be.

June 08, 2021

Fastly had some issues today.

Here is a top 10 of the number of complainers on downdetector of some random internet services...

10. github 199
9. spotify 264
8. stackoverflow 351
7. twitter 368
6. discord 906
5. nytimes 2069
4. hbomax 2245
3. cnn 3432
2. twitch 5438
1. reddit 18628

Does this measure popularity? Probably not.

Does it measure anything useful? Nope.

June 06, 2021

I have a couple of Dragino's LoRaWAN temperature sensors in my garden. 1 The LSN50 that I bought came with a default uplink interval of 30 seconds. Because the temperature outside doesn't change that fast and I want to have a long battery life, I wanted to change the uplink interval to 10 minutes.

Dragino's firmware accepts commands as AT commands over a USB to TTL serial line (you need to physically connect to the device's UART pins for that) or as LoRaWAN downlink messages (which you can send over the air via the LoRaWAN gateway). All Dragino LoRaWAN products support a common set of commands explained in the document End Device AT Commands and Downlink Commands.

The Change Uplink Interval command starts with the command code 0x01 followed by three bytes with the interval time in seconds. So to set the uplink interval to 10 minutes, I first convert 600 seconds to its hexadecimal representation:

$ echo 'ibase=10;obase=16;600' | bc
258

So I have to send the command 0x01000258.
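The same payload can be assembled programmatically. A Python sketch (not part of Dragino's tooling, just a sanity check):

```python
# Dragino "Change Uplink Interval" downlink: command code 0x01 followed
# by the interval in seconds as three big-endian bytes
interval_seconds = 600
payload = bytes([0x01]) + interval_seconds.to_bytes(3, "big")
print(payload.hex())  # 01000258
```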

My LoRaWAN gateway is connected to The Things Network, a global open LoRaWAN network. You can send a downlink message on the device's page in The Things Network's console (which is the easiest way), but you can also do it with MQTT (which is a bit harder). The Things Network's MQTT API documentation shows you the latter. 2 The MQTT topic should be in the form <AppID>/devices/<DevID>/down and the message looks like this:

{
  "port": 1,
  "confirmed": false,
  "payload_raw": "AQACWA=="
}

Note that you should supply the payload in Base64 format. You can easily convert the hexadecimal payload to Base64 on the command line:

$ echo 01000258|xxd -r -p|base64
AQACWA==

Enter this as the value for the payload_raw field.
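If you prefer, the same hexadecimal-to-Base64 conversion works in a few lines of Python:

```python
import base64

# The downlink payload from above, as raw bytes
payload = bytes.fromhex("01000258")
print(base64.b64encode(payload).decode())  # AQACWA==
```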

So then the full command to change the uplink interval of the device to 10 minutes becomes:

$ mosquitto_pub -h -p 8883 --cafile ttn-ca.pem -u AppID -P AppKey -d -t 'AppID/devices/DevID/down' -m '{"port":1,"payload_raw":"AQACWA=="}'

This downlink message will be scheduled by The Things Network's application server to be sent after the next uplink of the device. After it has been received, the device changes its uplink interval.


I highly recommend Dragino. Their products are open hardware and use open source firmware, and their manuals are extremely detailed, covering countless application scenarios.


Note that I'm still using The Things Network v2, as my The Things Indoor Gateway can't be migrated yet to The Things Stack (v3). Consult the MQTT documentation of The Things Stack if you're already using v3.

June 04, 2021

I published the following diary on “Russian Dolls VBS Obfuscation“:

We received an interesting sample from one of our readers (thanks Henry!) and we like this. If you find something interesting, we are always looking for fresh meat! Henry’s sample was delivered in a password-protected ZIP archive and the file was a VBS script called “presentation_37142.vbs” (SHA256:2def8f350b1e7fc9a45669bc5f2c6e0679e901aac233eac63550268034942d9f). I uploaded a copy of the file on MalwareBazaar… [Read more]

The post [SANS ISC] Russian Dolls VBS Obfuscation appeared first on /dev/random.

June 03, 2021

When an organization has an extensively large and heterogeneous infrastructure, infrastructure architects will attempt to make it less complex and chaotic by introducing and maintaining a certain degree of standardization. While many might consider standardization to be rationalization (standardizing on a single database technology, a single vendor for hardware, etc.), rationalization is only one of the many ways in which standards can reduce such complexity.

In this post, I'd like to point out two other, very common ways to standardize the IT environment, without really considering a rationalization: abstraction and virtualization.

June 01, 2021

Bluetooth Low Energy (BLE) is a rewarding protocol to work with as a hobby programmer. All the specifications can be found on the Bluetooth SIG's website, and almost every development platform has one or more BLE libraries.

In an article for Computer!Totaal, I explain how to read data from BLE sensors in your own Arduino code running on an ESP32 microcontroller, and how to visualize the result on the display of an M5Stack Core.

A BLE device can communicate in two ways: by broadcasting data to everyone nearby, or by establishing a one-to-one connection with another device and sending data over it. In the article, I work out an example program for both ways of communicating.

Broadcasting is typically done by environmental sensors. The RuuviTag is such a sensor: it measures temperature, humidity, air pressure and motion. 1 The sensor broadcasts its data every second over BLE as manufacturer data. The protocol is fully documented. In the Computer!Totaal article, I explain how to decode the temperature from this data and show it on the display of the M5Stack Core.

Other devices use connections. The Bluetooth SIG has defined quite a number of services for connections, whose specifications you can simply consult. A popular one is the Heart Rate service. In the article, I describe how to write code that uses the NimBLE-Arduino library to connect to a heart rate sensor and show your heart rate in real time on the display of the M5Stack Core. Ideal for workouts! 2


The nRF52832-based RuuviTag is open hardware, and its firmware is open source as well.


I have meanwhile also written a version of the code for the newer M5Stack Core2: M5Core2 Heart Rate Display.

May 30, 2021

I think that moving the handling of libel and defamation cases from the assize court to the correctional courts is a good thing.

I understand that the fact that, assizes being the court that handled this, acted as a kind of buffer to safeguard freedom of expression. The commas are there because these are moments to pause and think.

But that is, and was, really an abuse of a kind of government incompetence (or rather, court incompetence): namely, that the assize court had too little capacity to judge these kinds of crimes and, when (and only when) things crossed the line, to convict.

In practice, it meant that almost anything was allowed. That is no longer a good idea today. Not everything is allowed. Your freedom to, say, tell a lie ends where that freedom causes harm. And that is so even when what you say is not a lie, and especially when what is said has no value to society.

It is then up to a judge to stop you. Here in Belgium, we live under Belgian law, not under your absolute freedom, because our modest freedom depends on Belgian law. In other words, however controversial this may be (and let me be clear that I know it is controversial), your imagined right to harm another is not always equal to your right to express it. Not here in Belgium. Not always. Almost always, but not always.

Context matters (so humor can rightly go very far). But not always.

I am quite concerned about this. But that doesn't mean this can't or shouldn't be dealt with. It strikes hard at Enlightenment philosophy, which holds that almost everything must be sayable. But to be able to say everything, we must also accept, certainly with the rise of new media, that a flood of lies can irreparably damage the reputation of honorable people. So the Enlightenment as a philosophy was sometimes naive (although it held that this would normally correct itself).

We should not forget the foundations of free speech, though. In my opinion.

May 28, 2021

I published the following diary on “Malicious PowerShell Hosted on“:

Google has an incredible portfolio of services. Besides the classic ones, there are less known services and… they could be very useful for attackers too. One of them is Google Apps Script. Google describes it like this:

Apps Script is a rapid application development platform that makes it fast and easy to create business applications that integrate with G Suite.

Just a quick introduction to this development platform to give you an idea about the capabilities. If, as the description says, it is used to extend the G Suite, it can of course perform basic actions like… hosting and delivering some (malicious) content… [Read more]

The post [SANS ISC] Malicious PowerShell Hosted on appeared first on /dev/random.

May 27, 2021

The pandemic was a bit of a mess for most FLOSS conferences. The two conferences that I help organize -- FOSDEM and DebConf -- are no exception. In both conferences, I do essentially the same work: as a member of both video teams, I manage the postprocessing of the video recordings of all the talks that happened at the respective conference(s). I do this by way of SReview, the online video review and transcode system that I wrote, which essentially crowdsources the manual work that needs to be done, and automates as much as possible of the workflow.

The original version of SReview consisted of a database, a (very basic) Mojolicious-based webinterface, and a bunch of perl scripts which would build and execute ffmpeg command lines using string interpolation. As a quick hack that I needed to get working while writing it in my spare time in half a year, that approach was workable and resulted in successful postprocessing after FOSDEM 2017, and a significant improvement in time from the previous years. However, I did not end development with that, and since then I've replaced the string interpolation by an object oriented API for generating ffmpeg command lines, as well as modularized the webinterface. Additionally, I've had help reworking the user interface into a system that is somewhat easier to use than my original interface, and have slowly but surely added more features to the system so as to make it more flexible, as well as support more types of environments for the system to run in.

One of the major issues that still remains with SReview is that the administrator's interface is pretty terrible. I had been planning on revamping that for 2020, but then massive amounts of people got sick, travel was banned, and both the conferences that I work on were converted to online-only conferences. These have some very specific requirements; e.g., both conferences allowed people to upload a prerecorded version of their talk, rather than doing the talk live; since preprocessing a video is, technically, very similar to postprocessing it, I adapted SReview to allow people to upload a video file that it would then validate (in terms of length, codec, and apparent resolution). This seems easy to do, but I decided to implement this functionality so that it would also allow future use for in-person conferences, where occasionally a speaker requests that modifications be made to the video file in a way that SReview is unable to do. This made it marginally more involved, but at least will mean that a feature which I had planned to implement some years down the line is now already implemented. The new feature works quite well, and I'm happy I've implemented it in the way that I have.

To keep the "upload" processing and the "post-event" processing from being confused, however, I decided to import the conference schedules twice: once as the conference itself, and once as a shadow version of that conference for the prerecordings. That way, I could track the progress of the prerecording through the system completely separately from the progress of the postprocessing of the video (which adds opening/closing credits, and transcodes to multiple variants of the same video). However, schedule parsing had not yet been implemented in a generic way; since that made doubling the schedule in that way rather complex, I decided to bite the bullet and (finally) implement schedule parsing generically. Currently, schedule parsers exist for two formats (Pentabarf XML and the Wafer variant of that same format, which is almost, but not quite, entirely the same). The API for that is quite flexible, and I'm happy with the way things have been implemented there. I've also implemented a set of "virtual" parsers, which allow mangling the schedule in various ways (by either filtering out talks that we don't want, or by generating the shadow version of the schedule that I talked about earlier).
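To illustrate the idea of concrete parsers plus "virtual" wrapping parsers: the sketch below is in Python rather than SReview's actual Perl, and every name in it is invented, but it shows how a filtering parser and a shadow-generating parser can wrap any other parser behind a common interface.

```python
class ScheduleParser:
    """Base interface: concrete parsers (e.g. one per schedule format,
    such as Pentabarf XML) return a list of talk dicts."""

    def talks(self):
        raise NotImplementedError


class ListParser(ScheduleParser):
    """Trivial concrete parser backed by an in-memory list, standing in
    for a real format parser."""

    def __init__(self, talks):
        self._talks = talks

    def talks(self):
        return list(self._talks)


class FilterParser(ScheduleParser):
    """'Virtual' parser: wraps another parser and drops unwanted talks."""

    def __init__(self, inner, keep):
        self.inner, self.keep = inner, keep

    def talks(self):
        return [t for t in self.inner.talks() if self.keep(t)]


class ShadowParser(ScheduleParser):
    """'Virtual' parser: emits a shadow copy of each talk, so the
    prerecording pipeline can be tracked separately from postprocessing."""

    def __init__(self, inner):
        self.inner = inner

    def talks(self):
        shadows = []
        for t in self.inner.talks():
            s = dict(t)
            s["slug"] = t["slug"] + "-prerecord"
            shadows.append(s)
        return shadows
```

Because the wrappers accept any `ScheduleParser`, they compose: a shadow schedule of a filtered schedule is just `ShadowParser(FilterParser(base, keep))`.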

While the SReview settings have reasonable defaults, occasionally the output of SReview is not entirely acceptable, due to more complicated matters that result in encoding artifacts. As a result, the DebConf video team has been doing a final review step, completely outside of SReview, to ensure that no such encoding artifacts remain. That seemed suboptimal, so recently I've been working on integrating that step into SReview as well. First tests have been run and seem acceptable, but there are still a few loose ends to finalize.

As part of this, I've also reworked the way comments are entered into the system. Previously, the presence of a comment signalled that the video had some problems that an administrator needed to look at. Unfortunately, that caused some confusion, with some people even thinking it was a good place to enter a "thank you for your work" style of comment... which it obviously isn't. Turning it into a "comment log" system instead fixes that, and also allows for better two-way communication between administrators and reviewers. Hopefully that'll improve things in that area as well.

Finally, the audio normalization in SReview -- for which I've long used bs1770gain -- is having problems. First of all, bs1770gain will sometimes alter the timing of the video or audio file it's given, which is very problematic if I want to process it further. There is an ffmpeg loudnorm filter which implements the same algorithm, so that should be easier to use. Secondly, the author of bs1770gain is a strange character that I'd rather not be involved with. Before I knew about the loudnorm filter I didn't really have a choice, but now I can just rip bs1770gain out and replace it with the loudnorm filter. That will fix various other bugs in SReview too, because SReview relies on behaviour that isn't actually there (which I didn't know at the time I wrote it).
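The loudnorm filter can be driven in a two-pass mode: a first pass measures the input loudness, and a second pass applies the measured values linearly. A hedged sketch of building such command lines follows; this is not SReview's actual code (the helper function and its defaults are invented), but the filter options themselves (`I`, `LRA`, `TP`, the `measured_*` values, and `linear`) are standard ffmpeg loudnorm options.

```python
def loudnorm_cmd(infile, outfile, i=-23.0, lra=7.0, tp=-2.0, measured=None):
    """Build an ffmpeg command line applying the EBU R128 loudnorm filter.

    When `measured` (a dict with the values reported by a first analysis
    pass) is given, the filter runs in linear two-pass mode; otherwise it
    normalizes dynamically in a single pass.
    """
    opts = [f"I={i}", f"LRA={lra}", f"TP={tp}"]
    if measured:
        opts += [
            f"measured_I={measured['input_i']}",
            f"measured_LRA={measured['input_lra']}",
            f"measured_TP={measured['input_tp']}",
            f"measured_thresh={measured['input_thresh']}",
            "linear=true",
        ]
    # Return the argv list, ready for subprocess.run()
    return ["ffmpeg", "-i", infile,
            "-af", "loudnorm=" + ":".join(opts),
            outfile]
```

Building the command as an argv list (instead of an interpolated string) sidesteps quoting problems, which is essentially what moving from string interpolation to an object-oriented command-line API buys you.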

All in all, the past year-and-a-bit has seen a lot of development for SReview, with multiple features being added and a number of long-standing problems being fixed.

Now if only the pandemic would subside, allowing the whole "let's do everything online only" wave to cool down a bit, so that I can finally make time to implement the admin interface...

Bye, Freenode

I have been on Freenode for about 20 years, since my earliest involvement with Debian in about 2001. When Debian moved to OFTC for its IRC presence way back in 2006, I hung around on Freenode somewhat since FOSDEM's IRC channels were still there, as well as for a number of other channels that I was on at the time (not anymore though).

This is now over and done with. What's happening with Freenode is a shitstorm -- one that could easily have been avoided if one particular person had stepped down a few days ago, but that is by now a lost cause.

At any rate, I'm now lurking, mostly in FOSDEM channels, on Libera.Chat under my usual nick, as well as on OFTC.

Today is the first meeting of the LibreBMC project, an initiative of the OpenPOWER Foundation. We are starting this project to build an open source hardware BMC. It will use the POWER ISA as a soft core running on an FPGA, like the Lattice ECP5 or the Xilinx Artix-7. Our goal is to design a board that is OCP DC-SCM compatible and that can easily be built on current FPGAs. While we are starting with these two FPGAs in mind and will be focussing on one or two soft cores, we want the design to be modular, so that it can support different soft cores and run at least LSB and OpenBMC. You can read more in the OPF announcement:

I am very happy to be part of this initiative and will be contributing as much as I can. The initiative comes from the OpenPOWER Foundation, but it runs as a public SIG, meaning anybody can help out. If you want to contribute, participate, or observe, feel free to follow any updates on: We also have a project page available, where we'll update the git repo links, discussion links, and any other information:

A small anecdote that isn't mentioned in many articles is how the name came to be. Initial discussions started at the beginning of this year, 2021, when we, the OpenPOWER Foundation steering committee, were gauging interest and looking for founding members. In our discussions it quickly became very confusing: we were talking about the BMC using the term OpenBMC, then switched to "OpenBMC hardware" or "OpenBMC software", which led to even more confusion. I started using the name LibreBMC for the hardware project we wanted to start, and OpenBMC for the existing BMC software stack. It was quickly adopted by the rest of the team and made it out into the wild. While we engineers often struggle to come up with good names for our projects, this one was easy: it was inspired by LibreOffice, Libre-SOC, and the many other projects that are open/libre, which is what we're also doing.

May 26, 2021

Last week my new book was published: Getting Started with ESPHome: Develop your own custom home automation devices. It demonstrates how to create your own home automation devices with ESPHome on an ESP32 microcontroller board.

I always like to look at the big picture. That's why I want to take some time to talk about what ESPHome is, why you should use it and what you need.

Configuring instead of programming

The ESP8266 and its successor, the ESP32, are a series of low-cost microcontrollers with integrated Wi-Fi (for both series) and Bluetooth (for the ESP32), produced by Espressif Systems. The maker community quickly adopted these microcontrollers for tasks where an Arduino didn't suffice. 1

You can program the ESP8266 and ESP32 using Espressif's ESP-IDF SDK, the ESP32 Arduino core, or MicroPython. Arduino and MicroPython lower the bar significantly, but it still takes some programming experience to build solutions with these microcontrollers.

One of the domains in which the ESP8266 and ESP32 have become popular is in the DIY (do-it-yourself) home automation scene. You just have to connect a sensor, switch, LED, or display to a microcontroller board, program it, and there you have it: your customised home automation device.

However, "program it" isn't as straightforward as it sounds. For instance, if you're using the Arduino environment, which has a lot of easy-to-use libraries, you still have to know your way around C++.

Luckily there are a couple of projects to make it easier to create firmware for ESP8266 or ESP32 devices for home automation. One of these is ESPHome.

On its homepage, the ESPHome developers describe it as: "ESPHome is a system to control your ESP8266/ESP32 by simple yet powerful configuration files and control them remotely through Home Automation systems."

The fundamental idea of ESPHome is that you don't program your ESP8266 or ESP32 device, but configure it. Often you only have to configure which pins you have connected to a component, such as a sensor. You don't have to initialize the sensor, read its values in a loop, and process them.

Configuration is a completely different mindset than programming. It lowers the bar even more. With ESPHome, everyone can make home automation devices. 2

Essentially ESPHome creates C++ code based on your configuration. The process looks like this:


So when you write a YAML file with your device's configuration, ESPHome generates C++ code from it. More specifically, ESPHome creates a PlatformIO project using the Arduino framework. PlatformIO then builds the C++ code, and esptool uploads the resulting firmware to your device.

You don't have to know anything about what's happening under the hood. You just have to write the configuration of your device in a YAML file and memorise a small number of commands to let ESPHome do the rest.
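For instance, a configuration for a simple temperature and humidity sensor might look like the sketch below. The device name, Wi-Fi credentials, and the DHT22 pin are placeholders; check the ESPHome documentation for the exact options of your board and components.

```yaml
esphome:
  name: livingroom          # placeholder device name
  platform: ESP32
  board: esp32dev

wifi:
  ssid: "MyNetwork"         # placeholder credentials
  password: "MyPassword"

# Enable the native API and over-the-air updates
api:
ota:

sensor:
  - platform: dht
    pin: GPIO23             # placeholder pin
    model: DHT22
    temperature:
      name: "Living Room Temperature"
    humidity:
      name: "Living Room Humidity"
    update_interval: 60s
```

From this YAML file alone, ESPHome generates the C++ project, builds it with PlatformIO, and flashes the result.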

The advantages of ESPHome

Why use ESPHome? The first reason is clear from the project's description: because you don't need to be able to program. Even if you're a programmer, ESPHome offers many advantages:

Works completely locally

Many commercial Wi-Fi-based home automation devices need a connection to a cloud service of the manufacturer. In contrast, ESPHome devices work locally and can communicate with a local home automation system such as Home Assistant or an MQTT-based home automation system.

Offers on-device automations

Many home automation systems use a central gateway that contains all the logic, with automations like "if the sun goes down, close the blinds." In contrast, ESPHome offers powerful on-device automations. Your devices can work independently from a home automation gateway, so they keep working if they lose Wi-Fi access or if your home automation gateway crashes.
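The blinds example could be expressed as an on-device automation roughly like the sketch below. This assumes ESPHome's sun component and a cover with the id `living_room_blinds` defined elsewhere in the configuration; the coordinates and ids are placeholders, so consult the ESPHome documentation for the exact syntax.

```yaml
time:
  - platform: sntp          # the sun component needs a time source

sun:
  latitude: 50.85°          # placeholder coordinates
  longitude: 4.35°
  on_sunset:
    then:
      # Runs on the device itself, even if Wi-Fi or the gateway is down
      - cover.close: living_room_blinds
```

Because the trigger and action both live on the microcontroller, no gateway is involved at all.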

Offers over-the-air updates

ESPHome includes out-of-the-box over-the-air (OTA) update functionality. This makes it easy to centrally manage your ESPHome devices and update the firmware. This means you don't have to go around your house with your laptop to connect a serial cable to each device and flash the firmware.

Supports a lot of components

ESPHome supports many components out-of-the-box: several types of sensors, switches, and displays (even e-paper displays) are available with just a couple of configuration lines. The list of supported components is growing with every release.

Has extensive documentation

The developers have documented every component in ESPHome, and this documentation is quite good.

Is customisable

Although you create ESPHome firmware by writing a configuration file, ESPHome doesn't hide anything from you. It's still possible to add custom components that you write in C++. You can even look at the C++ code that ESPHome generates and change it.

What hardware do you need?

ESPHome creates custom firmware for the ESP8266 and ESP32 microcontrollers, so you need one of these. There are many types of boards for both microcontrollers, varying in the amount of flash memory, RAM, and available pins. Some of them even come with extras such as a built-in display (OLED, TFT, or e-paper), battery, or camera.

ESPHome doesn't support all features of all boards out-of-the-box. Technically, all ESP8266/ESP32 devices should be able to run ESPHome. Some features just aren't supported yet.

Your first choice is between the ESP8266 and the ESP32. If you're buying a device at present, the choice is simple: the ESP32. It is much more capable than its predecessor, with a faster processor, more memory, more peripherals, and added Bluetooth.

Then comes the choice of board. Espressif has some development boards, and many other companies are making them too. There are even complete kits such as the M5Stack series: ESP32 development boards in a case with a display, buttons, MicroSD card slot, and speaker, ready to use in your living room. These are currently my favourite devices to use with ESPHome. If you don't need something with as nice a finish as the M5Stack devices, the TTGO T-Display ESP32 made by LilyGO is my development board of choice: it has an integrated 1.14-inch TFT display.


Other interesting devices to run ESPHome on are devices from manufacturers such as Sonoff and Shelly. These come with firmware that works with the manufacturer's cloud services. You can however replace the firmware with ESPHome. This unlocks the full potential of these devices and lets you use them in your local home automation system without any link to a cloud system.

What software do you need?

You can use ESPHome to create a fully autonomous microcontroller project --- for example, a plant monitor that turns on an LED if the plant's soil is too dry. However, if you don't publish the plant's status over the network, this would be a waste of the ESP32's capabilities. The main use cases of ESPHome are:

  • To send a device's sensor measurements to a home automation gateway.

  • To remotely control a device's lights or switches from a home automation gateway.

ESPHome supports two ways of communication between your device and the home automation gateway:

Native API

The ESPHome native API is a highly optimised network protocol using Google's protocol buffers. It's meant to be used with Home Assistant --- a popular open-source home automation system.


MQTT

MQTT (Message Queuing Telemetry Transport) is an OASIS standard messaging protocol designed with a lightweight publish/subscribe approach for messages. All your ESPHome devices then communicate with an MQTT broker such as Eclipse Mosquitto.

From the beginning (when it was still called esphomeyaml), the ESPHome project has been tightly integrated with Home Assistant, so the ESPHome developers prefer the native API. However, MQTT is fully supported, allowing your devices to communicate with many other home automation gateways, as MQTT is a popular standard. 3

Do you need a development environment?

With ESPHome you don't program your devices but configure them. However, you still need something that looks like a "development" environment. When your device configurations are simple, you could do without one, but the more complex they become, the more you'll need all the help you can get.

This doesn't mean you have to install a full-blown Integrated Development Environment (IDE). You should only need a couple of programs:

An editor

You could make do with a simple text editor such as Notepad (Windows), TextEdit (macOS), or the default text editor on your Linux distribution. However, having an editor with syntax highlighting for YAML files is easier. Some examples are Notepad++ and Sublime Text. If you're a command-line user on Linux, both vim and Emacs work fine. 4 Use whatever you like, because your editor is an important tool to work with ESPHome.

A YAML linter

A linter is a program that checks your file for correct syntax. An editor with syntax highlighting has a linter built in, but you can also run one standalone. A good YAML linter is the Python program yamllint. Not only does it check for syntax validity, it also catches weird things like key repetitions, as well as cosmetic problems such as excessive line length, trailing spaces, and inconsistent indentation. ESPHome includes its own linter, specifically targeted at finding errors in ESPHome configurations. The two linters are complementary.
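To illustrate the kind of cosmetic checks such a linter performs, here is a minimal, self-contained sketch. This is not yamllint itself, just two of the checks it performs, reimplemented for illustration.

```python
def lint_lines(lines):
    """Tiny illustration of the cosmetic checks a YAML linter performs.

    Reports (line number, message) pairs for trailing whitespace and
    tab-based indentation, two of the problems a tool like yamllint flags.
    """
    problems = []
    for number, line in enumerate(lines, start=1):
        text = line.rstrip("\n")
        if text != text.rstrip():
            problems.append((number, "trailing spaces"))
        if text.startswith("\t"):
            problems.append((number, "tab used for indentation"))
    return problems
```

Running it over a file with a trailing space on line 2 and a tab-indented line 3 reports exactly those two problems.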

If you're used to developing in an IDE, an interesting alternative is the ESPHome plugin for Visual Studio Code. 5 This plugin provides validation and completion of what you type in an ESPHome YAML file. It also shows tooltips with help when you hover over keywords in the configuration.


Basic Arduino models don't have network connectivity, which limits their use for home automation and IoT applications.


Note that you can still add your own C++ code to program ESPHome devices if you like.


I prefer MQTT, and my previous book, Control Your Home with Raspberry Pi: Secure, Modular, Open-Source and Self-Sufficient explains how to create a home automation system centered around MQTT. ESPHome fits perfectly in this approach, and this is how I'm using it at home.


I'm an avid vim user.


If you're worried about the telemetry that Microsoft has enabled by default in Visual Studio Code, download VSCodium. This project builds Microsoft's vscode source code without the telemetry.

May 25, 2021

The FOSDEM IRC channels have been moved to the Libera.Chat network. Please join us on the #fosdem channel there. All other #fosdem-* channels and the Matrix bridge will be available soon. As some of our previous channels on the FreeNode network have been taken over by the new staff, these channels are no longer affiliated with FOSDEM. The FOSDEM organisation is saddened by this act of hostility towards the community. We urge our users to leave these channels as soon as possible and move to the new ones on Libera.Chat. We would like to remind you that any channel…

Got my first Covid-19 vaccine shot today, and apparently also a new wireless device on my home network:

May 22, 2021

I published the following diary on “‘Serverless’ Phishing Campaign“:

The Internet is full of code snippets and free resources that you can embed in your projects. SmtpJS is one of those small projects that are very interesting for developers but also bad guys. It’s the first time that I spot a phishing campaign that uses this piece of JavaScript code.

To launch a phishing campaign, most attackers deploy their phishing kits on servers (most of the time compromised). These kits contain the HTML code, images, CSS files, … but also scripts (often in PHP) to collect the information provided by the victim and store it into a flat file or send them to another service… [Read more]

The post [SANS ISC] “Serverless” Phishing Campaign appeared first on /dev/random.

May 21, 2021

If we regarded our brain as a piece of technology, any engineer would be impressed: consuming a mere 20 watts, our brain manages to perform countless varied and complex tasks, such as recognizing speech and images, navigating environments we have never been in before, learning new skills, and reasoning about abstract matters. It is no wonder, then, that our brain has long served as inspiration for giving computers 'intelligence'.

An important approach in machine learning is (artificial) neural networks. They mimic the workings of the brain, which forms a biological neural network: a tangle of countless connections between neurons (brain cells). An artificial neural network usually consists of several layers:

  • An input layer of neurons that represent the input of a problem, for example the pixels in a photo.

  • An output layer of neurons that represent the solution of the problem. These recognize, for example, that the photo shows a dog.

  • One or more hidden layers that perform computations. These recognize, for example, fur, size, number of legs, and so on.


You don't program a neural network by explicitly specifying how it should solve a problem; you 'train' it by giving it many examples of a problem. Through that training, the parameters of all the neurons in the neural network converge to the right values, so that it learns to perform the task.

Deep learning in particular has made waves in the machine learning world over the past decade. In deep learning, you use a neural network with a large number of layers between input and output. This large number of layers finally makes very complex tasks possible. A neural network like GPT-3 uses about a hundred layers. If you wanted to train it entirely yourself, you would be looking at several million euros in costs just to rent GPU computing power in the cloud. And that's not even counting the energy consumption and the associated CO₂ emissions.

While the neurons in our brain communicate with pulses, classical neural networks did not adopt this aspect, simply because discontinuous pulses are mathematically harder to handle than continuous signals. Still, throughout the history of AI there has also been an approach that models neural networks with pulses. This is called a spiking neural network. Because of their mathematical complexity, they never had their breakthrough.
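To make the idea concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the classic building block of spiking neural networks. The parameter values are arbitrary; real models are considerably more refined.

```python
def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron.

    The membrane potential leaks a fraction each time step, integrates
    the incoming signal, and emits a spike (1) when it crosses the
    threshold, after which it resets to zero.
    """
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + x          # leak, then integrate the input
        if v >= threshold:
            spikes.append(1)      # fire a discrete pulse
            v = 0.0               # reset the membrane potential
        else:
            spikes.append(0)
    return spikes
```

The discontinuous fire-and-reset step is exactly what makes these networks mathematically harder to train than classical networks with continuous activations.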

In an article about spiking neural networks for PC-Active, I recently described research by the Amsterdam-based Centrum Wiskunde & Informatica (CWI) and the Eindhoven research centre IMEC/Holst Centre into a new algorithm that is said to be a factor of a hundred more energy-efficient than the best current classical neural networks. For now, the technique is still limited to about a thousand neurons, but that already brings applications such as speech recognition, classification of electrocardiograms (ECGs), and gesture recognition within reach.

[Edited: The technique discussed in this diary is not mine and has been used without proper citation of the original author]

I published the following diary on “Locking Kernel32.dll As Anti-Debugging Technique“:

For bad guys, the implementation of techniques to prevent security analysts from performing their job is key! The idea is to make our life more difficult (read: "frustrating"). There are plenty of techniques that can be implemented, and it's an ever-ongoing process. Note that this topic is covered in the SANS FOR610 training.

An anti-debugging technique is based on the following steps… [Read more]

The post [SANS ISC] Locking Kernel32.dll As Anti-Debugging Technique appeared first on /dev/random.

May 20, 2021

When I ask people why they fell in love with Drupal, most often they talk about feeling empowered to build ambitious websites with little or no code. In fact, the journey of many Drupalists started with Drupal's low-code approach to site building.

With that in mind, I proposed a new Project Browser initiative in my DrupalCon North America keynote. A Project Browser makes it easy for site builders to find and install modules. You shouldn't need to use the command line!

Making module discovery and module installation easier is long overdue. It's time to kick off this initiative! I will host the first meeting on May 24th between 14:30 UTC and 15:15 UTC. We'll share a Zoom link on the Project Browser Slack channel before the meeting starts. Join our Slack channel and mark your calendars.

We'll start the meeting with high-level planning, and we need people with all kinds of skills. For example, we'll need help defining requirements, help designing and prototyping the user experience, and much more.

Eleven years ago I blogged about my Tide cutlery. Here's an update.

I have no idea why this fascinates me again; maybe because this was our cutlery at home in the seventies? I would still like to get the missing meat fork, small fork, and fish knife, so who wants to trade?

Here are two photos of what I have now. (The inlays of the meat fork, the smallest fork, and the fish knife are copyright S.R.)

The history of this cutlery: it could be obtained with premium coupons that came with Tide washing powder in the 1950s (and 1960s?). There is a Facebook group for Tide cutlery, but I am not on Facebook. Chances are they have more photos and trading opportunities there.


UPDATE 2021-05-20: Yay, 6 dessert forks

May 19, 2021

The Titanic was reputed to be unsinkable. It was made up of several watertight compartments and could stay afloat even if several of those compartments filled with water. If the gigantic ship had rammed straight into the iceberg, there would have been a great shock, one or more compartments breached, dozens of people injured by the impact, and a ship immobilized but still able to float. Unfortunately, a lookout spotted the iceberg. A little too late. In trying to avoid it, the ship grazed it and had its hull torn open along its whole length, opening leaks in each of the watertight compartments. The passengers felt nothing at the time, but the catastrophe remains emblematic more than a century later.

What would have happened if, on board, there had been a group of industrialists travelling in first class whose specialty was a hypothetical glue for repairing ships' hulls? What would those wealthy travellers have answered when a breathless captain in a dishevelled uniform came to them, begging them to hand over their product's formula to save the ship?

The climate crisis already gave us a hint, but the debate on opening the patents on the COVID vaccines gives us a resounding answer.

The rich industrialists would simply have rolled their eyes, railing against the idea that anyone could "plunder their intellectual property". And when the captain insisted that the ship was sinking, they would have sneered that only third class was taking on water. Second class at worst.

Because opening up the COVID vaccines is essential. Without that openness, and the possibility of manufacturing their own vaccines (which is incredibly simple with messenger RNA vaccines), the vast majority of the poorest countries will not see a drop of vaccine before 2023. That is a catastrophe for those populations which, even granting that they are only third class, would form a veritable breeding ground from which variants far more powerful, and insensitive to our current vaccines, could emerge. Not seeing that is literally believing that third class will sink, but that the first-class deck will miraculously sail on.

Didier Pittet, the man who gave the world the formula for hydroalcoholic gel, whose open source nature was an undeniable asset in the fight against this epidemic, explains it in his book "Vaincre les épidémies". During his travels, he discovered extremely ingenious installations for producing hydroalcoholic gel in regions suffering from a severe lack of infrastructure. Missing products were replaced with available equivalents while maintaining, or even improving, effectiveness. Because, contrary to the racist theories that percolate through our industrial colonialism, the fact that a region has a great infrastructure deficit does not mean its inhabitants have no brains. Despite a worldview founded on Tintin in the Congo, the locals are not complete cretins: they can perfectly well make vaccines, as long as the white bwana's patents don't stop them.

If it were only about the humanitarian aspect, the question of opening the COVID patents should not even arise. For that alone, anyone opposing the opening of patents in the current context is a dangerous, psychopathic fool.

But there is worse: patents are a vast global scam that has taken on incredible proportions.

I have already written about my own experience with patents: professional experience during which I was taught how to write a patent while being told point blank how immoral the system is and how to exploit it.

During his term, Member of the European Parliament Christian Angstrom demonstrated at length that the vast majority of the funds enabling the development of a new drug are public (90% to 99%). The great majority of the research and the necessary preliminary work is done in universities, by researchers paid with public money. The drug industry itself benefits from numerous subsidies and tax breaks.

In the end, only a sliver of the final cost comes from the firm itself, a firm that will then obtain a monopoly on that research for 20 years thanks to the patent. It is the traditional financial credo: "mutualize the risks, privatize the profits".

Let's not forget that, in its original spirit, a patent is a temporary monopoly (which is in fact what it was originally called) granted in exchange for making an invention public. That is why a patent explains the invention: the inventor gets 20 years to benefit from the monopoly, and commits to the invention becoming a public good afterwards.

That is obviously not to the industry's taste, and it has found a workaround: extending the duration of patents by modifying a product, or releasing a new one, just before the old patent expires. These modifications are most often cosmetic.

Why do you think vaccines are now combined into one single dose, despite the risk of increased side effects? Because it is a simple way to patent new packaging for proven vaccines that would otherwise literally cost nothing anymore. And if there is one thing the pharmaceutical industry wants to avoid, it is people being healthy on the cheap.

To sum up, the pharmaceutical industry literally steals public money to privatize lavish profits. And it cannot imagine calling those profits into question even when the survival of our society may be at stake. That it is the COVID vaccine makes it all the more ironic because, for 14 months, public money has flowed without restriction into every laboratory in the world. The pharmaceutical industry was paid to develop a product guaranteed to find 8 billion customers, and now claims the right to privatize 100% of the profits. In the case of the AstraZeneca vaccine, the irony is even more biting: it was designed from start to finish by a team of publicly funded scientists who wanted to make it open source. The Bill Gates Foundation, ideologically opposed to any idea of open source, managed to buy the formula from them. Not all scientists are Didier Pittet.

A Didier Pittet who says he is still regularly called "the man who made us lose billions" by representatives of a pharmaceutical industry that still hasn't digested the open-sourcing of hydroalcoholic gel. That says a lot about the sector's mentality. Any possibility of treating or protecting yourself at low cost is seen as "lost money". It is the typical thinking of a monopoly for which the very idea of competition is an intolerable aggression that its political friends must quickly stamp out.

One might wonder why the pharmaceutical industry does not open the patent on the COVID vaccine just to polish its image, as a fine public relations operation.

But there is a reason why the open-sourcing of the AstraZeneca vaccine had to be prevented at all costs, a reason why that patent cannot be opened, not even temporarily.

Because then the world would understand that it works. That, as the hydroalcoholic gel adventure demonstrated, it works damn well. It would set a precedent. Because if we do it for COVID, why not do it for AIDS medication? Why not do it for insulin, when diabetics in the United States are dying because they simply cannot afford to buy it? Why not do it for…

Can you imagine the precedent? A world where the results of public research are open source? Where regions, even the poorest ones, can develop health independence with short, local supply chains?

No, the orchestra must keep playing. Too bad for third class. Too bad for second class. Too bad for the socks of first class. The ship is unsinkable, isn't it?

Vaccines are one of the finest human inventions. Whatever the conspiracy theorists may say, vaccines are the primary cause of the increase in our life expectancy and of our modern comfort. I will get vaccinated against COVID at the first opportunity, out of concern for contributing to herd immunity (because a vaccine is an altruistic medicine: it only works if a majority of people use it). That will not stop me from mourning the fact that this magnificent progress is being held hostage to serve one of the greatest economic, ideological and financial scams of this century.

The anti-vaxxers are right: there is indeed a conspiracy destroying our health and our social fabric to maximize the enrichment of a minority of monopolies run by psychopaths, with crooked politicians serving them soup while wallowing in a mire of hypocritical immorality.

But the basis of the conspiracy is not the vaccines themselves; it is quite simply patents and industrial monopolies.

Photo by Ivan Diaz on Unsplash

Je suis @ploum, ingénieur écrivain. Abonnez-vous par mail ou RSS pour ne rater aucun billet (max 2 par semaine). Je suis convaincu que Printeurs, mon dernier roman de science-fiction vous passionnera. Commander et partager mes livres est le meilleur moyen de me soutenir et de m’aider à diffuser mes idées !

Ce texte est publié sous la licence CC-By BE.

May 18, 2021

I published the following diary on “From RunDLL32 to JavaScript then PowerShell“:

I spotted an interesting script on VT a few days ago and it deserves a quick diary because it uses a nice way to execute JavaScript on the targeted system. The technique used in this case is based on a very common LOLbin: RunDLL32.exe. The goal of the tool is, as its name says, to load a DLL and execute one of its exported functions:

C:\> rundll32.exe sample.dll,InvokedFunction()

Many Windows OS functions can be invoked through RunDLL32… [Read more]

The post [SANS ISC] From RunDLL32 to JavaScript then PowerShell appeared first on /dev/random.

May 16, 2021

If you only occasionally build ESP32 firmware with Espressif's IoT Development Framework (ESP-IDF), your build probably fails because the installation from last time is outdated. You then have to reinstall the newest version of the framework and its dependencies, and you probably still run into a dependency issue. If it finally works and you successfully build your firmware, the excitement makes up for the frustrating experience, until the next time you want to build some other ESP32 firmware and the process repeats.

But there's another way: use the IDF Docker image. It contains ESP-IDF, all the required Python packages, and all build tools such as CMake, make, ninja, and cross-compiler toolchains. In short: a ready-to-use IDF build system.

Building ESP32 firmware with CMake in the container is as easy as:

docker run --rm -v $PWD:/project -w /project espressif/idf build

This mounts the current directory on the host ($PWD) as the directory /project in the container and runs the command build on the code. After the build is finished, you can find your firmware in the current directory.


If you're building a project that needs a specific version of ESP-IDF, just use a tag such as espressif/idf:v4.2.1 instead of espressif/idf. You can find the available tags on Docker Hub.

The same works if the project is using make for its build:

docker run --rm -v $PWD:/project -w /project espressif/idf make defconfig all -j4

You can also use the container interactively.
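For interactive use, one way to do it (a sketch, assuming the same image and mount layout as in the examples above) is to start a shell inside the container:

```shell
# Open an interactive shell in the IDF container. The current project
# stays mounted at /project, so you can run idf.py commands by hand
# (e.g. idf.py menuconfig) and exit when you're done.
docker run --rm -v $PWD:/project -w /project -it espressif/idf
```

This is handy when a build needs configuration steps rather than a single one-shot command.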

May 15, 2021

Or the tribulations of a bibliophile author who wants to do local, neighbourhood commerce while paying in cryptocurrencies.

In this post, I tell you about my life as a bibliophile, I rant a little about the monopolies of the book world, I mourn the programmed disappearance of a local second-hand bookshop, I promote Alternalivre, a new platform for selling books that are poorly distributed or not distributed at all, and I tell you about Print@Home, a futuristic concept of the book "downloaded and printed at home". At the end of the post, you will have the opportunity to order books from my publisher for a third or half of the normal price, depending on the Bitcoin exchange rate. What does Bitcoin have to do with all this? Mystery!

We often hear that Amazon or Facebook are not monopolies, because we are not forced to use them. After all, anyone can order somewhere other than Amazon and delete their Facebook account.

Let's be clear: if we were forced to use Amazon or Facebook, they would no longer be monopolies, but dictatorships. A monopoly is not a company that is impossible to avoid; it is a company that is difficult to avoid. Why did I publish a post announcing my departure from LinkedIn with great fanfare? Because it was a difficult choice for me, a real professional risk. Why am I still on Facebook? Why do I still go through Amazon?

Quite simply because it is very difficult to avoid. Recently, wanting to avoid going through Amazon to order a particular product, I managed to find a different supplier. My order required creating yet another account through a buggy form that forced me to change my registration email address mid-registration (the first one contained a character this particular site did not tolerate), with the result that my account is now inaccessible. All my data is in yet another silo I will never use again, not to mention the unsolicited newsletter subscriptions. I finally received my parcel without going through Amazon, but at what cost!

Another example. Thanks to a reader's recommendation, I wanted to buy the book « Le Startupisme » by Antoine Gouritin. On the publisher's site, shipping costs came to €10. But they were free on Amazon. For a €20 book, you have to admit that paying €10 hurts. What would you have done in my place? And don't get me started on books in English, impossible to find anywhere, including on , and which I order… on (go figure!).

Amazon is therefore very hard to get around. That is why I appreciate it when sites acknowledge that I will not be using them every day and try to make purchasing as simple as possible, in particular by not requiring account creation (a feature my publisher is working on).

Because, from the very start of the project of publishing Printeurs, my publisher and I agreed to avoid Amazon as much as possible. But in book publishing, Amazon is not the only one abusing its position. An invisible actor controls the market between publishers and bookshops: the distributor.

My novel Printeurs has received good reviews and is starting to exist on Babelio, Senscritique and Goodreads.

I am extremely grateful to the readers who take the time to rate my books or post a review, even a brief one. It seems some readers discovered Printeurs thanks to you! I nevertheless have a moral conflict in recommending that you feed these proprietary platforms with monopolistic ambitions. That makes some reviews posted on personal blogs even more delightful (especially that one, thank you Albédo!).

Despite this favourable initial reception and good sales in Swiss bookshops, no Belgian or French distributor has so far been interested in distributing my publisher's catalogue. Bookshops, for their part, do not want to deal directly with publishers.

Worse: being in a distributor's catalogue does not always guarantee being findable in a bookshop. At least not near me.

In my town, a cheerful university city and major intellectual hub of the country, there are only two bookshops (!), both part of large chains (Fnac and Furet du Nord). Well, there is also my comic-book dealer, before whose shop window I prostrate myself every day, and two second-hand bookshops. Or rather, soon only one. The larger of the two (and the only one that also sells second-hand comics) is going to disappear: the university, through its property-management body, has given the manager notice. The manager pointed out to me that when renovating the Place des Wallons (where the bookshop is located), the workers installed parasol mounts in front of his shop. It therefore seems it has long been planned to replace the bookshop with a food business. A petition has been launched to save the bookshop.

But the manager no longer believes in it. He has started boxing up his stock, eyes full of tears, not yet knowing where to go or what to do, hoping to come back. Two bookshops and soon a single tiny second-hand bookshop for an entire university city. But several dozen shops selling overpriced rags sewn in basements by Asian children. Fortunately my comic-book temple remains, but I am starting to be wary of it: the salespeople now greet me by name with obsequiousness, roll out a red carpet when I arrive in the shop, and offer me drinks and petits fours while praising the latest releases and congratulating me on my choices. When a novice salesperson does not recognize me, another shows him my loyalty card on the screen, which triggers a mechanical hand gesture and a whistle. I am not quite sure how to interpret these signs…

But enough sentimental local digression; let us leave the sheep of the Esplanade (the local air-conditioned shopping centre that shears said sheep to replace their wool with the aforementioned rags) and return to our own.

Wanting to acquire the novel Ecce Homo by the author Ingrid Aubry, I discovered it was listed on the Furet du Nord website. So I went to the branch in my town and asked the bookseller on duty to order it. Despite her sincere eagerness, she never found the book in her databases. Already, the fact that she had to look in no fewer than three different databases (with very disparate interfaces) struck me as absurd. But the result was final: the book, although referenced on the bookshop's own website, could not be ordered (a book nonetheless distributed by the largest distributor in the French-speaking world, the quasi-monopoly Hachette).

Ingrid finally ended up sending me the book by post. Her husband Jean-François revealed to me that they had twice tried to create an Amazon store to sell her book online at a lower price (it is indeed available on Amazon, but with shipping costs of… €40!). Each time, their account was suspended. The reason? They were selling a book already listed on Amazon. Ingrid's book is therefore literally impossible to buy at a decent price!

Ingrid and her husband tackled the problem head-on and launched their own book-selling platform, a platform dedicated to books with little or poor distribution: Alternalivre.

I applaud this initiative, which sorely lacks visibility, being myself stuck between Fnac, Furet du Nord and Amazon to satisfy my compulsive bibliophilia (and I hate buying my books amid the latest televisions on sale, which rules out Fnac). My publisher hastened to make Printeurs and the whole Ludomire collection available on Alternalivre (which should reduce shipping costs for French and Belgian readers). You will also find my children's book there, « Les aventures d'Aristide, le lapin cosmonaute ». All while hoping to one day be available at Furet du Nord (because, in my experience, the booksellers there are friendly, competent and cultured) or even, supreme honour, at Slumberland (which also carries genre novels, but I am working on comic scripts just to get onto their shelves).

Writing a book, getting it published and convincing readers to buy it is therefore not everything. It still has to be possible for readers to acquire it. In Printeurs, I pushed the concept of 3D printing to the extreme, to the point of including living beings. In 2012, Jaron Lanier imagined local printing of smartphones and other gadgets in his book « Who Owns the Future? ». Could we imagine it for books, blurring ever further the line between the e-book and the paper book?

Yes, my publisher replied, putting down the manuscript of Printeurs. And we are going to invent it. It will be Print@home, a concept funded by the contributors to the Printeurs Ulule campaign.

So here is the first platform dedicated to books that can be printed at home. It may not (yet?) match professional printing, but the concept can open the way to a new form of book distribution.

And all of it pay-what-you-want, of course! The printable books are all published under a Creative Commons license.

To fund this platform, my publisher launched a crowdfunding campaign that is original to say the least, because it is totally decentralized. Instead of running on the gigantic server of a quasi-monopolistic player (like Ulule), the campaign runs on a Raspberry Pi in his office. And instead of paying with centralized currencies, payments are made in bitcoins.

Where it gets interesting for you, dear readers, is that the prices in bitcoin are calculated on the assumption that one bitcoin is worth €100,000. This means that if bitcoin is lower and worth, say, €40,000, you pay only 40% of the real price of the books you order. And that includes paper books!

If you have a few cents' worth of bitcoin and were hesitating to buy a paper copy of Printeurs, copies to give away, or the complete Ludomire collection, now is the time!

All of this smells of tinkering and experimentation. There will be mistakes and lessons learned. That typically human imprecision of which the sophisticated algorithms of centralized monopolies unconsciously deprive us. Happy exploring!

Photo by César Viteri on Unsplash

I am @ploum, engineer and writer. Subscribe by email or RSS so you don't miss a single post (at most 2 per week). I am convinced that Printeurs, my latest science-fiction novel, will thrill you. Ordering and sharing my books is the best way to support me and help me spread my ideas!

This text is published under the CC-By BE license.

May 14, 2021

I published the following diary on “‘Open’ Access to Industrial Systems Interface is Also Far From Zero“:

Jan’s last diary about the recent attack against the US pipeline was in perfect timing with the quick research I had been preparing for a few weeks. If core components of industrial systems are less exposed in the wild, as Jan said, there is another issue with such infrastructures: remote access tools. Today, buildings, factories and farms must be controlled remotely or are sometimes managed by third parties. While Microsoft RDP is common on many networks (and is often the weakest link in a classic attack like ransomware), there is another protocol that is heavily used to remotely control industrial systems: VNC (“Virtual Network Computing”). This protocol works with many different operating systems (clients and servers), and is simple and efficient. For many companies developing industrial systems, it is a good candidate to offer remote access… [Read more]

The post [SANS ISC] “Open” Access to Industrial Systems Interface is Also Far From Zero appeared first on /dev/random.

Imagine you want backups of your data (Somehow not everybody wants this, which I don't understand, but there are many things in this world that I don't understand).

Now imagine you want your backups to be encrypted (Somehow not everybody wants this, which I don't understand, but there are many things in this world that I don't understand).

And imagine you want these backups to be automated. (...)

Now imagine you want these backups in several distinct locations, so they are not lost if your house burns down or if a burglar steals them. (...)

And imagine you want redundancy in case one or more of these remote locations are unavailable.

And of course it should be simple, because nobody wants complex solutions.

What is the best way to have simple personal redundant automated distributed encrypted backups?

A technical solution:

1. Get a Raspberry Pi and attach a USB stick.

2. Rent five VPSes spread across five countries.

3. Set up an iSCSI target on each of the five VPSes.

4. Configure the local Raspberry Pi as the iSCSI initiator.

5. Create an mdadm RAID6 across these five drives and format it with LUKS?

6. Mirror this device on the USB drive attached to the Pi (so there is a local encrypted copy of the remote distributed encrypted copy).

7. Set up a crontab (on the Pi) with rsync to back up certain directories on my personal laptop. Any file copied to that directory will then be encrypted, backed up locally, and redundantly distributed remotely.
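On the Pi, steps 4 and 5 might look roughly like this (a sketch only: the IQNs, hostnames, and device names are placeholders, open-iscsi's iscsiadm is assumed, and everything here needs root):

```shell
# Step 4: discover and log in to each remote iSCSI target (repeat
# for all five VPSes; hostnames and IQNs are hypothetical).
iscsiadm -m discovery -t sendtargets -p vps1.example.com
iscsiadm -m node -T iqn.2021-05.com.example:backup1 \
         -p vps1.example.com --login

# Step 5: assemble the five iSCSI disks into a RAID6 array...
mdadm --create /dev/md0 --level=6 --raid-devices=5 \
      /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# ...then encrypt the array with LUKS and put a filesystem on it.
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 vpsmd0
mkfs.ext4 /dev/mapper/vpsmd0
```

The device names end up matching the output shown further down (sdc–sdg, /dev/mapper/vpsmd0), but on a real system they depend on login order.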

The only manual thing in this setup is entering the cryptfs key when the Pi needs a reboot (which happens less than once each month but often enough to remember the key).

(I know I can automate the cryptfs key but I refuse. That key is in my head, and nowhere else.)

Note: Maybe the mirroring should happen before the encryption?? Let me sleep on this.

Cost: I think I can get 15 GB per VPS for about 15 euro/month in total (both OVH and Hetzner do this for 3 euro/month each). So the full backup device will be 45 GB (RAID6 of 5x15 GB), which should be adequate for personal documents.
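The 45 GB figure follows from RAID6 dedicating two disks' worth of space to parity, leaving (N − 2) × disk size of usable capacity:

```shell
# RAID6 usable capacity: (number of disks - 2) * disk size.
disks=5
size_gb=15
echo "$(( (disks - 2) * size_gb )) GB usable"   # prints "45 GB usable"
```

With five 15 GB disks, any two can fail without data loss.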

I should try this...


The initiator on the Pi is connected to the five targets. I wonder if this will work...

root@elvire~# ls -l /dev/disk/by-id/ | grep wwn | cut -b1-55,75-
lrwxrwxrwx 1 root root  9 May  2 18:08 wwn-0x60014052b9750 -> ../../sde
lrwxrwxrwx 1 root root  9 May  2 18:08 wwn-0x6001405557bcf -> ../../sdf
lrwxrwxrwx 1 root root  9 May  2 18:08 wwn-0x60014056a1559 -> ../../sdg
lrwxrwxrwx 1 root root  9 May  2 18:08 wwn-0x60014058ec9a4 -> ../../sdd
lrwxrwxrwx 1 root root  9 May  2 18:09 wwn-0x600140598f532 -> ../../sdc
root@elvire~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdc[4] sdd[3] sdg[2] sdf[1] sde[0]
      44012544 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/5] [UUUUU]
      [>....................]  resync =  1.7% (260284/14670848) finish=434.6min speed=552K/sec
unused devices: <none>

UPDATE 14-MAY-2021: It seems to work.

root@elvire~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdc[4] sdd[3] sdg[2] sdf[1] sde[0]
      44012544 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/5] [UUUUU]

unused devices: <none>
root@elvire~# mount | grep VPS
/dev/mapper/vpsmd0 on /srv/VPS_mirror type ext4 (rw,relatime,stripe=6144)
root@elvire~# crontab -l | tail -1
0 0 * * * rsync -a /srv/VPS_mirror/ /srv/cova/VPS_mirror/
root@elvire~# ls -l /srv/VPS_mirror/
total 24
-rw-r--r-- 1 root root 0 May 11 13:36 VPS_mirror
drwxr-xr-x 2 root root 4096 May 5 11:27 dotfiles
drwxr-xr-x 2 root root 4096 May 5 11:27 etcfiles
drwx------ 2 root root 16384 May 3 17:42 lost+found

May 09, 2021

boot failed

One of the nice new features of FreeBSD 13 is OpenZFS 2.0. OpenZFS 2.0 comes with zstd compression support. Zstd compression can have compression ratios similar to gzip with less CPU usage.

For my backups, I first copy the most important data - /etc, /home, … - locally to a ZFS dataset. This data then gets synced to a backup server. This local ZFS dataset was compressed with gzip. After upgrading the zroot pool and setting zstd as the compression method, FreeBSD failed to boot with the error message:

ZFS: unsupported feature: org.freebsd:zstd
ZFS: pool zroot is not supported
gptzfsboot: failed to mount default pool zroot

As this might help people with the same issue, I decided to create a blog post about it.
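For context, the change that got me into this situation can be reproduced with commands along these lines (the dataset name zroot/backup is a placeholder for the local backup dataset):

```shell
# Enable the new pool features, including org.freebsd:zstd:
zpool upgrade zroot
# Switch the local backup dataset to zstd compression; only data
# written after this point is compressed with zstd:
zfs set compression=zstd zroot/backup
```

The boot failure happens because the pre-13 gptzfsboot bootloader cannot read a pool with the zstd feature enabled, hence the fix below.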

Update the boot loader

We need to update the boot loader with the newer version that has zstd compression support.

live CD

Boot from cdrom

Boot your system from the FreeBSD 13 installation CD/DVD or USB stick and choose <Live CD>. Log in as root; the root account doesn’t have a password on the “Live CD”.

Enable ssh

I prefer to update the boot loader over ssh.

I followed this blog post to enable sshd on the live cd:

# ifconfig
# ifconfig <net_interface> up
# mkdir /tmp/etc
# mount_unionfs /tmp/etc/ /etc
# passwd root
# cd /etc/ssh/
# vi sshd_config
# /etc/rc.d/sshd onestart

Log on to the system remotely.

$ ssh

Update the bootloader

The commands to install the bootloader come from the FreeBSD wiki.

The wiki page above describes how to install FreeBSD on a ZFS root pool. This was very useful before the FreeBSD installer had native ZFS support.

List your partitions to get your boot device name and slice number. The example below is a FreeBSD virtual machine; the device name is vtbd0 and the slice number is 1. On a physical FreeBSD system, the device name is probably ada0.

root@:~ # gpart show
=>       40  419430320  vtbd0  GPT  (200G)
         40       1024      1  freebsd-boot  (512K)
       1064        984         - free -  (492K)
       2048    8388608      2  freebsd-swap  (4.0G)
    8390656  411037696      3  freebsd-zfs  (196G)
  419428352       2008         - free -  (1.0M)

=>     33  2335913  cd0  MBR  (4.5G)
       33  2335913       - free -  (4.5G)

=>     33  2335913  iso9660/13_0_RELEASE_AMD64_DVD  MBR  (4.5G)
       33  2335913                                  - free -  (4.5G)

root@:~ # 

I use a legacy BIOS on my system. On a system with a legacy BIOS, you can use the following command to update the bootloader.

root@:~ # gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 vtbd0
partcode written to vtbd0p1
bootcode written to vtbd0
root@:~ # 

To update the bootloader on a UEFI system, the following should do the trick:

# gpart bootcode -p /boot/boot1.efi -i1 ada0

Reboot your FreeBSD 13 system and enjoy zstd compression.

root@:~ # sync
root@:~ # reboot

Have fun!


May 06, 2021

I published the following diary on “Alternative Ways To Perform Basic Tasks“:

I like to spot techniques used by malware developers to perform basic tasks. We know the LOLBins that are pre-installed tools used to perform malicious activities. Many LOLBins are used, for example, to download some content from the Internet. Some tools are so powerful that they can also be used to perform unexpected tasks. I found an interesting blog article describing how to use curl to copy files… [Read more]

The post [SANS ISC] Alternative Ways To Perform Basic Tasks appeared first on /dev/random.

May 03, 2021

A bittersweet (but in the way good dark chocolate is) story of a young man who only wants to be polite and invisible. About trying to be merely "a well-functioning cog", and failing at it. About being able to sing beautifully, but not wanting to. About love, loss and fleeing. About Norway, the Faroe Islands and a tiny bit about the Caribbean. And about The Cardigans.

Recommended; 4.5 instead of 5 stars, mainly because I liked "Max, Micha & het Tet-offensief" even better.

April 30, 2021

Il est assez rare qu’un livre bouleverse votre représentation du monde. Ou mieux, qu’il éclaire votre compréhension dudit monde en reliant sous un modèle unique parfaitement théorisé toute une série d’intuitions que vous aviez dans des domaines forts différents.

C’est exactement l’effet qu’a eu sur moi le livre Monopolized, de David Dayen, malheureusement pas encore traduit en français et que je n’ai pas réussi à obtenir à un prix décent en Europe (je me suis rabattu sur la version électronique pirate, la faute aux monopoles du livre).

L’idée de David Dayen est de nous démontrer que la puissance économique (et donc politique) est de plus en plus concentrée dans un nombre de plus en plus restreint de mains au travers des monopoles et autres oligopoles, de nous expliquer pourquoi, historiquement et économiquement il en est ainsi, pourquoi c’est une mauvaise chose pour tous ceux qui ne sont pas à la tête d’un monopole et en quoi c’est une tendance « mécanique » : la monopolisation dans un domaine entraine l’apparition de monopoles dans les domaines connexes, ce qui fait boule de neige. Pour finir, David Dayen émet la thèse que seule la régulation politique peut enrayer les abus des monopoles (ce qu’elle faisait d’ailleurs à peu près bien jusque dans les années huitante).

Ceux d’entre vous qui suivent ce blog connaissent mon intérêt pour les problématiques liées aux monopoles de haute technologie (Google, Facebook, Microsoft, etc.). Ma fascination pour Monopolized vient du fait que j’ai compris que mon combat se dirigeait contre une simple conséquence anecdotique d’un paradigme beaucoup plus large : la monopolisation.

D’ailleurs, entre nous, pourquoi êtes-vous si nombreux à avoir l’intuition que « la financiarisation » de l’économie est une mauvaise chose alors qu’en soit, la finance voire même le trading ne sont que des échanges économiques entre adultes consentants ? À cause de la monopolisation de cette finance.

Pourquoi y’a-t-il une telle défiance envers l’industrie pharmaceutique entrainant des comportements absurdes comme le refus de la vaccination ? À cause de la monopolisation.

Pourquoi, quand je m’arrête dans une supérette ou une pompe à essence pour acheter un en-cas n’ai-je le choix qu’entre des dizaines de variations du même mauvais chocolat enrobé de mauvais sucre ? La monopolisation.

La monopolisation jusque dans l’art. La planète écoute désormais une vingtaine de musiciens surpayés alors que des millions d’autres tout aussi talentueux ne gagnent pas un sous, tout bénéfice pour les producteurs.

La tentation du monopole

De tout temps, le monopole s’est imposé comme le meilleur moyen de générer des fortunes pharaoniques. Lorsque vous disposez d’un monopole pour un produit quelconque, vous bénéficiez d’une rente immuable tant que ce produit sera consommé. Et comment s’assurer que le produit restera consommé ? Tout simplement en rachetant les jeunes entreprises qui développent des alternatives ou, mieux, qui pourraient être en mesure de le faire.

Un monopole peut augmenter les prix d’un produit à volonté pour maximiser ses rentes. Mais ce serait maladroit, car cela augmenterait d’autant les incitants économiques pour créer de la compétition. Il est donc préférable pour un monopole de garder le prix le plus bas possible pour empêcher toute compétition. Comment faire de la concurrence à Google ou Facebook alors que, pour l’utilisateur final, le produit semble gratuit ?

Au lieu d’augmenter ses tarifs, un monopole va chercher à diminuer ses coûts. Premièrement en exploitant ses fournisseurs qui, généralement, n’ont pas le choix, car pas d’autres clients potentiels. C’est le monopsone, l’inverse du monopole : un marché avec un seul acheteur et beaucoup de vendeurs. Grâce à cet état de fait, le monopole peut augmenter ses marges tout en gardant les mains propres. Le sale travail d’exploitation des travailleurs est transféré à des fournisseurs voire aux travailleurs eux-mêmes, considérés comme indépendants. C’est le phénomène de « chickenization » bien connu aux États-Unis où les éleveurs de poulets sont obligés de suivre des règles très strictes d’élevage, d’acheter leurs graines et d’utiliser le matériel fourni par… leur seul et unique acheteur qui peut fixer le prix d’achat du poulet. Les éleveurs de poulets sont, pour la plupart, endettés auprès de leur propre client qui peut refuser d’acheter les poulets et les ruiner complètement, mais qui se garde bien de le faire, leur laissant juste de quoi avoir l’espoir d’un jour en sortir. Dans « Planètes à gogos » et sa suite, Frederik Pohl et Cyril Kornbluth nous mettaient en garde contre ce genre d’abus à travers une superbe scène où le personnage principal, ex-publicitaire à succès, se retrouve à travailler sur Vénus pour un salaire qui ne lui permet juste pas de payer son logement et sa nourriture fournie par son employeur monopolistique.

Enfin, le dernier facteur permettant à un monopole de faire du profit, c’est de réduire toute innovation voire même d’activement dégrader la qualité de ses produits. Un phénomène particulièrement bien connu des habitants des zones rurales aux États-Unis où la connexion Internet est de très mauvaise qualité et très chère. Preuve s’il en est qu’il s’agit d’une réelle volonté, des villes ont décidé de mettre en place des programmes municipaux d’installation de fibre optique. Il en résulte… des attaques en justice de la part des fournisseurs d’accès Internet traditionnel pour « concurrence déloyale ».

La morbidité des monopoles

Depuis des siècles, la nocivité des monopoles est bien connue et c’est même l’un des rôles premiers des états, quels que soient la tendance politique : casser les monopoles (les fameuses lois antitrust), mettre hors-la-loi les accords entre entreprises pour perturber un marché ou, si nécessaire, mettre le monopole sous la coupe de l’état, le rendre public. Parfois, l’état peut accorder un monopole temporaire et pour un domaine très restreint à un acteur particulier. Cela pouvait être une forme de récompense, une manière de donner du pouvoir à un vassal ou à l’encourager. Les brevets et le copyright sont des monopoles temporaires de ce type.

Mais, en 1980, Robert Bork, conseiller du président Reagan, va émettre l’idée que les monopoles sont, tout compte fait, une bonne chose sauf s’ils font monter les prix. À partir de cet instant, l’idée va faire son chemin parmi les gens de pouvoir qui réalisent qu’ils sont des bénéficiaires des fameux monopoles. Mais comme je l’ai expliqué ci-dessus, un monopole résulte rarement en une augmentation franche et directe du prix. Pire, il est impossible de prévoir. En conséquence de quoi, les administrations américaines vont devenir de plus en plus souples avec les fusions et les acquisitions.

Si IBM et AT&T sont cassés en plein élan dans les années 80, si Microsoft doit mollement se défendre dans les années 90, Google et Facebook auront un boulevard à partir des années 2000, boulevard ouvert par le fait que les acteurs du passé ont encore peur des lois antitrust et que les acteurs du futur ne peuvent plus émerger face à la toute-puissance de ce qu’on appelle désormais les GAFAM, ces entreprises qui ont saisi la fenêtre d’opportunité parfaite. Une dominance entérinée de manière officielle quand, après les attentats du 11 septembre 2001, l’administration américaine stoppe toute procédure visant à interdire à Google d’exploiter les données de ses utilisateurs, procédure annulée en échange d’une promesse, tenue, que Google aidera désormais la défense à détecter les terroristes grâce aux données susnommées (anecdote racontée dans The Age of Surveillance Capitalism, de Shoshana Zuboff).

Fusion, acquisition

Le laxisme face aux monopoles donne le signal d’une course à l’ultra-monopolisation. Pour survivre dans une économie de mastodontes, il n’est d’autre choix que de devenir un mastodonte soi-même. En fusionnant ou en rachetant de plus petits concurrents, on détruit la compétition et on diminue les coûts de production, augmentant de ce fait les bénéfices et construisant autour de son business ce que Warren Buffet appelle une « douve protectrice » qui empêche toute concurrence. Warren Buffet n’a jamais fait un mystère que sa stratégie d’investissement est justement de favoriser les monopoles. Mieux : il en a fait une idéologie positive. Pour devenir riche, à défaut de construire un monopole à partir de rien (ce que bien peu pourront faire après Mark Zuckerberg et Jeff Bezos), investissez dans ce qui pourrait devenir un monopole !

Il faut dire que le business des fusions/acquisitions est particulièrement juteux. Les transactions se chiffrent rapidement en milliards et les cabinets de consultance qui préparent ces fusions sont payés au prorata, en sus des frais administratifs.

Alors jeune ingénieur en passe d’être diplômé, j’ai participé à une soirée de recrutement d’un de ces prestigieux cabinets (un des « Big Three »). Sur la scène, une ingénieure de quelques années mon aînée, décrivait le cas sur lequel elle avait travaillé, sans donner les noms. Les chiffres s’alignaient explicitement avec, dans la colonne « bénéfices », le nombre d’employés que la fusion permettrait de licencier avec peu ou prou d’indemnités, le nombre de sites à fermer, les opportunités de délocalisation pour échapper à certaines régulations financières ou écologiques.

I raised my hand and asked, naively, about the ethical aspects. The speaker confidently replied that ethics were very important, that there was a charter. I asked for a concrete example of how the ethics charter was applied to the project she had just described. She answered that, for instance, the charter put the client's interest above everything else, which meant respecting confidentiality and forbidding any employee of the firm from being in contact with the firm's employees representing the other side of the deal.

I was surprised by such naivety and, above all, by the non-answer to my question. After the talk, I went to find her during the traditional cocktail reception. Glass in hand, I pressed the point. She did not understand what I was talking about. I spelled out what I meant by ethics: the impact of this merger on the workers, on economic conditions, on the overall ecological picture. You know, ethics!

The good engineer, who had been introduced to us as having graduated at the very top of her class (the firm recruited only among the best grades and PhDs, so I never stood a chance anyway), went white. She looked at me open-mouthed and finally stammered that she had never thought about that.

It must be admitted that, faced with such a jackpot, it is tempting to see nothing but columns of figures. In theory, firms specializing in mergers and acquisitions are supposed to advise against mergers that are not genuinely worthwhile. But no merger, no percentage. So no firm will ever advise against such a deal. It is also particularly rewarding for the individuals most involved. Wikipedia recounts that, between 2009 and 2013, a young investment banker at Rothschild earned more than two million euros working on controversial mergers and buyouts. To be fair, according to his superiors he was extremely gifted at the job and could have become one of the best in France. He chose another path instead, making good use of that milieu's powerful connections. His name? Emmanuel Macron.

The quest for returns and the metamorphosis of entrepreneurship

Historically, an entrepreneur is someone who sets out to build a business. Rather than working for a boss, the entrepreneur works for customers. Successful entrepreneurs could hope to make a very good living, since a flourishing company can afford to pay its founder a very high salary. Still, it remained a salary tied to work. For investors, a company could also pay dividends.

Ironically, however, the quest for high returns has caused dividends to collapse. Why settle for a few percent a year on a sum that is locked up and therefore completely illiquid, unable to chase other opportunities? Most companies today pay little or no dividends anyway. Buy €1,000 of shares and, at the end of the year, you will be lucky to collect more than €10 in dividends.

For an investor betting on a young company, there are only two ways to turn a profit and recover the stake (what is called an "exit"): either the company goes public, which is extremely rare and takes a long time, or, and this is the preferred route, the company gets bought.

It is also a windfall for the founders, who, instead of working their whole lives on a project, now hope to pocket a fortune after only a few years (and a lot of luck). I have seen and mentored enough startups and fundraising rounds in my professional life to understand that the goal of a startup, nowadays, is no longer to build a product but to be acquired. Not to sell, but to be sold. The possible exit scenarios are discussed before the first line of code is written or the first customer signed. In this way, all entrepreneurial energy is channelled toward one single objective: making the giants grow.

These deals are made easier by the fact that investors, the famous venture capitalists, generally have close ties to the shareholders of the very giants doing the buying. In some cases, they are quite simply the same people. Put simply: if I sit on Facebook's board, I give a million to some young entrepreneurs while advising them on the best way to build a product Facebook will want to buy, then I arrange for said Facebook to acquire the company at a price that values my stake at 10 million. A simple bit of influence peddling that nets me 9 million. If the startup never shipped a product, no matter: we call that an acqui-hire (you buy a team and its expertise, and you kill the product).

It is also a win for Facebook, which nips any competition in the bud and grows its headcount for a pittance. It may even optimize some of its taxes along the way.

The process is so effective that it has been industrialized in the form of funds. Instead of putting 1 million into a single young startup, investors create a fund so as to put 100 million into 100 startups. The 100 million is supplied by wealthy people outside these circles, who are in turn taxed with management fees and a share of the profits (typically, 2 or 3% of the capital per year plus 20 to 30% of the profits go to the fund manager. Which is still a good deal: if a manager turns your million into 10 million, you can give him 3 million and still have gained 6 million. Quite a sum!).
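The fee arithmetic above can be sketched in a few lines. This is a deliberately simplified model of the "2 or 3% per year plus 20 to 30% of profits" structure described in the text; the function name and the exact rates are illustrative assumptions, not any real fund's terms.

```python
# Minimal sketch of a "management fee + carried interest" fee model,
# using the illustrative figures from the text (1M turned into 10M).

def fund_fees(capital, exit_value, years, mgmt_rate=0.02, carry_rate=0.30):
    """Return (total_fees, investor_net_gain) for a simple fee model:
    a yearly management fee on committed capital, plus a share of profits."""
    mgmt_fees = mgmt_rate * capital * years          # e.g. 2% per year
    profit = exit_value - capital
    carry = carry_rate * profit if profit > 0 else 0.0  # e.g. 30% of profits
    fees = mgmt_fees + carry
    net_gain = exit_value - capital - fees
    return fees, net_gain

fees, gain = fund_fees(capital=1_000_000, exit_value=10_000_000, years=10)
print(round(fees), round(gain))  # roughly 2.9M in fees, ~6.1M net gain
```

With these assumed rates, the manager's take comes to about 3 million and the investor still nets about 6 million, matching the rough figures in the text.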

Private equity funds work on the same principle. The managers invest in various companies over 2 or 3 years, then give themselves 6 or 7 years to pull off the exits. The money is locked up for 10 years, but with the promise of being multiplied by 5 over that period (almost 20% a year!).
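As a quick sanity check on that figure (the text rounds up; the exact compounded rate implied by "5x over 10 years" is a bit lower):

```python
# Annualized (compounded) return implied by "multiplied by 5 in 10 years".
multiple, years = 5, 10
annual = multiple ** (1 / years) - 1
print(f"{annual:.1%}")  # about 17.5% compounded per year
```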

How do you guarantee the exits? First, by lobbying the sector's giants likely to buy the small companies. As a last resort, the fund manager can always create a new fund to buy up the first fund's unsold leftovers. That operation turns the first fund into a genuine success, cementing the manager's reputation and letting him raise even more money for the new fund.

The paradox of choice

Yet this concentration is rarely visible when we do our shopping. And for good reason: the monopolies are no fools and offer "variety to satisfy every consumer". Whether you buy M&Ms, Maltesers, a Mars, a Milky Way, a Snickers, a Twix, a Bounty, a Balisto or plenty of others, only the wrapper changes. They are the same products made in the same factories.

Uncle Ben's rice, Ebly wheat, Miracoli or Suzi Wan pasta? Same thing.

And for pets? Pedigree, Cesar, Whiskas, Royal Canin, Sheba, Kitekat, Canigou, Frolic? Same thing.

Walk down the chewing-gum aisle: every brand is supplied by the same company.

And I did not pick these examples at random. The supplier in question is the same for every brand I just mentioned: Mars.

Go to your supermarket and remove the Mars, Nestlé and Unilever products. Not much will be left besides a few Kraft, Danone or Pepsico items. Your organic shops are no exception. Some organic brands belong to the same big groups; others are in the middle of consolidating, because that market is still young.

Food is a striking example, but scratch the surface and it is the same in every sector: cars, hotels, clothing, travel, natural and organic food supplements… Thanks to "alliances", there are in reality only a handful of airlines left in Europe.

Fighting monopolies

Monopolies, by their very essence, are hard to avoid. We consume monopolies, we work for a monopoly or its subcontractors, reinforcing their power every day.

Intuitively, we sense the danger. In a previous post, I wrote about the intuition behind conspiracy theories. Apply the "monopoly" filter to those theories and the revelation is striking.

The pharmaceutical industry's monopoly leads to serious problems (lobbying against open-sourcing the Covid vaccine, shipping vaccines mixed in multi-dose vials to cut costs even at the price of lower efficacy and more side effects, price hikes and lobbying for absurd patents), which breed distrust of the very principle of a vaccine, especially one developed in a single year when the companies had always claimed it took many years (so as to extend patent lifetimes and create shortages in the market).

The web monopolies' total control over our data breeds distrust of the radio waves that carry said data, and even, in a delicious fusion with the previous monopoly, the fear that vaccines contain 5G chips to spy on us (which stops no one from installing spies like Alexa or Google Home in their own homes).

The deep feeling of growing inequality, of harmful financialization, of the shameless exploitation of the planet and the humans on it: all of this is created or exacerbated by the rise of monopolies, which do not hesitate to buy flourishing companies and then push them into bankruptcy in order to liquidate all their assets (buildings, machines, inventory). A technique that eliminates competition while turning a profit, at the cost of local shops disappearing from the most rural areas (not to mention the economic disaster of sudden mass job losses in those same regions).

Fortunately, awareness is growing. More and more researchers are looking into the subject. A consensus seems to be forming: we need genuine political will to dismantle the monopolies. A difficult ask at a time when politicians tend instead to prostrate themselves before big bosses in exchange for the promise of a few jobs and, in some cases, of a board seat once the hour of political retirement has struck. A few years ago, a CEO was proud to pose for a photo shaking hands with a head of state; today it is quite the reverse. The pride shines in the eyes of the heads of state and ministers.

While Europe strives at all costs to imitate its American big brother, the Chinese seem to have understood the problem well. A giant like Alibaba remains under the intimidating control of the state, which prevents it, when necessary, from growing too large. Jack Ma's disappearance for several months made it clear that in China, being a billionaire is not enough to be untouchable. Which does not make the Chinese model desirable either…

Another consensus is also emerging: the ideology promoted by Robert Bork under Reagan is extremely harmful to the planet, to the economy and to human beings. Even to the richest, caught in a frantic race for growth out of fear of being slightly less rich tomorrow, who know deep down that it cannot last forever. The ideology is also toxic for every proponent of a liberal market economy: monopolies literally destroy the market economy! Reaganite capitalism has brought Americans exactly what they feared from communism: shortages and shoddy quality, supplied by monopolies exploiting a workforce struggling to survive.

Before fighting, before even forming opinions on subjects as varied as online privacy, finance, politics or junk food, it is important to understand what we are talking about. In that respect, Monopolized by David Dayen is an edifying read. Certainly too centred on the United States (which nonetheless rubs off on Europe), written "American-style" with plenty of anecdotes and some questionable generalizations (the chapter on private equity, for example), the book remains a thoroughly documented and argued survey, packed with references and bibliographical pointers.

It is also interesting to see how our view of politics has been reshaped, with the proponents of private monopolies on the right and the proponents of state-owned monopolies on the left. An ambiguity that Macron, drawing on his experience, has played perfectly by offering a single monopolistic party whose only adversary is absurd populism.

Whenever you witness an injustice, ask yourself: is this not a monopoly at work? And what if the future lay in the pure and simple disintegration of monopolies? From the smallest and most ephemeral, such as patents and copyright turned into weapons of mass censorship, to the well-known giants.

Photo by Joshua Hoehne on Unsplash

I am @ploum, an engineer and writer. Subscribe by email or RSS so you never miss a post (two per week at most). I am convinced that Printeurs, my latest science-fiction novel, will captivate you. Ordering and sharing my books is the best way to support me and help spread my ideas!

This text is published under the CC-By BE licence.

April 29, 2021

If the NAS or server in your attic has crashed, you probably want to avoid having to fix the problem on site. With a KVM-over-IP system you can intervene remotely over the network. With Pi-KVM you can build such a system yourself with a Raspberry Pi and a few cheap components.

The Pi-KVM GitHub page explains which components you need to build your own KVM:

  • Raspberry Pi 4

  • an HDMI-to-CSI-2 adapter board or an HDMI-to-USB dongle

  • a USB splitter, so the Raspberry Pi 4's USB-C port can be used both for power and for USB OTG

The Raspberry Pi 4 then gets access to the HDMI output of your NAS or server and emulates a keyboard, mouse and storage. The open source Pi-KVM software exposes all of this in a web interface, so you can work on your NAS or server through your web browser as if you were sitting right next to it:


For PCM I wrote the article Pi-KVM: Nas benaderen via Raspberry Pi op afstand (accessing your NAS remotely with a Raspberry Pi). In it I explain how to make your own USB splitter from two USB cables, how to install and configure Pi-KVM, and how to use it to mount an ISO image and install it on your NAS or server.

The developers have also designed their own HAT that you can mount on the Raspberry Pi 4. Sales will start soon.
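Under the hood, the keyboard/mouse/storage emulation relies on the Pi 4's USB "gadget" (OTG) mode. As a rough illustration of what that involves (the Pi-KVM install scripts configure this for you; treat these fragments as an assumed sketch, not Pi-KVM's exact configuration):

```shell
# /boot/config.txt — put the Pi 4's USB-C controller into device (OTG) mode
dtoverlay=dwc2

# /etc/modules — load the USB gadget framework at boot
libcomposite
```

With the controller in device mode, software can present the Pi to the attached server as a composite USB device (keyboard, mouse, mass storage) over the same USB-C cable that powers the Pi, which is why the USB splitter is needed.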

I published the following diary on “From Python to .Net“:

The Microsoft operating system provides the .Net framework to developers. It allows them to fully interact with the OS and write powerful applications… but also malicious ones. In a previous diary, I talked about a malicious Python script that interacted with the OS using the ctypes library. Yesterday I found another Python script that interacts with the .Net framework to perform low-level actions… [Read more]

The post [SANS ISC] From Python to .Net appeared first on /dev/random.

April 23, 2021

I published the following diary on “Malicious PowerPoint Add-On: ‘Small Is Beautiful‘”:

Yesterday I spotted a DHL-branded phishing campaign that used a PowerPoint file to compromise the victim. The malicious attachment is a PowerPoint add-in. This technique is not new; I already analyzed such a sample in a previous diary. The file is named “dhl-shipment-notification-6207428452.ppt” (SHA256: 934df0be5a13def81901b075f07f3d1f141056a406204d53f2f72ae53f583341) and has a VT score of 18/60… [Read more]

The post [SANS ISC] Malicious PowerPoint Add-On: “Small Is Beautiful” appeared first on /dev/random.

April 22, 2021

Last week, Drupalists around the world gathered virtually for DrupalCon North America 2021.

In good tradition, I delivered my State of Drupal keynote. You can watch the video of my keynote, download my slides (244 MB), or read the brief summary below.

I gave a Drupal 9 and Drupal 10 update, talked about going back to our site builder roots, and discussed the need to improve Drupal's contributor experience.

Drupal 9 update

People are adopting Drupal 9 at a record pace. We've gone from 0 to 60,000 websites in only one month. In contrast, it took us seven months to reach the same milestone with Drupal 7, and three months for Drupal 8.

A chart that shows that Drupal 9 adoption is much faster than Drupal 7's and Drupal 8's
With Drupal 8, after about 1.5 years, only a third of the top 50 Drupal modules were ready for Drupal 8. Now, only 10 months after the release of Drupal 9, a whopping 90% of top 50 modules are Drupal 9 ready.
A chart that shows the Drupal 9 module ecosystem is pretty much ready

Drupal 10 update

Next, I spoke about the five big initiatives for Drupal 10, which are making progress:

  1. Decoupled menus
  2. Easy out of the box
  3. Automated updates
  4. Drupal 10 readiness
  5. New front-end theme initiative

I then covered some key dates for Drupal 9 and 10:

A timeline that shows Drupal 9.3 will be released in December 2021 and Drupal 10.0.0 in June 2022

Improving the site builder experience with a project browser

A Drupal robot staring in the distance along with a call to action to focus on the site builder experience

When I ask people why they fell in love with Drupal, most often they talk about feeling empowered to build ambitious websites with little or no code. In fact, the journey of many Drupalists started with Drupal's low-code approach to site building. It's how they got involved with Drupal.

This leads me to believe that we need to focus more on the site builder persona. With that in mind, I proposed a new Project Browser initiative. One of the first things site builders do when they start with Drupal is install a module. A Project Browser makes it easier to find and install modules.

If you're interested in helping, check out the Project Browser initiative and join the Project Browser Slack channel.

Modernizing's collaboration tools with GitLab

A small vessel sailing towards a large GitLab boat

Drupal has one of the largest and most robust development communities. And's collaboration tools have been key to that success.

What you might not know is that we've built these tools ourselves over the past 15+ years. While that made sense 10 years ago, it no longer does today.

Today, most Open Source communities have standardized on tools like GitHub and GitLab. In fact, contributors expect to use GitHub or GitLab when contributing to Open Source. Everything else requires too much learning.

For example, here is a quick video that shows how easy it is to contribute to Symfony using GitHub:

Next, I showed how people contribute to Drupal. As you can see in the video below, the process takes much longer and the steps are not as clear cut.

(This is an abridged version of the full experience; you can also watch the full video.)

To improve Drupal's contributor experience, the Drupal Association is modernizing our collaboration tools with GitLab. So far, this has resulted in some great new features. However, more work is required to give new Drupalists an easier path to start contributing.

Please reach out to Heather Rocker, the Executive Director of the Drupal Association, if you want to help support our GitLab work. We are looking for ways to expand the Drupal Association's engineering team so we can accelerate this work.

's goals for GitLab, along with positive attendee feedback in chat

Thank you

I'd like to wrap up with a thank you to the people and organizations who have contributed since we released Drupal 9 last June. It's been pretty amazing to see the momentum!

The names of the 1,152 individuals that contributed to Drupal 9 so far
The logos of the 365 organizations that contributed to Drupal 9 so far

I published the following diary on “How Safe Are Your Docker Images?“:

I don’t know of any organization today that is not using Docker. From test and development environments to full production systems, containers are deployed everywhere! Likewise, the most popular tools today have a “dockerized” version ready to use, sometimes maintained by the developers themselves, sometimes by third parties. An example is the Docker container that I created with all Didier’s tools. Today, we are also facing a new threat: supply chain attacks (think about SolarWinds or, more recently, Codecov). Mix the attraction of container technologies with this threat, and we realize that Docker images are a great way to compromise an organization… [Read more]

The post [SANS ISC] How Safe Are Your Docker Images? appeared first on /dev/random.

April 20, 2021

I will be starting to blog again after a very long time.
While this is in no way a fixed commitment, there are some technical topics I wish to express my opinion about.
This blog will mostly be focused on technical and other open source related topics, including but not limited to software, hardware, firmware and other topics.
All opinions are my own and are in no way endorsed or supported by any entity I work for or with.

April 16, 2021

I published the following diary on “HTTPS Support for All Internal Services“:

SSL/TLS has been in the spotlight for a while, with deprecated protocols and free certificates for everybody. The landscape is changing to push more and more people to switch to encrypted communications, and this is good! As Johannes explained yesterday, Chrome 90 will now prepend “https://” by default in the navigation bar. Yesterday’s diary covered the deployment of your own internal CA to generate certificates and switch everything to secure communications. This is a good point. In particular, by deploying your own root CA, you add an extra string to your security bow… [Read more]

The post [SANS ISC] HTTPS Support for All Internal Services appeared first on /dev/random.

April 15, 2021

JavaScript often has to be excluded from aggregation because inline JS depends on it. That’s why Autoptimize 2.9 will also have the option to defer inline JS, allowing all JS to be deferred, even that pesky jQuery. As seen in the screenshot below, exclusions will obviously be honored for both inline and linked JS, so you will be able to tweak everything just right.
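To illustrate why inline JS and the library it depends on have to be deferred together (a hypothetical snippet, not Autoptimize's actual output):

```html
<!-- Deferred external jQuery: downloaded in parallel, executed only
     after the document has been parsed. -->
<script defer src="jquery.min.js"></script>

<!-- Without being deferred as well, this inline script runs immediately
     and fails with "$ is not defined", because the deferred jQuery
     hasn't executed yet. Deferring both preserves the execution order. -->
<script>
  $(function () {
    $('.menu').addClass('ready');
  });
</script>
```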

The settings for JavaScript optimization have also been reshuffled, making “also aggregate inline JS”, “force JS in head” and “try/catch wrapping” sub-options of “Aggregate JS” (and thus hidden on the screenshot, as “Aggregate JS” is off), whereas “Defer inline JS” is a sub-option of “Don’t aggregate but defer”.

So we have per post/page AO settings, and we now have “also defer inline JS” for what will become 2.9. And there’s more to come…

April 13, 2021

Yesterday, I finally deleted my LinkedIn account. That account had been taunting me since 2006 with its uselessness and its impact on my mailbox. An account I had wanted to delete for years but kept, accepting its maintenance cost, out of fear that it might one day be useful.

The last straw was discovering that I had been subscribed to newsletters by people I had accepted into my network (I accept everyone; that way I don't have to think about it) who had used external services to export the email addresses of their LinkedIn contacts.

But the cup had been full for a long time already. In nearly 15 years of use and thousands of emails in my mailbox, I have no record of a single useful contact, a single opportunity that LinkedIn made possible. Ah, wait, one! A reader of my novel Printeurs told me, knowing my love for this network, that LinkedIn was how he learned the book had been published. So LinkedIn brought me one reader.

And yet I did my part. Young and naive, I had tried to accept into my network only people I knew well enough to recommend. To requests from strangers, I sent a polite refusal. I took several tongue-lashings mocking my youth and my failure to understand open networking. So I adapted, and accepted every request without exception.

For a few months, I pushed the experiment (or the vice, depending on your view) as far as accepting every proposal that reached me by message, replying that yes, I was interested. At least to those that did not ask me to pay for a service but offered to hire me or bring me onto projects.

In the vast majority of cases, I heard nothing after my acceptance. In some cases, the conversation continued until people forgot to reply to me. I agreed to everything, I declared my desire to go further. Nothing worked. I even agreed to go teach a computing course in Ethiopia, and found myself in a three-way discussion with the person in charge. I said yes, I followed up several times, and my last emails went unanswered.

I then decided to apply my "email only" strategy. It consists of sending a standard reply whenever someone contacts me through any messaging system: "Hello, I don't check this inbox. Please email me about this. Here is my address." My Facebook page even has an autoresponder that does this, and it regularly gets insulted.

The idea being that if a person won't take the time to send me a real email, the matter isn't really important and they aren't really expecting an answer.

Well, the verdict is unambiguous. I can count on one hand those who actually emailed me. In every case, they were people I knew outside LinkedIn who probably already had my contact details.

I discovered that acquaintances sometimes contacted me through LinkedIn and that I only saw the message much later. Paradoxically, social networks make me harder to reach.

So I got into the habit of checking LinkedIn a few times a month. And thus of enduring the notifications, the connection requests. In short, of being sucked into the attention machine that social network makers now build so efficiently.

I sometimes feel disorganized, launching piles of projects before abandoning them. I believe that, on LinkedIn, people are worse than I am. Quantity of relationships has replaced quality. Recruiters, marketers and aspiring entrepreneurs are like children in a toy store. They want everything; they greedily fill their cart before moving on to something else without unwrapping a thing.

LinkedIn has always been, to me, a network of beggars. Beggars for a job (pardon: "Looking for new opportunities" or "Ready for the next challenge"), beggars for clients of every kind, beggars for "professional" visibility. Marketers get their money's worth: they can message X contacts, harvest email addresses and call it a day. Recruiters settle for keyword searches and automated scripts. The fact that I did 6 months of J2EE in 2006 still seems to make me "the ideal profile for an important client". As for the rest, everyone hopes that spending the day on LinkedIn will miraculously turn into hard cash.

Despite all that, I stayed all these years. Because I felt it "might be useful someday". Because it is hard to accept that the balance sheet is so empty after so many years. Because I thought "it would be a shame to abandon a patiently built network" (sure: a few thousand clicks accepting often random requests).

But I could no longer stand the forced corporate cheerfulness, the semi-automatic congratulations celebrating my three years in a job I had left two and a half years earlier after forgetting to update my profile (sent by complete strangers, or by people I shared an office with for 3 weeks 10 years ago), the timeline stuffed with dithyrambic adjectives congratulating one another on what is only the umpteenth attempt to turn a spreadsheet of email addresses into customers billed monthly, or to sell an intellectually stunted concept as a training day to boost your team's performance.

Since LinkedIn was, for me, a network of beggars, everything I saw there was for sale. Including my data, my email address, my time. I decided to withdraw myself, and my data, from the market. I am no longer on LinkedIn.

If you followed me there, just subscribe to this blog. Your email address will be visible only to me, will never be used for anything but sending my posts, and will never be shared. All without going through Microsoft (LinkedIn's owner) as an intermediary. I think the ratio of information quality to time spent and emails received is far better when subscribing to this blog than when browsing LinkedIn. If we lose touch because I left LinkedIn, perhaps we were simply never in touch in the first place. We only had the illusion of it, as so often in the world of social networks. The illusion of being loved (Facebook), the illusion of having friends (Facebook), the illusion of being listened to (Twitter), the illusion of a cool life (Instagram), the illusion of being professionally important and well connected (LinkedIn). Besides, without this post, probably no one would have noticed my absence. On social networks, the absent are quickly carried off by the stream, the brief, illusory glory they had built dissolving instantly into the immediacy of oblivion. The riverbed keeps no trace of the pebble you have just removed.

One situation is not another. LinkedIn may be useful, even indispensable, for your own work. What matters, as Cal Newport points out in his excellent Digital Minimalism, is to weigh the real cost against the real benefits (not the assumed ones) and to make your own choices consciously.

In my situation, every source of distraction removed is one more book read by the end of the year. So be it. I am leaving the big blue network, taking off my tie and my corporate shoes, and diving back into my reading.

Photo by Jonathan Kho on Unsplash

I am @ploum, an engineer and writer. Subscribe by email or RSS so you never miss a post (two per week at most). I am convinced that Printeurs, my latest science-fiction novel, will captivate you. Ordering and sharing my books is the best way to support me and help spread my ideas!

This text is published under the CC-By BE licence.

April 09, 2021

I published the following diary on “No Python Interpreter? This Simple RAT Installs Its Own Copy“:

For a while, I’ve been keeping an eye on malicious Python code targeting Windows environments. While Python looks more and more popular, attackers face a major issue: Python is not installed by default on most Windows operating systems. Python is often available on the machines of developers, system/network administrators, or security teams. As the proverb says, “you are never better served than by yourself”: I found a simple Python backdoor that installs its own copy of the Python interpreter… [Read more]

The post [SANS ISC] No Python Interpreter? This Simple RAT Installs Its Own Copy appeared first on /dev/random.

April 08, 2021

I published the following diary on “Simple Powershell Ransomware Creating a 7Z Archive of your Files“:

If some ransomware families are based on PE files with complex features, it’s easy to write quick-and-dirty ransomware in other languages like Powershell. I found this sample while hunting. I’m pretty confident that this script is a proof-of-concept or still under development because it does not contain all the required components and includes some debugging information… [Read more]

The post [SANS ISC] Simple Powershell Ransomware Creating a 7Z Archive of your Files appeared first on /dev/random.

April 07, 2021

In an interview, Bill Gates was asked how hard it was for him to learn to delegate. Bill answers by describing how he had to change his mental model, going from writing code himself to letting go in order to optimize for impact.

Yeah, scaling [Microsoft] was a huge challenge. At first I wrote all the code. Then I hired all the people that wrote the code and I looked at the code. Then, eventually, there was code that I didn't look at and people that I didn't hire. And of course the average quality per person is going down, but the ability to have big impact is going up. [...] A large company is imperfect in many ways, and yet it's the way to get out to the entire world. — Bill Gates

You can listen to the entire interview but I've also included the excerpt here:

This idea of "having to let go to optimize for impact" really resonates with me. I've gone through this transition in the Drupal community and at Acquia. I've even written about it on a few occasions [1, 2].

April 06, 2021

screenshot of the page/post Autoptimize settings

I’m in the process of adding a per page/post option to disable Autoptimize.

In the current state of this work in progress, one can disable Autoptimize entirely on a post/page, or disable just JS optimization, as you can see in the screenshot.

Now my question to you, Autoptimize user, is: which other options from the list below _have_ to go in that metabox, taking into account that the list should be between 3 and 5 items long?

  • CSS optimization (which includes Critical CSS)
  • Critical CSS usage/ Inline & defer CSS
  • HTML optimization
  • Image optimization
  • Image Lazyload
  • Google Font optimization
  • Preload (from “extra” tab)
  • Preconnect (from “extra” tab)
  • Async (from “extra” tab)

The Adafruit nRF52 bootloader is a USB-enabled CDC/DFU/UF2 bootloader for nRF52 boards. An advantage compared to Nordic Semiconductor's default bootloader is that you can just drag and drop your application firmware from your operating system's file explorer, without having to install any programming tools. For nRF52840 boards, you hold the reset button while sliding the USB connector in the USB port of your computer, or you tap the reset button twice within 500 ms. The bootloader then starts in DFU (device firmware upgrade) mode and behaves like a removable flash drive.

This device shows three virtual files:

  • INFO_UF2.TXT: contains information about the bootloader build and the board on which it's running

  • INDEX.HTM: redirects to a page that contains an IDE or other information about the board

  • CURRENT.UF2: the contents of the entire flash storage of the device

Flashing the device with new firmware is as easy as copying a UF2 file to the drive. After the file is copied, the drive is unmounted and the new firmware is running on the board. 1
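Under the hood, a UF2 file is just a sequence of 512-byte blocks, each carrying a payload fragment plus its target flash address, which is what makes the copy-to-drive flashing so robust. The following Python sketch packs one such block; the magic numbers come from the published UF2 specification, and the code is purely illustrative (it is not taken from the Adafruit bootloader):

```python
import struct

# Magic numbers from the published UF2 specification.
UF2_MAGIC_START0 = 0x0A324655  # "UF2\n"
UF2_MAGIC_START1 = 0x9E5D5157
UF2_MAGIC_END = 0x0AB16F30


def make_uf2_block(payload: bytes, target_addr: int,
                   block_no: int, num_blocks: int) -> bytes:
    """Pack one 512-byte UF2 block: a 32-byte header, a 476-byte
    payload area (zero-padded), and a 4-byte end magic."""
    assert len(payload) <= 476
    header = struct.pack(
        "<8I",
        UF2_MAGIC_START0, UF2_MAGIC_START1,
        0,             # flags (none set in this sketch)
        target_addr,   # flash address where the payload belongs
        len(payload),  # number of payload bytes actually used
        block_no,      # sequence number of this block
        num_blocks,    # total number of blocks in the file
        0,             # file size or board family ID (unused here)
    )
    return header + payload.ljust(476, b"\x00") + struct.pack("<I", UF2_MAGIC_END)


block = make_uf2_block(b"\xde\xad\xbe\xef", 0x26000, 0, 1)
assert len(block) == 512
```

Because every block is self-describing, the bootloader can consume blocks in any order and ignore anything that isn't a valid UF2 block, which is exactly what lets the OS write the file like any other removable drive.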

However, the bootloader itself can't be upgraded like this. Today I had some problems with the April USB Dongle 52840 2: the UF2 file transfer was interrupted before it finished. The dmesg command showed a call trace and some errors. As a result, the device was useless: I couldn't put any new firmware on it.

I was puzzled, but then I looked at the bootloader's version in INFO_UF2.TXT, and this was quite old: a 0.2.x version from 2018. I hoped that upgrading the bootloader would solve the problem.

Upgrading the Adafruit nRF52 bootloader is quite easy:

  1. Download the latest release of the bootloader. For the April USB Dongle 52840 and other devices based on Nordic Semiconductor's nRF52840 Dongle 3, the firmware file you need is the one for the bootloader and SoftDevice wireless protocol stack. PCA10059 is the official name of Nordic Semiconductor's nRF52840 Dongle.

  2. Unpack the ZIP file. You need the file (yes, another ZIP file) in it.

  3. Install adafruit-nrfutil: pip3 install --user adafruit-nrfutil.

  4. Connect the board to your computer's USB port and flash the new bootloader package:

$ adafruit-nrfutil dfu serial --package --port /dev/ttyACM0
Upgrading target on /dev/ttyACM0 with DFU package /home/koan/
Flow control is disabled, Dual bank, Touch disabled
Activating new firmware
Device programmed.

After this, the INFO_UF2.TXT file contains UF2 Bootloader 0.5.0 lib/nrfx (v2.0.0) lib/tinyusb (0.9.0-22-g7cdeed54) lib/uf2 (remotes/origin/configupdate-9-gadbb8c7).

Luckily the upgraded bootloader solved my problem: I was able to flash the board with new UF2 application firmware.


The UF2 format for firmware has become quite popular in recent years. For instance, the Raspberry Pi Pico also has a bootloader that accepts UF2 files.


If you're looking for an nRF52840 device with a longer range than similar devices with a PCB-based antenna, I can definitely recommend the April USB Dongle 52840: in my experiments with the dongle as a Bluetooth Low Energy and a 802.15.4/Zigbee sniffer, the external antenna makes a big difference.


Another interesting nRF52840 board is the nRF52840 MDK USB Dongle from makerdiary. This is essentially a Nordic Semiconductor nRF52840 Dongle with the Adafruit nRF52 bootloader, sold in a case.

April 03, 2021

This morning I finally pushed Autoptimize 2.8.2 out of the gates, a relatively minor release with misc. small improvements/bugfixes. Only it proved not that minor, as it broke some sites after the update, so here’s a quick postmortem.


  • 7h33 CEST: I pushed out 2.8.2
  • 7h56 CEST: first forum post about a Fatal PHP error due to wp-content/plugins/autoptimize/classes/external/php/ao-minify-html.php missing
  • 7h58 CEST: second forum post confirming issue
  • 8h01 CEST: responded to both forum posts asking if file was indeed missing on filesystem
  • 8h04 CEST: I changed the “stable version” back to 2.8.1 to stop 2.8.2 from being pushed out.
  • 8h07 CEST: forum post replies confirming the file was indeed missing from the filesystem
  • 8h15 CEST: I pushed out 2.8.3 with the fix
  • 8h22 CEST: confirmed fixed by first user
  • 8h26 CEST: confirmed fixed by second user

Root cause analysis

One of the improvements was changing the classname of the HTML minifier to avoid W3 Total Cache’s HTML minifier being used. For this purpose, not only were small changes made to the HTML minifier code, but the file was also renamed from minify-html.php to ao-minify-html.php. The file itself was present on my local filesystem, but I did *not* svn add it, so it was never propagated to the SVN server. As a result it was missing from the 2.8.2 zip file, causing the PHP fatal “require(): Failed opening required” errors.


Every svn ci has to be preceded by an svn stat, always. I’ve updated my “go live” procedure to reflect that.
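In practice that check looks something like this (a sketch; the path is the renamed minifier file from this release, the commit message is illustrative):

```shell
# Before committing, list the working-copy status:
svn stat
# Any file flagged '?' is not under version control and would
# silently be missing from the released zip. Add it, then commit:
svn add classes/external/php/ao-minify-html.php
svn ci -m "add the renamed HTML minifier"
```

A one-line habit, but it catches exactly the class of mistake that caused this incident.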

Additionally; I strongly advise against automatic updates for Autoptimize (and I don’t auto-update any plugin myself), not only for major f-ups like mine today, but also because any change to how (auto-)optimization works needs to be tested for regressions. And if you have a site that generates money somehow, you really should have a staging site (which can auto-update) to test updates on before applying on production.

April 02, 2021

I published the following diary on “C2 Activity: Sandboxes or Real Victims?“:

In my last diary, I mentioned that I was able to access screenshots exfiltrated by the malware sample. During the first analysis, there were approximately 460 JPEG files available. I continued to keep an eye on the host and the number slightly increased, but not by much. My diary conclusion was that the malware looked popular, judging by the number of screenshots, but wait… Are we sure that all those screenshots are real victims? I executed the malware in my sandbox, and probably other automated analysis tools were used to detonate the malware in a sandbox. This question popped up in my mind: how do we get an idea of the ratio of automated tools vs. real victims? [Read more]

The post [SANS ISC] C2 Activity: Sandboxes or Real Victims? appeared first on /dev/random.

March 31, 2021

I published the following diary on “Quick Analysis of a Modular InfoStealer“:

This morning, an interesting phishing email landed in my spam trap. The mail was redacted in Spanish and, as usual, asked the recipient to urgently process the attached document. The filename was “AVISO.001” (This extension is used by multi-volume archives). The archive contained a PE file with a very long name: AVISO11504122921827776385010767000154304736120425314155656824545860211706529881523930427.exe (SHA256:ff834f404b977a475ef56f1fa81cf91f0ac7e07b8d44e0c224861a3287f47c8c). The file is unknown on VT at this time so I performed a quick analysis… [Read more]

The post [SANS ISC] Quick Analysis of a Modular InfoStealer appeared first on /dev/random.

March 29, 2021

I published the following diary on “Jumping into Shellcode“:

Malware analysis is exciting because you never know what you will find. In previous diaries, I already explained why it’s important to have a look at groups of interesting Windows API call to detect some behaviors. The classic example is code injection. Usually, it is based on something like this:

1. You allocate some memory
2. You get a shellcode (downloaded, extracted from a specific location like a section, a resource, …)
3. You copy the shellcode in the newly allocated memory region
4. You create a new thread to execute it.

[Read more]

The post [SANS ISC] Jumping into Shellcode appeared first on /dev/random.

March 28, 2021

rpi4 with disk

In my last blog post, we set up a FreeBSD virtual machine with QEMU. I switched from the EDK2 (UEFI) firmware to U-Boot, because the EDK2 firmware had issues with multiple CPUs in the virtual machines.

In this blog post, we’ll continue with the network setup, install the virtual machine from a CD-ROM image, and configure the virtual machine to start during the Pi’s start-up.

Network Setup


Bridge setup

The network interface on my Raspberry PI is configured in a bridge. I used this bridge setup already for a virtual machine setup with libvirtd.

The bridge is configured with network-manager. I don’t recall how I created it. It was probably created with nmtui or nmcli.

Creating a bridge with nmtui is straightforward; I’ll not cover it in this how-to.

I use Manjaro on my Raspberry Pi. Manjaro is based on Arch Linux. The ArchLinux wiki has a nice article on how to set up a bridge.
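For reference, a bridge like mine can be created with nmcli along these lines (a sketch based on the ArchLinux wiki article; the interface name eth0 and the bridge name eth0-bridge are assumptions matching my setup):

```shell
# Create the bridge and enslave the physical interface to it.
nmcli connection add type bridge ifname eth0-bridge con-name eth0-bridge
nmcli connection add type bridge-slave ifname eth0 master eth0-bridge
# Bring the bridge up.
nmcli connection up eth0-bridge
```

These commands need root privileges and a running NetworkManager, so treat them as a configuration sketch rather than something to paste blindly.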


Create a bridge.conf file in /etc/qemu/ to allow the bridge in QEMU.

# cat /etc/qemu/bridge.conf 
allow eth0-bridge


When you use a firewall that drops all packets by default - as you should - you probably want to set up a firewall rule that allows all traffic on the physical interface on the bridge.

iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT

I use a simple firewall script that was based on the Debian firewall wiki:

As always with a firewall, make sure that you log the dropped packets. It’ll make your life easier when debugging.

You’ll find my iptables firewall rules below.

iptables -F

# Default policy to drop 'everything' but our output to internet
iptables -P FORWARD DROP
iptables -P INPUT   DROP
iptables -P OUTPUT  ACCEPT

# Allow established connections (the responses to our outgoing traffic)
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow local programs that use loopback (Unix sockets)
iptables -A INPUT -s -d -i lo -j ACCEPT

# Uncomment this line to allow incoming SSH/SCP connections to this machine,
# for traffic from (you can also use a network definition as
# source like

iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT

iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT


iptables -A LOGGING_INPUT -m limit --limit 2/min -j LOG --log-prefix "IPTables-Input-Dropped: " --log-level 4

iptables -A LOGGING_FORWARD -m limit --limit 2/min -j LOG --log-prefix "IPTables-Forward-Dropped: " --log-level 4

iptables -A LOGGING_OUTPUT -m limit --limit 2/min -j LOG --log-prefix "IPTables-Output-Dropped: " --log-level 4



To boot the virtual machine with networking enabled, you can add -net nic -net bridge,br=<your-bridge> to the qemu-system-aarch64 command. My bridge is called eth0-bridge.

As a test, I booted the virtual machine with the FreeBSD virtual machine image.

qemu-system-aarch64 -M virt -m 4096M -cpu host,pmu=off --enable-kvm \
 	-smp 2 -nographic -bios /usr/local/u-boot/u-boot.bin \
 	-hda /home/staf/Downloads/freebsd/FreeBSD-13.0-RC2-arm64-aarch64.qcow2 \
	-boot order=d -net nic -net bridge,br=eth0-bridge

This creates a tap interface that is assigned to the virtual machine. The FreeBSD virtual image is configured to get an IP address with DHCP.

Install FreeBSD from a cdrom image

Download the FreeBSD ARM64 “Installer Image” from the FreeBSD website:

Create a disk image for the virtual machine.

$ qemu-img create -f qcow2 myfreebsd.qcow2 50G
Formatting 'myfreebsd.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=53687091200 lazy_refcounts=off refcount_bits=16

Boot the virtual machine with the “Installer Image” and the created qcow2 image.

$ qemu-system-aarch64 -M virt -m 4096M -cpu host,pmu=on --enable-kvm \
        -smp 2 -nographic -bios /usr/local/u-boot/u-boot.bin \
        -cdrom /home/staf/Downloads/freebsd/iso/FreeBSD-13.0-RC3-arm64-aarch64-dvd1.iso \
        -boot order=c \
        -hda myfreebsd.qcow2 \
        -net nic -net bridge,br=eth0-bridge

The installation continues as “a normal” FreeBSD install.

|  ______               ____   _____ _____  
  |  ____|             |  _ \ / ____|  __ \ 
  | |___ _ __ ___  ___ | |_) | (___ | |  | |
  |  ___| '__/ _ \/ _ \|  _ < \___ \| |  | |
  | |   | | |  __/  __/| |_) |____) | |__| |
  | |   | | |    |    ||     |      |      |
  |_|   |_|  \___|\___||____/|_____/|_____/      ```                        `
                                                s` `.....---.......--.```   -/
 +-----------Welcome to FreeBSD------------+    +o   .--`         /y:`      +.
 |                                         |     yo`:.            :o      `+-
 |  1. Boot Multi user [Enter]             |      y/               -/`   -o/
 |  2. Boot Single user                    |     .-                  ::/sy+:.
 |  3. Escape to loader prompt             |     /                     `--  /
 |  4. Reboot                              |    `:                          :`
 |  5. Cons: Video                         |    `:                          :`
 |                                         |     /                          /
 |  Options:                               |     .-                        -.
 |  6. Kernel: default/kernel (1 of 1)     |      --                      -.
 |  7. Boot Options                        |       `:`                  `:`
 |                                         |         .--             `--.
 |                                         |            .---.....----.
   Autoboot in 5 seconds, hit [Enter] to boot or any other key to stop   

Choose your terminal type; I used xterm. Tip: if your screen gets mixed up during the installation, you can use [CTRL][L] to redraw it.

Starting local daemons:
Welcome to FreeBSD!

Please choose the appropriate terminal type for your system.
Common console types are:
   ansi     Standard ANSI terminal
   vt100    VT100 or compatible terminal
   xterm    xterm terminal emulator (or compatible)
   cons25w  cons25w terminal

Console type [vt100]: 

Continue with the FreeBSD installation…

When you reboot your freshly installed FreeBSD system, interrupt the boot process with the [CTRL][a] [x] key combination. To see the other options, use [CTRL][a] [h].

qemu-system-aarch64 -M virt -m 4096M -cpu host --enable-kvm \
        -smp 2 -nographic -bios /usr/local/u-boot/u-boot.bin \
        -boot order=c \
        -hda myfreebsd.qcow2 \
        -net nic -net bridge,br=eth0-bridge

The first boot will fail: we are using U-Boot as the BIOS, so the EFI boot filesystem doesn’t exist.

Logon to the system.

Automatic file system check failed; help!
ERROR: ABORTING BOOT (sending SIGTERM to parent)!
1970-01-01T01:00:02.912420+01:00 - init 1 - - /bin/sh on /etc/rc terminated abnormally, going to single user mode
Enter root password, or ^D to go multi-user
Enter full pathname of shell or RETURN for /bin/sh: 
root@:/ # 

Verify the filesystem that failed to mount.

root@:/ # mount -a
mount_msdosfs: /dev/vtbd1p1: No such file or directory
root@:/ # 

The root filesystem is read-only. Remount it in read-write mode with mount -u /

root@:/ # mount -u /
root@:/ #

Edit /etc/fstab

root@:/ # vi /etc/fstab

And add a # before the /boot/efi mount point. I’d rather not remove the line; it might be useful to re-enable it when you want to switch to a UEFI BIOS.

# Device                Mountpoint      FStype  Options         Dump    Pass#
# /dev/vtbd1p1          /boot/efi       msdosfs rw              2       2
/dev/mirror/swap                none    swap    sw              0       0

And reboot your system.

root@:/ # sync
root@:/ # reboot


To implement the auto-start of the QEMU virtual machine, I mainly followed the QEMU article on the ArchLinux wiki.

Systemd service

Create the systemd service.

# vi /etc/systemd/system/qemu@.service
[Unit]
Description=QEMU virtual machine

[Service]
Environment="haltcmd=kill -INT $MAINPID"
EnvironmentFile=/etc/conf.d/qemu.d/%i
ExecStart=/usr/bin/qemu-system-aarch64 -M virt -name %i --enable-kvm -cpu host -nographic $args
ExecStop=/usr/bin/bash -c ${haltcmd}
ExecStop=/usr/bin/bash -c 'while nc localhost 7100; do sleep 1; done'

[Install]
WantedBy=multi-user.target


Create QEMU config

Create the qemu.d config directory.

# mkdir -p /etc/conf.d/qemu.d/

Create the definition for the virtual machine.

# vi /etc/conf.d/qemu.d/myfreebsd
args="-m 4096 -smp 2 -bios /usr/local/u-boot/u-boot.bin -hda /var/lib/qemu/images/rataplan/myfreebsd.qcow2 -boot order=c -net nic -net bridge,br=eth0-bridge -serial telnet:localhost:$vmport,server,nowait,nodelay"
haltcmd="ssh powermanager@myfreebsd sudo poweroff"
[root@minerva ~]# systemctl daemon-reload
[root@minerva ~]# 
[root@minerva ~]# systemctl start qemu@myfreebsd
[root@minerva ~]# 


FreeBSD on pi screen

We have two options to execute a poweroff. The first one is via ACPI: QEMU has a “monitor” interface that allows you to execute a “system_powerdown” command, which triggers a poweroff via ACPI.

Your client operating system needs to support it. FreeBSD has good ACPI support built into the kernel, but I don’t know the state and how stable it is on ARM64. We’re also using U-Boot.

The other option is to execute the poweroff command over ssh with sudo. Since I didn’t get ACPI working, I configured it with ssh.
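For completeness, the ACPI variant from the ArchLinux wiki exposes the QEMU monitor on a TCP port and sends the system_powerdown command to it; I couldn’t get this working here, so treat it as a sketch (port 7100 is an example that matches the nc check in the systemd unit):

```shell
# Extra QEMU arguments in the VM's /etc/conf.d/qemu.d/ file:
#   -monitor telnet:localhost:7100,server,nowait,nodelay
# The halt command then becomes a one-liner against the monitor:
echo 'system_powerdown' | nc localhost 7100
```

If your guest handles the ACPI event, this avoids the ssh key and sudo setup entirely.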

Setup ssh

Generate an ssh key

I normally store my ssh keys on a smartcard-hsm and use an ssh-agent. As a test, I will just use an ssh key on the host filesystem.

I’ll migrate it when I move my raspberry-pi into my home production environment. :-)

Generate an ssh key on the QEMU host system.

# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/
The key fingerprint is:
The key's randomart image is:

Install sudo

Install sudo on the FreeBSD client system. The FreeBSD package manager pkg will be installed the first time you execute it.

To execute the poweroff command we’ll use sudo, so let’s install it…

# pkg install -y sudo
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
	sudo: 1.9.5p2

Number of packages to be installed: 1

The process will require 4 MiB more space.
890 KiB to be downloaded.
[1/1] Fetching sudo-1.9.5p2.txz: 100%  890 KiB 911.0kB/s    00:01    
Checking integrity... done (0 conflicting)
[1/1] Installing sudo-1.9.5p2...
[1/1] Extracting sudo-1.9.5p2: 100%

Create the powermanager user

Create the powermanager user with the adduser command.

# adduser
Username: powermanager
Full name: powermanager
Uid (Leave empty for default): 
Login group [powermanager]: 
Login group is powermanager. Invite powermanager into other groups? []: 
Login class [default]: 
Shell (sh csh tcsh bash rbash nologin) [sh]: 
Home directory [/home/powermanager]: 
Home directory permissions (Leave empty for default): 
Use password-based authentication? [yes]: no
Lock out the account after creation? [no]: 
Username   : powermanager
Password   : <disabled>
Full Name  : powermanager
Uid        : 1002
Class      : 
Groups     : powermanager 
Home       : /home/powermanager
Home Mode  : 
Shell      : /bin/sh
Locked     : no
OK? (yes/no): yes
adduser: INFO: Successfully added (powermanager) to the user database.
Add another user? (yes/no): no
root@rataplan:~ # 

Configure sudo

Create /usr/local/etc/sudoers.d/powermanager

# visudo -f /usr/local/etc/sudoers.d/powermanager

with the permission to execute the poweroff command without a password.

powermanager ALL=(ALL) NOPASSWD:/sbin/poweroff


Create the authorized_keys file for the powermanager user.

Create the .ssh directory in homedir of the powermanager.

# cd /home/powermanager/
# umask 027
# mkdir .ssh

Create the authorized_keys file. It’s less well known that you can also restrict access in the authorized_keys file; we’ll restrict access to the IP address of the Linux hypervisor system.

from="",no-X11-forwarding ssh-rsa <snip>
root@rataplan:/home/powermanager # chown -R root:powermanager .ssh
root@rataplan:/home/powermanager # 


Log on to the FreeBSD virtual machine with the created ssh key and try to execute the poweroff command.

# ssh powermanager@myfreebsd
The authenticity of host 'myfreebsd (' can't be established.
ED25519 key fingerprint is SHA256:R7tmX7In9D21H3hj2JiwJJVwcoQvoIR5BgJjuKgY3CI.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'myfreebsd' (ED25519) to the list of known hosts.
FreeBSD 13.0-RC3 (GENERIC) #0 releng/13.0-n244696-8f731a397ad: Fri Mar 19 03:36:50 UTC 2021

Welcome to FreeBSD!

Release Notes, Errata:
Security Advisories:
FreeBSD Handbook:
Questions List:
FreeBSD Forums:

Documents installed with the system are in the /usr/local/share/doc/freebsd/
directory, or can be installed later with:  pkg install en-freebsd-doc
For other languages, replace "en" with a language code like de or fr.

Show the version of FreeBSD installed:  freebsd-version ; uname -a
Please include that output and any error messages when posting questions.
Introduction to manual pages:  man man
FreeBSD directory layout:      man hier

To change this login announcement, see motd(5).
Nice bash prompt: PS1='(\[$(tput md)\]\t <\w>\[$(tput me)\]) $(echo $?) \$ '
		-- Mathieu <>
powermanager@rataplan:~ $ 
$ sudo poweroff
Shutdown NOW!
poweroff: [pid 43082]
powermanager@rataplan:~ $                                                                                
*** FINAL System shutdown message from powermanager@rataplan ***             

System going down IMMEDIATELY                                                  


System shutdown time has arrived
Connection to myfreebsd closed by remote host.
Connection to myfreebsd closed.
[root@minerva ~]# 

Final Test

Make sure that your client system is running and configured to start at system startup.

[root@minerva ~]# systemctl enable qemu@myfreebsd
Created symlink /etc/systemd/system/ → /etc/systemd/system/qemu@.service.
[root@minerva ~]# systemctl start qemu@myfreebsd
[root@minerva ~]# 

Verify that the system is running with systemctl status.

[root@minerva ~]# systemctl status qemu@myfreebsd
● qemu@myfreebsd.service - QEMU virtual machine
     Loaded: loaded (/etc/systemd/system/qemu@.service; enabled; vendor preset: disabled)
     Active: active (running) since Sun 2021-03-21 20:24:10 CET; 2min 39s ago
   Main PID: 43360 (qemu-system-aar)
      Tasks: 5 (limit: 8536)
     CGroup: /system.slice/system-qemu.slice/qemu@myfreebsd.service
             └─43360 /usr/bin/qemu-system-aarch64 -M virt -name myfreebsd --enable-kvm -cpu host -nographic -m 4096 -smp 2 -bios /usr/local/u-boot/u-b>

Mar 21 20:24:10 minerva systemd[1]: Started QEMU virtual machine.
Mar 21 20:24:10 minerva qemu-system-aarch64[43360]: QEMU 5.2.0 monitor - type 'help' for more information

In one window, log on to your FreeBSD client console with telnet.

$ telnet 7001

On the QEMU Linux system execute

# systemctl stop qemu@myfreebsd

The FreeBSD client should power down…

Have fun!


March 26, 2021

Cover Image

The Boy Who Cried Leopard

Recently there's been a new dust up about Richard Stallman and the Free Software Foundation. For those of you just tuning in: an open letter demands that the entire board of the Free-as-in-speech Software Foundation resign, because of past statements and opinions by the radical inventor of free-as-in-speech software.

It's pushed on social media, by various People of Clout. People start sharing their own stories which are somehow meant to prove the power grab is justified because Stallman is horrible. There's also a counter letter, which I and many others have signed. It's all very productive.

The whole situation is remarkable to me. The undersigned claim to detest Stallman, for being an uncompromising libertarian who holds unsavory and immoral views—or at least a caricature of them. Yet they seem incredibly invested in taking over an organization he founded to explicitly defend his personal ideals. You'd think people who are so into guilt by association would prefer to not be associated with any of it.

It's even more remarkable when you notice the backdrop for the previous dust up involving Stallman: MIT and Jeffrey Epstein. Cos what it looked like to me was that a bunch of people suddenly all had their hands in a very dubious funding cookie jar. At the same time, they decided it was very important to use someone as a scapegoat to pin evil opinions on about sex and consent. You gotta wonder.

What I really want to talk about though is a pattern of behavior that keeps recurring.

Please be patient I have autism - Blue hat


Consider this story.

T. Tweeter describes the pain of being sat next to Stallman on a grounded plane for 90 minutes. Stallman complains to the flight attendant and becomes irate. Eventually the narrator "takes one for the team" by striking up a conversation with him, lest the entire flight be cancelled, after ignoring him for 45 minutes. Very empathetic. Stallman sees this as an opportunity to criticize his choice of headphones, that they are a symbol of digital oppression.

The intended take-away, I assume, is that Stallman is immature and lacks the social graces to deal with a difficult situation. He takes out his stress on the people around him, who can't do anything about it, making it worse for everyone. He is single-mindedly focused on his own interests.

That doesn't sound very pleasant.

Though as someone on the spectrum, I can read this situation quite differently.

Planes are uncomfortable for anyone: you are stuck in a tin can, in an uncomfortable seat, next to people you can't get away from. For autists, this is extra bad: they often have difficulty tuning out their environment. This can be experienced as an actual assault of painful sounds, smells and so on. Spending several hours on a plane is Nightmare mode for some of us, and noise-cancelling can be a life saver.

The fact that the plane was grounded is also extremely pertinent: autism is often paired with OCD, and a grounded plane represents a schedule that was made but then disrupted. An expectation was set of orderly events, and then this expectation was violated, with no definite end in sight. This can be unbearable for those with a certain predisposition.

The combination of the two is extra bad, because the way autists generally deal with stressful situations is through planning and preparation: they anticipate the various obstacles and harms they might encounter, and preventatively try to mitigate them. If things go wrong despite all this, because of the actions of others, this can register as negligent and rude. The person on the spectrum is trying their best to avoid harm, to avoid foreseeable problems that will result in pain, but their efforts are in vain or actively frustrated.

Worse, if they complain, they will be seen as arrogant and entitled, because what was plainly obvious to them is rarely understood by others. It puts them in a damned-if-you-do, damned-if-you-don't situation. Annoy people by pointing out their mistakes, or stay silent and be forced to live painfully through their slow, unfolding consequences. Ripping off the band-aid is sometimes necessary, and can have remarkable results.

I'm not defending Stallman's behavior, I'm just explaining what it likely looked like from the other side. The part about the headphones is also pertinent, because to someone like Stallman, being able to talk about his interests is, by definition, a good time. It comes from an inability to understand that others have fundamentally different priorities of what is enjoyable. He sincerely believes the person is making a bad choice because he foresees that some technological limitation will eventually deny them a fundamental expressive right.

What is most remarkable is that Stallman's detractors consider themselves exquisitely empathetic. Yet they seem unable to grasp this from his perspective, even if they find it unreasonable. They assume he is being willfully unbearable in a bearable situation, rather than simply having an unbearable experience, as valid or invalid as theirs.

Japanese Tapas aka Izakaya

The Izakaya Clown Car

I have my own story that hits similar notes. At a local conference, I booked a dinner reservation for a group. Because of an error by the restaurant, it almost fell through, but we managed to sort it all out with a different location. It was all very chaotic.

My invitation was very clear: there are no extra seats available. The guest list was locked in. This was an extremely popular place. So you can imagine how I felt when, day of, more people show up than agreed.

"Well, there's a few people here with their spouse... we couldn't just tell them not to come."

Here's how my sperg brain answered that:

"Yes you can. In fact, those are exactly the people who can go off have dinner on their own without being alone."

Most people don't want to be the one to say "no, you can't come," even if there is a perfectly good reason for it. I am not that guy.

You see, I know conferences. I know the pattern of wandering in the vicinity of the event as part of a hungry group. The chances of finding dinner any time soon shrink with every new person who tags along. This is the exact thing my dinner plans were meant to avoid. Sorry, that's just how it is. Don't blame me for knowing you better than you do. Bystander group dynamics are predictable and tedious.

We ended up squished around too small a table, with visibly exasperated staff, in a place that until then I had been a regular and welcomed customer at. At a location that normally didn't even do reservations but had been forced to accept out of a Japanese sense of franchise honor. And me a nervous wreck for about the first half of it, at least until the sake kicked in.

I'm sure some thought I was the asshole, too spergy to just "have a good time". This is the problem with people: if the assholishness is sufficiently distributed, everyone can claim individually it's not a big deal, even when all the crap flows downhill towards one person. Out of sight, out of mind.

That dinner ended up getting paid for with a Google credit card, btw. I suspect there's a lesson about valley privilege in there. Just saying.

Git rebase

Rebase Richard Stallman

Anyway so, when faced with a stressful and unexpected situation, Stallman freaks out.

Now let's look at the Medium post Remove Richard Stallman from the last dust up:

I’m writing this because I’m too angry to work.

I’m writing this because at 11AM on Wednesday, September 11th 2019, my friend sent me an email that was sent to an MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) mailing list.

This email came from Richard Stallman, a prominent computer scientist.

A single email sent you into a rage, you say?

I was shocked. I continued talking to my friend, a female graduate student in CSAIL, about everything, trying to get the full email thread (I wasn’t on the mailing list). I even started emailing reporters — local and national, news sites, newspapers, radio stations. I couldn’t stop thinking about it. During my 45-minute drive home, when I normally listen to podcasts or music, I just sat in complete silence.

And then you couldn't stop talking about it. You dumped it all on a friend and turned them into your personal backchannel? You reached out to multiple reporters? Your normal routine was completely thrown off?

So I told my friends that I would just write a story myself. I’d planned to do it after work today; instead, because I can’t possibly focus, I’m working on it now.

The problems are so obvious.

Why do we wait until it becomes bad and public and unbearable and people like me have to write posts like this?

Why do we ponder the low enrollment of female and minority graduate students at MIT with one hand and endorse shitty men in science with the other? Not only endorse them — we invite them to our campus where they will brush shoulders with those same female and minority students.

There's a thing that's extremely obvious to her, that she finds unbearable. She is very frustrated that others aren't automatically on board. She hates the idea of being around them and even hints that it is unpleasant to touch them.

She's doing the exact same thing Stallman was on the plane. What's more, she is using all the autistic registers to describe her discomfort: bottled up emotions, OCD, disruption of routine, sensory discomfort, and so on.

It's also similar to my story of being squished around a restaurant table. The big difference is: nobody is forcing her to do anything. This is all just about an email somebody forwarded to her, from a list she's not even on.

What's really, really funny is the next part:

There is nothing I have seen a man in tech do that a woman could not. What’s more, the woman would probably be less egotistical and more team-oriented about it.

Like freaking out in public but pretending you're doing it for the children. Did you know that they say that autism tends to manifest differently in women than in men? And that men tend to have a systems-focus while women tend to have a people-focus? That female autists tend to be more verbally fluent and hence often go unnoticed for years? You say you are an MIT robotics engineer with a fondness for writing?

There is nothing I have seen a woman in tech do that a man could not. What’s more, the man would probably be less egotistical and more team-oriented about it.

Doesn't sound so pleasant anymore, does it? This is the "Women are Wonderful" effect in the wild: making patently sexist statements is okay if they make women sound good.

There is no single person that is so deserving of praise their comments deprecating others should be allowed to slide. Particularly when those comments are excuses about rape, assault, and child sex trafficking.

Notice that the person who openly denigrated "shitty men in science" in bulk earlier claims it is wholly unacceptable to deprecate others, while misrepresenting them as endorsing horrific crimes wholesale.

Let the sperg who is a buddha cast the first stone.

Stoning Scene - The Life of Brian

Dogs and Cats

It's easy to conclude the above represents enormous, total, widespread hypocrisy. But there's a subtle distinction that threatens to get lost.

Stallman was unexpectedly stuck on a plane. I was unexpectedly forced to choose between going hungry or having an extremely uncomfortable dinner.

But nobody was forced to listen to Stallman having a discussion on a private mailing list.

Why do we wait until it becomes bad and public and unbearable and people like me have to write posts like this?

If anyone is being willfully unbearable, it is people who pretend this distinction does not matter. That every knee must bend regardless of who and when and where.

I've thought a lot about what exactly it is that social media is and does. Why it is seemingly so pernicious.

One conclusion is that it is a perfect environment for social predators, especially those with cluster B disorders such as narcissism and borderline. The platforms reward attention-seeking, and thrive on gossip and hearsay. Users trade publicly in reputation rather than facts. The lack of logic in seizing control of an organization when you detest its founders' ideas makes this clear: it's not about the principles, but about grabbing power and funding.

Social media also encourages these behaviors even for those not predisposed to it, simply through monkey-see-monkey-do. The notion of activists as "social script kiddies" is particularly relevant here: people might not realize it, but they are often acting out thinly disguised scripts for emotional abuse, even cult indoctrination. Just fill in the blanks and let it rip. Worse is that it also forces opponents to adopt a systematic way of countering it: zero tolerance for such shenanigans anywhere, classement vertical, into the trash it goes.

But I think there's something else too, and it ties back to one of the oldest stories in the book: The Boy Who Cried Wolf.

The villagers in the story are misled to believe there is an imminent threat. This captures their attention, sending them on a pointless wolf hunt. This happens so often, they conclude there is no danger. When a wolf finally does show up, they don't believe it, and people get eaten.

Social media does something similar, because it creates a global village. But it's not quite the same.

Everyone who subscribes to it is constantly being yelled at that there are wolves everywhere. Many take it seriously, think about it, and join an Anti-Wolf Coalition. Some even go out and hunt. But usually there aren't any real wolves in their neighborhood. So they spend their energy obsessing for no reason. People become afraid to go out at night, worried they might get eaten. Eventually even ordinary accidents are interpreted as wolf attacks. Owning a dog stops being popular, especially if you have children.

Then one day, a leopard shows up. A boy spots the creature at night, but it is difficult to see, so when he describes it, it sounds just like a cat. "Cats are harmless!" the villagers say. "They're nothing like wolves!"

And the leopard ate very well.

Ten years ago, I reflected on the fact that -- by that time -- I had been in Debian for just over ten years. This year, in early February, I've passed the twenty year milestone. As I'm turning 43 this year, I will have been in Debian for half my life in about three years. Scary thought, that.

In the past ten years, not much has changed, and yet at the same time, much has. I became involved in the Debian video team; I stepped down from the m68k port; and my organizing of the Debian devroom at FOSDEM resulted in me eventually joining the FOSDEM orga team, where I eventually ended up also doing video. As part of my video work, I wrote SReview, for which, in these COVID-19 times, I have spent much of my spare time writing new code and fixing bugs.

I was a candidate for the position of DPL one more time, without being elected. I was also a candidate for the technical committee a few times, also without success.

I also added a few packages to the list of packages that I maintain for Debian; most obviously this includes SReview, but there's also things like extrepo and policy-rcd-declarative, both fairly recent packages that I hope will improve Debian as a whole in the longer term.

On a more personal level, at one DebConf I met a wonderful girl with whom I have now just celebrated my first wedding anniversary. Before that could happen, I had to move to South Africa two years ago. Moving is an involved process at any time; moving to a different continent altogether is even more so. As it would have been complicated to remain the owner of a Belgian business while living 9500 km away from the country, I sold my shares to my (now ex) business partner; it turned the page on a 15-year chapter of my life, something I could not do without mixed feelings.

The things I do in Debian have changed over the past twenty years. I was the maintainer of the second-highest number of packages in the project when I maintained the Linux Gazette packages; I've been an m68k porter; I've been an AM, and briefly even an NM frontdesk member; I've been a DPL candidate three times, and a TC candidate twice.

At the turn of my first decade of being a Debian Developer, I noted that people started to recognize my name, and that I started to be one of the Debian Developers who had been with the project longer than most. This has, obviously, not changed. New in the "I'm getting old" department is the fact that during the last Debconf, I noticed for the first time that there was a speaker who had been alive for less time than I had been a Debian Developer. I'm assuming these types of things will continue happening in the next decade, and that the future will bring more of these kinds of changes that will make me feel older as I and the project mature.

I'm looking forward to it. Here's to you, Debian; may you continue to influence my life, in good ways and in bad (but hopefully mostly good), as well as continue to inspire me to improve the world, as you have over the past twenty years!

Today, the U.S. Congress and big tech companies continued the debate about Section 230 of the 1996 Communications Decency Act.

Put simply, Section 230 provides websites immunity from liability from third-party content. This internet legislation is a double-edged sword. On the one hand it has allowed the dangerous spread of misinformation on social media. On the other hand it has helped the internet thrive.

If I write something untrue and damaging about you on Facebook, you might be able to sue me, but you can't sue Facebook. As a result, social media companies don't really care what is said on their platforms. Their immunity is a big reason why fake news, hate speech and misinformation have been able to spread uncontrollably.

At the same time, Section 230 makes it possible for bloggers to host comments from their readers, for Open Source communities to work together online, and for YouTubers to share videos. Section 230 enables people to share, innovate and collaborate. It has empowered a lot of good.

President Biden has suggested revoking Section 230. Other policy makers would like to reform Section 230. Either revoking or modifying Section 230 could have a big impact on any organization that hosts online content.

Hosting companies could be impacted, but also bloggers and Open Source communities. Having to police all content could quickly become unsustainable, especially for individuals and small organizations. People publish so much new content every day!

As Katie Jordan, the Director of Public Policy and Technology for the Internet Society, said: "If cloud providers get wrapped up in this conversation about pulling back intermediary liability protection, then by default, they're going to have to reduce privacy and security practices because they'll have to look at the content they're storing for you, to know if they're breaking the law."

A wholesale repeal of Section 230 seems too far reaching to me. It could cause more harm than good. A careful reform seems more appropriate.

Instead of being so focused on Section 230, I'd start by regulating search and social media algorithms. Hosting content is one thing, but recommending content to millions of people is another. When search and social media companies reach billions of people, their content recommendation algorithms can sway public sentiment, introduce bias or rapidly spread misinformation. We should start there.

I've said in the past that we need an FDA for large-scale algorithms that impact society. Just as the FDA ensures that pharmaceutical companies aren't lying about the claims they make about their drugs, there should be a similar regulator for large-scale software algorithms. For example, we need some level of guarantee that companies like Google, Twitter and Facebook won't intentionally (or unintentionally) manipulate search results to shape the public opinion.

March 25, 2021

Thousands of Open Source and Free Software advocates are outraged at the Free Software Foundation (FSF), myself included.

In 2019, Richard Stallman was forced out of the FSF, the organization he started, after he described Jeffrey Epstein's sex-trafficking victims as "entirely willing". This week, Stallman announced that he has been reappointed.

The news that Stallman is back came as a shock to me. I feel very strongly that he needs to be removed from leadership roles. There is no room for his misogynistic and other problematic behavior.

And I'm not alone. Almost two thousand Free Software advocates have signed an open letter seeking the removal of Richard Stallman and the entire FSF's Board of Directors.

While I want Stallman removed, I'm holding my judgement regarding the FSF's Board of Directors a bit longer. A few reasons:

  • I don't understand how Stallman was able to return. It doesn't make any sense to me.
  • The FSF's Board of Directors has remained silent throughout this outrage. To the best of my knowledge, no official statement has been made. I want to know what they have to say.
  • Last but not least, Stallman announced his own return, and it seems like there was an element of surprise.

I don't have private information about what is going on at the FSF, but I do have a lot of experience working as a Board Member.

A Board of Directors can't always move fast or communicate openly in the moment. Depending on what is going on, they may have to take legal steps, or carefully sequence their actions to protect the organization or any people involved. Open communication has to wait sometimes.

This news is so wild that I have to believe they are working through a very difficult situation. If so, the Board of Directors' silence does not necessarily mean that they support Stallman. It might mean that they are not able to communicate yet.

My ask to the FSF:

  1. Remove Stallman as soon as you can.
  2. Explain how and why Stallman was reappointed.
  3. Commit to bringing in new leadership.

If the FSF can work through this quickly and do the right thing, there might be a turning point to rebuild the FSF into something new and better. The Free Software movement deserves quality leadership. Given that the FSF governs the license of many software projects, that is something to hope for. It's worth holding my judgment on the Board of Directors a bit longer.

March 19, 2021

The Sparcstation IPC that I owned since around 1995 died. It sat in a cupboard for 15 years, so it may have been dead for a long time already. Upon trying to power it on, it did absolutely nothing.

I knew about early mini-ITX mods using the IPC/IPX case, like one from 2002, but nostalgia for being able to boot Linux/Sparc on this IPC kept me from doing my own mod. With the original hardware dead (probably just the PSU, actually), this changed everything. A bit of research showed other Sparcstation mini-ITX mods, some with the larger sparc4/5/10/20 cases, and one very interesting mod of an IPC. Michael used an industrial Commell LV-671 motherboard. Commell has gone through more than 30 variants of that board in the meantime, and has just released an updated Tiger Lake version: the LV-6712, carrying the Intel i7-1185G7E. It's a full-height mini-ITX board, but the IPC case should have enough z-axis space. The challenge: upgrade the 25 MHz 32-bit Sparcstation IPC to a 1.2-4.4 GHz 64-bit Intel i7 workstation, go from 48 MB of RAM to 32 GB (max 64 GB), from 10 Mbps Ethernet to 1 Gbps (and even 2.5 Gbps!), and from a SCSI-I HDD to an NVMe SSD!

First step: strip the contents of the IPC case. The original 200MB SCSI hard drive was replaced by a 3GB SCSI drive soon after I got the IPC. Now I dismantled the drive to show my kids what a hard drive looks like on the inside. I removed the remains from the lunchbox case.

The LV-6712 can be powered with 12V DC power, which means we can forego the need for a full ATX power supply. Michael built a 12V power supply into the original power supply housing, and I decided to follow his suggestion. Cutting away a bit of the outer casing of a TracoPower TXH 060-112 AC/DC, I was able to fit it in, and even keep the original passthrough power connector.

The original case fan was powered directly from the IPC PSU, but the new motherboard has a PWM case fan header. Fortunately the case fan is a standard 60x60x25mm one. I found a PWM-capable replacement, the Noctua NF-A6x25. I expect it to be less noisy too, with 30 years of engineering progress advantage against the original Mitsubishi fan.

A lot of material needed to be cut and sanded away to put the mini-ITX motherboard as close to the case side wall as possible. Our case fan and the PSU housing, mounted in the case ceiling, come down very close to the two serial ports on the motherboard.

Then I positioned the IO shield against the back and tried a couple of configurations. I settled on the final location and cut the hole for the entire width of the IO shield. I didn't cut to the full height, because I needed the remaining plastic for structural integrity (since the metal plate originally supporting the back wall has gone!).

I cut the original external SCSI connector from the IPC motherboard to fill the hole it originally occupied.  Three hex motherboard spacers were placed in holes drilled in the plastic floor, the fourth sits in one of the original rubber spacers that supported the IPC motherboard. I used two more spacers to support the IO shield.

This supports the motherboard at a height that allows access to all IO ports. 

With the motherboard jumpered to AT mode, it boots when the power comes on. As an alternative, the "always power on after power restore" option in the BIOS also works. While it is technically possible to put 3.5 and/or 2.5 inch SATA drives in the ceiling bracket, I currently have enough space on the NVMe SSD. I installed the provided power breakout cable, but it sits unused in the case.

I bought a USB pin header to dual USB 2.0 adapter, but can't use it for two reasons: the key of the 9-pin connector is on pin 10, but the adapter expects it on pin 9. Worse, the pin header is at the edge of the motherboard, sitting against the side of the case, and the adapter extends about 1 cm in that direction. Plan B: a 4-pin USB pin header to single USB 2.0 port adapter, on a 20 cm cable. I plugged in a USB Bluetooth 5.0 adapter and left it inside the case. 

Currently Ubuntu 20.10 supports the integrated Xe graphics of this board, so that's what I will be running until an LTS distribution picks up Xe support. The IPC deserves enterprise workstation-grade stability!

Further modifications after some usage: 

I configured the "down" cTDP profile in the BIOS, which reduces the base frequency and TDP of the CPU (12 W, versus 28 W on the standard "nominal" profile; the third profile is 15 W). That should help with power draw and heat production, as well as noise levels.

The included CPU fan is rather loud for my ears, but unfortunately the included heatsink only has 50 mm fan holes, a size not commonly available. Alternative heatsinks for the mobile FCBGA1449 socket are also hard to find. The original fan (50x50x10 mm) keeps the package at around 45°C at 5700 rpm, with occasional jumps to 50 and 60°C while showing a full-screen 1440p YouTube video.

A Noctua 40x40x20mm fan maintains a comparable temperature at under 4000 rpm. Full load does chase the temperature up, with the Noctua not able to bring it down completely. Unfortunately, a 60x60x25mm fan (same model as the case fan) doesn't fit under the steel drive cage. Even without the drive cage, the case top requires a gentle push to close. That force can't be good for the heatsink nor the motherboard underneath it. On the other hand, it keeps the cpu cool at under 2000 rpm (idle) and it maintains a reasonable CPU temperature below 3000 rpm even with load. As soon as I can find a silent 50x50x20mm or 60x60x20mm PWM fan, I'll get it. In hindsight, mounting the motherboard as low as possible (and sanding away more material at the back for the I/O shield) would have been better. Two or three mm would make a difference. I could cut away a part of the drive cage, and maybe also thin away the underside of the PSU case as an alternative. Fortunately the PSU case is perforated, so the fan can suck in air even when the PSU sits directly on top of it.
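To compare fan candidates like this, I watch the same readings the kernel exposes under /sys/class/hwmon. A minimal Python sketch of such a monitor follows; the sensor names and available files differ per board, so treat the output as board-specific (the `millidegrees_to_c` helper name is mine, not from any library):

```python
from pathlib import Path

def millidegrees_to_c(raw: str) -> float:
    """hwmon temp*_input files report millidegrees Celsius."""
    return int(raw.strip()) / 1000.0

def read_hwmon(root: str = "/sys/class/hwmon") -> dict:
    """Scan hwmon devices for temperature and fan-speed readings."""
    readings = {}
    for hwmon in Path(root).glob("hwmon*"):
        name_file = hwmon / "name"
        name = name_file.read_text().strip() if name_file.exists() else hwmon.name
        for temp in hwmon.glob("temp*_input"):
            readings[f"{name}/{temp.stem}"] = f"{millidegrees_to_c(temp.read_text()):.1f} C"
        for fan in hwmon.glob("fan*_input"):
            readings[f"{name}/{fan.stem}"] = f"{fan.read_text().strip()} rpm"
    return readings

if __name__ == "__main__":
    for sensor, value in sorted(read_hwmon().items()):
        print(f"{sensor}: {value}")
```

Running it in a loop while playing a 1440p video gives a rough idea of how each fan copes under load.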

I've even considered installing some heatpipes to transfer the heat of the heatsink to a radiator next to the board (there is a good 5 or 6 cm of lateral space) and put a fan on that radiator. The main reason I'm not going to do that (yet) is the risk of damaging a component that I can't replace...

I published the following diary on “Used As a Simple C2 Channel”:

With the growing threat of ransomware attacks, there are other malicious activities that get less attention today but remain active. Think about crypto-miners. Yes, attackers continue to mine Monero on compromised systems. I spotted an interesting shell script that installs and runs a crypto-miner (SHA256:00e2ddca696426d9cad992662284d1f28b9ecd44ed7c1be39789417c1ea9a5f2). The script looks to be a classic one, but there are some interesting behaviors that I’d like to share… [Read more]
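The diary identifies the sample by its SHA256 hash, which can serve as an indicator of compromise. A small sketch for checking a suspicious file against it (the function names and file path are illustrative, not from the diary):

```python
import hashlib

# Indicator of compromise: the SHA256 published in the diary
IOC_SHA256 = "00e2ddca696426d9cad992662284d1f28b9ecd44ed7c1be39789417c1ea9a5f2"

def sha256_of(path: str) -> str:
    """Compute the SHA256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_ioc(path: str) -> bool:
    """True if the file's digest equals the published IOC."""
    return sha256_of(path) == IOC_SHA256
```

For example, `matches_ioc("/tmp/suspect.sh")` would flag a downloaded copy of the miner installer.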

The post [SANS ISC] Used As a Simple C2 Channel appeared first on /dev/random.