Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

April 22, 2019

The post Retrieving the Genesis block in Bitcoin with bitcoin-cli appeared first on ma.ttias.be.

If you run a Bitcoin full node, you have access to every transaction and block that was ever created on the network. This also allows you to look at the content of, say, the genesis block: the first block ever created, over 10 years ago.

Retrieving the genesis block

First, you can ask for the block hash by providing the block height. As with everything in computer science, arrays and block counts start at 0.

You use the getblockhash command to find the correct hash.

$ bitcoin-cli getblockhash 0
000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f

Now you have the block hash that matches the first ever block.

You can now request the full content of that block using the getblock command.

$ bitcoin-cli getblock 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f
{
  "hash": "000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f",
  "confirmations": 572755,
  "strippedsize": 285,
  "size": 285,
  "weight": 1140,
  "height": 0,
  "version": 1,
  "versionHex": "00000001",
  "merkleroot": "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b",
  "tx": [
    "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b"
  ],
  "time": 1231006505,
  "mediantime": 1231006505,
  "nonce": 2083236893,
  "bits": "1d00ffff",
  "difficulty": 1,
  "chainwork": "0000000000000000000000000000000000000000000000000000000100010001",
  "nTx": 1,
  "nextblockhash": "00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048"
}

This is the only block that doesn't have a previousblockhash field; all other blocks have one, as they form the chain itself. The first block can't have a previous one.
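If you want to double-check, the nextblockhash field above points to block 1; fetching that block should show a previousblockhash field referring back to the genesis block (a quick check, reusing the hash from the output above):

$ bitcoin-cli getblock 00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048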

Retrieving the first and only transaction from the genesis block

In this block, there is only one transaction included: the one with the hash 4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b. This is a coinbase transaction: the block reward for the miner who found this block (50 BTC).

[...]
  "tx": [
    "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b"
  ],
[...]

Let's have a look at what's in there, shall we?

$ bitcoin-cli getrawtransaction 4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b
The genesis block coinbase is not considered an ordinary transaction and cannot be retrieved

Ah, sucks! This is a special kind of transaction, but we'll see a way to find the details of it later on.

Getting more details from the genesis block

We retrieved the block details using the getblock command, but there are actually more details in that block than initially shown. You can get more verbose output by adding a 2 at the end of the command, indicating you want a JSON object with transaction data.

$ bitcoin-cli getblock 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f 2
{
  "hash": "000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f",
  "confirmations": 572758,
  "strippedsize": 285,
  "size": 285,
  "weight": 1140,
  "height": 0,
  "version": 1,
  "versionHex": "00000001",
  "merkleroot": "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b",
  "tx": [
    {
      "txid": "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b",
      "hash": "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b",
      "version": 1,
      "size": 204,
      "vsize": 204,
      "weight": 816,
      "locktime": 0,
      "vin": [
        {
          "coinbase": "04ffff001d0104455468652054696d65732030332f4a616e2f32303039204368616e63656c6c6f72206f6e206272696e6b206f66207365636f6e64206261696c6f757420666f722062616e6b73",
          "sequence": 4294967295
        }
      ],
      "vout": [
        {
          "value": 50.00000000,
          "n": 0,
          "scriptPubKey": {
            "asm": "04678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5f OP_CHECKSIG",
            "hex": "4104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac",
            "reqSigs": 1,
            "type": "pubkey",
            "addresses": [
              "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"
            ]
          }
        }
      ],
      "hex": "01000000010000000000000000000000000000000000000000000000000000000000000000ffffffff4d04ffff001d0104455468652054696d65732030332f4a616e2f32303039204368616e63656c6c6f72206f6e206272696e6b206f66207365636f6e64206261696c6f757420666f722062616e6b73ffffffff0100f2052a01000000434104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac00000000"
    }
  ],
  "time": 1231006505,
  "mediantime": 1231006505,
  "nonce": 2083236893,
  "bits": "1d00ffff",
  "difficulty": 1,
  "chainwork": "0000000000000000000000000000000000000000000000000000000100010001",
  "nTx": 1,
  "nextblockhash": "00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048"
}

Aha, that's more info!

Now, you'll notice there is a section with details of the coinbase transaction. It shows the 50BTC block reward, and even though we can't retrieve it with getrawtransaction, the data is still present in the genesis block.

      "vout": [
        {
          "value": 50.00000000,
          "n": 0,
          "scriptPubKey": {
            "asm": "04678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5f OP_CHECKSIG",
            "hex": "4104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac",
            "reqSigs": 1,
            "type": "pubkey",
            "addresses": [
              "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"
            ]
          }
        }
      ],

Satoshi's Embedded Secret Message

I've always heard that Satoshi encoded a secret message in the genesis block. Let's go find it.

In the verbose output above, there's a hex field inside the transaction.

"hex": "01000000010000000000000000000000000000000000000000000000000000000000000000ffffffff4d04ffff001d0104455468652054696d65732030332f4a616e2f32303039204368616e63656c6c6f72206f6e206272696e6b206f66207365636f6e64206261696c6f757420666f722062616e6b73ffffffff0100f2052a01000000434104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac00000000"

If we transform this hexadecimal format to a more readable ASCII form, we get this:

$ echo "01000000010000000000000000000000000000000000000000000000000000000000000000ffffffff
4d04ffff001d0104455468652054696d65732030332f4a616e2f32303039204368616e63656c6c6f72206f6e20
6272696e6b206f66207365636f6e64206261696c6f757420666f722062616e6b73ffffffff0100f2052a010000
00434104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f3
5504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac00000000" | xxd -r -p

����M��EThe Times 03/Jan/2009 Chancellor on brink of second bailout for banks�����*CAg���UH'g�q0�\֨(�9	�yb��a޶I�?L�8��U���\8M�
        �W�Lp+k�_�

This confirms there is indeed a message in the form of "The Times 03/Jan/2009 Chancellor on brink of second bailout for banks", referring to a newspaper headline at the time of the genesis block.
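You can also decode just the coinbase field from the verbose output (a small sketch reusing the coinbase hex shown earlier; the first few bytes are script data, so a little binary noise remains in front of the text):

$ echo "04ffff001d0104455468652054696d65732030332f4a616e2f32303039204368616e63656c6c6f72206f6e206272696e6b206f66207365636f6e64206261696c6f757420666f722062616e6b73" | xxd -r -p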

The post Retrieving the Genesis block in Bitcoin with bitcoin-cli appeared first on ma.ttias.be.

The post Requesting certificates with Let’s Encrypt’s official certbot client appeared first on ma.ttias.be.

There are plenty of guides on this already, but I recently used Let's Encrypt's certbot client manually again (instead of through already-automated systems) and figured I'd write up the commands for myself. Just in case.

$ git clone https://github.com/letsencrypt/letsencrypt.git /opt/letsencrypt
$ cd /opt/letsencrypt

Now that the client is available on the system, you can request new certificates. If DNS is already pointing to this server, it's super easy with webroot validation.

$ /opt/letsencrypt/letsencrypt-auto certonly --expand \
  --email you@domain.tld --agree-tos \
  --webroot -w /var/www/vhosts/yoursite.tld/htdocs/public/ \
  -d yoursite.tld \
  -d www.yoursite.tld

You can add multiple domains with the -d flag and point it to the right document root using the -w flag.

After that, you'll find your certificates in:

$ ls -alh /etc/letsencrypt/live/yoursite.tld/*
/etc/letsencrypt/live/yoursite.tld/cert.pem -> ../../archive/yoursite.tld/cert1.pem
/etc/letsencrypt/live/yoursite.tld/chain.pem -> ../../archive/yoursite.tld/chain1.pem
/etc/letsencrypt/live/yoursite.tld/fullchain.pem -> ../../archive/yoursite.tld/fullchain1.pem
/etc/letsencrypt/live/yoursite.tld/privkey.pem -> ../../archive/yoursite.tld/privkey1.pem

You can now use these certs in whichever webserver or application you like.
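If you want a quick sanity check before pointing your webserver at them, you can inspect a certificate with openssl (a sketch; adjust the path to your own domain):

$ openssl x509 -in /etc/letsencrypt/live/yoursite.tld/cert.pem -noout -subject -dates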

The post Requesting certificates with Let’s Encrypt’s official certbot client appeared first on ma.ttias.be.

Autoptimize 2.5 was released earlier today (April 22nd).

The main focus of this release is more love for image optimization, which now lives on a separate tab and includes lazy-loading and WebP support.

Lots of other bugfixes and smaller improvements too, of course, e.g. an option to disable the minification of excluded CSS/JS (which 2.4 did by default).

No Easter eggs in there though :-)

I was using docker on an Odroid U3, but my Odroid stopped working. I switched to another system that is i386 only.

You’ll find my journey to build docker images for i386 below.

Reasons to build your own docker images

If you want to use docker, you can start with the docker images on the public Docker Hub registry. There are several reasons to build your own base images.

  • Security

The first reason is security: docker images are not signed by default.

Anyone can upload docker images with bugs or malicious code to the public Docker Hub.

There are "official" docker images available at https://docs.docker.com/docker-hub/official_images/. When you execute a docker search, the official images are flagged in the OFFICIAL column and are also signed by Docker. To only allow signed docker images, you need to set the DOCKER_CONTENT_TRUST=1 environment variable (this should be the default, IMHO).

There is one distinction: the "official" docker images are signed by the "Repo admin" of the Docker Hub, not by the official GNU/Linux distribution project. If you want to trust the official project instead of the Docker repo admin, you can resolve this by building your own images.
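As a quick illustration (a sketch only; the image name is just an example), enabling content trust for a shell session looks like this, after which pulls of unsigned images should be refused:

$ export DOCKER_CONTENT_TRUST=1
$ docker pull debian:stretch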

  • Support other architectures

Docker images are generally built for the AMD64 architecture. If you want to use other architectures - ARM, POWER, SPARC or even i386 - you'll find some images on the Docker Hub, but these are usually not official docker images.

  • Control

When you build your own images, you have more control over what does or doesn't go into the image.

Building your own docker base images

There are several ways to build your own docker images.

The Moby project is Docker's development project (a bit like what Fedora is to Red Hat). The Moby project has a few scripts that help you create docker base images, and it is also a good start if you want to review how to build your own images.

GNU/Linux distributions

I build the images on the same GNU/Linux distribution (e.g. the Debian images are built on a Debian system) to get the correct gpg keys.

Debian GNU/Linux & Co

Debian GNU/Linux makes it very easy to build your own Docker base images; only debootstrap is required. I'll use the moby script to build the Debian base image, and debootstrap to build an i386 Ubuntu 18.04 docker image.

Ubuntu doesn't support i386 officially, but it includes the i386 userland, so it's possible to build i386 Docker images.

Clone moby

staf@whale:~/github$ git clone https://github.com/moby/moby
Cloning into 'moby'...
remote: Enumerating objects: 265639, done.
remote: Total 265639 (delta 0), reused 0 (delta 0), pack-reused 265640
Receiving objects: 99% (265640/265640), 137.75 MiB | 3.05 MiB/s, done.
Resolving deltas: 99% (179885/179885), done.
Checking out files: 99% (5508/5508), done.
staf@whale:~/github$ 

Make sure that debootstrap is installed

staf@whale:~/github/moby/contrib$ sudo apt install debootstrap
[sudo] password for staf: 
Reading package lists... Done
Building dependency tree       
Reading state information... Done
debootstrap is already the newest version (1.0.114).
debootstrap set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
staf@whale:~/github/moby/contrib$ 

The Moby way

Go to the contrib directory

staf@whale:~/github$ cd moby/contrib/
staf@whale:~/github/moby/contrib$ 

mkimage.sh

mkimage.sh --help gives you more details on how to use the script.

staf@whale:~/github/moby/contrib$ ./mkimage.sh --help
usage: mkimage.sh [-d dir] [-t tag] [--compression algo| --no-compression] script [script-args]
   ie: mkimage.sh -t someuser/debian debootstrap --variant=minbase jessie
       mkimage.sh -t someuser/ubuntu debootstrap --include=ubuntu-minimal --components=main,universe trusty
       mkimage.sh -t someuser/busybox busybox-static
       mkimage.sh -t someuser/centos:5 rinse --distribution centos-5
       mkimage.sh -t someuser/mageia:4 mageia-urpmi --version=4
       mkimage.sh -t someuser/mageia:4 mageia-urpmi --version=4 --mirror=http://somemirror/
staf@whale:~/github/moby/contrib$ 

Build the image

staf@whale:~/github/moby/contrib$ sudo ./mkimage.sh -t stafwag/debian_i386:stretch debootstrap --variant=minbase stretch
[sudo] password for staf: 
+ mkdir -p /var/tmp/docker-mkimage.dY9y9apEoK/rootfs
+ debootstrap --variant=minbase stretch /var/tmp/docker-mkimage.dY9y9apEoK/rootfs
I: Target architecture can be executed
I: Retrieving InRelease 
I: Retrieving Release 
I: Retrieving Release.gpg 
I: Checking Release signature
I: Valid Release signature (key id 067E3C456BAE240ACEE88F6FEF0F382A1A7B6500)
I: Retrieving Packages 
<snip>

Test

Verify that the image is imported.

staf@whale:~/github/moby/contrib$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED              SIZE
stafwag/debian_i386   stretch             cb96d1663079        About a minute ago   97.6MB
staf@whale:~/github/moby/contrib$ 

Run a test docker instance

staf@whale:~/github/moby/contrib$ docker run -t -i --rm stafwag/debian_i386:stretch /bin/sh
# cat /etc/debian_version 
9.8
# 

The debootstrap way

Make sure that debootstrap is installed

staf@ubuntu184:~/github/moby$ sudo apt install debootstrap
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Suggested packages:
  ubuntu-archive-keyring
The following NEW packages will be installed:
  debootstrap
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 35,7 kB of archives.
After this operation, 270 kB of additional disk space will be used.
Get:1 http://be.archive.ubuntu.com/ubuntu bionic-updates/main amd64 debootstrap all 1.0.95ubuntu0.3 [35,7 kB]
Fetched 35,7 kB in 0s (85,9 kB/s)    
Selecting previously unselected package debootstrap.
(Reading database ... 163561 files and directories currently installed.)
Preparing to unpack .../debootstrap_1.0.95ubuntu0.3_all.deb ...
Unpacking debootstrap (1.0.95ubuntu0.3) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Setting up debootstrap (1.0.95ubuntu0.3) ...
staf@ubuntu184:~/github/moby$ 

Bootstrap

Create a directory that will hold the chrooted operating system.

staf@ubuntu184:~$ mkdir -p dockerbuild/ubuntu
staf@ubuntu184:~/dockerbuild/ubuntu$ 

Bootstrap.

staf@ubuntu184:~/dockerbuild/ubuntu$ sudo debootstrap --verbose --include=iputils-ping --arch i386 bionic ./chroot-bionic http://ftp.ubuntu.com/ubuntu/
I: Retrieving InRelease 
I: Checking Release signature
I: Valid Release signature (key id 790BC7277767219C42C86F933B4FE6ACC0B21F32)
I: Validating Packages 
I: Resolving dependencies of required packages...
I: Resolving dependencies of base packages...
I: Checking component main on http://ftp.ubuntu.com/ubuntu...
I: Retrieving adduser 3.116ubuntu1
I: Validating adduser 3.116ubuntu1
I: Retrieving apt 1.6.1
I: Validating apt 1.6.1
I: Retrieving apt-utils 1.6.1
I: Validating apt-utils 1.6.1
I: Retrieving base-files 10.1ubuntu2
<snip>
I: Configuring python3-yaml...
I: Configuring python3-dbus...
I: Configuring apt-utils...
I: Configuring netplan.io...
I: Configuring nplan...
I: Configuring networkd-dispatcher...
I: Configuring kbd...
I: Configuring console-setup-linux...
I: Configuring console-setup...
I: Configuring ubuntu-minimal...
I: Configuring libc-bin...
I: Configuring systemd...
I: Configuring ca-certificates...
I: Configuring initramfs-tools...
I: Base system installed successfully.

Customize

You can customize your installation before it goes into the image. One thing you should do is include the latest updates in the image.

Update /etc/resolv.conf

staf@ubuntu184:~/dockerbuild/ubuntu$ sudo vi chroot-bionic/etc/resolv.conf
nameserver 9.9.9.9

Update /etc/apt/sources.list

staf@ubuntu184:~/dockerbuild/ubuntu$ sudo vi chroot-bionic/etc/apt/sources.list

And include the updates

deb http://ftp.ubuntu.com/ubuntu bionic main
deb http://security.ubuntu.com/ubuntu bionic-security main
deb http://ftp.ubuntu.com/ubuntu/ bionic-updates main

Chroot into your installation and run apt-get update

staf@ubuntu184:~/dockerbuild/ubuntu$ sudo chroot $PWD/chroot-bionic
root@ubuntu184:/# apt update
Hit:1 http://ftp.ubuntu.com/ubuntu bionic InRelease
Get:2 http://ftp.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]   
Get:3 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]       
Get:4 http://ftp.ubuntu.com/ubuntu bionic/main Translation-en [516 kB]                  
Get:5 http://ftp.ubuntu.com/ubuntu bionic-updates/main i386 Packages [492 kB]           
Get:6 http://ftp.ubuntu.com/ubuntu bionic-updates/main Translation-en [214 kB]          
Get:7 http://security.ubuntu.com/ubuntu bionic-security/main i386 Packages [241 kB]     
Get:8 http://security.ubuntu.com/ubuntu bionic-security/main Translation-en [115 kB]
Fetched 1755 kB in 1s (1589 kB/s)      
Reading package lists... Done
Building dependency tree... Done

and apt-get upgrade

root@ubuntu184:/# apt upgrade
Reading package lists... Done
Building dependency tree... Done
Calculating upgrade... Done
The following NEW packages will be installed:
  python3-netifaces
The following packages will be upgraded:
  apt apt-utils base-files bsdutils busybox-initramfs console-setup console-setup-linux
  distro-info-data dpkg e2fsprogs fdisk file gcc-8-base gpgv initramfs-tools
  initramfs-tools-bin initramfs-tools-core keyboard-configuration kmod libapparmor1
  libapt-inst2.0 libapt-pkg5.0 libblkid1 libcom-err2 libcryptsetup12 libdns-export1100
  libext2fs2 libfdisk1 libgcc1 libgcrypt20 libglib2.0-0 libglib2.0-data libidn11
  libisc-export169 libkmod2 libmagic-mgc libmagic1 libmount1 libncurses5 libncursesw5
  libnss-systemd libpam-modules libpam-modules-bin libpam-runtime libpam-systemd
  libpam0g libprocps6 libpython3-stdlib libpython3.6-minimal libpython3.6-stdlib
  libseccomp2 libsmartcols1 libss2 libssl1.1 libstdc++6 libsystemd0 libtinfo5 libudev1
  libunistring2 libuuid1 libxml2 mount ncurses-base ncurses-bin netcat-openbsd
  netplan.io networkd-dispatcher nplan openssl perl-base procps python3 python3-gi
  python3-minimal python3.6 python3.6-minimal systemd systemd-sysv tar tzdata
  ubuntu-keyring ubuntu-minimal udev util-linux
84 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 26.6 MB of archives.
After this operation, 450 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://security.ubuntu.com/ubuntu bionic-security/main i386 netplan.io i386 0.40.1~18.04.4 [64.6 kB]
Get:2 http://ftp.ubuntu.com/ubuntu bionic-updates/main i386 base-files i386 10.1ubuntu2.4 [60.3 kB]
Get:3 http://security.ubuntu.com/ubuntu bionic-security/main i386 libapparmor1 i386 2.12-4ubuntu5.1 [32.7 kB]
Get:4 http://security.ubuntu.com/ubuntu bionic-security/main i386 libgcrypt20 i386 1.8.1-
<snip>
running python rtupdate hooks for python3.6...
running python post-rtupdate hooks for python3.6...
Setting up initramfs-tools-core (0.130ubuntu3.7) ...
Setting up initramfs-tools (0.130ubuntu3.7) ...
update-initramfs: deferring update (trigger activated)
Setting up python3-gi (3.26.1-2ubuntu1) ...
Setting up file (1:5.32-2ubuntu0.2) ...
Setting up python3-netifaces (0.10.4-0.1build4) ...
Processing triggers for systemd (237-3ubuntu10.20) ...
Setting up networkd-dispatcher (1.7-0ubuntu3.3) ...
Installing new version of config file /etc/default/networkd-dispatcher ...
Setting up netplan.io (0.40.1~18.04.4) ...
Setting up nplan (0.40.1~18.04.4) ...
Setting up ubuntu-minimal (1.417.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for initramfs-tools (0.130ubuntu3.7) ...
root@ubuntu184:/# 
staf@ubuntu184:~/dockerbuild/ubuntu$ 

Import

Go to your chroot installation.

staf@ubuntu184:~/dockerbuild/ubuntu$ cd chroot-bionic/
staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ 

and import the image.

staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ sudo tar cpf - . | docker import - stafwag/ubuntu_i386:bionic
sha256:83560ef3c8d48b737983ab8ffa3ec3836b1239664f8998038bfe1b06772bb3c2
staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ 

Test

staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED              SIZE
stafwag/ubuntu_i386   bionic              83560ef3c8d4        About a minute ago   315MB
staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ 
staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ docker run -it --rm stafwag/ubuntu_i386:bionic /bin/bash
root@665cec6ee24f:/# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.2 LTS
Release:        18.04
Codename:       bionic
root@665cec6ee24f:/# 
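Once the image is imported and tested, you can use it as a base image like any other (a minimal sketch; the Dockerfile and the curl package are just examples):

$ cat > Dockerfile <<'EOF'
FROM stafwag/ubuntu_i386:bionic
RUN apt-get update && apt-get -y install curl
EOF
$ docker build -t stafwag/ubuntu_i386_curl:bionic .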

Have fun!


April 21, 2019

Several years ago, I created a list of ESXi versions with matching VM BIOS identifiers. The list is now complete up to vSphere 6.7 Update 2.
Your Linux runs on a VMware VM, but on which ESXi version? You can see for yourself: run "dmidecode" and look at lines 10, 11 and 12.
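For example, something like this should show the relevant fields (a sketch; dmidecode needs root, -t bios limits the output to the BIOS section, and field labels can vary slightly between dmidecode versions):

$ sudo dmidecode -t bios | grep -E 'Release Date|Address|Size'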
ESX 2.5 - BIOS Release Date: 04/21/2004 - Address 0xE8480 - Size 97152 bytes
ESX 3.0 - BIOS Release Date: 04/17/2006 - Address 0xE7C70 - Size 99216 bytes
ESX 3.5 - BIOS Release Date: 01/30/2008 - Address 0xE7910 - Size 100080 bytes
ESX 4 - BIOS Release Date: 08/15/2008 - Address 0xEA6C0 - Size 88384 bytes
ESX 4U1 - BIOS Release Date: 09/22/2009 - Address 0xEA550 - Size 88752 bytes
ESX 4.1 - BIOS Release Date: 10/13/2009 - Address 0xEA2E0 - Size 89376 bytes
ESXi 5 - BIOS Release Date: 01/07/2011 - Address 0xE72C0 - Size 101696 bytes
ESXi 5.1 - BIOS Release Date: 06/22/2012 - Address: 0xEA0C0 - Size: 89920 bytes
ESXi 5.5 - BIOS Release Date: 07/30/2013 - Address: 0xEA050 - Size: 90032 bytes
ESXi 6 - BIOS Release Date: 09/30/2014 - Address: 0xE9A40 - Size: 91584 bytes
ESXi 6.5 - BIOS Release Date: 04/05/2016 - Address: 0xEA580 - Size: 88704 bytes 
ESXi 6.7 - BIOS Release Date: 07/03/2018 - Address: 0xEA520 - Size: 88800 bytes
ESXi 6.7 U2 - BIOS Release Date 12/12/2018 - Address: 0xEA490 - Size: 88944 bytes
NB These DMI properties are set at boot time. Even if your VM gets live-migrated to a host running a different vSphere version, your VM will keep the values it got from the host it booted on. What you see is the vSphere version of the host your VM booted on. It is the VM power-on that matters, so a guest OS reboot will not regenerate the DMI properties. A guest OS shut down on the other hand will also power off the VM, and the next power-on will regenerate the DMI properties.

April 20, 2019

We ditched the crowded streets of Seattle for a short vacation in Tuscany's beautiful countryside. After the cold winter months, Tuscany's rolling hills are coming back to life and showing their new colors.

Beautiful Tuscany (photos)

April 18, 2019

I published the following diary on isc.sans.edu: “Malware Sample Delivered Through UDF Image“:

I found an interesting phishing email which was delivered with a malicious attachment: a UDF image (.img). UDF means "Universal Disk Format" and, as Wikipedia says, is an open vendor-neutral file system for computer data storage. It has supplanted the well-known ISO 9660 format (used for burning CDs & DVDs) that was also used in previous campaigns to deliver malicious files… [Read more]

[The post [SANS ISC] Malware Sample Delivered Through UDF Image has been first published on /dev/random]

April 16, 2019

In a recent VMware project, an existing environment of vSphere ESXi hosts had to be split off to a new instance of vCenter. These hosts were members of a distributed virtual switch, an object that stores its configuration in the vCenter database. This information would be lost after the move to the new vCenter, and the hosts would be left with "orphaned" distributed vswitch configurations.

Thanks to the export/import function available in vSphere 5.5 and 6.x, we can move the full distributed vswitch configuration to the new vCenter:

  • In the old vCenter, right-click the switch object, click "Export configuration" and choose the default "Distributed switch and all port groups"
  • Add the hosts to the new vCenter
  • In the new vCenter, right-click the datacenter object, click "Import distributed switch" in the "Distributed switch" sub-menu.
  • Select your saved configuration file, and tick the "Preserve original distributed switch and port group identifiers" box (which is not default!)
What used to be orphaned configurations on the hosts are now valid members of the distributed switch you just imported!
In vSphere 6, if the vi-admin account gets locked because of too many failed logins and you don't have the root password of the appliance, you can reset the account(s) using these steps:

  1. reboot the vMA
  2. from GRUB, "e"dit the entry
  3. "a"ppend init=/bin/bash
  4. "b"oot
  5. # pam_tally2 --user=vi-admin --reset
  6. # passwd vi-admin # Optional. Only if you want to change the password for vi-admin.
  7. # exit
  8. reset the vMA
  9. log in with vi-admin
These steps can be repeated for root or any other account that gets locked out.

If you do have root or vi-admin access, "sudo pam_tally2 --user=mylockeduser --reset" would do it, no reboot required.
Most VMware appliances (vCenter Appliance, VMware Support Appliance, vRealize Orchestrator) have the so-called VAMI: the VMware Appliance Management Interface, generally served via https on port 5480. The VAMI offers a variety of functions, including "check updates" and "install updates". Some appliances offer to check/install updates from a connected CD ISO, but the default is always to check online. How does that work?
VMware uses a dedicated website to serve the updates: vapp-updates.vmware.com. Each appliance is configured with a repository URL: https://vapp-updates.vmware.com/vai-catalog/valm/vmw/PRODUCT-ID/VERSION-ID . The PRODUCT-ID is a hexadecimal code specific for the product. vRealize Orchestrator uses 00642c69-abe2-4b0c-a9e3-77a6e54bffd9, VMware Support Appliance uses 92f44311-2508-49c0-b41d-e5383282b153, vCenter Server Appliance uses 647ee3fc-e6c6-4b06-9dc2-f295d12d135c. The VERSION-ID contains the current appliance version and appends ".latest": 6.0.0.20000.latest, 6.0.4.0.latest, 6.0.0.0.latest.
The appliance checks for updates by retrieving /manifest/manifest-latest.xml under the repository URL. This XML contains the latest available version in fullVersion and version (fullVersion includes the build number), pre- and post-install scripts, the EULA, and a list of updated rpm packages. Each entry has a path that can be appended to the repository URL and downloaded. The update procedure downloads the manifest and rpms, verifies checksums on the downloaded rpms, executes the preInstallScript, runs rpm -U on the downloaded rpm packages, executes the postInstallScript, displays the exit code and prompts for a reboot.
With this information, you can set up your own local repository (for cases where internet access is impossible from the virtual appliances), or you can even execute the procedure manually. Be aware that a manual update would be unsupported. Using a different repository is supported by a subset of VMware appliances (e.g. VCSA, VRO) but not all (VMware Support Appliance).
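As an illustration of a manual check (a sketch built from the URL structure above, using the vCenter Server Appliance product ID and one of the 6.0 version strings as an example; your appliance's version string will differ), you could fetch the latest manifest by hand:

$ curl -O https://vapp-updates.vmware.com/vai-catalog/valm/vmw/647ee3fc-e6c6-4b06-9dc2-f295d12d135c/6.0.0.0.latest/manifest/manifest-latest.xml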
I did not yet update my older post when vSphere 6.7 was released. The list is now complete up to vSphere 6.7. Your Linux runs on a VMware VM, but on which ESXi version? You can see for yourself: run "dmidecode" and look at lines 10, 11 and 12.
ESX 2.5 - BIOS Release Date: 04/21/2004 - Address 0xE8480 - Size 97152 bytes
ESX 3.0 - BIOS Release Date: 04/17/2006 - Address 0xE7C70 - Size 99216 bytes
ESX 3.5 - BIOS Release Date: 01/30/2008 - Address 0xE7910 - Size 100080 bytes
ESX 4 - BIOS Release Date: 08/15/2008 - Address 0xEA6C0 - Size 88384 bytes
ESX 4U1 - BIOS Release Date: 09/22/2009 - Address 0xEA550 - Size 88752 bytes
ESX 4.1 - BIOS Release Date: 10/13/2009 - Address 0xEA2E0 - Size 89376 bytes
ESXi 5 - BIOS Release Date: 01/07/2011 - Address 0xE72C0 - Size 101696 bytes
ESXi 5.1 - BIOS Release Date: 06/22/2012 - Address: 0xEA0C0 - Size: 89920 bytes
ESXi 5.5 - BIOS Release Date: 07/30/2013 - Address: 0xEA050 - Size: 90032 bytes
ESXi 6 - BIOS Release Date: 09/30/2014 - Address: 0xE9A40 - Size: 91584 bytes
ESXi 6.5 - BIOS Release Date: 04/05/2016 - Address: 0xEA580 - Size: 88704 bytes 
ESXi 6.7 - BIOS Release Date: 07/03/2018 - Address: 0xEA520 - Size: 88800 bytes
NB These DMI properties are set at boot time. Even if your VM gets live-migrated to a host running a different vSphere version, your VM will keep the values it got from the host it booted on. What you see is the vSphere version of the host your VM booted on. It is the VM power-on that matters, so a guest OS reboot will not regenerate the DMI properties. A guest OS shut down on the other hand will also power off the VM, and the next power-on will regenerate the DMI properties.
Updating the VCSA is easy when it has internet access or when you can mount the update ISO. On a private network, VMware assumes you have a webserver that can serve up the updaterepo files. In this article, we'll look at how to proceed when the VCSA is on a private network where internet access is blocked and there's no webserver available. The VCSA and PSC contain their own webserver that can be used for an HTTP-based update. This procedure was tested on PSC/VCSA 6.0.

Follow these steps:


  • First, download the update repo zip (e.g. for 6.0 U3A, the filename is VMware-vCenter-Server-Appliance-6.0.0.30100-5202501-updaterepo.zip ) 
  • Transfer the updaterepo zip to a PSC or VCSA that will be used as the server. You can use Putty's pscp.exe on Windows or scp on Mac/Linux, but you'd have to run "chsh -s /bin/bash root" in the CLI shell before using pscp.exe/scp if your PSC/VCSA is set up with the appliancesh. 
    • chsh -s /bin/bash root
    • "c:\program files (x86)\putty\pscp.exe" VMware*updaterepo.zip root@psc-name-or-address:/tmp 
  • Change your PSC/VCSA root access back to the appliancesh if you changed it earlier: 
    • chsh -s /bin/appliancesh root
  • Make a directory for the repository files and unpack the updaterepo files there:
    • mkdir /srv/www/htdocs/6u3
    • chmod go+rx /srv/www/htdocs/6u3
    • cd /srv/www/htdocs/6u3
    • unzip /tmp/VMware-vCenter*updaterepo.zip
    • rm /tmp/VMware-vCenter*updaterepo.zip
  • Create a redirect using the HTTP rhttpproxy listener and restart it
    • echo "/6u3 local 7000 allow allow" > /etc/vmware-rhttpproxy/endpoints.conf.d/temp-update.conf 
    • /etc/init.d/vmware-rhttpproxy restart 
  • Create a /tmp/nginx.conf (I copied /etc/nginx/nginx.conf, changed "listen 80" to "listen 7000" and changed "mime.types" to "/etc/nginx/mime.types"); a minimal sketch of such a file follows after this list
  • Start nginx
    • nginx -c /tmp/nginx.conf
  • Start the update via the VAMI. Change the repository URL in settings,  use http://psc-name-or-address/6u3/ as repository URL. Then use "Check URL". 
  • Afterwards, clean up: 
    • killall nginx
    • cd /srv/www/htdocs; rm -rf 6u3
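For reference, a stripped-down /tmp/nginx.conf along those lines could look like this (a sketch only, not the appliance's actual default config; it assumes the stock /etc/nginx/mime.types and /srv/www/htdocs paths):

$ cat > /tmp/nginx.conf <<'EOF'
worker_processes 1;
events {
    worker_connections 64;
}
http {
    include /etc/nginx/mime.types;
    server {
        # serve the unpacked updaterepo on the port the rhttpproxy rule forwards to
        listen 7000;
        root /srv/www/htdocs;
    }
}
EOF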


P.S. I personally tested this using a PSC as webserver to update both that PSC, and also a VCSA appliance.
P.P.S. VMware released an update for VCSA 6.0 and 6.5 on the day I wrote this. For 6.0, the latest version is U3B at the time of writing, while I updated to U3A.
VMware's solution to a lost or forgotten root password for ESXi is simple: go to https://kb.vmware.com/s/article/1317898?lang=en_US and you'll find that "Reinstalling the ESXi host is the only supported way to reset a password on ESXi".

If your host is still connected to vCenter, you may be able to use Host Profiles to reset the root password, or alternatively you can join ESXi in Active Directory via vCenter, and log in with a user in the "ESX Admins" AD group.

If your host is no longer connected to vCenter, those options are closed. Can you avoid reinstallation? Fortunately, you can. You will need to reset and reboot your ESXi though. If you're ready for an unsupported deep dive into the bowels of ESXi, follow these steps:

  1. Create a bootable Linux USB-drive (or something else you can boot your server with). I used a CentOS 7 installation USB-drive that I could use to boot into rescue mode.
  2. Reset your ESXi and boot from the Linux medium.
  3. Identify your ESXi boot device from the Linux prompt. Use "fdisk -l /dev/sda", "fdisk -l /dev/sdb", etc. until you find a device that has 9 (maybe 8 in some cases) partitions. Partitions 5 and 6 are 250 MB and type "Microsoft basic" (for more information on this partition type, see https://en.wikipedia.org/wiki/Microsoft_basic_data_partition ). These are the ESXi boot banks. My boot device was /dev/sda, so I'll be using /dev/sda5 and/or /dev/sda6 as partition devices.
  4. Create a temporary directory for the primary boot bank: mkdir /tmp/b
  5. Mount the first ESXi bootbank on that directory: mount /dev/sda5 /tmp/b
  6. The current root password hash is stored inside state.tgz . We'll unpack this first. Create a temp directory for the state.tgz contents: mkdir /tmp/state
  7. Unpack state.tgz: cd /tmp/state ; tar xzf /tmp/b/state.tgz
Inside state.tgz is local.tgz. Create a temp directory for the local.tgz contents: mkdir /tmp/local
  9. Unpack local.tgz: cd /tmp/local ; tar xzf /tmp/state/local.tgz
  10. Generate a new password hash: on a Linux system with Perl installed, you can use this: perl -e 'print crypt("MySecretPassword@","\$6\$AbCdEfGh") . "\n";' . On a Linux system with Python installed (like the CentOS rescue mode), you can use this: python -c "import crypt; print crypt.crypt('MySecretPassword@')" . Both will print out a new password hash for the given password: $6$MeOt/VCSA4PoKyHl$yk5Q5qbDVussUjt/3QZdy4UROEmn5gaRgYG7ckYIn1NC2BXXCUnCARnvNkscL5PA5ErbTddoVQWPqBUYe.S7Y0  . Alternatively, you can use an online hash generator, or you can leave the password hash field empty.
  11. Edit the shadow file to change the root password: vi /tmp/local/etc/shadow . Replace the current password hash in the second field of the first line (the line that starts with root:) with the new hash. Esc : w q Enter saves the contents of the shadow file.
  12. Recreate the local.tgz file: cd /tmp/local ; tar czf /tmp/state/local.tgz etc
  13. Recreate the state.tgz file: cd /tmp/state ; tar czf /tmp/b/state.tgz local.tgz
  14. Detach the bootbank partition: umount /tmp/b
  15. Exit from the Linux rescue environment and boot ESXi.
  16. Do the same for the other boot bank (/dev/sda6 in my case) if your system doesn't boot from the first boot bank. NB logging in via SSH doesn't work with an empty hash field. The Host UI client via a web browser does let you in with an empty password, and allows you to change your password.


April 15, 2019

Last week, many Drupalists gathered in Seattle for DrupalCon North America, for what was the largest DrupalCon in history.

As a matter of tradition, I presented my State of Drupal keynote. You can watch a recording of my keynote (starting at 32 minutes) or download a copy of my slides (153 MB).

Making Drupal more diverse and inclusive

DrupalCon Seattle was not only the largest, but also had the most diverse speakers. Nearly 50% of the DrupalCon speakers were from underrepresented groups. This number has been growing year over year, and is something to be proud of.

I actually started my keynote by talking about how we can make Drupal more diverse and inclusive. As one of the largest and most thriving Open Source communities, I believe that Drupal has an obligation to set a positive example.

Free time to contribute is a privilege

I talked about how Open Source communities often incorrectly believe that everyone can contribute. Unfortunately, not everyone has equal amounts of free time to contribute. In my keynote, I encouraged individuals and organizations in the Drupal community to strongly consider giving time to underrepresented groups.

Improving diversity is not only good for Drupal and its ecosystem, it's good for people, and it's the right thing to do. Because this topic is so important, I wrote a dedicated blog post about it.

Drupal 8 innovation update

I dedicated a significant portion of my keynote to Drupal 8. In the past year alone, there have been 35% more sites and 48% more stable modules in Drupal 8. Our pace of innovation is increasing, and we've seen important progress in several key areas.

With the release of Drupal 8.7, the Layout Builder will become stable. Drupal's new Layout Builder makes it much easier to build and change one-off page layouts, templated layouts and layout workflows. Best of all, the Layout Builder will be accessible.

Drupal 8.7 also brings a lot of improvements to the Media Library.

We also continue to innovate on headless or decoupled Drupal. The JSON:API module will ship with Drupal 8.7. I believe this not only advances Drupal's leadership in API-first, but sets Drupal up for long-term success.

These are just a few of the new capabilities that will ship with Drupal 8.7. For the complete list of new features, keep an eye out for the release announcement in a few weeks.

Drupal 7 end of life

If you're still on Drupal 7, there is no need to panic. The Drupal community will support Drupal 7 until November 2021, two years and seven months from today.

After the community support ends, there will be extended commercial support for a minimum of three additional years. This means that Drupal 7 will be supported for at least five more years, or until 2024.

Upgrading from Drupal 7 to Drupal 8

Upgrading from Drupal 7 to Drupal 8 can be a lot of work, especially for large sites, but the benefits outweigh the challenges.

For my keynote, I featured stories from two end-users who upgraded large sites from Drupal 7 to Drupal 8 — the State of Georgia and Pegasystems.

The keynote also featured quietone, one of the maintainers of the Migrate API. She talked about the readiness of Drupal 8 migration tools.

Preparing for Drupal 9

As announced a few months ago, Drupal 9 is targeted for June 2020. June 2020 is only 14 months away, so I dedicated a significant amount of my keynote to Drupal 9.

Making Drupal updates easier is a huge, ongoing priority for the community. Thanks to those efforts, the upgrade path to Drupal 9 will be radically easier than the upgrade path to Drupal 8.

In my keynote, I talked about how site owners, Drupal developers and Drupal module maintainers can start preparing for Drupal 9 today. I showed several tools that make Drupal 9 preparation easier. Check out my post on how to prepare for Drupal 9 for details.

A timeline with important dates and future milestones

Thank you

I'm grateful to be a part of a community that takes such pride in its work. At each DrupalCon, we get to see the tireless efforts of many volunteers that add up to one amazing event. It makes me proud to showcase the work of so many people and organizations in my presentations.

Thank you to all who have made this year's DrupalCon North America memorable. I look forward to celebrating our work and friendships at future events!

April 13, 2019

April 12, 2019

April 11, 2019

With Drupal 9 targeted to be released in June of 2020, many people are wondering what they need to do to prepare.

The good and important news is that upgrading from Drupal 8 to Drupal 9 should be really easy — radically easier than upgrading from Drupal 7 to Drupal 8.

The only caveat is that you need to manage "deprecated code" well.

If your site doesn't use deprecated code that is scheduled for removal in Drupal 9, your upgrade to Drupal 9 will be easy. In fact, it should be as easy as a minor version upgrade (like upgrading from Drupal 8.6 to Drupal 8.7).

What is deprecated code?

Code in Drupal is marked as "deprecated" when it should no longer be used. Typically, code is deprecated because there is a better alternative that should be used instead.

For example, in Drupal 8.0.0, we deprecated \Drupal::l($text, $url). Instead of using \Drupal::l(), you should use Link::fromTextAndUrl($text, $url). The \Drupal::l() function was marked for removal as part of some clean-up work; Drupal 8 had too many ways to generate links.

Deprecated code will continue to work for some time before it gets removed. For example, \Drupal::l() continues to work in Drupal 8.7 despite the fact that it was deprecated in Drupal 8.0.0 more than three years ago. This gives module maintainers ample time to update their code.

When we release Drupal 9, we will "drop" most deprecated code. In our example, this means that \Drupal::l() will not be available anymore in Drupal 9.

In other words:

  • Any Drupal 8 module that does not use deprecated code will continue to work with Drupal 9.
  • Any Drupal 8 module that uses deprecated code needs to be updated before Drupal 9 is released, or it will stop working with Drupal 9.

If you're interested, you can read more about Drupal's deprecation policy at https://www.drupal.org/core/deprecation.

How do I know if my site uses deprecated code?

There are a few ways to check if your site is using deprecated code.

If you work on a Drupal site as a developer, run drupal-check. Matt Glaman (Centarro) developed a static PHP analysis tool called drupal-check, which you can run against your codebase to check for deprecated code. I recommend running drupal-check in an automated fashion as part of your development workflow.
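For example (a sketch; it assumes drupal-check is installed globally through Composer, with Composer's global vendor bin directory on your PATH, and the module path is hypothetical):

$ composer global require mglaman/drupal-check
$ drupal-check web/modules/contrib/my_module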

If you are a site owner, install the Upgrade Status module. This module was built by Acquia. The module provides a graphical user interface on top of drupal-check. The goal is to provide an easy-to-use readiness assessment for your site's migration to Drupal 9.

If you maintain a project on Drupal.org, enable Drupal.org's testing infrastructure to detect the use of deprecated code. There are two complementary ways to do so: you can run a static deprecation analysis and/or configure your existing tests to fail when calling deprecated code. Both can be set up in your drupalci.yml configuration file.

If you find deprecated code in a contributed module used on your site, consider filing an issue in the module's issue queue on Drupal.org (after having checked no issue has been created yet). If you can, provide a patch to fix the deprecation and engage with the maintainer to get it committed.

How hard is it to update my code?

While there are some deprecations that require more detailed refactoring, many are a simple matter of search-and-replace.

You can check the API documentation for instructions on how to remedy the deprecation.

When can I start updating my code?

I encourage you to start today. When you update your Drupal 8 code to use the latest and greatest APIs, you can benefit from those improvements immediately. There is no reason to wait until Drupal 9 is released.

Drupal 8.8.0 will be the last release to deprecate for Drupal 9. Today, we don't know the full set of deprecations yet.

How much time do I have to update my code?

The current plan is to release Drupal 9 in June of 2020, and to end-of-life Drupal 8 in November of 2021.

Contributed module maintainers are encouraged to remove the use of deprecated code by June of 2020 so everyone can upgrade to Drupal 9 the day it is released.

A timeline with important dates and future milestones

Drupal.org project maintainers should keep the extended security coverage policy in mind, which means that Drupal 8.8 will still be supported until Drupal 9.1 is released. Contributed projects looking to support both Drupal 8.8 and Drupal 9.0 might need to use two branches.

How ready are the contributed modules?

Dwayne McDaniel (Pantheon) analyzed all 7,000 contributed modules for Drupal 8 using drupal-check.

As it stands today, 44% of the modules have no deprecation warnings. The remaining 56% of the modules need to be updated, but the majority have less than three deprecation warnings.

The benefits of backwards compatibility (BC) are clear: no users are left behind. This leads to higher adoption rates, because you keep getting new features and you always have the latest security fixes.

Of course, that's easy when you have a small API surface (as Nate Haug once said: "the WordPress API has like 11 functions!", which is surprisingly close to the truth). But Drupal has an enormous API surface. In fact, it seems there are APIs hiding in every crevice!

In my job at Acquia, I’ve been working almost exclusively on Drupal 8 core. In 2012–2013 I worked on authoring experience (in-place editing, CKEditor, and more). In 2014–2015, I worked on performance, cacheability, rendering and generally the stabilizing of Drupal 8. Drupal 8.0.0 shipped on November 19, 2015. And since then, I’ve spent most of my time on making Drupal 8 be truly API-first: improving the RESTful Web Services support that Drupal 8 ships with, and in the process also strengthening the JSON API & GraphQL contributed modules.

I've learned a lot about the impact of past decisions (by myself and others) on backwards compatibility: the benefit of BC, but also how the burden of ensuring BC can increase exponentially due to certain architectural decisions. I've been experiencing that first-hand, since I'm tasked with making Drupal 8's REST support rock-solid, and I am seeing time and time again that "fixing bugs + improving DX" requires BC breaks. Tough decisions.

In Drupal 8, we have experience with some extremes:

  1. the BigPipe & Dynamic Page Cache modules have no API, but build on top of other APIs: they provide functionality only, not APIs
  2. the REST module has an API, and its functionality can be modified not just via that API, but also via other APIs

The first cannot break BC. The second requires scrutiny for every line of code modified to ensure we don’t break BC. For the second, the burden can easily outweigh the benefit, because how many sites actually are using this obscure edge case of the API?


We’ll look at:

  • How can we make our modules more evolvable in the future? (Contrib & core, D8 & D9.)
  • Ideas to improve this, and root cause hypotheses (for example, the fact that we have API cascades and not orthogonal APIs)

We should be thinking more actively about how feature X, configuration Y or API Z might get in the way of BC. I analyzed the architectural patterns in Drupal 8, and have some thoughts about how to do better. I don’t have all the answers. But what matters most is not answers, but a critical mindset going forward that is consciously considering BC implications for every patch that goes into Drupal 8! This session is only a starting point; we should continue discussing in the hallways, during dinner and of course: in the issue queues!

Preview:

DrupalCon Seattle
Seattle, WA, United States

April 10, 2019

In Open Source, there is a long-held belief in meritocracy, or the idea that the best work rises to the top, regardless of who contributes it. The problem is that a meritocracy assumes an equal distribution of time for everyone in a community.

Open Source is not a meritocracy

Free time to contribute is a privilege

I incorrectly made this assumption myself, saying: "The only real limitation [to Open Source contribution] is your willingness to learn."

Today, I've come to understand that inequality makes it difficult for underrepresented groups to have the "free time" it takes to contribute to Open Source.

For example, research shows that women still spend more than double the time as men doing unpaid domestic work, such as housework or childcare. I've heard from some of my colleagues that they need to optimize every minute of time they don't spend working, which makes it more difficult to contribute to Open Source on an unpaid, volunteer basis.

Or, in other cases, many people's economic conditions require them to work more hours or several jobs in order to support themselves or their families.

Systemic issues like racial and gender wage gaps continue to plague underrepresented groups, and it's both unfair and impractical to assume that these groups of people have the same amount of free time to contribute to Open Source projects, if they have any at all.

What this means is that Open Source is not a meritocracy.

Underrepresented groups don't have the same amount of free time

Free time is a mark of privilege, rather than an equal right. Instead of chasing an unrealistic concept of meritocracy, we should be striving for equity. Rather than thinking, "everyone can contribute to open source", we should be thinking, "everyone deserves the opportunity to contribute".

Time inequality contributes to a lack of diversity in Open Source

This fallacy of "free time" makes Open Source communities suffer from a lack of diversity. The demographics are even worse than the technology industry overall: while 22.6% of professional computer programmers in the workforce identify as women (Bureau of Labor Statistics), less than 5% of contributors do in Open Source (GitHub). And while 34% of programmers identify as ethnic or national minorities (Bureau of Labor Statistics), only 16% do in Open Source (GitHub).

Diversity in data

It's important to note that time isn't the only factor; sometimes a hostile culture or unconscious bias play a part in limiting diversity. According to the same GitHub survey cited above, 21% of people who experienced negative behavior stopped contributing to Open Source projects altogether. Other recent research showed that women's pull requests were more likely to get accepted if they had a gender-neutral username. Unfortunately, examples like these are common.

Taking action: giving time to underrepresented groups

A person being ignored

While it's impossible to fix decades of gender and racial inequality with any single action, we must do better. Those in a position to help have an obligation to improve the lives of others. We should not only invite underrepresented groups into our Open Source communities, but make sure that they are welcomed, supported and empowered. One way to help is with time:

  • As individuals, by making sure you are intentionally welcoming people from underrepresented groups, through both outreach and actions. If you're in a community organizing position, encourage and make space for people from underrepresented groups to give talks or lead sprints about the work they're interested in. Or if you're asked to, mentor an underrepresented contributor.
  • As organizations in the Open Source ecosystem, by giving people more paid time to contribute.

Taking the extra effort to help onboard new members or provide added detail when reviewing code changes can be invaluable to community members who don't have an abundance of free time. Overall, being kinder, more patient and more supportive to others could go a long way in welcoming more people to Open Source.

In addition, organizations within the Open Source ecosystem capable of giving back should consider financially sponsoring underrepresented groups to contribute to Open Source. Sponsorship can look like full or part-time employment, an internship or giving to organizations like Girls Who Code, Code2040, Resilient Coders or one of the many others that support diversity in technology. Even a few hours of paid time during the workweek for underrepresented employees could help them contribute more to Open Source.

Applying the lessons to Drupal

Over the years, I've learned a lot from different people's perspectives. Learning out in the open is not always easy, but it's been an important part of my personal journey.

Knowing that Drupal is one of the largest and most influential Open Source projects, I find it important that we lead by example.

I encourage individuals and organizations in the Drupal community to strongly consider giving time and opportunities to underrepresented groups. You can start in places like:

When we have more diverse people contributing to Drupal, it will not only inject a spark of energy, but it will also help us make better, more accessible, inclusive software for everyone in the world.

Each of us needs to decide if and how we can help to create equity for everyone in Drupal. Not only is it good for business, it's good for people, and it's the right thing to do.

Special thanks to the Drupal Diversity and Inclusion group for discussing this topic with me. Ashe Dryden's thought-leadership indirectly influenced this piece. If you are interested in this topic, I recommend you check out Ashe's blog post The Ethics of Unpaid Labor and the OSS Community.

This Thursday, April 25, 2019 at 7 PM, the 77th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: Imio: keys to the success of free software in the Walloon municipalities

Theme: community|development

Audience: developers|companies|students

Speaker: Joël Lambillotte (IMIO)

Location: Campus technique (ISIMs) of the Haute Ecole en Hainaut, Avenue V. Maistriau, 8a, Salle Académique, 2nd building (see the map on the ISIMs website, and here on the OpenStreetMap map).

Participation is free and only requires registration by name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also benefit from the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to consult the agenda and subscribe to the mailing list in order to receive all announcements.

As a reminder, the Jeudis du Libre aim to be spaces for exchange around free software topics. The Mons meetings take place every third Thursday of the month and are organized on the premises of, and in collaboration with, Mons-based universities and colleges involved in IT education (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: The intermunicipal company Imio designs and hosts free software solutions for nearly 300 local public administrations in Wallonia.

This project, initiated by two municipalities in 2005, spontaneously joined the Plone open source community in order to co-build solutions it could keep control over.

The technological and philosophical choices that were made ensured the sustainability and growth of the organization. They are numerous: component-oriented programming that makes it easy to maintain a common core for large cities as well as rural municipalities, a quality approach through systematic test writing and tools such as Robot Framework, industrialization via Jenkins, Puppet, Docker and Rundeck, and social aspects through user workshops and sprints.

Short bio: Joël Lambillotte is deputy managing director of Imio, which he helped create. A graduate in computer science, he has long experience as IT manager of the municipality of Sambreville and co-founded the CommunesPlone and PloneGov open source communities, finalists at the EU eGovernment Awards in 2007 and 2009.

April 09, 2019

For most people, today marks the first day of DrupalCon Seattle.

Open Source communities create better, more inclusive software when diverse people come to the table. Unfortunately, there is still a huge gender gap in Open Source, and software more broadly. It's something I'll talk more about in my keynote tomorrow.

One way to help close the gender gap in the technology sector is to give to organizations that are actively working to solve this problem. During DrupalCon Seattle, Acquia will donate $5 to Girls Who Code for every person that visits our booth.

April 08, 2019

The post Lazily load below-the-fold images and iframes appeared first on ma.ttias.be.

A pretty cool feature has landed in Chromium that allows you to easily lazy-load images and iframes.

Here's some info directly from the mailing list:

Support deferring the load of below-the-fold images and iframes on the page until the user scrolls near them.

This is to reduce data usage, memory usage, and speed up above-the-fold content.

Web pages can use the "loading" attribute on <img> and <iframe> elements to control and interact with the default lazy loading behavior, with possible values "lazy", "eager", and "auto" (which is equivalent to leaving the "loading" attribute unset).

Source: Intent to Ship: Lazily load below-the-fold images and iframes -- Google Groups

This enables some pretty powerful optimizations for page loading and bandwidth savings, especially on image-heavy sites (like news sites, photo blogs, ...).

It works simply as follows:

<img src="example.jpg" loading="lazy" alt="example" />
<iframe src="example.html" loading="lazy">

Some more technical reading: Native Lazy Loading for <img> and <iframe> is Coming to the Web.

The post Lazily load below-the-fold images and iframes appeared first on ma.ttias.be.

April 06, 2019

The post Using Oh Dear! to keep your Varnish cache warm appeared first on ma.ttias.be.

If we're already crawling your site, we might as well update your cached pages in the meanwhile!

The idea is as follows: if you've enabled our broken links or mixed content checks for any of your sites, we'll crawl your sites to find any broken pages.

On top of that, we have the ability to set custom HTTP headers per website that get added to both the uptime checks and our crawler.

Combining our crawler and the custom HTTP headers allows you to authorize our crawler in your Varnish configs to let it update the cache.

Source: Using Oh Dear! to keep your Varnish cache warm -- Oh Dear! blog

The post Using Oh Dear! to keep your Varnish cache warm appeared first on ma.ttias.be.

April 05, 2019

Worried, I glanced at my wife as she gently closed the door of our apartment.
"So? Did you get any?"
"Not so loud!" she answered. "I don't want the neighbours turning us in."

Then, with a conspiratorial air, she handed me a tiny packet she had kept clenched in her fist. I grabbed it immediately.
"That's all?" I stammered.
"Leave me some! We have to hold out until the next delivery."

I split the packet into two equal shares before handing her one. My meagre loot in the palm of my hand, I withdrew to our toilet, the only room without a window.
"Don't use it all at once!" my wife whispered.

I didn't even answer. I was thinking of the days when it was freely sold. When you stocked up in department stores, comparing brands, buying only good quality. But the health lobby had joined the ecological hysteria. Today, we were outlaws.

We had certainly tried to wean ourselves off it, sometimes holding out for nearly a week. But every time we had cracked, fallen back into our addiction, going as far as several times a day.

Alone in the toilet, I opened my hand and set to work. The muscles of my neck relaxed, my eyelids closed of their own accord and I began to let out sighs of pleasure as the dangerous, the precious cotton swab explored my ear canal.

Yes, I knew the harm my act was doing. I was aware of the ecological cost of these bits of plastic, of the risk to my eardrum. But nothing could replace that ecstasy, that single moment of bliss.

Photo by Simone Scarano on Unsplash

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE licence.

April 04, 2019

I published the following diary on isc.sans.edu: “New Waves of Scans Detected by an Old Rule“:

Who remembers the famous ShellShock (CVE-2014-6271)? This bug affected the bash shell in 2014 and was critical due to the fact that it was easy to exploit and that bash is a widespread shell used in many tools/applications. So, at the time, I created an OSSEC alert to report ShellShock exploitation attempts against my servers. Still today, I'm getting a hit on this rule from time to time… [Read more]

[The post [SANS ISC] New Waves of Scans Detected by an Old Rule has been first published on /dev/random]

April 03, 2019

The post The end of Extended Validation certificates appeared first on ma.ttias.be.

You know those certificates you paid 5x more for than a normal one? The ones that are supposed to give you a green address bar with your company name imprinted on it?

It's been mentioned before, but my take is the same: they're dead.

That is to say, they'll still work, but they don't warrant a 5x price increase anymore. Because this is what an extended validation certificate is supposed to look like on Chrome.

And this is what it looks like for some users that are part of a Chrome "experiment".

Notice the difference?

It looks exactly the same as a free Let's Encrypt certificate, like the one we use on Oh Dear!. That green bar -- the one we paid extra for -- is gone.

Those who are part of the Chrome experiment will notice this message in their Developer Console.

As part of an experiment, Chrome temporarily shows only the lock icon in the address bar.
Your SSL certificate with Extended Validation is still valid.

My feeling is it won't be temporary. There's little to no added value in EV certificates; users don't look at them. From a technical point of view, they're also just certificates. They encrypt your traffic just like a Let's Encrypt certificate would.

Today, I wouldn't bother buying Extended Validation certificates anymore. I wouldn't even renew existing ones; I'd go for automated, often-rotated Let's Encrypt certificates instead.

(Oh, and if you're going that route, give Oh Dear! a try to help monitor your expiration dates and chains. Just to feel safe.)

The post The end of Extended Validation certificates appeared first on ma.ttias.be.

March 29, 2019

This is the final episode of an adventure that will have stretched over several years. I hope you enjoyed the read, that you will recommend it to others, and that I will get the chance to polish it into a proper book, electronic or on paper. Thank you for your loyalty throughout this story!

This morning, the older one came to fetch me. The babies were calm. I was confident; my power had returned.

"Come with me!" the older one told me. "I want you to tell your story to Mérissa. I refuse to believe she is inhuman to that point. A mother-to-be cannot remain unmoved; she will understand, she will act."

I said nothing. I followed him silently through the city to that large room with a pregnant one. When the younger one suddenly appeared, with a naked young woman, I quietly stepped back. I know my power lets me go unnoticed, lets me avoid drawing attention to myself.

They talked for an eternity. But I have learned patience. I let them be. I was confident. The power would whisper to me when the time came to act.

Hell suddenly broke loose. My nightmares became a new form of reality.

I smiled.

It was there, familiar, present, oozing. Fear! My fear.

Without forcing, without anger, I drove the thin metal rod into the back of the younger one. Then of the older one. A simple rod I had torn from a piece of furniture in the apartment where I had been staying, and hidden in my sleeve.

The babies were screaming, dancing, but this time it was not me they were looking at. The pregnant one was looking at a screen and trying to type on a keyboard. The youngest woman was supporting her. I drove the rod into her neck.

She brought her hands to her throat before turning to me with a look of utter surprise. Her lips formed a few words.
"The disruptive element, the unforeseen..."
She collapsed, knocking over the pregnant one, who fell to the floor screaming.

I walked up to her.

She was moaning, trying to soothe herself with small, rapid breaths. I had already seen workers give birth; it left me neither hot nor cold.

With a flick of her finger, she motioned me closer. I complied.
"What... What is your name?" she panted.
"689," I answered mechanically.

Despite her difficult situation, she oozed authority. Power seemed to pour straight out of her voice, her face. I adored her, worshipped her.
"689," she murmured, "if you press the biggest key on the keyboard, you will destroy the master of the world. The command has been typed; it only needs to be confirmed."

Power. Immense power.

Slowly, I straightened up, contemplating the keyboard, the screen.

I found the key. I saw the screen. I raised my finger. I hesitated.

Then I looked at the woman screaming while holding her belly. A small, wet, slimy head was poking out between her thighs. The mother's cries drowned out the nightmare filling the room.

"Press it!" she cried. "Press it now!"

612 stood before me, his face twisted by pain and the blow, but his eyes sparkling with mischief.

"One of you will see Earth. He will save it. The Chosen One! Press it!"

My life began to flash before my eyes. The pain, the humiliation, the factory. Becoming G89. Killing the old man. Approaching the foreman. Seeing space. Earth. Winning the trust of the old and the young Earthling. Killing the young Earthling, who was a little too perceptive. Staying hidden in the apartment. Facing my nightmares. Witnessing the resurrection of the younger one. And now becoming the master of the world?

"Fulfil your destiny!" old 612 ordered me. "Press the key, save the Earth!"

On the floor, the blonde woman was panting softly, her eyes dazed, her legs spread. A silent baby squirmed beside her while a second tiny skull was appearing in the hell of life.

At my feet, the one who had been the mistress of the world was convulsing in the throes of childbirth.

"Become... Become the master of the world!" she stammered. "Press it!"
"Press it!" 612 begged me.

But I had understood. A mother will do anything for her children. A mother would never hand me the reins of the world.

Slowly, I sat down in front of the keyboard and the screen. There it was, the true master of the world. The one everyone feared. The one that made fear ooze into minds, that organized the building, the buying, the destroying, the selling, the endless consumerist cycle slowly consuming the planet.

In the room, silence had returned. The nightmare had gone quiet. 612 had disappeared, driven from my mind by my newfound lucidity. All that remained were corpses, a dying woman in labour and two newborns.

I contemplated my work. The naked woman groaned, brought her hand to her throat and tried to get up. Without success.

I smiled.

Fear, my faithful adviser, my old friend. I will obey you. I am your humble servant.

Slowly, I moved away from the keyboard and the key. I worshipped the screen, the true master of the world. But it knew that, with a single press, I could switch it off. The world had found its balance again. I had to put myself at the service of the master of the world and of my fear.

A grey shape jumped onto the desk, next to the screen.
"Meow!" it said.

I jumped.
"Meow!" it insisted.

It curled back its lips, showing me tiny white teeth. A hiss burst out of that small furry body.
"Lancelot," murmured the woman giving birth. "Mummy's little Lancelot..."

Incredulous, I looked away. But gently, without even seeming to pay attention, the creature began to walk across the desk. Its paw pressed the key on the keyboard. Lines started scrolling at full speed across the screen, then stopped. Nothing happened. Was it a trick of my mind, or had the light flickered for a brief instant?

With a deep gurgle, the naked woman managed to get onto her knees, her body covered in blood.

On the floor, the two babies suddenly began to cry. Above me, the ceiling suddenly let through a veil of sky, a blue too pale, too bright.

Photo by Grant Durr on Unsplash

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE licence.

March 28, 2019

I published the following diary on isc.sans.edu: “Running your Own Passive DNS Service“:

Passive DNS is not new but remains a very interesting component to have in your hunting arsenal. As defined by CIRCL, a passive DNS is “a database storing historical DNS records from various resources. The historical data is indexed, which makes it searchable for incident handlers, security analysts or researchers”. There are plenty of existing passive DNS services: CIRCL, VirusTotal, RiskIQ, etc. I’m using them quite often but, sometimes, they simply don’t have any record for a domain or an IP address I’m interested in. If you’re working for a big organization or a juicy target (depending on your business), why not operate your own passive DNS? You’ll collect data from your network that will represent the traffic of your own users… [Read more]

[The post [SANS ISC] Running your Own Passive DNS Service has been first published on /dev/random]

March 26, 2019

I don't use Google Analytics or any other web analytics service on dri.es. Why not? Because I don't desire to know how many people visit my site, where they come from, or what operating system they use.

Because I don't have a compelling reason to track my site's visitors, I don't have to bother anyone with a "cookies consent" popup either. That is a nice bonus because the web is littered with those already. I like that dri.es is clutter-free.

This was all well and good until a couple of weeks ago, when I learned that when I embed a YouTube video in my blog posts, Google sends an HTTP cookie to track my site's visitors. Be damned!

After some research, I discovered that YouTube offers a privacy-enhanced way of embedding videos. Instead of linking to youtube.com, link to youtube-nocookie.com, and no data-collecting HTTP cookie will be sent. This is Google's way of providing GDPR-compliant YouTube videos.

So I went ahead and updated all blog posts on dri.es to use youtube-nocookie.com.
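For sites that render stored HTML, a small helper like the sketch below can do that swap in bulk. This is only an illustration of the idea, not how dri.es was actually updated; the function name, the placeholder video ID and the surrounding storage loop are hypothetical.

<?php

// Hypothetical helper: point YouTube embeds at the privacy-enhanced domain.
// Only the embed host changes; video IDs and query parameters are untouched.
function rewrite_youtube_embeds(string $html): string
{
    return str_replace(
        ['https://www.youtube.com/embed/', 'https://youtube.com/embed/'],
        'https://www.youtube-nocookie.com/embed/',
        $html
    );
}

// Example usage on a single post body:
echo rewrite_youtube_embeds(
    '<iframe src="https://www.youtube.com/embed/VIDEO_ID"></iframe>'
);
// Output: <iframe src="https://www.youtube-nocookie.com/embed/VIDEO_ID"></iframe>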

In addition to improving privacy, this change also makes my site faster. I used https://webpagetest.org to benchmark a recent blog post with a YouTube video.

Before:

A waterfall diagram that shows requests and load times before replacing youtube.com with youtube-nocookie.com.

When embedding a video using youtube.com, Google uses DoubleClick to track your users (yellow bar). A total of 22 files were loaded, and the total time to load the page was 4.4 seconds (vertical blue line). YouTube makes your pages slow, as the vast majority of requests and load time is spent on loading the YouTube video.

After:

A waterfall diagram that shows requests and load times after replacing youtube.com with youtube-nocookie.com.

When using youtube-nocookie.com, Google no longer uses DoubleClick to track your users. No HTTP cookie was sent, "only" 18 files were loaded, and the total page load time was significantly faster at 2.9 seconds (vertical blue line). Most of the load time is still the result of embedding a single YouTube video.

So on Feb 25th my Do Not Donate page was featured on Hacker News and that obviously brought some extra page-views.

Here are some more numbers for that memorable day:

  • Most popular pages:
    1. Do not donate: 10 013
    2. Homepage/ archives: 1 108
    3. about:futtta: 235
  • Referrers:
    1. Hacker News 7 978
    2. Facebook 112
    3. Search Engines 84
  • Outgoing links:
    1. https://en.wikipedia.org/wiki/Flanders 959
    2. https://en.wikipedia.org/wiki/List_of_countries_by_inequality-adjusted_HDI#List 809
    3. https://profiles.wordpress.org/futtta 596
    4. https://www.kiva.org 87

And my server? Even at the busiest time (around 10-11 AM UTC+1) it quietly hummed along with a 0.11 system load :-)

March 24, 2019

The post Archiving the bitcoin-dev mailing lists appeared first on ma.ttias.be.

I've started yet another effort to index and archive a public mailing list in order to present it in a more readable, clean format.

The road to mailing lists

Why do I keep being drawn towards efforts to parse & present all these mailing lists?

Well, looking back at older posts, I think this piece of knowledge I apparently had in 2015 sums it up pretty well.

There are no (well: very few) trolls on mailing lists. Those who take the effort of signing up to a mailing list aren't doing it to curse at others or to be violent. They do so to stay informed, to interact and to help people.

This still holds true for me regarding mailing lists: quality content, smart & dedicated people and overall an attitude of helpfulness towards others. Something that's very rare in Reddit or Hacker News discussions.

In 2016 I started an e-mail archive and cancelled it again almost 2 years later. The main reason is that tooling like mhonarc, Pipermail, ... is just really bad. I couldn't find a proper alternative in all these years, so I'm building my own this time.

Solving the mailing list readability problem

What bothers me about mailing lists is the way we browse and look at them online. It's an ugly format, split and archived per month, which makes you lose threads if they happen to span multiple months.

Most of us consume mailing lists via -- can you guess it? -- email, obviously. But if you want to share a story posted on a mailing list, I'd want it to be easily readable.

I don't claim to be particularly good at design, but anything is better than pre-formatted text wrapped in <pre> HTML tags.

The end of mailing list support at the Linux Foundation

One thing I learned from the mailing list, is that the Linux Foundation is slowly deprecating their support for email.

The Bitcoin mailing lists will migrate to groups.io as announced on the bitcoin-dev list. For mailing list users not much should change -- it's still a mailing list (I think?).

However, it presented me with yet another opportunity to go ahead and create my own online archive.

Mirroring bitcoin-dev, bitcoin-core-dev and bitcoin-discuss

I created a new repository that handles the parsing and displaying of the mailing list (and soon, other Bitcoin related projects): github.com/mattiasgeniar/CommunityBitcoin.

The name needs work, but it's the best I could think of.

The mailing lists are now mirrored here: mojah.be/mailing-list. The domain mojah.be refers to an old World of Warcraft character I had. Since I couldn't decide on a proper name yet, it's now hosted on that domain I had lying around for years and did nothing with.

The project features a couple of things I appreciate;

  • A one-page view of an email thread, that can span across multiple months (example)
  • Gravatar support (example)
  • A filter by email author (threads + messages, example)

There are some more features I'd happily accept contributions for. I think an RSS feed would be nice; it opens the way for IFTTT-style automation and a Twitter bot. Pagination is also a must, since pages get really large.

The goal now is to experiment with the Bitcoin protocol and use this repository as a playground to throw some stuff online and see what sticks.

I'd be more than happy to accept PRs to this project to add functionality!

The post Archiving the bitcoin-dev mailing lists appeared first on ma.ttias.be.

March 22, 2019

I changed my blog's tagline to "$CURRENT_ROMAN_EMPIRE is a great nation, but leave us alone". The only thing I am not sure about is whether, when the day comes that the current "Roman empire" switches again, the dollar sign will still be the prefix sign scripting programming languages use for variables.

I don't know how to solve this other than by writing a blog article like this one.

I guess I could carve some documentation into a rock or something, like in Wallonia where there are rocks rather than limestone. Next to my graffiti tag I could plant instructions for changing the subject into the name of whatever the world's empire is by then.

More intelligent people probably have an answer? What is most important is that we try.

The post Initial impressions on running a Bitcoin Core full node appeared first on ma.ttias.be.

For about a week now I've been running my own Bitcoin Core full node, one that keeps a full copy of the blockchain with all transactions included.

Node Discovery

When you first start up your node, the Bitcoin Core daemon bitcoind queries a set of DNS endpoints to do its first discovery of nodes. Once it connects to the first node, more peers get exchanged and the node starts connecting to those too. That's how the network initially bootstraps.

There are about 8 DNS seeds defined in src/chainparams.cpp. Each seed returns a handful of peer IPs to connect to. For instance, the seed seed.bitcoin.sipa.be returns over 20 IPs.

$ dig seed.bitcoin.sipa.be | sort
seed.bitcoin.sipa.be.	3460	IN	A	104.197.64.3
seed.bitcoin.sipa.be.	3460	IN	A	107.191.62.217
seed.bitcoin.sipa.be.	3460	IN	A	129.232.253.2
seed.bitcoin.sipa.be.	3460	IN	A	13.238.61.97
seed.bitcoin.sipa.be.	3460	IN	A	178.218.118.81
seed.bitcoin.sipa.be.	3460	IN	A	18.136.117.109
seed.bitcoin.sipa.be.	3460	IN	A	192.206.202.6
seed.bitcoin.sipa.be.	3460	IN	A	194.14.246.85
seed.bitcoin.sipa.be.	3460	IN	A	195.135.194.3
seed.bitcoin.sipa.be.	3460	IN	A	211.110.140.47
seed.bitcoin.sipa.be.	3460	IN	A	46.19.34.236
seed.bitcoin.sipa.be.	3460	IN	A	47.92.98.119
seed.bitcoin.sipa.be.	3460	IN	A	52.47.88.66
seed.bitcoin.sipa.be.	3460	IN	A	52.60.222.172
seed.bitcoin.sipa.be.	3460	IN	A	52.67.65.129
seed.bitcoin.sipa.be.	3460	IN	A	63.32.216.190
seed.bitcoin.sipa.be.	3460	IN	A	71.60.79.214
seed.bitcoin.sipa.be.	3460	IN	A	73.188.124.183
seed.bitcoin.sipa.be.	3460	IN	A	81.206.193.115
seed.bitcoin.sipa.be.	3460	IN	A	83.49.154.118
seed.bitcoin.sipa.be.	3460	IN	A	84.254.90.125
seed.bitcoin.sipa.be.	3460	IN	A	85.227.137.129
seed.bitcoin.sipa.be.	3460	IN	A	88.198.201.125
seed.bitcoin.sipa.be.	3460	IN	A	92.53.89.123
seed.bitcoin.sipa.be.	3460	IN	A	95.211.109.194

Once a connection to one node is made, that node will share some of the peers it knows about with you.

There's no simple way to get all node IPs and map the entire network. Nodes will share some information about their peers, but by doing so selectively they hide critical information about the network design and still allow for all transactions to be fairly spread across all nodes.

Initial Block Download (IBD)

With a few connections established, a new node will start to query for the blockchain state of its peers and start downloading the missing blocks.

Currently, the entire blockchain is 224GB in size.

$ du -hs .bitcoin/blocks/
224G	.bitcoin/blocks/

Once started, your node will download that 224GB worth of blockchain data. It's reasonably fast at it, too.

I was on a gigabit connection at the time; the first 3/5th of the chain got downloaded at about 150Mbps, the rest slightly slower at 100Mbps and later at 25Mbps.

Notice how the bandwidth consumption drops over time? There's a good reason for that too, and it starts to become more obvious if we map out the CPU usage of the node over the same period.

This wasn't a one-off occurrence. I resynced the chain entirely and the effect is reproducible. More on that later.

Disk consumption

Zooming in a bit, we can see the disk space is consumed gradually as the node syncs.

Also notice how, as the CPU usage starts to spike in the chart above, the disk consumption rate slows down.

It looks like from that point on a more efficient algorithm is used, one that taxes the CPU harder for block validation but gives us a more efficient storage method on disk.

Looking at the transaction timestamps in the logs, as soon as transactions around 2018-07-30 (July 30th, 2018) are processed, the CPU usage spikes.

The IOPS appear to confirm this too: the number of I/O operations drops as the CPU intensity increases, indicating writes and reads to disk are slower than usual.

At first glance, this is a good thing. Syncing the chain becomes more CPU intense from that point forward, but as the block validation needs to happen only once when doing the initial block download, the disk space saved remains forever.

Thoughts on the block size

There's quite a lot of debate about the block size in Bitcoin: bigger blocks allow for more data to be saved and would allow for more complicated scripts or even smart contracts to exist on the chain.

Bigger blocks also mean more storage consumption. If the chain becomes too big, it becomes harder to run one on your own.

Because of this, I'm currently in the "smaller blocks are better" camp. While disk space is becoming cheaper and cheaper, a cloud server with more than 250GB of disk capacity quickly costs you $50/month, and that adds up over time.

We can't change the current blockchain size (I think?), but we can prevent it from getting too large by thinking about what data to store on-chain vs. off-chain.

Setting up your own node

Want to get your hands dirty with Bitcoin? One of the best ways to get started is to run your own node and gain some experience.

If you're on CentOS, I dedicated a full article on setting up your own node: Run a Bitcoin Core full node on CentOS 7.

If you don't want to dedicate ~250GB of storage, you can limit the disk consumption by keeping only the newest blocks. For more details, see here: Limit the disk space consumed by Bitcoin Core nodes on Linux.

The post Initial impressions on running a Bitcoin Core full node appeared first on ma.ttias.be.

On Facebook someone asked me how to do Gutenberg the right way to avoid loading too much JS on the frontend, this is a somewhat better organized version of my answer;

I’m not a Gutenberg specialist (far from it, really) but:

  • the wrong way is adding JS with wp-blocks/wp-element and other Gutenberg dependencies on init by calling wp_enqueue_script,
  • the right way is either hooking into enqueue_block_editor_assets (see https://jasonyingling.me/enqueueing-scripts-and-styles-for-gutenberg-blocks/)
  • or, when using init, doing wp_register_script and then register_block_type referring to the previously registered handle as editor_script (see https://wordpress.org/gutenberg/handbook/designers-developers/developers/tutorials/block-tutorial/writing-your-first-block-type/).

I’ve tried both of these on a “bad” plugin and can confirm both solutions do prevent those needless wp-includes/js/dist/* JS-files from being added on the front-end.
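For illustration, here's a minimal PHP sketch of both approaches side by side. The handle name, block name and file paths are hypothetical; pick whichever of the two hooks fits your plugin.

<?php
// Approach 1: hook enqueue_block_editor_assets so the bundle only loads
// inside the block editor, never on the front-end.
add_action( 'enqueue_block_editor_assets', function () {
    wp_enqueue_script(
        'myplugin-block-editor',                    // hypothetical handle
        plugins_url( 'build/editor.js', __FILE__ ), // hypothetical path
        array( 'wp-blocks', 'wp-element' ),
        '1.0'
    );
} );

// Approach 2: register on init, then hand the handle to register_block_type
// as editor_script so WordPress only loads it where it is needed.
add_action( 'init', function () {
    wp_register_script(
        'myplugin-block-editor',
        plugins_url( 'build/editor.js', __FILE__ ),
        array( 'wp-blocks', 'wp-element' ),
        '1.0'
    );

    register_block_type( 'myplugin/example-block', array(
        'editor_script' => 'myplugin-block-editor',
    ) );
} );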

March 21, 2019

I’m in Washington, waiting for my flight back to Belgium. I just attended the 2019 edition of the OSSEC Conference, which took place close to Washington, in Herndon, VA. This was my first one and I was honoured to be invited to speak at the event. OSSEC is a very nice project that I’ve been using for a long time. I also contributed to it and I give training on this topic. The conference has been organized for a few years now and attracts more people every year; the number of attendees doubled for the 2019 edition.

The opening session was delivered by Scott Shinn, OSSEC Project Manager, who gave a recap. The project started in 2003 and was first released in 2005. It supports a lot of different environments and, basically, if you can compile C code on your device, it can run OSSEC! Some interesting facts were presented by Scott. What is the state of the project? OSSEC is alive, with 500K downloads in 2018 and trending up. A survey is still ongoing but already demonstrates that many users are long-term users (31% have been using OSSEC for >5y). If the top user profile remains infosec people, the second profile is IT operations and devops. There is now an OSSEC foundation (a 503c non-profit organization) which has multiple goals: to promote OSSEC (a bug bounty will probably be started), to attract more developers and to strengthen the project. There is an ongoing effort to make the tool more secure with an external audit of the code.

Then, Daniel Cid presented his keynote. Daniel is the OSSEC founder and reviewed the story of his baby. Like many of us, he was facing problems in his daily job and did not find the proper tool. So he started to develop OSSEC. There were already some tools here and there, like Owl, Syscheck or OSHIDS. Daniel integrated them and added a network layer and the agent/server model. He reviewed the very first versions, from 0.1 until 0.7. Funny story: some people asked him to stop flooding the mailing list where he announced all the versions and suggested he contribute to the 'Tripwire' project instead.

Then, Scott came back on stage to talk about the future of OSSEC. Sometimes, when I mention OSSEC, people's first reaction is to argue that OSSEC does not improve or does not have a clear roadmap. Really? Scott gave a nice overview of what's coming soon. Here is a quick list:

  • Dynamic decoders – OSSEC will implement user-defined variable names. They will be configured via a KV store represented in JSON. The next step will be to implement the output transport to other formats, to replace tools like Filebeat, ArcSight, Splunk agents, etc.
  • Real-time threat intelligence – Instead of using CDB lists (which must be re-generated at regular intervals), OSSEC will be able to query threat intelligence lists on the fly, in the same way GeoIP lookups work.
  • GOSSEC – Golang OSSEC. agent-auth has already been ported to Golang.
  • Noisesocket – To replace the existing encryption mechanism between the OSSEC server and agents.
  • A new web management console

Most of these new features should be available in OSSEC 3.3.

The next presentation was about "Protecting Workloads in Google Kubernetes with OSSEC and Google Cloud Armor" by Ben Auch and Joe Miller, who work at Gannett / USA Today. This media company operates a huge network with 140M unique visitors monthly, 120 markets in the US and a worldwide presence. As a media company, they are often targeted (defacement, information change, fake news, etc). Ben & Joe explained how they successfully deployed OSSEC in their cloud infrastructure to detect malicious requests to GKE containers and automatically block attackers with a bunch of Active-Response scripts. The biggest challenge was to remain independent of the cloud provider and to access logs in a simple but effective way.

Mike Shinn, from Atomicorp, came to speak about “Real Time Threat Intelligence for Advanced Detection“. Atomicorp, the organizer of the conference, is providing OSSEC professional services and is also working on extensions. Mike demonstrated what he called “the next-generation Active-Response”. Today, this OSSEC feature accesses data from CDB but it’s not real-time. The idea is to collect data from OSSEC agents installed in multiple locations, multiple organizations (similar to what dshield.org is doing) and to apply some machine-learning magic. The idea is also to replace the CDB lookup mechanism by something more powerful and in real time: via DNS lookups. Really interesting approach!

Ben Brooks, from Beryllium Infosec, presented "A Person Behind Every Event". This talk was not directly related to OSSEC but interesting anyway. Tools like OSSEC work with rules and technical information (IP addresses, files, URLs), but what about the people behind those alerts? Are we facing real attackers or rogue insiders? Who's the most critical? The presentation focussed on the threat intelligence cycle:
Direction > Collection > Processing > Analysis > Dissemination

The next two talks had the same topic: automation. Ken Moini, from Fierce Software Automation, presented "Automating Security Across the Enterprise with Ansible and OSSEC". The idea behind the talk was to solve the problems that most organizations are facing: people problems (skills gaps), point tools (proliferation of tools and vendor solutions) and the pace of innovation. Mike Waite, from RedHat, spoke about "Containerized software for a modern world: the good, the bad and the ugly". A few years ago, the ecosystem was based on many Linux flavours. Today, we have the same issue but with many flavours of Kubernetes. It's all about applications. If applications can be easily deployed, software vendors are also becoming Linux maintainers!

The next presentation was given by Andrew Hay, from LEO Cybersecurity: "Managing Multi-Cloud OSSEC Deployments". Andrew is a long-time OSSEC advocate and co-wrote the book "OSSEC HIDS Host Based Intrusion Detection Guide" with Daniel Cid. He presented tips & tricks to deploy OSSEC in cloud services and how to generate configuration files with automation tools like Chef, Puppet or Ansible.

Mike Shinn came back with "Atomic Workload Protection". Yesterday, organizations' business was based on a secure network of servers. Tomorrow, we'll have to use a network of secure workloads. Workloads must be secured, and cloud providers can't do everything for us. Cloud providers take care of the security of the cloud, but security in the cloud remains their customers' responsibility! Gartner said that, by 2023, 99% of cloud security failures will be the customer's fault. Mike explained how Atomicorp developed extra layers on top of OSSEC to secure workloads: hardening, vulnerability shielding, memory protection, application control, behavioral monitoring, micro-segmentation, deception and AV/anti-malware.

The next slot was mine: I presented "Threat Hunting with OSSEC".

Finally, the last presentation was the one by Dmitry Dain, who presented the NoiseSocket protocol that will be implemented in the next OSSEC release. The day ended with a quick OSSEC users panel and a nice social event.

The second day was mainly a workshop. Scott prepared some exercises to demonstrate how to use some existing features of OSSEC (FIM, Active-Response) but also the new feature called “Dynamic Decoder” (see above). I met a lot of new people who are all OSSEC users or contributors.

[The post OSSEC Conference 2019 Wrap-Up has been first published on /dev/random]

JSON:API being dropped into Drupal by crane

Breaking news: we just committed the JSON:API module to the development branch of Drupal 8.

In other words, JSON:API support is coming to all Drupal 8 sites in just a few short months! 🎉

This marks another important milestone in Drupal's evolution to be an API-first platform optimized for building both coupled and decoupled applications.

With JSON:API, developers or content creators can create their content models in Drupal’s UI without having to write a single line of code, and automatically get not only a great authoring experience, but also a powerful, standards-compliant, web service API to pull that content into JavaScript applications, digital kiosks, chatbots, voice assistants and more.

When you enable the JSON:API module, all Drupal entities such as blog posts, users, tags, comments and more become accessible via the JSON:API web service API. JSON:API provides a standardized API for reading and modifying resources (entities), interacting with relationships between resources (entity references), fetching of only the selected fields (e.g. only the "title" and "author" fields), including related resources to avoid additional requests (e.g. details about the content's author) and filtering, sorting and paginating collections of resources.
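As a rough illustration of what those query features look like in practice, here is a small PHP sketch that fetches articles from a hypothetical Drupal 8 site running the JSON:API module; the host name, bundle and field names are assumptions, and error handling is omitted.

<?php
// Hypothetical request combining sparse fieldsets, an included relationship,
// a filter and pagination against a site's JSON:API endpoint.
$url = 'https://example.com/jsonapi/node/article'
     . '?fields[node--article]=title,created' // only fetch these fields
     . '&include=uid'                         // embed the author resource
     . '&filter[status]=1'                    // only published content
     . '&page[limit]=10';                     // 10 results per page

$context = stream_context_create([
    'http' => ['header' => "Accept: application/vnd.api+json\r\n"],
]);

$document = json_decode(file_get_contents($url, false, $context), true);

foreach ($document['data'] as $resource) {
    echo $resource['attributes']['title'], PHP_EOL;
}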

In addition to being incredibly powerful, JSON:API is easy to learn and use and uses all the tooling we already have available to test, debug and scale Drupal sites.

Drupal's JSON:API implementation was years in the making

Development of the JSON:API module started in May 2016 and reached a stable 1.0 release in May 2017. Most of the work was driven by a single developer partially in his free time: Mateu Aguiló Bosch (e0ipso).

After soliciting input and consulting others, I felt JSON:API belonged in Drupal core. I first floated this idea in July 2016, became more convinced in December 2016 and recommended that we standardize on it in October 2017.

This is why at the end of 2017, I asked Wim Leers and Gabe Sullice — as part of their roles at Acquia — to start devoting the majority of their time to getting JSON:API to a high level of stability.

Wim and Gabe quickly became key contributors alongside Mateu. They wrote hundreds of tests and added missing features to make sure we guarantee strict compliance with the JSON:API specification.

A year later, their work culminated in a JSON:API 2.0 stable release on January 7th, 2019. The 2.0 release marked the start of the module's move to Drupal core. After rigorous reviews and more improvements, the module was finally committed to core earlier today.

From beginning to end, it took 28 months, 450 commits, 32 releases and more than 5,500 test runs.

The best JSON:API implementation in existence

The JSON:API module for Drupal is almost certainly the most feature-complete and easiest-to-use JSON:API implementation in existence.

The Drupal JSON:API implementation supports every feature of the JSON:API 1.0 specification out-of-the-box. Every Drupal entity (a resource object in JSON:API terminology) is automatically made available through JSON:API. Existing access controls for both reading and writing are respected. Both translations and revisions of entities are also made available. Furthermore, querying entities (filtering resource collections in JSON:API terminology) is possible without any configuration (e.g. setting up a "Drupal View"), which means front-end developers can get started on their work right away.

What is particularly rewarding is that all of this was made possible thanks to Drupal's data model and introspection capabilities. Drupal’s decade-old Entity API, Field API, Access APIs and more recent Configuration and Typed Data APIs exist as an incredibly robust foundation for making Drupal’s data available via web service APIs. This is not to be understated, as it makes the JSON:API implementation robust, deeply integrated and elegant.

I want to extend a special thank you to the many contributors that contributed to the JSON:API module and that helped make it possible for JSON:API to be added to Drupal 8.7.

Special thanks to Wim Leers (Acquia) and Gabe Sullice (Acquia) for co-authoring this blog post and to Mateu Aguiló Bosch (e0ipso) (Lullabot), Preston So (Acquia), Alex Bronstein (Acquia) for their feedback during the writing process.

The JSON:API module was added to Drupal 8.7 as a stable module!

See Dries’ overview of why this is an important milestone for Drupal, a look behind the scenes and a look toward the future. Read that first!

Upgrading?

As Mateu said, this is the first time a new module is added to Drupal core as “stable” (non-experimental) from day one. This was the plan since July 2018 — I’m glad we delivered on that promise.

This means users of the JSON:API 8.x-2.x contrib module currently on Drupal 8.5 or 8.6 can update to Drupal 8.7 on its release day and simply delete their current contributed module, and have no disruption in their current use of JSON:API, nor in security coverage! 1

What’s happened lately?

The last JSON:API update was exactly two months ago, because … ever since then Gabe, Mateu and I have been working very hard to get JSON:API through the core review process. This resulted in a few notable improvements:

  1. a read-only mode that is turned on by default for new installs — this strikes a nice balance between DX (still having data available via APIs by default/zero config: reading is probably the 80% use case, at least today) and minimizing risk (not allowing writes by default) 2
  2. auto-revisioning when PATCHing for eligible entity types
  3. formally documented & tested revisions and translations support 3
  4. formally documented security considerations

Get these improvements today by updating to version 2.4 of the JSON:API module — it’s identical to what was added to Drupal 8.7!

Contributors

An incredible total of 103 people contributed in JSON:API’s issue queue to help make this happen, and 50 of those even have commits to their name:

Wim Leers, ndobromirov, e0ipso, nuez, gabesullice, xjm, effulgentsia, seanB, jhodgdon, webchick, Dries, andrewmacpherson, jibran, larowlan, Gábor Hojtsy, benjifisher, phenaproxima, ckrina, dww, amateescu, voleger, plach, justageek, catch, samuel.mortenson, berdir, zhangyb, killes@www.drop.org, malik.kotob, pfrilling, Grimreaper, andriansyahnc, blainelang, btully, ebeyrent, garphy, Niklan, joelstein, joshua.boltz, govind.maloo, tstoeckler, hchonov, dawehner, kristiaanvandeneynde, dagmar, yobottehg, olexyy.mails@gmail.com, keesee, caseylau, peterdijk, mortona2k, jludwig, pixelwhip, abhisekmazumdar, izus, Mile23, mglaman, steven.wichers, omkar06, haihoi2, axle_foley00, hampercm, clemens.tolboom, gargsuchi, justafish, sonnykt, alexpott, jlscott, DavidSpiessens, BR0kEN, danielnv18, drpal, martin107, balsama, nileshlohar, gerzenstl, mgalalm, tedbow, das-peter, pwolanin, skyredwang, Dave Reid, mstef, bwinett, grndlvl, Spleshka, salmonek, tom_ek, huyby, mistermoper, jazzdrive3, harrrrrrr, Ivan Berezhnov, idebr, mwebaze, dpolant, dravenk, alan_blake, jonathan1055, GeduR, kostajh, pcambra, meba, dsdeiz, jian he, matthew.perry.

Thanks to all of you!

Future JSON:API blogging

I blogged roughly once a month about JSON:API since October 2018, to get more people to switch to version 2.x of the JSON:API module and to ensure it was maximally mature and bug free prior to going into Drupal core. New capabilities were also being added at a pretty high pace, because we'd been preparing the code base for that months prior. We went from ~1700 installs in January to ~2700 today!

Now that it is in Drupal core, there will be less need for frequent updates, and I think the API-First Drupal: what's new in 8.next? blog posts that I have been doing probably make more sense. I will do one of those when Drupal 8.7.0 is released in May, because not only will it ship with JSON:API, there are also other improvements!

Special thanks to Mateu Aguiló Bosch (e0ipso) for their feedback!


  1. We’ll of course continue to provide security releases for the contributed module. Once Drupal 8.7 is released, the Drupal Security Team stops supporting Drupal 8.5. At that time, the JSON:API contributed module will only need to provide security support for Drupal 8.6. Once Drupal 8.8 is released at the end of 2019, the JSON:API contributed module will no longer be supported: since JSON:API will then be part of both Drupal 8.7 and 8.8, there is no reason for the contributed module to continue to be supported. ↩︎

  2. Existing sites will continue to have writes enabled by default, but can choose to enable the read-only mode too. ↩︎

  3. Limitations in the underlying Drupal core APIs prevent JSON:API from 100% of desired capabilities, but with JSON:API now being in core, it’ll be much easier to make the necessary changes happen! ↩︎

I published the following diary on isc.sans.edu: “New Wave of Extortion Emails: Central Intelligence Agency Case“:

The extortion attempts have moved to another level recently. After the "sextortion" emails that have been propagating for a while, attackers started to flood people with a new type of fake email, and their imagination is endless… I received one two days ago and, this time, they go one step further. In many countries, child pornography is, of course, a very serious offense punished by law. What if you received an email from a Central Intelligence Agency officer who reveals that you're listed in an international investigation about a case of child pornography and that you'll be arrested soon… [Read more]

[The post [SANS ISC] New Wave of Extortion Emails: Central Intelligence Agency Case has been first published on /dev/random]

March 19, 2019

The post MySQL 8 & Laravel: The server requested authentication method unknown to the client appeared first on ma.ttias.be.

For local development I use Laravel Valet. Recently, the brew packages have been updated to MySQL 8, which changed a few things about its user management. One thing I keep running into is this error when working with existing Laravel applications.

 SQLSTATE[HY000] [2054] The server requested authentication method unknown to the client

So, here's the fix. You can create a user with the "old" authentication mechanism, which the MySQL database driver for PHP still expects.

CREATE USER 'ohdear_ci'@'localhost' IDENTIFIED WITH mysql_native_password BY 'ohdear_secret';
GRANT ALL PRIVILEGES ON ohdear_ci.* TO 'ohdear_ci'@'localhost';

If you already have an existing user with permissions on databases, you can modify that user instead.

ALTER USER 'ohdear_ci'@'localhost' IDENTIFIED WITH mysql_native_password BY 'ohdear_secret';

After that, your PHP code can once again connect to MySQL 8.

The post MySQL 8 & Laravel: The server requested authentication method unknown to the client appeared first on ma.ttias.be.

March 18, 2019

Over the past couple of months, since the release of WordPress 5.0, which includes Gutenberg, the new JavaScript-based block editor, I have seen many sites loading a significant amount of extra JavaScript from wp-includes/js/dist on the frontend due to plugins doing it wrong.

So dear plugin-developer-friends: when adding Gutenberg blocks, please differentiate between editor access and visitor access, only enqueue JS/CSS if it's needed to display your blocks, and when registering for the front-end please please frigging please don't declare wp-blocks, wp-element, … and all of those other editor goodies as dependencies unless you're 100% sure this is needed (which will almost never be the case).

The performance optimization crowd will thank you for being considerate and -more likely- will curse you if you are not!
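In case it helps, here's a minimal sketch of what that looks like in practice (handle names, block name and paths are hypothetical): keep the editor bundle behind enqueue_block_editor_assets, and only enqueue front-end assets when the block is actually present on the rendered page.

<?php
// Editor-only JavaScript: never shipped to regular visitors.
add_action( 'enqueue_block_editor_assets', function () {
    wp_enqueue_script(
        'myplugin-editor',                          // hypothetical handle
        plugins_url( 'build/editor.js', __FILE__ ), // hypothetical path
        array( 'wp-blocks', 'wp-element' ),
        '1.0'
    );
} );

// Front-end CSS: only enqueued when the post being rendered uses the block.
add_action( 'wp_enqueue_scripts', function () {
    if ( is_singular() && has_block( 'myplugin/example-block' ) ) {
        wp_enqueue_style(
            'myplugin-frontend',
            plugins_url( 'build/style.css', __FILE__ ),
            array(),
            '1.0'
        );
    }
} );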

March 15, 2019

While claiming to save it. And why they are the worst possible role model for our children.

I hate superhero movies. I despise this abject fad that has steered half of the Internet's conversations toward the DC-or-Marvel theme, that has created a generation of trailer exegetes waiting for the next "event movie" served up by the relentless machine for syrupy, overpriced schlock called Hollywood.

First, because of that eternal caricature of good versus evil, that exhausting Manicheism they now try to disguise by showing that the good guy has to do bad things, that he doubts! But, fortunately, the viewer never doubts. He knows very well who the good guy is (the one fighting the bad guy) and who the bad guy is (the one seeking to do Evil, with a capital E, but with no other real motivation, making the character completely absurd). The good guy only comes out of it looking better. It is frighteningly stupid, frighteningly weak storytelling. And it is terrifying in what it implies for our societies. What is Good is Good, obviously, and cannot be questioned. Evil is always the other.

But beyond this intellectual miserabilism buried under heaps of explosions and special effects, what saddens me most about this global universe is the underlying message, the odious idea that shows through this whole swathe of fiction.

For fiction is both the mirror of our society and the vehicle of our values, our desires, our impulses. Fiction represents what we are and shapes us at the same time. Whoever controls fiction controls dreams, identities, aspirations.

The blockbusters of the 90s, from Independence Day to Armageddon to Deep Impact, all staged a planetary catastrophe, a total threat to the species. And, in every case, humans pulled through thanks to cooperation (a cooperation usually heavily led by the United States, with nauseating whiffs of patriotism, but cooperation all the same). The distinctive feature of 90s heroes? They were all Mr. and Mrs. Everyman. Well, mostly misters. And Americans. But every time, the script insisted heavily on his normality, on the fact that it could be you or me, and that he was a family man.

The message was clear: the United States will unite the world to fight catastrophes, and every individual is a hero who can change the world.

During my teenage years, superhero movies were utterly uncool. There was not a shadow of realism. The fluorescent costumes were far from filling theatres and, above all, did not dominate conversations.

Then came Batman Begins, which according to all the critics of the time changed the game. From then on, superhero movies aimed to be more realistic, more human, darker, grittier. The hero was no longer smooth and polished.

But, by their very nature, superheroes are neither human nor realistic. They can of course be made darker by changing the lighting and swapping out the fluorescent costume. For the rest, we settle for appearances. A pinch of explanation from an actor in a white lab coat, to sound pseudo-scientific, provides the touch of realism. For the human side, the superhero is shown facing doubt and experiencing caricatures of emotions: anger, the desire to hurt Evil, the fear of failing, a vague sexual urge resembling love. But he remains a superhero, the only one capable of saving the planet.

The viewer no longer has any grip on the story, on the threat. He is now part of that anonymous crowd content to cheer the superhero, to wait for him, or even to serve, with a smile, as collateral damage. For the modern superhero often causes more destruction than the aliens of Independence Day. No matter; it is for the preservation of Good.

From now on, to save the world you need a superpower. Or you need to be super rich. If you have neither, you are just cannon fodder: get out of the way and try not to be a nuisance.

That is quite simply terrifying.

The world these universes reflect back at us is a passive world of acceptance, where nobody tries to understand what lies beyond appearances. A world where everyone blandly waits for the Super Good to come and defeat the Super Evil, backside bolted to the chair of their drab little grey job.

The evocative power of these universes is such that the actors who play the superheroes are idolized, applauded even more than their avatars because, the height of Super Good, they put on their costume to go spend a few hours with sick children. The heroes of our imagination are multimillionaire entertainers who, between two advertising shoots meant to wash our brains, agree to devote a few hours to sick children in front of the cameras!

Through countless spin-off products and costumes, we reinforce this Manichean imagination in our offspring. While our greatest hope would be to teach the young to be themselves, to discover their own powers, to learn to cooperate at scale, to cultivate complementarity and care for the common good, we prefer to boast about having made them a super nice superhero costume. Because it looks super good on Instagram, because, for a few likes, you become a super dad or a super mum.

The rest of society is up for auction. Stop collaborating and instead become a superhero of entrepreneurship, a superhero of the environment by sorting your waste, a rockstar of programming!

It is all super pathetic...

Photo by TK Hammonds on Unsplash

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE licence.

March 14, 2019

La cabine individuelle du monorail me déposa à quelques mètres de l’entrée du bâtiment de la Compagnie. Les larges portes de verre s’écartèrent en enfilade pour me laisser le passage. Je savais que j’avais été reconnu, scanné, identifié. L’ère des badges était bel et bien révolue. Tout cela me paraissait normal. Ce ne devait être qu’une journée de travail comme les autres.

Le colossal patio grouillait d’individus qui, comme moi, arboraient l’uniforme non officiel de la compagnie. Un pantalon de couleur grise sur des baskets délacées, une paire de bretelles colorées, une chemise au col faussement ouvert dans une recherche très travaillée de paraître insouciant de l’aspect vestimentaire, une barbe fournie, des lunettes rondes. Improbables mirliflores jouisseurs, épigones de l’hypocrite productivisme moderne.

À travers les étendues vitrées du toit, la lumière se déversait à flots, donnant au gigantesque ensemble la sensation d’être une trop parfaite simulation présentée par un cabinet d’architecture. Régulièrement, des plantes et des arbres dans de gigantesques vasques d’un blanc luisant rompaient le flux des travailleurs grâce à une disposition qui ne devait rien au hasard. Les robots nettoyeurs et les immigrés engagés par le service d’entretien ne laissaient pas un papier par terre, pas un mégot. D’ailleurs, la Compagnie n’engageait plus de fumeurs depuis des années.

J’avisais les larges tours de verre des ascenseurs. Elles se dressaient à près d’un demi-kilomètre, adamantin fanal encalminé dans cet étrange cloître futuriste. J’ignorais délibérément une trottinette électrique qui, connaissant mon parcours habituel, vint me proposer ses services. J’avais envie de marcher un peu, de longer les vitrines des salles de réunion, des salles de sport où certains de mes collègues pédalaient déjà avec un enthousiasme matinal que j’avais toujours trouvé déplacé avant ma première tasse de kombusha de la journée.

Une voix douce se mit à parler au-dessus de ma tête, claire, intelligible, désincarnée, asexuée.

— En raison d’un problème technique aux ascenseurs, nous conseillons, dans la mesure du possible, de prendre l’escalier.

J’arrivai au pied des tours de verre et de métal. La voix insistait.

— En raison d’un problème technique, l’usage des ascenseurs est déconseillé, mais reste possible.

J’avais traversé le bâtiment à pied, je n’avais aucune envie de descendre une trentaine d’étages par l’escalier. Sans que je l’admette consciemment, une certaine curiosité morbide me poussait à constater de mes yeux quel problème pouvait bien rendre l’utilisation d’un ascenseur possible, mais déconseillée.

Je rentrai dans la spacieuse cabine en compagnie d’un type assez bedonnant en costume beige et comble du mauvais goût, en cravate, ainsi que d’une dame en tailleur bleu marine, aux lunettes larges et au chignon sévère. Nous ne nous adressâmes pas la parole, pénétrant ensemble dans cet espace clos comme si nous étions chacun seuls, comme si le moindre échange était une vulgarité profane.

The shiny walls glowed with perfectly calibrated artificial light. As usual, I did not immediately realise that the doors had silently closed and that we had begun our descent.

Light music subtly tried to brighten the atmosphere while each of us applied a different strategy to avoid, at all costs, meeting the others' eyes. The man kept his clean-shaven, thick-browed face completely impassive, his gaze stubbornly fixed on the wall opposite. The woman kept her eyes riveted on the leather bag she had set at her feet. She clutched a binder against her chest the way a castaway clings to a life buoy. As for me, I studied the edges of the ceiling as if discovering them for the first time.

The light dimmed noticeably as we descended, as if to remind us that we were sinking into the chthonic bowels of the planet.

When we stopped at -34, the man in the suit had to clear his throat so that I would move aside, because of the slight narrowing of the cabin.

The plunge resumed. The dimming of the light and the narrowing were becoming very noticeable. At -78, the lady's floor, we were moving in a greyish half-light. By stretching out my arms, I could have touched both walls.

I was now alone, as if the elevator had not recognised me and was ignoring my presence. An irrational impulse made me decide to go as deep as possible. A mere fit of curiosity. After all, I had been working for the Company for years and had never gone down this far.

The light kept fading, but I noticed that my companion in the descent had forgotten her leather bag. I could barely make out the walls, which I could now touch with my fingertips. On the glowing counter, which was getting closer and closer to me, the floors were scrolling past more and more slowly.

I felt my shoulders scrape and had to turn sideways to avoid being crushed. I held the bag up at face level and could very quickly let go of it, for it was held in place by the sheer pressure the walls exerted on it. The cabin now gripped me on every side: shoulders, back and chest. My breathing was becoming laboured when total darkness fell. The shadows enveloped me. Only the counter still glowed faintly, settling on -118.

Calmly, the certainty that I was going to die of suffocation took hold of me. This was surely the problem the voice had warned me about. I had not listened to it, and I was paying the price. It was logical; there was nothing to be done.

In an oppressive silence, I realised that the wall to my right was slightly less dark. Contorting myself, I managed to slip under the bag, which was now half crushed. The door was open. I took a few steps out of the cabin into a dank, murky half-light. I could make out grey felt partitions reaching mid-chest, marking off small spaces where colleagues were busying themselves. They wore shirts that looked grey to me, ties and sleeveless vests. The faint glow of old cathode-ray tubes was reflected in their glasses. The conversations were soft, muffled. I felt like a stranger; no one paid any attention to me.

In a corner, an old dot-matrix printer sputtered out pages of cryptic characters, emitting its shrill hissing.

Like a sleepwalker, I wandered about, a stranger to this world. Or at least I hoped so.

After some hesitation, I returned to my place, slipping with some difficulty into the cabin, whose door had not closed again, as if it had been waiting for me.

Once again, darkness. Oppression. But not for long. I was breathing. The walls were pulling apart; I could make out a faint glow. I was rising again; I was being reborn.

The numbers scrolled faster and faster on the counter. When they stopped at 0, I smoothed out my shirt and, leather bag in one hand, rushed into the bright rays of the filtered sunlight.

Above my head, the disembodied voice carried on with its peroration.
— Due to a technical problem with the elevators, we advise you, whenever possible, to take the stairs.

I started running, laughing. From the balconies to the gyms, every head turned as I passed. I paid little attention. I laughed, I ran until I was out of breath. A few remarks flew at me, but I did not hear them.

Shoving past a guard, I went through the series of double doors and out of the building, out of the Company. It was raining; the sky was grey.

With all my strength, I hurled the leather briefcase. It burst open at the top of its arc, scattering sheets, index cards and other notes to the winds, sketching a parody of autumn on the black asphalt of the rain-soaked road.

I sat down on the kerb, eyes closed, breathing in deeply the scent of petrichor while raindrops streamed down over my smile.

Ottignies, 22 February 2019. First short story written on the Freewrite, in under two days. Based on a dream from 14 July 2008. Photo by Justin Main on Unsplash.

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and a real token of recognition. Thank you!

This text is published under the CC-By BE licence.

March 13, 2019

Autoptimize 2.5 is almost ready! It features a new “Images” tab to house all image optimization options, including support for lazy-loading images and WebP (the only next-gen image format that really matters, no?).

So download the beta and test lazy-loading and WebP (and all of the other changes) and let me know of any issue you might find!

March 12, 2019

Today, the world wide web celebrates its 30th birthday. In 1989, Sir Tim Berners-Lee invented the world wide web and changed the lives of millions of people around the globe, including mine.

Tim Berners-Lee, inventor of the World Wide Web, in front of the early web.

Milestones like this get me thinking about the positive impact a free and Open Web has had on society. Without the web, billions of people would not have been able to connect with one another, be entertained, start businesses, exchange ideas, or even save lives. Open source communities like Drupal would not exist.

As optimistic as I am about the web's impact on society, there have been many recent events that have caused me to question the Open Web's future. Too much power has fallen into the hands of relatively few platform companies, resulting in widespread misinformation, privacy breaches, bullying, and more.

However, I'm optimistic that the Open Web has a chance to win in the future. I believe we'll see three important events happen in the next five years.

First, the day will come when regulators will implement a set of laws that govern the ownership and exchange of data online. It's already starting to happen with GDPR in the EU and various state data privacy laws taking shape in the US. These regulations will require platforms like Facebook to give users more control over their data, and when that finally happens, it will be a lot easier for users to move their data between services and for the Open Web to innovate on top of these data platforms.

Second, at some point, governments globally will disempower large platform companies. We can't leave it up to a handful of companies to judge what is false and true, or have them act as our censors. While I'm not recommending governments split up these companies, my hope is that they will institute some level of algorithmic oversight. This will offer an advantage to the Open Web and Open Source.

Third, I think we're on the verge of having a new set of building blocks that enable us to build a better, next-generation web. Thirty years into the web, our data architectures still use a client-server model; data is stored centrally on one computer, so to speak. The blockchain is turning that into a more decentralized web that operates on top of a distributed data layer and offers users control of their own data. Similar to building a traditional website, distributed applications (dApps) require file storage, payment systems, user data stores, etc. All of these components are being rebuilt on top of the blockchain. While we have a long way to go, it is only a matter of time before a tipping point is reached.

In the past, I've publicly asked the question: Can we save the Open Web? I believe we can. We can't win today, but we can keep innovating and get ready for these three events to unfold. The day will come!

With that motivation in mind, I want to wish a special happy birthday to the world wide web!

Hello,

Under the GDPR, could you please tell me how you obtained my contact details and erase all data concerning me from your various databases. If you purchased them, please give me the contact details of your supplier.

Kind regards,

Fifteen years ago, spam was an essentially automated process: harvesting email addresses on the web and mass-mailing ads for Viagra. Smart filters eventually got the better of that scourge, at the cost of a few perfectly legitimate emails going astray. Which gave a whole generation the perfect excuse: "What? I didn't get back to you on the Bifton file? Oh, your email ended up in my spam!"

But today, spam has become institutionalised. It has earned its stripes by rebranding itself as "newsletters" or "mailings". Spammers have rebranded themselves as "email marketing" or "cold mailing". These days there is not a single small startup, neighbourhood butcher, sports club or public institution that does not produce spam.

Because everyone does it, everyone feels obliged to do it. You have barely signed up for a service you need, barely paid for a club membership, and it automatically comes with its string of newsletters. Which is stupid, because you have just paid. The least you can do when you have a new customer is to leave them alone.

The worst, without question, is your birthday. Every service that, one way or another, has a date of birth linked to your email address feels obliged to remind you of it. On your birthday you already get quite a few messages from people close to you, even though you are generally busy. Fair enough, that's the tradition, it's nice. Facebook sends us dozens, even hundreds, of messages from people less close to us, or from long-lost strangers. Let that pass, that's what Facebook is for. But that every site where I once ordered a €10 bike pump or a leopard-print thong sends me a birthday message is absurd! Happy Spamniversary!

The problem with this kind of junk mail is that, unlike vintage Viagra-style spam, it is not always completely outside our interests. We tell ourselves: well, why not. We might read it later. Maybe the list will one day produce an interesting email or a relevant commercial offer. Especially since unsubscribing usually involves an odiously emotional message along the lines of "We'll miss you, are you really sure?". When it doesn't require a password, or when the unsubscribe link isn't simply broken. In any case, you only ever unsubscribe from "certain categories of emails". New categories are regularly added, to which you are subscribed by default. The prize goes to Facebook, which still sends me two or three emails a week even though, for several months now, I have clicked, every single time, on the unsubscribe links.

An organic, eco-friendly online shop that sells only sustainable products but applies the most unethical marketing techniques.

If you are not as extreme as I am, your mailbox is probably stuffed to the brim, your inbox running to four or five digits. But of those thousands of emails, how many are important?

More concretely, how many important emails have you lost track of because your inbox was saturated with these mailings? The excuse still holds: your colleague's email really is in the spam folder. Your entire inbox has become one gigantic spam box.

Those who have followed me for a long time know that I am a devotee of the Inbox Zero method. My mailbox is like my physical letterbox: it is empty most of the time. Every email is archived as quickly as possible.

Over the years, I have found that the most important strategy for regularly reaching Inbox Zero is to avoid receiving emails I do not want. Even if they are potentially interesting. Just receiving the email, being distracted by it, reading it, weighing up whether the content is worth it requires an overall mental effort that is never offset by an interest that is relative at best and very hit-or-miss. In fact, the "interesting" emails are the worst, because they make you hesitate and doubt.

Let's think about it for a second. If people are being paid to send me an email I did not ask for, it is because, in the end, they hope I will pay in one way or another. For a mailing list to be genuinely interesting, there is one simple criterion: you have to pay. If you do not pay the newsletter's author yourself, then you will pay indirectly.

I decided to tackle the problem head-on thanks to a wonderful tool Europe has given us: the GDPR.

To every unsolicited email I receive, I reply with the message you read at the top of this post. Sometimes I feel like just archiving it or marking it as spam. Sometimes I tell myself it might be interesting. But I hold firm: for every email, I unsubscribe or I reply (sometimes both). If a piece of information is genuinely relevant, the universe will find a way to get it to me.

I have been using this strategy for several months now, with a tool that automatically fills in the email when I type a combination of letters (I use Alfred snippets for macOS). The effect is downright staggering.

First of all, it allowed me to trace certain databases that are resold on a large scale back to their source. But above all, it made me realise that the wannabe marketers know exactly what they are doing. They pour out apologies, justify themselves, promise me it will not happen again, even though my email is in no way critical of them. The mere mention of the GDPR frightens them. In short, everyone does it, but everyone knows it annoys the customer and that it is now on the edge of legality.

And my inbox in all this? It still can't get over it. After months of unsubscribing from everything, I have even gone 24 hours without receiving a single email. That allowed me to discover that some genuinely important emails were sometimes ending up in spam since, surprised at receiving nothing, I went and checked that folder.

Let's be honest, that was an exceptional case. But I now receive fewer than ten emails a day, usually four or five, which is entirely reasonable. I am even enjoying exchanging emails again. I actually prefer this way of corresponding to chat, which carries a stressful expectation of immediacy.

Keeping my inbox clean does, however, require real discipline. Not a week goes by without my discovering I have been signed up to a new mailing list, sometimes built on old data and appearing as if by magic.

So I invite you to shift into a higher gear with me by applying my method exactly.

To every unsolicited email, reply with my message or one of your own. Copy and paste it, or use automatic reply tools. Above all, do not let a single one slip through any more. You'll see: it is tedious at first, but it quickly becomes exhilarating.

The more of us there are, the less profitable sending a mailing will become. Just imagine the face of the marketer who, for every email, has to reply not to one slightly eccentric ploum, but to 10 or even 100 people!

Do not be aggressive. Do not judge. Do not try to get into a debate (I did at first; it was a mistake). Stick to the factual and unassailable: "Remove me from your databases." You do not have to justify yourself any further. And do not forget to mention the magic letters: GDPR.

Who knows? If enough of us apply this method, maybe we will return to the good old principle of only sending emails to those who asked to receive them.

Maybe I am dreaming, but the discipline I imposed on myself to start this exercise has turned into the pleasure of seeing my mailbox so often empty, ready to receive messages and criticism from my readers. Because those messages I can never get enough of…

Photo by Franck V. on Unsplash

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and a real token of recognition. Thank you!

This text is published under the CC-By BE licence.

Product marketing teams are responsible for bringing products to market and championing their success and adoption. To make this happen, they work closely with three sets of key stakeholders: the product team (development/engineering), the marketing team and the sales team.

Product marketing is at the center of product management, sales and marketing

In some organizations, product marketing reports to marketing. In other organizations, it reports to product. The most common pattern is for product marketing teams to live in marketing, but in my opinion, a product marketing organization should sit where the highest frequency of communication and collaboration is needed. That can depend on the type of product, but also on the maturity of the product.

For new products, companies with an evolving product strategy, or very technical products, it makes the most sense for product marketing to report directly to the product team. For mature and steady products, it makes sense for product marketing to report into marketing.

This reporting structure matters in that it facilitates communication and alignment.

For example, Acquia has recently decided to restructure product marketing to report to the product team (the team I'm responsible for), rather than to marketing. We made this decision because there has been a lot of change and growth on the product front.

We've also added to our product leadership team, hiring an SVP of Product Marketing, Tom Wentworth. Those of you who have followed Acquia's story may know Tom as our former CMO and head of product marketing. You can read more about it in Tom's blog post — he explains why he rejoined Acquia, but also writes about content management history and trends. Well worth a read!

March 11, 2019

Yes. Let's print lots of Euros for the purpose of enhancing technologies related to the climate.

Because we need to increase the Euro's inflation. We should stop investing in government bonds for that purpose (saving the Greek socialist government). We need to invest in our shared European military too (replacing NATO). Those investments should increase our Euro inflation. Investing in climate-related technologies will likely increase our Euro inflation as well. Which we still need. Urgently.

We do, however, need to firmly stop increasing Euro inflation by investing in EU government debt. We need to start investing in the real things the young people in the European Union want.

What do we need to invest European money in (in order of priority):

  • Propaganda (RT is fine, but, we probably want to control it ourselves instead)
  • Military (a European DARPA). We really need our own EU military research. Space. Rocket science. Weapons research. Because this will improve research and technology in civilian space as well, whether civilians like it or not. Besides, we might some day need it against an invading force (rather unlikely, but still).
  • Climate technologies. It’s clear that civilians want this. Let’s do it, then.
  • Infrastructure (roads, borders, schools, swimming pools in villages)
  • Social security (Look at Leuven’s academic hospital. This is fantastic. More of this, please)
  • Lawmaking about new technologies (social media, privacy in a digital age, genetic engineering of seeds and others, chemicals, farming, medical, and many more)

March 08, 2019

Live version of a great new song by a new (super-)band formed by one not-so-new artist (Conor Oberst) and, given her age, one newish artist (Phoebe Bridgers). It somewhat reminds me of the alternative rock scene of the nineties (Hole, Throwing Muses and whatnot), and that is a good thing!

(Embedded YouTube video.)

March 07, 2019

I published the following diary on isc.sans.edu: “Keep an Eye on Disposable Email Addresses“:

In many organisations, emails still remain a classic infection path today. Good old email is still a common communication channel for exchanging information with people outside of the security perimeter. Many security controls are in place to reduce the number of malicious emails landing in users’ mailboxes. While, from a network perspective, firewalls inspect traffic in both directions (“egress” and “ingress” filters), it’s not always the case with email flows. They are often just allowed to go out through local MTAs (Mail Transfer Agents)… [Read more]

[The post [SANS ISC] Keep an Eye on Disposable Email Addresses was first published on /dev/random]

March 03, 2019

This Thursday, 21 March 2019 at 7 p.m., the 76th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: Automating your infrastructure with Ansible, testing with Molecule

Theme: sysadmin

Audience: sysadmins | companies | students

Speaker: Fabrice Flore-Thebault (Stylelabs, Centsix)

Venue: Mic-Belgique, Avenue des Bassins 64, 7000 Mons (see the OSM map).

Participation is free and only requires registering by name, preferably in advance, or at the door. Please indicate your intention by signing up via http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons are also supported by our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to check the agenda and subscribe to the mailing list to receive all announcements.

As a reminder, the Jeudis du Libre are intended as spaces for discussion around Free Software topics. The Mons meetings take place every third Thursday of the month and are organised on the premises of, and in collaboration with, Mons-based colleges and university faculties involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which promotes free software.

Description: Ansible is an IT automation platform. It handles configuration management of computer systems, application deployment, and the orchestration of more complex tasks (continuous deployment, zero-downtime rolling updates).

All of this while remaining simple to use. And agentless. This is worth pointing out given the breadth of Ansible's capabilities. Traditional hosts such as Linux and other flavours of Unix are of course supported, as are macOS and Windows. But Ansible does not stop there: support for cloud providers is extensive (AWS, Azure, GCE, Linode, oVirt, VMware, Vultr). A number of network devices are also supported (A10, ACI, Cisco ASA, F5, Junos, Palo Alto…).

Simple to use, certainly, but what you build with Ansible may quickly stop being simplistic, and before rolling out a new version of a playbook to production, you will want to establish a certain level of confidence that this new version will not break everything.

This is where Molecule comes in, along with all its friends. Molecule lets you test Ansible roles. It adapts to your needs, but also pushes you to do better. Molecule starts by validating your syntax. It teaches you to do better. It is a good coach. Molecule then helps you create a test infrastructure that suits you, and will check that your role runs correctly.

The talk will draw on several experiences of developing Ansible playbooks, with varying degrees of maturity, testing and automation.

Short bio: Fabrice Flore-Thebault is a historian by training, became a free software user out of conviction, then a professional sysadmin by force of circumstance; a devops convert from the early days, a committed Ansible user, a contributor to the Molecule community, and the very proud dad of his daughter.

Images versus unattended setup

Old-school

Unattended setup

In a traditional environment, systems are installed from a CD-ROM. The configuration is performed by the system administrator through the installer. This quickly becomes a boring and impractical task when we need to set up a lot of systems; it is also important that systems are configured in the same - and hopefully correct - way.

In a traditional environment, this can be automated by booting via BOOTP/PXE and having the configuration supplied by a system that “feeds” the installer. Examples are Kickstart (Red Hat-based distributions), preseed (Debian/Ubuntu) and AutoYaST (SUSE).

Cloud & co

Cloud-init

In a cloud environment, we use images to install systems. The system automation is generally done by cloud-init. Cloud-init was originally developed for Ubuntu GNU/Linux on the Amazon EC2 cloud. It has become the de facto installation and configuration tool for most Unix-like systems on most cloud environments.

Cloud-init uses a YAML file to configure the system.

Images

Most GNU/Linux distributions provide images that can be used to provision a new system. You can find the complete list on the OpenStack website:

https://docs.openstack.org/image-guide/obtain-images.html

The OpenStack documentation also describes how you can create your own base images in the OpenStack Virtual Machine Image Guide.

Use a CentOS cloud image with libvirtd

Download the cloud image

Download

Download the latest “GenericCloud” CentOS 7 cloud image, together with the sha256sum.txt.asc and sha256sum.txt files, from:

https://cloud.centos.org/centos/7/images/

Verify

You should verify your download - as always - against a trusted signing key.

On a CentOS 7 system, the public gpg key is already installed at /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7.

Verify the fingerprint

Execute

[staf@centos7 iso]$ gpg --with-fingerprint /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
pub  4096R/F4A80EB5 2014-06-23 CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>
      Key fingerprint = 6341 AB27 53D7 8A78 A7C2  7BB1 24C6 A8A7 F4A8 0EB5
[staf@centos7 iso]$ gpg --with-fingerprint /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

and verify the fingerprint; the fingerprints used by CentOS are listed at:

https://www.centos.org/keys/

Import key

Import the public CentOS gpg key:

[staf@centos7 iso]$ gpg --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
gpg: key F4A80EB5: public key "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
[staf@centos7 iso]$ 

List the trusted gpg key:

[staf@centos7 iso]$ gpg --list-keys
/home/staf/.gnupg/pubring.gpg
-----------------------------
pub   4096R/F4A80EB5 2014-06-23
uid                  CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>

[staf@centos7 iso]$ gpg --list-keys

Verify the sha256sum file

[staf@centos7 iso]$ gpg --verify sha256sum.txt.asc
gpg: Signature made Thu 31 Jan 2019 04:28:30 PM CET using RSA key ID F4A80EB5
gpg: Good signature from "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: 6341 AB27 53D7 8A78 A7C2  7BB1 24C6 A8A7 F4A8 0EB5
[staf@centos7 iso]$ 

The key fingerprint must match that of RPM-GPG-KEY-CentOS-7.

Verify the image file

[staf@centos7 iso]$ xz -d CentOS-7-x86_64-GenericCloud-1901.qcow2.xz
[staf@centos7 iso]$ sha256sum -c sha256sum.txt.asc 2>&1 | grep OK
CentOS-7-x86_64-GenericCloud-1901.qcow2: OK
[staf@centos7 iso]$ 

Image

info

The image we downloaded is a normal qcow2 image; we can see the image information with qemu-img info.

[root@centos7 iso]# qemu-img info CentOS-7-x86_64-GenericCloud-1901.qcow2
image: CentOS-7-x86_64-GenericCloud-1901.qcow2
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 895M
cluster_size: 65536
Format specific information:
    compat: 0.10
[root@centos7 iso]# 

Copy & resize

The default image is small - 8 GB. We might use this image to provision other systems, so it is better to leave the original untouched.

Copy the image to the location where we’ll run the virtual system.

[root@centos7 iso]# cp -v CentOS-7-x86_64-GenericCloud-1901.qcow2 /var/lib/libvirt/images/tst/tst.qcow2
'CentOS-7-x86_64-GenericCloud-1901.qcow2' -> '/var/lib/libvirt/images/tst/tst.qcow2'
[root@centos7 iso]# 

and resize it to the required size:

[root@centos7 iso]# cd /var/lib/libvirt/images/tst
[root@centos7 tst]# qemu-img resize tst.qcow2 20G
Image resized.
[root@centos7 tst]# 

cloud-init

We’ll create a simple cloud-init configuration file and generate an iso image with cloud-localds. This iso image holds the cloud-init configuration and will be used to set up the system during the bootstrap.

Install cloud-utils

It’s important NOT to install cloud-init on your KVM host machine. It would create a cloud-init service that runs during boot and tries to reconfigure your host - something you probably don’t want on your KVM hypervisor host.

The cloud-utils package has all the tools we need to convert the cloud-init configuration files to an iso image.

[root@centos7 tst]# yum install -y cloud-utils
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: centos.cu.be
 * extras: centos.cu.be
 * updates: centos.mirror.ate.info
Resolving Dependencies
--> Running transaction check
---> Package cloud-utils.x86_64 0:0.27-20.el7.centos will be installed
--> Processing Dependency: python-paramiko for package: cloud-utils-0.27-20.el7.centos.x86_64
--> Processing Dependency: euca2ools for package: cloud-utils-0.27-20.el7.centos.x86_64
--> Processing Dependency: cloud-utils-growpart for package: cloud-utils-0.27-20.el7.centos.x86_64
--> Running transaction check
---> Package cloud-utils-growpart.noarch 0:0.29-2.el7 will be installed
---> Package euca2ools.noarch 0:2.1.4-1.el7.centos will be installed
--> Processing Dependency: python-boto >= 2.13.3-1 for package: euca2ools-2.1.4-1.el7.centos.noarch
--> Processing Dependency: m2crypto for package: euca2ools-2.1.4-1.el7.centos.noarch
---> Package python-paramiko.noarch 0:2.1.1-9.el7 will be installed
--> Running transaction check
---> Package m2crypto.x86_64 0:0.21.1-17.el7 will be installed
---> Package python-boto.noarch 0:2.25.0-2.el7.centos will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=======================================================================================
 Package                    Arch         Version                   Repository     Size
=======================================================================================
Installing:
 cloud-utils                x86_64       0.27-20.el7.centos        extras         43 k
Installing for dependencies:
 cloud-utils-growpart       noarch       0.29-2.el7                base           26 k
 euca2ools                  noarch       2.1.4-1.el7.centos        extras        319 k
 m2crypto                   x86_64       0.21.1-17.el7             base          429 k
 python-boto                noarch       2.25.0-2.el7.centos       extras        1.5 M
 python-paramiko            noarch       2.1.1-9.el7               updates       269 k

Transaction Summary
=======================================================================================
Install  1 Package (+5 Dependent packages)

Total download size: 2.5 M
Installed size: 12 M
Downloading packages:
(1/6): cloud-utils-growpart-0.29-2.el7.noarch.rpm               |  26 kB  00:00:01     
(2/6): cloud-utils-0.27-20.el7.centos.x86_64.rpm                |  43 kB  00:00:01     
(3/6): euca2ools-2.1.4-1.el7.centos.noarch.rpm                  | 319 kB  00:00:01     
(4/6): m2crypto-0.21.1-17.el7.x86_64.rpm                        | 429 kB  00:00:01     
(5/6): python-boto-2.25.0-2.el7.centos.noarch.rpm               | 1.5 MB  00:00:02     
(6/6): python-paramiko-2.1.1-9.el7.noarch.rpm                   | 269 kB  00:00:03     
---------------------------------------------------------------------------------------
Total                                                     495 kB/s | 2.5 MB  00:05     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : python-boto-2.25.0-2.el7.centos.noarch                              1/6 
  Installing : python-paramiko-2.1.1-9.el7.noarch                                  2/6 
  Installing : cloud-utils-growpart-0.29-2.el7.noarch                              3/6 
  Installing : m2crypto-0.21.1-17.el7.x86_64                                       4/6 
  Installing : euca2ools-2.1.4-1.el7.centos.noarch                                 5/6 
  Installing : cloud-utils-0.27-20.el7.centos.x86_64                               6/6 
  Verifying  : m2crypto-0.21.1-17.el7.x86_64                                       1/6 
  Verifying  : cloud-utils-growpart-0.29-2.el7.noarch                              2/6 
  Verifying  : python-paramiko-2.1.1-9.el7.noarch                                  3/6 
  Verifying  : python-boto-2.25.0-2.el7.centos.noarch                              4/6 
  Verifying  : euca2ools-2.1.4-1.el7.centos.noarch                                 5/6 
  Verifying  : cloud-utils-0.27-20.el7.centos.x86_64                               6/6 

Installed:
  cloud-utils.x86_64 0:0.27-20.el7.centos                                                                                                                                     

Dependency Installed:
  cloud-utils-growpart.noarch 0:0.29-2.el7      euca2ools.noarch 0:2.1.4-1.el7.centos      m2crypto.x86_64 0:0.21.1-17.el7      python-boto.noarch 0:2.25.0-2.el7.centos     
  python-paramiko.noarch 0:2.1.1-9.el7         

Complete!
[root@centos7 tst]# 

Cloud-init configuration

A complete overview of cloud-init configuration directives is available at https://cloudinit.readthedocs.io/en/latest/.

We’ll create a cloud-init configuration file to update all the packages - which is always a good idea - and to add a user to the system.

A cloud-init configuration file has to start with #cloud-config. Remember this is YAML, so only use spaces…

We’ll create a password hash to put into the cloud-init configuration. It’s also possible to use a plain-text password in the configuration with chpasswd, or to set the password for the default user, but it’s better to use a hash so nobody can read the password. Keep in mind that it is still possible to brute-force the password hash.

Some GNU/Linux distributions have a mkpasswd utility; this is not available on CentOS. The mkpasswd utility that is part of the expect package is something else…

I used a python one-liner to generate the SHA512 password hash

python -c 'import crypt,getpass; print(crypt.crypt(getpass.getpass(), crypt.mksalt(crypt.METHOD_SHA512)))'

Execute the one-liner and type in your password:

[staf@centos7 ~]$ python -c 'import crypt,getpass; print(crypt.crypt(getpass.getpass(), crypt.mksalt(crypt.METHOD_SHA512)))'
Password: 
<your hash>
[staf@centos7 ~]$ 

Create config.yaml - replace <your_user>, <your_password_hash> and <your_public_ssh_key> with your data:

#cloud-config
package_upgrade: true
users:
  - name: <your_user>
    groups: wheel
    lock_passwd: false
    passwd: <your_password_hash>
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh-authorized-keys:
      - <your_public_ssh_key>

And generate the configuration iso image:

[root@centos7 tst]# cloud-localds config.iso config.yaml
wrote config.iso with filesystem=iso9660 and diskformat=raw
[root@centos7 tst]# 

Create the virtual system

Libvirt has predefined definitions for operating systems. You can query the predefined operating systems with the osinfo-query os command.

We use CentOS 7, so we use osinfo-query os to find the correct definition.

[root@centos7 tst]# osinfo-query  os | grep -i centos7
 centos7.0            | CentOS 7.0                                         | 7.0      | http://centos.org/centos/7.0            
[root@centos7 tst]# 

Create the virtual system:

virt-install \
  --memory 2048 \
  --vcpus 2 \
  --name tst \
  --disk /var/lib/libvirt/images/tst/tst.qcow2,device=disk \
  --disk /var/lib/libvirt/images/tst/config.iso,device=cdrom \
  --os-type Linux \
  --os-variant centos7.0 \
  --virt-type kvm \
  --graphics none \
  --network default \
  --import

The default escape key - to get out of the console - is ^] (Ctrl + ]).

Have fun!

Links

March 01, 2019

As I wrote in my previous post, you might be seeing a lot more of Acquia in the coming weeks. If you listen to NPR, you may have heard our new radio ads.

Like our highway billboards and train station takeover, our NPR campaign is another great opportunity to reach commuters.

NPR is a national non-profit media organization with a network of more than 1,000 affiliated radio stations across the United States — and quite a few use Drupal and Acquia for their sites. It boasts listenership of nearly 30 million, and its airwaves reach nearly 99 percent of Americans.

Our NPR ads are running during the morning and evening commutes. In addition, Acquia ads will be featured on the Marketplace Tech podcast, which is popular among technology decision makers. Between the podcasts and radio ads, the potential reach is 64 million impressions.

We have always believed in doing well by doing good. Sponsoring NPR creates brand awareness for Acquia, but also supports NPR financially. High-quality media organizations are facing incredible challenges today, and underwriting NPR's work is a nice way for Acquia to give back.

February 28, 2019

The post Showing the DNS score in your dashboard & an updated layout appeared first on ma.ttias.be.

A new release of DNS Spy marks some useful improvements.

We’ve had our public DNS rating system for a little over a year now. Every day, hundreds of sites get scanned and receive recommendations for how to improve the resilience & setup of their nameservers. If you haven’t tried it out yet, go have a look.

Despite that scoring system being available to everyone on our website, we never showed the score of a paying subscriber’s domain.

Until now.

Source: Showing the DNS score in your dashboard & an updated layout -- DNS Spy Blog

The post Showing the DNS score in your dashboard & an updated layout appeared first on ma.ttias.be.

What are the differences between building on top of a framework and building on top of an application? How does using an application as a framework cause problems, and how can these problems be avoided? That is what this post is all about.

Decoupled web application

In your typical web application, the code handles a request and returns a response. Let’s assume we are using a web framework to handle common tasks such as routing. Let’s also assume that we think framework binding has a high cost and decouple our application from it. The flow of control would look like this:

Execution starts with the framework. For PHP frameworks this will be in a file like public/index.php. The framework then bootstraps itself and does a bunch of stuff. It’s safe to assume this stuff will include routing, and often it also includes things like dependency construction and error handling.

After the framework has done the tasks you want it to do, it hands control over to your application. Your application does a bunch of application and domain logic and interacts with persistence. It likely uses a number of libraries, especially for infrastructure tasks like logging and database access. Even so, control stays with the application. (The key difference between frameworks and libraries is that you control/call libraries while frameworks control/call you.) Your application might also call the framework and use it as a library. Again, control stays with the application.

Finally, when the application is done, it hands some kind of result back to the framework. The framework then does another bunch of stuff, like template rendering and translations. In the case of a web framework, it then spits out an HTTP response and execution ends.

An application like this keeps you in control of what happens, making it easier to change things. This style also makes it easy to decouple from the framework. There are only two points where you need to decouple.
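
To make those two decoupling points concrete, here is a minimal Python sketch of my own (the post itself contains no code; names such as ShowProductUseCase and JsonResponse are invented for illustration): a thin controller translates the framework request into a plain call into the application, and translates the plain result back into whatever response object the framework expects.

# A minimal sketch of the two decoupling points (illustrative names only).

class ShowProductUseCase:
    """Application code: no framework imports, plain input and plain output."""

    def execute(self, product_id: int) -> dict:
        # Application/domain logic would live here.
        return {"id": product_id, "name": "Example product"}


class JsonResponse:
    """Stand-in for whatever response object the framework expects."""

    def __init__(self, payload: dict):
        self.payload = payload


def show_product_controller(framework_request: dict) -> JsonResponse:
    # Decoupling point 1: translate the framework request into plain values.
    product_id = int(framework_request["product_id"])

    # The application does its work without knowing about the framework.
    result = ShowProductUseCase().execute(product_id)

    # Decoupling point 2: translate the plain result back for the framework.
    return JsonResponse(result)


if __name__ == "__main__":
    print(show_product_controller({"product_id": "7"}).payload)

Everything inside ShowProductUseCase can be tested and changed without touching the framework; only the thin controller needs to change if the framework does.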

My post Implementing The Clean Architecture outlines one architectural approach that leads to this kind of application.

Frameworks vs Applications

Let’s compare how frameworks and applications differ when they are used as a foundation for an/another application.

Frameworks don’t do stuff on their own. There is no application or domain logic. There is no set of existing web pages or API endpoints with their own structure and behavior. This is all defined by your application when using a framework. When building on top of an application that acts as a framework, you’ll need to deal with existing structure and behavior. You’ll need to insert your own stuff, change existing behavior in certain situations and prevent default behavior altogether in others.

I know that there are “frameworks” that do provide their own stuff out of the box. (Example: web shop framework.) While they might not be a full application on their own, for the purpose of this blog post they are the same as an application that gets used as a framework.

Plugins and Extensions

There is nothing inherently bad about building things on top of an application. Plugins and extensions are a very useful pattern. A plugin that interacts with a single plugin point can decouple itself when appropriate and stays in control of itself. And for extensions that use many extension points of the application yet are shallow/small, framework decoupling might not make sense.

This post is about using applications as framework foundation for sizable sets of code which are applications in their own right.

Applications as Frameworks

Let’s imagine we have an application that is used on some site for some use case. We’ll call this application FrameworkApp, since we’ll use it as framework for another application that powers another site.

When building our application on top of FrameworkApp, we’ll need to register new behavior and modify existing behavior. To make this possible, FrameworkApp needs to provide the appropriate extension points. Often these take the form of abstract classes or even systems, though the exact nature of the extension points is not important for our purposes.

This leads to a very different flow of control. Rather than calling us once, the FrameworkApp calls each extension point our application handles.

The diagram shows just 6 extension points, though there can be hundreds.

When visualized like this, it becomes easy to see how decoupling from the framework becomes next to impossible. Even if you manage to avoid coupling to framework code in your application, its whole structure is still defined by the framework. This means you are very limited in what you can do in your application and need to understand the framework to effectively develop the application. Framework coupling causes more issues than that, though a comprehensive overview of those is out of scope for this post.

An OOP Solution

Favor composition over inheritance

— OOP principle

Using an application as a framework is very similar to using inheritance for code reuse.

Just like with the application that is built on top of the app that acts as a framework, the subclass might not be in control and might be invoked many times from the base class. This is especially the case when using the Template Method Pattern and when having a deep inheritance hierarchy. The flow of control can bounce all over the place, and decoupling the subclass from the classes up the hierarchy becomes all but impossible.

You can avoid this classical inheritance mess by using composition. Which suggests one way to move away from using an application as a framework, or to avoid doing so altogether: stop treating the framework as a base class. If there is code to share, use composition. This way you stay in control, can decouple more easily and avoid The Fallacy of DRY.
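
As a rough Python illustration of that principle (my own sketch, not code from the post): the inheritance version puts the base class in control, much like an application used as a framework, while the composition version keeps control in our own class and treats the shared behaviour as a collaborator.

# Inheritance for reuse: the base class is in control and calls back into
# the subclass (Template Method), much like an application-as-framework.
class ReportBase:
    def render(self) -> str:
        # The base class controls the flow and calls the subclass' hook method.
        return f"{self.header()}\n{self.body()}"

    def header(self) -> str:
        return "Report"

    def body(self) -> str:
        raise NotImplementedError


class SalesReport(ReportBase):
    def body(self) -> str:
        return "Sales went up."


# Composition for reuse: the shared behaviour is a collaborator that our
# class calls explicitly, so control stays with our own code.
class HeaderFormatter:
    def format(self, title: str) -> str:
        return title.upper()


class InventoryReport:
    def __init__(self, formatter: HeaderFormatter):
        self.formatter = formatter  # injected, easy to replace or decorate

    def render(self) -> str:
        # We call the collaborator; it never calls us back.
        return f"{self.formatter.format('Inventory')}\nStock is stable."


if __name__ == "__main__":
    print(SalesReport().render())
    print(InventoryReport(HeaderFormatter()).render())

Swapping HeaderFormatter for another implementation, or decorating it, requires no change to where InventoryReport sits in a hierarchy, which is exactly the kind of control you lose when the shared code acts as a base class.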

Just like with class hierarchies you can always slap on an extra level.

Thanks to Raz Shuty for proofreading and making some suggestions.

Don’t miss the book!

Sign up below to receive news on my upcoming Clean Architecture book, including a discount:

The post Applications as Frameworks appeared first on Entropy Wins.

February 27, 2019

If you pass through Kendall Square MBTA station in the Boston area, you'll see a station "takeover" starting this week featuring the Acquia brand.

Like our highway billboards introduced in December, the goal is for more people to learn about Acquia during their commutes. I'm excited about this campaign, because Acquia often feels like a best-kept secret to many.

The Kendall Square station takeover will introduce Acquia to 272,000 daily commuters in one of the biggest innovation districts in the Boston area – and home to the prestigious MIT.

An Acquia poster at Kendall Square station featuring an Acquia employee
Acquia branding on the turnstiles

In addition to posters on every wall of the station, the campaign includes Acquia branding on entry turnstiles, 75 digital live boards, and geo-targeted mobile ads that commuters may see while looking at their phones while waiting for the train. It will be hard not to be introduced to Acquia.

An Acquia poster at Kendall Square station featuring an Acquia employee

What makes this extra special is that all of the ads feature photographs of actual Acquia employees (Acquians, as we call ourselves), which is a nice way to introduce our company to people who may not know us.

In which I continue my disconnection by exploring the two main types of social networks, the way they make us dependent, and how they corrupt the greatest minds of this century.

While studying my addiction to social networks, I realised that there are two types: symmetric networks and asymmetric ones.

On symmetric networks, like Facebook or LinkedIn, a connection is always shared by mutual agreement. One person has to send a request; the other has to accept it. The result is that each sees what the other posts. Even though there are mechanisms to "hide" some of your friends or "see fewer posts from this person", it is implicitly understood that "if I see what they post, they see what I post". This fallacious premise gives the impression of a social bond. Receiving a connection request is therefore the source of a dopamine hit: "Yay! Someone wants to connect with me!" But it is also a source of cognitive overload: should I accept this person? Where do I draw the line between those I accept and the others? What will they think if I don't accept them? I like them, but not to the point of accepting, and so on.

Facebook plays hard on the emotional side of the social. Its addictiveness comes from the feeling of being in touch with people we love and who, by the reciprocity of the relationship, are supposed to love us. Not going on Facebook amounts to not listening to what our friends are saying, to not caring about them. It is, therefore, pure and simple commercial exploitation of our herd instinct. Lightening your feed by unfollowing is a real act of violence, because "I do like him after all", or "She sometimes posts interesting things I might miss", or even "She'll think I don't like her any more, that I want nothing more to do with her".

LinkedIn plays in the same league, but instead exploits our fear of missing out on opportunities. Every contact on LinkedIn is made with the ulterior motive: "One day this person might bring me money, better accept them."

Personally, so as not to have to make decisions, I decided to accept absolutely every connection request on these networks. The result is rather wonderful: they have lost all interest for me, because they have become a completely uninteresting feed of people I have not the slightest idea about. For their part, they are probably pleased that I accepted them, and it changes nothing in my life. In short, everything is for the best.

But there is a second class of social networks, called "asymmetric" networks or "interest networks": Twitter, Mastodon, Diaspora and the late Google+.

Asymmetric, because you can follow whoever you want and anyone can follow you. This makes following and unfollowing much easier and allows for a feed far more centred on your interests.

Asymmetry is a mechanism that suits me. I like Twitter and Mastodon enormously.

Since following is easy, my feed fills up continuously. These two platforms are an uninterrupted source of "distractions". But, unlike Facebook and LinkedIn, I find them interesting. How do I avoid becoming dependent again?

Disconnecting for three months was good. But could I establish a strategy that holds up in the long run? You must not think in terms of willpower, but in terms of biology: how do you make sure that visiting a platform is not a source of dopamine?

Whereas on Facebook I follow everyone, which makes the whole thing useless (Facebook helps me a lot with an interface I find unbearably ugly and complicated), on Twitter and Mastodon I decided to follow almost no one.

A cruel process, which forced me to unfollow people I like a lot or find very interesting. But, quite often, these are also old acquaintances, people I have not been in contact with for months or even years. Are these people still important in my life? By drastically restricting the accounts I follow, the result was not long in coming. The next morning there were three new tweets in my feed. Three!

This allowed me to notice that, despite my systematic blocking of accounts that advertise, one tweet in three in my feed is sponsored. Worse: after a few days, Twitter seems to have caught on to the trick and now shows me tweets from people who are followed by those I follow myself.

A perfect example: Twitter tries to enlist me in what looks like a full-blown flamewar mixing antisemitism and police violence, on the sole pretext that one of the participants in this squabble is followed by two of my friends.

Between ads and random flamewars inserted into my feed, Twitter is frankly unbearable. It is a tool that works against my freedom of mind. I can only encourage you to make the jump to Mastodon; it is really worth it in the long run and, on Mastodon, my mass-unfollow strategy works extremely well. I am rediscovering the toots (that's what they're called on Mastodon) of my friends, messages that were previously drowned in a gigantic stream of free-software folks (which is what you mostly find on Mastodon).

After a few days, I had to admit I was hooked again! I was replying to tweets, getting drawn into discussions. The only solution: unfollow those who post often, despite my interest in them.

I deeply admire people like Vinay Gupta or David Graeber. They inspire me. I love reading their ideas when they are developed into long posts or even books. But on Twitter they scatter themselves. I have to sort and struggle not to be interested in everything they post.

In this sense, social networks are a catastrophe. They allow great minds to unload their ideas without taking the trouble to compile them, to give them shape. Twitter is a bit like a public notebook you never go back to.

I wonder whether they would write more in blog form without Twitter. It seems plausible to me. It was the case for me. Many bloggers admit it too. But then that would mean social networks are corrupting even the greatest minds, like Graeber and Gupta! What a loss! What a catastrophe! How many books, how many blog posts were never written because the frustration of expressing oneself was satisfied by a simple tweet, immediately lost in the depths of a centralised, proprietary database?

At bottom, social networks merely make abundant what used to be scarce: social connection, the act of expressing oneself publicly. And, I repeat myself, they make scarce what used to be abundant: boredom, solitude, the frustration of not being heard.

They mislead us into believing that we can be in touch with 500 or 1,000 people who are listening to us. That every connection has value. In reality, that value is nil for the individual. On the contrary, we pay with our time and our brains to access something of very little value, or even of negative value. Several experiments seem to show that using social networks produces symptoms of depression.

There is a constant in the history of capitalism: every innovation, every company starts out by creating value for its customers and thus for humanity. When that value starts to decline, the company disappears or restructures. But sometimes a company has acquired so much power over the market that it can keep growing while becoming a nuisance to its customers. Whether technically or psychologically, those customers are captive.

Facebook (and therefore Instagram and WhatsApp), Twitter and Google have reached that stage. They brought extraordinary innovations. But today they are a nuisance to people and to humanity. They deceive us by offering an illusion of value in order to monetise our reflexes and our instincts. Humanity is sick with a distracting, dopaminergic, ad-driven hypersocialisation.

Fortunately, becoming aware of the problem is a first step towards the cure.

Photo by Donnie Rosie on Unsplash

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and a real token of recognition. Thank you!

This text is published under the CC-By BE licence.

February 26, 2019

Le face à face continue entre Nellio et Eva, d’une part, et Georges Farreck, le célèbre acteur, et Mérissa, une femme mystérieuse qui semble contrôler tout le conglomérat industriel d’autre part.

Mérissa reste interdite. Eva la pousse dans ses retranchements.

— Pourquoi as-tu choisir d’avoir des enfants Mérissa ?
— Je…

D’un geste machinal, elle appelle le chat qui bondit sur ses genoux et frotte son crâne contre la fine main blanche. Après deux mouvements, lassé, il saute sur le sol sans un regard en arrière. Les yeux de Mérissa s’emplissent de tristesse.

Georges Farreck has come closer. In a friendly way, he puts his hand on her shoulder. She looks at him and he answers her silently with a questioning pout. Distraught, she looks at each of us in turn.
— I am the most powerful woman in the world. I am the greatest success in the history of capitalism, perhaps even in the history of humanity. I conquered humanity without war, without a fight.
— Without open war, I hiss through my teeth. But at the cost of how many deaths?

Eva shoots me a stern look and deliberately ignores my interruption.
— Why do you want children, Mérissa?
— Because…

Like a dam under too much pressure, she suddenly gives way.
— Because, quite simply, I wanted to know what it was like to create life. Because I was raised with that damned belief that a woman is only complete once she has popped out kids. Because I am eighty-nine years old, I look forty, I am on track to live to two hundred, and I have nothing left to do with my life. I have conquered the world and I am bored. So don't give me the speech about the most beautiful experience in the world, about altruism, about empathy. Despite all our technology, I have been sick as a dog, I have had nausea, I feel heavy, deformed, handicapped. And yet…

She holds her belly and limps over to her desk.

— And yet, I love these two beings who drain and weaken me. I want to create the best for them. I want them to be happy.

She looks at us.

— If I shut down the algorithm, they will live in an unknown world. I cannot guarantee their happiness.
— And if you don't shut down the algorithm? Eva whispers.
— Then, at worst, they will know war. At best, they will know happiness…
— The happiness of being slaves to the algorithm! I exclaim. Like all of us here.
— You were very happy as long as you didn't know!
— And you could keep it from your children, hoping they never find out?

She gives us a cold, cynical look.

— If I shut down the algorithm, someone else will create one. Maybe it will be worse!
— Maybe better, Georges Farreck whispers.
— And what if that were already the case? I ask. Are we sure FatNerdz really is an avatar of the algorithm? After all, Eva came out of the algorithm. She rebelled. FatNerdz is probably a sub-program with its own goals. And it is probably not the only one. If I were the algorithm, I would launch defensive programs tasked with identifying any intelligent algorithms likely to compete with me.

Georges can hardly believe his ears.

— A true virtual war…
— In which we were the soldiers, the grunts, the cannon fodder.

Furious, I spit my hatred at Eva.

— So that is what I am, what we are to you. Mere pawns.
— Nellio! she screams. I have become human.
— In any case, it means the algorithm can no longer be shut down. You might as well try to shut down the Internet!
— Indeed, Mérissa murmurs in a heavy voice. But I have developed an anti-algorithm. A program that has access to all of the algorithm's data but whose one and only purpose is to counter it. And to counter all of its actions. I thought it would be useful if the algorithm ever fell under a competitor's control.

With her fingertips, she taps on the desk. A few command lines appear on a screen.

— My decision was made a long time ago. I am going to launch this counter-algorithm. It amuses me greatly. But it also amused me to hear you argue. I only have to press here and…

The walls suddenly start flashing. Huge spiders crawl across the ceilings, the lights flicker, a dreadful screeching fills the room.

— The algorithm is defending itself! Eva shouts at us. It is trying to disorient us. So it has developed a module for analyzing human behavior to protect itself against any attack.
— My… My water just broke! Mérissa screams, her face deathly pale.

A puddle forms beneath her feet. Liquid runs down her legs. She staggers and leans on the desk.

— We must… We must launch the counter-algorithm, she stammers.

Eva holds her up, the walls flash with lightning, the spiders grow and turn into vomiting, grimacing babies. In my muddled brain, the water dripping between Mérissa's legs mixes with the virtual vomit that seems to ooze down the walls.

— The algorithm can't do anything physically; we have to concentrate, not let ourselves be distracted! Eva urges us while supporting the octogenarian in labor.

A cold pain suddenly pierces me. I look down. A steel spike runs right through me and comes out of my abdomen. A gentle numbness follows the pain and radiates from my belly. I grab the spike with both hands, vainly trying to pull it out, to press on it, before collapsing forward.

The patterns on the floor seem to shift, fascinating me. Next to me, Georges Farreck's face suddenly slams down. He moans, his eyes rolling in horror. Georges Farreck! I smile as I look at him, imagining the erection his body provokes in me.

A muzzle and long gray hairs tickle my face. With difficulty, I try to keep my eyes open, but a paw settles on my forehead and I slump down, exhausted.

Around me, the noise seems to fade. I am cold. I no longer feel the need to breathe.

Will I wake up in a printeur?

Photo by Malavoda

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real source of motivation and recognition. Thank you!

This text is published under the CC-By BE license.

February 25, 2019

The post Run a Bitcoin Lightning Node on CentOS 7 appeared first on ma.ttias.be.

Similar to installing a Bitcoin Core full node, you can also run a Lightning Network node. The same development dependencies are needed.

Prepare your build environment to compile the Lightning Node

The next steps will install a compiler and all development libraries needed to compile a Lightning Network node.

$ yum -y install epel-release

Once EPEL is installed (which adds additional repositories), you can install all needed dependencies.

$ yum install -y autoconf automake boost-devel gcc-c++ git libdb4-cxx libdb4-cxx-devel libevent-devel libtool openssl-devel wget libsodium-devel gmp-devel sqlite-devel python34 asciidoc clang python2-devel python34-devel python34-pip

Next, compile the Lightning Network node.

Compile a Lightning Network node from source

With all dependencies in place, it's time to compile a Lightning Network node. I'll start by creating a custom user that will run the daemon.

$ useradd lightning
$ su - lightning

Now, while running as the new lightning user, clone & compile the project.

$ git clone https://github.com/ElementsProject/lightning.git
$ cd lightning
$ git checkout v0.6.3
$ ./configure
$ make -j $(nproc)

The above downloads and builds version 0.6.3 of the Lightning Network daemon. For a full list of available releases, check out their GitHub releases page.
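
You can also list the release tags straight from the clone before checking one out. A minimal sketch using standard git commands; the tag name is a placeholder for whichever release you pick:

$ git tag --list 'v*'   # show every tagged release in the clone
$ git checkout <tag>    # then re-run ./configure and make for that release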

Once compiled, you'll find the lightning daemon in lightning/lightningd.

$ lightningd/lightningd --version
v0.6.3
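
The post stops at the version check, but the usual next step would be to point the daemon at a synced Bitcoin Core node and start it. A minimal sketch, assuming bitcoind is already running and bitcoin-cli works for the same lightning user; the --network, --log-file and --daemon options and the lightning-cli helper built under cli/ are part of this project, though exact flags can vary between releases:

$ lightningd/lightningd --network=bitcoin --log-file=/home/lightning/lightningd.log --daemon
$ cli/lightning-cli getinfo

Once the daemon is up, getinfo should report your node id and the network it is running on.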
