Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

May 16, 2019

I published the following diary on isc.sans.edu: “The Risk of Authenticated Vulnerability Scans“:

NTLM relay attacks have been a well-known way to attack Microsoft Windows environments for a while, and they usually remain successful. The magic of NTLM relay attacks? You don’t need to waste time cracking the hashes, just relay them to the victim machine. To achieve this, we need a “responder” that will capture the authentication session on a system and relay it to the victim. A lab is easy to set up: install the Responder framework. The framework contains a tool called MultiRelay.py which helps to relay the captured NTLM authentication to a specific target and, if the attack is successful, execute some code! (There are plenty of blog posts that explain in detail how to (ab)use this attack scenario)… [Read more]

[The post [SANS ISC] The Risk of Authenticated Vulnerability Scans has been first published on /dev/random]

May 13, 2019

I published the following diary on isc.sans.edu: “From Phishing To Ransomware?“:

On Friday, one of our readers reported a phishing attempt to us (thanks to him!). Usually, those emails are simply part of classic phishing waves and try to steal credentials from victims but, this time, it was not a simple phishing. Here is a copy of the email, which was nicely redacted… [Read more]

[The post [SANS ISC] From Phishing To Ransomware? has been first published on /dev/random]

May 12, 2019

In my previous two posts (1, 2), we created Debian- and Arch-based Docker images from scratch for the i386 architecture.

In this blog post - the last one in this series - we’ll do the same for yum-based distributions like CentOS and Fedora.

Building your own Docker base images isn’t difficult and lets you trust your distribution’s GPG signing keys instead of the Docker Hub, as explained in the first blog post. The mkimage scripts in the contrib directory of the Moby project git repository are a good place to start if you want to build your own Docker images.

Fedora is one of the GNU/Linux distributions that still supports 32-bit systems. CentOS has Special Interest Groups to support alternative architectures; the Alternative Architecture SIG creates installation images for power, i386, armhfp (ARMv7 32-bit) and aarch64 (ARMv8 64-bit).

CentOS

In this blog post, we will create CentOS-based Docker images. The procedure to create Fedora images is the same.

Clone moby

[staf@centos386 github]$ git clone https://github.com/moby/moby
Cloning into 'moby'...
remote: Enumerating objects: 7, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 269517 (delta 0), reused 1 (delta 0), pack-reused 269510
Receiving objects: 100% (269517/269517), 139.16 MiB | 3.07 MiB/s, done.
Resolving deltas: 100% (182765/182765), done.
[staf@centos386 github]$ 

Go to the contrib directory

[staf@centos386 github]$ cd moby/contrib/
[staf@centos386 contrib]$ 

mkimage-yum.sh

When you run mkimage-yum.sh without arguments, you get the usage message.

[staf@centos386 contrib]$ ./mkimage-yum.sh 
mkimage-yum.sh [OPTIONS] <name>
OPTIONS:
  -p "<packages>"  The list of packages to install in the container.
                   The default is blank. Can use multiple times.
  -g "<groups>"    The groups of packages to install in the container.
                   The default is "Core". Can use multiple times.
  -y <yumconf>     The path to the yum config to install packages from. The
                   default is /etc/yum.conf for Centos/RHEL and /etc/dnf/dnf.conf for Fedora
  -t <tag>         Specify Tag information.
                   default is reffered at /etc/{redhat,system}-release
[staf@centos386 contrib]$

Build the image

The mkimage-yum.sh script uses /etc/yum.conf (or /etc/dnf/dnf.conf on Fedora) to build the image. Running mkimage-yum.sh <name> creates an image with that name.

[staf@centos386 contrib]$ sudo ./mkimage-yum.sh centos
[sudo] password for staf: 
+ mkdir -m 755 /tmp/mkimage-yum.sh.LeZQNh/dev
+ mknod -m 600 /tmp/mkimage-yum.sh.LeZQNh/dev/console c 5 1
+ mknod -m 600 /tmp/mkimage-yum.sh.LeZQNh/dev/initctl p
+ mknod -m 666 /tmp/mkimage-yum.sh.LeZQNh/dev/full c 1 7
+ mknod -m 666 /tmp/mkimage-yum.sh.LeZQNh/dev/null c 1 3
+ mknod -m 666 /tmp/mkimage-yum.sh.LeZQNh/dev/ptmx c 5 2
+ mknod -m 666 /tmp/mkimage-yum.sh.LeZQNh/dev/random c 1 8
+ mknod -m 666 /tmp/mkimage-yum.sh.LeZQNh/dev/tty c 5 0
+ mknod -m 666 /tmp/mkimage-yum.sh.LeZQNh/dev/tty0 c 4 0
+ mknod -m 666 /tmp/mkimage-yum.sh.LeZQNh/dev/urandom c 1 9
+ mknod -m 666 /tmp/mkimage-yum.sh.LeZQNh/dev/zero c 1 5
+ '[' -d /etc/yum/vars ']'
+ mkdir -p -m 755 /tmp/mkimage-yum.sh.LeZQNh/etc/yum
+ cp -a /etc/yum/vars /tmp/mkimage-yum.sh.LeZQNh/etc/yum/
+ [[ -n Core ]]
+ yum -c /etc/yum.conf --installroot=/tmp/mkimage-yum.sh.LeZQNh --releasever=/ --setopt=tsflags=nodocs --setopt=group_package_types=mandatory -y groupinstall Core
Loaded plugins: fastestmirror, langpacks
There is no installed groups file.
Maybe run: yum groups mark convert (see man yum)
<snip>
+ tar --numeric-owner -c -C /tmp/mkimage-yum.sh.LeZQNh .
+ docker import - centos:7.6.1810
sha256:7cdb02046bff4c5065de670604fb3252b1221c4853cb4a905ca04488f44f52a8
+ docker run -i -t --rm centos:7.6.1810 /bin/bash -c 'echo success'
success
+ rm -rf /tmp/mkimage-yum.sh.LeZQNh
[staf@centos386 contrib]$
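
The Fedora variant is mostly a matter of pointing the script at dnf’s configuration with the -y option from the usage message above; a hedged sketch (the fedora image name and the Fedora host prompt are just examples):

[staf@fedora32 contrib]$ sudo ./mkimage-yum.sh -y /etc/dnf/dnf.conf fedora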

Rename

A new image is created with the name centos.

[staf@centos386 contrib]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centos              7.6.1810            7cdb02046bff        3 minutes ago       281 MB
[staf@centos386 contrib]$ 

You might want to rename the image to include your user or project name. You can do this by retagging the image and removing the old image name.

[staf@centos386 contrib]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centos              7.6.1810            7cdb02046bff        20 seconds ago      281 MB
[staf@centos386 contrib]$ docker rmi centos
Error response from daemon: No such image: centos:latest
[staf@centos386 contrib]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centos              7.6.1810            7cdb02046bff        3 minutes ago       281 MB
[staf@centos386 contrib]$ docker tag 7cdb02046bff stafwag/centos_386:7.6.1810 
[staf@centos386 contrib]$ docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
centos               7.6.1810            7cdb02046bff        7 minutes ago       281 MB
stafwag/centos_386   7.6.1810            7cdb02046bff        7 minutes ago       281 MB
[staf@centos386 contrib]$ docker rmi centos:7.6.1810
Untagged: centos:7.6.1810
[staf@centos386 contrib]$ docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
stafwag/centos_386   7.6.1810            7cdb02046bff        8 minutes ago       281 MB
[staf@centos386 contrib]$ 

Test

[staf@centos386 contrib]$ docker run -it --rm stafwag/centos_386:7.6.1810 /bin/sh
sh-4.2# yum update -y
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: mirror.usenet.farm
 * extras: mirror.usenet.farm
 * updates: mirror.usenet.farm
base                                                                                                                   | 3.6 kB  00:00:00     
extras                                                                                                                 | 2.9 kB  00:00:00     
updates                                                                                                                | 2.9 kB  00:00:00     
(1/4): updates/7/i386/primary_db                                                                                       | 2.5 MB  00:00:00     
(2/4): extras/7/i386/primary_db                                                                                        | 157 kB  00:00:01     
(3/4): base/7/i386/group_gz                                                                                            | 166 kB  00:00:01     
(4/4): base/7/i386/primary_db                                                                                          | 4.6 MB  00:00:02     
No packages marked for update
sh-4.2# 

Have fun!

May 10, 2019

I published the following diary on isc.sans.edu: “DSSuite – A Docker Container with Didier’s Tools“:

If you follow us and read our daily diaries, you probably already know some famous tools developed by Didier (like oledump.py, translate.py and many more). Didier is using them all the time to analyze malicious documents. His tools are also used by many security analysts and researchers. The complete toolbox is available on his github.com page. You can clone the repository or download the complete package available as a zip archive. However, it’s not convenient to install them again every time you switch computers if, like me, you’re always on the road between different customers… [Read more]

[The post [SANS ISC] DSSuite – A Docker Container with Didier’s Tools has been first published on /dev/random]

May 08, 2019

This Thursday, 23 May 2019 at 7 p.m., the 78th Mons session of the Belgian Jeudis du Libre will take place.

The topic of this session: Chamilo: improving access to quality education everywhere in the world

Theme: education|training|e-learning

Audience: developers|teachers|companies

Speaker: Yannick Warnier (Chamilo)

Venue: Université de Mons, Campus Plaine de Nimy, avenue Maistriau, Grands Amphithéâtres, Auditoire Curie (see this map on the UMONS website, or the OSM map).

Attendance is free and only requires registering by name, preferably in advance, or at the entrance of the session. Please indicate your intention by signing up via the page http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also benefit from the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly cycle, feel free to check the agenda and subscribe to the mailing list in order to receive all the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organised on the premises of, and in collaboration with, the Mons universities and colleges involved in IT education (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: The Chamilo e-learning platform project was born in 2010 from the still-glowing embers of the Dokeos project, itself based on the Claroline project. In nine years, the project, born in Belgium and today a combined effort of Europeans and South Americans, has been ranked in every LMS comparison top 20 and in the top 3 of open source LMS solutions. It has already been used by more than 21 million users and registers at least 100,000 new users per month. It has now largely surpassed the two projects it stems from, both in popularity and in features.

Chamilo is today a complete suite of online training and education tools that lets you run online tests, store documents, organise structured courses, foster collaboration through forums, launch videoconference sessions within courses, generate certificates and digital badges, and much more that will be touched on.

But behind the scenes of a successful edtech project, developing a solution with more than 2 million lines of code was only made possible with a lot of sweat.

What is the secret of its success? How do you build commercial resilience on top of an open source (or, more precisely, free software) solution? Where does social responsibility end to make room for the revenue needed for growth?

These are the fascinating topics Yannick looks forward to presenting this Thursday, 23 May.

Short bio: Yannick Warnier, who graduated in Computer Science from the Institut Paul Lambin in 2003 and is a former member of the BxLUG, is the founder of the Chamilo project and the current president of the Chamilo Association. To combine work with passion, he also runs BeezNest, the leading provider of services around the Chamilo software, and develops its sister company in Latin America. He oversees the development of the software and takes part in the Association’s decisions that set the main directions of the project, beyond the software itself.

Acquia joins forces with Mautic

I'm happy to announce today that Acquia acquired Mautic, an open source marketing automation and campaign management platform.

A couple of decades ago, I was convinced that every organization required a website — a thought that sounds rather obvious now. Today, I am convinced that every organization will need a Digital Experience Platform (DXP).

Having a website is no longer enough: customers expect to interact with brands through their websites, email, chat and more. They also expect these interactions to be relevant and personalized.

If you don't know Mautic, think of it as an alternative to Adobe's Marketo or Salesforce's Marketing Cloud. Just like these solutions, Mautic provides marketing automation and campaign management capabilities. It's differentiated in that it is easier to use, supports one-to-one customer experiences across many channels, integrates more easily with other tools, and is less expensive.

The flowchart style visual campaign builder you saw in the beginning of the Mautic demo video above is one of my favorite features. I love how it allows marketers to combine content, user profiles, events and a decision engine to deliver the best-next action to customers.

Mautic is a relatively young company, but has quickly grown into the largest open source player in the marketing automation space, with more than 200,000 installations. Its ease of use, flexibility and feature completeness have won over many marketers in a very short time: the company's top-line grew almost 400 percent year-over-year, its number of customers tripled, and Mautic won multiple awards for product innovation and customer service.

The acquisition of Mautic accelerates Acquia's product strategy to deliver the only Open Digital Experience Platform:

The pieces that make up a Digital Experience Platform, and how Mautic fits into Acquia's Open Digital Experience Platform. Acquia is strong in content management, personalization, user profile management and commerce (yellow blocks). Mautic adds or improves Acquia's multi-channel delivery, campaign management and journey orchestration capabilities (purple blocks).

There are many reasons why we like Mautic, but here are my top 3:

Reason 1: Disrupting the market with "open"

Open Source will disrupt every component of the modern technology stack. It's not a matter of if, it's when.

Just as Drupal disrupted web content management with Open Source, we believe Mautic disrupts marketing automation.

With Mautic, Acquia is now the only open and open source alternative to the expensive, closed, and stagnant marketing clouds.

I'm both proud and excited that Acquia is doubling down on Open Source. Given our extensive open source experience, we believe we can help grow Mautic even faster.

Reason 2: Innovating through integrations

To build an optimal customer experience, marketers need to integrate with different data sources, customer technologies, and bespoke in-house platforms. Instead of buying a suite from a single vendor, most marketers want an open platform that allows for open innovation and unlimited integrations.

Only an open architecture can connect any technology in the marketing stack, and only an open source innovation model can evolve fast enough to offer integrations with thousands of marketing technologies (to date, there are 7,000 vendors in the martech landscape).

Because developers are largely responsible for creating and customizing marketing platforms, marketing technology should meet the needs of both business users and technology architects. Unlike other companies in the space, Mautic is loved by both marketers and developers. With Mautic, Acquia continues to focus on both personas.

Reason 3: The same technology stack and business model

Like Drupal, Mautic is built in PHP and Symfony, and like Drupal, Mautic uses the GNU GPL license. Having the same technology stack has many benefits.

Digital agencies or in-house teams need to deliver integrated marketing solutions. Because both Drupal and Mautic use the same technology stack, a single team of developers can work on both.

The similarities also make it possible for both open source communities to collaborate — while it is not something you can force to happen, it will be interesting to see how that dynamic naturally plays out over time.

Last but not least, our business models are also very aligned. Both Acquia and Mautic were "born in the cloud" and make money by offering subscription- and cloud-based delivery options. This means you pay for only what you need and that you can focus on using the products rather than running and maintaining them.

Mautic offers several commercial solutions:

  • Mautic Cloud, a fully managed SaaS version of Mautic with premium features not available in Open Source.
  • For larger organizations, Mautic has a proprietary product called Maestro. Large organizations operate in many regions or territories, and have teams dedicated to each territory. With Maestro, each territory can get its own Mautic instance, but they can still share campaign best-practices, and repeat successful campaigns across territories. It's a unique capability, which is very aligned with the Acquia Cloud Site Factory.

Try Mautic

If you want to try Mautic, you can either install the community version yourself or check out the demo or sandbox environment of Mautic Open Marketing Cloud.

Conclusion

We're very excited to join forces with Mautic. It is such a strategic step for Acquia. Together we'll provide our customers with more freedom, faster innovation, and more flexibility. Open digital experiences are the way of the future.

I've got a lot more to share about the Mautic acquisition, how we plan to integrate Mautic in Acquia's solutions, how we could build bridges between the Drupal and Mautic community, how it impacts the marketplace, and more.

In time, I'll write more about these topics on this blog. In the meantime, you can listen to this podcast with DB Hurley, Mautic's founder and CTO, and me.

May 05, 2019

In my previous post, we started with creating Debian based docker images from scratch for the i386 architecture.

In this blog post, we’ll create Arch GNU/Linux based images.

Arch GNU/Linux

Arch Linux stopped supporting i386 systems. If you want to run Arch Linux on an i386 system, there is the community-maintained Archlinux32 project and the free-software version Parabola GNU/Linux-libre.

For the ARM architecture, there is the Arch Linux ARM project, which I used.

mkimage-arch.sh in moby

I used mkimage-arch.sh from the Moby/Docker project in the past, but it failed when I tried it this time…

I created a small patch to fix it and opened a pull request. Until the issue is resolved, you can use the version in my cloned git repository.

Build the docker image

Install the required packages

Make sure that your system is up-to-date.

[staf@archlinux32 contrib]$ sudo pacman -Syu

Install the required packages.

[staf@archlinux32 contrib]$ sudo pacman -S arch-install-scripts expect wget

Directory

Create a directory that will hold the image data.

[staf@archlinux32 ~]$ mkdir -p dockerbuild/archlinux32
[staf@archlinux32 ~]$ cd dockerbuild/archlinux32
[staf@archlinux32 archlinux32]$ 

Get mkimage-arch.sh

[staf@archlinux32 archlinux32]$ wget https://raw.githubusercontent.com/stafwag/moby/master/contrib/mkimage-arch.sh
--2019-05-05 07:46:32--  https://raw.githubusercontent.com/stafwag/moby/master/contrib/mkimage-arch.sh
Loaded CA certificate '/etc/ssl/certs/ca-certificates.crt'
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.36.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.36.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3841 (3.8K) [text/plain]
Saving to: 'mkimage-arch.sh'

mkimage-arch.sh                 100%[====================================================>]   3.75K  --.-KB/s    in 0s      

2019-05-05 07:46:33 (34.5 MB/s) - 'mkimage-arch.sh' saved [3841/3841]

[staf@archlinux32 archlinux32]$ 

Make it executable.

[staf@archlinux32 archlinux32]$ chmod +x mkimage-arch.sh
[staf@archlinux32 archlinux32]$ 

Set up your pacman.conf

Copy your pacman.conf to the directory that holds mkimage-arch.sh.

[staf@archlinux32 contrib]$ cp /etc/pacman.conf mkimage-arch-pacman.conf
[staf@archlinux32 contrib]$ 

Build your image

[staf@archlinux32 archlinux32]$ TMPDIR=`pwd` sudo ./mkimage-arch.sh
spawn pacstrap -C ./mkimage-arch-pacman.conf -c -d -G -i /var/tmp/rootfs-archlinux-wqxW0uxy8X base bash haveged pacman pacman-mirrorlist --ignore dhcpcd,diffutils,file,inetutils,iproute2,iputils,jfsutils,licenses,linux,linux-firmware,lvm2,man-db,man-pages,mdadm,nano,netctl,openresolv,pciutils,pcmciautils,psmisc,reiserfsprogs,s-nail,sysfsutils,systemd-sysvcompat,usbutils,vi,which,xfsprogs
==> Creating install root at /var/tmp/rootfs-archlinux-wqxW0uxy8X
==> Installing packages to /var/tmp/rootfs-archlinux-wqxW0uxy8X
:: Synchronizing package databases...
 core                                              198.0 KiB   676K/s 00:00 [##########################################] 100%
 extra                                               2.4 MiB  1525K/s 00:02 [##########################################] 100%
 community                                           6.3 MiB   396K/s 00:16 [##########################################] 100%
:: dhcpcd is in IgnorePkg/IgnoreGroup. Install anyway? [Y/n] n
:: diffutils is in IgnorePkg/IgnoreGroup. Install anyway? [Y/n] n
:: file is in IgnorePkg/IgnoreGroup. Install anyway? [Y/n] n
<snip>
==> WARNING: /var/tmp/rootfs-archlinux-wqxW0uxy8X is not a mountpoint. This may have undesirable side effects.
Generating locales...
  en_US.UTF-8... done
Generation complete.
tar: ./etc/pacman.d/gnupg/S.gpg-agent.ssh: socket ignored
tar: ./etc/pacman.d/gnupg/S.gpg-agent.extra: socket ignored
tar: ./etc/pacman.d/gnupg/S.gpg-agent: socket ignored
tar: ./etc/pacman.d/gnupg/S.gpg-agent.browser: socket ignored
sha256:41cd9d9163a17e702384168733a9ca1ade0c6497d4e49a2c641b3eb34251bde1
Success.
[staf@archlinux32 archlinux32]$ 

Rename

A new image is created with the name archlinux.

[staf@archlinux32 archlinux32]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
archlinux           latest              18e74c4d823c        About a minute ago   472MB
[staf@archlinux32 archlinux32]$ 

You might want to rename it. You can do this by retagging the image and removing the old image name.

[staf@archlinux32 archlinux32]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
archlinux           latest              18e74c4d823c        About a minute ago   472MB
[staf@archlinux32 archlinux32]$ docker tag stafwag/archlinux:386 18e74c4d823c
Error response from daemon: No such image: stafwag/archlinux:386
[staf@archlinux32 archlinux32]$ docker tag 18e74c4d823c stafwag/archlinux:386             
[staf@archlinux32 archlinux32]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
archlinux           latest              18e74c4d823c        3 minutes ago       472MB
stafwag/archlinux   386                 18e74c4d823c        3 minutes ago       472MB
[staf@archlinux32 archlinux32]$ docker rmi archlinux
Untagged: archlinux:latest
[staf@archlinux32 archlinux32]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
stafwag/archlinux   386                 18e74c4d823c        3 minutes ago       472MB
[staf@archlinux32 archlinux32]$ 

Test

[staf@archlinux32 archlinux32]$ docker run --rm -it stafwag/archlinux:386 /bin/sh
sh-5.0# pacman -Syu
:: Synchronizing package databases...
 core is up to date
 extra is up to date
 community is up to date
:: Starting full system upgrade...
 there is nothing to do
sh-5.0# 

Have fun!

May 04, 2019

The post Enable the RPC JSON API with password authentication in Bitcoin Core appeared first on ma.ttias.be.

The bitcoin daemon has a very useful & easy-to-use HTTP API built-in, that allows you to talk to it like a simple webserver and get JSON responses back.

By default it's enabled, but it only listens on localhost port 8332, and it's unauthenticated.

$ netstat -alpn | grep 8332
tcp     0   0 127.0.0.1:8332     0.0.0.0:*    LISTEN      31667/bitcoind
tcp6    0   0 ::1:8332           :::*         LISTEN      31667/bitcoind

While useful if you're on the same machine (you can query it locally without username/password), it won't help much if you're querying a remote node.

In order to allow bitcoind to bind on a public-facing IP and have username/password authentication, you can modify the bitcoin.conf.

$ cat .bitcoin/bitcoin.conf
# Expose the RPC/JSON API
server=1
rpcbind=10.0.1.5
rpcallowip=0.0.0.0/0
rpcport=8332
rpcuser=bitcoin
rpcpassword=J9JkYnPiXWqgRzg3vAA

If you restart your daemon with this config, it will try to bind to IP "10.0.1.5" and open the RPC JSON API endpoint on its default port 8332. To authenticate, you'd give the user & password as shown in the config.

If you do not pass the rpcallowip parameter, the server won't bind on the requested IP, as confirmed in the manpage:

-rpcbind=<addr>[:port]
Bind to given address to listen for JSON-RPC connections. Do not expose
the RPC server to untrusted networks such as the public internet!
This option is ignored unless -rpcallowip is also passed. Port is
optional and overrides -rpcport. Use [host]:port notation for
IPv6. This option can be specified multiple times (default:
127.0.0.1 and ::1 i.e., localhost)

Keep in mind that it's a lot safer to actually pass the allowed IPs and treat it as a whitelist, rather than as a workaround to listen on all IPs like I did above.
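
For example, a more restrictive bitcoin.conf could whitelist only a management subnet instead of 0.0.0.0/0; the subnet below is just a placeholder:

# Only accept RPC connections from the management subnet
rpcallowip=10.0.1.0/24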

Here's an example of a curl call to query the daemon.

$ curl \
  --user bitcoin \
  --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "getnetworkinfo", "params": [] }' \ 
  -H 'content-type: text/plain;' \
  http://10.0.1.5:8332/
Enter host password for user 'bitcoin':

{
  "result":
    {
      "version":180000,
      ...
    }
}

You can now safely query your bitcoin daemon with authentication.
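
If bitcoin-cli is installed on the remote machine, the same credentials work there as well; a hedged example reusing the IP, user and password from the config above:

$ bitcoin-cli -rpcconnect=10.0.1.5 -rpcport=8332 -rpcuser=bitcoin -rpcpassword=J9JkYnPiXWqgRzg3vAA getnetworkinfo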

The post Enable the RPC JSON API with password authentication in Bitcoin Core appeared first on ma.ttias.be.

May 02, 2019

The post How to upgrade to the latest Bitcoin Core version appeared first on ma.ttias.be.

This guide assumes you've followed my other guide, where you compile Bitcoin Core from source. These steps will allow you to upgrade a running Bitcoin Core node to the latest version.

Stop current Bitcoin Core node

First, stop the running version of the Bitcoin Core daemon.

$ su - bitcoin
$ bitcoin-cli stop
Bitcoin server stopping

Check to make sure your node is stopped.

$ bitcoin-cli status
error: Could not connect to the server 127.0.0.1:8332

Once that's done, download & compile the new version.

Install the latest version of Bitcoin Core

To do so, you can follow all other steps to self-compile Bitcoin Core.

In those steps, there's a part where you do a git checkout to a specific version. Change that to refer to the latest release. In this example, we'll upgrade to 0.18.0.

$ git clone https://github.com/bitcoin/bitcoin.git
$ cd bitcoin
$ git checkout v0.18.0

And compile Bitcoin Core using the same steps as before.

$ ./autogen.sh
$ ./configure
$ make -j $(nproc)

Once done, you have the latest version of Bitcoin Core.

Start the Bitcoin Core daemon

Start it again as normal:

$ bitcoind --version
Bitcoin Core Daemon version v0.18.0
$ bitcoind -daemon
Bitcoin server starting

Check the logs to make sure everything is OK.

$ tail -f ~/.bitcoin/debug.log
Bitcoin Core version v0.18.0 (release build)
Assuming ancestors of block 0000000000000000000f1c54590ee18d15ec70e68c8cd4cfbadb1b4f11697eee have valid signatures.
Setting nMinimumChainWork=0000000000000000000000000000000000000000051dc8b82f450202ecb3d471
Using the 'sse4(1way),sse41(4way),avx2(8way)' SHA256 implementation
Using RdSeed as additional entropy source
Using RdRand as an additional entropy source
Default data directory /home/bitcoin/.bitcoin
...

And you're now running the latest version.
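
As an extra sanity check, you can also ask the running daemon which version it reports over RPC; for v0.18.0 this should print 180000:

$ bitcoin-cli getnetworkinfo | grep '"version"'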

The post How to upgrade to the latest Bitcoin Core version appeared first on ma.ttias.be.

I published the following diary on isc.sans.edu: “Another Day, Another Suspicious UDF File“:

In my last diary, I explained that I found a malicious UDF image used to deliver a piece of malware. After this, I created a YARA rule on VT to try to spot more UDF files in the wild. It seems like the tool ImgBurn is the attacker’s best friend to generate such malicious images. To find more UDF images, I used the following very simple YARA rule… [Read more]

[The post [SANS] Another Day, Another Suspicious UDF File has been first published on /dev/random]

May 01, 2019

The post I forgot how to manage a server appeared first on ma.ttias.be.

Something embarrassing happened to me the other day. I was playing around with a new server on Digital Ocean and it occurred to me: I had no idea how to manage it.

This is slightly awkward because I've been a sysadmin for over 10 years, the largest part of my professional career.

The spoils of configuration management

The thing is, I've been writing and using config management for the last six or so years. I've shared many blog posts regarding Puppet, some of its design patterns, and even pretended to hold all the knowledge by sharing my lessons learned after three years of using Puppet.

And now I've come to the point where I no longer know how to install, configure or run software without Puppet.

My config management does this for me. Whether it's Puppet, Ansible, Chef, ... all of the boring parts of being a sysadmin have been hidden behind management tools. Yet here I am, trying to quickly configure a personal server, without my company-managed config management to aid me.

Boy did I feel useless.

I had to Google the correct SSH config syntax to allow root logins, but only via public keys. I had to Google the iptables rule syntax and how to use ufw to manage the rules. I forgot where I had to place the configs for supervisor for running jobs, let alone how to write the config.

I know these configs in our tools. In our abstractions. In our automation. But I forgot what it's like in Linux itself.

A pitfall to remember

I previously blogged about 2 pitfalls I had already known about: trying to automate a service you don't fully understand or blindly trusting the automation of someone else, not understanding what it's doing under the hood.

I should add a third one to my pitfall list: I'm forgetting the basic and core tools used to manage a Linux server.

Is this a bad thing though? I'm not sure yet. Maybe the lower level knowledge isn't that valuable, as long as we have our automation standby to take care of this? It frees us from having to think about too many things and allows us to focus on the more important aspects of being a sysadmin.

But it sure felt strange Googling things I had to Google almost a decade ago.

The post I forgot how to manage a server appeared first on ma.ttias.be.

April 30, 2019

The Drupal Association announced today that Heather Rocker has been selected as its next Executive Director.

This is exciting news because it concludes a seven month search since Megan Sanicki left.

We looked long and hard for someone who could help us grow the global Drupal community by building on its diversity, working with developers and agency partners, and expanding our work with new audiences such as content creators and marketers.

The Drupal Association (including me) believes that Heather can do all of that, and is the best person to help lead Drupal into its next phase of growth.

Heather earned her engineering degree from Georgia Tech. She has dedicated much of her career to working with women in technology, both as the CEO of Girls, Inc. of Greater Atlanta and the Executive Director of Women in Technology.

We were impressed not only with her valuable experience with volunteer organizations, but also her work in the private sector with large customers. Most recently, Heather was part of the management team at Systems Evolution, a team of 250 business consultants, where she specialized in sales operations and managed key client relationships.

She is also a robotics fanatic who organizes and judges competitions for children. So, maybe we’ll see some robots roaming around DrupalCon in the future!

As you can tell, Heather will bring a lot of great experience to the Drupal community and I look forward to partnering with her.

Last but not least, I want to thank Tim Lehnen for serving as our Interim Executive Director. He did a fantastic job leading the Drupal Association through this transition.

How merely having a Facebook account made me unreachable for five years for several dozen readers of my blog

Not being on Facebook, or leaving it, is often a subject of debate: "But how will people contact you? How will you stay in touch?" Everything seems to boil down to an agonising choice: protect your privacy, or be reachable by ordinary mortals.

I have just realised that this is a false debate. Just as Facebook offers the illusion of popularity and audience through likes, online availability is illusory. Worse! I discovered that being on Facebook had made me less reachable for a whole category of readers of my blog!

There is one very simple reason why I keep a Facebook Messenger account: it is where the few Belgian freedivers organise their outings. If I'm not there, I miss the outings, as simple as that. So I installed the Messenger Lite app for the sole purpose of being able to go diving. While digging through the app's options, I discovered a well-hidden sub-section called "Filtered requests".

There I found several dozen messages that had been sent to me since 2013. More than five years of messages I didn't know existed! Mostly reactions, not always positive, to my blog posts. Others were more factual. They came from conference organisers, from people I had met who wanted to stay in touch. All of these messages, without exception, would have had their place in my mailbox.

I didn't know they existed. At no point did Facebook tell me about these messages or give me a chance to reply, even though some of them date from a time when I was very active on that network and the app was installed on my phone.

To all these people, Facebook gave the illusion that they had contacted me. That I was reachable. To everyone who ever posted a comment under one of my automatically published posts, Facebook gave the impression of being in touch with me.

To me, a public figure, Facebook gave the illusion that I could be reached, that those who didn't use email could contact me.

We were all deceived.

It is time to lift the veil. Facebook doesn't offer a service, it offers the illusion of a service. An illusion that may well be what many people are looking for. The illusion of having friends, of having a social life, recognition, a certain success. But if you are not looking for an illusion, then it is time to flee Facebook. It is not easy, because the illusion is strong. Just as worshippers of the Bible claim it is the ultimate truth "because it is written in the Bible", Facebook users feel heard and listened to because "Facebook tells me I have been seen".

That is, in fact, Facebook's one and only goal: to make us believe we are connected, whether it is true or not.

For every piece of content posted, the Facebook algorithm will try to find the few users most likely to comment. And encourage the others to click like without even reading what is going on, just because the photo is pretty or because one user spontaneously likes the posts of another. In the end, a conversation between 5 individuals punctuated by 30 likes will feel like nationwide resonance. Celebrities are the exception: they will harvest tens of thousands of likes and messages because they are celebrities, whatever the platform.

Facebook gives us a small feeling of celebrity and vainglory through a few likes; Facebook gives us the impression of belonging to a tribe, of having social relationships.

It is undeniable that Facebook also has positive effects and enables exchanges that would not exist otherwise. But, to paraphrase Cal Newport in his book Digital Minimalism: isn't the price we pay too high for the benefits we get out of it?

I would add: do we really get benefits? Or just the illusion of them?

Photo by Aranka Sinnema on Unsplash

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE licence.

April 28, 2019

Puppet CA/puppetmasterd cert renewal

While we're still converting our puppet-controlled infra to Ansible, we still have some nodes "controlled" by puppet, as converting some roles isn't something that can be done in just one or two days. Add to that other items in your backlog that all have priority #1, and time flies by until you see this for your existing legacy puppet environment (assuming a fake FQDN here, but you'll get the idea):

Warning: Certificate 'Puppet CA: puppetmasterd.domain.com' will expire on 2019-05-06T12:12:56UTC
Warning: Certificate 'puppetmasterd.domain.com' will expire on 2019-05-06T12:12:56UTC

So, as long as your PKI setup for puppet is still valid, you can act in advance: re-sign/extend the CA and puppetmasterd certificates, distribute the newer CA cert to the agents, and go back to the other items in your backlog while still converting from puppet to Ansible (at least in our case).

Puppetmasterd/CA

Before anything else (in case you don't back this up already, but you should), let's take a backup on the Puppet CA (in our case it's a Foreman-driven puppetmasterd, so the foreman host is where all of this will happen, YMMV):

tar cvzf /root/puppet-ssl-backup.tar.gz /var/lib/puppet/ssl/

CA itself

We first need to regenerate the CSR for the CA cert and sign it again. Ideally, we first confirm that ca_key.pem and the existing ca_crt.pem "match" by comparing their moduli (they should be equal):

cd /var/lib/puppet/ssl/ca
( openssl rsa -noout -modulus -in ca_key.pem  2> /dev/null | openssl md5 ; openssl x509 -noout -modulus -in ca_crt.pem  2> /dev/null | openssl md5 ) 

(stdin)= cbc4d35f58b28ad7c4dca17bd4408403
(stdin)= cbc4d35f58b28ad7c4dca17bd4408403

As that is the case, we can now regenerate a CSR from that private key and the existing certificate:

openssl x509 -x509toreq -in ca_crt.pem -signkey ca_key.pem -out ca_csr.pem
Getting request Private Key
Generating certificate request

Now that we have the CSR for the CA, we need to sign it again, but we have to add extensions:

cat > extension.cnf << EOF
[CA_extensions]
basicConstraints = critical,CA:TRUE
nsComment = "Puppet Ruby/OpenSSL Internal Certificate"
keyUsage = critical,keyCertSign,cRLSign
subjectKeyIdentifier = hash
EOF

And now archive the old CA cert and sign the (new) extended one:

cp ca_crt.pem ca_crt.pem.old
openssl x509 -req -days 3650 -in ca_csr.pem -signkey ca_key.pem -out ca_crt.pem -extfile extension.cnf -extensions CA_extensions
Signature ok
subject=/CN=Puppet CA: puppetmasterd.domain.com
Getting Private key

openssl x509 -in ca_crt.pem -noout -text|grep -A 3 Validity
 Validity
            Not Before: Apr 29 08:25:49 2019 GMT
            Not After : Apr 26 08:25:49 2029 GMT

Puppetmasterd server

We also have to regenerate the CSR from the existing cert (assuming the FQDN in our cert is also the currently set hostname):

cd /var/lib/puppet/ssl
openssl x509 -x509toreq -in certs/$(hostname).pem -signkey private_keys/$(hostname).pem -out certificate_requests/$(hostname)_csr.pem
Getting request Private Key
Generating certificate request

Now that we have the CSR, we can sign it with the new CA:

cp certs/$(hostname).pem certs/$(hostname).pem.old #Backing up
openssl x509 -req -days 3650 -in certificate_requests/$(hostname)_csr.pem -CA ca/ca_crt.pem \
  -CAkey ca/ca_key.pem -CAserial ca/serial -out certs/$(hostname).pem
Signature ok  

Validate that the puppetmasterd key and the new cert match (so the crt and private key are consistent):

( openssl rsa -noout -modulus -in private_keys/$(hostname).pem  2> /dev/null | openssl md5 ; openssl x509 -noout -modulus -in certs/$(hostname).pem 2> /dev/null | openssl md5 )

(stdin)= 0ab385eb2c6e9e65a4ed929a2dd0dbe5
(stdin)= 0ab385eb2c6e9e65a4ed929a2dd0dbe5

It all seems good, so let's restart puppetmasterd/httpd (foreman launches puppetmasterd for us):

systemctl restart puppet

Puppet agents

From this point on, puppet agents will no longer complain about the puppetmasterd cert, but they will still warn that the CA itself will expire soon:

Warning: Certificate 'Puppet CA: puppetmasterd.domain.com' will expire on 2019-05-06T12:12:56GMT

But as we now have the new ca_crt.pem on the puppetmasterd/foreman side, we can just distribute it to the clients (through puppet, ansible or whatever) and then everything will continue to work.

cd /var/lib/puppet/ssl/certs
mv ca.pem ca.pem.old

And now distribute the new ca_crt.pem as ca.pem here

A puppet snippet for this (in our puppet::agent class):

 file { '/var/lib/puppet/ssl/certs/ca.pem': 
   source => 'puppet:///puppet/ca_crt.pem', 
   owner => 'puppet', 
   group => 'puppet', 
   require => Package['puppet'],
 }
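
If some agents are already managed from Ansible instead, an ad-hoc copy does the same job; a hedged sketch (the puppet_agents inventory group and the local path of the new CA cert are assumptions):

ansible puppet_agents -b -m copy \
  -a "src=/var/lib/puppet/ssl/ca/ca_crt.pem dest=/var/lib/puppet/ssl/certs/ca.pem owner=puppet group=puppet mode=0644"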

The next time you run "puppet agent -t", or when the agent contacts puppetmasterd, it will apply the new CA cert, and on the next run there will be no more warnings or issues:

Info: Computing checksum on file /var/lib/puppet/ssl/certs/ca.pem
Info: /Stage[main]/Puppet::Agent/File[/var/lib/puppet/ssl/certs/ca.pem]: Filebucketed /var/lib/puppet/ssl/certs/ca.pem to puppet with sum c63b1cc5a39489f5da7d272f00ec09fa
Notice: /Stage[main]/Puppet::Agent/File[/var/lib/puppet/ssl/certs/ca.pem]/content: content changed '{md5}c63b1cc5a39489f5da7d272f00ec09fa' to '{md5}e3d2e55edbe1ad45570eef3c9ade051f'

Hope it helps

I've loved this song since I was 15 years old, so after 25 years it definitely deserves a place in my favorite music list. When I watched this recording, I stopped breathing for a while. Beautifully devastating. Don't mix this song with alcohol.

April 25, 2019

After reading several of his books, I had the opportunity to meet the engineer-philosopher Luc de Brabandere. Over an orange juice, between a conversation about cycling and one about blockchain, we discussed his presence on the Écolo lists for the upcoming European elections and the pressure from those around him to get onto social networks.

"People ask me how I expect to win votes if I'm not on social networks," he confided to me.

It's an issue I know well, having myself created a Facebook account in 2012 for the sole purpose of standing in elections for the Pirate Party. If, a few years ago, not being on social networks was seen as being out of step with the times, are things any different today? Haven't Facebook accounts become the equivalent of posters plastered everywhere? In politics, there is a saying that posters don't win votes, but that not having posters can lose them.

Perhaps for a professional politician whose only goal is to get elected at any cost, social networks are as indispensable as shaking hands in cafés and at markets. But Luc clearly does not fall into that category. His position as 4th substitute on the European list makes his election mathematically improbable.

He doesn't hide it, stating that he is a candidate above all to reconcile ecology and the economy. But wouldn't social networks be precisely a way to spread his ideas?

I think it's the opposite.

Luc's ideas are amply accessible through his books, his writings, his talks. Social networks grind up rigour and subtlety of analysis and turn them into an ideological substitute, into ready-to-share content. On social networks, the thinker becomes a salesman and an advertiser. The numbers go to your head and force you into a saraband of likes, into an illusory impression of fake popularity, skilfully orchestrated.

Whether we like it or not, the tool transforms us. Social networks are designed to keep us from thinking, from reflecting. They trade our soul and our critical mind for a few numbers showing growth in followers, clicks or likes. They keep calling us back, invade us and mould us to a consumerist ideal. They serve as outlets for our anger and our ideas, the better to let them sink into oblivion once the flash fire of the buzz has gone out.

In short, social networks are the politician's dream tool. And the philosopher's sworn enemy.

An election campaign will turn anyone into a true politician thirsty for power and popularity. And what are social networks if not a permanent campaign for each of us?

Not being on social networks is, for the voter that I am, a strong electoral statement. Taking the time to set down one's ideas in books, a blog, videos or any medium outside social networks should be the foundation of the job of anyone who thinks about public affairs. I often cite the example of the Liège city councillor François Schreuer who, through his blog and his website, transcends the politico-political debate to try to grasp the very essence of the issues, bringing real citizen transparency to the obscure debates of a city council.

Then there remains the question of whether Luc's presence on the Écolo lists is anything more than a vote-catcher, and whether his ideas will have any influence once the elections are over. There is probably some of that. Before voting for him, you have to ask yourself whether you want to vote for Philippe Lamberts, who heads the list.

I have many disagreements with Écolo at the local or regional level, but I note that it is the party that best represents me in Europe on the subjects dear to me: privacy, copyright, control of the Internet, the ecological transition, basic income, questioning the place of work.

If I could vote for Julia Reda, of the German Pirate Party, the question wouldn't even arise, because I admire her work. But the European electoral system being what it is, I believe my vote will go to Luc de Brabandere.

Among other reasons, because he is not on social networks.

Photo by Elijah O’Donnell on Unsplash

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE licence.

April 22, 2019

The post Retrieving the Genesis block in Bitcoin with bitcoin-cli appeared first on ma.ttias.be.

If you run a Bitcoin full node, you have access to every transaction and block that was ever created on the network. This also allows you to look at the content of, say, the genesis block: the first block ever created, over 10 years ago.

Retrieving the genesis block

First, you can ask for the block hash by providing the block height. As with everything in computer science, arrays and block counts start at 0.

You use the getblockhash command to find the correct hash.

$ bitcoin-cli getblockhash 0
000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f

Now you have the block hash that matches with the first ever block.

You can now request the full content of that block using the getblock command.

$ bitcoin-cli getblock 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f
{
  "hash": "000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f",
  "confirmations": 572755,
  "strippedsize": 285,
  "size": 285,
  "weight": 1140,
  "height": 0,
  "version": 1,
  "versionHex": "00000001",
  "merkleroot": "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b",
  "tx": [
    "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b"
  ],
  "time": 1231006505,
  "mediantime": 1231006505,
  "nonce": 2083236893,
  "bits": "1d00ffff",
  "difficulty": 1,
  "chainwork": "0000000000000000000000000000000000000000000000000000000100010001",
  "nTx": 1,
  "nextblockhash": "00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048"
}

This is the only block that doesn't have a previousblockhash; all other blocks have one, as they form the chain itself. The first block can't have a previous one.
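
You can check this by looking at block 1, whose hash is the nextblockhash shown above; its previousblockhash should point back to the genesis block:

$ bitcoin-cli getblockhash 1
00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048
$ bitcoin-cli getblock 00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048 | grep previousblockhash
  "previousblockhash": "000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f",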

Retrieving the first and only transaction from the genesis block

In this block, there is only one transaction included: the one with the hash 4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b. This is a coinbase transaction, the block reward for the miner who found this block (50 BTC).

[...]
  "tx": [
    "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b"
  ],
[...]

Let's have a look at what's in there, shall we?

$ bitcoin-cli getrawtransaction 4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b
The genesis block coinbase is not considered an ordinary transaction and cannot be retrieved

Ah, sucks! This is a special kind of transaction, but we'll see a way to find the details of it later on.

Getting more details from the genesis block

We retrieved the block details using the getblock command, but there is actually more detail in that block than initially shown. You can get more verbose output by adding a 2 at the end of the command, indicating you want a JSON object with transaction data.

$ bitcoin-cli getblock 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f 2
{
  "hash": "000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f",
  "confirmations": 572758,
  "strippedsize": 285,
  "size": 285,
  "weight": 1140,
  "height": 0,
  "version": 1,
  "versionHex": "00000001",
  "merkleroot": "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b",
  "tx": [
    {
      "txid": "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b",
      "hash": "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b",
      "version": 1,
      "size": 204,
      "vsize": 204,
      "weight": 816,
      "locktime": 0,
      "vin": [
        {
          "coinbase": "04ffff001d0104455468652054696d65732030332f4a616e2f32303039204368616e63656c6c6f72206f6e206272696e6b206f66207365636f6e64206261696c6f757420666f722062616e6b73",
          "sequence": 4294967295
        }
      ],
      "vout": [
        {
          "value": 50.00000000,
          "n": 0,
          "scriptPubKey": {
            "asm": "04678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5f OP_CHECKSIG",
            "hex": "4104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac",
            "reqSigs": 1,
            "type": "pubkey",
            "addresses": [
              "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"
            ]
          }
        }
      ],
      "hex": "01000000010000000000000000000000000000000000000000000000000000000000000000ffffffff4d04ffff001d0104455468652054696d65732030332f4a616e2f32303039204368616e63656c6c6f72206f6e206272696e6b206f66207365636f6e64206261696c6f757420666f722062616e6b73ffffffff0100f2052a01000000434104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac00000000"
    }
  ],
  "time": 1231006505,
  "mediantime": 1231006505,
  "nonce": 2083236893,
  "bits": "1d00ffff",
  "difficulty": 1,
  "chainwork": "0000000000000000000000000000000000000000000000000000000100010001",
  "nTx": 1,
  "nextblockhash": "00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048"
}

Aha, that's more info!

Now, you'll notice there is a section with the details of the coinbase transaction. It shows the 50 BTC block reward, and even though we can't retrieve it with getrawtransaction, the data is still present in the genesis block.

      "vout": [
        {
          "value": 50.00000000,
          "n": 0,
          "scriptPubKey": {
            "asm": "04678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5f OP_CHECKSIG",
            "hex": "4104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac",
            "reqSigs": 1,
            "type": "pubkey",
            "addresses": [
              "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"
            ]
          }
        }
      ],

Satoshi's Embedded Secret Message

I've always heard that Satoshi encoded a secret message in the genesis block. Let's find it.

In our extensive output, there's a hex line in the block.

"hex": "01000000010000000000000000000000000000000000000000000000000000000000000000ffffffff4d04ffff001d0104455468652054696d65732030332f4a616e2f32303039204368616e63656c6c6f72206f6e206272696e6b206f66207365636f6e64206261696c6f757420666f722062616e6b73ffffffff0100f2052a01000000434104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac00000000"

If we transform this hexadecimal format to a more readable ASCII form, we get this:

$ echo "01000000010000000000000000000000000000000000000000000000000000000000000000ffffffff
4d04ffff001d0104455468652054696d65732030332f4a616e2f32303039204368616e63656c6c6f72206f6e20
6272696e6b206f66207365636f6e64206261696c6f757420666f722062616e6b73ffffffff0100f2052a010000
00434104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f3
5504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac00000000" | xxd -r -p

����M��EThe Times 03/Jan/2009 Chancellor on brink of second bailout for banks�����*CAg���UH'g�q0�\֨(�9	�yb��a޶I�?L�8��U���\8M�
        �W�Lp+k�_�

This confirms there is indeed a message in the form of "The Times 03/Jan/2009 Chancellor on brink of second bailout for banks", referring to a newspaper headline at the time of the genesis block.
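
If you have jq installed, you can also pull just the coinbase script out of the verbose block output and decode it, skipping most of the binary noise around the headline; a hedged one-liner:

$ bitcoin-cli getblock 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f 2 \
    | jq -r '.tx[0].vin[0].coinbase' | xxd -r -p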

The post Retrieving the Genesis block in Bitcoin with bitcoin-cli appeared first on ma.ttias.be.

The post Requesting certificates with Let’s Encrypt’s official certbot client appeared first on ma.ttias.be.

There are plenty of guides on this already, but I recently used the Let's Encrypt certbot client manually again (instead of through already-automated systems) and figured I'd write up the commands for myself. Just in case.

$ git clone https://github.com/letsencrypt/letsencrypt.git /opt/letsencrypt
$ cd /opt/letsencrypt

Now that the client is available on the system, you can request new certificates. If the DNS is already pointing to this server, it's super easy with the webroot validation.

$ /opt/letsencrypt/letsencrypt-auto certonly --expand \
  --email you@domain.tld --agree-tos \
  --webroot -w /var/www/vhosts/yoursite.tld/htdocs/public/ \
  -d yoursite.tld \
  -d www.yoursite.tld

You can add multiple domains with the -d flag and point it to the right document root using the -w flag.

After that, you'll find your certificates in

$ ls -alh /etc/letsencrypt/live/yoursite.tld/*
/etc/letsencrypt/live/yoursite.tld/cert.pem -> ../../archive/yoursite.tld/cert1.pem
/etc/letsencrypt/live/yoursite.tld/chain.pem -> ../../archive/yoursite.tld/chain1.pem
/etc/letsencrypt/live/yoursite.tld/fullchain.pem -> ../../archive/yoursite.tld/fullchain1.pem
/etc/letsencrypt/live/yoursite.tld/privkey.pem -> ../../archive/yoursite.tld/privkey1.pem

You can now use these certs in whichever webserver or application you like.
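
Keep in mind that Let's Encrypt certificates are only valid for 90 days, so you'll want to renew them regularly. A minimal sketch using the same /opt/letsencrypt checkout as above (certbot only renews certificates that are close to expiry):

# Renew anything that is due for renewal; --quiet suppresses output unless something fails.
$ /opt/letsencrypt/letsencrypt-auto renew --quiet

# Example cron entry to run that check once a day; reload your webserver afterwards
# so it picks up the renewed certificate files.
15 3 * * * /opt/letsencrypt/letsencrypt-auto renew --quiet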

The post Requesting certificates with Let’s Encrypt’s official certbot client appeared first on ma.ttias.be.

Autoptimize 2.5 was released earlier today (April 22nd).

The main focus of this release is more love for image optimization, now on a separate tab and including lazyload and WebP support.

There are lots of other bugfixes and smaller improvements too of course, e.g. an option to disable the minification of excluded CSS/ JS (which 2.4 did by default).

No Easter eggs in there though :-)

I was using docker on an Odroid U3, but my Odroid stopped working. I switched to another system that is i386 only.

You’ll find my journey to build docker images for i386 below.

Reasons to build your own docker images

If you want to use docker you can start with the images on the Docker Hub. Still, there are several reasons to build your own base images.

  • Security

The first reason is security: docker images are not signed by default.

Anyone can upload docker images with bugs or malicious code to the public Docker Hub.

There are “official” docker images available at https://docs.docker.com/docker-hub/official_images/. When you execute a docker search, the official images are flagged in the OFFICIAL column and are also signed by Docker. To allow only signed docker images, you need to set the DOCKER_CONTENT_TRUST=1 environment variable (this should be the default, IMHO).

There is one distinction: the “official” docker images are signed by the “Repo admin” of the Docker Hub, not by the official GNU/Linux distribution project. If you want to trust the official project instead of the Docker repo admin, you can resolve this by building your own images.
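
As a quick sketch, this is what enforcing content trust looks like on the command line (the image name is only an example):

# With content trust enabled, docker pull refuses images that are not signed.
$ export DOCKER_CONTENT_TRUST=1
$ docker pull debian:stretch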

  • Support other architectures

Docker images are generally built for the AMD64 architecture. If you want to use other architectures - ARM, Power, SPARC or even i386 - you’ll find some images on the Docker Hub, but these are usually not official docker images.

  • Control

When you build your own images, you have more control over what does or does not go into the image.

Building your own docker base images

There are several ways to build your own docker images.

The Moby project is Docker’s upstream development project - a bit like what Fedora is to Red Hat. The Moby project has a few scripts that help you to create docker base images, and it is also a good place to start if you want to review how to build your own images.

GNU/Linux distributions

I build the images on the same GNU/Linux distribution (e.g. the Debian images are built on a Debian system) to get the correct GPG signing keys.

Debian GNU/Linux & Co

Debian GNU/Linux makes it very easy to build your own Docker base images. Only debootstrap is required. I’ll use the moby script to build the Debian base image, and plain debootstrap to build an i386 Ubuntu 18.04 docker image.

Ubuntu doesn’t support i386 officially, but it still includes the i386 userland, so it’s possible to build i386 Docker images.

Clone moby

staf@whale:~/github$ git clone https://github.com/moby/moby
Cloning into 'moby'...
remote: Enumerating objects: 265639, done.
remote: Total 265639 (delta 0), reused 0 (delta 0), pack-reused 265640
Receiving objects: 99% (265640/265640), 137.75 MiB | 3.05 MiB/s, done.
Resolving deltas: 99% (179885/179885), done.
Checking out files: 99% (5508/5508), done.
staf@whale:~/github$ 

Make sure that debootstrap is installed

staf@whale:~/github/moby/contrib$ sudo apt install debootstrap
[sudo] password for staf: 
Reading package lists... Done
Building dependency tree       
Reading state information... Done
debootstrap is already the newest version (1.0.114).
debootstrap set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
staf@whale:~/github/moby/contrib$ 

The Moby way

Go to the contrib directory

staf@whale:~/github$ cd moby/contrib/
staf@whale:~/github/moby/contrib$ 

mkimage.sh

mkimage.sh --help gives you more details on how to use the script.

staf@whale:~/github/moby/contrib$ ./mkimage.sh --help
usage: mkimage.sh [-d dir] [-t tag] [--compression algo| --no-compression] script [script-args]
   ie: mkimage.sh -t someuser/debian debootstrap --variant=minbase jessie
       mkimage.sh -t someuser/ubuntu debootstrap --include=ubuntu-minimal --components=main,universe trusty
       mkimage.sh -t someuser/busybox busybox-static
       mkimage.sh -t someuser/centos:5 rinse --distribution centos-5
       mkimage.sh -t someuser/mageia:4 mageia-urpmi --version=4
       mkimage.sh -t someuser/mageia:4 mageia-urpmi --version=4 --mirror=http://somemirror/
staf@whale:~/github/moby/contrib$ 

Build the image

staf@whale:~/github/moby/contrib$ sudo ./mkimage.sh -t stafwag/debian_i386:stretch debootstrap --variant=minbase stretch
[sudo] password for staf: 
+ mkdir -p /var/tmp/docker-mkimage.dY9y9apEoK/rootfs
+ debootstrap --variant=minbase stretch /var/tmp/docker-mkimage.dY9y9apEoK/rootfs
I: Target architecture can be executed
I: Retrieving InRelease 
I: Retrieving Release 
I: Retrieving Release.gpg 
I: Checking Release signature
I: Valid Release signature (key id 067E3C456BAE240ACEE88F6FEF0F382A1A7B6500)
I: Retrieving Packages 
<snip>

Test

Verify that the image is imported.

staf@whale:~/github/moby/contrib$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED              SIZE
stafwag/debian_i386   stretch             cb96d1663079        About a minute ago   97.6MB
staf@whale:~/github/moby/contrib$ 

Run a test docker instance

staf@whale:~/github/moby/contrib$ docker run -t -i --rm stafwag/debian_i386:stretch /bin/sh
# cat /etc/debian_version 
9.8
# 

The debootstrap way

Make sure that debootstrap is installed

staf@ubuntu184:~/github/moby$ sudo apt install debootstrap
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Suggested packages:
  ubuntu-archive-keyring
The following NEW packages will be installed:
  debootstrap
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 35,7 kB of archives.
After this operation, 270 kB of additional disk space will be used.
Get:1 http://be.archive.ubuntu.com/ubuntu bionic-updates/main amd64 debootstrap all 1.0.95ubuntu0.3 [35,7 kB]
Fetched 35,7 kB in 0s (85,9 kB/s)    
Selecting previously unselected package debootstrap.
(Reading database ... 163561 files and directories currently installed.)
Preparing to unpack .../debootstrap_1.0.95ubuntu0.3_all.deb ...
Unpacking debootstrap (1.0.95ubuntu0.3) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Setting up debootstrap (1.0.95ubuntu0.3) ...
staf@ubuntu184:~/github/moby$ 

Bootstrap

Create a directory that will hold the chrooted operating system.

staf@ubuntu184:~$ mkdir -p dockerbuild/ubuntu
staf@ubuntu184:~/dockerbuild/ubuntu$ 

Bootstrap.

staf@ubuntu184:~/dockerbuild/ubuntu$ sudo debootstrap --verbose --include=iputils-ping --arch i386 bionic ./chroot-bionic http://ftp.ubuntu.com/ubuntu/
I: Retrieving InRelease 
I: Checking Release signature
I: Valid Release signature (key id 790BC7277767219C42C86F933B4FE6ACC0B21F32)
I: Validating Packages 
I: Resolving dependencies of required packages...
I: Resolving dependencies of base packages...
I: Checking component main on http://ftp.ubuntu.com/ubuntu...
I: Retrieving adduser 3.116ubuntu1
I: Validating adduser 3.116ubuntu1
I: Retrieving apt 1.6.1
I: Validating apt 1.6.1
I: Retrieving apt-utils 1.6.1
I: Validating apt-utils 1.6.1
I: Retrieving base-files 10.1ubuntu2
<snip>
I: Configuring python3-yaml...
I: Configuring python3-dbus...
I: Configuring apt-utils...
I: Configuring netplan.io...
I: Configuring nplan...
I: Configuring networkd-dispatcher...
I: Configuring kbd...
I: Configuring console-setup-linux...
I: Configuring console-setup...
I: Configuring ubuntu-minimal...
I: Configuring libc-bin...
I: Configuring systemd...
I: Configuring ca-certificates...
I: Configuring initramfs-tools...
I: Base system installed successfully.

Customize

You can customize your installation before it goes into the image. One thing that you should do is include the latest updates in the image.

Update /etc/resolv.conf

staf@ubuntu184:~/dockerbuild/ubuntu$ sudo vi chroot-bionic/etc/resolv.conf
nameserver 9.9.9.9

Update /etc/apt/sources.list

staf@ubuntu184:~/dockerbuild/ubuntu$ sudo vi chroot-bionic/etc/apt/sources.list

And include the updates

deb http://ftp.ubuntu.com/ubuntu bionic main
deb http://security.ubuntu.com/ubuntu bionic-security main
deb http://ftp.ubuntu.com/ubuntu/ bionic-updates main

Chroot into your installation and run apt-get update

staf@ubuntu184:~/dockerbuild/ubuntu$ sudo chroot $PWD/chroot-bionic
root@ubuntu184:/# apt update
Hit:1 http://ftp.ubuntu.com/ubuntu bionic InRelease
Get:2 http://ftp.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]   
Get:3 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]       
Get:4 http://ftp.ubuntu.com/ubuntu bionic/main Translation-en [516 kB]                  
Get:5 http://ftp.ubuntu.com/ubuntu bionic-updates/main i386 Packages [492 kB]           
Get:6 http://ftp.ubuntu.com/ubuntu bionic-updates/main Translation-en [214 kB]          
Get:7 http://security.ubuntu.com/ubuntu bionic-security/main i386 Packages [241 kB]     
Get:8 http://security.ubuntu.com/ubuntu bionic-security/main Translation-en [115 kB]
Fetched 1755 kB in 1s (1589 kB/s)      
Reading package lists... Done
Building dependency tree... Done

and apt-get upgrade

root@ubuntu184:/# apt upgrade
Reading package lists... Done
Building dependency tree... Done
Calculating upgrade... Done
The following NEW packages will be installed:
  python3-netifaces
The following packages will be upgraded:
  apt apt-utils base-files bsdutils busybox-initramfs console-setup console-setup-linux
  distro-info-data dpkg e2fsprogs fdisk file gcc-8-base gpgv initramfs-tools
  initramfs-tools-bin initramfs-tools-core keyboard-configuration kmod libapparmor1
  libapt-inst2.0 libapt-pkg5.0 libblkid1 libcom-err2 libcryptsetup12 libdns-export1100
  libext2fs2 libfdisk1 libgcc1 libgcrypt20 libglib2.0-0 libglib2.0-data libidn11
  libisc-export169 libkmod2 libmagic-mgc libmagic1 libmount1 libncurses5 libncursesw5
  libnss-systemd libpam-modules libpam-modules-bin libpam-runtime libpam-systemd
  libpam0g libprocps6 libpython3-stdlib libpython3.6-minimal libpython3.6-stdlib
  libseccomp2 libsmartcols1 libss2 libssl1.1 libstdc++6 libsystemd0 libtinfo5 libudev1
  libunistring2 libuuid1 libxml2 mount ncurses-base ncurses-bin netcat-openbsd
  netplan.io networkd-dispatcher nplan openssl perl-base procps python3 python3-gi
  python3-minimal python3.6 python3.6-minimal systemd systemd-sysv tar tzdata
  ubuntu-keyring ubuntu-minimal udev util-linux
84 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 26.6 MB of archives.
After this operation, 450 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://security.ubuntu.com/ubuntu bionic-security/main i386 netplan.io i386 0.40.1~18.04.4 [64.6 kB]
Get:2 http://ftp.ubuntu.com/ubuntu bionic-updates/main i386 base-files i386 10.1ubuntu2.4 [60.3 kB]
Get:3 http://security.ubuntu.com/ubuntu bionic-security/main i386 libapparmor1 i386 2.12-4ubuntu5.1 [32.7 kB]
Get:4 http://security.ubuntu.com/ubuntu bionic-security/main i386 libgcrypt20 i386 1.8.1-
<snip>
running python rtupdate hooks for python3.6...
running python post-rtupdate hooks for python3.6...
Setting up initramfs-tools-core (0.130ubuntu3.7) ...
Setting up initramfs-tools (0.130ubuntu3.7) ...
update-initramfs: deferring update (trigger activated)
Setting up python3-gi (3.26.1-2ubuntu1) ...
Setting up file (1:5.32-2ubuntu0.2) ...
Setting up python3-netifaces (0.10.4-0.1build4) ...
Processing triggers for systemd (237-3ubuntu10.20) ...
Setting up networkd-dispatcher (1.7-0ubuntu3.3) ...
Installing new version of config file /etc/default/networkd-dispatcher ...
Setting up netplan.io (0.40.1~18.04.4) ...
Setting up nplan (0.40.1~18.04.4) ...
Setting up ubuntu-minimal (1.417.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for initramfs-tools (0.130ubuntu3.7) ...
root@ubuntu184:/# 
staf@ubuntu184:~/dockerbuild/ubuntu$ 

Import

Go to your chroot installation.

staf@ubuntu184:~/dockerbuild/ubuntu$ cd chroot-bionic/
staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ 

and import the image.

staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ sudo tar cpf - . | docker import - stafwag/ubuntu_i386:bionic
sha256:83560ef3c8d48b737983ab8ffa3ec3836b1239664f8998038bfe1b06772bb3c2
staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ 

Test

staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED              SIZE
stafwag/ubuntu_i386   bionic              83560ef3c8d4        About a minute ago   315MB
staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ 
staf@ubuntu184:~/dockerbuild/ubuntu/chroot-bionic$ docker run -it --rm stafwag/ubuntu_i386:bionic /bin/bash
root@665cec6ee24f:/# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.2 LTS
Release:        18.04
Codename:       bionic
root@665cec6ee24f:/# 
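
To double-check that the imported image really contains an i386 userland, you can ask dpkg for the architecture. A small sketch, reusing the image we imported above:

# dpkg reports the userland architecture the image was built for; this should print i386.
$ docker run --rm stafwag/ubuntu_i386:bionic dpkg --print-architecture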

Have fun!

April 21, 2019

Several years ago, I created a list of ESXi versions with matching VM BIOS identifiers. The list is now complete up to vSphere 6.7 Update 2.
Your Linux runs on a VMware VM, but on which ESXi version? You can see for yourself: run "dmidecode" and look at lines 10, 11 and 12 (or use the dmidecode one-liner after the list below).
ESX 2.5 - BIOS Release Date: 04/21/2004 - Address 0xE8480 - Size 97152 bytes
ESX 3.0 - BIOS Release Date: 04/17/2006 - Address 0xE7C70 - Size 99216 bytes
ESX 3.5 - BIOS Release Date: 01/30/2008 - Address 0xE7910 - Size 100080 bytes
ESX 4 - BIOS Release Date: 08/15/2008 - Address 0xEA6C0 - Size 88384 bytes
ESX 4U1 - BIOS Release Date: 09/22/2009 - Address 0xEA550 - Size 88752 bytes
ESX 4.1 - BIOS Release Date: 10/13/2009 - Address 0xEA2E0 - Size 89376 bytes
ESXi 5 - BIOS Release Date: 01/07/2011 - Address 0xE72C0 - Size 101696 bytes
ESXi 5.1 - BIOS Release Date: 06/22/2012 - Address: 0xEA0C0 - Size: 89920 bytes
ESXi 5.5 - BIOS Release Date: 07/30/2013 - Address: 0xEA050 - Size: 90032 bytes
ESXi 6 - BIOS Release Date: 09/30/2014 - Address: 0xE9A40 - Size: 91584 bytes
ESXi 6.5 - BIOS Release Date: 04/05/2016 - Address: 0xEA580 - Size: 88704 bytes 
ESXi 6.7 - BIOS Release Date: 07/03/2018 - Address: 0xEA520 - Size: 88800 bytes
ESXi 6.7 U2 - BIOS Release Date 12/12/2018 - Address: 0xEA490 - Size: 88944 bytes
NB These DMI properties are set at boot time. Even if your VM gets live-migrated to a host running a different vSphere version, your VM will keep the values it got from the host it booted on. What you see is the vSphere version of the host your VM booted on. It is the VM power-on that matters, so a guest OS reboot will not regenerate the DMI properties. A guest OS shut down on the other hand will also power off the VM, and the next power-on will regenerate the DMI properties.
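
Instead of counting output lines by hand, you can also let dmidecode filter the relevant BIOS fields. A minimal sketch (on most dmidecode versions the size is reported as "Runtime Size"):

# Show only the BIOS fields that identify the ESXi version the VM booted on.
$ sudo dmidecode -t bios | grep -E 'Release Date|Address|Runtime Size'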

April 20, 2019

We ditched the crowded streets of Seattle for a short vacation in Tuscany's beautiful countryside. After the cold winter months, Tuscany's rolling hills are coming back to life and showing their new colors.

Beautiful tuscany

April 18, 2019

I published the following diary on isc.sans.edu: “Malware Sample Delivered Through UDF Image“:

I found an interesting phishing email which was delivered with a malicious attachment: a UDF image (.img). UDF means “Universal Disk Format” and, as Wikipedia says, is an open vendor-neutral file system for computer data storage. It has supplanted the well-known ISO 9660 format (used for burning CDs & DVDs) that was also used in previous campaigns to deliver malicious files… [Read more]

[The post [SANS ISC] Malware Sample Delivered Through UDF Image has been first published on /dev/random]

April 16, 2019

In a recent VMware project, an existing environment of vSphere ESXi hosts had to be split off to a new instance of vCenter. These hosts were member of a distributed virtual switch, an object that saves its configuration in the vCenter database. This information would be lost after the move to the new vCenter, and the hosts would be left with "orphaned" distributed vswitch configurations.

Thanks to the export/import function now available in vSphere 5.5 and 6.x, we can now move the full distributed vswitch configuration to the new vCenter:

  • In the old vCenter, right-click the switch object, click "Export configuration" and choose the default "Distributed switch and all port groups"
  • Add the hosts to the new vCenter
  • In the new vCenter, right-click the datacenter object, click "Import distributed switch" in the "Distributed switch" sub-menu.
  • Select your saved configuration file, and tick the "Preserve original distributed switch and port group identifiers" box (which is not default!)
What used to be orphaned configurations on the host, are now valid member switches of the distributed switch you just imported!

In vSphere 6, if the vi-admin account gets locked because of too many failed logins, and you don't have the root password of the appliance, you can reset the account(s) using these steps:

  1. reboot the vMA
  2. from GRUB, "e"dit the entry
  3. "a"ppend init=/bin/bash
  4. "b"oot
  5. # pam_tally2 --user=vi-admin --reset
  6. # passwd vi-admin # Optional. Only if you want to change the password for vi-admin.
  7. # exit
  8. reset the vMA
  9. log in with vi-admin
These steps can be repeated for root or any other account that gets locked out.

If you do have root or vi-admin access, "sudo pam_tally2 --user=mylockeduser --reset" would do it, no reboot required.

Most VMware appliances (vCenter Appliance, VMware Support Appliance, vRealize Orchestrator) have the so-called VAMI: the VMware Appliance Management Interface, generally served via https on port 5480. VAMI offers a variety of functions, including "check updates" and "install updates". Some appliances offer to check/install updates from a connected CD iso, but the default is always to check online. How does that work?
VMware uses a dedicated website to serve the updates: vapp-updates.vmware.com. Each appliance is configured with a repository URL: https://vapp-updates.vmware.com/vai-catalog/valm/vmw/PRODUCT-ID/VERSION-ID . The PRODUCT-ID is a hexadecimal code specific for the product. vRealize Orchestrator uses 00642c69-abe2-4b0c-a9e3-77a6e54bffd9, VMware Support Appliance uses 92f44311-2508-49c0-b41d-e5383282b153, vCenter Server Appliance uses 647ee3fc-e6c6-4b06-9dc2-f295d12d135c. The VERSION-ID contains the current appliance version and appends ".latest": 6.0.0.20000.latest, 6.0.4.0.latest, 6.0.0.0.latest.
The appliance will check for updates by retrieving /manifest/manifest-latest.xml under the repository URL. This XML contains the latest available version in fullVersion and version (fullVersion includes the build number), pre- and post-install scripts, the EULA, and a list of updated rpm packages. Each entry has a relative path that can be appended to the repository URL and downloaded. The update procedure downloads the manifest and rpms, verifies checksums on the downloaded rpms, executes the preInstallScript, runs rpm -U on the downloaded rpm packages, executes the postInstallScript, displays the exit code and prompts for reboot.
With this information, you can setup your own local repository (for cases where internet access is impossible from the virtual appliances), or you can even execute the procedure manually. Be aware that manual update would be unsupported. Using a different repository is supported by a subset of VMware appliances (e.g. VCSA, VRO) but not all (VMware Support Appliance).
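
As an illustration, a manual check against the repository could look roughly like this for the vCenter Server Appliance, using the product ID and version scheme described above (the grep is just a crude way to spot the version fields in the XML):

# Repository URL built from the vCSA product ID and the ".latest" version scheme.
$ REPO="https://vapp-updates.vmware.com/vai-catalog/valm/vmw/647ee3fc-e6c6-4b06-9dc2-f295d12d135c/6.0.0.0.latest"

# Fetch the latest manifest and show the advertised version information.
$ curl -s "$REPO/manifest/manifest-latest.xml" | grep -i version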

I did not yet update my older post when vSphere 6.7 was released. The list is now complete up to vSphere 6.7. Your Linux runs on a VMware VM, but on which ESXi version? You can see for yourself: run "dmidecode" and look at lines 10, 11 and 12.
ESX 2.5 - BIOS Release Date: 04/21/2004 - Address 0xE8480 - Size 97152 bytes
ESX 3.0 - BIOS Release Date: 04/17/2006 - Address 0xE7C70 - Size 99216 bytes
ESX 3.5 - BIOS Release Date: 01/30/2008 - Address 0xE7910 - Size 100080 bytes
ESX 4 - BIOS Release Date: 08/15/2008 - Address 0xEA6C0 - Size 88384 bytes
ESX 4U1 - BIOS Release Date: 09/22/2009 - Address 0xEA550 - Size 88752 bytes
ESX 4.1 - BIOS Release Date: 10/13/2009 - Address 0xEA2E0 - Size 89376 bytes
ESXi 5 - BIOS Release Date: 01/07/2011 - Address 0xE72C0 - Size 101696 bytes
ESXi 5.1 - BIOS Release Date: 06/22/2012 - Address: 0xEA0C0 - Size: 89920 bytes
ESXi 5.5 - BIOS Release Date: 07/30/2013 - Address: 0xEA050 - Size: 90032 bytes
ESXi 6 - BIOS Release Date: 09/30/2014 - Address: 0xE9A40 - Size: 91584 bytes
ESXi 6.5 - BIOS Release Date: 04/05/2016 - Address: 0xEA580 - Size: 88704 bytes 
ESXi 6.7 - BIOS Release Date: 07/03/2018 - Address: 0xEA520 - Size: 88800 bytes
NB These DMI properties are set at boot time. Even if your VM gets live-migrated to a host running a different vSphere version, your VM will keep the values it got from the host it booted on. What you see is the vSphere version of the host your VM booted on. It is the VM power-on that matters, so a guest OS reboot will not regenerate the DMI properties. A guest OS shut down on the other hand will also power off the VM, and the next power-on will regenerate the DMI properties.

Updating the VCSA is easy when it has internet access or if you can mount the update iso. On a private network, VMware assumes you have a webserver that can serve up the updaterepo files. In this article, we'll look at how to proceed when VCSA is on a private network where internet access is blocked, and there's no webserver available. The VCSA and PSC contain their own webserver that can be used for an HTTP based update. This procedure was tested on PSC/VCSA 6.0.

Follow these steps:


  • First, download the update repo zip (e.g. for 6.0 U3A, the filename is VMware-vCenter-Server-Appliance-6.0.0.30100-5202501-updaterepo.zip ) 
  • Transfer the updaterepo zip to a PSC or VCSA that will be used as the server. You can use Putty's pscp.exe on Windows or scp on Mac/Linux, but you'd have to run "chsh -s /bin/bash root" in the CLI shell before using pscp.exe/scp if your PSC/VCSA is set up with the appliancesh. 
    • chsh -s /bin/bash root
    • "c:\program files (x86)\putty\pscp.exe" VMware*updaterepo.zip root@psc-name-or-address:/tmp 
  • Change your PSC/VCSA root access back to the appliancesh if you changed it earlier: 
    • chsh -s /bin/appliancesh root
  • Make a directory for the repository files and unpack the updaterepo files there:
    • mkdir /srv/www/htdocs/6u3
    • chmod go+rx /srv/www/htdocs/6u3
    • cd /srv/www/htdocs/6u3
    • unzip /tmp/VMware-vCenter*updaterepo.zip
    • rm /tmp/VMware-vCenter*updaterepo.zip
  • Create a redirect using the HTTP rhttpproxy listener and restart it
    • echo "/6u3 local 7000 allow allow" > /etc/vmware-rhttpproxy/endpoints.conf.d/temp-update.conf 
    • /etc/init.d/vmware-rhttpproxy restart 
  • Create a /tmp/nginx.conf (I copied /etc/nginx/nginx.conf, changed "listen 80" to "listen 7000" and changed "mime.types" to "/etc/nginx/mime.types")
  • Start nginx
    • nginx -c /tmp/nginx.conf
  • Start the update via the VAMI. Change the repository URL in settings,  use http://psc-name-or-address/6u3/ as repository URL. Then use "Check URL". 
  • Afterwards, clean up: 
    • killall nginx
    • cd /srv/www/htdocs; rm -rf 6u3


P.S. I personally tested this using a PSC as webserver to update both that PSC, and also a VCSA appliance.
P.P.S. VMware released an update for VCSA 6.0 and 6.5 on the day I wrote this. For 6.0, the latest version is U3B at the time of writing, while I updated to U3A.

VMware's solution to a lost or forgotten root password for ESXi is simple: go to https://kb.vmware.com/s/article/1317898?lang=en_US and you'll find that "Reinstalling the ESXi host is the only supported way to reset a password on ESXi".

If your host is still connected to vCenter, you may be able to use Host Profiles to reset the root password, or alternatively you can join ESXi in Active Directory via vCenter, and log in with a user in the "ESX Admins" AD group.

If your host is no longer connected to vCenter, those options are closed. Can you avoid reinstallation? Fortunately, you can. You will need to reset and reboot your ESXi though. If you're ready for an unsupported deep dive into the bowels of ESXi, follow these steps:

  1. Create a bootable Linux USB-drive (or something else you can boot your server with). I used a CentOS 7 installation USB-drive that I could use to boot into rescue mode.
  2. Reset your ESXi and boot from the Linux medium.
  3. Identify your ESXi boot device from the Linux prompt. Use "fdisk -l /dev/sda", "fdisk -l /dev/sdb", etc. until you find a device that has 9 (maybe 8 in some cases) partitions. Partitions 5 and 6 are 250 MB and type "Microsoft basic" (for more information on this partition type, see https://en.wikipedia.org/wiki/Microsoft_basic_data_partition ). These are the ESXi boot banks. My boot device was /dev/sda, so I'll be using /dev/sda5 and/or /dev/sda6 as partition devices.
  4. Create a temporary directory for the primary boot bank: mkdir /tmp/b
  5. Mount the first ESXi bootbank on that directory: mount /dev/sda5 /tmp/b
  6. The current root password hash is stored inside state.tgz . We'll unpack this first. Create a temp directory for the state.tgz contents: mkdir /tmp/state
  7. Unpack state.tgz: cd /tmp/state ; tar xzf /tmp/b/state.tgz
  8. Inside state.tgz is local.tgz. Create a temp directory for the local.tgz contents: mkdir /tmp/local
  9. Unpack local.tgz: cd /tmp/local ; tar xzf /tmp/state/local.tgz
  10. Generate a new password hash: on a Linux system with Perl installed, you can use this: perl -e 'print crypt("MySecretPassword@","\$6\$AbCdEfGh") . "\n";' . On a Linux system with Python installed (like the CentOS rescue mode), you can use this: python -c "import crypt; print crypt.crypt('MySecretPassword@')" . Both will print out a new password hash for the given password: $6$MeOt/VCSA4PoKyHl$yk5Q5qbDVussUjt/3QZdy4UROEmn5gaRgYG7ckYIn1NC2BXXCUnCARnvNkscL5PA5ErbTddoVQWPqBUYe.S7Y0  . Alternatively, you can use an online hash generator, or you can leave the password hash field empty.
  11. Edit the shadow file to change the root password: vi /tmp/local/etc/shadow . Replace the current password hash in the second field of the first line (the line that starts with root:) with the new hash. Esc : w q Enter saves the contents of the shadow file.
  12. Recreate the local.tgz file: cd /tmp/local ; tar czf /tmp/state/local.tgz etc
  13. Recreate the state.tgz file: cd /tmp/state ; tar czf /tmp/b/state.tgz local.tgz
  14. Detach the bootbank partition: umount /tmp/b
  15. Exit from the Linux rescue environment and boot ESXi.
  16. Do the same for the other boot bank (/dev/sda6 in my case) if your system doesn't boot from the first boot bank. NB logging in via SSH doesn't work with an empty hash field. The Host UI client via a web browser does let you in with an empty password, and allows you to change your password.


April 15, 2019

Last week, many Drupalists gathered in Seattle for DrupalCon North America, for what was the largest DrupalCon in history.

As a matter of tradition, I presented my State of Drupal keynote. You can watch a recording of my keynote (starting at 32 minutes) or download a copy of my slides (153 MB).

Making Drupal more diverse and inclusive

DrupalCon Seattle was not only the largest, but also had the most diverse speakers. Nearly 50% of the DrupalCon speakers were from underrepresented groups. This number has been growing year over year, and is something to be proud of.

I actually started my keynote by talking about how we can make Drupal more diverse and inclusive. As one of the largest and most thriving Open Source communities, I believe that Drupal has an obligation to set a positive example.

Free time to contribute is a privilege

I talked about how Open Source communities often incorrectly believe that everyone can contribute. Unfortunately, not everyone has equal amounts of free time to contribute. In my keynote, I encouraged individuals and organizations in the Drupal community to strongly consider giving time to underrepresented groups.

Improving diversity is not only good for Drupal and its ecosystem, it's good for people, and it's the right thing to do. Because this topic is so important, I wrote a dedicated blog post about it.

Drupal 8 innovation update

I dedicated a significant portion of my keynote to Drupal 8. In the past year alone, there have been 35% more sites and 48% more stable modules in Drupal 8. Our pace of innovation is increasing, and we've seen important progress in several key areas.

With the release of Drupal 8.7, the Layout Builder will become stable. Drupal's new Layout Builder makes it much easier to build and change one-off page layouts, templated layouts and layout workflows. Best of all, the Layout Builder will be accessible.

Drupal 8.7 also brings a lot of improvements to the Media Library.

We also continue to innovate on headless or decoupled Drupal. The JSON:API module will ship with Drupal 8.7. I believe this not only advances Drupal's leadership in API-first, but sets Drupal up for long-term success.

These are just a few of the new capabilities that will ship with Drupal 8.7. For the complete list of new features, keep an eye out for the release announcement in a few weeks.

Drupal 7 end of life

If you're still on Drupal 7, there is no need to panic. The Drupal community will support Drupal 7 until November 2021 — two years and 10 months from today.

After the community support ends, there will be extended commercial support for a minimum of three additional years. This means that Drupal 7 will be supported for at least five more years, or until 2024.

Upgrading from Drupal 7 to Drupal 8

Upgrading from Drupal 7 to Drupal 8 can be a lot of work, especially for large sites, but the benefits outweigh the challenges.

For my keynote, I featured stories from two end-users who upgraded large sites from Drupal 7 to Drupal 8 — the State of Georgia and Pegasystems.

The keynote also featured quietone, one of the maintainers of the Migrate API. She talked about the readiness of Drupal 8 migration tools.

Preparing for Drupal 9

As announced a few months ago, Drupal 9 is targeted for June 2020. June 2020 is only 14 months away, so I dedicated a significant amount of my keynote to Drupal 9.

Making Drupal updates easier is a huge, ongoing priority for the community. Thanks to those efforts, the upgrade path to Drupal 9 will be radically easier than the upgrade path to Drupal 8.

In my keynote, I talked about how site owners, Drupal developers and Drupal module maintainers can start preparing for Drupal 9 today. I showed several tools that make Drupal 9 preparation easier. Check out my post on how to prepare for Drupal 9 for details.

A timeline with important dates and future milestones

Thank you

I'm grateful to be a part of a community that takes such pride in its work. At each DrupalCon, we get to see the tireless efforts of many volunteers that add up to one amazing event. It makes me proud to showcase the work of so many people and organizations in my presentations.

Thank you to all who have made this year's DrupalCon North America memorable. I look forward to celebrating our work and friendships at future events!

April 11, 2019

With Drupal 9 targeted to be released in June of 2020, many people are wondering what they need to do to prepare.

The good and important news is that upgrading from Drupal 8 to Drupal 9 should be really easy — radically easier than upgrading from Drupal 7 to Drupal 8.

The only caveat is that you need to manage "deprecated code" well.

If your site doesn't use deprecated code that is scheduled for removal in Drupal 9, your upgrade to Drupal 9 will be easy. In fact, it should be as easy as a minor version upgrade (like upgrading from Drupal 8.6 to Drupal 8.7).

What is deprecated code?

Code in Drupal is marked as "deprecated" when it should no longer be used. Typically, code is deprecated because there is a better alternative that should be used instead.

For example, in Drupal 8.0.0, we deprecated \Drupal::l($text, $url). Instead of using \Drupal::l(), you should use Link::fromTextAndUrl($text, $url). The \Drupal::l() function was marked for removal as part of some clean-up work; Drupal 8 had too many ways to generate links.

Deprecated code will continue to work for some time before it gets removed. For example, \Drupal::l() continues to work in Drupal 8.7 despite the fact that it was deprecated in Drupal 8.0.0 more than three years ago. This gives module maintainers ample time to update their code.

When we release Drupal 9, we will "drop" most deprecated code. In our example, this means that \Drupal::l() will not be available anymore in Drupal 9.

In other words:

  • Any Drupal 8 module that does not use deprecated code will continue to work with Drupal 9.
  • Any Drupal 8 module that uses deprecated code needs to be updated before Drupal 9 is released, or it will stop working with Drupal 9.

If you're interested, you can read more about Drupal's deprecation policy at https://www.drupal.org/core/deprecation.

How do I know if my site uses deprecated code?

There are a few ways to check if your site is using deprecated code.

If you work on a Drupal site as a developer, run drupal-check. Matt Glaman (Centarro) developed a static PHP analysis tool called drupal-check, which you can run against your codebase to check for deprecated code. I recommend running drupal-check in an automated fashion as part of your development workflow.
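
A minimal sketch of what that can look like, assuming you add drupal-check to your project as a Composer dev dependency (the module path is just an example):

# Install drupal-check as a development dependency of your site.
$ composer require --dev mglaman/drupal-check

# Scan your custom modules for calls to deprecated Drupal APIs.
$ ./vendor/bin/drupal-check web/modules/custom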

If you are a site owner, install the Upgrade Status module. This module was built by Acquia. The module provides a graphical user interface on top of drupal-check. The goal is to provide an easy-to-use readiness assessment for your site's migration to Drupal 9.

If you maintain a project on Drupal.org, enable Drupal.org's testing infrastructure to detect the use of deprecated code. There are two complementary ways to do so: you can run a static deprecation analysis and/or configure your existing tests to fail when calling deprecated code. Both can be set up in your drupalci.yml configuration file.

If you find deprecated code in a contributed module used on your site, consider filing an issue in the module's issue queue on Drupal.org (after having checked no issue has been created yet). If you can, provide a patch to fix the deprecation and engage with the maintainer to get it committed.

How hard is it to update my code?

While there are some deprecations that require more detailed refactoring, many are a simple matter of search-and-replace.

You can check the API documentation for instructions on how to remedy the deprecation.

When can I start updating my code?

I encourage you to start today. When you update your Drupal 8 code to use the latest and greatest APIs, you can benefit from those improvements immediately. There is no reason to wait until Drupal 9 is released.

Drupal 8.8.0 will be the last release to add deprecations for Drupal 9. Today, we don't know the full set of deprecations yet.

How much time do I have to update my code?

The current plan is to release Drupal 9 in June of 2020, and to end-of-life Drupal 8 in November of 2021.

Contributed module maintainers are encouraged to remove the use of deprecated code by June of 2020 so everyone can upgrade to Drupal 9 the day it is released.

A timeline with important dates and future milestones

Drupal.org project maintainers should keep the extended security coverage policy in mind, which means that Drupal 8.8 will still be supported until Drupal 9.1 is released. Contributed projects looking to support both Drupal 8.8 and Drupal 9.0 might need to use two branches.

How ready are the contributed modules?

Dwayne McDaniel (Pantheon) analyzed all 7,000 contributed modules for Drupal 8 using drupal-check.

As it stands today, 44% of the modules have no deprecation warnings. The remaining 56% of the modules need to be updated, but the majority have less than three deprecation warnings.

The benefits of backwards compatibility (BC) are clear: no users are left behind, which leads to higher adoption rates, because you’re often getting new features and you always have the latest security fixes.

Of course, that’s easy when you have a small API surface (as Nate Haug once said: “the WordPress API has like 11 functions!” — which is surprisingly close to the truth). But Drupal has an enormous API surface. In fact, it seems there are APIs hiding in every crevice!

In my job at Acquia, I’ve been working almost exclusively on Drupal 8 core. In 2012–2013 I worked on authoring experience (in-place editing, CKEditor, and more). In 2014–2015, I worked on performance, cacheability, rendering and generally the stabilizing of Drupal 8. Drupal 8.0.0 shipped on November 19, 2015. And since then, I’ve spent most of my time on making Drupal 8 be truly API-first: improving the RESTful Web Services support that Drupal 8 ships with, and in the process also strengthening the JSON API & GraphQL contributed modules.

I’ve learned a lot about the impact of past decisions (by myself and others) on backwards compatibility: the benefit of BC, but also how the burden of ensuring BC can increase exponentially due to certain architectural decisions. I’ve been experiencing that first-hand, since I’m tasked with making Drupal 8’s REST support rock-solid, where I am seeing time and time again that “fixing bugs + improving DX” requires BC breaks. Tough decisions.

In Drupal 8, we have experience with some extremes:

  1. the BigPipe & Dynamic Page Cache modules have no API, but build on top of other APIs: they provide functionality only, not APIs
  2. the REST module has an API, and its functionality can be modified not just via that API, but also via other APIs

The first cannot break BC. The second requires scrutiny for every line of code modified to ensure we don’t break BC. For the second, the burden can easily outweigh the benefit, because how many sites actually are using this obscure edge case of the API?


We’ll look at:

  • How can we make our modules more evolvable in the future? (Contrib & core, D8 & D9.)
  • Ideas to improve this, and root cause hypotheses (for example, the fact that we have API cascades and not orthogonal APIs)

We should be thinking more actively about how feature X, configuration Y or API Z might get in the way of BC. I analyzed the architectural patterns in Drupal 8, and have some thoughts about how to do better. I don’t have all the answers. But what matters most is not answers, but a critical mindset going forward that is consciously considering BC implications for every patch that goes into Drupal 8! This session is only a starting point; we should continue discussing in the hallways, during dinner and of course: in the issue queues!

Preview:

DrupalCon Seattle
Seattle, WA, United States

April 10, 2019

In Open Source, there is a long-held belief in meritocracy, or the idea that the best work rises to the top, regardless of who contributes it. The problem is that a meritocracy assumes an equal distribution of time for everyone in a community.

Open Source is not a meritocracy

Free time to contribute is a privilege

I incorrectly made this assumption myself, saying: "The only real limitation [to Open Source contribution] is your willingness to learn."

Today, I've come to understand that inequality makes it difficult for underrepresented groups to have the "free time" it takes to contribute to Open Source.

For example, research shows that women still spend more than double the time as men doing unpaid domestic work, such as housework or childcare. I've heard from some of my colleagues that they need to optimize every minute of time they don't spend working, which makes it more difficult to contribute to Open Source on an unpaid, volunteer basis.

Or, in other cases, many people's economic conditions require them to work more hours or several jobs in order to support themselves or their families.

Systemic issues like racial and gender wage gaps continue to plague underrepresented groups, and it's both unfair and impractical to assume that these groups of people have the same amount of free time to contribute to Open Source projects, if they have any at all.

What this means is that Open Source is not a meritocracy.

Underrepresented groups don't have the same amount of free time

Free time is a mark of privilege, rather than an equal right. Instead of chasing an unrealistic concept of meritocracy, we should be striving for equity. Rather than thinking, "everyone can contribute to open source", we should be thinking, "everyone deserves the opportunity to contribute".

Time inequality contributes to a lack of diversity in Open Source

This fallacy of "free time" makes Open Source communities suffer from a lack of diversity. The demographics are even worse than the technology industry overall: while 22.6% of professional computer programmers in the workforce identify as women (Bureau of Labor Statistics), less than 5% of contributors do in Open Source (GitHub). And while 34% of programmers identify as ethnic or national minorities (Bureau of Labor Statistics), only 16% do in Open Source (GitHub).

Diversity in data

It's important to note that time isn't the only factor; sometimes a hostile culture or unconscious bias play a part in limiting diversity. According to the same GitHub survey cited above, 21% of people who experienced negative behavior stopped contributing to Open Source projects altogether. Other recent research showed that women's pull requests were more likely to get accepted if they had a gender-neutral username. Unfortunately, examples like these are common.

Taking action: giving time to underrepresented groups

A person being ignored

While it's impossible to fix decades of gender and racial inequality with any single action, we must do better. Those in a position to help have an obligation to improve the lives of others. We should not only invite underrepresented groups into our Open Source communities, but make sure that they are welcomed, supported and empowered. One way to help is with time:

  • As individuals, by making sure you are intentionally welcoming people from underrepresented groups, through both outreach and actions. If you're in a community organizing position, encourage and make space for people from underrepresented groups to give talks or lead sprints about the work they're interested in. Or if you're asked to, mentor an underrepresented contributor.
  • As organizations in the Open Source ecosystem, by giving people more paid time to contribute.

Taking the extra effort to help onboard new members or provide added detail when reviewing code changes can be invaluable to community members who don't have an abundance of free time. Overall, being kinder, more patient and more supportive to others could go a long way in welcoming more people to Open Source.

In addition, organizations within the Open Source ecosystem capable of giving back should consider financially sponsoring underrepresented groups to contribute to Open Source. Sponsorship can look like full or part-time employment, an internship or giving to organizations like Girls Who Code, Code2040, Resilient Coders or one of the many others that support diversity in technology. Even a few hours of paid time during the workweek for underrepresented employees could help them contribute more to Open Source.

Applying the lessons to Drupal

Over the years, I've learned a lot from different people's perspectives. Learning out in the open is not always easy, but it's been an important part of my personal journey.

Knowing that Drupal is one of the largest and most influential Open Source projects, I find it important that we lead by example.

I encourage individuals and organizations in the Drupal community to strongly consider giving time and opportunities to underrepresented groups. You can start in places like:

When we have more diverse people contributing to Drupal, it will not only inject a spark of energy, but it will also help us make better, more accessible, inclusive software for everyone in the world.

Each of us needs to decide if and how we can help to create equity for everyone in Drupal. Not only is it good for business, it's good for people, and it's the right thing to do.

Special thanks to the Drupal Diversity and Inclusion group for discussing this topic with me. Ashe Dryden's thought-leadership indirectly influenced this piece. If you are interested in this topic, I recommend you check out Ashe's blog post The Ethics of Unpaid Labor and the OSS Community.

This Thursday, April 25, 2019 at 7 pm, the 77th Mons session of the Belgian Jeudis du Libre will take place.

The topic of this session: Imio: keys to the success of free software in Walloon municipalities

Theme: community|development

Audience: developers|companies|students

Speaker: Joël Lambillotte (IMIO)

Venue: Technical campus (ISIMs) of the Haute Ecole en Hainaut, Avenue V. Maistriau, 8a, Salle Académique, 2nd building (see the map on the ISIMs website, and here on the OpenStreetMap map).

Attendance is free and only requires registering by name, preferably in advance, or at the entrance of the session. Please indicate your intention by signing up via http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons are also supported by our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to check the agenda and subscribe to the mailing list to automatically receive the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized on the premises of, and in collaboration with, Mons universities and colleges involved in computer science education (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: The intermunicipal organization Imio designs and hosts free software solutions for nearly 300 local public administrations in Wallonia.

The project, initiated by two municipalities in 2005, spontaneously joined forces with the Plone open source community in order to co-build solutions it could keep control of.

The technological and philosophical choices that were made are what ensured the sustainability and growth of the organization. They are numerous: component-oriented programming, which makes it easier to maintain a common core for large cities as well as rural municipalities; a quality approach through systematic test writing and tools like Robot Framework; industrialization via Jenkins, Puppet, Docker and Rundeck; and the social aspects, with user workshops and sprints.

Short bio: Joël Lambillotte is deputy general manager of Imio, which he helped create. A computer science graduate, he has long experience as IT manager of the municipality of Sambreville and co-founded the CommunesPlone and PloneGov open source communities, finalists at the EU eGovernment Awards in 2007 and 2009.

April 09, 2019

For most people, today marks the first day of DrupalCon Seattle.

Open Source communities create better, more inclusive software when diverse people come to the table. Unfortunately, there is still a huge gender gap in Open Source, and software more broadly. It's something I'll talk more about in my keynote tomorrow.

One way to help close the gender gap in the technology sector is to give to organizations that are actively working to solve this problem. During DrupalCon Seattle, Acquia will donate $5 to Girls Who Code for every person that visits our booth.

April 08, 2019

The post Lazily load below-the-fold images and iframes appeared first on ma.ttias.be.

A pretty cool feature has landed in Chromium that allows you to easily lazy-load images and iframes.

Here's some info directly from the mailing list:

Support deferring the load of below-the-fold images and iframes on the page until the user scrolls near them.

This is to reduce data usage, memory usage, and speed up above-the-fold content.

Web pages can use the "loading" attribute on <img> and <iframe> elements to control and interact with the default lazy loading behavior, with possible values "lazy", "eager", and "auto" (which is equivalent to leaving the "loading" attribute unset).

Source: Intent to Ship: Lazily load below-the-fold images and iframes -- Google Groups

Which leads to some pretty powerful optimizations for page loading and bandwidth savings, especially on image-heavy sites (like news sites, photo blogs, ...).

It works simply as follows:

<img src="example.jpg" loading="lazy" alt="example" />
<iframe src="example.html" loading="lazy"></iframe>

Some more technical reading: Native Lazy Loading for <img> and <iframe> is Coming to the Web.

The post Lazily load below-the-fold images and iframes appeared first on ma.ttias.be.

April 06, 2019

The post Using Oh Dear! to keep your Varnish cache warm appeared first on ma.ttias.be.

If we're already crawling your site, we might as well update your cached pages in the meanwhile!

The idea is as follows: if you've enabled our broken links or mixed content checks for any of your sites, we'll crawl your sites to find any broken pages.

On top of that, we have the ability to set custom HTTP headers per website that get added to both the uptime checks and our crawler.

Combining our crawler and the custom HTTP headers allows you to authorize our crawler in your Varnish configs to let it update the cache.

Source: Using Oh Dear! to keep your Varnish cache warm -- Oh Dear! blog

The post Using Oh Dear! to keep your Varnish cache warm appeared first on ma.ttias.be.

April 05, 2019

Worried, I glanced at my wife as she gently closed the door of our apartment.
— So? Did you get any?
— Keep your voice down! she replied. I don't want the neighbors to turn us in.

Then, with a conspiratorial air, she handed me a tiny packet she had been clutching in her fist. I grabbed it immediately.
— Is that all? I stammered.
— Leave some for me! We have to last until the next delivery.

I divided the packet into two equal shares before handing her one. With my meager loot in the palm of my hand, I withdrew to our toilet, the only room without a window.
— Don't use it all at once! my wife whispered.

I didn't even answer. I was thinking of the days when it was sold freely. When we stocked up in department stores, comparing brands, buying only good quality. But the health lobby had joined the ecological hysteria. Today, we were outlaws.

We had certainly tried to wean ourselves off, sometimes holding out for nearly a week. But every time we had cracked, fallen back into our addiction, going as far as several times a day.

Alone in the toilet, I opened my hand and got to work. The muscles in my neck relaxed, my eyelids closed naturally, and I began to let out sighs of pleasure as the dangerous, precious cotton swab explored my ear canal.

Yes, I knew the harm of what I was doing. I was aware of the ecological cost of these bits of plastic, of the risk to my eardrum. But nothing could replace that ecstasy, that unique moment of bliss.

Photo by Simone Scarano on Unsplash

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE license.

April 04, 2019

I published the following diary on isc.sans.edu: “New Waves of Scans Detected by an Old Rule“:

Who remembers the famous ShellShock (CVE-2014-6271)? This bug affected the bash shell in 2014 and was critical because it was easy to exploit and because bash is a widespread shell used in many tools/applications. So, at this time, I created an OSSEC alert to report ShellShock exploitation attempts against my servers. Still today, I’m getting a hit on this rule from time to time… [Read more]

[The post [SANS ISC] New Waves of Scans Detected by an Old Rule has been first published on /dev/random]

April 03, 2019

The post The end of Extended Validation certificates appeared first on ma.ttias.be.

You know those certificates you paid 5x more for than a normal one? The ones that are supposed to give you a green address bar with your company name imprinted on it?

It's been mentioned before, but my take is the same: they're dead.

That is to say, they'll still work, but they don't warrant a 5x price increase anymore. Because this is what an extended validation certificate is supposed to look like on Chrome.

And this is what it looks like for some users that are part of a Chrome "experiment".

Notice the difference?

It looks exactly the same as a free Let's Encrypt certificate, like the one we use on Oh Dear!. That green bar -- the one we paid extra for -- is gone.

Those part of the Chrome experiment will notice this message in their Developer Console.

As part of an experiment, Chrome temporarily shows only the lock icon in the address bar.
Your SSL certificate with Extended Validation is still valid.

My feeling is it won't be temporary. There's little to no added value to EV certificates; users don't look at them. From a technical point of view, they're also just certificates. They encrypt your traffic just like a Let's Encrypt certificate would.

Today, I wouldn't bother buying Extended Validation certificates anymore. I wouldn't even renew them anymore and go for automated, often-rotated, Let's Encrypt certificates instead.

(Oh, and if you're going that route, give Oh Dear! a try to help monitor your expiration dates and chains. Just to feel safe.)

The post The end of Extended Validation certificates appeared first on ma.ttias.be.

March 29, 2019

This is the final episode of an adventure that will have stretched over several years. I hope you enjoyed reading it, that you will recommend it to others, and that I will get the chance to shape it into a proper book, electronic or on paper. Thank you for your loyalty throughout this story!

This morning, the older one came to get me. The babies were calm. I was confident; my power had returned.

— Come with me! the older one told me. I want you to tell your story to Mérissa. I refuse to believe she is inhuman to that point. A mother-to-be cannot remain unmoved; she will understand, she will act.

I said nothing. I followed him silently through the city to that large room with a pregnant one. When the younger one suddenly burst in with a naked young woman, I quietly stepped back. I know my power lets me go unnoticed, keeps attention away from me.

They talked for an eternity. But I have learned patience. I let them be. I was confident. The power would whisper to me when the time came to act.

Hell suddenly broke loose. My nightmares became a new form of reality.

I smiled.

It was there, familiar, present, oozing. Fear! My fear.

Without forcing, without anger, I drove the thin metal rod into the back of the younger one. Then of the older one. A simple shaft I had torn from a piece of furniture in the apartment where I had stayed, and hidden in my sleeve.

The babies were screaming and dancing, but this time it was not me they were looking at. The pregnant one was staring at a screen, trying to type on a keyboard. The younger woman was holding her up. I drove the rod into her neck.

She brought her hands to her throat before turning to me with a look of utter surprise. Her lips formed a few words.
— The disruptive element, the unforeseen…
She collapsed, knocking over the pregnant one, who fell to the floor screaming.

I came closer to her.

She was moaning, trying to calm herself with short, jerky breaths. I had seen workers give birth before; it left me cold.

With a flick of her finger, she motioned me to come closer. I complied.
— What… What is your name? she panted.
— 689, I answered mechanically.

Despite her dire situation, she oozed authority. Power seemed to pour straight out of her voice, her face. I adored her, worshipped her.
— 689, she murmured, if you press the biggest key on the keyboard, you will destroy the master of the world. The command has been typed; it only needs to be confirmed.

Power. Immense power.

Slowly, I straightened up, contemplating the keyboard, the screen.

I found the key. I saw the screen. I raised my finger. I hesitated.

Then I looked at the woman screaming while clutching her belly. A small, wet, slimy head was showing between her thighs. The mother’s cries drowned out the nightmare of the room.

— Press it! she shouted. Press it now!

612 stood before me, his face twisted by pain and by the blow, but his eyes sparkling with mischief.

— One of you will see Earth. He will save it. The Chosen One! Press it!

My life began to flash before my eyes. The pain, the humiliation, the factory. Becoming G89. Killing the old one. Approaching the foreman. Seeing space. Earth. Winning the trust of the old and the young Earthling. Killing the young Earthling, who was a little too perceptive. Staying hidden in the apartment. Facing my nightmares. Witnessing the resurrection of the younger one. And then becoming the master of the world?

— Fulfil your destiny! old 612 ordered me. Press the key, save the Earth!

On the floor, the blonde woman was panting softly, eyes haggard, legs spread. A silent baby squirmed beside her while a second tiny skull was making its appearance into the hell of life.

At my feet, what had been the mistress of the world was convulsing in the throes of childbirth.

— Become… Become the master of the world! she stammered. Press it!
— Press it! 612 begged me.

But I had understood. A mother will do anything for her children. A mother would never hand me the reins of the world.

Slowly, I sat down in front of the keyboard and the screen. There it was, the true master of the world. The one everyone feared. The one that made fear seep into minds, that orchestrated building, buying, destroying, selling, the endless consumerist cycle slowly consuming the planet.

Silence had returned to the room. The nightmare had fallen quiet. 612 had vanished, driven out of my mind by my newfound lucidity. All that remained were corpses, a dying woman in labour and two newborns.

I contemplated my work. The naked woman groaned, brought her hand to her throat and tried to get up. Without success.

I smiled.

Fear, my faithful counsellor, my old friend. I will obey you. I am your humble servant.

Slowly, I moved away from the keyboard and the key. I worshipped the screen, the true master of the world. But it knew that, with a single press, I could switch it off. The world had found its balance again. I would place myself in the service of the master of the world and of my fear.

A grey shape jumped onto the desk, next to the screen.
— Meow! it said.

I jumped.
— Meow! it insisted.

It curled back its lips, showing me tiny white teeth. A hiss burst out of that small furry body.
— Lancelot, murmured the woman in labour. Mummy’s little Lancelot…

Incredulous, I looked away. But gently, without even seeming to pay attention, the beast began to walk across the desk. Its paw pressed the key on the keyboard. Lines scrolled across the screen at full speed before coming to a stop. Nothing happened. Was it a trick of my mind, or had the light flickered for a brief instant?

With a deep gurgle, the naked woman managed to get to her knees, her body covered in blood.

On the floor, the two babies suddenly began to cry. Above me, the ceiling suddenly let through a veil of sky, a blue too pale, too bright.

Photo by Grant Durr on Unsplash

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE license.

March 28, 2019

I published the following diary on isc.sans.edu: “Running your Own Passive DNS Service“:

Passive DNS is not new but remains a very interesting component to have in your hunting arsenal. As defined by CIRCL, a passive DNS is “a database storing historical DNS records from various resources. The historical data is indexed, which makes it searchable for incident handlers, security analysts or researchers”. There are plenty of existing passive DNS services: CIRCL, VirusTotal, RiskIQ, etc. I’m using them quite often but, sometimes, they simply don’t have any record for a domain or an IP address I’m interested in. If you’re working for a big organization or a juicy target (depending on your business), why not operate your own passive DNS? You’ll collect data from your network that will represent the traffic of your own users… [Read more]

[The post [SANS ISC] Running your Own Passive DNS Service has been first published on /dev/random]

March 26, 2019

I don't use Google Analytics or any other web analytics service on dri.es. Why not? Because I don't desire to know how many people visit my site, where they come from, or what operating system they use.

Because I don't have a compelling reason to track my site's visitors, I don't have to bother anyone with a "cookies consent" popup either. That is a nice bonus because the web is littered with those already. I like that dri.es is clutter-free.

This was all well and good until a couple of weeks ago, when I learned that when I embed a YouTube video in my blog posts, Google sends an HTTP cookie to track my site's visitors. Be damned!

After some research, I discovered that YouTube offers a privacy-enhanced way of embedding videos. Instead of linking to youtube.com, link to youtube-nocookie.com, and no data-collecting HTTP cookie will be sent. This is Google's way of providing GDPR-compliant YouTube videos.

So I went ahead and updated all blog posts on dri.es to use youtube-nocookie.com.
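If you want to do the same on a site whose posts live in flat files (for a database-backed CMS you would do the equivalent with a search-and-replace query), a one-off run along these lines does the trick. The content/posts/ path is just an example and GNU sed is assumed; back things up first:

$ grep -rl 'www.youtube.com/embed' content/posts/ \
    | xargs sed -i 's#www\.youtube\.com/embed#www.youtube-nocookie.com/embed#g'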

In addition to improving privacy, this change also makes my site faster. I used https://webpagetest.org to benchmark a recent blog post with a YouTube video.

Before:

A waterfall diagram that shows requests and load times before replacing youtube.com with youtube-nocookie.com. When embedding a video using youtube.com, Google uses DoubleClick to track your users (yellow bar). A total of 22 files were loaded, and the total time to load the page was 4.4 seconds (vertical blue line). YouTube makes your pages slow, as the vast majority of requests and load time is spent on loading the YouTube video.

After:

A waterfall diagram that shows requests and load times after replacing youtube.com with youtube-nocookie.com. When using youtube-nocookie.com, Google no longer uses DoubleClick to track your users. No HTTP cookie was sent, "only" 18 files were loaded, and the total page load time was significantly faster at 2.9 seconds (vertical blue line). Most of the load time is still the result of embedding a single YouTube video.

So on Feb 25th my Do Not Donate-page was featured on Hacker News and that obviously brought some extra page-views.

Here are some more numbers for that memorable day;

  • Most popular pages:
    1. Do not donate: 10 013
    2. Homepage/ archives: 1 108
    3. about:futtta: 235
  • Referrers:
    1. Hacker News 7 978
    2. Facebook 112
    3. Search Engines 84
  • Outgoing links:
    1. https://en.wikipedia.org/wiki/Flanders 959
    2. https://en.wikipedia.org/wiki/List_of_countries_by_inequality-adjusted_HDI#List 809
    3. https://profiles.wordpress.org/futtta 596
    4. https://www.kiva.org 87

And my server? Even at the busiest time (around 10-11 AM UTC+1) it quietly hummed along with a 0.11 system load :-)

March 24, 2019

The post Archiving the bitcoin-dev mailing lists appeared first on ma.ttias.be.

I've started yet another effort to index and archive a public mailing list in order to present it in a more readable, clean format.

The road to mailing lists

Why do I keep being drawn towards efforts to parse & present all these mailing lists?

Well, looking back at older posts, I think this piece of knowledge I apparently had in 2015 sums it up pretty well.

There are no (well: very little) trolls on mailing lists. Those who take the effort of signing up to a mailing list aren't doing it to curse at others or to be violent. They do so to stay informed, to interact and to help people.

This still holds true for me regarding mailing lists: quality content, smart & dedicated people and overall an attitude of helpfulness towards others. Something that's very rare in Reddit or Hacker News discussions.

In 2016 I started an e-mail archive and cancelled it again almost 2 years later. The main reason is that tooling like MHonArc, Pipermail, ... is just really bad. I couldn't find a proper alternative in all these years, so I'm building my own this time.

Solving the mailing list readability problem

What bothers me about mailing lists is the way we browse and look at them online. It's an ugly format, split and archived per month which makes you lose threads if they happen to span multiple months.

Most of us consume mailing lists via -- can you guess it? -- email, obviously. But if you want to share a story posted on a mailing list, I'd want it to be easily readable.

I don't claim to be particularly good at design, but anything is better than pre-formatted text wrapped in pre HTML tags.

The end of mailing list support at the Linux Foundation

One thing I learned from the mailing list is that the Linux Foundation is slowly deprecating their support for email.

The Bitcoin mailing lists will migrate to groups.io as announced on the bitcoin-dev list. For mailing list users not much should change -- it's still a mailing list (I think?).

However, it presented me with yet another opportunity to go ahead and create my own online archive.

Mirroring bitcoin-dev, bitcoin-core-dev and bitcoin-discuss

I created a new repository that handles the parsing and displaying of the mailing list (and soon, other Bitcoin related projects): github.com/mattiasgeniar/CommunityBitcoin.

The name needs work, but it's the best I could think of.

The mailing lists are now mirrored here: mojah.be/mailing-list. The domain mojah.be refers to an old World of Warcraft character I had. Since I couldn't decide on a proper name yet, it's now hosted on that domain I had lying around for years and did nothing with.

The project features a couple of things I appreciate;

  • A one-page view of an email thread, that can span across multiple months (example)
  • Gravatar support (example)
  • A filter by email author (threads + messages, example)

There are some more features I'd happily accept contributions for. I think an RSS feed would be nice; it opens the way for IFTTT-style automation and a Twitter bot. Pagination is also a must since pages get really large.

The goal now is to experiment with the Bitcoin protocol and use this repository as a playground to throw some stuff online and see what sticks.

I'd be more than happy to accept PRs to this project to add functionality!

The post Archiving the bitcoin-dev mailing lists appeared first on ma.ttias.be.

March 22, 2019

The post Initial impressions on running a Bitcoin Core full node appeared first on ma.ttias.be.

For about a week now I've been running my own Bitcoin Core full node, one that keeps a full copy of the blockchain with all transactions included.

Node Discovery

When you first start up your node, the Bitcoin Core daemon bitcoind queries a set of DNS endpoints to do its first discovery of nodes. Once it connects to the first node, more peers will be exchanged and the node starts connecting to those too. That's how the network initially bootstraps.

There are about 8 DNS Seeds defined in src/chainparams.cpp. Each node returns a handful of peer IPs to connect to. For instance, the node seed.bitcoin.sipa.be returns over 20 IPs.

$ dig seed.bitcoin.sipa.be | sort
seed.bitcoin.sipa.be.	3460	IN	A	104.197.64.3
seed.bitcoin.sipa.be.	3460	IN	A	107.191.62.217
seed.bitcoin.sipa.be.	3460	IN	A	129.232.253.2
seed.bitcoin.sipa.be.	3460	IN	A	13.238.61.97
seed.bitcoin.sipa.be.	3460	IN	A	178.218.118.81
seed.bitcoin.sipa.be.	3460	IN	A	18.136.117.109
seed.bitcoin.sipa.be.	3460	IN	A	192.206.202.6
seed.bitcoin.sipa.be.	3460	IN	A	194.14.246.85
seed.bitcoin.sipa.be.	3460	IN	A	195.135.194.3
seed.bitcoin.sipa.be.	3460	IN	A	211.110.140.47
seed.bitcoin.sipa.be.	3460	IN	A	46.19.34.236
seed.bitcoin.sipa.be.	3460	IN	A	47.92.98.119
seed.bitcoin.sipa.be.	3460	IN	A	52.47.88.66
seed.bitcoin.sipa.be.	3460	IN	A	52.60.222.172
seed.bitcoin.sipa.be.	3460	IN	A	52.67.65.129
seed.bitcoin.sipa.be.	3460	IN	A	63.32.216.190
seed.bitcoin.sipa.be.	3460	IN	A	71.60.79.214
seed.bitcoin.sipa.be.	3460	IN	A	73.188.124.183
seed.bitcoin.sipa.be.	3460	IN	A	81.206.193.115
seed.bitcoin.sipa.be.	3460	IN	A	83.49.154.118
seed.bitcoin.sipa.be.	3460	IN	A	84.254.90.125
seed.bitcoin.sipa.be.	3460	IN	A	85.227.137.129
seed.bitcoin.sipa.be.	3460	IN	A	88.198.201.125
seed.bitcoin.sipa.be.	3460	IN	A	92.53.89.123
seed.bitcoin.sipa.be.	3460	IN	A	95.211.109.194

Once a connection to one node is made, that node will share some of the peers it knows about with you.

There's no simple way to get all node IPs and map the entire network. Nodes will share some information about their peers, but by doing so selectively they hide critical information about the network design and still allow for all transactions to be fairly spread across all nodes.
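You can at least see which peers your own node ended up connecting to. A quick sketch, assuming bitcoin-cli can reach your node and jq is installed:

$ bitcoin-cli getconnectioncount
$ bitcoin-cli getpeerinfo | jq -r '.[].addr'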

Initial Block Download (IBD)

With a few connections established, a new node will start to query for the blockchain state of its peers and start downloading the missing blocks.

Currently, the entire blockchain is 224GB in size.

$ du -hs .bitcoin/blocks/
224G	.bitcoin/blocks/

Once started, your node will download that 224GB worth of blockchain data. It's reasonably fast at it, too.

I was on a gigabit connection at the time; the first three fifths of the chain downloaded at about 150Mbps, the rest slightly slower at 100Mbps and later at 25Mbps.

Notice how the bandwidth consumption changes over time and lowers? There's a good reason for that too and it starts to become more obvious if we map out the CPU usage of the node at the same time.

This wasn't a one-off occurrence. I resynced the chain entirely and the effect is reproducible. More on that later.
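If you want to keep an eye on the sync yourself, getblockchaininfo exposes a rough progress indicator (field names as in recent Bitcoin Core releases; jq just trims the output):

$ bitcoin-cli getblockchaininfo | jq '{blocks, headers, verificationprogress, size_on_disk}'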

Disk consumption

Zooming in a bit, we can see the disk space is consumed gradually as the node syncs.

Also notice how, as the CPU usage starts to spike in the chart above, the disk consumption rate slows down.

It looks like a more efficient algorithm kicks in at that point, one that taxes the CPU more for block validation but gives us a more efficient storage method on disk.

Looking at the transaction timestamps in the logs, as soon as transactions around 2018-07-30 (July 30th, 2018) are processed, CPU spikes.

The IOPS appear to confirm this too: the number of I/O operations drops as the CPU intensity increases, indicating writes and reads to disk are slower than usual.

At first glance, this is a good thing. Syncing the chain becomes more CPU intense from that point forward, but as the block validation needs to happen only once when doing the initial block download, the disk space saved remains forever.

Thoughts on the block size

There's quite a lot of debate about the block size in Bitcoin: bigger blocks allow for more data to be saved and would allow for more complicated scripts or even smart contracts to exist on the chain.

Bigger blocks also mean more storage consumption. If the chain becomes too big, it becomes harder to run one on your own.

Because of this, I'm currently in the "smaller blocks are better" camp. While disk space is becoming cheaper and cheaper, a cloud server with more than 250GB of disk space quickly costs you $50/month, and that adds up over time.

We can't change the current blockchain size (I think?), but we can prevent it from getting too large by thinking about what data to store on-chain vs. off-chain.

Setting up your own node

Want to get your hands dirty with Bitcoin? One of the best ways to get started is running your own node and getting some experience.

If you're on CentOS, I dedicated a full article on setting up your own node: Run a Bitcoin Core full node on CentOS 7.

If you don't want to dedicate ~250GB of storage, you can limit the disk consumption by keeping only the newest blocks. For more details, see here: Limit the disk space consumed by Bitcoin Core nodes on Linux.
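The short version, in case you just want to try it: start bitcoind in pruned mode. The value is in MiB (550 is the minimum Bitcoin Core accepts), so something like this keeps roughly the most recent 50GB of blocks:

$ bitcoind -daemon -prune=50000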

The post Initial impressions on running a Bitcoin Core full node appeared first on ma.ttias.be.

On Facebook someone asked me how to do Gutenberg the right way to avoid loading too much JS on the frontend. This is a somewhat better organized version of my answer;

I’m not a Gutenberg specialist (far from it, really) but:

  • the wrong way is adding JS with wp-blocks/wp-element and other Gutenberg dependencies on init by calling wp_enqueue_script,
  • the right way is either hooking into enqueue_block_editor_assets (see https://jasonyingling.me/enqueueing-scripts-and-styles-for-gutenberg-blocks/)
  • or when using init doing wp_register_script and then register_block_type referring to the correct editor_script previously registered (see https://wordpress.org/gutenberg/handbook/designers-developers/developers/tutorials/block-tutorial/writing-your-first-block-type/).

I’ve tried both of these on a “bad” plugin and can confirm both solutions do prevent those needless wp-includes/js/dist/* JS-files from being added on the front-end.
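A quick way to check whether a site is still pulling those editor bundles in on the front end is to grep the rendered HTML for wp-includes/js/dist scripts; example.com is obviously a placeholder:

$ curl -s https://example.com/ | grep -o 'wp-includes/js/dist/[a-z0-9.-]*\.js' | sort -u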

March 21, 2019

I’m in Washington, waiting for my flight back to Belgium. I just attended the 2019 edition of the OSSEC Conference, held, more precisely, close to Washington in Herndon, VA. This was my first one and I was honoured to be invited to speak at the event. OSSEC is a very nice project that I’ve been using for a long time. I also contributed to it and I’m giving training on this topic. The conference has been organized for a few years now and attracts more people every year; they doubled the number of attendees for the 2019 edition.

The opening session was performed by Scott Shinn, OSSEC Project Manager, who started with a recap. The project started in 2003 and was first released in 2005. It supports a lot of different environments and, basically, if you can compile C code on your device, it can run OSSEC! Some interesting facts were presented by Scott. What is the state of the project? OSSEC is alive, with 500K downloads in 2018 and trending up. A survey is still ongoing but already demonstrates that many users are long-term users (31% have been using OSSEC for >5y). If the top user profile remains infosec people, the second profile is IT operations and devops. There is now an OSSEC foundation (a non-profit organization) with multiple goals: promote OSSEC, probably start a bug bounty, attract more developers and strengthen the project. There is an ongoing effort to make the tool more secure with an external audit of the code.

Then, Daniel Cid presented his keynote. Daniel is the OSSEC founder and reviewed the story of his baby. Like many of us, he was facing problems in his daily job and did not find the proper tool, so he started to develop OSSEC. There were already some tools here and there like Owl, Syscheck or OSHIDS. Daniel integrated them and added a network layer and the agent/server model. He reviewed the very first versions, from 0.1 until 0.7. Funny story: some people asked him to stop flooding the mailing list where he announced all the versions and suggested he contribute to the Tripwire project instead.

Then, Scott came back on stage to talk about the Future of OSSEC. Sometimes, when I mention OSSEC, people’s first reaction is to argue that OSSEC does not improve or does not have a clear roadmap. Really? Scott gave a nice overview of what’s coming soon. Here is a quick list:

  • Dynamic decoders – OSSEC will implement user-defined variable names. They will be configured via a KV store represented in JSON. The next step will be to implement output transports to other formats, to replace tools like Filebeat, ArcSight, Splunk agents, etc.
  • Real-time threat intelligence – Instead of using CDB lists (which must be regenerated at regular intervals), OSSEC will be able to query threat intelligence lists on the fly, in the same way GeoIP lookups work.
  • GOSSEC – Golang OSSEC. agent-auth has already been ported to Golang.
  • Noisesocket – To replace the existing encryption mechanism between the OSSEC server and agents.
  • A new web management console

Most of these new features should be available in OSSEC 3.3.

The next presentation was about “Protecting Workloads in Google Kubernetes with OSSEC and Google Cloud Armor” by Ben Auch and Joe Miller, who work at Gannett, the media company behind USA Today. The company operates a huge network with 140M unique visitors monthly, 120 markets in the US and a worldwide presence. As a media company, they are often targeted (defacement, information change, fake news, etc). Ben & Joe explained how they successfully deployed OSSEC in their cloud infrastructure to detect malicious requests to GKE containers and automatically block attackers with a bunch of Active-Response scripts. The biggest challenge was to remain independent of the cloud provider and to access logs in a simple but effective way.

Mike Shinn, from Atomicorp, came to speak about “Real Time Threat Intelligence for Advanced Detection“. Atomicorp, the organizer of the conference, provides OSSEC professional services and is also working on extensions. Mike demonstrated what he called “the next-generation Active-Response”. Today, this OSSEC feature accesses data from CDB lists, but it’s not real-time. The idea is to collect data from OSSEC agents installed in multiple locations and multiple organizations (similar to what dshield.org is doing) and to apply some machine-learning magic. The idea is also to replace the CDB lookup mechanism with something more powerful and real-time: DNS lookups. A really interesting approach!

Ben Brooks, from Beryllium Infosec, presented “A Person Behind Every Event“. This talk was not directly related to OSSEC but interesting anyway. Tools like OSSEC work with rules and technical information – IP addresses, files, URLs – but what about the people behind those alerts? Are we facing real attackers or rogue insiders? Who’s the most critical? The presentation focused on the threat intelligence cycle:
Direction > Collection > Processing > Analysis > Dissemination

The next two talks had the same topic: automation. Ken Moini, from Fierce Software Automation, presented “Automating Security Across the Enterprise with Ansible and OSSEC“. The idea behind the talk was to solve the problems that most organizations are facing: people problems (skills gaps), point tools (proliferation of tools and vendor solutions) and the pace of innovation. Mike Waite, from Red Hat, spoke about “Containerized software for a modern world, the good, the bad and the ugly“. A few years ago, the ecosystem was based on many Linux flavours. Today, we have the same issue but with many flavours of Kubernetes. It’s all about applications. If applications can be easily deployed, software vendors are also becoming Linux maintainers!

The next presentation was delivered by Andrew Hay, from LEO Cybersecurity: “Managing Multi-Cloud OSSEC Deployments“. Andrew is a long-time OSSEC advocate and co-wrote the book “OSSEC HIDS Host Based Intrusion Detection Guide” with Daniel Cid. He presented tips & tricks to deploy OSSEC in cloud services and how to generate configuration files with automation tools like Chef, Puppet or Ansible.

Mike Shinn came back with “Atomic Workload Protection“. Yesterday, organizations’ business was based on a secure network of servers. Tomorrow, we’ll have to use a network of secure workloads. Workloads must be secured, and cloud providers can’t do everything for us. Cloud providers take care of the security of the cloud, but security IN the cloud remains the customers’ responsibility! Gartner said that, by 2023, 99% of cloud security failures will be the customer’s fault. Mike explained how Atomicorp developed extra layers on top of OSSEC to secure workloads: hardening, vulnerability shielding, memory protection, application control, behavioral monitoring, micro-segmentation, deception and AV/anti-malware.

The next slot was mine; I presented “Threat Hunting with OSSEC“.

Finally, the last presentation was given by Dmitry Dain, who presented NoiseSocket, which will be implemented in the next OSSEC release. The day ended with a quick OSSEC users panel and a nice social event.

The second day was mainly a workshop. Scott prepared some exercises to demonstrate how to use some existing features of OSSEC (FIM, Active-Response) but also the new feature called “Dynamic Decoder” (see above). I met a lot of new people who are all OSSEC users or contributors.

[The post OSSEC Conference 2019 Wrap-Up has been first published on /dev/random]

JSON:API being dropped into Drupal by crane

Breaking news: we just committed the JSON:API module to the development branch of Drupal 8.

In other words, JSON:API support is coming to all Drupal 8 sites in just a few short months! 🎉

This marks another important milestone in Drupal's evolution to be an API-first platform optimized for building both coupled and decoupled applications.

With JSON:API, developers or content creators can create their content models in Drupal’s UI without having to write a single line of code, and automatically get not only a great authoring experience, but also a powerful, standards-compliant, web service API to pull that content into JavaScript applications, digital kiosks, chatbots, voice assistants and more.

When you enable the JSON:API module, all Drupal entities such as blog posts, users, tags, comments and more become accessible via the JSON:API web service API. JSON:API provides a standardized API for reading and modifying resources (entities), interacting with relationships between resources (entity references), fetching of only the selected fields (e.g. only the "title" and "author" fields), including related resources to avoid additional requests (e.g. details about the content's author) and filtering, sorting and paginating collections of resources.
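As a rough illustration of what that looks like on the wire (paths depend on your content model; node/article here is just the standard article content type, and example.com is a placeholder), a request for the five newest articles with only a couple of fields and the author included could look like this:

$ curl -s -g -H 'Accept: application/vnd.api+json' \
    'https://example.com/jsonapi/node/article?fields[node--article]=title,created&include=uid&page[limit]=5'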

In addition to being incredibly powerful, JSON:API is easy to learn and use and uses all the tooling we already have available to test, debug and scale Drupal sites.

Drupal's JSON:API implementation was years in the making

Development of the JSON:API module started in May 2016 and reached a stable 1.0 release in May 2017. Most of the work was driven by a single developer partially in his free time: Mateu Aguiló Bosch (e0ipso).

After soliciting input and consulting others, I felt JSON:API belonged in Drupal core. I first floated this idea in July 2016, became more convinced in December 2016 and recommended that we standardize on it in October 2017.

This is why at the end of 2017, I asked Wim Leers and Gabe Sullice — as part of their roles at Acquia — to start devoting the majority of their time to getting JSON:API to a high level of stability.

Wim and Gabe quickly became key contributors alongside Mateu. They wrote hundreds of tests and added missing features to make sure we guarantee strict compliance with the JSON:API specification.

A year later, their work culminated in a JSON:API 2.0 stable release on January 7th, 2019. The 2.0 release marked the start of the module's move to Drupal core. After rigorous reviews and more improvements, the module was finally committed to core earlier today.

From beginning to end, it took 28 months, 450 commits, 32 releases and more than 5,500 test runs.

The best JSON:API implementation in existence

The JSON:API module for Drupal is almost certainly the most feature-complete and easiest-to-use JSON:API implementation in existence.

The Drupal JSON:API implementation supports every feature of the JSON:API 1.0 specification out-of-the-box. Every Drupal entity (a resource object in JSON:API terminology) is automatically made available through JSON:API. Existing access controls for both reading and writing are respected. Both translations and revisions of entities are also made available. Furthermore, querying entities (filtering resource collections in JSON:API terminology) is possible without any configuration (e.g. setting up a "Drupal View"), which means front-end developers can get started on their work right away.

What is particularly rewarding is that all of this was made possible thanks to Drupal's data model and introspection capabilities. Drupal’s decade-old Entity API, Field API, Access APIs and more recent Configuration and Typed Data APIs exist as an incredibly robust foundation for making Drupal’s data available via web service APIs. This is not to be understated, as it makes the JSON:API implementation robust, deeply integrated and elegant.

I want to extend a special thank you to the many contributors that contributed to the JSON:API module and that helped make it possible for JSON:API to be added to Drupal 8.7.

Special thanks to Wim Leers (Acquia) and Gabe Sullice (Acquia) for co-authoring this blog post and to Mateu Aguiló Bosch (e0ipso) (Lullabot), Preston So (Acquia), Alex Bronstein (Acquia) for their feedback during the writing process.

The JSON:API module was added to Drupal 8.7 as a stable module!

See Dries’ overview of why this is an important milestone for Drupal, a look behind the scenes and a look toward the future. Read that first!

Upgrading?

As Mateu said, this is the first time a new module is added to Drupal core as “stable” (non-experimental) from day one. This was the plan since July 2018 — I’m glad we delivered on that promise.

This means users of the JSON:API 8.x-2.x contrib module currently on Drupal 8.5 or 8.6 can update to Drupal 8.7 on its release day and simply delete their current contributed module, and have no disruption in their current use of JSON:API, nor in security coverage! 1
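For a Composer-managed site the mechanics are roughly as follows, assuming the contrib module was required as drupal/jsonapi; treat this as a sketch rather than the official upgrade recipe:

$ composer update drupal/core --with-dependencies
$ composer remove drupal/jsonapi
$ drush updatedb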

What’s happened lately?

The last JSON:API update was exactly two months ago, because … ever since then Gabe, Mateu and I have been working very hard to get JSON:API through the core review process. This resulted in a few notable improvements:

  1. a read-only mode that is turned on by default for new installs — this strikes a nice balance between DX (still having data available via APIs by default/zero config: reading is probably the 80% use case, at least today) and minimizing risk (not allowing writes by default) 2
  2. auto-revisioning when PATCHing for eligible entity types
  3. formally documented & tested revisions and translations support 3
  4. formally documented security considerations

Get these improvements today by updating to version 2.4 of the JSON:API module — it’s identical to what was added to Drupal 8.7!

Contributors

An incredible total of 103 people contributed in JSON:API’s issue queue to help make this happen, and 50 of those even have commits to their name:

Wim Leers, ndobromirov, e0ipso, nuez, gabesullice, xjm, effulgentsia, seanB, jhodgdon, webchick, Dries, andrewmacpherson, jibran, larowlan, Gábor Hojtsy, benjifisher, phenaproxima, ckrina, dww, amateescu, voleger, plach, justageek, catch, samuel.mortenson, berdir, zhangyb, killes@www.drop.org, malik.kotob, pfrilling, Grimreaper, andriansyahnc, blainelang, btully, ebeyrent, garphy, Niklan, joelstein, joshua.boltz, govind.maloo, tstoeckler, hchonov, dawehner, kristiaanvandeneynde, dagmar, yobottehg, olexyy.mails@gmail.com, keesee, caseylau, peterdijk, mortona2k, jludwig, pixelwhip, abhisekmazumdar, izus, Mile23, mglaman, steven.wichers, omkar06, haihoi2, axle_foley00, hampercm, clemens.tolboom, gargsuchi, justafish, sonnykt, alexpott, jlscott, DavidSpiessens, BR0kEN, danielnv18, drpal, martin107, balsama, nileshlohar, gerzenstl, mgalalm, tedbow, das-peter, pwolanin, skyredwang, Dave Reid, mstef, bwinett, grndlvl, Spleshka, salmonek, tom_ek, huyby, mistermoper, jazzdrive3, harrrrrrr, Ivan Berezhnov, idebr, mwebaze, dpolant, dravenk, alan_blake, jonathan1055, GeduR, kostajh, pcambra, meba, dsdeiz, jian he, matthew.perry.

Thanks to all of you!

Future JSON:API blogging

I’ve blogged about JSON:API roughly once a month since October 2018, to get more people to switch to version 2.x of the JSON:API module and to ensure it was maximally mature and bug free prior to going into Drupal core. New capabilities were also being added at a pretty high pace because we’d been preparing the code base for that months prior. We went from ~1700 installs in January to ~2700 today!

Now that it is in Drupal core, there will be less need for frequent updates, and I think the API-First Drupal: what’s new in 8.next? blog posts that I have been doing probably make more sense. I will do one of those when Drupal 8.7.0 is released in May, because not only will JSON:API land in it, there are also other improvements!

Special thanks to Mateu Aguiló Bosch (e0ipso) for their feedback!


  1. We’ll of course continue to provide security releases for the contributed module. Once Drupal 8.7 is released, the Drupal Security Team stops supporting Drupal 8.5. At that time, the JSON:API contributed module will only need to provide security support for Drupal 8.6. Once Drupal 8.8 is released at the end of 2019, the JSON:API contributed module will no longer be supported: since JSON:API will then be part of both Drupal 8.7 and 8.8, there is no reason for the contributed module to continue to be supported. ↩︎

  2. Existing sites will continue to have writes enabled by default, but can choose to enable the read-only mode too. ↩︎

  3. Limitations in the underlying Drupal core APIs prevent JSON:API from 100% of desired capabilities, but with JSON:API now being in core, it’ll be much easier to make the necessary changes happen! ↩︎

I published the following diary on isc.sans.edu: “New Wave of Extortion Emails: Central Intelligence Agency Case“:

The extortion attempts have moved up another step recently. After the “sextortion” emails that have been propagating for a while, attackers started to flood people with a new type of fake email, and their imagination is endless… I received one two days ago and, this time, they go one step further. In many countries, child pornography is, of course, a very serious offense punished by law. What if you received an email from a Central Intelligence Agency officer who reveals that you’re listed in an international investigation about a case of child pornography and that you’ll be arrested soon… [Read more]

[The post [SANS ISC] New Wave of Extortion Emails: Central Intelligence Agency Case has been first published on /dev/random]

March 19, 2019

The post MySQL 8 & Laravel: The server requested authentication method unknown to the client appeared first on ma.ttias.be.

For local development I use Laravel Valet. Recently, the brew packages have been updated to MySQL 8, which changed a few things about its user management. One thing I keep running into is this error when working with existing Laravel applications.

 SQLSTATE[HY000] [2054] The server requested authentication method unknown to the client

So, here's the fix. You can create a user with the "old" authentication mechanism, which the MySQL database driver for PHP still expects.

CREATE USER 'ohdear_ci'@'localhost' IDENTIFIED WITH mysql_native_password BY 'ohdear_secret';
GRANT ALL PRIVILEGES ON ohdear_ci.* TO 'ohdear_ci'@'localhost';

If you already have an existing user with permissions on databases, you can modify that user instead.

ALTER USER 'ohdear_ci'@'localhost' IDENTIFIED WITH mysql_native_password BY 'ohdear_secret';

After that, your PHP code can once again connect to MySQL 8.
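If you want to double-check which authentication plugin a user ends up with, the mysql.user table shows it. A quick one-liner, run as a user that can read the mysql schema (add -p if your root user has a password):

$ mysql -u root -e "SELECT user, host, plugin FROM mysql.user WHERE user = 'ohdear_ci';"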

The post MySQL 8 & Laravel: The server requested authentication method unknown to the client appeared first on ma.ttias.be.

March 18, 2019

Over the past couple of months, since the release of WordPress 5.0 which includes Gutenberg, the new JavaScript-based block editor, I have seen many sites loading a significant amount of extra JavaScript from wp-includes/js/dist on the frontend due to plugins doing it wrong.

So dear plugin-developer-friends; when adding Gutenberg blocks please differentiate between editor access and visitor access, only enqueue JS/CSS if needed to display your blocks and when registering for the front-end please please frigging please don’t declare wp-blocks, wp-element, … and all of those other editor goodies as dependencies unless you’re 100% sure this is needed (which will almost never be the case).

The performance optimization crowd will thank you for being considerate and -more likely- will curse you if you are not!

March 15, 2019

While claiming to save it. And why they are the worst possible role models for our children.

I hate superhero movies. I despise this abject fad that has steered half of the Internet’s conversations toward DC versus Marvel, that has created a generation of trailer exegetes waiting for the next “event movie” to be served up by the relentless machine of overpriced schmaltz and turkeys called Hollywood.

First, because of that eternal caricature of good versus evil, that exhausting Manichaeism they now try to disguise by showing that the good guy has to do bad things, that he doubts! But, fortunately, the viewer never doubts. He knows perfectly well who is good (the one fighting the bad guy) and who is bad (the one trying to do Evil, with a capital E, but with no real other motivation, which makes the character completely absurd). The good guy only comes out better for it; it is frightening in its stupidity, its weak writing. It is terrifying in what it implies for our societies. What is Good is Good, obviously, and cannot be questioned. Evil is always the other.

But beyond this intellectual destitution buried under heaps of explosions and special effects, what saddens me most about this whole universe is the underlying message, the odious idea that runs through this entire slice of fiction.

Because fiction is both the mirror of our society and the vehicle of our values, our desires, our impulses. Fiction represents what we are and shapes us at the same time. Whoever controls fiction controls dreams, identities, aspirations.

The blockbusters of the nineties, from Independence Day to Armageddon by way of Deep Impact, all staged a planetary catastrophe, a total threat to the species. And, in every case, humanity pulled through thanks to cooperation (cooperation usually firmly led by the United States, with nauseating whiffs of patriotism, but cooperation all the same). The distinctive feature of the nineties hero? They were all Mr and Mrs Average. Well, mostly Mr. And American. But every time, the script insisted heavily on his normality, on the fact that it could be you or me and that he was a family man.

The message was clear: the United States will unite the world to fight catastrophes; every individual is a hero and can change the world.

During my teenage years, superhero movies were hopelessly uncool. There was not the slightest shadow of realism. The fluorescent costumes were far from filling theatres and, above all, did not dominate conversations.

Then came Batman Begins which, according to every review of the time, changed the game. From then on, superhero movies aimed to be more realistic, more human, darker, grittier. The hero was no longer squeaky clean.

But, in essence, a superhero is neither human nor realistic. He can of course be made darker by changing the lighting and replacing the fluorescent costume. For the rest, appearances will have to do. A pinch of explanation from an actor in a white lab coat, for pseudo-scientific flavour, will provide the touch of realism. For the human side, the superhero will be shown facing doubt and feeling caricatures of emotions: anger, the urge to hurt Evil, the fear of failing, a vague sexual impulse passing for love. But he remains a superhero, the only one capable of saving the planet.

The viewer no longer has any grip on the story, on the threat. He is now part of that anonymous crowd content to cheer the superhero, to wait for him, or even to serve, with a smile, as collateral damage. Because the modern superhero often causes more destruction than the aliens of Independence Day. No matter: it is all for the preservation of Good.

From now on, to save the world you need a superpower. Or you need to be super rich. If you have neither, you are nothing but cannon fodder: get out of the way and try not to be a nuisance.

It is quite simply terrifying.

The world these universes reflect back at us is a passive world of acceptance, where nobody tries to understand what lies beyond appearances. A world where everyone meekly waits for Super Good to come and defeat Super Evil, backside bolted to the chair of their dull, grey little job.

The evocative power of these universes is such that the actors who play superheroes are idolised, applauded even more than their avatars because, as the pinnacle of Super Good, they put on their costume to go and spend a few hours with sick children. The heroes of our imagination are multimillionaire entertainers who, between two brainwashing advertising shoots, agree to devote a few hours to sick children under the gaze of the cameras!

Through countless pieces of merchandise and costumes, we reinforce this Manichaean imagination in our offspring. While our greatest hope should be to teach the young to be themselves, to discover their own powers, to learn to cooperate at scale, to cultivate complementarity and concern for the common good, we prefer to brag about the super nice superhero costume we made them. Because it looks super good on Instagram, because for a few likes you become a super dad or a super mum.

The rest of society is up for auction. Stop collaborating; become a superhero of entrepreneurship, a superhero of the environment by sorting your waste, a rockstar of programming!

It is super pathetic…

Photo by TK Hammonds on Unsplash

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE license.

March 14, 2019

The monorail’s individual cabin dropped me off a few metres from the entrance of the Company building. The wide glass doors slid open one after another to let me through. I knew I had been recognised, scanned, identified. The era of badges was well and truly over. All of this seemed normal to me. It was supposed to be just another working day.

The colossal patio was teeming with people who, like me, wore the company’s unofficial uniform. Grey trousers over unlaced sneakers, a pair of colourful braces, a shirt with the collar left falsely open in a carefully crafted effort to seem indifferent to appearance, a thick beard, round glasses. Improbable pleasure-seeking dandies, epigones of hypocritical modern productivism.

Through the glazed expanses of the roof, light poured in, giving the gigantic whole the feel of a too-perfect simulation presented by an architecture firm. At regular intervals, plants and trees in huge, gleaming white basins broke the flow of workers with a layout that owed nothing to chance. The cleaning robots and the immigrants hired by the maintenance service left not a single scrap of paper on the floor, not a single cigarette butt. Besides, the Company had not hired smokers in years.

I spotted the wide glass towers of the lifts. They rose nearly half a kilometre away, an adamantine beacon becalmed in this strange futuristic cloister. I deliberately ignored an electric scooter which, knowing my usual route, came to offer its services. I felt like walking a little, past the windows of the meeting rooms and the gyms where some of my colleagues were already pedalling with a morning enthusiasm I had always found out of place before my first cup of kombucha of the day.

A soft voice began to speak above my head, clear, intelligible, disembodied, sexless.

— Due to a technical problem with the lifts, we advise taking the stairs whenever possible.

I reached the foot of the towers of glass and metal. The voice insisted.

— Due to a technical problem, using the lifts is not recommended, but remains possible.

I had crossed the building on foot and had no desire to walk down some thirty floors by the stairs. Without consciously admitting it, a certain morbid curiosity pushed me to see with my own eyes what kind of problem could make using a lift possible, but not recommended.

I stepped into the spacious cabin with a rather pot-bellied fellow in a beige suit and, the height of bad taste, a tie, as well as a lady in a navy blue suit, with wide glasses and a severe bun. We did not speak to one another, entering that enclosed space together as though each of us were alone, as though the slightest exchange were a profane vulgarity.

The shiny walls glowed with perfectly calibrated artificial light. As usual, I did not immediately realise that the doors had silently closed and that we had begun our descent.

Light music subtly tried to brighten the atmosphere while each of us applied a different strategy to avoid, at all costs, meeting the others’ eyes. The man kept his clean-shaven, thick-browed face completely impassive, his gaze stubbornly fixed on the opposite wall. The woman kept her eyes riveted on the leather bag she had set at her feet. She clutched a binder against her chest like a castaway clinging to a lifebuoy. For my part, I studied the edges of the ceiling as if discovering them for the first time.

The light dimmed noticeably as we went down, as if to remind us that we were sinking into the chthonic bowels of the planet.

When we stopped at -34, the man in the suit had to clear his throat to make me step aside, because the cabin had narrowed slightly.

The dive resumed. The dimming of the light and the narrowing were becoming very noticeable. At -78, the lady’s floor, we were moving in a greyish half-light. By spreading my arms, I could have touched both walls.

I was now alone, as if the lift had not recognised me and was unaware of my presence. An irrational impulse made me decide to go as deep as possible. A simple fit of curiosity. After all, I had been working for the Company for years and had never gone down this far.

The light kept fading, but I noticed that my travelling companion had forgotten her leather bag. I could barely make out the walls, which I could now touch with my fingers. On the glowing counter, which was getting closer and closer to me, the floors scrolled by more and more slowly.

I felt my shoulders rubbing and had to turn sideways to avoid being crushed. I placed the bag at face height and could very quickly let go of it, as it was held in place by the sheer pressure the walls exerted on it. The cabin now squeezed me from all sides: shoulders, back and chest. My breathing was getting difficult when total darkness fell. The shadows wrapped around me. Only the counter still glowed faintly, settling on -118.

Calmly, the certainty that I was going to die of suffocation took hold of me. This was surely the problem the voice had warned me about. I had not listened; I was paying the price. It was logical; there was nothing to be done.

In an oppressive silence, I realised that the wall to my right was a little less dark. By contorting myself, I managed to slip under the bag, now half crushed. The door was open. I took a few steps out of the cabin into a dank, clammy gloom. I could make out grey felt partitions reaching mid-chest, marking off little spaces where colleagues were busying themselves. They wore shirts I perceived as grey, ties and sleeveless vests. The weak glow of old cathode-ray tubes reflected in their glasses. The conversations were soft, muffled. I felt like a stranger; nobody paid any attention to me.

In a corner, an old dot-matrix printer spat out pages of cryptic characters with its shrill hissing.

Like a sleepwalker, I wandered around, a stranger to this world. Or at least, I hoped so.

After some hesitation, I returned to my place, squeezing with some difficulty back into the cabin, whose door had not closed, as if it had been waiting for me.

Once again, darkness. Oppression. But not for long. I was breathing. The walls were moving apart; I could make out a faint glow. I was going back up; I was being reborn.

The numbers scrolled faster and faster on the counter. When they stopped at 0, I smoothed out my shirt and, the leather bag in one hand, rushed into the bright rays of filtered sunlight.

Above my head, the disembodied voice carried on with its peroration.
— Due to a technical problem with the lifts, we advise taking the stairs whenever possible.

I started running, laughing. From the balconies to the gyms, every head turned as I passed. I paid little attention. I laughed, I ran until I was out of breath. A few remarks flew, but I did not hear them.

Shoving past a guard, I went through the series of double doors and out of the building, out of the Company. It was raining; the sky was grey.

With all my strength, I hurled the leather briefcase. It opened at the top of its arc, scattering sheets, index cards and other notes to the winds, sketching a parody of autumn on the black asphalt of the soaked road.

I sat down on the edge of the pavement, eyes closed, breathing in deep the scent of petrichor while raindrops streamed over my smile.

Ottignies, 22 February 2019. First short story written on the Freewrite, in less than 2 days. Dream of 14 July 2008.
Photo by Justin Main on Unsplash

I am @ploum, speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE license.

March 13, 2019

Autoptimize 2.5 is almost ready! It features a new “Images”-tab to house all Image optimization options, including support for lazy-loading images and WebP (the only next-gen image format that really matters, no?);

So download the beta and test lazy-loading and WebP (and all of the other changes) and let me know of any issue you might find!

March 12, 2019

Today, the world wide web celebrates its 30th birthday. In 1989, Sir Tim Berners-Lee invented the world wide web and changed the lives of millions of people around the globe, including mine.

Tim Berners-Lee sitting in front of a computer showing the first website. Tim Berners-Lee, inventor of the World Wide Web, in front of the early web.

Milestones like this get me thinking about the positive impact a free and Open Web has had on society. Without the web, billions of people would not have been able to connect with one another, be entertained, start businesses, exchange ideas, or even save lives. Open source communities like Drupal would not exist.

As optimistic as I am about the web's impact on society, there have been many recent events that have caused me to question the Open Web's future. Too much power has fallen into the hands of relatively few platform companies, resulting in widespread misinformation, privacy breaches, bullying, and more.

However, I'm optimistic that the Open Web has a chance to win in the future. I believe we'll see three important events happen in the next five years.

First, the day will come when regulators will implement a set of laws that govern the ownership and exchange of data online. It's already starting to happen with GDPR in the EU and various state data privacy laws taking shape in the US. These regulations will require platforms like Facebook to give users more control over their data, and when that finally happens, it will be a lot easier for users to move their data between services and for the Open Web to innovate on top of these data platforms.

Second, at some point, governments globally will disempower large platform companies. We can't leave it up to a handful of companies to judge what is false and true, or have them act as our censors. While I'm not recommending governments split up these companies, my hope is that they will institute some level of algorithmic oversight. This will offer an advantage to the Open Web and Open Source.

Third, I think we're on the verge of having a new set of building blocks that enable us to build a better, next-generation web. Thirty years into the web, our data architectures still use a client-server model; data is stored centrally on one computer, so to speak. The blockchain is turning that into a more decentralized web that operates on top of a distributed data layer and offers users control of their own data. Similar to building a traditional website, distributed applications (dApps) require file storage, payment systems, user data stores, etc. All of these components are being rebuilt on top of the blockchain. While we have a long way to go, it is only a matter of time before a tipping point is reached.

In the past, I've publicly asked the question: Can we save the Open Web? I believe we can. We can't win today, but we can keep innovating and get ready for these three events to unfold. The day will come!

With that motivation in mind, I want to wish a special happy birthday to the world wide web!