Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

January 21, 2019

OpenStack is a nice platform for deploying Infrastructure as a Service. It is a collection of projects and can be a bit difficult to set up. The documentation is really great if you want to set up OpenStack by hand, and there are a few OpenStack distributions that make it easier to install.

Ansible is a very nice tool for system automation and is relatively easy to learn.

Wouldn’t it be nice if we could make the OpenStack installation easier with Ansible? That’s exactly what OpenStack-Ansible does.

In this blog post we’ll set up an “all-in-one” OpenStack installation on CentOS 7. The installer deploys OpenStack into LXC containers; it’s a nice way to learn how OpenStack works and how to operate it.


System requirements

I use a CentOS 7 virtual system running as a KVM instance with nested KVM virtualization enabled. The minimum requirements are:

  • 8 CPU cores
  • 50 GB of free diskspace
  • 8GB RAM
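Before starting, a quick pre-flight check can save a failed run later. This is just a sketch, not part of the official installer; the thresholds mirror the minimums listed above:

```shell
# Pre-flight check against the minimums above: 8 cores, 8 GB RAM,
# 50 GB free disk. Prints a warning for anything that falls short.
cores=$(nproc)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
disk_gb=$(df --output=avail -BG / | tail -1 | tr -dc '0-9')

[ "$cores" -ge 8 ] || echo "WARNING: need 8 CPU cores, found $cores"
[ "$mem_kb" -ge $((8 * 1024 * 1024)) ] || echo "WARNING: need 8 GB RAM"
[ "$disk_gb" -ge 50 ] || echo "WARNING: need 50 GB free on /, found ${disk_gb}G"
```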

Update

Make sure that your system is up to date:

[staf@openstack ~]$ sudo yum update -y

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for staf: 
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base:
 * extras:
 * updates:
base                                                                                                                                    | 3.6 kB  00:00:00     
extras                                                                                                                                  | 3.4 kB  00:00:00     
updates                                                                                                                                 | 3.4 kB  00:00:00     
No packages marked for update
[staf@openstack ~]$ 

Install git

We’ll need git to install the Ansible playbooks and the OpenStack-Ansible installation scripts.

[staf@openstack ~]$ yum install git
Loaded plugins: fastestmirror
You need to be root to perform this command.
[staf@openstack ~]$ sudo yum install git
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base:
 * extras:
 * updates:
Package git- already installed and latest version
Nothing to do
[staf@openstack ~]$ 


This is a bit of a pitfall… The OpenStack-Ansible bootstrap script will download and install its own version of Ansible and create links in /usr/local/bin, so /usr/local/bin must be in your $PATH. Ansible shouldn’t be installed on your system; or if it is installed, it shouldn’t be executed instead of the Ansible version that is built by OpenStack-Ansible.

Most GNU/Linux distributions have /usr/local/bin and /usr/local/sbin in the $PATH, but CentOS doesn’t, so we’ll need to add it.

Make sure that Ansible isn’t installed:

[staf@openstack ~]$ sudo rpm -qa | grep -i ansible
[sudo] password for staf: 
[staf@openstack ~]$ 

Update your $PATH

[root@openstack ~]# export PATH=/usr/local/bin:$PATH

If you want to have /usr/local/bin in your $PATH permanently, update /etc/profile or $HOME/.profile.
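One way to persist the change, assuming a bash login shell that reads ~/.bash_profile (adjust for your own setup):

```shell
# Put /usr/local/bin first so the bootstrap-installed ansible links win
# over any distro package, and persist it for future login shells.
echo 'export PATH=/usr/local/bin:$PATH' >> "$HOME/.bash_profile"
export PATH=/usr/local/bin:$PATH
```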

SSH password authentication

The Ansible playbooks will disable PasswordAuthentication, so make sure that you can log in with an SSH key. - Password authentication is obsolete anyway -
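A minimal sketch of setting up key-based login before the playbooks lock out passwords. The key path and the host/user names below ("staf@openstack", matching the prompts in this post) are illustrative:

```shell
# Generate an ed25519 key pair if none exists yet, so the playbooks
# can't lock you out once password logins are disabled.
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
[ -f "$HOME/.ssh/id_ed25519" ] || ssh-keygen -q -t ed25519 -N '' -f "$HOME/.ssh/id_ed25519"
# Then copy the public key to the AIO host (run interactively):
#   ssh-copy-id -i ~/.ssh/id_ed25519.pub staf@openstack
```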


The firewall is enabled on CentOS by default, and the default iptables rules prevent communication between the OpenStack containers.

Stop and disable firewalld

[root@openstack ~]# systemctl stop firewalld
[root@openstack ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.


[root@openstack ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
[root@openstack ~]# 

Openstack installation

The installation will take some time, so it’s recommended to use a session manager like tmux or GNU screen.


git clone

clone the openstack-ansible git repo

[root@openstack ~]# git clone /opt/openstack-ansible
Cloning into '/opt/openstack-ansible'...
remote: Counting objects: 67055, done.
remote: Compressing objects: 100% (32165/32165), done.
remote: Total 67055 (delta 45474), reused 52564 (delta 32073)
Receiving objects: 100% (67055/67055), 14.60 MiB | 720.00 KiB/s, done.
Resolving deltas: 100% (45474/45474), done.
[root@openstack ~]# 
[root@openstack ~]# cd /opt/openstack-ansible
[root@openstack openstack-ansible]# 

Choose your OpenStack release

OpenStack has a release schedule of roughly every 6 months; the current stable release is Rocky. Every OpenStack release has its own branch in the git repo, and each OpenStack-Ansible release is tagged in the git repo. So you’ll need to check out either an OpenStack-Ansible release tag or a branch. We’ll check out the Rocky branch.

get the list of branches

[root@openstack openstack-ansible]# git branch -a
* master
  remotes/origin/HEAD -> origin/master
[root@openstack openstack-ansible]# 
checkout the branch
[root@openstack openstack-ansible]# git checkout stable/rocky
Branch stable/rocky set up to track remote branch stable/rocky from origin.
Switched to a new branch 'stable/rocky'
[root@openstack openstack-ansible]# 

Bootstrap ansible

Execute scripts/bootstrap-ansible; this will install the required packages and the Ansible playbooks.

[root@openstack openstack-ansible]# scripts/
+ export HTTP_PROXY=
+ export HTTPS_PROXY=
+ export ANSIBLE_PACKAGE=ansible==2.5.14
+ ANSIBLE_PACKAGE=ansible==2.5.14
+ export ANSIBLE_ROLE_FILE=ansible-role-requirements.yml
+ ANSIBLE_ROLE_FILE=ansible-role-requirements.yml
+ export SSH_DIR=/root/.ssh
+ SSH_DIR=/root/.ssh
+ export DEBIAN_FRONTEND=noninteractive
+ DEBIAN_FRONTEND=noninteractive
+ '[' false == true ']'
+ echo 'System is bootstrapped and ready for use.'
System is bootstrapped and ready for use.
[root@openstack openstack-ansible]# 


scripts/bootstrap-ansible created /opt/ansible-runtime and created and updated /usr/local/bin with a few links.

[root@openstack openstack-ansible]# ls -ld /opt/*
drwxr-xr-x.  5 root root   56 Jan 12 11:42 /opt/ansible-runtime
drwxr-xr-x. 14 root root 4096 Jan 12 11:43 /opt/openstack-ansible
[root@openstack openstack-ansible]# ls -ltr /usr/local/bin/
total 8
lrwxrwxrwx. 1 root root   32 Jan 12 11:43 ansible -> /usr/local/bin/openstack-ansible
lrwxrwxrwx. 1 root root   39 Jan 12 11:43 ansible-config -> /opt/ansible-runtime/bin/ansible-config
lrwxrwxrwx. 1 root root   43 Jan 12 11:43 ansible-connection -> /opt/ansible-runtime/bin/ansible-connection
lrwxrwxrwx. 1 root root   40 Jan 12 11:43 ansible-console -> /opt/ansible-runtime/bin/ansible-console
lrwxrwxrwx. 1 root root   39 Jan 12 11:43 ansible-galaxy -> /opt/ansible-runtime/bin/ansible-galaxy
lrwxrwxrwx. 1 root root   36 Jan 12 11:43 ansible-doc -> /opt/ansible-runtime/bin/ansible-doc
lrwxrwxrwx. 1 root root   42 Jan 12 11:43 ansible-inventory -> /opt/ansible-runtime/bin/ansible-inventory
lrwxrwxrwx. 1 root root   32 Jan 12 11:43 ansible-playbook -> /usr/local/bin/openstack-ansible
lrwxrwxrwx. 1 root root   37 Jan 12 11:43 ansible-pull -> /opt/ansible-runtime/bin/ansible-pull
lrwxrwxrwx. 1 root root   38 Jan 12 11:43 ansible-vault -> /opt/ansible-runtime/bin/ansible-vault
-rw-r--r--. 1 root root 3169 Jan 12 11:43 openstack-ansible.rc
-rwxr-xr-x. 1 root root 2638 Jan 12 11:43 openstack-ansible

Verify that the ansible command is the one installed by the OpenStack-Ansible bootstrap script.

[root@openstack openstack-ansible]# which ansible
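A quick way to check which ansible wins in your $PATH. This is a sketch; it only inspects the resolved path:

```shell
# If ansible resolves under /usr/local/bin, the bootstrap-installed links
# are being used; anything else (e.g. /usr/bin/ansible) means a distro
# package would shadow them.
case "$(command -v ansible)" in
  /usr/local/bin/*) echo "OK: using the OpenStack-Ansible links" ;;
  "")               echo "ansible not found in PATH (bootstrap not run yet?)" ;;
  *)                echo "WARNING: $(command -v ansible) comes first in PATH" ;;
esac
```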

Bootstrap AIO

[root@openstack openstack-ansible]# scripts/
+++ dirname scripts/
++ readlink -f scripts/..
+ export OSA_CLONE_DIR=/opt/openstack-ansible
TASK [Gathering Facts] *****************************************************************************************************
ok: [localhost]

TASK [sshd : Set OS dependent variables] ***********************************************************************************
ok: [localhost] => (item=/etc/ansible/roles/sshd/vars/RedHat_7.yml)

TASK [sshd : OS is supported] **********************************************************************************************
ok: [localhost] => {
    "changed": false, 
    "msg": "All assertions passed"

TASK [sshd : Install ssh packages] 
EXIT NOTICE [Playbook execution success] **************************************
+ popd
[root@openstack openstack-ansible]# 

Run the playbooks

We’ll need to run a few playbooks to set up the containers and our OpenStack environment.

Move to the openstack-ansible playbook directory.

[root@aio1 ~]# cd /opt/openstack-ansible/playbooks/
[root@aio1 playbooks]# pwd
[root@aio1 playbooks]# 

and execute the playbooks:

[root@openstack playbooks]# openstack-ansible setup-hosts.yml
[root@openstack playbooks]# openstack-ansible setup-infrastructure.yml
[root@aio1 playbooks]# openstack-ansible setup-openstack.yml

If all goes well, your OpenStack installation is complete.

You can list the OpenStack containers with lxc-ls:

[root@aio1 playbooks]# lxc-ls --fancy
NAME                                   STATE   AUTOSTART GROUPS            IPV4                                           IPV6 
aio1_cinder_api_container-c211b759     RUNNING 1         onboot, openstack,,  -    
aio1_galera_container-9a90cbd9         RUNNING 1         onboot, openstack,                  -    
aio1_glance_container-c05aab79         RUNNING 1         onboot, openstack,, -    
aio1_horizon_container-81943ba2        RUNNING 1         onboot, openstack,                  -    
aio1_keystone_container-a5859104       RUNNING 1         onboot, openstack,                   -    
aio1_memcached_container-ab998d0e      RUNNING 1         onboot, openstack,                  -    
aio1_neutron_server_container-439aeb90 RUNNING 1         onboot, openstack,                  -    
aio1_nova_api_container-c83e5ef0       RUNNING 1         onboot, openstack,                  -    
aio1_rabbit_mq_container-4fd792fb      RUNNING 1         onboot, openstack,                    -    
aio1_repo_container-b39d88a1           RUNNING 1         onboot, openstack,                 -    
aio1_utility_container-fff0b6df        RUNNING 1         onboot, openstack,                  -    
[root@aio1 playbooks]# 

Find the correct IP address

You should see haproxy listening on port 443 (it fronts Horizon) with netstat:

[root@aio1 ~]# netstat -pan | grep -i 443
tcp        0      0*               LISTEN      12908/haproxy       
tcp        0      0*               LISTEN      12908/haproxy       
unix  3      [ ]         STREAM     CONNECTED     73443    31134/tmux           
unix  2      [ ]         DGRAM                    1244303  23435/rsyslogd       
[root@aio1 ~]# 

Log on to the OpenStack GUI (Horizon)

The admin password is stored in /etc/openstack_deploy/user_secrets.yml:

[root@aio1 ~]# grep keystone_auth_admin_password /etc/openstack_deploy/user_secrets.yml
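A small helper (hypothetical, not part of OpenStack-Ansible) to read one secret out of a "key: value" style file like user_secrets.yml without opening it in an editor:

```shell
# get_secret KEY FILE: print the value for KEY from a "key: value" file.
get_secret() {
  awk -F': *' -v k="$1" '$1 == k {print $2}' "$2"
}

# Only attempt it when the deployment file actually exists.
if [ -f /etc/openstack_deploy/user_secrets.yml ]; then
  get_secret keystone_auth_admin_password /etc/openstack_deploy/user_secrets.yml
fi
```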

Have fun


As promised 3 months ago, Gabe, Mateu and I together with twelve other contributors shipped support for revisions and file uploads today!

What happened since last month? In a nutshell:


JSON:API 2.1 follows two weeks after 2.0.

Work-arounds for two very common use cases are no longer necessary: decoupled UIs that are capable of previews and image uploads.⁴

  • File uploads work similarly to Drupal core’s file uploads in the REST module, with the exception that a simpler developer experience is available when uploading files to an entity that already exists.
  • Revision support is for now limited to retrieving the working copy of an entity using ?resourceVersion=rel:working-copy. This enables the use case we hear about the most: previewing draft Nodes.⁵ Browsing all revisions is not yet possible due to missing infrastructure in Drupal core. With this, JSON:API leaps ahead of core’s REST API.

Please share your experience with using the JSON:API module!

  1. This was in the making for most of 2018, see the SA for details. ↩︎

  2. Note that usage statistics on are an underestimation! Any site can opt out from reporting back, and composer-based installs don’t report back by default. ↩︎

  3. Which we can do thanks to the tightly managed API surface of the JSON:API module. ↩︎

  4. These were in fact the two feature requests with the highest number of followers. ↩︎

  5. Unfortunately only Node and Media entities are supported, since other entity types don’t have standardized revision access control. ↩︎

January 18, 2019

On lambs in sheep's clothing

I distinctly remember when I realized the tech community was undergoing a significant cultural shift. It turned out to be quite prophetic for western culture. It was late 2013, at a small Canadian conference, where the closing keynote had just concluded with a massive standing ovation. It was clear the talk had resonated immensely. Yet there I was, still seated, profoundly uncomfortable, not just because of what I'd heard, but because I felt alone in my quiet dissent. Why did nobody else in the room see what was wrong here?

It's impossible to explain why without describing the actual content in detail. However, even years later I expect that criticizing this talk will not be well received by some, which is odd, considering the entire purpose of public speaking is to reach a wide audience and invite a dialogue. If someone is speaking only in order to be listened to, that's called a sermon, not a talk.

What was said was on the surface mostly unobjectionable. A young woman had just told us the trials of her early working experience. How she'd thrown herself into the job and forgotten to come up for air, neglecting herself in the process. How the focus on work performance left little room for emotional and social well-being, which were equally important. And how she'd overcome all that to bring her there, her first tech conference talk ever.

None of that bothered me. It's a common story, and it's important that young starters hear it. They are often so used to the school system, where the route and the calendar are fixed in advance, that they struggle to adapt to a work environment. You have to find your own way, and set yourself a reasonable, sustainable pace for an indefinite duration.

That's not how she explained it, though. Instead she blamed the expectations to do as you're told, to behave "professionally," and to keep personal problems out of the workplace. These were, supposedly, misconceptions about how a workplace functions best. Instead, you should seek out the "good life": a place where you feel at home, with people you love, doing your own work, with purpose. That was what she needed to create an environment where she felt "safe."

On paper that sounds rather nice. The problem was that she absolutely did not follow her own advice.

Het Lam Gods

Het Lam Gods (The Adoration of the Mystic Lamb), by the Van Eyck brothers, 1432 (le wik)

She proclaimed that you shouldn't let yourself sink into insecurity, but opened her talk by stressing how nervous she was, and asked the audience to close their eyes for a moment, which didn't help. So she asked some volunteers to climb on stage and do something silly, to break the ice.

She explained that she absolutely did not want to feel needy, "the most disgusting feeling ever," but as part of her presentation held a video call with her colleagues, who stood ready a thousand kilometers away to offer "moral support." Afterwards she had trouble getting her slides back up, until an organizer jumped in.

She emphasized how important it was to let people be people, with all their diverse skills, but didn't seem to realize that the supposedly stifling professionalism she rejected is precisely a framework for bridging even irreconcilable differences, in pursuit of a common goal.

She said the tech industry contains many of the most talented and privileged people in society, who can "make gold out of nothing," but criticized the idea that rationality matters more than emotionality. Yet that is exactly what lifted humanity out of the middle ages, so that she could actually be standing there.

She praised the emotional intelligence of self-control, perseverance and self-awareness, but failed to actually demonstrate any of it, as she spilled her feelings everywhere, immediately leaned on others for support and help, and didn't seem to realize she was completely undermining her own speech. She acted as if criticism of emotionality was mere self-denial, coming from people who couldn't be honest with themselves, but used it in exactly that way, as a shield.

Above all, it didn't seem very credible for her to judge people over unearned privilege, when the role she was playing was precisely that of the innocent, wounded little lamb, a role she embraced with gusto. The dream job she wanted would be the most privileged of all: one with no meaningful differences of opinion or character, where her colleagues would validate her in every possible way. A job she now finally had.

And look, if someone wants to phone home for a moment to make a point, that's fine, and everyone knows a live demo is a magnet for technical problems. But would it really be so great if everyone else also stood trembling like a reed on stage, needing a push halfway through, like a helpless child? Don't lecture me about presentation stress: I've given talks consisting of nothing but live demos, in front of an audience of a few hundred people and a camera, and it's not easy. But that's something you have to overcome yourself, by preparing well and practicing a lot.

So if this talk was so contradictory, why did it resonate so much? Well, because she hit all the right notes. She was a young mother showing pictures of her daughter, to whom she wanted to show how a good person should behave. There was a hard-fought struggle and a redemption: a lost sheep emerging from the cold, dark desert into a warm oasis of love and support. She also talked about the internalized sexism she had let go of, and how important inclusivity and diversity were. Back then you could still say that without it registering in any special way, but today those words are loaded with moral virtue, and with the fight against evil.

I couldn't quite articulate it at the time, but afterwards I understood what had happened: this was not a technical talk, this was a secular religious experience. I was reminded of it recently when I saw a clip of James Lindsay, who explained that despite the decline of formal religion in the West, all sorts of people are still searching for it, and express religious practice in other ways. I readily believe it, because this was a foretaste, with successful conversions galore. A place where you feel at home, with people you love, doing your own work, with an important purpose: that sounds like an ideal church to me.

While I'm obviously not very generous in this assessment, my intention is not to slander this individual personally. That's why I've left out the link. I hope she knows better by now, although there have been dozens or even hundreds of similar talks since. This phenomenon of sacralized secularity is ubiquitous, and I simply want to show that it does indeed happen, in the way that seems best to me.

While writing this piece, another video appeared, covering Evergreen State College: a distasteful mix of organizational politics and collective guilt that set off a major scandal. Professor Bret Weinstein, who stood in the eye of the storm together with his wife Heather Heying, recounts that he too felt like one of the few who realized that dishonest and manipulative pressure was being applied to sell a moral ideology. And that many who publicly supported this practice admitted behind closed doors that they didn't dare resist it.

This is why it's so dangerous to prefer emotionality over rationality: you end up with pleasant-sounding platitudes instead of reality. It's why fantasies of universal harmony and mental safety are pure utopia: they deny the fundamental differences and conflicts that exist. Skeptics are allergic to it and see right through it, and my heart did not fill itself with the love of Jesus.

Shrine to Grace Hopper

The Shrine to Grace Hopper, 35C3, 2018
Is it still ironic if you're no longer allowed to laugh about it?

Five years later, when I look at the main effects of secular-religious practice on the tech industry, it is precisely to enable the same dysfunction that made traditional religion persona non grata. Under the guise of building a better community and correcting important "inequities," these clerics apparently often believe the end justifies the means, and that they don't have to follow their own commandments. You now often hear that top-down control is needed to ensure the safety of the users, like the shepherd and his flock. There is now infrastructure to make this easy at a worrying scale, ripe for abuse, along with default policies to enable it.

In the best case this leads to a slow decay of free speech. Artificial intelligence and surveillance scripts know no nuance, and can trivially harm innocent bystanders. The risk of losing your entire social network on one of the few popular platforms creates an enormous chilling effect. This is made worse by how easy it is to file coordinated reports and run public shaming and smear campaigns. So everyone is guilty until proven innocent. In the worst case it creates moral cover for sociopathy, with plausible deniability of censorship through shadowbans, demonetization and filtered recommendations.

This trickles down to the local level too, because religious practice often includes the idea that it must be applied every day, in every place, and that only then can one lead a virtuous life. When I saw all sorts of conspicuous portraits of "women in tech" at this year's 35C3 congress, and even an actual shrine to Grace Hopper, I couldn't help but see the parallel with the Virgin Mary, and the Immaculate Conception of the software compiler. When a DJ spun his records under an antifa flag, just like the one hanging over the entrance, I did wonder what kind of transcendent ecstasy was being sought on this dance floor. Don't get me wrong, I'm quite happy to let everyone do their own thing, but this rule of tolerance was clearly not followed when national flags were stolen and booths defaced, after enough threatening complaints on the first day that security advised taking them down as a precaution.

This is about much more than just some industry conferences or tough tweets, though, as the Grievance Studies hoax made clear. That same James Lindsay, together with Helen Pluckrose and Peter Boghossian, wrote about ten fake papers in the social humanities last year, complete with all the required citations, and managed to get the majority published in academic journals. The topics included celebrating morbid obesity, inspecting dogs' genitals for signs of "rape culture," chaining up white students in class, and even a rewritten chapter of Mein Kampf, with intersectional feminism swapped in for nazism. They collected numerous accolades and praise for their supposedly "rich and exciting" material, before the cat came out of the bag.

Their aim was to show that what appears to be the production of knowledge is often sophistry instead, covering unsupported ideology with a thin veneer of respectability, in order to "launder ideas." Rather than using academic procedures to distill truth, scholars in these fields use them to confirm their prejudices and manufacture a monoculture.

In response, their opposition — unable to discredit the papers without shooting themselves in the foot — filed an ethics complaint against Boghossian. His employer Portland State University then decided that an audit of indefensible scholarship was effectively research on human subjects without their consent. This attempt to save face only reinforces the finding that procedures are being abused, and simply shoots the messenger.

Faith seems to be a fundamental need for many, evolved to create a strong sense of belonging within a group as a competitive advantage, at the cost of vilifying heretics and apostates. When its worst impulses are not kept in check, rationality is discarded as insufficient, and greater moral purity is pursued with unquestionable zeal. While openly practicing religion is quite old-fashioned today, pretending it no longer exists or matters only leads to bigger problems.

I know perfectly well that "X is a religion" is not exactly a novel insight, but religious war is war, and war, war never changes.

Antifa flag at 35C3

Antifa flag at 35C3 (via Twitter)

The situation in tech is thus just one facet, with clear parallels elsewhere. In their righteous moral crusade against "whiteness" and "nazis," nowhere as present as in religious fever dreams, and with zero self-awareness, this wave of social justice has grown into a disturbing tsunami of groupthink. Apostate heretics are now simply called "trolls" and "alt-right fascists," even when the opposite is true. Lacking a clearly delineated space in which to practice their faith, the believers have taken it to the HR department, their professional network, the academic faculty, the press and politics, forming their own inquisition in the name of Trust and Safety, hunting for harmful ideas.

For me the prime example remains the excommunication of James Damore at Google in 2017, fired not for what he said, but for what some employees and the press read between the lines. They failed to treat the science with the necessary rationality. The mental safety of his colleagues mattered more than his own physical safety, despite the threats he received, and they formed an emotional, hateful mob to drive out the scapegoat. Which is a biblical concept, of course. Just like the innocent lamb of god. These two archetypes were also clearly present in the persecution of Larry Garfield at Drupal that same year, in the name of an imagined victim of sexual abuse whose opinion was effectively never asked.

This domino effect keeps escalating, given the growing central role of technology platforms in society, as recently when crowdfunder Patreon capriciously started banning figures, and again afterwards when competitor SubscribeStar was sabotaged by provider PayPal cutting off payments. The desire to exclude sinful thoughts thus goes so far as to deny all channels of financial support, overriding the autonomy of their individual backers. This is then defended by clearly branding the targets as anti-goodness, and appealing to supposedly superior case-by-case judgment, which in practice is little more than blatant subjectivity.

The net result is that entire categories of people are now effectively made unsafe, wrongly threatened with censorship or discredit, simply to ensure the comfort and status of a short-sighted and spoiled target audience. That same group then regularly claims it is really the white men who are wailing because they are losing their privileges. Which makes me wonder what parallel universe they live in where yesterday's technology was built, and where a sincere interest in computers, video games or science fiction was not grounds for social exclusion. This was a burden nerds of all stripes and colors were saddled with, and it now seems to be repeating itself in a far more deceptive way.

The talk above did pose the question: "what are the negative consequences of not letting people be their full selves at work?" Well, Damore, with his autism, clearly wasn't given the slightest chance to do so, even though he followed the designated procedures. The press coverage simply recycled the same points yet again, in a muddled "conversation" that a vague "we" is supposed to be having, but with which nobody is allowed to disagree. That the tech industry has long served as a "safe space" for neurological diversity is then neatly swept under the rug, in complete contradiction with the supposed inclusivity.

I should add that I'm not so naive as to think this is a purely left-wing phenomenon, since religious conviction usually provokes the same in its opposition. But the question of who started it does matter. The political left happens to control the majority of the cultural and intellectual levers in the West, and it's about time for some accountability. It's also about time to acknowledge how provincial and seasonal the current obsession with Trump is, along with all the accompanying racial bait politics. Americans are currently obsessed with their border, but barely know what happens beyond it. That the same people who fret over cultural appropriation cannot grasp how narrow and imperialistic their own perspective is, is an extremely ironic and sad cherry on top.

Social media seems to be a major driving force here, and I suspect the early symptoms in the tech industry can be attributed to its computer-savvy early adopters. By blurring the line between the personal and the professional, people are encouraged to prefer emotional harmony over objective collaboration. This kind of infantilization was one of the factors Lukianoff and Haidt identified in The Coddling of the American Mind, and is in fact detrimental to social and emotional development. It's a trend that is slowly but surely going global. The world simply cannot be governed as one big jovial village, and whoever tries anyway merely starts a purity spiral in which the truly marginalized eventually become mere collateral damage, while the watchmen themselves run amok.

In fact I'm increasingly convinced that the principal effect of social media is to make cluster B personality disorders (such as narcissism and borderline) socially advantageous. The main weapons a Mean Girl (m/f/nb/rgba) uses are the selective sharing of information and the framing of narratives, and Like and Share are extremely effective levers in the right wrong hands.

Not so long ago, religion was considered personal, and the separation of church and state was valued. When certain private-sector parties govern essential parts of the public sphere, they too should be subject to it. It would really be a lot better if we started appreciating this principle again, because faith is currently being used to throw the ideals of a free society onto the pyre. The fire probably hasn't reached you yet, but for how much longer?

Worse still, a tech industry that was once unshakable in its conviction that censorship was harmful, and that personal freedom was of paramount importance, is now deeply divided over it. This enables far more dangerous leaps, such as the idea that the most important public medium ever must be compromised with backdoors, and policed into its last nook and cranny for misbehavior, for the collective good. They're still at it, one law at a time. In case you didn't know. It might be worth doing something about that.

As for me, I'm still here, working as well as I can. If it's quiet around here, it's only because I'm putting my effort elsewhere, peacefully computing away. Feel free to drop me a line. Just preferably not on Twitter.

On lambs in sheep's clothing

I distinctly remember when I realized the tech community was undergoing a significant cultural shift. It turned out to be quite prophetic for western culture. It was late 2013, at a small conference, where the closing keynote had just concluded with a massive standing ovation. It was clear the talk had resonated immensely. Yet there I was, still seated, profoundly uncomfortable, not just because of what I'd heard, but because I felt alone in my quiet dissent. Why did nobody else see what was wrong here?

It's impossible to explain why without describing the actual content in detail. However, even years later I expect that criticizing this talk will not be well received by some, which is odd, considering the entire purpose of public speaking is to reach a wide audience and invite a dialogue. If someone is speaking only in order to be listened to, that's called a sermon, not a talk.

What was said was on the surface mostly unobjectionable. A young woman had just told us the trials of her early working experience. How she'd thrown herself into the job and forgotten to come up for air, neglecting herself in the process. How the focus on work performance left little room for emotional and social well-being, which were equally important. And how she'd overcome all that to bring her there, her first tech conference talk ever.

That didn't bother me at all. It's a common enough tale, which is important for young professionals to hear. They're so accustomed to conforming to the school system, where the roadmap and schedule is already planned out, they're often unable to adjust to the more fluid and independent environment of the workplace. You have to find your own way, and pacing yourself for the long haul is entirely up to you.

In this case though, that wasn't how she explained it. Instead she blamed the expectations of doing as you're told, behaving "professionally," and checking personal issues at the door. These were misconceptions that people had about what a workplace should be. That instead you should be living the good life, namely being in the place you belong, with people that you love, doing the work that's yours, on purpose. This is what was necessary to create an environment in which she "felt safe."

That all sounds nice on paper. What bothered me was that she was not actually practicing what she was preaching.

Het Lam Gods

The Adoration of the Mystic Lamb, by the brothers Van Eyck, 1432 (le wik)

She proclaimed the virtue of not succumbing to insecurity, yet began by highlighting how nervous she was, and that she needed the audience to close their eyes, which failed to alleviate it. So she asked some volunteers to come on stage to goof off and break the tension for her.

She explained how the last thing she wanted was to feel overly needy, "the grossest feeling ever," yet as part of her presentation had a brief video hangout with her coworkers, who were standing by a thousand miles away to provide live "moral support." After which she struggled to bring back her slides and needed an organizer to step in.

She emphasized the importance of letting people be human beings, with all their diverse and varied abilities, yet did not seem to realize the impossibly stifling professionalism she rejected was actually a framework to allow even irreconcilable differences to be overcome, in service of a common goal.

She talked about how tech features some of the most talented and privileged members of society, who can "create gold out of air," yet criticized a mindset that favors rationality over emotionality. This is the main thing that actually lifted humanity back out of the dark ages to make it possible for her to be there.

A person who praised the emotional intelligence of self-restraint, persistence and self-awareness failed to practice all three, by spilling her feelings out, immediately falling back to others for support and assistance, and not even realizing she was undermining her own points. She seemed to view criticism of emotionality as mere denial, an unwillingness to be honest with oneself, but was using it as a shield in that exact same way.

Most of all, it rang hollow to criticize others for having privilege just for showing up, when her own demeanor seemed to be falling into the role of the innocent, wounded lamb, spinning the yarn for all it was worth. Her notion of her dream job seemed to be exactly the most privileged one, free from significant differences of opinion or character, working only with those who would validate her in all the ways she wanted. A job which she now had.

Now, if a speaker makes a point by phoning home on the spot, that's one thing, and everyone knows live demos are cursed. But would it really be desirable if everyone else walked on stage as a quivering reed, requiring a pat on the back to make it through, flailing along the way? You don't have to tell me about the stress, I have given talks that are live demos from start to finish, in front of hundreds of faces and a camera, and it's not easy. But that's up to me to deal with and mitigate, through careful preparation and practice.

So, if this talk was so self-contradictory, why did it resonate? Well, because it hit all the right notes. She was a young mother showing pictures of her daughter, whom she wanted to teach what it meant to be a good person. There was struggle and redemption, in the form of a lost sheep finding her way out of the cold, dark desert and into a warm oasis of love and support. There was talk of her own internalized sexism, which she let go of, and the importance of inclusion and diversity. Back then those words still passed by with little notice, but today they are completely charged with moral goodness, and fighting evil.

I couldn't really explain it concisely at the time, but I put the pieces together later: this was not a tech talk, it was a secular baptist revival meeting. I was reminded of this by a recent clip of James Lindsay explaining that despite the demise of organized religion in the West, people on all sides are still searching for it, and have started acting out religious practice in other ways. I can believe it, because this was an early taste, creating converts in the audience. A place you belong, with people you love and who love you back, doing important work that's yours, with a purpose, that sounds like the ideal church.

Despite this harsh assessment, I don't intend to malign or single out the speaker, and I did not link to the talk on purpose. I also hope she's grown wiser since, though similar talks have been given dozens or even hundreds of times by now. The phenomenon of sanctifying secular practice is widespread. I merely want to add my voice that this is definitely happening, and this is the best way for me to explain it.

As I was revising this post, another video appeared which goes into the Evergreen State College scandal, demonstrating the unholy mix of organisational politics and collective guilt that lay at the basis of it. Professor Bret Weinstein, at the center of the storm along with his wife Heather Heying, describes the exact same feeling of being one of the few people in the room noticing the dishonest and manipulative pressure being applied to sell a moral ideology. He also describes the people who publicly appeared to support it, but privately admitted to not daring to speak out against it.

This is why favoring emotionality over rationality can be so pernicious: it prefers nice-sounding platitudes over reality. This is why dreams of universal harmony and psychological safety are utopian: it denies the fundamental differences and conflicts that exist. Skeptics have an allergic reaction and can see right through it, and it did not fill my heart with the love of Jesus.

Shrine to Grace Hopper

Shrine to Grace Hopper, 35C3, 2018
Is it still ironic if you're not allowed to laugh at it?

Five years later, if I look at what the main effects of secular-religious practice have been on the industry, it seems to be to enable exactly the same dysfunction that made traditional religion so unwelcome in the first place. Under the guise of creating better communities and addressing severe "inequities," the clergy feels that the ends justify the means, and that it doesn't need to follow its own commandments. It is now commonly argued that top-down control is necessary to ensure the safety of the users, like the shepherd protecting his flock. Tools have been created to make this easy at a frightening scale, ready to be abused, as well as standard policies to enable it.

At best, this leads to a slow decay of free expression. Nuanceless policing bots and scripts make it trivial for innocent bystanders to get hurt. The threat of losing one's social network on just one of the handful of popular platforms creates a severe chilling effect. This is augmented by the ease of concerted flagging and other public shaming campaigns, which create a guilty-until-proven innocent environment. In the worst case, it provides moral cover for sociopathy, as plausibly deniable censorship is enacted through the use of shadow bans, demonetization and filtered recommendations.

This also ripples down to the local scale, as a common theme in religious practice is that it must be applied every day, in every place, in order to live a virtuous life. So when this year's 35C3 congress put up conspicuous portraits of women-in-tech and built an actual shrine to Grace Hopper, I couldn't help but see the analogy to portraits of the Virgin Mary, celebrating the immaculate conception of the software compiler. When a DJ spun his tunes in front of an antifa flag, just like the one hanging over the venue entrance, I wondered exactly what sort of transcendent ecstasy was being sought on this dance floor. I can take a "you do you" approach to this, don't get me wrong, but this rule of tolerance was certainly not followed when people started stealing national flags and scrawling graffiti, after complaints severe enough that security advised taking them down on the first day.

This is about far more than some industry events or edgy tweets though, as shown by the Grievance Studies hoax. The same James Lindsay, along with scholars Helen Pluckrose and Peter Boghossian, wrote over a dozen fake humanities papers last year, with all the right citations, and got several published in peer-reviewed journals. This included papers on celebrating morbid obesity, inspecting the genitals of dogs for signs of rape culture, putting white students in chains in classrooms, and even a rewritten excerpt from Mein Kampf, with intersectionality replacing Nazism. They collected numerous accolades and compliments for their supposedly "rich and exciting" material, before the jig was up.

As they put it, they wanted to demonstrate that what passes for knowledge production is often just sophistry, coating unsupported ideology with a veneer of respectability, as a form of "idea laundering." Instead of using the processes of academia as a crucible for arriving at the truth, scholars in these fields use them to reinforce their preconceptions and manufacture a monoculture.

In response, unable to discredit the papers on their own merits without shooting themselves in the foot, opponents targeted Boghossian with an IRB ethics complaint. His employer Portland State University concluded that a practical audit of indefensible scholarship amounts to doing research on human subjects without their consent. This attempt to save face further reinforces the mockery of process, and merely tries to shoot the messenger.

Religion appears to be a fundamental need for many, which evolved to provide strong in-group cohesion as a competitive advantage, at the expense of demonizing apostates and heretics. When its worst impulses are not contained, rationality is tossed aside as inappropriate, seeking out a greater moral purity with unquestionable zeal, playing as dirty as needed. While the overt practice of faith has now become severely outmoded, it only leads to bigger problems when we pretend it is no longer present or relevant.

I know "X is religion" is not a particularly novel take, but religious war is war, and war never changes.

Antifa Flag at 35C3

Antifa Flag at 35C3 (via Twitter)

As such, the situation in tech is just one facet, with clear parallels elsewhere. Faced with a righteous moral struggle against "whiteness" and "nazis," nowhere near as prevalent in reality as in religious fever dreams, and with zero self-awareness, this wave of social justice has turned into a disturbing tsunami of groupthink. Heretics and apostates are more commonly called hateful trolls and alt-right fascists, even if the opposite is true. Without a clearly delineated space to practice their faith in, believers have instead taken it into HR departments, professional networks, academic faculties, media outlets and political parties, bootstrapping their own inquisition in the name of Trust and Safety, rooting out harmful ideas.

The most notable example for me remains the excommunication of James Damore at Google in 2017, fired not because of what he said, but because of what some of his coworkers and the press read into it. They were unable to treat science with the rationality it requires. The psychological safety of his coworkers was more important than his physical safety, judging by the threats they sent him, as they succumbed to emotionally driven, hateful mob behavior to expel the scapegoat. Which, y'know, is a biblical concept. As is the innocent lamb of god. These two avatars also featured prominently in the Drupal community's persecution of Larry Garfield in the same year, in service of an imagined victim of sexual abuse who was never actually consulted.

The knock-on effects continue to escalate, due to the increasing dominance of tech platforms in society, as shown recently with crowdfunder Patreon's capricious banning of wrongthinkers, and the subsequent sabotage of competitor SubscribeStar when PayPal cut off payments. The desire to exclude sinful thoughts can now go as far as denying all avenues of financial support, violating the autonomy of their individual backers. These acts are then justified by clearly labeling the targets as being anti-goodness, and appealing to the supposed superior human judgement of a case-by-case approach, in practice little more than naked subjectivity.

The net result is that whole categories of people are now made actually unsafe, under constant threat of being censored or falsely discredited, in order to satisfy the mere comfort and status of a shortsighted and pampered demographic. That same group will regularly say that it's actually the white men wailing about losing their privilege. So I have to wonder what parallel universe they imagine yesterday's tech was created in, where taking a serious interest in computers, video games or science fiction was not cause for social ostracism. This was a plight shared by geeks of all colors and varieties, and a pattern which appears to be repeating itself more insidiously.

The talk mentioned above did raise the question, "what about the negative impacts of denying people to be themselves entirely at work?" Well Damore, being autistic, was clearly denied the opportunity of doing any of that at all, despite following the signposted processes. The resulting media coverage then reiterated all its old talking points, in a muddled "conversation" a nebulous "we" need to be having, but which nobody is allowed to disagree with. That tech communities have themselves long served as safe spaces for the neurologically diverse is ignored, directly contradicting the stated aim of inclusion.

I should add, I'm not naive enough to think this is a uniquely left-wing phenomenon, and religious conviction invites the same in return. But in this case, "who started it" does actually matter a lot. The left controls the vast majority of cultural and intellectual levers in the West today, and accountability is long overdue. It's also high time to acknowledge how provincial and seasonal the current American Trump-obsession and its associated race-baiting is. They are preoccupied with the US border, but mostly ignorant of what happens far from it. That the same people who seem extremely concerned about cultural appropriation can't seem to even imagine how narrow and imperialist their perspective really is, is a supremely sad irony cherry on top.

Social media appears to have been a major driver, and I suspect the early warnings in tech can be directly attributed to its tech-savvy early adopters. By blurring the lines between the personal and the professional, it encourages people to focus on emotional harmony over objective cooperation. This kind of infantilization was one of several factors identified in Lukianoff and Haidt's The Coddling of the American Mind and actually harms social and emotional development. It's a trend that's going global. The world simply cannot be run like one big, happy village, and trying to do so regardless only results in a purity spiral where the genuinely marginalized are necessary collateral damage and the watchmen run amok.

In fact, I'm more and more convinced that the main effect of social media has been to make cluster B disorders such as narcissism and borderline personality socially advantageous. The main weapons deployed by a Mean Girl (m/f/nb/rgba) are selective sharing of information and narrative framing, and instant shares and likes are incredible force multipliers in the right wrong hands.

Not so long ago we confined religion to the personal sphere, and valued the separation of church and state. In a time when elements of the private sector control essential parts of the public sphere, that should include them too. It would be significantly better if we'd start valuing this principle again, because faith is now being used to justify burning the ideals of free society at the stake. The flames probably have not reached you yet, but how much longer?

Even worse, a tech community that was once united around treating censorship as damage, and personal freedom as paramount, is now divided on these very issues. It leaves the door open to far more dangerous leaps of faith, like the idea that the greatest public resource ever created should be backdoored and surveilled down to every nook and cranny for misbehavior, for the good of all. It's happening right now, one law at a time. In case you missed it. It might be worth doing something about it.

As for me, I'm still around, getting work done as best I can. If it's quiet around here, that's only because I'm putting efforts elsewhere, coding in peace. Feel free to hit me up. Just not on Twitter.

I did not update my older post when vSphere 6.7 was released. The list is now complete up to vSphere 6.7. Your Linux runs in a VMware VM, but on which ESXi version? You can see for yourself: run "dmidecode" and look at lines 10, 11 and 12.
ESX 2.5 - BIOS Release Date: 04/21/2004 - Address 0xE8480 - Size 97152 bytes
ESX 3.0 - BIOS Release Date: 04/17/2006 - Address 0xE7C70 - Size 99216 bytes
ESX 3.5 - BIOS Release Date: 01/30/2008 - Address 0xE7910 - Size 100080 bytes
ESX 4 - BIOS Release Date: 08/15/2008 - Address 0xEA6C0 - Size 88384 bytes
ESX 4U1 - BIOS Release Date: 09/22/2009 - Address 0xEA550 - Size 88752 bytes
ESX 4.1 - BIOS Release Date: 10/13/2009 - Address 0xEA2E0 - Size 89376 bytes
ESXi 5 - BIOS Release Date: 01/07/2011 - Address 0xE72C0 - Size 101696 bytes
ESXi 5.1 - BIOS Release Date: 06/22/2012 - Address: 0xEA0C0 - Size: 89920 bytes
ESXi 5.5 - BIOS Release Date: 07/30/2013 - Address: 0xEA050 - Size: 90032 bytes
ESXi 6 - BIOS Release Date: 09/30/2014 - Address: 0xE9A40 - Size: 91584 bytes
ESXi 6.5 - BIOS Release Date: 04/05/2016 - Address: 0xEA580 - Size: 88704 bytes 
ESXi 6.7 - BIOS Release Date: 07/03/2018 - Address: 0xEA520 - Size: 88800 bytes
NB These DMI properties are set at boot time. Even if your VM gets live-migrated to a host running a different vSphere version, your VM will keep the values it got from the host it booted on. What you see is the vSphere version of the host your VM booted on. It is the VM power-on that matters, so a guest OS reboot will not regenerate the DMI properties. A guest OS shut down on the other hand will also power off the VM, and the next power-on will regenerate the DMI properties.
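For scripting this, dmidecode can also print individual fields directly with `dmidecode -s bios-release-date`, which avoids counting output lines. Below is a small sketch that maps the reported date to the table above; the helper function name is mine, and the lookup is only as complete as the table in this post:

```shell
# Sketch: map the BIOS release date reported by dmidecode to the ESXi
# version of the host the VM was powered on (table from the post above).
esxi_from_bios_date() {
    case "$1" in
        07/03/2018) echo "ESXi 6.7" ;;
        04/05/2016) echo "ESXi 6.5" ;;
        09/30/2014) echo "ESXi 6"   ;;
        07/30/2013) echo "ESXi 5.5" ;;
        06/22/2012) echo "ESXi 5.1" ;;
        01/07/2011) echo "ESXi 5"   ;;
        *)          echo "unknown"  ;;
    esac
}

esxi_from_bios_date "07/03/2018"   # prints "ESXi 6.7"
```

On the VM itself you would run it as root, e.g. `esxi_from_bios_date "$(dmidecode -s bios-release-date)"`.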
We have just published the second set of interviews with our main track and keynote speakers. The following interviews give you a lot of interesting reading material about various topics, from ethics to databases and AI systems: Bradley M. Kuhn and Karen Sandler: Can Anyone Live in Full Software Freedom Today?. Confessions of Activists Who Try But Fail to Avoid Proprietary Software Deb Nicholson: Blockchain: The Ethical Considerations Drew Moseley: Mender - an open source OTA software update manager for IoT Duarte Nunes: Raft in Scylla. Consensus in an eventually consistent database Fernando Laudares: Hugepages and databases. working with abundant…

January 15, 2019

Eighteen years ago today, I released Drupal 1.0.0. What started from humble beginnings has grown into one of the largest Open Source communities in the world. Today, Drupal exists because of its people and the collective effort of thousands of community members. Thank you to everyone who has been and continues to contribute to Drupal.

Eighteen years is also the voting age in the US, and the legal drinking age in Europe. I'm not sure which one is better. :) Joking aside, welcome to adulthood, Drupal. May your day be bug free and filled with fresh patches!

We have this Brother P750W label printer. It's a model with wireless networking only. We wanted something with decent wired networking for use with multiple Debian desktop clients, but that was ~300€ more expensive. So here's how we went about configuring the bloody thing...

Not wanting to use proprietary drivers, this is what the device told me after some prying:
 * The thing speaks Apple Airprint.
 * By default, it shows up as an access point with SSID "DIRECT-brPT-P750WXXXX". XXXX is the last 4 digits of the printer's serial number.
 * Default wireless password "00000000".
 * Default IP address
 * It allows only one client to connect at a time.
 * Its web interface is totally, utterly broken:
   * Pointing a browser at the default device IP redirects to a page that 404's.
   * Pointing a browser at the admin page URL gleaned from cups 404's too.
 * An upgrade to the latest firmware doesn't seem to solve any issues.

As a reminder to myself, this is what I did to get it to work:
 * Get a Debian stable desktop.
 * Add the Debian buster repositories. That is Debian testing at the time of me writing this.
 * Set up apt pinning in /etc/apt/preferences to prefer stable over buster.
 * Upgrade cups related packages to the version from Debian buster. "apt install cups* -t buster" did the trick.
 * Notice this makes the printer get autodiscovered.
 * Watch /etc/cups/printers.conf and /etc/cups/ppd/ for what happens when connected to the P750W. Copy the relevant bits.
 * Get a Debian headless thingie (rpi, olinuxino, whatever...) running Debian stable.
 * Connect it to the printer wifi using wpa_supplicant.
 * Install cups* on it.
 * Drop the printers.conf and P750W ppd from the Debian buster desktop into /etc/cups and /etc/cups/ppd respectively. The only change was in printers.conf, from the avahi autodiscovery url to the default
 * Make sure to share the printer. I don't remember if I had to set that or if it was the default, but the cups web interface on port 631 should help there if needed.
 * Add the new shared printer on the Debian buster desktop. Entering the print server IP auto discovers it. Works flawlessly.
 * Try the same on a Debian stable desktop. Fails complaining about airprint related stuff.
 * Upgrade cups* to the Debian buster version. Apt pinning blah. Apt-listchanges says something about one of the package upgrades being crucial to get some airprint devices to work. Didn't notice the exact package alas and too lazy to get it to run.
 * Install the printer again. Now works flawlessly.
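The apt pinning step above can be sketched roughly like this; the priority values are my assumptions (anything above 500 prefers stable, anything below keeps buster installable on request), so tune to taste:

```shell
# Sketch: prefer Debian stable, but keep buster packages installable on demand.
# Pin-Priority values here are assumptions, not from the original post.
cat > /etc/apt/preferences <<'EOF'
Package: *
Pin: release a=stable
Pin-Priority: 700

Package: *
Pin: release a=buster
Pin-Priority: 200
EOF

apt update
apt install -t buster 'cups*'   # explicitly pull the cups packages from buster
```

With the pin in place, a plain `apt upgrade` keeps tracking stable; only packages you install with `-t buster` come from testing.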

January 11, 2019

And the transition to a permanent soft disconnection

Without really noticing it, I have arrived at the end of my disconnection (you can find all the posts about it here). A symbolic date that called for taking stock. First of all by removing my filter and taking a tour of the now-abhorred social networks.

Not that I really wanted to, but more out of curiosity, to see how it would feel and to check whether I had missed anything. You might think I was impatient but, against all expectations, I had to force myself. In the name of science, for the completeness of the experiment! I do not miss these sites; quite the opposite. I didn't feel I was missing anything important and, even if I had been, deep down I was perfectly fine with it.

My first impression was that of arriving late at a rather dreary party. You know, the kind of party you show up at stressed about having missed the best part, only to realize that everyone seems bored stiff.

Oh sure, there were comments on my posts, some of them interesting (I didn't read everything, just glanced quickly at the most recent ones). I had plenty of notifications, and hundreds of connection requests on Linkedin (which I accepted).

But in the end, nothing that made me want to come back. On the contrary, I felt nauseous, like a sugar addict wolfing down an entire chocolate cake after three months of dieting.

What is even more striking is that this half hour of catching up on social media obsessed me for several hours afterwards. I wanted to go and check things, I kept thinking about what I had seen scroll by, I wondered what I should reply to this or that comment. My mind was once again completely cluttered.

I have to face the facts: I am not capable of using social networks in a healthy way. I am too susceptible to their subliminal messages, to their addiction tactics.

To be completely honest with myself, I must admit that, technically, I did not fully respect my disconnection. I relaxed some of the initial rules by "unblocking" Slack, for professional reasons, and Reddit. I also quite often had to disable my filters to access a link someone sent me on Twitter, to look up the details of a professional contact on Linkedin, or even to read an article from the mainstream press that had been sent to me. But that doesn't matter. The goal was never to become "pure" but to regain control over my use of the Internet. Each time, my filters stayed disabled only for as long as strictly necessary to load the offending page.

One anecdote illustrates my disconnection well: during a family meal, the conversation turned to the gilets jaunes (the yellow vests). I had never heard of them. After a few seconds of astonishment at my ignorance, it was explained to me and, that very evening, I read the Wikipedia page on the subject.

Wikipedia, which turned out to be an extraordinary disconnection tool. Its home page has a small section on current news and ongoing events. I concluded that if an event is not on Wikipedia, it is not really important.

While not being informed frees up mental space and seems to have no harmful consequences, it is dramatic to see just how addicted my brain is. In front of a screen, it wants to receive information, whatever it may be. When I procrastinate, I find myself hunting for anything that could bring me news without disabling my blocker.

That, I think, is why my visits to Reddit (initially used only to ask questions in certain subreddits) have become more frequent (without becoming invasive, though something to keep an eye on). I also check my RSS reader every day (fortunately, it is not on my phone) but truly useful feeds are rare. Social networks had trained me to take an interest in anything and everything. With RSS, I have to choose sites that post things I find interesting over the long run and that don't drown them in marketing noise.

Another important effect of these three months of disconnection is the beginning of a detachment from my need for immediate recognition. Beyond likes on the networks, I realize that giving free talks or appearing in the media brings me little or nothing in return for a lot of effort, travel and fatigue. Amusingly, I have already received quite a few requests to talk about my disconnection in the media (all of which, so far, I have declined). My ego is still there, but it now wants to be recognized over the long term, which requires a deeper investment rather than mere media appearances. Besides, between you and me, declining a media request is even more gratifying for the ego than accepting one.

I have also come to realize that, contrary to what Facebook tries to instill, my blog is not a business. I do not have to reply to messages within 24 hours (something Facebook strongly encourages). I am entitled to answer only emails and not to have to log in to various proprietary messaging systems. I am entitled to miss opportunities. I am a human being who shares some of his experiences through writing. Everyone is free to read, copy, share, draw inspiration from, or even contact or support me. But I am free not to be the customer service department of my own writing.

The conclusion of all this is that, with the three months over, I have no desire to end my disconnection. My life today, without Facebook or the news media, seems better to me. Once every two or three days, I disable my filter to see whether I have notifications on Mastodon, Twitter or Linkedin, but I don't even feel like looking at the feed. I read things that interest me thanks to RSS, I dive with delight into the books that were waiting on my shelf, and I have many enriching conversations by email.

Why would I ever leave my hermitage?

Photo by Jay Mantri on Unsplash

I am @ploum, a disconnected lecturer and electronic writer, paid what you choose on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

January 10, 2019

LOADays 2019 is a GO and will be held on the 27th and 28th of April 2019 in Antwerp, Belgium. We'll be opening the CfP shortly.

January 09, 2019

During the holidays we have performed some interviews with main track speakers from various tracks. To get up to speed with the topics discussed in the main track talks, you can start reading the following interviews: Chris Brind: Open Source at DuckDuckGo. Raising the Standard of Trust Online Daniel Stenberg: DNS over HTTPS - the good, the bad and the ugly. Why, how, when and who gets to control how names are resolved Joe Conway: PostgreSQL Goes to 11! Juan Linietsky: Making the next blockbuster game with FOSS tools. Using Free Software tools to achieve high quality game visuals. Richard…
With FOSDEM just around the corner, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch. Food will be provided. Would you like to be part of the team that makes FOSDEM tick? Sign up here! You…

January 08, 2019

Last year, I talked to nearly one hundred Drupal agency owners to understand what is preventing them from selling Drupal. One of the most common responses raised is that Drupal's administration UI looks outdated.

This critique is not wrong. Drupal's current administration UI was originally designed almost ten years ago, when we were working on Drupal 7. In the last ten years, the world did not stand still; design trends have changed, user interfaces have become more dynamic, and end-user expectations have changed with them.

To be fair, Drupal's administration UI has received numerous improvements in the past ten years; Drupal 8 shipped with a new toolbar, an updated content creation experience, more WYSIWYG functionality, and even some design updates.

A comparison of the Drupal 7 and Drupal 8 content creation screens, highlighting some of the improvements in Drupal 8.

While we made important improvements between Drupal 7 and Drupal 8, the feedback from the Drupal agency owners doesn't lie: we have not done enough to keep Drupal's administration UI modern and up-to-date.

This is something we need to address.

We are introducing a new design system that defines a complete set of principles, patterns, and tools for updating Drupal's administration UI.

In the short term, we plan on updating the existing administration UI with the new design system. Longer term, we are working on creating a completely new JavaScript-based administration UI.

The content administration screen with the new design system.

As you can see, community feedback on the proposal is overwhelmingly positive, with comments like "Wow! Such an improvement!" and "Well done! High contrast and modern look.".

Sample space sizing guidelines from the new design system.

I also ran the new design system by a few people who spend their days selling Drupal and they described it as "clean" with "good use of space" and a design they would be confident showing to prospective customers.

Whether you are a Drupal end-user, or in the business of selling Drupal, I recommend you check out the new design system and provide your feedback.

Special thanks to Cristina Chumillas, Sascha Eggenberger, Roy Scholten, Archita Arora, Dennis Cohn, Ricardo Marcelino, Balazs Kantor, Lewis Nyman, and Antonella Severo for all the work on the new design system so far!

We have started implementing the new design system as a contributed theme with the name Claro. We are aiming to release a beta version for testing in the spring of 2019 and to include it in Drupal core as an experimental theme by Drupal 8.8.0 in December 2019. With more help, we might be able to get it done faster.

Throughout the development of the refreshed administration theme, we will run usability studies to ensure that the new theme indeed is an improvement over the current experience, and we can iteratively improve it along the way.

Administration themes must meet large and varied use cases. For example, accessibility is critical for the administration experience, and I'm happy to see that this initiative has connected with the accessibility team and taken their feedback into account.

Acquia has committed to being an early adopter of the theme through the Acquia Lightning distribution, broadening the potential base of projects that can test and provide feedback on the refresh. Hopefully other organizations and projects will do the same.

How can I help?

The team is looking for more designers and frontend developers to get involved. You can attend the weekly meetings on #javascript on Drupal Slack Mondays at 16:30 UTC and on #admin-ui on Drupal Slack Wednesdays at 14:30 UTC.

Thanks to Lauri Eskola, Gábor Hojtsy and Jeff Beeman for their help with this post.

January 07, 2019

Tidelift, which provides organizations with commercial-grade support for Open Source software and pays part of the proceeds to software maintainers, raises a $25 million Series B. I hadn't heard about Tidelift before, but it turns out their office is 300 meters from Acquia's. I reached out and we're going to grab a coffee soon.

Mateu, Gabe and I just released JSON:API 2.0!

Read more about it on Mateu’s blog.

I’m proud of what we’ve achieved. I’m excited to see more projects use it. And I’m confident that we’ll be able to add lots of features in the coming years, without breaking backwards compatibility. I was blown away just now while generating release notes: apparently 63 people contributed. I never realized it was that many. Thanks to all of you :)

I had a bottle of Catalan Ratafia (which has a fascinating history) waiting to celebrate the occasion. Why Ratafia? Mateu is the founder of this module and lives in Mallorca, in Catalunya. Txin txin!

If you want to read more about how it reached this point, see the July, October, November and December blog posts I did about our progress.

January 03, 2019

I published the following diary on “Malicious Script Leaking Data via FTP”:

The last day of 2018, I found an interesting Windows cmd script which was uploaded from India (SHA256: dff5fe50aae9268ae43b76729e7bb966ff4ab2be1bd940515cbfc0f0ac6b65ef) with a very low VT score. The script is not obfuscated and contains a long list of commands based on standard Windows tools. Here are some examples… [Read more]

[The post [SANS ISC] Malicious Script Leaking Data via FTP has been first published on /dev/random]

January 02, 2019

Well, an intense 2018: traveling and climbing with old and new friends, making steady progress in climbing level, learning new techniques, practising and refining them, both in climbing and professionally:
  • Config management Camp in Ghent
  • Climbing in Gorges du Tarn and Gorges de la Jonte with Vertical Thinking, including two multipitches (Le jardin enchanté, and diagonal du Gogol)
  • Climbing in Fontainebleau with Alex and Tom, doing lots of yellow, orange and blue routes in L'éléphant, Apremont and Roche aux sabots.
  • Climbing trip to Ettringen, practicing some trad techniques, first time climbing Basalt
  • Climbing trip with Vertical Thinking to Guillestre (Haut Val Durance), doing 2 nice multipitches (4-5 pitches including a 6a)
  • Climbing day with Koen and Wouter in Moha
  • Visiting Romania (Moldova and Transilvania regions) with Eduard and Ecatarina: Lady's Rock, Vatra Dornei, Dochia Cabin and Toaca Peak, Transfagarasan, Sighisoara, Bran Castle, Brasov, Iasi
  • Short citytrip in Vienna (during a 22h layover between two flights)
  • Multipitch climbing with Koen in Yvoir, also exploring Anhée
  • Climbing training working towards 6c.
  • Percona Live Europe in Frankfurt
  • Climbing trip to Siurana with Rouslan and Rat: leading my first 6b (redpoint), 6b+ toprope and projecting a 7a
  • Quick visit to Barcelona
  • Visiting the 35C3 conference in Leipzig, another 4 days of infosec, IT, technology and science. First year as an 'angel', volunteering to help with some tasks at the conference. Unfortunately, I was bound to my hotel room due to illness for half of the conference. Luckily, all talks are recorded and streamed, so I could follow a few from my bed.
  • Spending New Year's Eve in a doctor's office and looking for a pharmacy, due to earlier mentioned illness.

Plans for 2019:
- more climbing: training for 6c/7a, fall-training to get more comfortable while leading, a climbing trip to Buis-les-Baronnies in April, maybe a trip to Boulder, Colorado, possibly an alpine experience in the summer or a climbing trip on the US west coast in autumn
- continue Rock maintenance with Belgian Rebolting Team.
- find a new house, preferably with a small garden
- conferences: FOSDEM, Config Management Camp, Percona Live (Austin, Texas), Percona Live Europe, 36C3
- first time Rock Werchter (Tool is coming)

January 01, 2019

Our keyserver is now accepting submissions for the FOSDEM 2019 keysigning event. The annual PGP keysigning event at FOSDEM is one of the largest of its kind. With more than one hundred participants every year, it is an excellent opportunity to strengthen the web of trust. For instructions on how to participate in this event, see the keysigning page. Key submissions close on Wednesday 23 January, to give us some time to generate and distribute the list of participants. Remember to bring a printed copy of this list to FOSDEM.

December 31, 2018

Last week was my twelfth Drupalversary!

The first half dozen years as a volunteer contributor/student, the second half as a full-time contributor/Acquia employee. Which makes this a special Drupalversary and worth looking back on :)


The d.o highlights of the first six years were my Hierarchical Select and CDN modules. I started those in my first year or so of using Drupal (which coincides with my first year at university). They led to a summer job for Mollom, working with/for Dries remotely — vastly better than counting sandwiches or waiting tables!

It also resulted in me freelancing during the school holidays: the Hierarchical Select module gained many features thanks to agencies not just requesting but also sponsoring them. I couldn’t believe that companies thousands of kilometers away would trust a 21-year old to write code for them!

Then I did my bachelor thesis and master thesis on Drupal + WPO (Web Performance Optimization) + data mining. To my own amazement, my bachelor thesis (while now irrelevant) led to freelancing for the White House and an internship with Facebook.

Biggest lesson learned: opportunities are hiding in unexpected places! (But opportunities are more within reach to those who are privileged. I had the privilege to do university studies, to spend my free time contributing to an open source project, and to propose thesis subjects.)


The second half was made possible by all of the above and sheer luck.

When I was first looking for a job in early 2012, Acquia had a remote hiring freeze. It got lifted a few months later. Because I’d worked remotely with Dries before (at Mollom), I was given the opportunity to work fully remotely from day one. (This would turn out to be very valuable: since then I’ve moved three times!) Angie and Moshe thought I was a capable candidate, I think largely based on the Hierarchical Select module.
Imagine if the remote hiring freeze had not been lifted, or if I'd written a different module? I was lucky in my past choices and timing.
So I joined Acquia and started working on Drupal core full-time! I was originally hired to work on the authoring experience, specifically in-place editing.
The team of four I joined in 2012 has quadrupled since then and has always been an amazing group of people — a reflection of the people in the Drupal community at large!

Getting Drupal 8 shipped was hard on everyone in the community, but definitely also on our team. We all did whatever was most important; I probably contributed to more than a dozen subsystems along the way. The Drupal 8 achievement I’m most proud of is probably the intersection of cacheability and the render pipeline: Dynamic Page Cache & BigPipe, both of which have accelerated many billions of responses by now. After Drupal 8 shipped, my primary focus has been the API-First Initiative. It’s satisfying to see Drupal 8 do well.

Biggest lessons learned:

  1. code criticism is not personal criticism — not feeling the need to defend every piece of code you’ve written is not only liberating, it also makes you immensely more productive!
  2. always think about future maintainability — having to provide support and backwards compatibility made me truly understand the consequences of mistakes I’ve made.

To many more years with the Drupal community!

2018 is end of life and 2019 will be released soon. Autoptimize 2.5 is not at that point yet, but I just pushed a version to GitHub which adds image lazy loading to Autoptimize.

The actual lazy-loading is implemented by the integrated lazysizes JS lazy loader, which has a lot of options, some of which I will experiment with and bring to Autoptimize to improve the default user experience.

If you want, you can download the beta (2.5.0-beta2) now from Github (disable 2.4.4 before activating the beta) and start using the new functionality immediately. And if you have feedback, shoot; I'll be happy to take your remarks on board to get AO 2.5 ready for release (I'm targeting March, but we'll see).

Enjoy the celebrations and have a great 2019!

Why I am now minimizing my posts on social networks, even if it means losing readers.

Intellectually, I knew that social networks were doing me no good. They had become a reflex rather than a real source of pleasure. No longer checking them was therefore both logical and easy. All I had to do was find the right way to block them, wrap the whole thing under the pompous label of "disconnection", and turn it into blog posts to satisfy my ego while freeing up mental space.

On the other hand, I kept posting to social networks. To keep existing as a blogger, as a public figure. Even if I no longer saw the likes and the comments, I knew they existed. To keep up the rhythm, I posted links to older posts on the days I didn't publish anything new.

My first reason for acting this way is that Facebook's algorithm filters what you see. Even if you "like" my Facebook page, there is barely more than a one-in-ten chance that you will see my latest publication in your feed. I have already noticed that a post that went unnoticed could attract attention on the third or fourth repost. Facebook goes so far as to favor pages that post regularly, and doesn't hesitate to let you know when you haven't published for a while.

On Twitter, the situation is even worse. Most accounts post the same link several dozen times in a single day.

While preparing my social media posts, I even took a mischievous pleasure in rewording the teaser sentence, making it as clickbaity as possible. Without seeming to, I was manipulating you into wanting to read me. I was teasing your curiosity like a good little intern at a big state-subsidized daily newspaper.

In short, in an ultra-noisy world, the only way to get noticed is to make even more noise. However good my arguments, I was adding mental pollution to your environment.

My wife pointed it out to me: "It's a disconnection for show. You know you are being read. You feed the social networks. You act as if you were disconnected because you don't see it directly, but it doesn't matter, because your ego knows that, online, everything goes on as before. It's hypocritical." Indeed, as long as I pollute, my disconnection is pure hypocrisy. It is one-way. A bit like buying organic, local food wrapped in plastic.

Duly noted.

My disconnection has entered a harder phase. It pushes me to explore a facet of my personality I would have preferred not to touch: my ego, my need for public recognition.

Like many creators, I seek recognition, an egotistical quest encouraged by Facebook. Faced with Facebook's toxicity, we look for alternative tools in order to keep existing. Whereas the real question is: "Must we feed our ego at all costs? What is the point of this quest?"

To try to get out of this, I will now feed my social media accounts only in an ultra-minimal way: a simple automatic rule that posts each new entry to my Facebook page, Twitter and Mastodon, without any teaser sentence.

Maybe one day I will delete my accounts entirely. But I am aware that a huge majority of the population doesn't know about RSS, and that Facebook, despite its flaws, is the closest thing to it for them.

From now on, my accounts pollute less. They are content to be factual: a new post has been published. And if that is still too noisy for you, unsubscribe from my page without remorse, use RSS, send my articles straight to Pocket, or visit my page whenever you feel like it.

My audience will of course suffer. Some of you will stop reading me. They won't notice. Neither will I, since I don't measure my audience. I must learn and accept that I am not my audience. That I can write without seeking recognition at all costs. That one loyal reader who reads me regularly is surely worth a thousand people who stumbled onto this page after one of my posts randomly went viral. That compared to the appearance of petty glory, a small number of deep and sincere relationships is priceless. That what social networks offer is only the appearance of an audience that flatters my ego, and at a price where creator and reader alike are the suckers.

Written like that, it sounds beautiful and obvious. But, deep down, I struggle. I seek petty glory, I want to feel recognized.

Seen from the outside, this search for recognition has something pathetic about it. Those who have risen above it give off an impression of wisdom. You can find them at the point where they meet the shy and the fearful, those who spent their whole lives trying to stay discreet before agreeing to take risks and to rise. There, balanced on a thin ridge, are those people you can hear without them having to raise their voice, those sages who look far ahead and whose silences carry as much meaning as thousands of egocentrics shouting themselves hoarse.

Is that what I want to strive for? Is it what I should strive for? Would it be good for me to strive for it? Am I capable of it?

Let's be honest: I am still incapable of "just publishing a post" and then forgetting about it. I worked for days on an idea, I polished it, and then... Nothing. I should immediately move on to something else. But I'm dying to get feedback, to see the post spread, to "check my statistics", to feel that I exist. It's my blogger's drug, in a way.

Starting this detox makes me realize just how full our world is of mental pollution that we ourselves contribute to, both professionally and in our private lives. We use words like "share", "inform" or even "educate" when in reality all we are doing is passing the dopamine joint around to our addicted ego.

We launch participatory, citizen-led projects, based on renewable energy and denouncing multinationals. But as soon as the first financial contributions come in, we hire a marketer/community manager to ask everyone to like our project on Facebook.

For someone like me who tries to promote this blog or my crowdfunding projects, it is hard to accept that we are sick with permanent advertising, that we need to become discreet, to work only by word of mouth, to grow slowly or even to shrink.

But perhaps it is precisely because it is difficult that it is worth trying. We worry about the pollution of the air, the soil, the water, our bodies. But nobody seems to worry about the pollution of our minds...

Photo by Henry & Co. on Unsplash

I am @ploum, a disconnected speaker and electronic writer, paid on a pay-what-you-want basis through Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

December 30, 2018

Have you ever wanted to preview your new Drupal theme in a production environment without making it the default yet?

I did when I was working on the redesign of my site earlier in the year. I wanted the ability to add ?preview to the end of any URL and have that URL render in my upcoming theme.

It allowed me to easily preview my new design with a few friends and ask for their feedback. I would send them a quick message like this: "Hi Matt, check out an early preview of my site's upcoming redesign. Please let me know what you think!".

Because I use Drupal for my site, I created a custom Drupal 8 module to add this functionality.

Like all Drupal modules, my module has a *.info.yml file. The purpose of the *.info.yml file is to let Drupal know about the existence of my module and to share some basic information about the module. My theme preview module is called Previewer, so it has a *.info.yml file called previewer.info.yml:

name: Previewer
description: Allows previewing of a theme by adding ?preview to URLs.
package: Custom
type: module
core: 8.x

The module has only one PHP class, Previewer, that implements Drupal's ThemeNegotiatorInterface interface:

<?php

namespace Drupal\previewer\Theme;

use Drupal\Core\Routing\RouteMatchInterface;
use Drupal\Core\Theme\ThemeNegotiatorInterface;

class Previewer implements ThemeNegotiatorInterface {

  /**
   * The function applies() determines if it wants to set the
   * active theme. If the ?preview query string is part of the
   * URL, return TRUE to denote that Previewer wants to set
   * the theme. determineActiveTheme() will be called to
   * ask for the theme's name.
   */
  public function applies(RouteMatchInterface $route_match) {
    if (isset($_GET['preview'])) {
      return TRUE;
    }
    return FALSE;
  }

  /**
   * The function determineActiveTheme() is responsible
   * for returning the name of the theme that is to be used.
   */
  public function determineActiveTheme(RouteMatchInterface $route_match) {
    return 'dries'; // Yes, the name of my theme is 'dries'.
  }

}
The function applies() checks if ?preview is set as part of the current URL. If so, applies() returns TRUE to tell Drupal that it would like to specify what theme to use. If Previewer is allowed to specify the theme, its determineActiveTheme() function will be called. determineActiveTheme() returns the name of the theme. Drupal uses the specified theme to render the current page request.

For this to work, we have to tell Drupal about our theme negotiator class Previewer. This is done by registering it as a service in previewer.services.yml:

services:
  theme.negotiator.previewer:
    class: Drupal\previewer\Theme\Previewer
    tags:
      - { name: theme_negotiator, priority: 10 }

The theme_negotiator tag tells Drupal to call our class Drupal\previewer\Theme\Previewer when it has to decide what theme to load.

A service is a common concept in Drupal (inherited from Symfony). Many of Drupal's features are separated into a service. Each service does just one job. Structuring your application around a set of independent and reusable service classes is an object-oriented programming best-practice. To some it might feel unnecessarily complex, but it actually promotes reusable, configurable and decoupled code.
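The dispatch behind this tag works like a chain of responsibility: Drupal collects every service tagged theme_negotiator, sorts them by priority (highest first), and asks each one in turn until applies() returns TRUE; that negotiator then names the theme. A minimal sketch of that logic, in Python rather than PHP for brevity (everything except Previewer and the 'dries' theme name is illustrative, not a Drupal API):

```python
# Sketch of Drupal's theme-negotiation dispatch (illustrative, not Drupal code).

class DefaultNegotiator:
    """Fallback negotiator: always applies, returns the site's default theme."""
    priority = 0

    def applies(self, request):
        return True

    def determine_active_theme(self, request):
        return "bartik"  # illustrative default theme name

class Previewer:
    """Mirrors the PHP Previewer class: applies when ?preview is present."""
    priority = 10

    def applies(self, request):
        return "preview" in request  # mimics isset($_GET['preview'])

    def determine_active_theme(self, request):
        return "dries"

def resolve_theme(negotiators, request):
    # Highest priority first; first negotiator that applies wins.
    for negotiator in sorted(negotiators, key=lambda n: -n.priority):
        if negotiator.applies(request):
            return negotiator.determine_active_theme(request)

negotiators = [DefaultNegotiator(), Previewer()]
print(resolve_theme(negotiators, {"preview": "1"}))  # dries
print(resolve_theme(negotiators, {}))                # bartik
```

This is why the priority of 10 matters in the service definition: it lets Previewer be consulted before lower-priority negotiators such as the core defaults.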

Note that Drupal 8 adheres to PSR-4 namespaces and autoloading. This means that files must be named in specific ways and placed in specific directories in order to be recognized and loaded. Here is what my directory structure looks like:

$ tree previewer
previewer
├── previewer.info.yml
├── previewer.services.yml
└── src
    └── Theme
        └── Previewer.php
And that's it!

December 21, 2018

Drupal 8 has been growing 40 to 50 percent year over year. It's a healthy growth rate. Regardless, it is always worth exploring how we can continue to accelerate that growth.

Earlier this week, I wrote about the power of removing obstacles to growth, and shared how Amazon approaches its own growth blockers. Amazon identified at least two blockers for long-term growth: (1) shipping costs and (2) shipping times. For more than a decade, Amazon has been focused on eliminating both. They have spent an unbelievable amount of creativity, effort, time, and money to eliminate them.

In that blog post, I promised to share my thoughts around Drupal's own growth barriers. What obstacles can we eliminate to fuel Drupal's long-term growth? Well, I believe the limitations to Drupal's growth can be summarized as:

  1. Make Drupal easy to evaluate and adopt
  2. Make Drupal easy for content creators and site builders
  3. Reduce the total cost of ownership for developers and site owners
  4. Keep Drupal relevant and impactful
  5. Promote Drupal and help Drupal agencies win

For those that have read my blog or watched my DrupalCon keynote presentations, none of these will come as a surprise. Just like Amazon's examples, fixing these obstacles have been, and will be, multi-year efforts.

A mountain image with five product strategy tracks leading to the top: Drupal's five product strategy tracks. A number of current initiatives are shown on each track.

1. Make Drupal easy to evaluate and adopt

We need to make it easy for more people to try Drupal. To help evaluators explore Drupal's possibilities, we improved the download and installation experience, and included a demonstration site with core. We made fantastic progress on this in 2018.

Now that we have improved the evaluator experience, I'd love to see us focus on the "new user" experience. When you put yourself in the shoes of a new Drupal user, you'd still find it hard to set up a local development environment. There are too many options, too little direction, and no one official way for how to get started with Drupal. The "new user" is not receiving enough attention, and that slows adoption so I'd love to see us focus on that in 2019.

2. Make Drupal easy for content creators and site builders

One of the most powerful trends I've noticed time and time again is that simplicity wins. People expect software to be functionally powerful and easy to use. This is especially true for content creators and site builders.

To make Drupal easier to use for content creators and site builders, we've introduced WYSIWYG and in-place editing in Drupal 8.0, and now we're working hard on media management, layout building, content workflows and a new administration and authoring UI.

A lot of these initiatives add tools to the UI that empower content creators and site builders to do more with less code. Long term, I believe that we need to add more of these "no-code" or "low-code" capabilities to Drupal.

3. Reduce the total cost of ownership for developers and site owners

Developers want to be agile, fast and deliver high quality projects that add value for their organization. Developers don't want their tools to get in the way.

For Drupal this means that they want to build sites, including themes and modules, without being bogged down by complex upgrades, expensive migrations or cumbersome developer workflows.

For developers and site owners we have made upgrades easier, we adopted a 6-month innovation model, and we extended security coverage for minor releases. This removes the complexity from major upgrades, gives organizations more time to upgrade, and allows us to release new capabilities more frequently. This is a very big deal for developers and site owners!

In addition, we're working on improving Drupal's Composer support and configuration management capabilities. This will help developers automate and streamline their day-to-day work.

Longer term, improved Composer support could act as a stepping stone towards automated updates, which would be one of the most effective ways to free up a developer's time.

4. Keep Drupal relevant and impactful

The innovation in the Drupal ecosystem happens thanks to Drupal contributors. We need to attract new contributors to Drupal, and keep existing contributors excited. This means we have to keep Drupal relevant and impactful.

To keep Drupal relevant, we've been investing in making Drupal an API-first platform for many years now. Headless Drupal or decoupled Drupal is one of Drupal's competitive advantages. Drupal's web service APIs allow developers to use Drupal with their JavaScript framework of choice, push content to different channels, and better integrate Drupal with different technologies in the marketing stack.

Drupal developers can now do unprecedented things with Drupal that weren't available before. JavaScript and mobile application developers have been familiarizing themselves with Drupal due to its improved API-first capabilities. All of this keeps Drupal relevant, ensures that Drupal has high impact, and that we attract new developers to Drupal.

5. Promote Drupal and help Drupal agencies win

While Drupal is well-known as an Open Source project, there isn't a deep understanding of how Drupal is evolving or how Drupal compares to its competitors.

Drupal is improving rapidly every six months with each new minor version release, but I'm not sure we're getting that message out effectively. We need to promote our amazing progress, not only to everyone in the web development community, but also to marketers and content managers, who are now often weighing in heavily on CMS decisions.

We do an incredible job collaborating on code — thousands of us are helping to build Drupal — but we do a poor job collaborating on marketing, education and promotion. Imagine what could happen if these thousands of individuals and agencies would all collaborate on promoting Drupal!

That is why the Drupal Association started the Promote Drupal initiative, and why we're trying to rally people in the community to work together on creating pitch decks, case studies, and other collateral to promote and market Drupal.

Here are a few things already happening:

  • There is an updated Drupal Brand Book for organizations to follow as they design Drupal marketing and sales materials.
  • A team of volunteers is creating a comprehensive Drupal pitch deck that Drupal agencies can use as a starting point when working with new clients.
  • DrupalCon will have a new "Content & Digital Marketing Track" for marketing teams responsible for content generation, demand generation, user journeys, and more; and an "Agency Leadership Track" for those running Drupal agencies.
  • We will begin work on a competitive comparison chart — contrasting Drupal with other CMS competitors like Adobe, Sitecore, Contentful, WordPress, Prismic, and more.
  • A number of local Drupal Associations are hiring marketing people to help promote Drupal in their region.

Just like all open source contribution, it takes many to move things forward. So far, 40 people have signed up to help with these marketing efforts. If your organization has a marketing team that would like to contribute to the marketing of Drupal, check out the Promote Drupal initiative page and please join the Promote Drupal team.

Educating the world about how Drupal is evolving, the amazing use cases we support, and how Drupal compares to old and new competitors will go a very long way towards raising awareness of the project and growing the businesses built on and around Drupal.

Final thoughts

After talking to hundreds of Drupal users and would-be users, as well as dozens of agency owners, I believe we're working on the right things. Overcoming these growth obstacles is a multi-year effort. While the various initiatives might change, I believe we'll keep working on these five tracks for the next decade. We've been making steady progress the last few years but need to remain both patient and committed to driving them home. Just like Amazon continues to work on its growth obstacles after more than a decade, I expect we'll be working on these five obstacles for many years to come.

You probably know the feeling we get when, after a trip, we head back toward our home.

Suddenly the streets become familiar; we know every house, every lamppost, every paving slab. Physically there is still some way to go but, in our heads, we have already arrived home. A feeling that usually gives a little boost of energy. Horses, so it is said, have the same sensation and start moving faster: we say they "smell the stable".

This familiar zone, when you think about it, is usually bounded by arbitrary borders we impose on ourselves: a slightly wide road, a crossroads, a bridge. Beyond it lies foreign land. We may know it well, but we are no longer at home.

In my life, I have noticed that home is the zone you cover on foot, nose in the wind. The car, on the other hand, does not extend our personal territory. Once shut inside, we are not in a geographic place, we are "in the car". An abstract landscape scrolls by on the screens of the windows.

And then, one not-so-distant day, I discovered the bicycle.

Unlike the car, the bicycle puts us in direct contact with our surroundings. We can stop, change our minds, turn around without fearing a blast of horns. We say hello to the people we pass. We can spot a little path we had never noticed before and take it "just to see".

In short, the bicycle lets us extend our territory. First by 3-4 km. Then 10. Then 20 and further still.

Riding again and again, I now feel at home in a zone stretching up to 20 km from my house. I know every little path, every track.

My territory, according to Stravastats

When I venture beyond my "border", I get a shiver at the idea of entering the unknown. And I feel intense relief when I cross back the other way. But after a few such trips, I notice that my border now lies a little further out.

This is not without its drawbacks: each time, I have to go further to cross my border. In the car, I tend to get lost by taking directions that, at some point, turn out to be impassable for an automobile. I forget that I am no longer on my bike!

But I am at home. I am the master of a gigantic domain. I do not particularly dream of great exotic journeys to faraway lands, because I know adventure awaits me 10, 20 or 30 km away, on that little path I have never yet taken.

Backside on a saddle, feet on the pedals, I am an explorer, a conqueror. I drink in the landscapes, the light, the climbs and the descents.

In short, I am at home…

Note: I had been procrastinating on this post for months when Thierry Crouzet started publishing Born to Bike. So I had to add my own stone to the edifice.

Photo by Rikki Chan on Unsplash

I am @ploum, a speaker and disconnected electronic writer, paid whatever you choose via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

This Thursday, 17 January 2019 at 7 p.m., the 74th Mons session of the Jeudis du Libre of Belgium will take place.

The topic of this session: SMACK, an open source software stack for big data processing

Theme: big data | data analysis

Audience: developers | analysts | data scientists

Speaker: Mathieu Goeminne (CETIC)

Venue: HEPH Condorcet, Chemin du Champ de Mars, 15 – 7000 Mons – Auditoire Bloc E, located at the back of the parking lot (see this map on Openstreetmap; follow the site's internal road to reach building E). Note that this is a different building from the one used for some previous sessions.

Attendance is free and only requires registering by name, preferably in advance via the page, or at the entrance to the session. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also enjoy the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to consult the agenda and subscribe to the mailing list to receive all the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month, in premises of and in collaboration with the Mons universities and colleges involved in computer science education (UMONS, HEH and Condorcet), with the help of the non-profit LoLiGrUB, which is active in promoting free software.

Description: Open source software holds a predominant place in the data scientist's toolbox. The joint use of some of these tools has led to "toolboxes" or "technology stacks" that are tending to become standardized. In this presentation, the components of the SMACK stack (Spark, Mesos, Akka, Cassandra and Kafka) will be introduced, along with their roles and their interactions within big data processing projects.

Short bio: Mathieu Goeminne obtained his Master's degree in Computer Science in 2009 and his doctorate in Science in 2013 from the University of Mons (UMONS). His thesis focused on the evolution of software ecosystems. He joined the Big Data team of CETIC's SST department in March 2016, where he mainly works on implementing tools for processing and analyzing data with high volume, throughput or complexity. He is interested in issues raised by the growing importance of digital content in our society.

December 20, 2018

I published the following diary on the SANS Internet Storm Center: “Using OSSEC Active-Response as a DFIR Framework”:

In most of our networks, endpoints are often the weakest link because they are more difficult to control (for example, laptops travel and are used at home, etc.). They can also be located in different locations, even different countries for the biggest organizations. To manage them better, tools can be deployed to perform many different tasks… [Read more]

[The post [SANS ISC] Using OSSEC Active-Response as a DFIR Framework has been first published on /dev/random]

Nellio and Eva find themselves facing Georges Farreck, the famous actor who helped them, and Mérissa, a mysterious woman who seems to control the entire industrial conglomerate, and even more than that.

— Damn it all, shouts Mérissa. I had expressly banned advertising in the whole building!
— The thing is, stammers Warren, we had a law passed that bans anti-advertising technologies. The architect was therefore required…
— Which means we are being spied on, chokes Mérissa. The ad network knows I am expecting twins, even though that is completely private information!
— Mérissa, you know very well that data collection can no longer legally be prevented since the law on the freedom of observation, a law we supported and lobbied hard for. Besides…

He suddenly freezes mid-sentence. Bringing his hand to his heart, he lets out a groan before slowly collapsing to the floor.

— Warren! screams Georges Farreck, rushing to catch him.

Mérissa calmly removes the neurex she was discreetly wearing around her skull.
— So I can no longer trust this thing if I am being watched.
— What did you do to him? asks Georges Farreck, trying to lift Warren's body.
— I gave the order to fire him on the spot!
— He is dead! How…
— Perhaps he was wearing a pacemaker tied to his health insurance. Unlucky for him: the end of his contract triggered the immediate termination of his insurance, and therefore of his pacemaker.
— That is criminal! I murmur.
— Yes, such a lack of foresight is criminal, replies Mérissa, holding my gaze. Top managers often forget that they are merely employees like any others, in the service of the board of directors. Even though they usually do the firing, it does happen that a top manager gets fired in turn. Like today. Oh, he will be entitled to his golden parachute, of course. It will make for a splendid funeral!

No one in the room has moved. Eva and Mérissa size each other up. The slim, dark woman with long jet-black hair stands naked facing the blonde woman with pale skin and a swollen belly.

— Mérissa, the algorithm must be unplugged, murmurs Eva in a calm voice.
— Never! The algorithm is my life's work! It works perfectly.
— It has gone mad.
— How would you know?
— I am the living proof, in flesh and blood! In flesh and blood!

I raise my voice to interrupt them.

— What are you two talking about? Eva, will you explain?
— A few years ago, a brilliant programmer developed a high-frequency trading algorithm to anticipate stock prices. The algorithm used every machine learning and artificial intelligence technique available. Its great particularity was that, unlike other trading algorithms, it was connected to every source of information imaginable: the weather, road traffic, surveillance cameras, news sites… Thanks to that, the programmer reasoned, it would be able to find correlations between real-world events and stock prices.
— The programmer is Mérissa? I ask naively.
— Bravo, Sherlock, the latter replies.
— In a second phase, she gave her algorithm the ability to act upon the world. First by buying and selling shares but, later on, through everything that could be controlled over the Internet in order to influence stock prices. The algorithm started creating profiles on social networks to feed false rumors, altering election results…
— I never wanted that, protests Mérissa. The algorithm learned it by itself.
— No matter. In the end, the algorithm set about influencing humans and transforming the world with one single objective: increasing the dividends on Mérissa's shares.

I cannot help reacting.

— But… that is scandalous!
— No, it is logical. For decades, society had done nothing but transform humanity to optimize stock prices. Wars, famines and terrorist attacks served only to manipulate the markets, clumsily. I merely rationalized the process.
— And all that in barely a few years? Yet you look so young.
— The power of wealth, Mérissa smiles at me, stroking her rounded belly. I am eighty-nine years old!

I nearly choke. Unperturbed, Eva continues her explanation.

— Advertising, the neurexes, the lenses… The algorithm understood very quickly how to manipulate humanity. The penitentiary asteroids were converted into factories and, on Earth, the systematic debasement of the jobless was instituted to discredit them and keep them from realizing that they are the majority.
— All of that already existed! It is easy to blame me for every ill of society. The algorithm only optimized existing situations. Sometimes it did not even have to do anything.
— And nobody ever rebelled against this algorithm? I add.

Eva pauses and looks at me gently.

— How could they? The algorithm is everywhere. The algorithm controls everything. It creates avatars on the networks and creates its own rebel leaders in order to identify and eliminate the most recalcitrant elements.
— You mean…
— Yes, FatNerdz is an entirely virtual account whose only purpose was to spot rebels.

I am left speechless. The explosions in Max's and Junior's apartments had both happened just after a communication with FatNerdz.
— But… but he gave me information! He is the one who allowed me to find the printeur and who gave me the coordinates of this place.

Eva takes a deep breath. She looks at Mérissa. Georges Farreck says nothing; he seems out of his depth.

— The algorithm is programmed to learn, always to learn and to improve its models, all in the service of profitability. Yet there is one variable that remains random and incomprehensible: the human being. It cannot get rid of humans, because profitability rests on humans. To make one man rich, another man must necessarily be made poor. You cannot be rich all alone. So the algorithm needed a better understanding of human nature. And it devised the plan of transferring itself into a human body, in order to study it at the closest possible range.
— What?

All three of us jumped. Mérissa sits down on her chair, holding her belly. She stares intensely at Eva.

— At first, the algorithm used a product it had launched itself: a sex doll so realistic it was impossible to tell it apart from a human being. Studies had shown that if the resemblance was strong but not completely convincing, the effect was deeply unsettling. The dolls were therefore truly perfect in terms of realism. But their programming was very simple and limited to conversations and actions related to sex. None of these dolls had any real intelligence. Except one, which received special treatment…

I swallow hard.

— Eva, are you saying that…

Photo by ActionVance on Unsplash

I am @ploum, a speaker and disconnected electronic writer, paid whatever you choose via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE license.

December 19, 2018

My training submission has been accepted at the BruCON Spring Training session in April 2019. This training is intended for Blue Team members and system/security engineers who would like to take advantage of the OSSEC integration capabilities with other tools and increase the visibility of their infrastructure behaviour.

OSSEC is sometimes described as a low-cost log management solution, but it has many interesting features which, when combined with external sources of information, may help in hunting for suspicious activity occurring on your servers and endpoints. During this training, you will learn the basics of OSSEC and its components, how to deploy it and quickly get results. Then we will learn how to deploy specific rules to catch suspicious activities. From an input point of view, we will see how easy it is to learn new log formats to increase the detection scope and, from an output point of view, how we can generate alerts by interconnecting OSSEC with other tools like MISP, TheHive, or an ELK stack / Splunk, etc.
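To give a flavour of what "deploying specific rules" looks like, here is a small sketch of a custom OSSEC composite rule. The rule ID, group name, threshold and time window are illustrative choices of mine, not training material; 5716 is OSSEC's stock "sshd: authentication failed" rule, which this rule builds on.

```xml
<!-- local_rules.xml (sketch): raise a higher-severity alert when the same
     source triggers several SSH authentication failures in a short window. -->
<group name="local,syslog,">
  <rule id="100100" level="10" frequency="6" timeframe="120">
    <!-- Fire after 6 matches of rule 5716 within 120 seconds. -->
    <if_matched_sid>5716</if_matched_sid>
    <same_source_ip />
    <description>Multiple SSH authentication failures from same source.</description>
  </rule>
</group>
```

Alerts from a rule like this can then be forwarded to the external tools mentioned above via OSSEC's output integrations.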

A quick overview of the training content:

  • Day 1
    • Introduction to OSSEC
    • Day to day management
      • Deployment (automation!)
      • Maintenance
      • Debugging
    • Collecting events using homemade decoders and rules
    • Reporting and alerting
  • Day 2
    • “Pimping” OSSEC with external feeds & data
    • Automation using Active-Response
    • Integration with external tools for better visibility

The schedule is online and the registration page is here. Please spread the word!

[The post “Hunting with OSSEC” at BruCON Spring Training has been first published on /dev/random]

I published the following diary on the SANS Internet Storm Center: “Restricting PowerShell Capabilities with NetSh”:

The Christmas break is coming for most of us, so let’s take some time to share some tips to better protect our computers. The Microsoft Windows OS has plenty of tools that, when properly used, can reduce the risk of being infected by malware. As best practices, we must have antivirus enabled, we can deploy AppLocker to allow only authorized applications to be launched, and we can restrict applications from being executed from locations like %APPDATA% or %TEMP%, but there are tools that are much more difficult to restrict on a regular host like… [Read more]

[The post [SANS ISC] Restricting PowerShell Capabilities with NetSh has been first published on /dev/random]

December 18, 2018

In my last blog post, I shared that when Acquia was a small startup, we were simultaneously focused on finding product-market fit and eliminating barriers to future growth.

In that light, I loved reading Eugene Wei's blog post, Invisible asymptotes. Wei was a product leader at Amazon. In his blog post, he explains how Amazon looks far into the future, identifies blockers for long-term growth, and turns eliminating these growth barriers into multi-decade efforts. As Amazon shows, eliminating barriers to growth remains very important long after you have outgrown the startup phase.

For example, Amazon considered shipping costs to be a growth blocker, or as Wei describes it, an invisible asymptote for growth. People hate paying for shipping costs, so Amazon decided to get rid of them. At first, solving this looked prohibitively expensive. How can you offer free shipping to millions of customers? Solving for this limitation became a multi-year effort. First, Amazon tried to appease customers' distaste for shipping fees with "Super Saver Shipping". Amazon introduced Super Saver Shipping in January 2002 for orders over $99. If you placed an order of $99 or more, you received free shipping. In the span of a few months, that number dropped to $49 and then to $25. Eventually this led to the launch of Amazon Prime in 2005, making all shipping "free". Members pay $79 per year for free, unlimited two-day shipping on eligible purchases. While a program like Amazon Prime doesn't actually make shipping free, it feels free to the customer, which effectively eliminates the barrier for growth. The impact on Amazon's growth was tremendous. Today, Amazon Prime provides Amazon an economic moat, or a sustainable competitive advantage – it isn't easy for other retailers to compete from a sheer economic and logistical standpoint.

Another obstacle for Amazon's growth was shipping times. People don't like having to wait for days to receive their Amazon purchase. Several years ago, I was talking to Werner Vogels, Amazon's global CTO, and asked him where most commerce investments were going. He responded that reducing shipping times was more strategic than making improvements to the commerce backend or website. As Wei points out in his blog, Amazon has been working on reducing shipping times for over a decade. First by building a higher density network of distribution centers, and more recently through delivery from local Whole Foods stores, self-service lockers at Whole Foods, predictive or anticipatory shipping, drone delivery, and more. Slowly, but certainly, Amazon is building out its own end-to-end delivery network with one primary objective: reducing shipping times.

Every organization has limitations that stunt long-term growth so there are important lessons that can be learned from how Amazon approached its blockers or invisible asymptotes:

  1. Take the time to correctly identify your long-term blockers for growth.
  2. Removing these long-term blockers for growth may look impossible at first.
  3. Removing these long-term blockers requires creativity, innovation, patience, persistence and aggressive capital allocation. It can take many initiatives and many years to eliminate them.
  4. Overcoming these obstacles can be a powerful strategy that can unlock unbelievable growth.

I spend a lot of time and effort working on eliminating Drupal's and Acquia's growth barriers so I love these kind of lessons. In a future blog post, I'll share my thoughts about Drupal's growth blockers.

December 16, 2018

Most products cycle through the infamous Innovation S-curve, which maps a product's value and growth over time.

Product lifecycle s curve

Startups are eager to find product-market fit, the inflection point in which the product takes off and experiences hockey-stick growth (the transition from phase one to phase two).

Just as important, however, is the stagnation point, or the point later in the S-curve when a product experiences growth stagnation (the transition from phase two to phase three). Many startups don't think about their stagnation point, but I believe they should, because it determines how big the product can become.

Ten years ago, a couple years after Acquia's founding, large organizations were struggling with scaling Drupal. I was absolutely convinced that Drupal could scale, but I also recognized that too few people knew how to scale Drupal successfully.

Furthermore, there was a lot of skepticism around Open Source scalability and security. People questioned whether a community of volunteers could create software as secure and scalable as their proprietary counterparts.

These struggles and concerns were holding back Drupal. To solve both problems, we built and launched Acquia Cloud, a platform to build, host and manage Drupal sites.

After we launched Acquia Cloud, Acquia grew from $1.4 million in bookings in 2009 to $8.7 million in bookings in 2010 (600% year-over-year growth), and to $22 million in bookings by 2011 (250% year-over-year growth). We had clearly found product-market fit!

Not only did it launch Acquia into rocket-ship growth, it also extended our stagnation point. We on-boarded many large organizations and showed that Drupal could scale to very large sites. This helped unlock a lot of growth for both Drupal and Acquia. I can say with certainty that many large organizations that use Drupal would not have adopted Drupal without Acquia.

Helping to grow Drupal — or extending Drupal's stagnation point — was always part of Acquia's mission. From day one, we understood that for Acquia to grow, Drupal had to grow.

Launching Acquia Cloud was a great business decision for Acquia; it gave us product-market fit, launched us into hockey-stick growth, and extended our S-curve.

As I think back about how Acquia approached the Innovation S-curve, a few important lessons stand out:

  • Focus on business opportunities that serve a burning customer need that can launch or accelerate your organization.
  • Focus on business opportunities that remove long-term barriers to growth and push out the stagnation point.

December 14, 2018

Gift drive

Every December, Acquia organizes a gift drive on behalf of the Wonderfund. The gift drive supports children who otherwise wouldn't receive gifts this holiday season. This year, more than 120 Acquians collected presents for 205 children in Massachusetts.

Acquia's annual gift drive always stands out as a heartwarming and meaningful effort. It's a wonderful example of how Acquia is committed to "Give back more". Thank you to every Acquian who participated, and to Wonderfund for their continued partnership. Happy Holidays!

December 13, 2018

Autoptimize by default excludes inline JS and jquery.js from optimization. Inline JS is excluded because it is a typical cache-buster (due to changing variables in it), and as inline JS often requires jQuery to be available, jquery.js needs to be excluded as well. The result of this “safe default”, however, is that jquery.js is a render-blocking resource. So even if you’re doing “inline & defer CSS”, your Start-Render time (or one of the variations thereof) will be sub-optimal.

Jonas, the smart guy behind, proposed embedding inline JS that requires jQuery in a function that executes after the DOMContentLoaded event. So I created a small code snippet as a proof of concept which hooks into Autoptimize’s API, and that seems to work just fine.
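To illustrate the idea (the function name below is mine, not part of Autoptimize’s API): jQuery-dependent inline code gets wrapped so it only runs once the DOM is parsed, i.e. after the deferred jquery.js has executed.

```javascript
// Sketch: run `fn` after DOMContentLoaded, or immediately if the DOM is
// already available (or there is no DOM at all, e.g. under Node).
function deferUntilDomReady(fn) {
  if (typeof document !== "undefined" && document.readyState === "loading") {
    // DOM still being parsed: postpone until DOMContentLoaded fires.
    document.addEventListener("DOMContentLoaded", fn);
  } else {
    fn();
  }
}

// Instead of emitting `jQuery('.widget').init();` inline directly,
// the optimized page would emit it wrapped:
deferUntilDomReady(function () {
  // jQuery-dependent inline code goes here.
});
```

By the time DOMContentLoaded fires, a deferred jquery.js has been executed, so `jQuery` is defined inside the wrapped callback.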

The next step is having some cutting-edge Autoptimize users test this in the wild. You can view/download the code from this gist and add it as a code snippet (or, if you insist, in your theme’s functions.php). Your feedback is more than welcome, I’m sure you know where to find me!

Gabe, Mateu and I just released the third RC of JSON:API 2, so time for an update! The last update is from three weeks ago.

What happened since then? In a nutshell:


Curious about RC3? Compared to RC2, RC3 has five key changes:

  1. ndobromirov is all over the issue queue to fix performance issues: he fixed a critical performance regression in 2.x vs 1.x that is only noticeable when requesting responses with hundreds of resources (entities); he also fixed another performance problem that manifests itself only in those circumstances, but also exists in 1.x.
  2. One major bug was reported by dagmar: the ?filter syntax that we made less confusing in RC2 was a big step forward, but we had missed one particular edge case!
  3. A pretty obscure broken edge case was discovered, one that is probably fairly common for those creating custom entity types: optional entity reference base fields that are empty made the JSON:API module stumble. It turns out optional entity reference fields get different default values depending on whether they’re base fields or configured fields! Fortunately, three people gave valuable information that led to finding this root cause and the solution. Thanks, olexyy, keesee & caseylau!
  4. A minor bug was fixed that only occurs when installing JSON:API Extras and configuring it in a certain way.
  5. Version 1.1 RC1 of the JSON:API spec was published; it includes two clarifications to the existing spec. We already were doing one of them correctly (test coverage added to guarantee it), and the other one we are now complying with too. Everything else in version 1.1 of the spec is additive, this is the only thing that could be disruptive, so we chose to do it ASAP.

So … now is the time to update to 2.0-RC3. We’d love the next release of JSON:API to be the final 2.0 release!

P.S.: if you want fixes to land quickly, follow dagmar’s example:

  1. Note that usage statistics on are an underestimation! Any site can opt out from reporting back, and composer-based installs don’t report back by default. ↩︎

  2. Since we’re in the RC phase, we’re limiting ourselves to only critical issues. ↩︎

  3. This is the first officially proposed JSON:API profile! ↩︎

I published the following diary on “Phishing Attack Through Non-Delivery Notification”:

Here is a nice example of a phishing attack that I found while reviewing data captured by my honeypots. We all know that phishing is a pain and attackers are always searching for new tactics to entice the potential victim to click on a link, disclose personal information or more… [Read more]

[The post [SANS ISC] Phishing Attack Through Non-Delivery Notification has been first published on /dev/random]

If you're driving into Boston, you might notice something new on I-90. Acquia has placed ads on two local billboards; more than 120,000 cars drive past these billboards every day. This is the first time in Acquia's eleven years that we've taken out a highway billboard and dipped our toes in more traditional media advertising. Personally, I find that exciting, because it means that more and more people will be introduced to Acquia. If you find yourself on the Mass Pike, keep an eye out!


December 12, 2018

The post Our Gitlab CI pipeline for Laravel applications – Oh Dear! blog appeared first on

We've built an extensive Gitlab CI Pipeline for our testing at Oh Dear! and we're open sourcing our configs.

This can be applied to any Laravel application and will significantly speed up setting up your own pipeline configuration if you're just getting started.

We're releasing our Gitlab CI pipeline that is optimized for Laravel applications.

It contains all the elements you'd expect: building (composer, yarn & webpack), database seeding, PHPUnit & copy/paste (mess) detectors & some basic security auditing of our 3rd party dependencies.
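As a rough sketch of what such a pipeline can look like (job names, PHP image and stage layout here are my illustrative assumptions, not the actual Oh Dear! config):

```yaml
stages:
  - build
  - test

# Install PHP dependencies once and hand them to later jobs as artifacts.
composer:
  stage: build
  image: php:7.2
  script:
    - curl -sS https://getcomposer.org/installer | php
    - php composer.phar install --prefer-dist --no-ansi --no-interaction
  artifacts:
    paths:
      - vendor/

# Run the test suite against the vendored dependencies.
phpunit:
  stage: test
  image: php:7.2
  script:
    - vendor/bin/phpunit --colors=never
```

The open-sourced config adds the remaining jobs (yarn/webpack builds, database seeding, mess detectors, dependency auditing) in the same stage-based structure.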

Source: Our Gitlab CI pipeline for Laravel applications -- Oh Dear! blog


At Drupal Europe, I announced that Drupal 9 will be released in 2020. Although I explained why we plan to release in 2020, I wasn't very specific about when we plan to release Drupal 9 in 2020. Given that 2020 is less than thirteen months away (gasp!), it's time to be more specific.

Shifting Drupal's six month release cycle

We shifted Drupal 8's minor release windows so we can adopt Symfony's releases faster.

Before I talk about the Drupal 9 release date, I want to explain another change we made, which has a minor impact on the Drupal 9 release date.

As announced over two years ago, Drupal 8 adopted a 6-month release cycle (two releases a year). Symfony, a PHP framework which Drupal depends on, uses a similar release schedule. Unfortunately, Drupal's releases have historically occurred 1-2 months before Symfony's, which forces us to wait six months to adopt the latest Symfony release. To be able to adopt the latest Symfony releases faster, we are moving Drupal's minor releases to June and December. This will allow us to adopt the latest Symfony releases within one month. For example, Drupal 8.8.0 is now scheduled for December 2019.

We hope to release Drupal 9 on June 3, 2020

Drupal 8's biggest dependency is Symfony 3, which has an end-of-life date in November 2021. This means that after November 2021, security bugs in Symfony 3 will not get fixed. Therefore, we have to end-of-life Drupal 8 no later than November 2021. Or put differently, by November 2021, everyone should be on Drupal 9.

Working backwards from November 2021, we'd like to give site owners at least one year to upgrade from Drupal 8 to Drupal 9. While we could release Drupal 9 in December 2020, we decided it was better to try to release Drupal 9 on June 3, 2020. This gives site owners 18 months to upgrade. Plus, it also gives the Drupal core contributors an extra buffer in case we can't finish Drupal 9 in time for a summer release.

Planned Drupal 8 and 9 minor release dates.

We are building Drupal 9 in Drupal 8

Instead of working on Drupal 9 in a separate codebase, we are building Drupal 9 in Drupal 8. This means that we are adding new functionality as backwards-compatible code and experimental features. Once the code becomes stable, we deprecate any old functionality.

Let's look at an example. As mentioned, Drupal 8 currently depends on Symfony 3. Our plan is to release Drupal 9 with Symfony 4 or 5. Symfony 5's release is less than one year away, while Symfony 4 was released a year ago. Ideally Drupal 9 would ship with Symfony 5, both for the latest Symfony improvements and for longer support. However, Symfony 5 hasn't been released yet, so we don't know the scope of its changes, and we will have limited time to try to adopt it before Symfony 3's end-of-life.

We are currently working on making it possible to run Drupal 8 with Symfony 4 (without requiring it). Supporting Symfony 4 is a valuable stepping stone to Symfony 5 as it brings new capabilities for sites that choose to use it, and it eases the amount of Symfony 5 upgrade work to do for Drupal core developers. In the end, our goal is for Drupal 8 to work with Symfony 3, 4 or 5 so we can identify and fix any issues before we start requiring Symfony 4 or 5 in Drupal 9.

Another example is our support for reusable media. Drupal 8.0.0 launched without a media library. We are currently working on adding a media library to Drupal 8 so content authors can select pre-existing media from a library and easily embed them in their posts. Once the media library becomes stable, we can deprecate the use of the old file upload functionality and make the new media library the default experience.

The upgrade to Drupal 9 will be easy

Because we are building Drupal 9 in Drupal 8, the technology in Drupal 9 will have been battle-tested in Drupal 8.

For Drupal core contributors, this means that we have a limited set of tasks to do in Drupal 9 itself before we can release it. Releasing Drupal 9 will only depend on removing deprecated functionality and upgrading Drupal's dependencies, such as Symfony. This will make the release timing more predictable and the release quality more robust.

For contributed module authors, it means they already have the new technology at their service, so they can work on Drupal 9 compatibility earlier (e.g. they can start updating their media modules to use the new media library before Drupal 9 is released). Finally, their Drupal 8 know-how will remain highly relevant in Drupal 9, as there will not be a dramatic change in how Drupal is built.

But most importantly, for Drupal site owners, this means that it should be much easier to upgrade to Drupal 9 than it was to upgrade to Drupal 8. Drupal 9 will simply be the last version of Drupal 8, with its deprecations removed. This means we will not introduce new, backwards-compatibility breaking APIs or features in Drupal 9 except for our dependency updates. As long as modules and themes stay up-to-date with the latest Drupal 8 APIs, the upgrade to Drupal 9 should be easy. Therefore, we believe that a 12- to 18-month upgrade period should suffice.

So what is the big deal about Drupal 9, then?

The big deal about Drupal 9 is … that it should not be a big deal. The best way to be ready for Drupal 9 is to keep up with Drupal 8 updates. Make sure you are not using deprecated modules and APIs, and where possible, use the latest versions of dependencies. If you do that, your upgrade experience will be smooth, and that is a big deal for us.

Special thanks to Gábor Hojtsy (Acquia), Angie Byron (Acquia), xjm (Acquia), and catch for their input in this blog post.

December 11, 2018

December 10, 2018

This morning, I received a mail from Cisco telling me that I've been nominated as a finalist for their IT Blog Awards (category: "Most Inspirational"). I maintain this blog just for fun and to share useful (I hope) information with my readers, not to get awards, but it's always nice to get such feedback. The final round of voting is now open; if you have a few minutes, just vote for me!

Votes are open here. Thank you!

[The post Nominated for the IT Blog Awards has been first published on /dev/random]

December 09, 2018


In my previous blog posts we configured Stubby on GNU/Linux and FreeBSD.

In this blog article we'll configure DNS-over-TLS with Unbound on OPNsense. Unbound is developed by NLnet Labs, while Stubby is built on the getdns library.

DNS resolvers

Stubby is a small DNS stub resolver that encrypts your DNS traffic, which makes it a good fit for improving end-user privacy. Stubby can be integrated into existing DNS setups.

Dnsmasq is a small DNS resolver that can cache DNS queries and forward DNS traffic to other DNS servers.

Unbound is a fast validating, caching DNS resolver that supports DNS-over-TLS. Neither Unbound nor Dnsmasq is a full-featured DNS server like BIND.

The main difference between Unbound and Dnsmasq is that Unbound can talk to the root servers directly, while Dnsmasq always needs to forward your DNS queries to another DNS server: your ISP's DNS server or a public DNS service like Quad9, Cloudflare or Google.

Unbound has built-in support for DNS-over-TLS. Dnsmasq needs an external DNS-over-TLS resolver like Stubby.
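To make the protocol difference concrete, here is a small illustrative Python sketch of what a DNS-over-TLS client sends. Per RFC 7858, the regular DNS message gets a two-byte length prefix, exactly as with DNS over TCP, and is then carried inside a TLS session to port 853 instead of plain UDP port 53. The query builder below is a minimal hand-rolled example, not production code:

```python
import struct

def build_dns_query(name: str, txid: int = 0x1234) -> bytes:
    """Build a minimal DNS query message (A record, recursion desired)."""
    # Header: ID, flags (RD set), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    # QTYPE=A (1), QCLASS=IN (1).
    return header + qname + struct.pack(">HH", 1, 1)

query = build_dns_query("example.com")
# RFC 7858 reuses the DNS-over-TCP wire format: each message is prefixed
# with its two-byte length; the only change is that the stream is a TLS
# session to port 853 instead of plaintext.
dot_frame = struct.pack(">H", len(query)) + query
```

The encryption itself is just ordinary TLS around this framed stream, which is why `tcpdump` on port 853 shows traffic but no readable queries.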

Which one to use?

It depends, as always. Stubby integrates easily into existing DNS setups such as Dnsmasq. Unbound is one package that does it all and is more feature-rich than Dnsmasq.


I use OPNsense as my firewall. Unbound is the default DNS resolver on OPNsense, so it makes (OPN)sense to use Unbound.

Choose your upstream DNS service

There are a few public DNS providers that support DNS-over-TLS; the best known are Quad9 and Cloudflare. Quad9 blocks malicious domains on its default DNS servers, while Cloudflare applies no security blocklist.

In this article we'll use Quad9, but you could also go with Cloudflare or another DNS provider that you trust and that supports DNS-over-TLS.

Enable DNS-over-TLS


You need to configure your firewall to use your upstream DNS provider. You also want to make sure your ISP's DNS servers aren't used.


If you sniff the DNS traffic on your firewall with tcpdump -i wan_interface udp port 53 you'll see that the DNS traffic is unencrypted.


To enable DNS-over-TLS we'll need to reconfigure Unbound.

Go to [ Services ] -> [ Unbound DNS ] -> [ General ] and copy/paste the settings below

forward-zone:
  name: "."
  # Quad9 resolvers, DNS-over-TLS on port 853
  forward-addr: 9.9.9.9@853
  forward-addr: 149.112.112.112@853
  forward-ssl-upstream: yes

into Custom options. These settings reconfigure Unbound to forward all DNS queries to the upstream Quad9 DNS servers over TLS. Adjust the forward-addr lines if you picked another provider.


If you sniff the UDP traffic on your firewall with tcpdump -i wan_interface udp port 53 you'll no longer see any unencrypted DNS traffic, unless not all of your clients are configured to use the firewall as their DNS server.

If you sniff TCP port 853 with tcpdump -i vr1 tcp port 853 you'll see your encrypted DNS-over-TLS traffic.

General DNS settings

You also want to make sure that your firewall isn't configured to use an unencrypted DNS server.



Go to [ System ] -> [ Settings ] -> [ General ] and set the DNS servers. Also make sure that [ ] Allow DNS server list to be overridden by DHCP/PPP on WAN is unchecked.


You can verify the configuration by logging on to your firewall over ssh and reviewing the contents of /etc/resolv.conf.

Have fun!


December 08, 2018

This week I was in New York for a day. At lunch, Sir Martin Sorrell pointed out that Microsoft overtook Apple as the most valuable software company as measured by market capitalization. It's a close call but Microsoft is now worth $805 billion while Apple is worth $800 billion.

What is interesting to me are the radical "ebbs and flows" of each organization.

In the 80's, Apple's market cap was twice that of Microsoft. Microsoft overtook Apple in the early 90's, and by the late 90's, Microsoft's valuation was a whopping thirty-five times Apple's. With a 35x difference in valuation, no one would have guessed that Apple would ever regain the number-one position. However, Apple did the unthinkable and regained its crown in market capitalization. By 2015, Apple was, once again, valued two times more than Microsoft.

And now, eight years after Apple took the lead, Microsoft has taken the lead again. Everything old is new again.

As you'd expect, the change in market capitalization corresponds with the evolution and commercial success of their product portfolios. In the 90s, Microsoft took the lead based on the success of the Windows operating system. Apple regained the crown in the 2000s based on the success of the iPhone. Today, Microsoft benefits from the rise of cloud computing, Software-as-a-Service and Open Source, while Apple is trying to navigate the saturation of the smartphone market.

It's unclear if Microsoft will maintain and extend its lead. On one hand, the market trends are certainly in Microsoft's favor. On the other hand, Apple still makes a lot more money than Microsoft. I believe Apple is slightly undervalued and Microsoft slightly overvalued; the current valuation difference is not justified.

At the end of the day, what I find most interesting is how both organizations have continued to reinvent themselves. This reinvention has happened roughly every ten years. During these periods of reinvention, organizations can fall out of favor for long stretches of time. However, as both organizations prove, it pays off to reinvent yourself and to be a patient product and market builder.

December 07, 2018

And the conference is over! I'm flying back home tomorrow morning, so I have time to write my third wrap-up. The last day of the conference is always harder for many attendees due to the late parties, but I was present on time to attend the last set of presentations. The first one was presented by Wu Tiejun and Zhao Guangyan: "WASM Security Analysis Reverse Engineering". They started with an introduction to WASM, or "WebAssembly". It's a portable technology deployed in browsers which provides an efficient binary format available on many different platforms. An interesting URL they mentioned is WAVM, a standalone VM for WebAssembly. They also covered CVE-2018-4121. I'm sorry for the lack of detail, but the speakers were reading their slides and it was very hard to follow them. Sad, because I'm sure they have deep knowledge of this technology. If you're interested, have a look at their slides once published, or here is another resource.
The next speaker was Charles IBRAHIM, who presented an interesting usage of a botnet. The title of his presentation was "Red Teamer 2.0: Automating the C&C Set-up Process". Botconf is a conference dedicated to fighting botnets, but this time it was about building one! By definition, a botnet can be very useful: sharing resources, executing commands on remote hosts, collecting data, etc. All those operations can be very interesting while conducting red team exercises. Indeed, red teams need tools to perform operations in a smooth way and have to remain below the radar. There are plenty of tools available for red teamers, but there is a lack of aggregation. Charles presented the botnet they developed. The goal is to reduce the time required to build the infrastructure, to easily execute common actions, to log operations and to reduce OPSEC risks. The C&C infrastructure provides user authentication, logging capabilities, remote agent deployment and administration, and covert communication techniques. The steps of the red team process were reviewed:
  • Reconnaissance: via recon-ng
  • Weaponization: installation of a RAT with AV alarms, empire agent
  • Delivery: EC2 instance creation, Gophish
  • Exploitation & post-exploitation: Receives the RAT connection, launch discovery commands
Very interesting approach to an alternative use of a botnet.
Then, Rommel Joven came on stage to talk about Mirai: "Beyond the Aftermath". Mirai was a huge botnet that affected many IoT devices. Since it was discovered, what has been going on? Mirai was used to DDoS major sites like Twitter, Spotify, Reddit, etc. Later the source code was released. Why does it affect IoT devices?

  • Easy to exploit
  • 24/7 availability
  • Powerful enough for DDoS
  • Rarely monitored / patched
  • Low security awareness
  • Malware source code available for reuse
The last point was the key to the presentation. Rommel explained that, since the leak, many new malware families have been developed reusing functions or parts of the code present in Mirai. 80K samples were detected in 2018 so far, with 49% sharing code with Mirai. Malware developers are like regular developers: why reinvent the wheel if you can borrow some code somewhere else? Then Rommel reviewed some malware samples that are known to re-use the Mirai code (or at least a part of it):
  • Hajime – same user/password combinations
  • IoTReaper: use of 9 exploits for infection, LUA integration
  • Persirai/Http81: borrows the port scanner and utility functions from Mirai; similar strings were found
  • BashLite: MiraiScanner(), MiraiIPRanges(), …
  • HideNSeek: configuration table similarity, utility functions, data exfiltration capability
  • ADB.Miner: port scanning from Mirai, adds a Monero miner
When you deploy a botnet, the key is to monetize it. How is this achieved with Mirai & its alternatives? By performing cryptomining operations, stealing ETH coins, or installing proxies or booters.
Let's continue with Piotr BIAŁCZAK, who presented "Leaving no Stone Unturned – in Search of HTTP Malware Distinctive Features". The idea behind Piotr's research is to analyze HTTP requests to identify which ones are performed by a regular browser and which by malware (Windows malware samples), and then to try to build families. The research was based on a huge number of PCAP files that were analyzed through the following process:
PCAP > HTTP request > IDS > SID assigned to request > Analysis > Save to the database
The data sources for the PCAP files were CERT Polska's sandbox system, HTTP traffic generated via popular browsers, and access to the Alexa top 500 via Selenium. In terms of numbers, 36K+ PCAP files were analyzed and 2.5M+ alerts generated. Traffic from malware samples came from many known families like Locky, Zbot, Ursnif, Dreambot, Pony, Nemucod, … 172 families were identified in total, and 19% of requests came from unknown malware. To analyze the results, they searched for errors, features inherent to malicious operations (example: obfuscation) and features which reflect differences in data exchange.
About headers, the interesting features are:
  • Occurrence frequency
  • Misspellings
  • Lack of headers
  • Protocol version
  • Destination port
  • Strange user-agent
  • Presence of non-printable characters
About payloads:
  • Length
  • Entropy
  • Non-printable characters
  • Obfuscation
  • Presence of request pipelining
Some of the findings:
  • Lack of colon in header line
  • Unpopular white space character + space before comma
  • End of header line other than CR+LR
  • Non ASCII character in header line
  • Destination port (other ports used by some malware families)
  • Prevalence of HTTP 1.0 version request by malware samples
  • Non ascii characters in payload (downloaders, bankers and trojans)
  • Entropy
  • GET request with payload
  • POST without Referer header
The research was interesting, but I don't see why a malware developer would craft malformed HTTP requests instead of using a standard library to make them.
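Several of these findings can be checked mechanically. The toy checker below is an illustrative Python sketch (not the tooling from the talk; the function and feature names are mine) that flags a few of the header and payload anomalies listed above:

```python
# Toy detector for some of the distinctive features listed above.
# Illustrative only -- real traffic analysis needs far more care.
def suspicious_features(raw_request: bytes):
    findings = []
    head, _, body = raw_request.partition(b"\r\n\r\n")
    lines = head.split(b"\r\n")
    request_line = lines[0].decode("latin-1", "replace")
    method = request_line.split(" ", 1)[0]
    headers = lines[1:]

    if " HTTP/1.0" in request_line:
        findings.append("HTTP/1.0 request")
    for h in headers:
        if b":" not in h:
            findings.append("header line without colon")
        # Allow tab/CR/LF and printable ASCII; flag everything else.
        if any(c < 0x09 or 0x0e <= c < 0x20 or c > 0x7e for c in h):
            findings.append("non-printable or non-ASCII byte in header")
    if method == "GET" and body:
        findings.append("GET request with payload")
    if method == "POST" and not any(
        h.lower().startswith(b"referer:") for h in headers
    ):
        findings.append("POST without Referer header")
    return findings
```

A request like `POST /gate.php HTTP/1.0` with a malformed `Host` line and no `Referer` would trip three of these checks at once, while a normal browser request trips none.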
Yoshihiro ISHIKAWA & Shinichi NAGANO presented "Let's Go with a Go RAT!". The wellmess malware is written in Go and was not detected by AV engines before June 2018. Mirai is one of the most famous malware families with components written in this language. They performed a deep review of wellmess:
  • It’s a RAT
  • C2 : RCE, upload and download files
  • Identify: Go & .net (some binaries)
  • Windows 32/64 bits and ELF X64
  • Compiled with Ubuntu
  • The “wellmess” name is coming from “Welcome Message”
  • Usage of IRC terms
  • They made some now-classic typos 🙂
    • choise.go
    • wellMess
    • Mozzila
  • Specific user-agents
  • C&C infrastructure (no domains, only IP addresses)
  • Lateral movement not by default but performed via another tool called gost (Go Simple Tunnel)
  • Some versions are in .NET
  • Bot command syntax: using XML messages
They performed a live demo of the botnet and C&C communications. A very deep analysis. They also provided Suricata IDS and YARA rules to detect the malware (check the slides).
After the lunch break, James Wyke presented "Tracking Actors through their Webinjects". He started with a recap about banking malware and webinjects. They are not simple, because web apps are complex, and off-the-shelf solutions are available. The idea of the research: can we classify malware families based on webinjects? Some have been popular for years (Zeus, Gozi). James reviewed many webinjects:
  • Yummba
  • Tables
  • inj_inj
  • adm_ssl
  • concert_all
  • delsrc

For each of them, he gave details like the targets, the origin, an explanation of the name, a YARA rule to detect them, and much more.

Then Łukasz Siewierski presented "Triada: the Past, the Present, the (Hopefully not Existing) Future". He explained in detail the history of the Triada malware present on many Android smartphones. It was discovered in 2016 but evolved over time.
Matthieu Faou presented "The Snake Keeps Reinventing Itself". It was a very nice overview of the Turla espionage group. A lot of details were provided, especially about the exploitation of Outlook. I won't give more details here; have a look at my wrap-up from 2018 where Matthieu gave the same presentation.
Finally, the schedule was completed with Ya Liu's presentation: "How many Mirai variants are there?". Again, a presentation about Mirai and other malware that re-uses the same source code. There was some overlap with Rommel's presentation (see above), but the approach was more technical. Ya explained how to automate the extraction of configurations, attack methods and dictionaries. From 21K analyzed samples, they extracted configurations and attack methods. Based on these data, they created five classification schemes. More info was also published here.
As usual, there was a small closing ceremony with more information about this edition: 26 talks for a total of 1080(!) minutes and 400 attendees coming from all over the world. Note already the dates of the 2019 edition: 3-6 December. The event will be organized in Bordeaux!

[The post Botconf 2018 Wrap-Up Day #3 has been first published on /dev/random]

We’ve had a week of heated discussion within the Perl 6 community. It is the type of debate where everyone seems to lose. It is not the first time we do this and it certainly won’t be the last. It seems to me that we have one of those about every six months. I decided not to link to many reiterations of the debate in order not to feed the fire.

Before defining sides in the discussion, it is important to identify the problems that drive the fears and hopes of the community. I don't think that the latest round of discussions was about the Perl 6 alias itself (Raku), but rather about the best strategy to address two underlying problems:

  • Perl 5 is steadily losing popularity.
  • Perl 6, despite the many enhancements and a steady growth, is not yet a popular language and does not seem to have a niche or killer app in sight yet to spur rapid adoption.

These are my observations and I don’t present them as facts set in stone. However, to me, they are the two elephants in the room. As an indication we could refer to the likes of TIOBE where Perl 5 fell off the top 10 in just 5 years (from 8th to 16th, see “Very Long Time History” at the bottom) or compare Github stars and Reddit subscribers of Perl 5 and 6 with languages on the same level of popularity on TIOBE.

Perl 5 is not really active on Github and the code mirror there has, until today, only 378 stars. Rakudo, developed on Github, understandably has more: 1022. On Reddit, 11726 people subscribe to Perl threads (mostly Perl 5) and only 1367 to Perl 6. By comparison, Go has 48970 stars on Github and 57536 subscribers on Reddit. CPython, Python's main implementation, has 20724 stars on Github and 290308 Reddit subscribers. Or put differently: if you don't work in a Perl shop, can you remember the last time a young colleague knew about Perl 5 other than by reputation? Have you met people in the wild that code in Perl 6?

Maybe this isn’t your experience or maybe you don’t care about popularity and adoption. If this is the case, you probably shrugged at the discussions, frowned at the personal attacks and just continued hacking. You plan to reopen IRC/Twitter/Facebook/Reddit when the dust settles. Or you may have lost your patience and moved on to a more popular language. If this is the case, this post is not for you.

I am under the impression that the participants of what I would call "cyclical discussions" *do* agree with the evaluation of the situation of Perl 5 and 6. What is discussed most of the time is clearly not a technical issue. The arguments reflect different (and in many cases opposing) strategies to alleviate the aforementioned problems. The strategies I can uncover are as follows:

Perl 5 and Perl 6 carry on as they have for longer than a decade and half

Out of inertia, this status-quo view is what we have seen in practice to date. While this vision honestly subscribes to the "sister languages narrative" (Perl is made up of two languages, in order to explain the weird major-version situation), it doesn't address the perceived problems; it chooses to ignore them. The flip side is that with every real or perceived threat to the status quo, the debate resurges.

Perl 6 is the next major version of Perl

This is another status-quo view. The "sister languages narrative" is dead: Perl 6 is the next major version of Perl 5. While a lot of work is needed to make this happen, it's work that is already happening: make Perl 6 fast. The target, however, is defined by Perl 5: it must be faster than the previous release. Perl consists of many layers and VMs are interchangeable: it's culturally still Perl if you replace the Perl 5 runtime with the one from Perl 6. This view is not well received by many people in the Perl 5 community, certainly by those emotionally or professionally invested in "Perl", with Perl most of the time meaning Perl 5.

Both Perl 5 and Perl 6 are qualified/renamed

This is a renaming view that looks for a compromise between both communities. The "sister languages narrative" matches the real-world experience, and both languages can stand on their own feet while being one big community. By renaming both projects and keeping Perl in the name (e.g. Rakudo Perl, Pumpkin Perl), the investment in the Perl name is kept, while the next-major-version dilemma is dissolved. However, this strategy is not an answer for those in the Perl 6 community who feel that the (unjustified) reputation of Perl 5 is hurting Perl 6's adoption. On the Perl 5 side there is some resentment about why good old "Perl" needs to be renamed when Perl 6 is the newcomer.

Rename Perl 6

Perl 6's adoption is hindered by Perl 5's reputation and, at the same time, Perl 6's major number "squatting" places Perl 5 in limbo. The "sister language narrative" is the real-world situation: Perl 5 is not going away, and it should not. The unjustified reputation Perl 5 has for some people is not something Perl 6 needs to fix. Only action by the Perl 6 community is required in this view. However, a "sisterly" rename would also benefit Perl 5. Liberating the next major version will not fix Perl 5's decline, but it may be a small piece of the puzzle of its recovery. Renaming will result in more loosely coupled communities, but Perl communities are not mutually exclusive and the relationship may improve without the version dilemma. The "sister language narrative" becomes a proud origin story. It was mostly Perl 6 people heavily invested in the Perl *and* the Perl 6 brand who opposed this strategy.

Alias Perl 6

While very similar to the strategy above, this view is less ambitious, as it only cares about Perl 6's adoption being hindered by Perl 5's reputation. It's up to Perl 5 to fix its major version problem. It's a compromise between (a number of) people in the Perl 6 community. It may or may not be a way to test if an alias catches on; the renaming of Perl 6 should stay on the table.

Every single strategy will result in people being angry or disappointed because they honestly believe it hurts the strategy they feel is necessary to alleviate Perl's problems. We need to acknowledge that the fears and hopes are genuine and often related. Without going into detail, so as not to reignite the fire (again), the tone of many of the arguments I heard this week from people opposing the Raku alias rang very close, to me, to the arguments Perl 5 users have against the Perl 6 name: being a victim of injustice at the hands of people who don't care about an investment of years, and a feeling of not being listened to.

By losing sight of the strategies in play, I feel the discussion degenerated very early into personal accusations that certainly leave scars while not resulting in even a hint of progress. We are not unique in this situation; see the recent example of the toll it took on Guido van Rossum. I can only sympathize with how Larry must be feeling these days.

While the heated debates may continue for years to come, it’s important to keep an eye on people that silently leave. The way to irrelevance is a choice.


(I disabled comments on this entry, feel free to discuss it on Reddit or the like. However respect the tone of this message and refrain from personal attacks.)

December 06, 2018

I'm just back from the reception that was held at the Cité de l'Espace, such a great place with animations and exhibitions of space-related devices. It's time for my wrap-up of the second day. This morning, after a coffee refill, the first talk of the day was given by Jose Miguel ESPARZA: "Internals of a Spam Distribution Botnet". This talk had content flagged as a mix of TLP:Amber and TLP:Red, so no disclosure. Jose started with an introduction to well-known spam distribution botnets like Necurs and Emotet: what their features are, some volumetric statistics and how they behave. Then, he dived into a specific one, Onliner, well known for its huge number of email accounts: 711 million! He reviewed how the bot works, how it communicates with its C&C infrastructure, the panel, and the people behind the botnet. A nice review and a lot of useful information! The conclusion was that spam bots remain a threat. They are not only used to send spam but also to deliver malware.

Then, Jan SIRMER & Adolf STREDA came on stage to present "Botception: Botnet distributes script with bot capabilities". They presented their research about a bot acting like in the movie "Inception": a bot that distributes a script that acts like… a bot! The "first" bot is Necurs, which was discovered in 2012. It's one of the largest botnets and was used to distribute huge amounts of spam as well as other malware campaigns like ransomware. They explained how the bot behaves and, especially, how it communicates with its C&C servers. In the second part, they explained how they created a tracker to learn more about the botnet. Based on the results, they discovered the infection chain:

Spam email > Internet shortcut > VBS control panel > C&C > Download & execute > Flawed Ammyy.

The core component that was analyzed is the VBS control panel that is accessed via SMB (file://) through an Internet shortcut file. Thanks to the SMB protocol, they got access to all the files and also grabbed payloads in advance! The behaviour is classic: hardcoded C&C addresses, features like install, upgrade, kill or execute, and a watchdog feature. Interestingly, the code was properly documented, which is rare for malicious code. Different versions were compared. In the end, the malware drops a RAT: Flawed Ammyy.

The next talk was "Stagecraft of Malicious Office Documents – A Look at Recent Campaigns", presented by Deepen DESAI, Tarun DEWAN & Dr. Nirmal SINGH. Malicious Office documents (or "maldocs") have been a very common infection vector for a while. But how do they evolve over time? The speakers focused their research on analyzing many maldocs. Today, approximately 1 million documents are used daily in enterprise transactions. The typical infection path is:

Maldoc > Social engineering > Execute macro > Download & execute payload

Why "social engineering"? Since Office 2007, macros are disabled by default and the attacker must use techniques to lure the victim into disabling this default protection.

They analyzed ~1200 documents with low AV detection (both manually and in sandboxes). They looked at URLs, filenames, time frames and obfuscation techniques. What are the findings? They categorized the documents into campaigns that were reviewed one by one:

Campaign 1: “AppRun” – because they used Application.Run
Campaign 2: “ProtectedMacro” because the Powershell code was stored in document elements like boxes
Campaign 3: “LeetMX” – because leet text encoding was used
Campaign 4: “OverlayCode” – because encrypted PowerShell code is accessed using bookmarks
Campaign 5: “xObjectEnum” – because Macro code in the documents were using enum values from different built-in classes in VBA objects
Campaign 6: “PingStatus” – because the document used Win32_PingStatus WMI class to detect sandbox ping to and %userdomain%
Campaign 7: “Multiple embedded macros” – because malicious RTF containing multiple embedded Excel sheets
Campaign 8: "HideInProperty" – because PowerShell code was hidden in the document properties
Campaign 9: “USR-KL” – because they used specific User-Agents: USR-KL & TST-DC

This was a very nice study and recap about malicious documents.

Then, Tom Ueltschi came to present "Hunting and Detecting APTs using Sysmon and PowerShell Logging". Tom is a recurring speaker at Botconf and always presents interesting stuff for hunting bad guys. This time, he came with new recipes (based on Sigma!). But, as he explained, to be able to track bad behaviour, it's mandatory to prepare your environment for investigations (log everything, but also enable specific items like auditing, PowerShell module, script block and transcription logging). The MITRE ATT&CK framework was used as a reference in Tom's presentation. He reviewed three techniques that deserve to be detected:

  • Malware persistence installation through WMI Event Subscription (it needs an event filter, an event consumer and a binding between the two)
  • Persistence installation through login scripts
  • Any suspicious usage of Powershell

For each technique, Tom described what to log and how to search the events to spot the bad guys. The third technique was covered in more depth, with more examples to track many common evasion techniques. They are not easy to describe in a few lines here. My recommendation, if you are dealing with this kind of environment, is to have a look at Tom's slides. He usually publishes them quickly. An excellent talk, as usual!

Rustam Mirkasymov's talk was the last one of the first half-day: "Hunting for Silence". No abstract was given and I was expecting a presentation on threat hunting. Nope, it was a review of the "Silence" trojan which targeted financial institutions in Ukraine in 2017. After a first analysis, the trojan was attributed to APT28, but that was not the case. The attacker did not have the exploit builder but was able to modify an existing sample. Rustam did a classic review of the malware: available commands, communications with the C&C infrastructure, persistence mechanism, … An interesting common point of many presentations this year: the slides usually contained some mistakes made by the malware developers.

After the lunch break, the keynote was given by Colonel Jean-Dominique Nollet from the French Gendarmerie. The title was "Cybercrime fighting in the Gendarmerie". He explained the role of law enforcement authorities in France and how they work to improve the security of all citizens. This is not an easy task because they have to explain very technical information (like botnets!) to non-technical people (citizens as well as other members of the Gendarmerie). Their missions are:

  • Make sure that the people dealing with cyber issues have the right knowledge and tools (support) at a local level (and France is a big country!)
  • Intelligence! But cops cannot hack back! (like many researchers do)
  • Investigations

Coordination is key! For example, to fight child pornography, they have a database of 11 million pictures that can help to identify victims or bad guys. They have already rescued eight children! The evolution cycle is also important:

Information > R&D > Experience > Validate > Industrialize

The key is speed! Finally, another key point was a request for more collaboration between security researchers and law enforcement.

The next speaker was Dennis Schwarz, who presented "Everything Panda Banker". The name comes from references to "Panda" in the code and the control panel. The first sample was found in 2016, uploaded from Norway, but the malware is still alive and new releases were found until June 2018. Dennis explained the protections in place, like Windows API calls resolved via a hash function (an obfuscation technique) and encrypted strings, how configurations are stored, the DGA mechanism, and other features like man-in-the-browser and webinjects. Good content with a huge amount of data that deserves to be re-read, because the talk was given at light speed! I didn't even have time to read all the information present on each slide!
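As background on the DGA mechanism mentioned above: a domain generation algorithm deterministically derives a fresh list of rendezvous domains from a shared seed and the current date, so bot and operator compute the same list while defenders have to predict or sinkhole it. A toy sketch (illustrative only, NOT Panda Banker's actual algorithm):

```python
import hashlib
from datetime import date

# Toy DGA for illustration -- not Panda Banker's real algorithm.
def toy_dga(seed: str, day: date, count: int = 5, tld: str = ".net"):
    """Derive `count` pseudo-random domains from a seed and a date."""
    domains = []
    for i in range(count):
        data = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(data).hexdigest()
        domains.append(digest[:12] + tld)
    return domains
```

Because the output depends only on the seed and the date, the operator can register tomorrow's domains in advance, and researchers who recover the seed can pre-compute and sinkhole them.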

Thomas Siebert came to present “Judgement Day”. Here again, no abstract was provided. The content of the talk was amazing but released as TLP:Red, sorry! Trust me, it was awesome!

After the afternoon break, Romain Dumont and Hugo Porcher presented “The Dark Side of the ForSSHe”. The presentation covered the Windigo malware, well-known for attacking UNIX servers through an SSH backdoor. Once connected to the victim, the bot used a Perl script piped through the connection (so, without any file stored on disk). The malware was Ebury. They deployed honeypots to collect samples and reviewed the script features. The common OpenSSH backdoor features found are:

  • Client & server modified
  • Credential stealing
  • Hook functions that manipulate clear-text credentials
  • Write collected passwords to a file
  • From the SSH client, steal only the private key
  • Exfiltration: through GET or POST, DNS, SMTP or custom protocol (TCP or UDP)
  • Backdoor mode using hardcoded credentials
  • Log evasion by hooking proper functions

Then, they reviewed specific families like:

  • Kamino: steals usernames and passwords, exfiltrate via HTTP, C&C can be upgraded remotely, XOR encrypted, attacker can login as root, anti-logging, victim identified by UUID.
  • Kessel: has a bot feature: commands via DNS TXT records, and an SSH tunnel between the host and any server.
  • Bonadan: bot module, kill existing cryptominers, custom protocol, cryptominer

From a remediation perspective, the advice is always the same: use keys instead of passwords, disable root login, enable 2FA, monitor file descriptors and outbound connections from the SSH daemon.
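Most of that advice maps onto a handful of standard sshd_config directives; a minimal sketch (the 2FA line assumes a PAM-based second factor is already configured):

```
# /etc/ssh/sshd_config -- hardening in line with the advice above
PasswordAuthentication no       # keys instead of passwords
PermitRootLogin no              # disable direct root login
# Require a key AND a second factor (assumes keyboard-interactive 2FA is set up):
AuthenticationMethods publickey,keyboard-interactive
```

Monitoring file descriptors and outbound connections still has to happen outside sshd, with whatever host monitoring you already run.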

The day ended with the classic lightning talks session. The principle remains the same: 3 minutes max and any topic (but related to malware, botnets or security in general). Here is a quick list of covered topics:

  • Onyphe, how to unhide the hidden Internet
  • Kodi media player plugins vulnerabilities
  • Spam bots
  • 3VE botnet
  • VTHunting
  • MISP to Splunk app
  • Anatomy of a booter
  • mwdb v2
  • TSurugi / Bento toolkit
  • A very funny one but TLP:Red
  • We need your IP space (from the Shadowserver foundation)
  • Evil maid attacks (hotel rooms): detect them via power-cycle counts!

The best lightning talk was (according to the audience) the TLP:Red one (it was crazy!). I really liked the one about a simple Python script that can detect if your computer was rebooted without your consent.

That’s all for today, see you tomorrow for the third wrap-up!

[The post Botconf 2018 Wrap-Up Day #2 has been first published on /dev/random]


A new PHP release is born: 7.3!

The PHP development team announces the immediate availability of PHP 7.3.0. This release marks the third feature update to the PHP 7 series.

PHP 7.3.0 comes with numerous improvements and new features such as:

-- Flexible Heredoc and Nowdoc Syntax
-- PCRE2 Migration
-- Multiple MBString Improvements
-- LDAP Controls Support
-- Improved FPM Logging
-- Windows File Deletion Improvements
-- Several Deprecations

Source: PHP: PHP 7.3.0 Release Announcement

The post PHP 7.3.0 Release Announcement appeared first on

December 05, 2018

Here is my first wrap-up for the 6th edition of the Botconf security conference. Like the previous editions, the event was organized in a different location in France. This year, the beautiful city of Toulouse saw 400 people flying in from all over the world to attend the conference dedicated to botnets and how to fight them. Attendees came from many countries (USA, Canada, Brazil, Japan, China, Israel, etc.). The opening session was performed by Eric Freyssinet. Same rules as usual: no harassment, respect of the TLP policy. Let’s start with the review of the first talks.

No keynote on the first day (the keynote speaker is scheduled for tomorrow). The first talk was assigned to Emilien LE JAMTEL from the CERT EU. He presented his research about cryptominers: “Swimming in the Monero Pools”. Attackers have two key requirements: the obfuscation of data and efficient mining on all kinds of hardware. Monero, being obfuscated by default and not requiring specific ASICs, is a nice choice for attackers. Even a smartphone can be used as a miner. Criminals are very creative to drop more miners everywhere, but the common attacks remain phishing (emails) and exploiting vulnerabilities in applications (like WebLogic). Emilien explained how he hunts for new samples. He wrote a bunch of scripts (available here) as well as YARA rules. Once the collection process is done, he extracts information like hardcoded wallet addresses and searches for outbound connections to mining pools. So far, he has collected 15K samples and is able to generate IOCs like C2 communications, persistence mechanisms, specific strings and TTPs. The next step was to explain how you can deobfuscate data hidden in the code and configuration files (config.js or global.js). He concluded with funnier examples of malware samples that killed themselves, or another that contained usernames in the compilation path of the source code. Nice topic to smoothly start the day.

The next talk was performed by Aseel KAYAL: “APT Attack against the Middle East: The Big Bang”. She gave many details about a malware sample her team found targeting the Middle East. The campaign was attributed to APT-C-23, a threat group targeting Palestinians. She explained in a very educational way how the malware was delivered and how it infects the victim’s computer. The malware was delivered as a fake Word document that was in fact a self-extracting archive containing a decoy document and a malicious PE file. She gave details about the malware itself, then more about the “context”. It was called “The Big Bang” due to the unusual module names. Aseel and her team also tracked the people behind the campaign and found many references to TV shows. It was a nice presentation, not simply delivering (arte)facts but also telling a story.

Daniel PLOHMANN presented the Malpedia project at Botconf last year (see my previous wrap-up). This year, he came back with more news about the project and how it evolved over 12 months. The presentation was called “Code Cartographer’s Diary”. The platform now has 850 users and 2900+ contributions. The new version has a REST API (which helps to integrate Malpedia with third-party tools like TheHive – just saying). The second part of the talk was based on ApiScout. This tool helps to detect how the Windows API is used in malware samples. Based on many samples, Daniel gave statistics about the API usage. If you don’t know Malpedia, have a look, it’s an interesting tool for security analysts and malware researchers.

The next speaker was Renato MARINHO, a fellow SANS Internet Storm Center handler, who presented “Cutting the Wrong Wire: how a Clumsy Attacker Revealed a Global Cryptojacking Campaign”. This was the second talk about cryptominers in a half-day. After a quick recap about this kind of attack, Renato explained how he discovered a new campaign affecting servers. During the analysis of a timeline, he found suspicious files in /tmp (config.json) as well as a binary file. This binary was running with the privileges of the WebLogic server running on the box, which had been compromised using the WebLogic exploit. He tracked the attacker using the hardcoded wallet address found in the binary. The bad guy generated $204K in two months! How was the malware detected? Due to a stupid mistake of the developer, the malware automatically killed running Java processes… so the WebLogic application too!

After the lunch break, Brett STONE-GROSS & Tillmann WERNER presented “Chess with Pyotr”. This talk was a summary of a blog post they published. Basically, they reviewed previous botnets like Storm Worm, Waledac, Storm 2.0 and… Kelihos, and gave multiple details about them. Kelihos offered many services: spam, credential theft, DDoS, fast-flux DNS, click fraud, SOCKS proxy, mining, pay-per-install (PPI). The next part of the talk was dedicated to attribution. The main threat actor behind this botnet is Peter Yuryevich Levashov, a member of an underground forum where he communicated about his botnet.

Then, Rémi JULLIAN came to present “In-depth Formbook Malware Analysis”. In-depth was really the key word of the presentation! FormBook is a well-known malware that is very popular and still active! It targets 92(!) different applications (via password-stealer or form-grabber). It is also offered on demand in a MaaS model (“Malware as a Service”). The price for a full version is around $29/week. This malware is often in the top-10 of threats detected by security solutions like sandboxes. Rémi reviewed the multiple anti-analysis techniques deployed by FormBook, like string obfuscation and encryption, manually mapping NTDLL (to defeat tools like Cuckoo), checking for debuggers, checking for inline hooks, etc. The techniques of code injection and process hollowing were also explained. About the features, we have: browser hooking to access the data before it is encrypted, a key-logger, a clipboard data stealer, and password harvesting from the filesystem. Communication with the C&C was also explained. Interesting finding: FormBook uses fake C&C servers during sandbox analysis to defeat the analyst. This was a great presentation full of useful details!

The next speaker was Antoine REBSTOCK who presented “How Much Should You Pay for your own Botnet?”. This was not a technical presentation (though – with plenty of mathematical formulas) but more a legal talk. The idea presented by Antoine was interesting: let’s assume that we decide to build a botnet to DDoS a target, what would be the total price (hosting, bandwidth, etc.)? After the theory, he compared different providers: Orange, Amazon, Microsoft and Google. Even if the approach is not easy to put in the context of a real attacker, the idea was interesting. But way too many formulas for me 😉

After the welcomed coffee break, Jakub SOUČEK & Jakub TOMANEK presented “Collecting Malicious Particles from Neutrino Botnets”. The Neutrino bot is not new. It was discovered in 2014 but is still alive today, with many changes. Lots of articles have been written about this botnet but, according to the speakers, some information was missing, like how the bot behaves during an investigation or how configuration files are received. Many bots are still running in parallel and they wanted to learn more about them. Newly introduced features are: modular structure, obfuscated API calls, network data stealer, CC scraper, encryption of modules, new control flow, persistence and support for new web injects. The botnet is sold to many cybercriminals, so there are many builds. How to classify them in groups? What can be collected that is useful to classify the botnets?

  • The C&C
  • Version
  • Bot name
  • Build ID

Only the Build ID is relevant. The name, for example, is “NONE” in 95% of the cases. They found 120 different build IDs classified into 41 unique botnets, 18 really active and 3 special cases. They reviewed some botnets and named them with their own convention. Of course they found some funny stories, like a botnet that injected “Yaaaaaar” in front of all strings in the web inject module. They also found misused commands, disclosure of data, and debugging information left in the code. Conclusion: malware developers make mistakes too.

The next slot was assigned to Joie SALVIO & Floser BACURIO Jr. with “Trickbot The Trick is On You!”. They performed the same kind of presentation, this time on the banking malware Trickbot. Discovered in 2016, it also evolved with new features. They paid particular attention to the communication channels used by the malware.

Finally, the day ended with Ivan KWIATKOWSKI & Ronan MOUCHOUX who presented “Automation, structured knowledge in Tactical Threat Intelligence”. After an introduction and a definition of “intelligence” (it’s a consumer-driven activity), they explained what Tactical Threat Intelligence is and how to implement it. Just one remark about the slides: they were designed with a poorly chosen palette, making them difficult to read.

That’s all for today, be ready for my second wrap-up tomorrow!

[The post Botconf 2018 Wrap-Up Day #1 has been first published on /dev/random]


Last week, WordPress Tavern picked up my blog post about Drupal 8's upcoming Layout Builder.

While I'm grateful that WordPress Tavern covered Drupal's Layout Builder, it is not surprising that the majority of WordPress Tavern's blog post alludes to the potential challenges with accessibility. After all, Gutenberg's lack of accessibility has been a big topic of debate, and a point of frustration in the WordPress community.

I understand why organizations might be tempted to de-prioritize accessibility. Making a complex web application accessible can be a lot of work, and the pressure to ship early can be high.

In the past, I've been tempted to skip accessibility features myself. I believed that because accessibility features benefited a small group of people only, they could come in a follow-up release.

Today, I've come to believe that accessibility is not something you do for a small group of people. Accessibility is about promoting inclusion. When the product you use daily is accessible, it means that we all get to work with a greater number and a greater variety of colleagues. Accessibility benefits everyone.

As you can see in Drupal's Values and Principles, we are committed to building software that everyone can use. Accessibility should always be a priority. Making capabilities like the Layout Builder accessible is core to Drupal's DNA.

Drupal's Values and Principles translate into our development process, as what we call an accessibility gate, where we set a clearly defined "must-have bar". Prioritizing accessibility also means that we commit to trying to iteratively improve accessibility beyond that minimum over time.

Together with the accessibility maintainers, we jointly agreed that:

  1. Our first priority is WCAG 2.0 AA conformance. This means that in order to be released as a stable system, the Layout Builder must reach Level AA conformance with WCAG. Without WCAG 2.0 AA conformance, we won't release a stable version of Layout Builder.
  2. Our next priority is WCAG 2.1 AA conformance. We're thrilled at the greater inclusion provided by these new guidelines, and will strive to achieve as much of it as we can before release. Because these guidelines are still new (formally approved in June 2018), we won't hold up releasing the stable version of Layout Builder on them, but are committed to implementing them as quickly as we're able to, even if some of the items are after initial release.
  3. While WCAG AAA conformance is not something currently being pursued, there are aspects of AAA that we are discussing adopting in the future. For example, the new 2.1 AAA "Animations from Interactions", which can be framed as an achievable design constraint: anywhere an animation is used, we must ensure designs are understandable/operable for those who cannot or choose not to use animations.

Drupal's commitment to accessibility is one of the things that makes Drupal's upcoming Layout Builder special: it will not only bring tremendous and new capabilities to Drupal, it will also do so without excluding a large portion of current and potential users. We all benefit from that!

December 04, 2018

The post Deploying laravel-websockets with Nginx reverse proxy and supervisord appeared first on

There is a new PHP package available for Laravel users called laravel-websockets that allows you to quickly start a websocket server for your applications.

The added benefit is that it's fully written in PHP, which means it will run on pretty much any system that already runs your Laravel code, without additional tools. Once installed, you can start a websocket server as easily as this:

$ php artisan websockets:serve

That'll open a locally available websocket server, running on port 6001.

This is great for development, but it also performs pretty well in production. To make that more manageable, we'll run this as a supervisor job with an Nginx proxy in front of it, to handle the SSL part.

Supervisor job for laravel-websockets

The first thing we'll do is make sure that process keeps running forever. If it were to crash (out of memory, killed by someone, throwing exceptions, ...), we want it to automatically restart.

For this, we'll use supervisor, a versatile task runner that is ideally suited for this. Technically, systemd would work equally well for this purpose, as you could quickly add a unit file to run this job.
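For completeness, a sketch of such a unit file (the path, user and command name are assumptions to adapt to your own deployment):

```ini
# /etc/systemd/system/websockets.service (hypothetical path and names)
[Unit]
Description=Laravel websocket server
After=network.target

[Service]
# Adjust the path to your own application's artisan script
ExecStart=/usr/bin/php /var/www/app/artisan websockets:serve
Restart=always
User=www-data

[Install]
WantedBy=multi-user.target
```

You'd then enable it with systemctl enable --now websockets. We'll stick with supervisor below.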

First, install supervisord.

# On Debian / Ubuntu
apt install supervisor

# On Red Hat / CentOS
yum install supervisor
systemctl enable supervisord

Once installed, add a job for managing this websocket server.

$ cat /etc/supervisord.d/ohdear_websocket_server.ini
[program:websockets]
command=/usr/bin/php /var/www/vhosts/ websocket:start
autorestart=true

This example is taken from Oh Dear!, where it's running in production.

Once the config has been made, instruct supervisord to load the configuration and start the job.

$ supervisorctl update
$ supervisorctl start websockets

Now you have a running websocket server, but it will still only listen locally, which is not very useful for your public visitors that want to connect to that websocket.

Note: if you are expecting a higher number of users on this websocket server, you'll need to increase the maximum number of open files supervisord can open. See this blog post: Increase the number of open files for jobs managed by supervisord.
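The linked post boils down to one supervisord setting; a minimal sketch (the value 10240 is an arbitrary example):

```ini
; /etc/supervisord.conf -- each open websocket costs a file descriptor,
; so raise supervisord's minimum well above the default of 1024
[supervisord]
minfds=10240
```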

Add an Nginx proxy to handle the TLS

Let your websocket server run locally and add an Nginx configuration in front of it, to handle the TLS portion. Oh, and while you're at it, add that domain to Oh Dear! to monitor your certificate expiration dates. ;-)

The configuration looks like this, assuming you already have Nginx installed.

$ cat /etc/nginx/conf.d/
server {
  listen        443 ssl;
  listen        [::]:443 ssl;

  access_log    /var/log/nginx/ main;
  error_log     /var/log/nginx/ error;

  # Start the SSL configurations
  ssl                         on;
  ssl_certificate             /etc/letsencrypt/live/;
  ssl_certificate_key         /etc/letsencrypt/live/;
  ssl_session_timeout         3m;
  ssl_session_cache           shared:SSL:30m;
  ssl_protocols               TLSv1.1 TLSv1.2;

  # Diffie Hellmann performance improvements
  ssl_ecdh_curve              secp384r1;

  location / {
    proxy_pass                http://127.0.0.1:6001;
    proxy_set_header Host               $host;
    proxy_set_header X-Real-IP          $remote_addr;

    proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto  https;
    proxy_set_header X-VerifiedViaNginx yes;
    proxy_read_timeout                  60;
    proxy_connect_timeout               60;
    proxy_redirect                      off;

    # Specific for websockets: force the use of HTTP/1.1 and set the Upgrade header
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
  }
}

Everything that connects over TLS will be proxied to a local service on port 6001, in plain text. This offloads all the TLS (and certificate management) to Nginx, keeping your websocket server configuration as clean and simple as possible.

This also makes automation via Let's Encrypt a lot easier, as there are already implementations that will manage the certificate configuration in your Nginx and reload them when needed.

The post Deploying laravel-websockets with Nginx reverse proxy and supervisord appeared first on

The post Benchmarking websocket server performance with Artillery appeared first on

We recently deployed a custom websocket server for Oh Dear! and wanted to test its ability to handle requests under load. This blogpost will explain how we performed that load test using Artillery.

Installing Artillery

There's a powerful tool called Artillery that allows you to -- among other things -- stresstest a websocket server. This includes regular websocket servers as well as other implementations.

First, install it using either npm or yarn. This assumes you have nodejs already installed.

# Via npm
$ npm install -g artillery

# Via Yarn
$ yarn global add artillery

Once installed, it's time for the fun part.

Creating your scenario for the load test

There are a few ways you can load test your websockets. You can, rather naively, just launch a bunch of requests -- much like ab (Apache Bench) would -- and see how many hits/second you can get.

This is relatively easy with artillery. First, create a simple scenario you want to launch.

$ cat loadtest1.yml
config:
  target: "ws://"
  phases:
    - duration: 20  # Test for 20 seconds
      arrivalRate: 10 # Every second, add 10 users
      rampTo: 100 # Ramp it up to 100 users over the 20s period
      name: "Ramping up the load"

scenarios:
  - engine: "ws"
    flow:
      - send: 'hello'
      - think: 5

This will do a few things, as explained by the comments:

  • Run the test for 20 seconds
  • Every second, it will add 10 users until it reaches 100 users
  • Once connected, the user will send a message over the channel with the string "hello"
  • Every user will hold the connection open for 5 seconds

To start this scenario, run this artillery command.

$ artillery run loadtest1.yml
Started phase 0 (Ramping up the load), duration: 20s @ ...

However, this is a fairly naive approach, as your test is sending garbage (the string "hello") and will just test the connection limits of both your client and the server. It'll mostly stresstest the TCP stack, not so much the server itself, as it's not doing anything (besides accepting some connections).

A test like this can quickly give you a few thousand connected users, without too much hassle (assuming you've increased your max open files limits).
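To put a number on that: with a linear ramp, the average arrival rate is simply the midpoint of the start and end rates, so a single run of the scenario above creates roughly 1100 connections. Stretch the duration or the rates a little and you're quickly past a few thousand.

```python
# Arrival count for one ramp phase: the rate climbs linearly from
# `arrival_rate` to `ramp_to` users/second over `duration` seconds.
duration = 20      # duration: 20
arrival_rate = 10  # arrivalRate: 10
ramp_to = 100      # rampTo: 100

# The average rate of a linear ramp is the midpoint of start and end rates.
total_arrivals = (arrival_rate + ramp_to) / 2 * duration
print(total_arrivals)  # 1100.0 users over a single 20-second run
```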

Testing a real life sample

It would be far more useful if you could test an actual websocket implementation. One that would look like this:

  1. Connect to a websocket
  2. Make it subscribe to a channel for events
  3. Receive the events

Aka: what a real browser would do.

To test this, consider the following scenario.

$ cat loadtest2.yml
config:
  target: "wss://"
  phases:
    - duration: 60  # Test for 60 seconds
      arrivalRate: 10 # Every second, add 10 users
      rampTo: 100 # And ramp it up to 100 users in total over the 60s period
      name: "Ramping up the load"
    - duration: 120 # Then resume the load test for 120s
      arrivalRate: 100 # With those 100 users we ramped up to in the first phase
      rampTo: 100 # And keep it steady at 100 users
      name: "Pushing a constant load"

scenarios:
  - engine: "ws"
    flow:
      - send: '{"event":"pusher:subscribe","data":{"channel":"public"}}'  # Subscribe to the public channel
      - think: 15 # Every connection will remain open for 15s

This example is similar to our first one, but the big difference is in the message we send: {"event":"pusher:subscribe","data":{"channel":"public"}}. An actual JSON payload that instructs the client to listen to events sent on the channel public.

In our example, this subscribes to the counter of the number of health checks currently being run on Oh Dear!. Every second or so, we publish a new number on the public channel, so every socket listening to that channel will receive data. And more importantly: our server is forced to send that data to every connected user.

With an example like this, we're testing our full websocket stack: data needs to be sent to our websocket and it needs to relay that to every client subscribed to that channel. This is what will cause the actual load (on both client and the server) and this is what you'll want to test.
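To make that fan-out cost concrete, here is a toy, in-memory model of what a Pusher-style server has to do (hypothetical names, not the actual server code): every message published on a channel becomes one send per subscriber.

```python
import json

# Hypothetical in-memory registry mimicking a Pusher-style server:
# channel name -> list of connected clients (lists stand in for sockets).
channels = {}

def handle_message(raw, client):
    # A real server parses every frame; here we only handle the subscribe event.
    msg = json.loads(raw)
    if msg["event"] == "pusher:subscribe":
        channels.setdefault(msg["data"]["channel"], []).append(client)

def publish(channel, payload):
    # One send per subscriber: this multiplier is what the load test exercises.
    subscribers = channels.get(channel, [])
    for client in subscribers:
        client.append(payload)  # stand-in for an actual socket write
    return len(subscribers)

clients = [[] for _ in range(100)]
for c in clients:
    handle_message('{"event":"pusher:subscribe","data":{"channel":"public"}}', c)

sends = publish("public", '{"checks_running": 42}')
print(sends)  # 100 sends for a single published message
```

Double the subscribers or the publish rate and the server's outbound work doubles with it, which is exactly the breaking point the scenario hunts for.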

Now, it's a matter of slowly increasing the amount of users in the scenario and finding the breaking point. ;-)

The post Benchmarking websocket server performance with Artillery appeared first on

Dear NV-A,

If you can't keep your propaganda dogs in check, you are unfit to be a governing party. If you launched that campaign deliberately, you are racist and rude. It is aimed at people. It has nothing to do with offering solutions, and everything to do with hatred and fear.

In either case, you don't belong in the Belgian government.

So go ahead and leave.

Enough said.

Concerning the very short-notice release announcement of WordPress 5.0 with Gutenberg for Dec 6th: I’m with Yoast; he has a great “should I update” checklist and conclusion in this blog post:

  • Is now the right time to update?
  • Can your site work with Gutenberg?
  • Do you need it?

So our advice boils down to: if you can wait, wait. 

So if you have a busy end-of-year, if you’re not 100% sure your site will work with Gutenberg or if you don’t really need Gutenberg in the first place: wait (while WordPress 5.0 stabilizes with some minor releases).

Apple had a rough year; its stock price has fallen 25% since the beginning of the year. Apple also reported a weaker than expected outlook and shared that it will no longer report individual unit sales, which many consider a bearish signal within a saturated smartphone market.

It's no surprise that this has introduced some skepticism with current Apple shareholders. A friend recently asked me if she should hold or sell her Apple stock. Her financial advisor suggested she should consider selling. Knowing that Apple is the largest position in my own portfolio, she asked what I'm planning to do in light of the recent troubles.

Every time I make an investment decision, I construct a simple narrative based on my own analysis and research of the stock in question. I wrote down my narrative so I could share it with her. I decided to share it on my blog as well to give you a sense of how I develop these narratives. I've shared a few others in the past — documenting my "investment narratives" is useful as it helps me learn from my mistakes and institutes accountability.

As a brief disclaimer, this post should be considered general information, and not a formal investment recommendation. Before making any investment decisions, you should do your own proper due diligence.

Over the last five years, Apple grew earnings per share at 16% annually. This is a combination of about 10% growth in net profit, combined with almost 6% growth as the result of share buybacks.

Management has consistently used cash to buy back five to six percent of the company's outstanding shares every year. At the current valuation and with the current strength of Apple's balance sheet, buybacks are a good use of a portion of their cash. Apple will likely see similar levels of cash generation in the coming years so I expect Apple will continue to buy back five to six percent of its outstanding shares annually. By reducing the number of shares on the market, buybacks lift a company's earnings per share by the same amount.

Next, I predict that Apple will grow net profits by six to seven percent a year. Apple can achieve six to seven percent growth in profits by growing sales and improving its margins. Margins are likely to grow due to the shift in revenue from hardware to software services. This is a multi-year shift so I expect margins to slowly improve over time. I believe that six to seven percent growth in net profits is both reasonable and feasible. It's well below the average 10% growth in net profits that we've seen in recent years.

Add 5-6% growth as the result of share buybacks to 6-7% growth in profit as a result of sales growth and margin expansion, and you have a company growing earnings per share by 12% a year.

If Apple sustains that 12% percent earnings per share growth for five years, earnings per share will grow from $11.88 today to $20.94 by the end of 2023. At the current P/E ratio of 15, one share of Apple would be worth $314 by then. Add about $20 in dividends that you'd collect along the way, and you are likely looking at market beating returns.
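The compounding in that paragraph is easy to check (a quick sketch using the figures above):

```python
eps_today = 11.88   # current earnings per share
growth = 1.12       # 12% annual EPS growth
pe_ratio = 15       # current price/earnings multiple

eps_2023 = eps_today * growth ** 5   # five years of compounding
price_2023 = eps_2023 * pe_ratio

print(round(eps_2023, 2))   # 20.94
print(round(price_2023))    # 314
```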

The returns could be better as there is an opportunity for P/E expansion. I see at least two drivers for that: (a) the potential for margin improvement as a result of Apple shifting its revenue mix, and (b) Apple's growing cash position (e.g. if you subtract the cash per share from the share price, the P/E increases).

Let's assume that the P/E expands from the current 15 to 18. Now, all of a sudden, you're looking at a per share price of $397 by the end of 2023, and an average annual return of 18%. If that plays out, every dollar invested in Apple today would double in five years — and that excludes the dividend you'd collect along the way!

Needless to say, this isn't an advanced forecasting model. Regardless, my narrative shows that if we make a few very reasonable assumptions, Apple could have a great return the next five years.

While Apple's day of disruption might be behind it, it remains one of the greatest cash machines of all time. Modest growth combined with a large buyback program and a relatively low valuation, can make for a great investment.

I'm not selling my Apple stock and I'd be tempted to buy more if the share price were to drop below $155 a share.

Disclaimer: I'm long AAPL. Before making any investment decisions, you should do your own proper due diligence. Any material in this article should be considered general information, and not a formal investment recommendation.

November 30, 2018

I’m writing this quick wrap-up in Vienna, Austria where I attended my first DeepSec conference. This event was already on my schedule for a while but I never had a chance to come. This year, I submitted a training and I was accepted! Good opportunity to visit the beautiful city of Vienna! Like many security conferences, the event started with a set of trainings on Tuesday and Wednesday. My training topic was about using OSSEC for threat hunting.

On Thursday and Friday, regular talks were scheduled, split across three tracks: two tracks for regular presentations and a third one called “Roots”, more dedicated to academic research and papers. There was a good balance between offensive and defensive presentations.

The keynote speaker was Peter Zinn and he presented a very entertaining keynote called “We’re all gonna die“. Basically, the main idea was to review how our world is changing in many ways and which new threats are coming: climate change, magnetic fields, Donald Trump, etc. But also from an information technology point of view. Peter revealed that we have to face 4 types of “cyber-zombies”:

  • People
  • Inequality
  • Operational technology and IOT
  • Artificial Intelligence (here is a funny video that demonstrates how AI may fail)

Later, we will face the “IoP” or “Internet of People”. IT will be present inside our bodies (RFID implants, sensors, contact lenses, …) and we’ll have to deal with it. Nice keynote!

Here is a quick recap of the talks that I attended. Fernando Arnaboldi presented “Uncovering Vulnerabilities in Secure Coding Guidelines“. The idea behind this talk was to demonstrate that, even if you follow all the well-known development guidelines (like OWASP, CWE or NIST), you can fail. He gave several snippets of code as examples. Personally, I liked the mention of the new KPI: “WTFs/minute”.

Then, Werner Schober presented the “Internet of Dildos“. Always entertaining to have a talk focusing on “exotic” IoT devices. He explained the different vulnerabilities that he found in a sex toy and the associated mobile app & website in Germany. Basically, he explained how it was possible to access all the (hot) pictures uploaded by the users, how to enable (make vibrate) any connected device in the world or, worse, access the personal data of the consumers…

Then, Eric Leblond talked about the new features that are constantly added to the Suricata IDS, with a focus on eBPF filters. I had already seen Eric’s presentation a few months ago but he added more stuff, like a crazy idea to use BCC (the “BPF Compiler Collection”) to generate BPF filters from C code directly present in a Python script!

Joe Slowik came to speak about ICS attacks. More and more ICS attacks are reported in the news because there is some kind of aura of sophistication around them. Joe started with a recap of the major ICS attacks that industries faced in recent years. But many attacks are successful because the IT components used to control the ICS components are vulnerable, and the same tools are abused to compromise them (like Mimikatz, PsExec, etc.). Note that the talk was a mix of offensive & defensive.

Benjamin Ridgway (from the Microsoft Incident Response Center) came to speak about incident handling. The abstract was not clear and a lot of people expected a talk explaining how to select and use the right tools to perform incident management, but it was completely different and not technical. Benjamin explained how to implement your IH process with a focus on the following points:

  • Human psychological response to stressful and/or dangerous situations
  • Strategies for effectively managing human factors during a crisis
  • Policies and structures that set up incident response teams for success
  • Tools for building a healthy and happy incident response team

It was an excellent presentation, one of my favourites!

Then, Dr. Silke Holtmanns from Nokia Bell Labs came to speak about new attack vectors for mobile core networks. The problem for people who are not in the field of mobile networks is the complexity of the terms and abbreviations used. It’s crazy! But Silke explained the basics very well: how roaming works, how billing profiles are managed. Of course, the idea was then to explain some attacks. I liked the one focusing on how to change a billing plan when you’re abroad to reduce roaming costs. Very didactic!

The next speaker was Mark Baenziger, who works in incident handling. He explained the challenges that incident handlers might face when handling personal data (and so, how to protect people’s privacy), and how, in some cases, security teams failed to achieve this properly.

The last slot was assigned to Paula de la Hoz Garrido (she’s studying in Spain). She explained her project of bundling network monitoring tools on a Raspberry Pi. Interesting, but the practical part was missing (how to build the project on the Pi). The talk was more a review of the tools that are used to capture/process packets.

The second day started with a nice talk called “Everything is connected: how to hack Bank Account using Instagram“. The idea was to abuse phone services provided by some banks to allow their customers to perform a lot of basic operations (through IVR). Aleksandr Kolchanov explained the attacks he performed against a Ukrainian bank. Some services are available based only on the caller ID, and this information can be easily spoofed using online services. Funny but crazy!

Then, I switched to the “Roots” room to attend a talk about using data over sound, more precisely ultrasonic sounds. Matthias Zeppelzauer explained the research he made on this technology, which is used more than we might expect! It’s possible to collect interesting information (ex: how people watch television programs) or to deliver ads to people entering a shop. He also presented the project “SoniControl”, which is a kind of ultrasonic firewall to protect users’ privacy.

My next choice was “RFID Chip Inside the Body: Reflecting the Current State of Usage, Triggers, and Ethical Issues”, presented by Ulrike Hugl. RFID implants in human bodies are not new, but what’s the status today? Are people ready to have this kind of hardware under their skin? There is no massive deployment yet; some companies try to convince their users to adopt the technology, but it usually remains limited to tests or fun projects.

Finally, my last choice was “Global Deep Scans – Measuring Vulnerability Levels across Organizations, Industries, and Countries” by Luca Melette & Fabian Bräunlein. I was curious when I read the abstract. The idea behind this research was to scan the Internet and to classify the scanned IP addresses by location and business. Then, they used an algorithm to compute a “hackability” level. Indeed, from a defender’s perspective, it’s interesting to learn how safe your competitors are. From an attacker’s point of view, it’s nice to know which targets are the juiciest. The result of their research is available here.

This was a very quick wrap-up of my first DeepSec (and I hope not the last one!). The conference size is nice, not too many attendees (my rough estimation is ~200 people), and it was properly managed by the crew. Thanks to them!

[The post DeepSec 2018 Wrap-Up has been first published on /dev/random]

November 29, 2018

The post Increase the number of open files for jobs managed by supervisord appeared first on

In Linux, a non-privileged user by default can only open 1024 files on a machine. This includes handles to log files, but also local sockets, TCP ports, ... everything's a file and the usage is limited, as a system protection.
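The ceiling is easy to observe for yourself: lower the soft limit for the current process and keep opening files until the kernel refuses. A minimal Python sketch (the value 64 is arbitrary, just low enough to hit quickly):

```python
import errno
import resource
import tempfile

# Lower the soft limit on open file descriptors for this process only.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))

handles, err = [], None
try:
    while True:  # open files until the kernel says no
        handles.append(tempfile.TemporaryFile())
except OSError as exc:
    err = exc.errno  # EMFILE: "Too many open files"
finally:
    for h in handles:
        h.close()
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))

print(err == errno.EMFILE)  # prints: True
```

Note that it is the per-process limit that stops us, not disk space: the soft limit is enforced immediately, while the hard limit is the ceiling a non-privileged process may raise its soft limit to.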

Normally, we can increase the number of files a particular user can open by raising the system limits. This is configured in /etc/security/limits.d/.

For instance, this allows the user john to open up to 10,000 files.

$ cat /etc/security/limits.d/john.conf
john		soft		nofile		10000
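After john logs in again, the new limit applies to his processes. From inside a process you can query the effective values with Python's resource module (the values in the comment are just what you'd expect for john, not guaranteed output):

```python
import resource

# RLIMIT_NOFILE is the per-process limit on open file descriptors:
# the soft limit is what's enforced, the hard limit is the maximum
# the process may raise its soft limit to.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")  # e.g. soft=10000 for john
```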

You would assume that once configured, this would apply to all the commands that run as the user john. Alas, that's not the case if you use supervisord to run a process.

Take the following supervisor job for instance:

$ cat /etc/supervisord.d/john.ini
[program:john]
user=john
command=/usr/bin/php /path/to/script.php

This would add a job to supervisor to always keep the task /usr/bin/php /path/to/script.php running as the user john, and if it were to crash or stop, it would automatically restart it.

However, if we were to inspect the actual limits being enforced on that process, we'd find the following.

$ cat /proc/19153/limits
Limit                     Soft Limit           Hard Limit           Units
Max open files            1024                 4096                 files

The process has a soft-limit of 1024 files and a hard limit of 4096, despite an increase in the amount of files it can open in our limits.d directory.

The reason is that supervisord has a setting of its own, minfds, that it uses to set the amount of files it can open. And that setting gets inherited by all the children that supervisord spawns, so it overrides any setting you may set in limits.d.
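That inheritance is ordinary Unix behaviour: resource limits are passed from parent to child across fork/exec, so a job gets whatever supervisord itself was running with. A small Python sketch of the mechanism (256 is an arbitrary value, assumed to be below the hard limit):

```python
import resource
import subprocess
import sys

# Lower our own soft limit, then spawn a child and ask it what it
# sees: the child inherits the parent's limit, exactly like
# supervisord's jobs inherit the limit derived from minfds.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
new_soft = 256  # assumes the hard limit is at least 256
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))

child = subprocess.run(
    [sys.executable, "-c",
     "import resource; print(resource.getrlimit(resource.RLIMIT_NOFILE)[0])"],
    capture_output=True, text=True,
)
print(child.stdout.strip())  # prints: 256
```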

Its default value is set to 1024 and can be increased to anything you'd like (or need).

$ cat /etc/supervisord.conf
[supervisord]
minfds=10000

You'll find this file at /etc/supervisor/supervisord.conf on Debian or Ubuntu systems. Either add or modify the minfds parameter, restart supervisord (which will restart all your spawned jobs, too) and you'll notice the limits have actually been increased.


November 28, 2018

I published the following diary on “More obfuscated shell scripts: Fake MacOS Flash update”:

Yesterday, I wrote a diary about a nice obfuscated shell script. Today, I found another example of a malicious shell script embedded in an Apple .dmg file (an Apple Disk Image). The file was delivered through a fake Flash update webpage… [Read more]

[The post [SANS ISC] More obfuscated shell scripts: Fake MacOS Flash update has been first published on /dev/random]

November 27, 2018

I published the following diary on “Obfuscated bash script targeting QNap boxes“:

One of our readers, Nathaniel Vos, shared an interesting shell script with us (thanks to him!). He found it on an embedded Linux device, more precisely a QNap NAS running QTS 4.3. After some quick investigation, it looked like the script was not brand new: we found references to it posted as early as September 2018. But such shell scripts are less common: usually they are not obfuscated and just perform basic actions like downloading and installing some binaries. So, I took the time to look at it… [Read more]

[The post [SANS ISC] Obfuscated bash script targeting QNap boxes has been first published on /dev/random]

The post My Laracon EU talk: Minimum Viable Linux appeared first on

The recorded video of my presentation I gave at Laracon EU last summer.

The title is, in hindsight, a badly chosen one. I tried to make it a pun on "Minimum Viable Product" (you know, the startup-y stuff), but it ended up giving a false expectation to the audience, who may have thought the talk was about minimal Linux distros, which it wasn't.

I plan to give this talk a few more times, but I'll change a few things:

  • Skip the Linux landscape intro (does anyone still care?)
  • Change the title to something more like "Troubleshooting PHP as a Linux sysadmin"
  • Perhaps skip the SSH stuff, just keep the tunnel information (to help in debugging)
  • Focus more on live-troubleshooting with strace and introduce sysdig

Lots of lessons learned for me in this talk; it was by far the talk that received the most widely ranging feedback, from 0/10 to 10/10. Most of it -- I think -- was related to wrong expectations from the start (hence, changing the title).

The goal is -- and will continue to be -- to show debugging techniques that are more sysadmin-focused, to show a different perspective on debugging applications.
