Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

February 20, 2018

Update a docker container to the latest version

Here's a simple one, but if you're new to Docker it's something you might have to look up. On this server, I run Nginx as a Docker container using the official nginx:alpine image.

I was running a fairly outdated version:

$ docker images | grep nginx
nginx    <none>              5a35015d93e9        10 months ago       15.5MB
nginx    latest              46102226f2fd        10 months ago       109MB
nginx    1.11-alpine         935bd7bf8ea6        18 months ago       54.8MB

In order to make sure I had the latest version, I ran pull:

$ docker pull nginx:alpine
alpine: Pulling from library/nginx
550fe1bea624: Pull complete
d421ba34525b: Pull complete
fdcbcb327323: Pull complete
bfbcec2fc4d5: Pull complete
Digest: sha256:c8ff0187cc75e1f5002c7ca9841cb191d33c4080f38140b9d6f07902ababbe66
Status: Downloaded newer image for nginx:alpine

Now, my local repository contains an up-to-date Nginx version:

$ docker images | grep nginx
nginx    alpine              bb00c21b4edf        5 weeks ago         16.8MB

To use it, you have to launch a new container based on that particular image. The currently running container will still be using the original (old) image.

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED
4d9de6c0fba1        5a35015d93e9        "nginx -g 'daemon ..."   9 months ago

In my case, I re-created my HTTP/2 Nginx container like this:

$ docker stop nginx-container
$ docker rm nginx-container
$ docker run --name nginx-container \
    --net="host" \
    -v /etc/nginx/:/etc/nginx/ \
    -v /etc/ssl/certs/:/etc/ssl/certs/ \
    -v /etc/letsencrypt/:/etc/letsencrypt/ \
    -v /var/log/nginx/:/var/log/nginx/ \
    --restart=always \
    -d nginx:alpine

And the Nginx/container upgrade was completed.


February 19, 2018

One thing to like about Enterprise distributions is the stable api/abi during the distro lifetime. If you have an application that works, you know that it will continue to work.

But in parallel, one can't always choose which application to run on that distro with only the built-in components. I was personally faced with this recently, when I needed to migrate our Bug Tracker to a new version. Let's use that example to see how we can use "newer" php pkgs distributed through the distro itself.

The application that we use is MantisBT, and by reading their requirements list it was clear that a CentOS 7 default setup would not work: as a reminder, the default php pkg for .el7 is 5.4.16, so not supported anymore by "modern" applications.

That's where SCLs come to the rescue! With such "collections", one can install those without overwriting the base pkgs, and can even run multiple parallel instances of such a "stack", based on configuration.

Let's just start simple with our MantisBT example: forget about the traditional php-* packages (including "php", which provides mod_php for Apache). It's up to you to leave those installed if you need them, but in my case I'll default to php 7.1.x for the whole vhost. Also worth knowing: I wanted to integrate php with the default httpd from the distro (to ease the configuration management side, and to keep finding the .conf files at $usual_place).

The good news is that those collections are built, tested and released through our CentOS Infra, so you don't have to care about anything else! (kudos to the SCLo SIG!). You can see the available collections here.

So, how do we proceed? Easy! First let's add the repository:

yum install centos-release-scl

And from that point, you can just install what you need. For our case, MantisBT needs php, php-xml, php-mbstring, php-gd (for the captcha, if you want to use it), and a DB driver, so php-mysql (if you target MySQL of course). You just have to "translate" that into SCL pkgs: in our case, php becomes rh-php71 (meta pkg), php-xml becomes rh-php71-php-xml and so on (one remark though: php-mysql became rh-php71-php-mysqlnd!)

So here we go :

yum install httpd rh-php71 rh-php71-php-xml rh-php71-php-mbstring rh-php71-php-gd rh-php71-php-soap rh-php71-php-mysqlnd rh-php71-php-fpm

As said earlier, we'll target the default httpd pkg from the distro, so we just have to "link" php and httpd. Remember that mod_php isn't available anymore; instead we'll use the php-fpm pkg (see rh-php71-php-fpm), so all requests are sent to that FastCGI Process Manager daemon.

Let's do this :

systemctl enable httpd --now
systemctl enable rh-php71-php-fpm --now
cat > /etc/httpd/conf.d/php-fpm.conf << EOF
AddType text/html .php
DirectoryIndex index.php
<FilesMatch \.php$>
      SetHandler "proxy:fcgi://127.0.0.1:9000"
</FilesMatch>
EOF
systemctl restart httpd

And from this point it's all basic: the application is now using the php 7.1.x stack. That's a basic "howto", but you can also run multiple versions in parallel, and also tune php-fpm itself. If you're interested, I'll let you read Remi Collet's blog post about this (thank you again Remi!)

Hope this helps; strangely, I couldn't easily find a simple howto for this, as "scl enable rh-php71 bash" wouldn't help a lot with httpd (which is probably the most used scenario).

I was working on my POSSE plan when Vanessa called and asked if I wanted to meet for a coffee. Of course, I said yes. In the car ride over, I was thinking about how I made my first website over twenty years ago. HTML table layouts were still cool and it wasn't clear if CSS was going to be widely adopted. I decided to learn CSS anyway. More than twenty years later, the workflows, the automated toolchains, and the development methods have become increasingly powerful, but also a lot more complex. Today, you simply npm your webpack via grunt with vue babel or bower to react asdfjkl;lkdhgxdlciuhw. Everything is different now, except that I'm still at my desk learning CSS.

Show IDN punycode in Firefox to avoid phishing URLs

Pop quiz: can you tell the difference between these 2 domains?

Both host a version of the popular crypto exchange Binance.

The second image is the correct one; the first one is a phishing link with the letter 'n' replaced by 'n with a dot below' (U+1E47). It's not a piece of dirt on your screen, it's an attempt to trick you into believing it's the official site.
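
You can reproduce the trick yourself to see how such a lookalike maps to a plain-ASCII ("punycode") hostname. Here is a small illustrative Python sketch; the domain below is a made-up example for demonstration, not the actual phishing host:

```python
# Build a lookalike of "binance.com" with an 'n' swapped for U+1E47
# (LATIN SMALL LETTER N WITH DOT BELOW), then show its ASCII form.
lookalike = "bi\u1e47ance.com"

# The stdlib "idna" codec converts Unicode labels to their xn-- punycode form.
ascii_form = lookalike.encode("idna").decode("ascii")

print(lookalike)   # looks almost identical to binance.com
print(ascii_form)  # the xn-- form that the punycode option reveals
```

That xn-- form is what Firefox displays once the option is enabled, which makes the spoof obvious.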

Firefox has a very interesting option called network.IDN_show_punycode. You can enable it in about:config.

Once enabled, it'll make that phishing domain look like this:

Doesn't look that legit now anymore, does it?

I wish Chrome offered a similar option; it could prevent quite a few phishing attempts.



"Get your HTTPS on" because Chrome will mark all HTTP sites as "not secure" starting in July 2018. Chrome currently displays a neutral icon for sites that aren't using HTTPS, but starting with Chrome 68, the browser will warn users in the address bar.

Fortunately, HTTPS has become easier to implement through services like Let's Encrypt, which provides free certificates and aims to eliminate the complexity of setting up and maintaining HTTPS encryption.

February 18, 2018

TL;DR Autoptimize 2.4 will be a major change. Tomaš Trkulja (aka zytzagoo) has cleaned up and modernized the code significantly, making it easier to read and maintain, switched to the latest and greatest minifiers and added automated testing. And as if that isn’t enough, we’re also adding new optimization options! The downside: we will be dropping support for PHP < 5.3 and for the “legacy minifiers”. AO 2.4 will first be made available as a separate “Autoptimize Beta” plugin via GitHub and will see more releases/iterations with new options/optimizations there before being promoted to the official “Autoptimize”.

Back in March 2015 zytzagoo forked Autoptimize, rewriting the CDN-replacement logic (and a lot of autoptimizeStyles::minify really) and started adding automated testing. I kept an eye on the fork and later that year I contacted Tomas via Github to see how we could collaborate. We have been in touch ever since; some of his improvements have already been integrated and he is my go-to man to discuss coding best practices, bugfixes and security.

Fast-forward to the near future: Autoptimize 2.4 will be based on Tomaš’ fork and will include the following major changes:

  • New: option to only minify JS/CSS without combining them
  • New: excluded JS- or CSS-files will be automatically minified
  • Improvement: switching to the current version of JSMin and, more importantly, of the YUI CSS compressor PHP port, which itself has some major performance improvements
  • Improvement: all create_function() instances have been replaced by anonymous functions (PHP 7.2 started issuing warnings about create_function being deprecated)
  • Improvement: all code in autoptimize/classlesses/* (my stuff) has been rewritten into OOP and is now in autoptimize/classes/*
  • Improvement: use of autoload instead of manual conditional includes
  • Improvement: a nice amount of test cases (although no 100% coverage yet), allowing for Travis continuous integration-tests being done
  • Dropping support for PHP below 5.3 (you really should be on PHP 7.x, it is way faster)
  • Dropping support for the legacy minifiers

These improvements will be released in a separate “Autoptimize Beta” plugin soon (albeit not in the official plugin directory, as “beta” plugins are not allowed there). You can already download it from GitHub. We will start adding additional optimization options there, releasing at a higher pace. The goal is to create a healthy Beta-user pool, allowing us to move code from AO Beta to AO proper with more confidence. So what new optimization options would you like to see added to Autoptimize 2.4 and beyond? :-)

[corrected 19/02; does not allow beta-plugins]

February 17, 2018

I published the following diary on “Malware Delivered via Windows Installer Files”:

For some days, I collected a few samples of malicious MSI files. MSI files are Windows installer files that users can execute to install software on a Microsoft Windows system. Of course, you can replace “software” with “malware”. MSI files look less suspicious and they could bypass simple filters based on file extensions like “(com|exe|dll|js|vbs|…)”. They also look less dangerous because they are Composite Document Files… [Read more]

[The post [SANS ISC] Malware Delivered via Windows Installer Files has been first published on /dev/random]

February 16, 2018

Update 20180216: remark about non-Ubuntu extensions.

I like what the Ubuntu people did when adopting Gnome as the new desktop after the dismissal of Unity. When the change was announced some months ago, I decided to move to Gnome and see if I liked it. I did.

It’s a good idea to benefit from the small changes Ubuntu made to Gnome 3. Forking dash-to-dock was a great idea, so untested (e.g. upstream) updates don’t break the desktop. I won’t discuss settings you can change through the “Settings” application (Ubuntu Dock settings) or through “Tweaks”:

$ sudo apt-get install gnome-tweak-tool

It’s a good idea, though, to remove third party extensions so you are sure you’re using the ones provided and adapted by Ubuntu. You can always add new extensions later (the most important ones are even packaged).
$ rm -rf ~/.local/share/gnome-shell/extensions/*

Working with Gnome 3, and to a lesser extent with macOS, taught me that I prefer bars and docks to autohide. I never did in the past, but I feel that Gnome (and macOS) got this right. I certainly don’t like the full-height dock: make it as small as needed. You can use the graphical “dconf Editor” tool to make the changes, but I prefer the safer command line (you won’t make a change by accident).

To prevent the Ubuntu Dock from taking all the vertical space (i.e., most of it is just an empty bar):

$ dconf write /org/gnome/shell/extensions/dash-to-dock/extend-height false

A neat Dock trick: when hovering over an icon on the dock, cycle through the application's windows by scrolling (or using two fingers). Way faster than click + select:

$ dconf write /org/gnome/shell/extensions/dash-to-dock/scroll-action "'cycle-windows'"

I set the dock to autohide in the regular “Settings” application. An extension is needed to do the same for the Top Bar (you need to log out, then enable it through the “Tweaks” application):

$ sudo apt-get install gnome-shell-extension-autohidetopbar

Update: I don’t install extensions any more besides the ones forked by Ubuntu. In my experience, they make the desktop unstable under Wayland. That said, I haven’t seen crashes related to autohidetopbar.

Oh, just to be safe (e.g., in case you broke something), you can reset all the gnome settings with:

$ dconf reset -f /

Have a look at the comments for some extra settings (that I personally do not use, but many do).

Some options that I don’t use but that people have asked me about (here and elsewhere):

Especially with the scroll setting above, you may want to only switch between windows of the same application in the active workspace. You can isolate workspaces with:

$ dconf write /org/gnome/shell/extensions/dash-to-dock/isolate-workspaces true

Hide the dock all the time, instead of only when needed. You can do this by disabling “intellihide”:

$ dconf write /org/gnome/shell/extensions/dash-to-dock/intellihide false

My website plan

In an effort to reclaim my blog as my thought space and take back control over my data, I want to share how I plan to evolve my website. Given the incredible feedback on my previous blog posts, I want to continue the conversation and ask for feedback.

First, I need to find a way to combine longer blog posts and status updates on one site:

  1. Update my site navigation menu to include sections for "Blog" and "Notes". The "Notes" section would resemble a Twitter or Facebook livestream that catalogs short status updates, replies, interesting links, photos and more. Instead of posting these on third-party social media sites, I want to post them on my site first (POSSE). The "Blog" section would continue to feature longer, more in-depth blog posts. The front page of my website will combine both blog posts and notes in one stream.
  2. Add support for Webmention, a web standard for tracking comments, likes, reposts and other rich interactions across the web. This way, when users retweet a post on Twitter or cite a blog post, mentions are tracked on my own website.
  3. Automatically syndicate to 3rd party services, such as syndicating photo posts to Facebook and Instagram or syndicating quick Drupal updates to Twitter. To start, I can do this manually, but it would be nice to automate this process over time.
  4. Streamline the ability to post updates from my phone. Sharing photos or updates in real-time only becomes a habit if you can publish something in 30 seconds or less. It's why I use Facebook and Twitter often. I'd like to explore building a simple iOS application to remove any friction from posting updates on the go.
  5. Streamline the ability to share other people's content. I'd like to create a browser extension to share interesting links along with some commentary. I'm a small investor in Buffer, a social media management platform, and I use their tool often. Buffer makes it incredibly easy to share interesting articles on social media, without having to actually open any social media sites. I'd like to be able to share articles on my blog that way.
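
To make the Webmention idea in step 2 concrete, here is a rough, illustrative Python sketch of the first half of the flow, endpoint discovery: finding the rel="webmention" link on a target page. It parses a hardcoded HTML snippet (the URLs are placeholders); a real implementation would fetch the live page, also check the HTTP Link header, and resolve relative URLs, per the W3C Webmention discovery algorithm:

```python
from html.parser import HTMLParser

# Find the first <link> or <a> advertising rel="webmention"
# (simplified: no Link headers, no relative-URL resolution).
class WebmentionDiscovery(HTMLParser):
    def __init__(self):
        super().__init__()
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        if self.endpoint is not None or tag not in ("link", "a"):
            return
        attrs = dict(attrs)
        rels = (attrs.get("rel") or "").split()
        if "webmention" in rels:
            self.endpoint = attrs.get("href")

html = '''<html><head>
<link rel="webmention" href="https://example.com/webmention-endpoint">
</head><body>...</body></html>'''

parser = WebmentionDiscovery()
parser.feed(html)
print(parser.endpoint)
```

Once the endpoint is known, sending the mention is just a form-encoded POST with `source` and `target` parameters.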

Second, as I begin to introduce a larger variety of content to my site, I'd like to find a way for readers to filter content:

  1. Expand the site navigation so readers can filter by topic. If you want to read about Drupal, click "Drupal". If you just want to see some of my photos, click "Photos".
  2. Allow people to subscribe by interests. Drupal 8 makes it easy to offer an RSS feed by topic. However, it doesn't look nearly as easy to allow email subscribers to receive updates by interest. Mailchimp's RSS-to-email feature, my current mailing list solution, doesn't seem to support this and neither do the obvious alternatives.

Implementing this plan is going to take me some time, especially because it's hard to prioritize this over other things. Some of the steps I've outlined are easy to implement thanks to the fact that I use Drupal. For example, creating new content types for the "Notes" section, adding new RSS feeds and integrating "Blogs" and "Notes" into one stream on my homepage are all easy – I should be able to get those done in my next free evening. Other steps, like building an iPhone application, building a browser extension, or figuring out how to filter email subscriptions by topics are going to take more time. Setting up my POSSE system is a nice personal challenge for 2018. I'll keep you posted on my progress – much of that might happen via short status updates, rather than on the main blog. ;)

February 15, 2018

I just published a quick update of my imap2thehive tool. Files attached to an email can now be processed and uploaded as an observable attached to a case. It is possible to specify which MIME types to process via the configuration file. The example below will process PDF & EML files:

files: application/pdf,message/rfc822

The script is available here.

[The post Imap2TheHive: Support of Attachments has been first published on /dev/random]

This Thursday, March 15, 2018 at 7 p.m., the 67th Mons session of the Jeudis du Libre of Belgium will take place.

The topic of this session: ReactJS – An introduction to the library and the prototyping of a "clicker" game

Theme: Internet|Graphics|sysadmin|community

Audience: General public|sysadmins|companies|students|…

Speaker: Michaël Hoste (80LIMIT)

Venue: Technical campus (ISIMs) of the Haute Ecole en Hainaut, Avenue V. Maistriau, 8a, Salle Académique, 2nd building (see this map on the ISIMs website, and here on the Openstreetmap map).

Participation is free and only requires your registration, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also enjoy the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to consult the agenda and to subscribe to the mailing list in order to receive all the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized in premises of, and in collaboration with, Hautes Écoles and university faculties in Mons involved in computer science education (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: JavaScript development is a real jumble of technical solutions. Not a month goes by without some new, supposedly revolutionary framework appearing.

When ReactJS was released in 2013, backed by a Facebook that was not yet accustomed to releasing open-source solutions, few expected a real challenger to emerge. Its peculiarity of mixing HTML and JavaScript code in a single file even seemed to go against common sense. Yet, 4 years later, this solution remains ideal for delivering ever more fluid and complex web experiences. ReactJS and its underlying mechanisms have brought a breath of fresh air to the front-end web world.

During this session, we will go over the history of ReactJS and explain why this library, although minimalist, tackles the problems of the web in a sensible way.

Once the logical foundations have been covered, we will get our hands on the keyboard to develop a concrete example in the form of a "clicker" game (Clicker Heroes, Cookie Clicker, etc.).

Short bio: Michaël Hoste holds a Licentiate degree in Computer Science from the University of Mons. In 2012 he founded 80LIMIT, a web and mobile application development company focused on the Ruby and JavaScript languages. He likes to split his time between client projects and internal Software-as-a-Service projects. He also chairs the Creative Monkeys non-profit, which allows several companies to thrive in a shared open space in the center of Mons.

February 14, 2018

My blog as my thought space

Last week, I shared my frustration with using social media websites like Facebook or Twitter as my primary platform for sharing photos and status updates. As an advocate of the open web, this has bothered me for some time so I made a commitment to prioritize publishing photos, updates and more to my own site.

I'm excited to share my plan for how I'd like to accomplish this, but before I do, I'd like to share two additional challenges I face on my blog. These struggles factor into some of the changes I'm considering implementing, so I feel compelled to share them with you.

First, I've struggled to cover a wide variety of topics lately. I've been primarily writing about Drupal, Acquia and the Open Web. However, I'm also interested in sharing insights on startups, investing, travel, photography and life outside of work. I often feel inspired to write about these topics, but over the years I've grown reluctant to expand outside of professional interests. My blog is primarily read by technology professionals — from Drupal users and developers, to industry analysts and technology leaders — and in my mind, they do not read my blog to learn about a wider range of topics. I'm conflicted because I would like my blog to reflect both my personal and professional interests.

Secondly, I've been hesitant to share short updates, such as a two sentence announcement about a new Drupal feature or an Acquia milestone. I used to publish these kinds of short updates quite frequently. It's not that I don't want to share them anymore, it's that I struggle to post them. Every time I publish a new post, it goes out to more than 5,000 people that subscribe to my blog by email. I've been reluctant to share short status updates because I don't want to flood people's inbox.

Throughout the years, I worked around these two struggles by relying on social media; while I used my blog for in-depth blog posts specific to my professional life, I used social media for short updates, sharing photos and starting conversations about a wider variety of topics.

But I never loved this division.

I've always written for myself, first. Writing pushes me to think, and it is the process I rely on to flesh out ideas. This blog is my space to think out loud, and to start conversations with people considering the same problems, opportunities or ideas. In the early days of my blog, I never considered restricting my blog to certain topics or making it fit specific editorial standards.

Om Malik published a blog post last week that echoes my frustration. For Malik, blogs are thought spaces: a place for writers to share original opinions that reflect "how they view the world and how they are thinking". As my blog has grown, it has evolved, and along the way it has become less of a public thought space.

My commitment to implementing a POSSE approach on my site has brought these struggles to the forefront. I'm glad it did because it requires me to rethink my approach and to return to my blogging roots. After some consideration, here is what I want to do:

  1. Take back control of more of my data; I want to share more of my photos and social media data on my own site.
  2. Find a way to combine longer in-depth blog posts and shorter status updates.
  3. Enable readers and subscribers to filter content based on their own interests so that I can cover a larger variety of topics.

In my next blog post, I plan to outline more details of how I'd like to approach this. Stay tuned!

February 10, 2018

Today I want to draw attention to a Belgian law on selling at a loss. Our country prohibits, by law, every merchant from selling a good at a loss. That is the rule, in our Belgium.

That rule has (rightly) exceptions. By definition, exceptions are not the rule: selling at a loss is only permitted in Belgium by way of exception:

  • on the occasion of end-of-season sales or clearance sales;
  • in order to dispose of goods that are susceptible to rapid spoilage, when their preservation can no longer be assured;
  • as a result of external circumstances;
  • for goods that are technically obsolete or damaged;
  • out of competitive necessity.

I suspect that our law exists to combat unfair competition. A merchant thus cannot sell a particular product (e.g. a game console) at a loss in order to gain market dominance for another product in its range (e.g. games), for instance with the aim of keeping competitors out of the market.

In my view it therefore follows that, if a game console manufacturer were to sell a console at a loss, this would be illegal in Belgium.

Let us assume that game console manufacturers that are active in (sales in) Belgium follow Belgian law. It then follows that they do not sell their game consoles at a loss. So they make a profit. If they did not, they would have to meet the exceptional conditions in the (previously mentioned) Belgian law that allow them to make a loss. In all other cases they would be acting unlawfully. That is Belgian law.

This means that the purchase of such a game console, as a Belgian consumer, implies that the manufacturer and seller have made a certain profit on my purchase. So there is no question of a loss. Unless the manufacturer or seller in Belgium is involved in unlawful dealings.

Let us now assume that, after purchase, we want to run different software on such a console. Then the manufacturer/seller cannot claim that its profit is made on things that would be sold afterwards (by means of, e.g., original software).

In other words, their profit has already been made. On the game console itself. If not, the manufacturer or seller would be acting unlawfully (in Belgium). We assume that this is not the case, because otherwise they would not have been allowed to sell the good. The good was sold. In accordance with Belgian law (right?).

If not, then the manufacturer and/or seller is responsible. In no case the consumer.

February 09, 2018

A quick blog post about a module that I wrote to interconnect the malware analysis framework Viper and the malware analysis platform A1000 from ReversingLabs.

The module can perform two actions at the moment: submit a new sample for analysis and retrieve the analysis results (categorization):

viper sample.exe > a1000 -h
usage: a1000 [-h] [-s] [-c]

Submit files and retrieve reports from a ReversingLab A1000

optional arguments:
-h, --help show this help message and exit
-s, --submit Submit file to A1000
-c, --classification Get classification of current file from A1000
viper sample.exe > a1000 -s
[*] Successfully submitted file to A1000, task ID: 393846

viper sample.exe > a1000 -c
[*] Classification
- Threat status : malicious
- Threat name : Win32.Trojan.Fareit dw eldorado
- Trust factor : 5
- Threat level : 2
- First seen : 2018-02-09T13:03:26Z
- Last seen : 2018-02-09T13:07:00Z

The module is available on my GitHub repository.

[The post Viper and ReversingLabs A1000 Integration has been first published on /dev/random]

Dear Kettle friends,

12 years ago I joined a wonderful team of people at Pentaho who thought they could make a real change in the world of business analytics. At that point I had recently open sourced my own data integration tool (then still called ‘ETL’), named Kettle, and so I joined in the role of Chief Architect of Data Integration. The title sounded great and the job included everything from writing articles (and a book), massive amounts of coding, testing, software releases, giving support, doing training, workshops, … In other words, life was simply doing everything I possibly and impossibly could to make our software succeed when deployed by our users. With Kettle now being one of the most popular data integration tools on the planet, I think it’s safe to say that this goal has been reached and that it’s time for me to move on.

I don’t just want to announce my exit from Pentaho/Hitachi Vantara. I would also like to thank all the people involved in making our success happen. First and foremost I want to express my gratitude to the founders (Richard, Doug, James, Marc, …) for even including a crazy Belgian like myself on the team but I also want to extend my warmest thanks to everyone who I got to become friends with at Pentaho for the always positive and constructive attitude. Without exaggeration I can say it’s been a lot of fun.

I would also explicitly like to thank the whole community of users of Kettle (now called Pentaho Data Integration). Without your invaluable support in the form of new plugins, bug reports, documentation, forum posts, talks, … we could never have pulled off what we did in the past 12 years! I hope we will continue to meet at one of the many popular community events.

Finally I want to thank everyone at Hitachi and Hitachi Vantara for being such a positive and welcoming group of people. I know that Kettle is used all over Hitachi and I’m quite confident this piece of software will not let you down any time soon.

Now I’m going to go skiing for a week and when I get back it’s time to hunt for a new job. I can’t wait to see what impossible problems need solving out there…



Chrome will mark all HTTP sites as “not secure”

If you hadn't already, it's time to make "HTTPS by default" your new motto.

[...] within the last year, we’ve also helped users understand that HTTP sites are not secure by gradually marking a larger subset of HTTP pages as “not secure”. Beginning in July 2018 with the release of Chrome 68, Chrome will mark all HTTP sites as “not secure”.

Source: Chromium Blog: A secure web is here to stay

Visually, every site on HTTP will be marked as "not secure" next to the address bar.

This essentially means:

  • Your site will need HTTPS (x509 certificates needed)
  • You'll want to make sure you monitor for mixed content (HTTP resources on an HTTPS site)
  • You'll need to be aware of certificate expirations & renewals

A few years ago I wrote about "the real cost of 'S' in HTTPS", about how a single error in your HTTPS setup or content can make your site unusable for visitors. HTTPS is an "it either works 100% or it doesn't at all" type of configuration.
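
A mixed-content check, one of the failure modes mentioned above, can start as something as simple as scanning a page's HTML for subresources still referenced over plain HTTP. Here is an illustrative, stdlib-only Python sketch (hardcoded HTML, placeholder URLs), not a substitute for a real crawler:

```python
from html.parser import HTMLParser

# Flag subresources loaded over plain HTTP on an HTTPS page.
# Simplified: only looks at src/href attributes of common tags.
class MixedContentScanner(HTMLParser):
    CHECKED = {"img": "src", "script": "src", "link": "href", "iframe": "src"}

    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        attr = self.CHECKED.get(tag)
        if attr is None:
            return
        url = dict(attrs).get(attr, "")
        if url.startswith("http://"):
            self.insecure.append(url)

html = '''<html><head>
<script src="https://example.com/app.js"></script>
<img src="http://example.com/logo.png">
</head></html>'''

scanner = MixedContentScanner()
scanner.feed(html)
print(scanner.insecure)
```

A browser applies the same idea at load time: any active http:// resource on an https:// page triggers warnings or gets blocked outright.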

Luckily -- and largely inspired by that blogpost and the general adoption of HTTPS -- there are tools like Oh Dear! that help monitor your SSL/TLS certificates, scan for mixed content & report general errors of your HTTPS stack.


February 06, 2018


Yesterday I shared that I uninstalled the Facebook application from my phone. My friend Simon Surtees was quick to text me: "I for one am pleased you have left Facebook. Less Cayman Island pictures!". Not so fast, Simon. I never said that I left Facebook or that I'd stop posting on Facebook. Plus, I'll have more Cayman Islands pictures to share soon. :)

As a majority of my friends and family communicate on Facebook and Twitter, I still want to share updates on social media. However, I believe I can do it in a more thoughtful manner that allows me to take back control over my own data. There are a couple of ways I could go about that:

  • I could share my status updates and photos on a service like Facebook or Twitter and then automatically download and publish them to my website.
  • I could publish my status updates and photos on my website first, and then programmatically share them on Facebook, Twitter, etc.

The IndieWeb movement has provided two clever names for these models:

  1. PESOS or Publish Elsewhere, Syndicate (to your) Own Site is a model where publishing begins on third party services, such as Facebook, and then copies can be syndicated to your own site.
  2. POSSE or Publish (on your) Own Site, Syndicate Elsewhere is a publishing model that begins with posting content on your own site first, then syndicating out copies to third party services.

PESOS vs. POSSE

Here is the potential impact of each approach:

  • Dependence — PESOS: a 3rd party is a required intermediary; when that platform is down or disappears completely, publishers lose the ability to post new content or retrieve old content. POSSE: no dependence, as the 3rd party service is an optional endpoint, not a required intermediary.
  • Canonical — PESOS: non-canonical; the data on the 3rd party is the original, and copies on your domain may have to cite 3rd party URLs. POSSE: canonical; you have full control over the URLs and host the original data, and the 3rd party can cite the original URL.
  • Quality — PESOS: pulling data from 3rd party services can reduce its quality; for example, images may be degraded or downsized. POSSE: full control over the quality of the assets on your own site.
  • Ease of use, implementation and maintenance — PESOS: 3rd party platforms make it really easy to publish content (e.g. uploading images from your phone), but the approach still requires building an infrastructure to curate archival copies on your own domain. POSSE: can be significantly more work for the site owner, especially if you want ease of use comparable to the 3rd party platforms; a higher level of technical expertise and time investment is likely required.

The goal of this analysis was to understand the pros and cons of each approach to owning my own content. While PESOS would be much easier to implement, I decided to go with POSSE. My next step is to figure out my "POSSE plan": how to quickly and easily share status updates on my Drupal site, how to syndicate them to 3rd party services, how to re-organize my mailing list and my RSS feed, and more. If you have any experience with implementing POSSE, feel free to share your takeaways in the comments.

February 05, 2018

Many of you have been asking this question, so I decided to take the time to answer it seriously. I promise that, by the end of this article, you will have a clear answer. But it may not be the one you were expecting…

Not all cryptocurrency enthusiasts are millionaires

Personally, thanks to Bitcoin and the blockchain, I became a billionaire.

Yes, really, I swear!

Admittedly, it is in Zimbabwean dollars but, still, a billionaire. And all thanks to a banknote graciously offered by Jean-Luc Verhelst during a conference on the blockchain that we gave together.

In euros, of course, I am nowhere near that mark, despite having started playing with bitcoins when they were worth €0.07.

Frustrated that you only discovered Bitcoin recently and are not a multi-millionaire? Keep in mind that for some it is even worse: they played with bitcoins, lost them in hard-drive crashes, forgot them on old computers sent to the scrapyard, lost them again in the collapse of exchanges like TradeHill, MtGox and many others, or even, as was my case for a long time, gave them away to help people better understand the technology. Finally, there are the "technologists", your humble servant among them, who were fascinated by the technology of Bitcoin without ever taking an interest in the financial side. In retrospect, a mistake…

Others kept their bitcoins and then sold them at $30, or $100, or $1,200 because, at that moment, they needed the money; it was just enough to pay off their loan. They made good use of it, but they are not multi-millionaires either.

That was the case, for example, of Andreas Antonopoulos, who has done a great deal for the Bitcoin community since 2012 but who admitted to being broke. Some mocked him; others, moved, sent him a total of… 100 bitcoins! It was amply deserved, as many of today's millionaires discovered Bitcoin thanks to him.

The symbolic figure of one million euros is simply enormous. It corresponds to 40 years of salary at €2,000/month. Most of us will earn, over an entire lifetime, only between 1 and 2 million euros. So becoming a millionaire takes more than just buying bitcoins at the right time.

The 3 factors of financial success

Yes, some people do become millionaires. But whether in cryptocurrencies or in any other field, it comes down to a combination of 3 factors: vision, work and luck.

I believe you need the right proportion of all 3 to succeed. The lack of one element can be compensated by the other two, but it is very difficult. Pure luck alone can bring in millions: that is the lottery. It takes a lot of luck to win the lottery! Conversely, vision and hard work are nothing without a share of luck: being in the right place at the right time. A little earlier or a little later, and you miss it.

In the case of Bitcoin, it was about understanding the value of cryptocurrencies very early on. The vision, I partly had: the archives of this blog prove that I had understood the importance of the phenomenon. But I completely underestimated the economic aspect, and I did not expect to see Bitcoin above $1,000. And even while I was imagining Bitcoin at $1,000, it never occurred to me to act on that vision by investing €1,000 or €2,000 from my savings account. The risk seemed disproportionate. I do, however, have the intellectual satisfaction of knowing that a few people discovered Bitcoin through my blog and made a fortune.

Hard work means keeping up to date, holding on to your bitcoins for years, knowing what to do with them. Personally, I went through periods when I paid no attention to cryptocurrencies at all. Coming back to the forums, I discovered, for example, that the TradeHill exchange had gone bankrupt a few months earlier, taking a large part of my bitcoins with it… I was not even aware of it! And I did not worry because, at the time, they were not worth much anyway.

Luck cannot be forced and cannot be explained. But it is indispensable.

And you? Which factor is your strong suit?

But all these stories probably do not interest you. All you want is to become a millionaire. If possible in 3 weeks and without effort. Surely it is possible: your favourite tabloid told you about a 7-year-old kid who became a millionaire with cryptocurrencies.

The reality is that no such plan exists. Any plan that claims to make you money without effort is a scam. Because I have a scoop for you: you are not the only one who wants to become a millionaire.

So you have to count on vision, work and luck. Yes, it is possible that some cryptocurrency out of nowhere will do a 100x tomorrow and make you a millionaire. But I believe in that no more than in winning the lottery. It would be pure luck. And in a lottery there are, by definition, more losers than winners… If you are betting on the luck factor, go to a casino instead.

That leaves you with vision or work.

The hard-work option

There are plenty of job opportunities in the cryptocurrency field. I myself make a living from them: I do research on the blockchain, give talks and consult for companies and startups. Others do development. I encourage you to try: it is a field I am passionate about and one I believe has a great future.

More prosaically, another option is trading. Thanks to their high volatility and very low commissions, cryptocurrencies lend themselves admirably to trading.

Indeed, most amateurs who bought one or two bitcoins at $5,000 and sold them at $10,000 now call themselves traders.

The reality is that trading is an extremely difficult science/art/job. If you want to get into it, be prepared to spend hours on it, to read dozens of books, and to lose a lot of money. Because losing money is one of the only ways to truly learn trading. After a few wins helped by beginner's luck, the amateur trader will gain confidence, bet big, and suddenly be confronted with losses. That is where he will forge his first real experiences as a trader.

If you choose trading as your way of making money with cryptocurrencies, then I think your question "Is this the right time to invest?" no longer makes sense. You are the trader; you are the only one who can answer that question.

The long-term-vision option

If you do not want to trade or to work actively in the field, the remaining option is to invest. But in that case, it is important to show discernment and to invest, for the long term, in projects you trust.

People always say you should not invest more than you can afford to lose. That is true in cryptocurrencies more than anywhere else, where a single hack can empty your account.

But let me add a piece of advice of my own: if your goal is long term, do not invest a large sum all at once. Make a monthly payment of €20, €100 or €200 and convert it directly into cryptocurrency. That way, your portfolio will grow little by little. Rises will be good news (because your capital grows), and so will falls (because that month you can buy more). You will be far less nervous, far less busy watching the ups and downs. Remember that you are betting on the long term, that you are prepared to lose everything, and that you do not want to withdraw the money for 5 or 10 years.
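The arithmetic behind that monthly-payment advice can be sketched quickly. The prices below are made up purely for illustration; the point is that a fixed budget automatically buys more coins in the cheap months, which pulls your average cost below the plain average of the prices:

```python
# Hypothetical prices illustrating the fixed-monthly-budget approach
# (dollar-cost averaging): cheap months buy more coins.
monthly_budget_eur = 200
prices_eur = [8000, 6000, 10000, 5000]  # assumed price per coin, month by month

coins = sum(monthly_budget_eur / p for p in prices_eur)
invested = monthly_budget_eur * len(prices_eur)
average_cost = invested / coins  # lower than the plain average of the prices

print(f"invested €{invested}, bought {coins:.4f} coins, average cost €{average_cost:.2f}")
```

With these made-up numbers, the average cost per coin ends up below the €7,250 average price, precisely because the €200 buys more coins when the price dips.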

Another piece of advice of my own: the stronger a project's track record, the more relevant it seems to me in the long run. Bitcoin and Ethereum strike me as the two projects offering the best guarantees of longevity.

But here we are talking about vision, about foresight. So it is up to you to exercise yours. As Warren Buffett says: only invest in what you understand.

The mistakes to avoid at all costs

First of all, if you are after easy money, move along. By definition, it does not exist.

Option 1: you do the work of a trader, and you become able to make a profit without even needing to know whether the cryptocurrency you have just traded has any long-term potential.

Option 2: you believe in a project and you understand it. You have faith that this project has a future, and you invest in it while accepting the risks involved.

If your only objective is profit, then your money will inevitably end up lining the pockets of professional traders or, worse, of scammers who promised you the moon. Use your brain: not everyone is a millionaire. What do I have that is special, that gives me a chance of making money?

Secondly, act like a manager. Does converting €200 into bitcoins every month seem enormous? You cannot afford it without cutting into your monthly budget? Yet you have €5,000 of savings that you would like to invest? Take a calculator and realise that €5,000 is more than 2 years at €200 per month! The advantage of €200 a month is that you can adjust it, or stop it, as you see fit. If you empty your savings, they are potentially lost forever.

Finally, enjoy it. An investment should bring you intellectual interest, pride, pleasure. If the investment becomes stressful, if it eats into your life, then stop. Life is too short.

So, should I invest in cryptocurrencies?

Remember that there are no easy gains without an enormous amount of luck. That, on the markets, small amateurs are called "dumb money" (money that is easy to win because it comes from inexperienced people who make mistakes). And that the multi-millionaires are the rarest of exceptions: they make us dream, but they do not represent reality.

If your goal is to make money within a few weeks or months, then there is no right time to invest. You are not investing; you are playing roulette. And while some will get lucky, most will lose.

If your goal is to invest in a project that seems genuinely promising to you, one that could potentially change the world, then there is no wrong time to invest.

I am personally convinced that Bitcoin is here to stay and that its value in 5 years will be significantly higher than it is today. I think the same will be true for Ethereum, although I am a little less certain about it.

As for the other major cryptocurrencies (Litecoin, Dash, Monero, …), I admit I do not know. That bet seems risky to me.

Finally, I am convinced that the vast majority of cryptocurrencies will be worth nothing in 5 years, even if their website is really nice and their concept brilliant on paper. New ones will appear, then disappear.

I am well aware that this is a purely personal conviction, an irrational act of faith, and that I may be completely wrong. That is why I do not invest more than I can afford to lose.

Because even with a clear vision, even with hard work, an investment requires a good deal of luck.

Be careful! Do not gamble away your savings, and remember that you do not need millions to enjoy life.


Enjoyed this read? Support the author on Tipeee, Patreon, Paypal or Liberapay. Even a symbolic donation makes all the difference! Then let's meet again on Facebook, Medium, Twitter or Mastodon.

This text is published under the CC-By BE licence.

TheHive is a great incident response platform that has had the wind in its sails for a while now. More and more organizations are already using it or are strongly considering deploying it in the near future. TheHive is tightly integrated with MISP to push/pull IOCs. Such a tool must be fed with useful information to be processed by security analysts. TheHive relies on other tools from the same team: Hippocampe parses text-based feeds and stores them; Cortex enriches observables by querying multiple services in parallel. Another source of information is, for example, a Splunk instance: there is a Splunk app to generate alerts directly in TheHive. And what about emails?


TheHive has a nice REST API that allows performing all kinds of actions, and the perfect companion is the Python module TheHive4py. So it's easy to poll a mailbox at a regular interval to populate a TheHive instance with collected emails. I wrote a tool to achieve this:

# ./ -h
usage: [-h] [-v] [-c CONFIG]

Process an IMAP folder to create TheHive alerts/cased.

optional arguments:
-h, --help show this help message and exit
-v, --verbose verbose output
-c CONFIG, --config CONFIG
configuration file (default: /etc/imap2thehive.conf)

The configuration file is easy to understand. How does it work? The IMAP mailbox is polled for new ("unread") messages. If the email subject contains "[ALERT]", an alert is created; otherwise, a case with a set of predefined tasks is created. There is a Dockerfile to build a container that runs a crontab to automatically poll the mailbox every 5 minutes.
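As a rough sketch of that routing rule (a hypothetical helper, not the tool's actual code), the alert-versus-case decision boils down to a subject check:

```python
def classify_message(subject: str) -> str:
    """Decide what a polled email becomes in TheHive, mirroring the
    "[ALERT]" subject convention described above (illustrative only)."""
    return "alert" if "[ALERT]" in subject else "case"

print(classify_message("[ALERT] Suspicious login detected"))  # alert
print(classify_message("Phishing sample for review"))         # case
```

Everything classified as a "case" would then get the set of predefined tasks attached.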

The script is available here.


[The post Feeding TheHive with Emails has been first published on /dev/random]

Facebook social decay © Andrei Lacatusu

Earlier this month, I set a resolution to blog more and use social media less. While I still need to work on blogging more, I'm certainly spending less time on Facebook. I'm halfway there. So far, only my mom has complained about me spending less time on Facebook.

This morning when my alarm woke me up at 4:45am, I took it a step further. Most mornings, I spend ten minutes checking Facebook on my phone. Today, however, I deleted the Facebook application from my phone, rolled out of bed and started my workday. Great!

As an advocate for the open web, I've written a lot about the problems that Facebook and other walled gardens pose. While I have helped raise awareness and have contributed time and money to winning back the open web, I haven't fully embraced the philosophy on my own site. For over 12 years, I've blogged on my own domain and have used Open Source software instead of using a third party service like Blogger or Medium, but I can't say the same about sharing my photos or social media updates. This has bothered me for some time.

I felt even more motivated to make a change after watching David Letterman's new Netflix series. During a conversation with his first guest, President Obama, Letterman shared the fear that his son will one day ask, "Wait a minute. You knew this was a problem, and you didn't do anything about it?". Letterman's sentiment mirrors Jeff Bezos' regret minimization framework; when you look back on your life, you want to minimize the number of regrets you have. It's a principle I like to live by.

We can't have a handful of large platform companies like Facebook control what people read on the web; their impact on democracy and society is concerning. Even Facebook doesn't like what it sees when it looks in the mirror.

Today is not only the day I uninstalled Facebook from my phone, but it's the day I fully embrace and extend my new year's resolution. Not only would I like to use social media less, I want to take back control over my social media, photos and more. I also want to contribute more to the open web in the process — it will be a worthwhile personal challenge for 2018.

February 03, 2018

Our video recording and transcoding pipeline has been streamlined over the past few years; the first video to be released this year was available for viewers yesterday afternoon. As of this writing, there are 54 videos already released to our video site, linked to from the various talks in the schedule, and (for those who prefer it) also available on our YouTube channel. For an overview of what's available, what's on the way, and what still needs to be done, please see the overview page of our review infrastructure. If you are a speaker and did a talk…

February 02, 2018

Bike building in progress

Throughout the years, the team at Acquia has heard me talk about doing well and doing good. We've embedded this philosophy into Acquia's culture since we started the company. As a result, one of Acquia's core values is to "give back more".

Acquia dna

The "give back more" principle recognizes that Acquia was born from community. We cherish our open source roots and contribute both time and talent to strengthen our communities. This means not only contributing to the open source communities that we are part of, but our local communities as well.

This week our global sales and marketing teams came together for a week of strategic planning and training. For one of our team building activities, we built 60 bikes for the children of the Boys and Girls Club of Boston! We felt that this was a special way to strengthen our team, in addition to being a great opportunity to give back.

Bike building done

The next day, five children from the Boys and Girls Club came to pick up their bikes in our closing session. It was very touching — a lot of our team got teary eyed. It made me very proud of what we were able to accomplish for our local community. I'm excited to continue to "give back more" in 2018.

A few days ago, I wrote a diary for the SANS ISC about a ransomware-as-a-service found on the Darknet. Today, I found an occurrence of "RaaSberry", which is a known platform that has been available in the wild for a few months. The service is available through Tor and looks professional. How does it work?

  1. You create an account
  2. You transfer some BTC to your account
  3. You order a package corresponding to your needs
  4. You get a unique malware sample

The entry-level package is sold for 0.00519977 BTC (~€37.32 at the current price). I took some screenshots to give you an overview of the service:

RaaS Welcome Page

RaaS Registration

RaaS Dashboard

RaaS Wallet

RaaS Support

RaaS Packages



[The post Example of Ransomware As A Service has been first published on /dev/random]

Day four was a day of pancakes and stew. Oh, and some video work, too.


Did more documentation review. She finished the SReview documentation and got started on documenting the examples in our ansible repository.


Finished splitting out the ansible configuration from the ansible code repository. The code repository now includes an example configuration that is well documented for getting started, whereas our production configuration lives in a separate repository.


Spent much time on the DebConf website, mostly working on a new upstream release of wafer.

He also helped review Kyle's documentation, and spent some time together with Tzafrir debugging our ansible test setup.


Worked on documentation, and did a test run of the ansible repository. Found and fixed issues that cropped up during that.


Spent much time trying to figure out why SReview was not doing what he was expecting it to do. Side note: I hate video codecs. Things are working now, though, and most of the fixes were implemented in a way that makes it reusable for other conferences.

There's one more day coming up today. Hopefully I won't forget to blog about it tonight.

February 01, 2018

The Calls for Presentations, Workshops and Ignites are open for submission. The CfP will remain open until the 2nd of March 2018. We decided to run the CfP in git, more specifically GitLab. Please follow the process below.

Submit using a merge request, following these steps:

I published the following diary on “Adaptive Phishing Kit“:

Phishing kits are everywhere! If your server is compromised today, there are chances that it will be used to mine cryptocurrency, to deliver malware payloads or to host a phishing kit. Phishing remains a common attack scenario to collect valid credentials and impersonate the user account or, in larger attacks, it is one of the first steps to compromise the final target… [Read more]


[The post [SANS ISC] Adaptive Phishing Kit has been first published on /dev/random]

January 31, 2018

This should really have been the "day two" post, but I forgot to do that yesterday, and now it's the end of day three already, so let's just do the two together for now.


Has been hacking on the opsis so we can get audio through it, but so far without much success. In addition, he's been working a bit more on documentation, as well as splitting up some data that's currently in our ansible repository into a separate one so that other people can use our ansible configuration more easily, without having to fork too much.


Did some tests on the ansible setup, and did some documentation work, and worked on a kodi plugin for parsing the metadata that we've generated.


Did some work on the DebConf website. This wasn't meant to be much, but yak shaving sucks. Additionally, he's been doing some work on the youtube uploader as well.


Did more work reviewing our documentation, and has been working on rewording some of the more awkward bits.


Spent much time on improving the SReview installation for FOSDEM. While at it, fixed a number of bugs in some of the newer code that were exposed by full tests of the FOSDEM installation. Additionally, added code to SReview to generate metadata files that can be handed to Stefano's youtube uploader.


Although he had less time yesterday than he did on Monday (and apparently no time today) to sprint remotely, Pollo still managed to add a basic CI infrastructure to lint our ansible playbooks.

January 30, 2018

We proudly present the last set of speaker interviews. See you at FOSDEM this weekend! Jon Masters: Exploiting modern microarchitectures. Meltdown, Spectre, and other hardware attacks Michael Downey: Sustainability of Open Source in International Development. A new approach from the United Nations Foundation Nikos Roussos: SatNOGS: Crowd-sourced satellite operations. Satellite Open Source Ground Station Network Palmer Dabbelt: Igniting the Open Hardware Ecosystem with RISC-V. SiFive's Freedom U500 is the World's First Linux-capable Open Source SoC Platform Pierros Papadeas: The story of UPSat. Building the first open source software and hardware satellite Stef Walter: Cyborg Teams. Training machines to be Open…

I'm at the Linux Belgium training center, where this last week before FOSDEM the DebConf video team is holding a sprint. The nice folks of Linux Belgium made us feel pretty welcome:

Linux Belgium message

Yesterday was the first day of that sprint, where I had planned to blog about things, but I forgot; so here goes, first thing this morning.

Nattie and Tzafrir

Nattie and Tzafrir have been spending much of their time proofreading our documentation, and giving us feedback to improve its readability and accuracy.


Spent some time working on his youtube uploader. He didn't finish it to a committable state yet, but more is to be expected today.

He also worked on landing a gstreamer pipeline change that was suggested at LCA last week (which he also attended), and did some work on setting up the debconf18 dev website.

Finally, he fixed the irker config on salsa so that it would actually work and send commit messages to IRC after a push.


Wrote a lot of extra documentation on the opsis that we use and various other subjects, and also fixed some of the templates of the documentation, so that things would look better and link correctly.


I spent much of my time working on the FOSDEM SReview instance, which will be used next weekend; that also allowed me to improve the code quality of some of the newer stuff that I wrote over the past few months. In between things, being the local guy here, I also drove around getting a bit of stuff that we needed.


Pollo isn't here, but he's sprinting remotely from home. He spent some time setting up gitlab-ci so that it would build our documentation after pushing to salsa.

January 29, 2018

Logo Magento — On Thursday 8 February 2018 at 7pm (exceptionally the second Thursday of the month), the 66th Mons session of the Belgian Jeudis du Libre will take place.

The topic of this session: Magento, the "one size fits all" e-commerce platform?

Theme: Internet | community | e-commerce

Audience: everyone

Speaker: Bérenger Laurent (Brille24 SA)

Venue: Université de Mons, Campus Sciences Humaines, bâtiment FWEG, 17 place Warocqué, Auditoire Hotyat, 1st floor. See this map on the UMONS website, or the OSM map, as well as the parking access on rue Lucidel.

Attendance is free and only requires registering by name, preferably beforehand, or at the entrance to the session. Please indicate your intention by registering via the page. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also enjoy the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, do not hesitate to check the agenda and to subscribe to the mailing list in order to automatically receive the announcements.

As a reminder, the Jeudis du Libre are intended as spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organised on the premises of, and in collaboration with, the Mons universities and colleges involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: We will describe the various components and technical constraints of the free e-commerce platform Magento in its 1.9x version. Then, time permitting, we will give a short demonstration of shop administration on Magento 1.x and 2.x. We will then review the differences between Magento 1.x and 2.x. Finally, we will conclude with an assessment after 8 years of daily use.

Short bio: An Amstrad CPC464, my first great love at the age of 5… Almost 20 years of professional experience in ICT, holding positions such as helpdesk and business IT, hardware technician, web developer, sysadmin and DBA, manager of a development team, and CTO. At the dawn of my forties, I have lost count of the hours, the different technologies and the projects spent installing systems, trying new things, pivoting and changing angles in a spirit of sharing with a team. Today I run my own business and act as CTO for several small companies, some of which specialise in online sales. And if I sometimes lack sleep, the passion, and the feeling of being an ICT "jack of all trades", have never left me.

January 27, 2018

If your non-geek partner and/or kids are coming with you to FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. Like previous years, FOSDEM is organising sightseeing tours.

January 26, 2018

Say you’re building an HTTP service in Go and suddenly it starts giving you these:

http: multiple response.WriteHeader calls

Horrible when that happens, right?

It’s not always very easy to figure out why you get them and where they come from. Here’s a hack to help you trace them back to their origin:

type debugLogger struct{}

func (d debugLogger) Write(p []byte) (n int, err error) {
	s := string(p)
	if strings.Contains(s, "multiple response.WriteHeader") {
		debug.PrintStack() // runtime/debug: dump the stack that produced the log line
	}
	return os.Stderr.Write(p)
}

// Now use the logger with your http.Server
// (imports needed: log, net/http, os, runtime/debug, strings):
logger := log.New(debugLogger{}, "", 0)

server := &http.Server{
	Addr:     ":3001",
	Handler:  s,
	ErrorLog: logger,
}
This will output a nice stack trace whenever it happens. Happy hacking!

Comments | More on | @rubenv on Twitter

I published the following diary on “Investigating Microsoft BITS Activity“:

Microsoft BITS (“Background Intelligent Transfer Service”) is a tool present[1] in all modern Microsoft Windows operating systems. As the name says, you can see it as a “curl” or “wget” tool for Windows. It helps to transfer files between a server and a client but it also has plenty of interesting features. Such a tool, being always available, is priceless for attackers. They started to use BITS to grab malicious contents from the Internet… [Read more]

[The post [SANS ISC] Investigating Microsoft BITS Activity has been first published on /dev/random]

January 25, 2018

So Gutenberg, the future of WordPress content editing, allows users to create, add and re-use blocks in posts and pages via a nice UI. These blocks are stored in WordPress' the_content inside HTML comments to ensure backwards compatibility. For WP YouTube Lyte I needed to extract information to be able to replace Gutenberg-embedded YouTube videos with Lytes, and I took the regex approach. Ugly but efficient, no?

But what if you need a more failsafe method to extract Gutenberg block data from a post? I took a hard look at the Gutenberg code and came up with this little proof of concept that extracts all the data into a nice little (or big) array:

function gutenprint($html) {
  	// check if we need to and can load the Gutenberg PEG parser
  	if ( !class_exists("Gutenberg_PEG_Parser") && file_exists(WP_PLUGIN_DIR."/gutenberg/lib/load.php") ) {
  	  include_once(WP_PLUGIN_DIR."/gutenberg/lib/load.php");
  	}
  	if ( class_exists("Gutenberg_PEG_Parser") && is_single() ) {
	  // do the actual parsing
	  $parser = new Gutenberg_PEG_Parser;
	  $result = $parser->parse( _gutenberg_utf8_split( $html ) );
	  // we need to see the HTML, not have it rendered, so applying htmlentities
	  array_walk_recursive(
	    $result,
	    function (&$result) { $result = htmlentities($result); }
	  );
	  // and dump the array to the screen
	  echo "<h1>Gutenprinter reads:</h1><pre>";
	  print_r($result);
	  echo "</pre>";
	} else {
	  echo "Not able to load Gutenberg parser, are you sure you have Gutenberg installed?";
	}
  	// remove filter to avoid double output
  	remove_filter('the_content', 'gutenprint');
  	// and return the HTML
	return $html;
}
add_filter('the_content', 'gutenprint');

I’m not going to use it for WP YouTube Lyte as I consider the overhead not worth it (for now), but who knows, it could be useful for others?

I published the following diary on the SANS Internet Storm Center: “Ransomware as a Service“:

Hunting on the dark web is interesting to find new malicious activities running in the background. Besides the classic sites where you can order drugs and all kind of counterfeited material, I discovered an interesting website which offers a service to create your own ransomware! The process is straightforward, you just have to… [Read more]

[The post [SANS ISC] Ransomware as a Service has been first published on /dev/random]

January 24, 2018

In the first post I talked about why configuration documentation is important. In the second post I looked into a good structure for configuration documentation of a technological service, and ended with an XCCDF template in which this documentation can be structured.

The next step is to document the rules themselves, i.e. the actual content of a configuration baseline.

Fine-grained rules

While from a high-level point of view configuration items could be documented in a coarse-grained manner, a proper configuration baseline documents rules in a fine-grained way. Let's first consider a bad example:

All application code files are root-owned, with read-write privileges for owner and group, and executable where it makes sense.

While such a rule could be interpreted correctly, it also leaves room for misinterpretation and ambiguity. Furthermore, it is not explicit. What are application code files? Where are they stored? What about group ownership? The executable permission, when does that make sense? Does the rule also imply that there is no privilege for world-wide access, or does it just ignore that?

A better example (or set of examples) would be:

  • /opt/postgresql is recursively user-owned by root
  • /opt/postgresql is recursively group-owned by root
  • No files under /opt/postgresql are executable except when specified further
  • All files in /opt/postgresql/bin are executable
  • /opt/postgresql has system_u:object_r:usr_t:s0 as SELinux context
  • /opt/postgresql/bin/postgres has system_u:object_r:postgresql_exec_t:s0 as SELinux context

And even that list is still not complete, but you get the gist. The focus here is to have fine-grained rules which are explicit and not ambiguous.
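Rules this fine-grained map almost one-to-one onto automated checks. A minimal sketch in Python, with illustrative paths and expectations (the SELinux-context rules are left out, as checking those needs platform-specific tooling):

```python
import os
import stat

def check_tree(root, owner_uid=0, group_gid=0, exec_dirs=()):
    """Walk a tree and report fine-grained rule violations:
    everything owned by the given uid/gid, nothing executable
    except files under the directories listed in exec_dirs."""
    violations = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.lstat(path)
            if st.st_uid != owner_uid:
                violations.append((path, 'owner'))
            if st.st_gid != group_gid:
                violations.append((path, 'group'))
            is_exec = st.st_mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
            if is_exec and not dirpath.startswith(tuple(exec_dirs)):
                violations.append((path, 'executable'))
    return violations
```

Against the example above, you would run something like `check_tree('/opt/postgresql', exec_dirs=('/opt/postgresql/bin',))`; every returned tuple is a deviation from one specific, unambiguous rule.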

Of course, the above configuration rule is still a "simple" permission set. Configuration baselines go further than that of course. They can act on file content ("no PAM configuration files can refer to except for runuser and su"), run-time processes ("The processes with /usr/sbin/sshd as command and with -D as option must run within the sshd_t SELinux domain"), database query results, etc.

This granularity is especially useful later on when you want to automate compliance checks, because the more fine-grained a description is, the easier it is to develop and maintain checks on it. But before we look into remediation, let's document the rule a bit further.

Metadata on the rules

Let's consider the following configuration rule:

/opt/postgresql/bin/postgres has system_u:object_r:postgresql_exec_t:s0 as SELinux context

In the configuration baseline, we don't just want to state that this is the rule, and be finished. We need to describe the rule in more detail, as was described in the previous post. More specifically, we definitely want to:

  • know what the rule's severity is, or how "bad" it would be if we detect a deviation from the rule
  • have an indication if the rule is security-sensitive or more oriented to manageability
  • have a more elaborate description of the rule than just the title
  • have an indication why this rule is in place (what does it solve, fix or simplify)
  • have information on how to remediate if a deviation is found
  • know if the rule is applicable to our environment or not

The Security Content Automation Protocol (SCAP) standard, which encompasses XCCDF as well as OVAL and a few others like CVSS, uses the following possible values for severity: unknown, info, low, medium, high.

To indicate if a rule is security-oriented or not, XCCDF's role attribute is best used. With the role attribute, you state if a rule is to be included in the final scoring (a weighted value given to the compliance of a system) or not. If it is, then it is security sensitive.

The indication of a rule's applicability in the environment might seem strange. If you document the configuration baseline, shouldn't it only include those settings you want? Well, yes and no. Personally, I like to include recommendations that we do not follow in the baseline as well.

Suppose for instance that an audit comes along and says you need to enable data encryption on the database. Let's put aside that an auditor should focus mainly or solely on the risks and let the solutions be managed by the team (while being involved in accepting solutions, of course). The team might do an assessment and find that data encryption on the database level (i.e. the database files are encrypted so non-DBA users with operating system interactive rights cannot read the data) is actually not going to remediate any risk, yet introduces more complexity.

In that situation, and assuming that the auditor agrees with a different control, you might want to add a rule to the configuration baseline about this. Either you document the wanted state (database files do not need to be encrypted), or you document the suggestion (database files should be encrypted) but explicitly state that you do not require or implement it, and document the reasoning for it. The rule is then augmented with references to the audit recommendation for historical reasons and to facilitate future discussions.

And yes, I know the rule "database files should be encrypted" is still ambiguous. The actual rule should be more specific to the technology.

Documenting a rule in XCCDF

In XCCDF, a rule is defined through the Rule XML entity, and is placed within a Group. The Group entities are used to structure the document, while the Rule entities document specific configuration directives.

The postgres related rule of above could be written as follows:

<Rule id="xccdf_com.example_rule_pgsql-selinux-context"
      severity="low" weight="5.1" role="full">
  <title>/opt/postgresql/bin/postgres has system_u:object_r:postgresql_exec_t:s0 as SELinux context</title>
  <description>
      The postgres binary is the main binary of the PostgreSQL database daemon. Once started, it launches the necessary workers. To ensure that PostgreSQL runs in the proper SELinux domain (postgresql_t) its binary must be labeled with postgresql_exec_t.
      The current state of the label can be obtained using stat, or even more simply, the -Z option to ls:
    <xhtml:pre>~$ ls -Z /opt/postgresql/bin/postgres
-rwxr-xr-x. root root system_u:object_r:postgresql_exec_t:s0 /opt/postgresql/bin/postgres</xhtml:pre>
  </description>
  <rationale>
      The domain in which a process runs defines the SELinux controls that are active on the process. Services such as PostgreSQL have an established policy set that controls what a database service can and cannot do on the system.
      If the PostgreSQL daemon does not run in the postgresql_t domain, then SELinux might either block regular activities of the database (service availability impact), block behavior that impacts its effectiveness (integrity issue) or allow behavior that shouldn't be allowed. The latter can have significant consequences once a vulnerability is exploited.
  </rationale>
  <fixtext>Restore the context of the file using restorecon or chcon.</fixtext>
  <fix strategy="restrict" system="urn:xccdf:fix:script:sh">restorecon /opt/postgresql/bin/postgres</fix>
  <ident system="">pgsql-01032</ident>
</Rule>

Although this is lots of XML, it is easy to see what each element declares. The NIST IR 7275 document is a very good resource to continuously consult in order to find the right elements and their interpretation.

There is one element added that is "specific" to the content of this blog post series and not the XCCDF standard, namely the identification. As mentioned in an earlier post, organizations might have their own taxonomy for technical service identification, and requirements on how to number or identify rules. In the above example, the rule is identified as pgsql-01032.

There is another attribute in use above that might need more clarification: the weight of the rule.

Abusing CVSS for configuration weight scoring

In the above example, a weight is given to the rule scoring (weight of 5.1). This number is obtained through a CVSS calculator, which is generally used to identify the risk of a security issue or vulnerability. CVSS stands for Common Vulnerability Scoring System and is a popular way to weight security risks (which are then associated with vulnerability reports, Common Vulnerabilities and Exposures (CVE)).

Misconfigurations can also be slightly interpreted as a security risk, although it requires some mental bridges. Rather than scoring the rule, you score the risk that it mitigates, and consider the worst thing that could happen if that rule is not implemented correctly. Now, worst-case thinking is subjective, so there will always be discussion on the weight of a rule. It is therefore important to have a consensus in the team (if the configuration baseline is team-owned) if this weight is actively used. Of course, an organization might choose to ignore the weight, or use a different scoring mechanism.

In the above situation, I scored what would happen if a vulnerability in PostgreSQL was successfully exploited, and SELinux couldn't mitigate the risk as the label of the file was wrong. The result of a wrong label could be that the PostgreSQL service runs in a higher privileged domain, or even in an unconfined domain (no SELinux restrictions active), so there is a heightened risk of confidentiality loss (beyond the database) and even integrity risk.

However, the confidentiality risk is scored as low, and the integrity risk somewhere in between (the base risk is low, and due to other constraints put in place the integrity impact is reduced further), because PostgreSQL runs as a non-administrative user on the system, and perhaps because the organization uses dedicated systems for database hosting (so other services are not easily impacted).

As mentioned, this is somewhat abusing the CVSS methodology, but is imo much more effective than trying to figure out your own scoring methodology. With CVSS, you start with scoring the risk regardless of context (CVSS Base), then adjust based on recent state or knowledge (CVSS Temporal), and finally adjust further with knowledge of the other settings or mitigating controls in place (CVSS Environmental).

Personally, I prefer to only use the CVSS Base scoring for configuration baselines, because the other two are highly dependent on time (which is, for documentation, challenging) and on the other controls (which is more of a concern for the service's technical documentation). So in my preferred situation, the rule would be scored as 5.4 rather than 5.1. But that's just me.
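To make the scoring mechanics concrete, here is a sketch of the CVSSv2 base-score arithmetic in Python. The constants and formula are the ones published in the CVSS v2 specification; the exact vector behind the 5.1 in the example rule is not spelled out in this post, so the vector below is purely illustrative:

```python
# CVSSv2 base-score metric values, per the CVSS v2 specification.
AV = {'L': 0.395, 'A': 0.646, 'N': 1.0}    # access vector
AC = {'H': 0.35, 'M': 0.61, 'L': 0.71}     # access complexity
AU = {'M': 0.45, 'S': 0.56, 'N': 0.704}    # authentication
CIA = {'N': 0.0, 'P': 0.275, 'C': 0.66}    # confidentiality/integrity/availability impact

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)

# An illustrative vector: network-reachable, medium complexity,
# no authentication, partial confidentiality and integrity impact.
print(cvss2_base('N', 'M', 'N', 'P', 'P', 'N'))  # prints 5.8
```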

Isn't this CCE?

People who use SCAP a bit more might already be wondering whether I'm reinventing the wheel here. After all, SCAP also has a standard called Common Configuration Enumeration (CCE) which seems to be exactly what I'm doing here: enumerating the configuration of a technical service. And indeed, if you look at the CCE list you'll find a number of Excel sheets (sigh) that define common configurations.

For instance, for Red Hat Enterprise Linux v5, there is an enumeration identified as CCE-4361-2, which states:

File permissions for /etc/pki/tls/ldap should be set correctly

The CCE description then goes on stating that this is a permission setting (CCE Parameter), which can be rectified with chmod (CCE Technical Mechanism), and refers to a source for the setting.

However, CCE has a number of downsides.

First of all, it isn't being maintained anymore. And although XCCDF itself is also quite an old standard, it is still being looked into (a draft new version is being prepared) and is actively used as a standard. Red Hat is investing time and resources into secure configurations and compliance aligned with SCAP, and other vendors publish SCAP-specific resources as well. CCE however is a list, and thus requires continuous management. That RHEL v5 is the most recent RHEL with a CCE list is a bad sign.

Second, CCE's structure is for me insufficient to use in configuration baselines. XCCDF has a much more mature and elaborate set of settings for this. What CCE does is actually what I use in the above example as the organization-specific identifier.

Finally, there aren't many tools that actively use CCE, unlike CVSS, XCCDF, OVAL and the other standards under the SCAP umbrella, which are all still actively used and developed upon by tools such as Open-SCAP.


Profiles

Before finishing this post, I want to talk about profiling.

Within an XCCDF benchmark, several profiles can be defined. In the XCCDF template I defined a single profile that covers all rules, but this can be fine-tuned to the needs of the organization. In XCCDF profiles, you can select individual rules (which ones are active for a profile and which ones aren't) and even fine-tune values for rules. This is called tailoring in XCCDF.

A first use case for profiles is to group different rules based on the selected setup. In case of Nginx for instance, one can consider Nginx being used as either a reverse proxy, a static website hosting or a dynamic web application hosting. In all three cases, some rules will be the same, but several rules will be different. Within XCCDF, you can document all rules, and then use profiles to group the rules related to a particular service use.

XCCDF allows for profile inheritance. This means that you can define a base Profile (all the rules that need to be applied, regardless of the service use) and then extend the profiles with individual rule selections.

With profiles, you can also fine-tune values. For instance, you could have a password policy in place that states that passwords on internal machines have to be at least 10 characters long, but on DMZ systems they need to be at least 15 characters long. Instead of defining two rules, the rule could refer to a particular variable (Value in XCCDF) which is then selected based on the Profile. The value for a password length is then by default 10, but the Profile for DMZ systems selects the other value (15).

Now, value-based tailoring is imo already a more advanced use of XCCDF, and is best looked into when you also start using OVAL or other automated checks. The tailoring information is then passed on to the automated compliance check so that the right value is validated.

Value-based tailoring also makes rules either more complex to write, or ambiguous to interpret without full profile awareness. Considering the password length requirement, the rule could become:

The /etc/pam.d/password-auth file must refer to for the password service with a minimal password length of 10 (default) or 15 (DMZ) for the N4 password category

At least the rule is specific. Another approach would be to document it as follows:

The /etc/pam.d/password-auth file must refer to with the proper organizational password controls

The documentation of the rule might document the proper controls further, but the rule is much less specific. Later checks might report that a system fails this check, referring to the title, which is insufficient for engineers or administrators to resolve.
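As an illustration of how such a profile-tailored check could work (the PAM module and option names below are illustrative, since the rule text elides them), the idea can be sketched as:

```python
import re

# Profile-tailored value: one rule, two values (the XCCDF <Value> idea).
MINLEN = {'internal': 10, 'dmz': 15}

def password_minlen_ok(pam_line, profile):
    """Check a pam-style 'minlen=N' option against the profile's value."""
    m = re.search(r'minlen=(\d+)', pam_line)
    return bool(m) and int(m.group(1)) >= MINLEN[profile]

# Illustrative line from /etc/pam.d/password-auth:
line = "password    requisite     pam_pwquality.so minlen=12 retry=3"
print(password_minlen_ok(line, 'internal'))  # prints True  (12 >= 10)
print(password_minlen_ok(line, 'dmz'))       # prints False (12 < 15)
```

The check itself stays identical across profiles; only the expected value is swapped in, which is exactly what XCCDF tailoring passes to the automated compliance check.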

Generating the guide

To close off this post, let's finish with how to generate the guide based on an XCCDF document. Personally, I use two approaches for this.

The first one is to rely on Open-SCAP. With Open-SCAP, you can generate guides easily:

~$ oscap xccdf generate guide xccdf.xml > ConfigBaseline.html

The second one, which I use more often, is a custom XSL style sheet, which also introduces the knowledge and interpretations of what this blog post series brings up (including the organizational identification). The end result is similar (the same content) but uses a structure/organization that is more in line with expectations.

For instance, in my company, the information security officers want to have a tabular overview of all the rules in a configuration baseline. So the XSL style sheet generates such a tabular overview, and uses in-document linking to the more elaborate descriptions of all the rules.

An older version is online for those interested. It uses JavaScript as well (in case you are security sensitive you might want to look into it) to allow collapsing rule documentation for faster online viewing.

The custom XSL has an additional advantage, namely that there is no dependency on Open-SCAP to generate the guides (even though it is perfectly possible to copy its XSL and continue from there). I can successfully generate the guide using Microsoft's msxml utility, xsltproc, etc., depending on the platform I'm on.

January 23, 2018

With only ten days left until FOSDEM 2018, we have added some new interviews with our main track speakers, varying from a keynote talk about 20 years of Open Source Initiative through a speaker who forked his own project and the Next Generation Internet initiative to using TPM 2.0 as a secure keystore:

  • Dimitri Staessens and Sander Vrijders: IPC in 1-2-3. Everything you always wanted from a network, but were afraid to ask
  • Frank Karlitschek: Why I forked my own project and my own company. ownCloud to Nextcloud
  • James Bottomley: Using TPM 2.0 As a Secure Keystore on your Laptop

In general I rarely bother looking into WordPress core code or what’s on the horizon. The last month or so, however, I came across 3 problems that were due to core.

1. Shortly after pushing Autoptimize 2.3.x out, a subset of users started complaining about a flood of “cachebuster”-requests bringing their site to a crawl. It turned out all of them were using Redis or Memcached and that due to a longstanding bug in WordPress core Autoptimize did not get the correct version-string from the database, triggering the update-procedure over and over, purging and then preloading the cache. A workaround -involving a transient featuring my wife and daughter- was introduced to prevent this, but “oh my gawd” what an ugly old WordPress core bug that is! Can someone please get this fixed?

2. A couple of users of WP YouTube Lyte noticed their browsers complaining about unbalanced tags in the Lyte HTML output (which is part of the_content). It took me a lot of time to come to the conclusion that WordPress core’s wpautop was messing things up severely due to the noscript and meta tags in Lyte’s output. As wpautop has no filters or actions to alter the way it works, I ended up disabling wpautop when Lytes were found in the_content.

3. My last encounter was with the ghost of WordPress Yet-to-come: Gutenberg … To allow WP YouTube Lyte to turn Gutenberg-embedded YouTubes into Lytes, it turned out I had to dive into the tag soup that Gutenberg adds as HTML comments to the_content. I have yet to find documented functions to extract the information the smart way, so regexes to the rescue. But any plugin that hooks into the_content had better be aware that Gutenberg (as part of WordPress 5.0) will potentially mess things up severely.

Although I cursed and sighed and I am now complaining, I felt great relief when I had fixed or worked around these issues. But working in the context of a large and successful open source software project and depending on its quality can sometimes almost be a pain as much as it is a joy. Almost … ;-)

January 22, 2018

A security conference does not need to be “big” to be interesting. Size doesn’t matter with security conferences ;-). I’m in Lille, France where I attended the conference called “CoRIIN“. This event is held in French and its name means “Conférence sur la réponse aux incidents et l’investigation numérique”, or “Incident Response and Digital Forensics Conference” in English. Always organized the day before the other major event, FIC, it was already the 4th edition and the first one for me. I was honoured to be invited as a speaker. In a few words, the conference helps people from law enforcement agencies, SOCs and CERTs to meet and exchange experiences during a full day. This year, there were 250 people, mainly from France but also from Belgium, Luxembourg and Switzerland (as the conference is fully organized in French). I saw a lot of familiar faces. After a short introduction by Eric Freyssinet, the chairman of CoRIIN, the single-track conference started. Here is a quick wrap-up of the presentations.

As you can imagine, all the talks focussed on forensics, digital laws, investigations, feedback from useful cases, etc. Eve Matringe, a lawyer working in Luxembourg, explained the impact of the GDPR in the context of investigations. As was mentioned on Twitter during her talk, Eve explained in clear sentences what the impact will be of the data protection regulation that takes effect in a few months. Not only must organizations performing investigations properly handle the data they collect, but it is pretty certain that the regulation will also be invoked by bad guys against investigators to try to nullify their cases.

The next slot was mine, I presented “Full Packet Capture for the Masses“. Here are my slides:

Thank you for all the feedback, I have already started building a list of improvements!

The next slot was assigned to the ANSSI. They presented the Microsoft BITS tool, or “Background Intelligent Transfer Service“. This tool is often used by malware to download extra payloads. It is not only a transfer tool: it can also schedule transfers, limit the bandwidth or execute a command once a file transfer is completed. The tool can be managed via a command-line client (now obsolete) or PowerShell using its API. But, with APIs, there is always a risk that a rootkit will modify their behaviour. To perform investigations around malicious BITS usage, they developed their own tool in Python to extract artefacts from the data files managed by the tool. They had to reverse engineer the file format for this. The tool is available for free and can be installed using pip:

$ pip install bits_parser

The source code will be released soon on their GitHub repository. Very nice tool to add to your DFIR toolbox!

After the lunch break, Paul Rascagneres presented a detailed review of the bad story that hit the well-known Windows tool CCleaner. In the first part of his talk, Paul explained how the tool was compromised at the source. The attackers were able to recompile a malicious version of the tool and deploy it using the official website. He explained how the malware worked and which anti-analysis techniques were used to defeat security analysts (like using a bug in IDA – the debugger – to hide some part of the malicious code in the debugging session). The second part was a long but very interesting review of statistics gathered from the database grabbed from the C&C server. How? This was not mentioned by Paul. He just “received” the data… Some numbers were impressive: 800K hosts contacted the C&C in a period of only 4 days, and that was only 1 out of 5 C&C servers!

The next talk was a very interesting feedback session about the NotPetya infection that affected two organizations. Quentin Perceval and Vincent Nguyen (from the Wavestone CERT) explained how they were involved from the initial attack until the complete recovery of the infrastructure. Basically, everything was destroyed and they had to rebuild from scratch. If you’re a CISO, I recommend reading their slides and watching the recorded video. Definitely!

Then, Rayna Stamboliyska explained why communication is a key point when an incident hits your organization. Communication is mandatory but, to be effective, it must be properly prepared to pass the right message to your partners/customers. Bad communication might increase the impact of the crisis. The first part was a review of key points to communicate, while the second part was, of course, a set of bad (but funny) examples of how NOT to communicate.

Sébastien Larinier presented his personal view of the massive attacks that hit many organizations in 2017: Wannacry, NotPetya, Bad Rabbit. Indeed, everything and nothing has been said about them. From rumours to disclosure of false information, journalists but also many security professionals failed to handle the cases properly. Sébastien explained why and gave some interesting info about them. Example: is the well-known kill-switch domain really a feature, or just a bug in a malware that was released too early?

Finally, François Bouchaud closed the day with an interesting approach to performing forensic investigations that involve IoT devices. More and more criminal cases involve such kinds of gadgets. How to deal with them? For regular computers, it’s quite easy: take a memory dump, a disk image and launch your carving and artefact-finding tools. But what about gadgets that have limited features, no interface, no storage? The challenge is to define a new chain of custody. How to perform investigations? According to François, the trick is to start from the data (“what information do I need?“) and then focus on the devices that could hold such data. A good example was given: how to determine the number of people present in a room at a certain time? We could use sensors or cameras, but also the wifi (number of connected devices) or a thermostat (the temperature will rise). Interesting approach!

That’s all for this quick wrap-up. If you are working in forensics or incident management and understand French, I really recommend this event! The next edition is already scheduled, maybe at another location to welcome more visitors! Tomorrow, I’ll visit the FIC, ping me if you’re in the area!

[The post CoRIIN 2018 Wrap-Up has been first published on /dev/random]

January 20, 2018

Want to start a new PHP project? Perhaps yet another library you are creating? Tired of doing the same lame groundwork for the 5th time this month? Want to start a code kata and not lose time on generic setup work? I got just the project for you!

I’ve created a PHP project template that you can fork or copy to get started quickly. It contains a setup for testing, linting and CI, and no actual PHP code. A detailed list of the project template’s contents:

  • Ready-to-go PHPUnit (configuration and working bootstrap)
  • Ready-to-go PHPCS
  • Docker environment with PHP 7.2 and Composer (so you do not need to have PHP or Composer installed!)
  • Tests and style checks runnable in the Docker environment with simple make commands
  • TravisCI ready
  • Code coverage creation on TravisCI and uploading to ScrutinizerCI (optional)
  • Coverage tag validation
  • Stub production and test classes for ultra-quick start (ideal when doing a kata)
  • COPYING and .gitignore files
  • README with instructions of how to run the tests

Getting started

  • Copy the code or fork the repository
  • If you do not want to use the MediaWiki coding style, remove mediawiki/mediawiki-codesniffer from composer.json
  • If you want to support older PHP versions, update composer.json and remove new PHP features from the stub PHP files
  • If the code is not a kata or quick experiment, update the PHP namespaces and the README
  • Start writing code!
  • If you want TravisCI and/or ScrutinizerCI integration you will need to log in to their respective websites
  • Optionally update the README

You can find the project template on GitHub.

Special thanks to weise, gbirke and KaiNissen for their contributions to the projects this template was extracted from.

January 19, 2018

This is a short summary of my 2017 reading experience, following my 2016 Year In Books and my 2015 Year In Books.

Such Stats

I read 27 books, totaling just over 9000 pages. 13 of these were non-fiction, 6 hard science fiction and 8 pleb science fiction.

In 2017 I made an effort to rate books properly using the Goodreads scale (Did not like, it was OK, I liked it, I really liked it, it was amazing), hence the books are more distributed rating-wise than in the previous years, when most got 4 stars.


While only finished shortly after New Year, and thus not strictly a 2017 book even though I read 90% of it in 2017, my favorite for the year is The Righteous Mind: Why Good People are Divided by Politics and Religion. A few months ago I saw an interview with Jonathan Haidt, the author of this book. I was so impressed that I went looking for books written by him. As I was about to add his most recent work (this book) to my to-read list on Goodreads, I realized it was already on there, added a few months before. The book is about social and moral psychology and taught me many useful models of how people behave, where to look for one's blind spots, how to avoid polarization and how to understand many aspects of the political landscape. A highly recommended read as far as I am concerned.

Another non-fiction book that I found to be full of useful models that deepened my understanding of how the world works is The Psychopath Code: Cracking The Predators That Stalk Us. Since about 4% of people are psychopaths and thus systematically and mercilessly exploit others (while being very good at covering their tracks), you are almost guaranteed to significantly interact with some over the course of your life. Hence understanding how to detect them and prevent them from feeding on you or those you care about is an important skill. You can check out my highlights from the book to get a quick idea of the contents.



After having read a number of more mainstream Science Fiction books that do not emphasize the science or grand ideas the way proper Hard Science Fiction does, I picked up Incandescence, by Greg Egan, author of one of my 2015 favorite books. As expected from Greg Egan, the focus definitely is on the science and the ideas. The story is set in the The Amalgam galaxy, which I was already familiar with via a number of short stories.

Most of the book deals with a more primitive civilization gradually discovering physics from scratch, both through observation and theorization, eventually creating their own version of General Relativity. Following this is made extra challenging by the individuals in this civilization using their own terms for various concepts, such as the 6 different spatial directions. You can see this is not your usual SF book from this… different trailer that Greg made.

What is so great about this book is that Greg does not explain what is going on. He gradually drops clues that allow you to piece by piece get a better understanding of the situation, and how the story of the primitive civilization fits the wider context. For instance, you find out what “the Incandescence” is. At least you will if you are familiar with the relevant “space stuff”. While in this case it is hard to miss if you know the “space stuff”, the hints are never really confirmed. This is also true for perhaps the most important hinted at thing. I had a “Holy shit! Could it be… this is epic” moment when I made that connection.

Not a book I would recommend for people new to Hard SF, nor the best Egan book to start with. In any case, before reading this one, read some of the stories of The Amalgam.


Since I linked Incandescence so much, I decided to finally give Diaspora a go. Diaspora had been on my to-read list for years; I never got to it because of its low rating on Goodreads. (Rant: that is perhaps a foolish metric to look at when it comes to non-mainstream books, as I found out by reading some highly rated non-fiction books that turned out to be utter garbage geared to people with an IQ below 90, of whom apparently there are a lot.)

I loved the first chapter of Diaspora. The final chapters are epic. Spoiler ahead. How can a structure spanning 200 trillion universes and encoding the thoughts of an entire civilization be anything but epic? This beats the Xeelee ring in scale. The choices some of the characters make near the end make little sense to me, though it is still a great book overall; perhaps the not-making-sense even makes sense, considering how far these characters are from traditional humans.

January 18, 2018

Yesterday, I got some alerts for some nodes in the CentOS Infra, both from our monitoring system and from some folks reporting errors directly in our #centos-devel IRC channel on Freenode.

The impacted nodes were the ones we use for the mirrorlist service. For people who don't know what it's used for, here is a quick overview of what happens when you run "yum update" on your CentOS node:

  • yum analyzes the .repo files contained under /etc/yum.repos.d/
  • for CentOS repositories, it knows that it has to use a list of mirrors provided by a server hosted within the CentOS infra (mirrorlist=$releasever&arch=$basearch&repo=updates&infra=$infra )
  • yum then contacts one of the servers behind "" (we have 4 nodes so far: two in Europe and two in the USA, all available over IPv4 and IPv6)
  • mirrorlist checks the source IP and sends back a list of current/up-to-date mirrors in the same country (some GeoIP checks are done)
  • yum then opens connections to those validated mirrors

We monitor the response time for those services, and the average response time is usually < 1 sec (with some exceptions, mostly due to network latency for nodes in other continents). But yesterday the values were not only higher, but sometimes completely missing from our monitoring system: no data received at all. Here is a graph from our monitoring/Zabbix server:


So clearly something was happening, and it was time to find some patterns. Our monitoring also showed that the number of network connections tracked by the kernel was suddenly much higher than usual. As soon as your node does some state tracking with netfilter (for example with -m state --state ESTABLISHED,RELATED), it keeps that state in memory. You can easily retrieve the number of actively tracked connections like this:

cat /proc/sys/net/netfilter/nf_conntrack_count 

So it's easy to guess what happens when the maximum (/proc/sys/net/netfilter/nf_conntrack_max) is reached: the kernel drops packets (from dmesg):

nf_conntrack: table full, dropping packet
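To see how close a node is to that limit before packets start getting dropped, you can compare the two proc values directly. A quick sketch (the formatting is mine, the paths are the ones mentioned above):

```shell
# Compare current conntrack usage against the configured maximum.
count=$(cat /proc/sys/net/netfilter/nf_conntrack_count)
max=$(cat /proc/sys/net/netfilter/nf_conntrack_max)
echo "conntrack usage: ${count}/${max} ($(( count * 100 / max ))%)"
```

Watching that percentage over time (or graphing it, as we do in Zabbix) is what lets you catch the problem before the table fills up.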

The default value depends on the available memory, and it can be changed at runtime. Don't forget to also tune the hash table size (rule of thumb: nf_conntrack_max / 4). On the mirrorlist nodes, we had the default value of 262144 (so yes, that many connections tracked in memory), so to quickly get the service back in shape:

echo ${new_number} > /proc/sys/net/netfilter/nf_conntrack_max
echo $(( $new_number / 4 )) > /sys/module/nf_conntrack/parameters/hashsize

Another option was to flush the table (you can do that with conntrack -F, a tool from the conntrack-tools package), but it's really only a temporary fix, and it won't give you the information needed for proper troubleshooting (see below).

Here is the Zabbix graph showing that for some nodes the count was higher than the default limit, but the kernel was no longer dropping packets.


We could then confirm that the service was working fine again (no longer "flapping").

One could consider that the solution to the problem and stop the investigation there. But what is the root cause? What happened that opened so many (unclosed) connections to those mirrorlist nodes? Let's dive into the nf_conntrack table again!

Not only do you have the number of tracked connections (through /proc/sys/net/netfilter/nf_conntrack_count), but also the full details about each of them. So let's dump that into a file for analysis and try to find a pattern:

cat /proc/net/nf_conntrack > conntrack.list
awk '{print $7}' conntrack.list | sed 's/src=//' | sort | uniq -c | sort -rn | head

Here we go: the same range of IPs had thousands of ESTABLISHED connections on all our mirrorlist servers. I'm not going to give all the details (the goal of this blog post isn't finger pointing), but we had identified the issue. We contacted the network team behind those IPs to report the behaviour. It's still being tracked, but I wonder whether a firewall doing NAT on their side was simply never closing TCP connections at all; more to come.

At least the mirrorlist response time is now back to its usual state:


You can also let your configuration management set those parameters through a dedicated .conf file under /etc/sysctl.d/ to ensure they'll be applied automatically.
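As a sketch of persisting those settings (file names and the raised value are examples, not what we actually deployed):

```shell
# Persist a raised conntrack limit via a sysctl drop-in (example value).
new_max=524288
echo "net.netfilter.nf_conntrack_max = ${new_max}" > /etc/sysctl.d/90-conntrack.conf

# The hash size is a module parameter, not a sysctl; persist it through
# modprobe options, using the nf_conntrack_max / 4 rule of thumb.
echo "options nf_conntrack hashsize=$(( new_max / 4 ))" > /etc/modprobe.d/nf_conntrack.conf
```

The sysctl drop-in is applied at boot (or with `sysctl --system`), while the hashsize option takes effect the next time the nf_conntrack module is loaded.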

I published the following diary on “Comment your Packet Captures!“:

When you are investigating a security incident, a key element is to take notes and to document as much as possible. There is no “best” way to take notes, some people use electronic solutions while others are using good old paper and pencil. Just keep in mind: it must be properly performed if your notes will be used as evidence later… With investigations, there are also chances that you will have to deal with packet captures… [Read more]


[The post [SANS ISC] Comment your Packet Captures! has been first published on /dev/random]

January 17, 2018

The post Red Hat reverts microcode update to mitigate Spectre, refers to hardware vendors for fix appeared first on

The content of this message is behind a pay/subscription wall, so let me highlight the most important aspects. Red Hat just informed its clients that it will roll back a microcode update that was designed to mitigate the Spectre attack (variant 2).

This was in their e-mail notification:

Latest microcode_ctl package will not contain mitigation for CVE-2017-5715 (Spectre, Variant 2)

Historically, for certain systems, Red Hat has provided updated microprocessor firmware, developed by our microprocessor partners, as a customer convenience. Further testing has uncovered problems with the microcode provided along with the “Spectre” CVE-2017-5715 mitigation that could lead to system instabilities. As a result, Red Hat is providing a microcode update that reverts to the last known and tested microcode version dated before 03 January 2018 and does not address “Spectre” CVE-2017-5715.

In order to mitigate “Spectre” CVE-2017-5715 fully, Red Hat strongly recommends that customers contact their hardware provider for the latest microprocessor firmware updates.

Here's the relevant bit from their KB article.

Red Hat Security is currently recommending that subscribers contact their CPU OEM vendor to download the latest microcode/firmware for their processor.

The latest microcode_ctl and linux-firmware packages from Red Hat do not include resolutions to the CVE-2017-5715 (variant 2) exploit. Red Hat is no longer providing microcode to address Spectre, variant 2, due to instabilities introduced that are causing customer systems to not boot.

The latest microcode_ctl and linux-firmware packages are reverting these unstable microprocessor firmware changes to versions that were known to be stable and well tested, released prior to the Spectre/Meltdown embargo lift date on Jan 3rd. Customers are advised to contact their silicon vendor to get the latest microcode for their particular processor.

Source: What CPU microcode is available via the microcode_ctl package to mitigate CVE-2017-5715 (variant 2)?

This will also affect derived distributions like CentOS, which we use heavily at Nucleus. This patching round isn't over, that's for sure.

The post Red Hat reverts microcode update to mitigate Spectre, refers to hardware vendors for fix appeared first on

A good configuration baseline has a readable structure that allows all stakeholders to quickly see if the baseline is complete, as well as find a particular setting regardless of the technology. In this blog post, I'll cover a possible structure of the baseline which attempts to be sufficiently complete and technology agnostic.

If you haven't read the blog post on documenting configuration changes, it might be a good idea to do so as it declares the scope of configuration baselines and why I think XCCDF is a good match for this.

January 16, 2018

We have just published the second set of interviews with our main track speakers. The following interviews give you a lot of interesting reading material about various topics: Howard Chu: Inside Monero. The world's first fungible cryptocurrency Liam Proven: The circuit less traveled. Investigating some alternate histories of computing. Manoj Pillai, Krutika Dhananjay and Raghavendra Gowdappa: Optimizing Software Defined Storage for the Age of Flash Michael Meeks: Re-structuring a giant, ancient code-base for new platforms. Making LibreOffice work well everywhere. Milan Broz: Data integrity protection with cryptsetup tools. what is the Linux dm-integrity module and why we extended dm-crypt to…

January 15, 2018

Seventeen years ago today, I open-sourced the software behind and released Drupal 1.0.0. When Drupal was first founded, Google was in its infancy, the mobile web didn't exist, and JavaScript was a very unpopular word among developers.

Over the course of the past seventeen years, I've witnessed the nature of the web change and countless internet trends come and go. As we celebrate Drupal's birthday, I'm proud to say it's one of the few content management systems that has stayed relevant for this long.

While the course of my career has evolved, Drupal has always remained a constant. It's what inspires me every day, and the impact that Drupal continues to make energizes me. Millions of people around the globe depend on Drupal to deliver their business, mission and purpose. Looking at the Drupal users in the video below gives me goosebumps.

Drupal's success is not only marked by the organizations it supports, but also by our community that makes the project more than just the software. While there were hurdles in 2017, there were plenty of milestones, too:

  • At least 190,000 sites running Drupal 8, up from 105,000 sites in January 2016 (80% year over year growth)
  • 1,597 stable modules for Drupal 8, up from 810 in January 2016 (95% year over year growth)
  • 4,941 DrupalCon attendees in 2017
  • 41 DrupalCamps held in 16 different countries in the world
  • 7,240 individual code contributors, a 28% increase compared to 2016
  • 889 organizations that contributed code, a 26% increase compared to 2016
  • 13+ million visitors to in 2017
  • 76,374 instance hours for running automated tests (the equivalent of almost 9 years of continuous testing in one year)

Since Drupal 1.0.0 was released, our community's ability to challenge the status quo, embrace evolution and remain resilient has never faltered. 2018 will be a big year for Drupal as we will continue to tackle important initiatives that not only improve Drupal's ease of use and maintenance, but also to propel Drupal into new markets. No matter the challenge, I'm confident that the spirit and passion of our community will continue to grow Drupal for many birthdays to come.

Tonight, we're going to celebrate Drupal's birthday with a warm skillet chocolate chip cookie topped with vanilla ice cream. Drupal loves chocolate! ;-)

Note: The video was created by Acquia, but it is freely available for anyone to use when selling or promoting Drupal.

January 14, 2018

January 13, 2018

The post I’m taking a break from cron.weekly appeared first on

A little over 2 years ago I started a weekly newsletter for Linux & open source users, called cron.weekly. Today, I'm sending the last issue in what is probably going to be a pretty long time. I need a break.

Here's why.

tl;dr: I've got a wife, 2 kids, a (more than) full time job, 2 other side projects and a Netflix subscription. For now, cron.weekly doesn't fit in that list anymore.

The good :-)

I started cron.weekly out of a need. A need to read more technical content that I couldn't seem to find in a convenient form. So I started reading news & blogs more intensely and bookmarking whatever I found fascinating. Every week, that turned into a newsletter.

It was good timing for me, too. A few years ago my role at Nucleus, my employer, shifted from a purely technical one to the role of being a manager/management. It meant I was losing my touch with open source, projects, new releases, ... as it was no longer a core part of my role.

Writing cron.weekly forced me, on a weekly basis, to keep up with all the news, to read about new releases, to find new projects. It forced me to stay up-to-date, even if my job didn't directly require or allow it.

The bad :-|

What started as a hobby project quickly grew. At first, a handful of subscribers. After 2 years, a whopping 8,000 monthly newsletter readers, and a couple of thousand more who read it via the web or the Reddit posts. I'm proud of that reach!

But my initial mistake became worse by the week: I called it a weekly newsletter that I send every Sunday.

That was my tagline: "cron.weekly is a weekly newsletter, delivered to you every Sunday, with news & tools tailored to Linux sysadmins.".

Weekly implies a never-ending-commitment and Sunday implies a weekly deadline. In the weekend.

In short, in the last 2 years I've spent at least one evening per weekend -- without a break -- writing a cron.weekly issue. At first because I loved it, but towards the end more because I had to. I had to, because I also found a way to monetize my newsletter: sponsors.

I won't lie, running cron.weekly has been my most profitable side business to date. Factor 10x more than all the others. But it's no passive income: it requires a newsletter issue every week, on the clock. And it's a lot of writing & thinking, not a 10 minute write-up every week.

Having sponsors meant I had money coming in, justifying my time. But having sponsors also meant I had schedules, deals, commitments, ... that needed to be upheld. Some sponsors want to time their new software launch with a big campaign (of which cron.weekly would be one aspect), so I can't just shift them around on a weekly basis. Sponsors -- rightfully so -- want to know when they get featured.

Adding sponsors turned it from a hobby to a job. At least, that's how it feels. It's no longer a spontaneous non-committal newsletter, it's now a business.

The ugly :-(

In all honesty, I'm burned out from writing cron.weekly. Not from Linux or open source in general, nor from my day job; I'm tired of writing cron.weekly. I'm just tired, in general. I had to force myself to write it. Toward the end, I dreaded it.

If I couldn't get it done on Friday evening, I would spend the rest of the weekend worrying that I couldn't get it done in time. It would keep haunting me in the back of my head "you need to write cron.weekly".

I did this to myself; it's my own bloody fault. It should have been cron.random or cron.monthly. A weekly newsletter is intense & requires a lot of commitment, something I can't give at the moment.

So here we are ...

Among my other side gigs/hobbies are DNS Spy, Oh Dear, a newly found love for cryptocurrencies, ... and cron.weekly just doesn't fit at the moment.

As a result, I'm going to stop cron.weekly. For now. I don't want to say I completely quit, because I might pick it back up again.

But for now, I need a mental break from the weekly deadlines and to be able to enjoy my weekends, once again. Life's busy enough already.

If cron.weekly returns, it will give me the ability to rethink the newsletter, the timings & my commitments. Taking a break will allow me to re-launch it in a way that would fit in my life, in my family and in my hobbies.

I hope you enjoyed cron.weekly in the last 2 years. Who knows, you might receive a new surprise issue in a couple of months if I start again!

PS: I will never sell cron.weekly, nor the email list behind it. I appreciate the faith you put in me by giving me your e-mail address; that information remains closed and guarded. You won't be spammed.

The post I’m taking a break from cron.weekly appeared first on

With FOSDEM just around the corner, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch. Food will be provided. Would you like to be part of the team that makes FOSDEM tick? Sign up here! You could…

January 11, 2018

I published the following diary on “Mining or Nothing!“:

Cryptocurrency mining has been a trending attack for a few weeks. Our idling CPUs are now targeted by bad guys who are looking to generate some extra revenue by abusing our resources. Other fellow handlers already posted diaries about this topic. Renato found a campaign based on a WebLogic exploit[1] and Jim detected a peak of activity on port 3333[2]… [Read more]

[The post [SANS ISC] Mining or Nothing! has been first published on /dev/random]

January 10, 2018

In this post, I'm providing some guidance on how and when to decouple Drupal.

Almost two years ago, I wrote a blog post called "How should you decouple Drupal?". Many people have found the flowchart in that post to be useful in their decision-making on how to approach their Drupal architectures. Since that point, Drupal, its community, and the surrounding market have evolved, and the original flowchart needs a big update.

Drupal's API-first initiative has introduced new capabilities, and we've seen the advent of the Waterwheel ecosystem and API-first distributions like Reservoir, Headless Lightning, and Contenta. More developers both inside and outside the Drupal community are experimenting with Node.js and adopting fully decoupled architectures. As a result, Acquia now offers Node.js hosting, which means it's never been easier to implement decoupled Drupal on the Acquia platform.

Let's start with the new flowchart in full:

All the ways to decouple Drupal

The traditional approach to Drupal architecture, also referred to as coupled Drupal, is a monolithic implementation where Drupal maintains control over all front-end and back-end concerns. This is Drupal as we've known it — ideal for traditional websites. If you're a content creator, keeping Drupal in its coupled form is the optimal approach, especially if you want to achieve a fast time to market without as much reliance on front-end developers. But traditional Drupal 8 also remains a great approach for developers who love Drupal 8 and want it to own the entire stack.

A second approach, progressively decoupled Drupal, strikes a balance between editorial needs like layout management and developer desires to use more JavaScript, by interpolating a JavaScript framework into the Drupal front end. Progressive decoupling is in fact a spectrum, ranging from Drupal rendering only the page's shell and populating initial data, to JavaScript controlling only explicitly delineated sections of the page. Progressively decoupled Drupal hasn't taken the world by storm, likely because it's a mixture of both JavaScript and PHP and doesn't take advantage of server-side rendering via Node.js. Nonetheless, it's an attractive approach because it strikes compromises that offer features important to both editors and developers.

Last but not least, fully decoupled Drupal has gained more attention in recent years as the growth of JavaScript continues with no signs of slowing down. This involves a complete separation of concerns between the structure of your content and its presentation. In short, it's like treating your web experience as just another application that needs to be served content. Even though it results in a loss of some out-of-the-box CMS functionality such as in-place editing or content preview, it's been popular because of the freedom and control it offers front-end developers.

What do you intend to build?

The most important question to ask is what you are trying to build.

  1. If your plan is to create a single standalone website or web application, decoupling Drupal may or may not be the right choice based on the must-have features your developers and editors are asking for.
  2. If your plan is to create multiple experiences (including web, native mobile, IoT, etc.), you can use Drupal to provide web service APIs that serve content to other experiences, either as (a) a content repository with no public-facing component or (b) a traditional website that is also a content repository at the same time.

Ultimately, your needs will determine the usefulness of decoupled Drupal for your use case. There is no technical reason to decouple if you're building a standalone website that needs editorial capabilities, though some teams still prefer to decouple because they favor JavaScript over PHP. Nonetheless, you need to pay close attention to the needs of your editors and ensure you aren't removing crucial features by using a decoupled approach. By the same token, you can't avoid decoupling Drupal if you're using it as a content repository for IoT or native applications. The next part of the flowchart will help you weigh those trade-offs.

Today, Drupal makes it much easier to build applications consuming decoupled Drupal. Even if you're using Drupal as a content repository to serve content to other applications, well-understood specifications like JSON API, GraphQL, OpenAPI, and CouchDB significantly lower its learning curve and open the door to tooling ecosystems provided by the communities who wrote those standards. In addition, there are now API-first distributions optimized to serve as content repositories and SDKs like Waterwheel.js that help developers "speak" Drupal.
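To illustrate how low that learning curve is: with the JSON API module enabled, a Drupal 8 site exposes its content under well-known paths, and a consumer only needs a plain HTTP request. A sketch (example.com is a placeholder, and the module being enabled is an assumption about your site):

```shell
# Fetch the five newest article nodes via the JSON API module's standard
# collection path, with sorting and paging expressed as query parameters
# (page[limit] is percent-encoded for the shell).
base="https://example.com"
url="${base}/jsonapi/node/article?sort=-created&page%5Blimit%5D=5"
curl -s -H "Accept: application/vnd.api+json" "$url"
```

The response is a standard JSON:API document, so any generic JSON:API client library can consume it without knowing anything Drupal-specific.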

Are there things you can't live without?

Perhaps most critical to any decision to decouple Drupal is the must-have feature set desired for both editors and developers. In order to determine whether you should use a decoupled Drupal, it's important to isolate which features are most valuable for your editors and developers. Unfortunately, there are no black-and-white answers here; every project will have to weigh the different pros and cons.

For example, many marketing teams choose a CMS because they want to create landing pages, and a CMS gives them the ability to lay out content on a page, quickly reorganize a page and more. The ability to do all this without the aid of a developer can make or break a CMS in marketers' eyes. Similarly, many digital marketers value the option to edit content in the context of its preview and to do so across various workflow states. These kind of features typically get lost in a fully decoupled setting where Drupal does not exert control over the front end.

On the other hand, the need for control over the visual presentation of content can hinder developers who want to craft nuanced interactions or build user experiences in a particular way. Moreover, developer teams often want to use the latest and greatest technologies, and JavaScript is no exception. Nowadays, more JavaScript developers are including modern techniques, like server-side rendering and ES6 transpilation, in their toolboxes, and this is something decision-makers should take into account as well.

How you reconcile this tension between developers' needs and editors' requirements will dictate which approach you choose. For teams that have an entirely editorial focus and lack developer resources — or whose needs are focused on the ability to edit, place, and preview content in context — decoupling Drupal will remove all of the critical linkages within Drupal that allow editors to make such visual changes. But for teams with developers itching to have more flexibility and who don't need to cater to editors or marketers, fully decoupled Drupal can be freeing and allow developers to explore new paradigms in the industry — with the caveat that many of those features that editors value are now unavailable.

What will the future hold?

In the future, and in light of the rapid evolution of decoupled Drupal, my hope is that Drupal keeps shrinking the gap between developers and editors. After all, this was the original goal of the CMS in the first place: to help content authors write and assemble their own websites. Drupal's history has always been a balancing act between editorial needs and developers' needs, even as the number of experiences driven by Drupal grows.

I believe the next big hurdle is how to begin enabling marketers to administer all of the other channels appearing now and in the future with as much ease as they manage websites in Drupal today. In an ideal future, a content creator can build a content model once, preview content on every channel, and use familiar tools to edit and place content, regardless of whether the channel in question is mobile, chatbots, digital signs, or even augmented reality.

Today, developers are beginning to use Drupal not just as a content repository for their various applications but also as a means to create custom editorial interfaces. It's my hope that we'll see more experimentation around conceiving new editorial interfaces that help give content creators the control they need over a growing number of channels. At that point, I'm sure we'll need another new flowchart.


Thankfully, Drupal is in the right place at the right time. We've anticipated the new world of decoupled CMS architectures with web services in Drupal 8 and older contributed modules. More recently, API-first distributions, SDKs, and even reference applications in Ember and React are giving developers who have never heard of Drupal the tools to interact with it in unprecedented ways. Moreover, for Acquia customers, Acquia's recent launch of Node.js hosting on Acquia Cloud means that developers can leverage the most modern approaches in JavaScript while benefiting from Drupal's capabilities as a content repository.

Unlike many other content management systems, old and new, Drupal provides a spectrum of architectural possibilities tuned to the diverse needs of different organizations. This flexibility between fully decoupling Drupal, progressively decoupling it, and traditional Drupal — in addition to each solution's proven robustness in the wild — gives teams the ability to make an educated decision about the best approach for them. This optionality sets Drupal apart from new headless content management systems and most SaaS platforms, and it also shows Drupal's maturity as a decoupled CMS over WordPress. In other words, it doesn't matter what the team looks like or what the project's requirements are; Drupal has the answer.

Special thanks to Preston So for contributions to this blog post and to Alex Bronstein, Angie Byron, Gabe Sullice, Samuel Mortenson, Ted Bowman and Wim Leers for their feedback during the writing process.

Updating the VCSA is easy when it has internet access or if you can mount the update ISO. On a private network, VMware assumes you have a webserver that can serve up the updaterepo files. In this article, we'll look at how to proceed when the VCSA is on a private network where internet access is blocked and no webserver is available. The VCSA and PSC contain their own webserver that can be used for an HTTP-based update. This procedure was tested on PSC/VCSA 6.0.

Follow these steps:

  • First, download the update repo zip (e.g. for 6.0 U3A, the filename is ) 
  • Transfer the updaterepo zip to a PSC or VCSA that will be used as the server. You can use Putty's pscp.exe on Windows or scp on Mac/Linux, but you'd have to run "chsh -s /bin/bash root" in the CLI shell before using pscp.exe/scp if your PSC/VCSA is set up with the appliancesh. 
    • chsh -s /bin/bash root
    • "c:\program files (x86)\putty\pscp.exe" VMware* root@psc-name-or-address:/tmp 
  • Change your PSC/VCSA root access back to the appliancesh if you changed it earlier: 
    • chsh -s /bin/appliancesh root
  • Make a directory for the repository files and unpack the updaterepo files there:
    • mkdir /srv/www/htdocs/6u3
    • chmod go+rx /srv/www/htdocs/6u3
    • cd /srv/www/htdocs/6u3
    • unzip /tmp/VMware-vCenter*
    • rm /tmp/VMware-vCenter*
  • Create a redirect using the HTTP rhttpproxy listener and restart it
    • echo "/6u3 local 7000 allow allow" > /etc/vmware-rhttpproxy/endpoints.conf.d/temp-update.conf 
    • /etc/init.d/vmware-rhttpproxy restart 
  • Create a /tmp/nginx.conf (I copied /etc/nginx/nginx.conf, changed "listen 80" to "listen 7000" and changed "mime.types" to "/etc/nginx/mime.types")
  • Start nginx
    • nginx -c /tmp/nginx.conf
  • Start the update via the VAMI. Change the repository URL in the settings: use http://psc-name-or-address/6u3/ as the repository URL, then use "Check URL". 
  • Afterwards, clean up: 
    • killall nginx
    • cd /srv/www/htdocs; rm -rf 6u3
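The nginx.conf step in the middle of those bullets can also be scripted. This is just a sketch: the sed patterns assume the stock config contains `listen 80;` and `include mime.types;` literally, as described above.

```shell
# Reproduce the two manual edits: copy the stock config, switch the listen
# port to 7000 (where the rhttpproxy redirect points), and make the
# mime.types include an absolute path so nginx finds it from /tmp.
cp /etc/nginx/nginx.conf /tmp/nginx.conf
sed -i 's/listen[[:space:]]*80;/listen 7000;/' /tmp/nginx.conf
sed -i 's#include[[:space:]]*mime\.types;#include /etc/nginx/mime.types;#' /tmp/nginx.conf
nginx -c /tmp/nginx.conf
```

If your nginx.conf differs from that layout, adjust the patterns accordingly; the end state is simply an nginx listening on port 7000 with a valid mime.types include.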

P.S. I personally tested this using a PSC as webserver to update both that PSC, and also a VCSA appliance.
P.P.S. VMware released an update for VCSA 6.0 and 6.5 on the day I wrote this. For 6.0, the latest version is U3B at the time of writing, while I updated to U3A.

January 09, 2018

That's something I should have blogged about earlier, but I almost forgot about it until I read on Twitter that other people had replaced their home network equipment with Ubnt/Ubiquiti gear, which reminded me it was on my 'TOBLOG' list.

During the winter holidays the whole family was at home, kids on the WiFi network included. Of course I already had a different WLAN for them, segregated from the main one, but plenty of things weren't really working on that crappy device. So it was time to set up something else. I had the opportunity to play with some Ubiquiti devices in the past, so even an old UniFi UAP model was enough for my needs (I just need an access point; routing/firewall is done on something else).

If you've already played with those devices, you know that you need a controller to set them up, and because it's 'only' a Java/MongoDB stack, I thought it would be trivial to run on a low-end device like a Raspberry Pi 3 (not limited to that; any armhfp board on which you can run CentOS would work).

After having installed CentOS 7 armhfp minimal on the device, and once logged in, I just had to add the mandatory unofficial EPEL rebuild repository for MongoDB:

cat > /etc/yum.repos.d/epel.repo << EOF
[epel]
name=Epel rebuild for armhfp
baseurl=
gpgcheck=0
EOF

After that, I just installed what's required to run the application:

yum install mongodb mongodb-server java-1.8.0-openjdk-headless -y

The "interesting" part is that Ubnt now only provides .deb packages, so we just have to download/extract what we need (it's all Java code) and start it:

tmp_dir=$(mktemp -d)
cd $tmp_dir
curl -O
ar vx unifi_sysvinit_all.deb   # .deb files are plain "ar" archives
tar xvf data.tar.xz
mv usr/lib/unifi/ /opt/UniFi
cd /opt/UniFi/bin
/bin/rm -Rf $tmp_dir
ln -s /bin/mongod              # UniFi expects mongod next to its own binaries

You can start it "by hand", but let's create a simple systemd unit file and use that directly:

cat > /etc/systemd/system/unifi.service << EOF
[Unit]
Description=UBNT UniFi Controller

[Service]
ExecStart=/usr/bin/java -jar /opt/UniFi/lib/ace.jar start
ExecStop=/usr/bin/java -jar /opt/UniFi/lib/ace.jar stop

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable unifi --now

Don't forget that:

  • it's "Java"
  • it's running on a slow armhfp processor

So it will take some time to initialize. You can follow progress in /opt/UniFi/logs/server.log and wait for the TLS port (8443) to be opened:

while true ; do sleep 1 ; ss -tanp|grep 8443 && break ; done
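The loop above runs forever if the service never comes up. A small variation with a timeout (a sketch, assuming the same iproute2 `ss` tool) returns a usable exit code instead:

```shell
# Wait up to TIMEOUT seconds for a TCP listener on PORT.
# Returns 0 once the port is open, 1 on timeout.
wait_for_port() {
  local port=$1 timeout=${2:-300} waited=0
  while ! ss -tan | grep -q ":${port} " ; do
    sleep 1
    waited=$((waited + 1))
    if [ "$waited" -ge "$timeout" ]; then
      return 1
    fi
  done
  return 0
}
```

For example: `wait_for_port 8443 600 || echo "controller did not come up in time"`.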

Don't forget to open the needed ports in the firewall, and you can then reach the UniFi controller running on your armhfp board.
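On CentOS 7 that typically means firewalld. As a sketch (the port numbers below are the UniFi defaults; verify them against your own setup):

```shell
firewall-cmd --permanent --add-port=8443/tcp   # controller web UI
firewall-cmd --permanent --add-port=8080/tcp   # device inform port
firewall-cmd --permanent --add-port=3478/udp   # STUN
firewall-cmd --reload
```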

January 08, 2018

The entrance to Acquia's headquarters in Boston.

For the past nine years, I've sat down every January to write an Acquia retrospective. It's always a rewarding blog post to write as it gives me an opportunity to reflect on what Acquia has accomplished over the past 12 months. If you'd like to read my previous annual retrospectives, they can be found here: 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009. When read together, they provide insight into what has shaped Acquia into the company it is today.

This year's retrospective is especially meaningful because 2017 marked Acquia's 10th year as a company. Over the course of Acquia's first decade, our long-term investment in open source and cloud has made us the leader in web content management. 2017 was one of our most transformative years to date; not only did we have to manage leadership changes, but we also broadened our horizons beyond website management to data-driven customer journeys.

The next phase of Acquia leadership

Tom Erickson joined Acquia as CEO in 2009 and worked side-by-side with me for the next eight years.

In my first retrospective from 2009, I shared that Jay Batson and I had asked Tom Erickson to come aboard as Acquia's new CEO. For the next eight years, Tom and I worked side-by-side to build and grow Acquia. Tom's expertise in taking young companies to the next level was a natural complement to my technical strength. His leadership was an example that enabled me to develop my own business building skills. When Tom announced last spring that he would be stepping down as Acquia's CEO, I assumed more responsibility to help guide the company through the transition. My priorities for 2017 were centered around three objectives: (1) the search for a new CEO, (2) expanding our product strategy through a stronger focus on innovation, and (3) running our operations more efficiently.

The search for a new CEO consumed a great deal of my time in 2017. After screening over 140 candidates and interviewing ten of them in-depth, we asked Mike Sullivan to join Acquia as CEO. Mike has been on the job for three weeks and I couldn't be more excited.

Mike Sullivan joins Acquia as CEO with 25 years of senior leadership in SaaS, enterprise content management and content governance.

Market trends

I see three major market trends that I believe are important to highlight and that help inform our strategy.

Trend #1: Customers are driven by time-to-value and low cost of maintenance

Time-to-value and low maintenance costs are emerging as two of the most important differentiators in the market. This is consistent with a post I wrote eleven years ago, in regards to The Ockham's Razor Principle of Content Management Systems. The principle states that given two functionally equivalent content management systems, the simplest one should be selected. Across both the low and the high ends of the market, time-to-value and total cost of ownership matter a great deal. Simplicity wins.

In the low end of the market simple sites, such as single blogs and brochure sites, are now best served by SaaS tools such as Squarespace and Wix. Over the past five years, SaaS solutions have been rising in prominence because their templated approach to simple site building makes them very easy to use. The total cost of ownership is also low as users don't have to update and maintain the software and infrastructure themselves. Today, I believe that Drupal is no longer ideal for most simple sites and instead is best suited for more ambitious use cases. Not everyone likes that statement, but I believe it to be true.

In the mid-market, SaaS tools don't offer the flexibility and customizability required to support sites with more complexity. Often mid-market companies need more customizable solutions like Drupal or WordPress. Time-to-value and total maintenance costs still matter; people don't want to spend a lot of time installing or upgrading their websites. Within the scope of Ockham's Razor Principle, WordPress does better than Drupal in this regard. WordPress is growing faster than Drupal for websites with medium complexity because ease of use and maintenance often precede functionality. However, when superior flexibility and architecture are critical to the success of building a site, Drupal will be selected.

In the enterprise, a growing emphasis on time-to-value means that customers are less interested in boil-the-ocean projects that cost hundreds of thousands (or millions) of dollars. Customers still want to do large and ambitious projects, but they want to start small, see results quickly, and measure their ROI every step along the way. Open source and cloud provide this agility by reducing time-to-market, cost and risk. This establishes a competitive advantage for Acquia compared to traditional enterprise vendors like Adobe and Sitecore.

At Acquia, understanding how we can make our products easier to use by enhancing self-service and reducing complexity will be a major focus of 2018. For Drupal, it means we have to stay focused on the initiatives that will improve usability and time to value. In addition to adopting a JavaScript framework in core to facilitate the building of a better administration experience, work needs to continue on Workspaces (content staging), Layout Builder (drag-and-drop blocks), and the Media, Outside-in and Out-of-the-box initiatives. Finally, I anticipate that a Drupal initiative around automated upgrades will kick off in 2018. I'm proud to say that Acquia has been a prominent contributor to many of these initiatives, by either sponsoring developers, contributing code, or providing development support and coordination.

Trend #2: Frictionless user experiences require greater platform complexity

For the past ten years, I've observed one significant factor that continues to influence the trajectory of digital: the internet's continuous drive to mitigate friction in user experience and business models. The history of the web dictates that lower-friction solutions will surpass what came before them because they eliminate inefficiencies from the customer experience.

This not only applies to how technology is adopted, but how customer experiences are created. Mirroring Ockham's Razor Principle, end users and consumers also crave simplicity. End users are choosing to build relationships with brands that guarantee contextual, personalized and frictionless interactions. However, simplicity for end users does not translate into simplicity for CMS owners. Organizations need to be able to manage more data, channels and integrations to deliver the engaging experiences that end users now expect. This desire on the part of end users creates greater platform complexity for CMS owners.

For example, cross-channel experiences are starting to remove certain inefficiencies around traditional websites. In order to optimize the customer experience, enterprise vendors must now expand their digital capabilities beyond web content management and invest in both systems of engagement (various front-end solutions such as conversational interfaces, chatbots, and AR/VR) and systems of intelligence (marketing tools for personalization and predictive analytics).

This year, Acquia Labs built a demo to explore how augmented reality can improve shopping experiences.

These trends give organizations the opportunity to reimagine their customer experience. By taking advantage of more channels and more data (e.g. being more intelligent, personalized, and contextualized), we can leapfrog existing customer experiences. However, these ambitious experiences require a platform that prioritizes customization and functionality.

Trend #3: The decoupled CMS market is taking the world by storm

In the web development world, few trends are spreading more rapidly than decoupled content management systems. The momentum is staggering as some decoupled CMS vendors are growing at a rate of 150% year over year. This trend has a significant influence on the technology landscape surrounding Drupal, as a growing number of Drupal agencies have also started using modern JavaScript technologies. For example, more than 50% of Drupal agencies are also using Node.js to support the needs of their customers.

The Drupal community's emphasis on making Drupal API-first, in addition to supporting tools such as Waterwheel and Drupal distributions such as Reservoir, Contenta and Lightning, means that Drupal 8 is well-prepared to support decoupled CMS strategies. For years, including in 2017, Acquia has been a very prominent contributor to a variety of API-first initiatives.

Product milestones

In addition to my focus on finding a new CEO, driving innovation to expand our product offering was another primary focus in 2017.

Throughout Acquia's first decade, we've been focused primarily on providing our customers with the tools and services necessary to scale and succeed with Drupal. We've been very successful with this mission. However, many of our customers need more than content management to be digital winners. The ability to orchestrate customer experiences across different channels is increasingly important to our customers' success. We need to be able to support these efforts on the Acquia platform.

We kicked off our new product strategy by adding new products to our portfolio, and by extending our existing products with new capabilities that align with our customers' evolving needs.

  • Acquia Cloud: A "continuous integration" and "continuous delivery" service for developers was our #1 requested feature, so we delivered Acquia Cloud CD early in 2017. Later in the year, we expanded Acquia Cloud to support Node.js, the popular open-source JavaScript runtime. This was the first time we expanded our cloud beyond Drupal. Previously, if an organization wanted to build a decoupled Drupal architecture with Node.js, it was not able to host the Node.js application on Acquia Cloud. Finally, in order to make Acquia Cloud easier to use, we started to focus more on self-service. We saw rapid customer adoption of our new Stack Metrics feature, which gives customers valuable insight into performance and utilization. We also introduced a new Cloud Service Management model, which empowers our customers to scale their Acquia Cloud infrastructure on the fly.
  • Acquia Lift: In order to best support our customers as they embed personalization into their digital strategies, we have continued to add product enhancements to the new version of Acquia Lift. This included improved content authoring capabilities, enhanced content recommendations, and advanced analytics and reporting. The Acquia Lift team grew, as we also founded a machine learning and artificial intelligence team, which will lead to new features and products in 2018. In 2017, Acquia Lift added over 200 new features, tracked 200% more profiles than in 2016, and grew 45% in revenue.
A continuous journey across multiple digital touch points and devices

Next, we added two new products to support our evolution from content management to data-driven customer journeys: Acquia Journey and Acquia Digital Asset Manager (DAM).

  • Acquia Journey allows marketers to easily map, assemble, orchestrate and manage customer experiences across different channels. One of the strengths of Acquia Journey is that it allows technical teams to integrate many different technologies, from marketing and advertising technologies to CRM tools and commerce platforms. Acquia Journey unifies these various interaction points within a single user interface, making it possible to quickly assemble powerful and complex customer journeys. In turn, marketers can take advantage of a flowchart-style journey mapping tool with unified customer profiles and an automated decision engine to determine the best-next action for engaging customers.
  • Acquia DAM: Many organizations lack a single source of truth when it comes to managing digital assets. This challenge has been amplified as the number of assets has rapidly increased in a world with more devices, more channels, more campaigns, and more personalized and contextualized experiences. In addition to journey orchestration, it became clear that large organizations are seeking a digital asset management solution that centralizes control of creative assets for the entire company. With Acquia DAM, our customers can rely on one dedicated application to gather requirements, share drafts, consolidate feedback and collect approvals for high-value marketing assets.

Acquia's new product strategy is very ambitious. I'm proud of our stronger focus on innovation and the new features and products that we launched in 2017. Launching this many products and features is hard work and requires tactical coordination across every part of the company. The transition from a single-product company to a multi-product company is challenging, and I hope to share more lessons learned in future blog posts.

Acquia's product strategy for 2017 and beyond

While each new product we announced was well-received, there is still a lot of work to be done: we need to continue to drive end-user demand for our new products and help our digital agency partners build practices around them.

Leading by example

At Acquia, our mission is to deliver "the universal platform for the greatest digital experiences", and we want to lead by example. In an effort to become a thought-leader in our field, the Office of the CTO launched Acquia Labs, our research and innovation lab. Acquia Labs aims to link together the new realities in our market, our customers' needs in coming years, and the goals of Acquia's products and open-source efforts in the long term.

Finally, we rounded out the year by redesigning our website on Drupal 8. The new site places a greater emphasis on taking advantage of our own products. We wanted to show (not tell) the power of the Acquia platform. For example, Acquia Lift delivers visitors personalized content throughout the site. The new site represents a bolder and more innovative Acquia, aligned with the evolution of our product strategy.

Business momentum

We continued to grow at a steady pace in 2017 and hired a lot of new people. We focused on the growth of our recurring revenue, which includes new customers and the renewal and expansion of our work with existing customers. We also focused on our bottom line.

In 2017, the top industry analysts published very positive reviews based on their independent research. I'm proud that Acquia was recognized by Forrester Research as the leader for strategy and vision, ahead of every other vendor including Adobe and Sitecore, in The Forrester Wave: Web Content Management Systems, Q1 2017. Acquia was also named a leader in the 2017 Gartner Magic Quadrant for Web Content Management, marking our placement as a leader for the fourth year in a row. In addition to being the only leader that is open-source or has a cloud-first strategy, Acquia was hailed by analysts for our investments in open APIs across all our products.

Over the course of 2017, Acquia welcomed an impressive roster of new customers, including Astellas Pharma, Glanbia, the Commonwealth of Massachusetts, Hewlett Packard Enterprise, and Bayer GmbH. As we enter 2018, Acquia can count 26 of the Fortune 100 among its customers, up from 16 at the beginning of 2017.

This year was also an incredible growth period for our Asia Pacific business, which is growing ARR at a rate of 80% year over year. We have secured new business in Japan, Hong Kong, Singapore, Indonesia, Malaysia, Philippines and India. When we started our business in Australia in 2012, 70% of the pipeline came from govCMS, the platform offered by the Australian government to all national, territorial and local agencies. Today, our business is much more diverse, with 50% of the region's pipeline coming from outside of Australia.

Jeannie Finks, Director of Global Support Systems & Programs, accepting a Gold Stevie for Customer Service Team of the Year. Go team Acquia!

Customer success continues to be the most important driver of the evolution of Acquia's strategy. This commitment was reflected in 2017 customer satisfaction levels, which remains extremely high at 94 percent. Acquia's global support team also received top honors from the American Business Awards and won a Gold Stevie for Customer Service Team of the Year.

This year, we also saw our annual customer conference, Acquia Engage, grow. We welcomed over 650 people to Boston and saw presentations from over twenty customers, including Johnson & Johnson, NBC Sports, Whole Foods, AMD, the YMCA and many more. It was inspiring to hear our customers explain why Acquia and Drupal are essential to their business.

Finally, our partner ecosystem continues to advance. In 2016, we achieved a significant milestone as numerous global systems integrators repeatedly recommended Acquia to their clients. One year later, these partners are building large centers of excellence to scale their Acquia and Drupal practices. Digital agencies and Drupal companies also continue to extend their investments in Acquia, and are excited about the opportunity presented in our expanded product portfolio. In some markets, over 50 percent of our new subscriptions originate from our partner ecosystem.

The growth and performance of the partner community is validation of our strategy. For example, in 2017 we saw multiple agencies and integrators that were entirely committed to Adobe or Sitecore, join our program and begin to do business with us.

Opportunities for Acquia in 2018

When thinking about how Acquia has evolved its product strategy, I like to consider it in terms of Greylock's Jerry Chen's take on the stack of enterprise systems. I've modified his thesis to fit the context of Acquia and our long-term strategy to help organizations with their digital transformation.

Chen's thesis begins with "systems of record", which are sticky and defensible not only because of their data, but also based on the core business process they own. Jerry identifies three major systems of record today: your customers, your employees and your assets. CRM owns your customers (i.e. Salesforce), HCM owns your employees (i.e. Workday), and ERP/Financials owns your assets. Other applications can be built around a system of record but are usually not as valuable as the actual system of record. For example, marketing automation companies like Marketo and Responsys built big businesses around CRM, but never became as strategic or as valuable as Salesforce. We call these "secondary systems of record". We believe that a "content repository" (API-first Drupal) and a "user profile repository" (Acquia Lift) are secondary systems of record. We will continue our efforts to improve Drupal's content repository and Lift's user profile repository to become stronger systems of record.

Systems of engagement, intelligence, record and delivery

"Systems of engagement" are the interface between users and the systems of record. They control the end-user interactions. Drupal and Lift are great examples of systems of engagement as they allow for the rapid creation of end-user experiences.

Jerry Chen further suggests that "systems of intelligence" will be a third component. Systems of intelligence will be of critical importance for determining the optimal customer journey across various applications. Personalization (Acquia Lift), recommendations (Acquia Lift) and customer journey building (Acquia Journey) are systems of intelligence. They are very important initiatives for our future.

While Chen does not include "systems of delivery" in his thesis, I believe it is an important component. Systems of delivery not only dictate how content is delivered to users, but how organizations build projects faster and more efficiently for their stakeholders and users. This includes multi-site management (Acquia Cloud Site Factory) and continuous delivery services (Acquia Cloud CD), which extend the benefits of PaaS beyond scalability and reliability to include high-productivity and faster time-to-value for our customers. As organizations increase their investments in cross-channel experiences, they must manage more complexity and orchestrate the testing, integration and deployment of different technologies. Systems of delivery, such as Acquia Cloud and Acquia Site Factory, remove complexity from building and managing modern digital experiences.

This is all consistent with the diagram I've been using for a few years now where "user profile" and "content repository" represent two systems of record, getBestNextExperience() is the system of intelligence, and Drupal is the system of engagement to build the customer experience:

Systems of engagement, intelligence, record and delivery

We are confident in the market shift towards "intelligent connected experiences" or "data-driven customer journeys" and the opportunity it provides to Acquia. Every team at Acquia has demonstrated both commitment and focus as we have initiated a shift to make our vision relevant in the market for years to come. I believe we have strong investments across systems of record, intelligence, delivery and engagement that will continue to put us at the center of our customers' technology and digital strategies in 2027.

Thank you

Of course, none of these 2017 results and milestones would be possible without the hard work of the Acquia team, our customers, partners, the Drupal community, and our many friends. Thank you for your support in 2017 and over the past ten years – I can't wait to see what the next decade will bring!

January 07, 2018

IT teams are continuously under pressure to set up and maintain infrastructure services quickly, efficiently and securely. As an infrastructure architect, my main concerns are related to the manageability of these services and the secure setup. And within those realms, a properly documented configuration setup is in my opinion very crucial.

In this blog post series, I'm going to look into using the Extensible Configuration Checklist Description Format (XCCDF) as the way to document these. This first post is a functional introduction to XCCDF and where I see it fitting in.

During the holidays we performed some interviews with main track speakers from various tracks. To get up to speed with the topics discussed in the main track talks, you can start reading the following interviews:

  • Diomidis Spinellis: Unix Architecture Evolution from the 1970 PDP-7 to the 2017 FreeBSD. Important Milestones and Lessons Learned
  • Mary Bennett: Reimagining EDSAC in open source. A valve computer reimplemented using FPGAs, Arduinos, 3D printing and discrete electronics
  • Robert Foss: Running Android on the Mainline Graphics Stack
  • Stefan Behnel: Lift your Speed Limits with Cython. Fast native code for Python
  • Steven Goodwin: Digital Archaeology. Maintaining…

January 05, 2018