Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

February 20, 2017

This is an ode to Dirk Engling’s OpenTracker.

It’s a BitTorrent tracker.

It’s what powered The Pirate Bay in 2007–2009.

I’ve been using it to power the downloads on since the end of November 2010. That’s more than six years. It has facilitated 9,839,566 downloads between December 1, 2010 and today. Almost 10 million downloads!


It’s one of the most stable pieces of software I’ve ever encountered. I compiled it in 2010, and it has never once crashed.

wim@ajax:~$ ls -al /data/opentracker
total 456
drwxr-xr-x  3 wim  wim   4096 Feb 11 01:02 .
drwxr-x--x 10 root wim   4096 Mar  8  2012 ..
-rwxr-xr-x  1 wim  wim  84824 Nov 29  2010 opentracker
-rw-r--r--  1 wim  wim   3538 Nov 29  2010 opentracker.conf
drwxr-xr-x  4 wim  wim   4096 Nov 19  2010 src
-rw-r--r--  1 wim  wim 243611 Nov 19  2010 src.tgz
-rwxrwxrwx  1 wim  wim  14022 Dec 24  2012 whitelist


The simplicity is fantastic. Getting started is fantastically simple: git clone git:// .; make; ./opentracker, and you’re up and running. Let me quote a bit from its homepage, to show that it goes the extra mile to make users successful:

opentracker can be run by just typing ./opentracker. This will make opentracker bind to and happily serve all torrents presented to it. If ran as root, opentracker will immediately chroot to . and drop all privileges after binding to whatever tcp or udp ports it is requested.

Emphasis mine. And I can’t emphasize my emphasis enough.

Performance & efficiency

All the while handling dozens of requests per second, opentracker causes less load than background processes of the OS. Let me again quote a bit from its homepage:

opentracker can easily serve multiple thousands of requests on a standard plastic WLAN-router, limited only by your kernels capabilities ;)

That’s also what it said in 2010. I didn’t test it on a “plastic WLAN-router”, but everything I’ve seen confirms it.


Its defaults are sane, but what if you want to have a whitelist?

  1. Uncomment the #FEATURES+=-DWANT_ACCESSLIST_WHITE line in the Makefile.
  2. Recompile.
  3. Create a file called whitelist, with one torrent hash per line.
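The three steps above can be sketched as shell commands. This is a hedged illustration, not from the original post: the stand-in Makefile line mirrors opentracker's real `#FEATURES+=-DWANT_ACCESSLIST_WHITE` option, and the info hash is a made-up placeholder.

```shell
# Step 1: uncomment the whitelist feature in the Makefile.
# (A stand-in Makefile is created here so the sketch is self-contained;
# in the real source tree you would edit the existing Makefile.)
printf '%s\n' '#FEATURES+=-DWANT_ACCESSLIST_WHITE' > Makefile
sed -i 's/^#FEATURES+=-DWANT_ACCESSLIST_WHITE/FEATURES+=-DWANT_ACCESSLIST_WHITE/' Makefile

# Step 2 would be: make

# Step 3: create the whitelist, one torrent info hash (40 hex chars) per line.
echo "0123456789abcdef0123456789abcdef01234567" > whitelist
```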

Need to update this whitelist, for example to distribute a new release of your software? Of course you don’t want to restart your opentracker instance and lose all current state. It’s got you covered:

  1. Append a line to whitelist.
  2. Send the SIGHUP UNIX signal to make opentracker reload its whitelist1.
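A sketch of that reload dance, with an illustrative hash and assuming a single running opentracker instance (the `|| true` just keeps the sketch harmless when no instance is running):

```shell
# Append the new release's info hash to the whitelist...
echo "89abcdef0123456789abcdef0123456789abcdef" >> whitelist

# ...and ask the running opentracker to re-read it,
# keeping all current peer state intact.
pid=$(pidof opentracker 2>/dev/null) && kill -s HUP "$pid" || true
```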


I’ve been in the process of moving off of my current (super reliable, but also expensive) hosting. There are plenty of specialized HTTP server hosts2 and even rsync hosts3. Thanks to their standardization and consequent scale, they can offer very low prices.

But I also needed to continue to run my own BitTorrent tracker. There are no hosts that offer that. I don’t want to rely on another tracker, because I want there to be zero affiliation with illegal files. This is a BitTorrent tracker that does not allow anything to be shared: it only allows the software releases made by to be downloaded.

So, I found the cheapest VPS I could find, with the least amount of resources. For USD $13.50⁴, I got a VPS with 128 MB RAM, 12 GB of storage and 500 GB of monthly traffic. Then I set it up:

  1. ssh‘d onto it.
  2. rsync‘d over the files from my current server (alternatively: git clone and make)
  3. added @reboot /data/opentracker/opentracker -f /data/opentracker/opentracker.conf to my crontab.
  4. removed the CNAME record for, and instead made it an A record pointing to my new VPS.
  5. watched on both the new and the old server, to verify traffic was moving over to my new cheap opentracker VPS as the DNS changes propagated

Drupal module

Since runs on Drupal, there of course is an OpenTracker Drupal module which I wrote. It provides an API to:

  • create .torrent files for certain files uploaded to Drupal
  • append to the OpenTracker whitelist file 5
  • parse the statistics provided by the OpenTracker instance

You can see the live stats at


opentracker is the sort of simple, elegant software design that makes it a pleasure to use. And considering the low commit frequency over the past decade, with many of those commits being nitpick fixes, its simplicity also seems to lead to excellent maintainability. It involves the HTTP and BitTorrent protocols, yet relies on only a single I/O library, and its source code is very readable. Not only that, it’s also highly scalable.

It’s the sort of software many of us aspire to write.

Finally, its license. A glorious license indeed.

The beerware license is very open, close to public domain, but insists on honoring the original author by just not claiming that the code is yours. Instead assume that someone writing Open Source Software in the domain you’re obviously interested in would be a nice match for having a beer with.

So, just keep the name and contact details intact and if you ever meet the author in person, just have an appropriate brand of sparkling beverage choice together. The conversation will be worth the time for both of you.

Dirk, if you read this: I’d love to buy you sparkling beverages some time :)

  1. kill -s HUP $(pidof opentracker) ↩︎

  2. I’m using Gandi’s Simple Hosting↩︎

  3. ↩︎

  4. $16.34 including 21% Belgian VAT↩︎

  5. To reload every 10 minutes via cron: */10 * * * * kill -s HUP $(pidof opentracker) ↩︎

February 18, 2017

February 17, 2017

Alternative title: Dan North, the Straw Man That Put His Head in His Ass.

This blog post is a reply to Dan’s presentation Why Every Element of SOLID is Wrong. It is crammed full of straw man argumentation in which he misinterprets what the SOLID principles are about. After refuting each principle he proposes an alternative, typically a well-accepted non-SOLID principle that does not contradict SOLID. If you are not that familiar with the SOLID principles and cannot spot the bullshit in his presentation, this blog post is for you. The same goes if you enjoy bullshit being pointed out and broken down.

What follows are screenshots of select slides with comments on them underneath.

Dan starts by asking “What is a single responsibility anyway”. Perhaps he should have figured that out before giving a presentation about how it is wrong.

A short (non-comprehensive) description of the principle: systems change for various reasons. Perhaps a database expert changes the database schema for performance reasons, perhaps a User Interface person reorganizes the layout of a web page, perhaps a developer changes business logic. What the Single Responsibility Principle says is that, ideally, changes for such disparate reasons do not affect the same code. If they did, different people would get in each other’s way. Possibly worse still, if the concerns are mixed together and you want to change some UI code, suddenly you need to deal with, and thus understand, the business logic and database code.
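To make that concrete, here is a minimal Python sketch; the class and field names are mine, purely illustrative, not from Dan's talk or the SOLID literature. A schema change touches only the repository, a layout change only the renderer:

```python
class OrderRepository:
    """Persistence concern: only a database change should touch this."""
    def __init__(self, connection):
        self._connection = connection

    def find(self, order_id):
        # A schema change (renamed column, new index) is confined here.
        return self._connection.query(
            "SELECT id, total FROM orders WHERE id = ?", order_id)


class OrderRenderer:
    """Presentation concern: only a UI change should touch this."""
    def render(self, order):
        # A layout change is confined here; no SQL in sight.
        return f"<li>Order {order['id']}: {order['total']}</li>"
```

Because the concerns are separate, the UI person and the database expert never edit the same class.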

How can we predict what is going to change? Clearly you can’t, and this is simply not needed to follow the Single Responsibility Principle or to get value out of it.

Write simple code… no shit. One of the best ways to write simple code is to separate concerns. You can be needlessly vague about it and simply state “write simple code”. I’m going to label this Dan North’s Pointlessly Vague Principle. Congratulations sir.

The idea behind the Open Closed Principle is not that complicated. To partially quote the first line on the Wikipedia Page (my emphasis):

… such an entity can allow its behaviour to be extended without modifying its source code.

In other words, when you ADD behavior, you should not have to change existing code. This is very nice, since you can add new functionality without having to rewrite old code. Contrast this to shotgun surgery, where to make an addition, you need to modify existing code at various places in the codebase.
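In Python this usually amounts to plain polymorphism. A hypothetical exporter (the names are mine, not from the talk) shows the shape of it:

```python
import json


class CsvFormat:
    def serialize(self, rows):
        return "\n".join(",".join(map(str, row)) for row in rows)


class JsonFormat:
    def serialize(self, rows):
        return json.dumps(rows)


def export(rows, fmt):
    # Closed for modification: supporting XML tomorrow means adding
    # an XmlFormat class, not editing this function.
    return fmt.serialize(rows)
```

Contrast with a chain of `if format == "csv": ... elif format == "json": ...`, which every new format forces you to reopen.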

In practice, you cannot gain full adherence to this principle, and you will have places where you will need to modify existing code. Full adherence to the principle is not the point. Like with all engineering principles, they are guidelines which live in a complex world of trade offs. Knowing these guidelines is very useful.

Clearly it’s a bad idea to leave in place code that is wrong after a requirement change. That’s not what this principle is about.

Another very informative “simple code is a good thing” slide.

To be honest, I’m not entirely sure what Dan is getting at with his “is-a, has-a” vs “acts-like-a, can-be-used-as-a”. It does make me think of the Interface Segregation Principle, which, coincidentally, is the next principle he misinterprets.

The remainder of this slide is about the “favor composition over inheritance” principle. This is really good advice, which has been well accepted in professional circles for a long time. This principle is about code sharing, which is generally better done via composition than inheritance (the latter creates very strong coupling). In the last big application I wrote, I have several hundred classes, and less than a handful inherit concrete code. Inheritance has a use completely different from code reuse: sub-typing and polymorphism. I won’t go into detail about those here, and will just say that this is at the core of what Object Orientation is about, and that even in the application I mentioned, this is used all over, making the Liskov Substitution Principle very relevant.
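A tiny Python illustration of reuse-by-composition (my example, not Dan's):

```python
class Stack:
    """Reuses list's storage by holding one (has-a), instead of
    inheriting from list, which would expose dozens of methods a
    stack must not have and couple us to list's whole interface."""
    def __init__(self):
        self._items = []  # composed, not inherited

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()
```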

Here Dan is slamming the principle for being too obvious? Really?

“Design small, role-based classes”. Here Dan changed “interfaces” into “classes”, which results in a line that makes me think of the Single Responsibility Principle. More importantly, there is a misunderstanding about the meaning of the word “interface” here. This principle is about the abstract concept of an interface, not the language construct that you find in some programming languages such as Java and PHP. A class forms an interface. This principle applies to OO languages that do not have an interface keyword, such as Python, and even to those that do not have a class keyword, such as Lua.

If you follow the Interface Segregation Principle and create interfaces designed for specific clients, it becomes much easier to construct or invoke those clients. You won’t have to provide additional dependencies that your client does not actually care about. In addition, if you are doing something with those extra dependencies, you know this client will not be affected.
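A Python sketch of such a client-specific interface (hypothetical names; the abc module stands in for the abstract notion of an interface, since Python has no interface keyword):

```python
from abc import ABC, abstractmethod


class Reader(ABC):
    """The narrow interface the reporting client actually needs."""
    @abstractmethod
    def read(self, key): ...


class Store(Reader):
    """A full store implements Reader plus writing and whatever else."""
    def __init__(self):
        self._data = {}

    def write(self, key, value):
        self._data[key] = value

    def read(self, key):
        return self._data[key]


def report(reader: Reader, key):
    # This client depends only on read(); a one-method fake is
    # enough to construct or test it, and changes to write() can
    # never affect it.
    return f"value: {reader.read(key)}"
```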

This is a bit bizarre. The definition Dan provides is good enough, even though it is incomplete, which can be excused by it being a slide. From the slide it’s clear that the Dependency Inversion Principle is about dependencies (who would have guessed) and coupling. The next slide is about how reuse is overrated. As we’ve already established, this is not what the principle is about.

As to the DIP leading to DI frameworks that you then depend on… this is like saying that if you eat food, you might eat non-nutritious food, which is not healthy. The fix here is to not eat non-nutritious food, not to reject food altogether. Remember the application I mentioned? It uses dependency injection all the way, without using any framework or magic. In fact, 95% of the code does not bind to the web framework used, thanks to adherence to the Dependency Inversion Principle. (Read more about this application)
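Framework-free dependency injection can be this plain; a minimal Python sketch with names of my own invention:

```python
class Mailer:
    """Abstraction owned by the application, not by any framework."""
    def send(self, to, body):
        raise NotImplementedError


class RecordingMailer(Mailer):
    """A low-level detail; a real SMTP mailer would slot in the same way."""
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))


class SignupService:
    """High-level policy: depends on the Mailer abstraction only."""
    def __init__(self, mailer: Mailer):
        self._mailer = mailer  # injected in plain code, no container

    def signup(self, email):
        self._mailer.send(email, "Welcome!")
```

The wiring is one constructor call at the composition root; no framework has to be depended on for the principle to pay off.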

That attitude explains a lot about the preceding slides.

Yeah, please do write simple code. The SOLID principles and many others can help you with this difficult task. There is a lot of hard-won knowledge in our industry and many problems are well understood. Frivolously rejecting that knowledge with “I know better” is an act of supreme arrogance and ignorance.

I do hope this is the category Dan falls into, because the alternative of purposefully misleading people for personal profit (attention via controversy) rustles my jimmies.

If you’re not familiar with the SOLID principles, I recommend you start by reading their associated Wikipedia pages. If you are like me, it will take you practice to truly understand the principles and their implications and to find out where they break down or should be superseded. Knowing about them and keeping an open mind is already a good start, which will likely lead you to many other interesting principles and practices.

February 15, 2017

Being a volunteer for the SANS Internet Storm Center, I’m a big fan of the DShield service. I’ve been feeding DShield with logs for eight or nine years now. In 2011, I wrote a Perl script to send my OSSEC firewall logs to DShield. This script has been running and pushing my logs every 30 minutes for years. Later, DShield was extended to collect other logs: SSH credentials collected by honeypots (if you have an unused Raspberry Pi, there is a nice honeypot setup available). I have my own network of honeypots spread here and there on the wild Internet, running Cowrie. But recently, I reconfigured all of them to use another type of honeypot: OpenCanary.

Why OpenCanary? Cowrie is a very nice honeypot which can emulate a fake vulnerable host, log commands executed by the attackers and also collect dropped files. Here is an example of a Cowrie session replayed in Splunk:

Splunk Honeypot Session Replay

It’s nice to capture a lot of data, but most of it (not to say all of it) is generated by bots. Honestly, I never detected a human attacker trying to abuse my SSH honeypots. That’s why I decided to switch to OpenCanary. It does not record logs as detailed as Cowrie’s, but it is very modular and supports the following protocols by default:

  • FTP
  • HTTP
  • Proxy
  • MySQL
  • NTP
  • Portscan
  • RDP
  • Samba
  • SIP
  • SNMP
  • SSH
  • Telnet
  • TFTP
  • VNC

Writing extra modules is very easy, and examples are provided. By default, OpenCanary is able to write logs to the console, a file, Syslog, a JSON feed over TCP or an HPFeed. No DShield support by default? Never mind, let’s add it.

As I said, OpenCanary is very modular and a new logging capability is just a new Python class in the module:

class DShieldHandler(logging.Handler):
    def __init__(self, dshield_userid, dshield_authkey, allowed_ports=None):
        logging.Handler.__init__(self)
        self.dshield_userid = str(dshield_userid)
        self.dshield_authkey = str(dshield_authkey)
        if allowed_ports:
            # Extract the list of allowed ports
            self.allowed_ports = [int(p) for p in str(allowed_ports).split(',')]
        else:
            # By default, report only port 22
            self.allowed_ports = [22]

    def emit(self, record):
        # Format the log record and submit it to DShield
        ...

The DShield logger needs three arguments in your opencanary.conf file:

"logger": {
    "class" : "PyLogger",
    "kwargs" : {
        "formatters": {
            "plain": {
                "format": "%(message)s"
            }
        },
        "handlers": {
            "dshield": {
                "class": "opencanary.logger.DShieldHandler",
                "dshield_userid": "xxxxxx",
                "dshield_authkey": "xxxxxxxx",
                "allowed_ports": "22,23"
            }
        }
    }
}
The DShield UserID and authentication key are available in your DShield account. I added an ‘allowed_ports’ parameter that contains the list of interesting ports that will be reported to DShield (by default only SSH connections are reported). Now, I’m reporting many more connection attempts:

Daily Connections Report

Besides DShield, JSON logs are processed by my Splunk instance to generate interesting statistics:

OpenCanary Splunk Dashboard

A pull request has been submitted to the authors of OpenCanary to integrate my code. In the meantime, the code is available on my Github repository.

[The post Integrating OpenCanary & DShield has been first published on /dev/random]

I published the following diary on “How was your stay at the Hotel La Playa?“.

I made the following demo for a customer in the scope of a security awareness event. When speaking to non-technical people, it’s always difficult to demonstrate how easily attackers can abuse their devices and data. If successfully popping up a “calc.exe” with an exploit drives a room full of security people crazy, that’s not the case for “users”. It is mandatory to demonstrate something that will ring a bell in their minds… [Read more]

[The post [SANS ISC Diary] How was your stay at the Hotel La Playa? has been first published on /dev/random]

February 14, 2017

Yesterday, after publishing a blog post about Nasdaq's Drupal 8 distribution for investor relations websites, I realized I don't talk enough about "Drupal distributions" on my blog. The ability for anyone to take Drupal and build their own distribution is not only a powerful model, but something that is relatively unique to Drupal. To the best of my knowledge, Drupal is still the only content management system that actively encourages its community to build and share distributions.

A Drupal distribution packages a set of contributed and custom modules together with Drupal core to optimize Drupal for a specific use case or industry. For example, Open Social is a free Drupal distribution for creating private social networks. Open Social was developed by GoalGorilla, a digital agency from the Netherlands. The United Nations is currently migrating many of their own social platforms to Open Social.

Another example is Lightning, a distribution developed and maintained by Acquia. While Open Social targets a specific use case, Lightning provides a framework or starting point for any Drupal 8 project that requires more advanced layout, media, workflow and preview capabilities.

For more than 10 years, I've believed that Drupal distributions are one of Drupal's biggest opportunities. As I wrote back in 2006: "Distributions allow us to create ready-made downloadable packages with their own focus and vision. This will enable Drupal to reach out to both new and different markets."

To capture this opportunity we needed to (1) make distributions less costly to build and maintain and (2) make distributions more commercially interesting.

Making distributions easier to build

Over the last 12 years we have evolved the underlying technology of Drupal distributions, making them even easier to build and maintain. We began working on distribution capabilities in 2004, when the CivicSpace Drupal 4.6 distribution was created to support Howard Dean's presidential campaign. Since then, every major Drupal release has advanced Drupal's distribution building capabilities.

The release of Drupal 5 marked a big milestone for distributions as we introduced a web-based installer and support for "installation profiles", which was the foundational technology used to create Drupal distributions. We continued to make improvements to installation profiles during the Drupal 6 release. It was these improvements that resulted in an explosion of great Drupal distributions such as OpenAtrium (an intranet distribution), OpenPublish (a distribution for online publishers), Ubercart (a commerce distribution) and Pressflow (a distribution with performance and scalability improvements).

Around the release of Drupal 7, we added distribution support to the site itself. This made it possible to build, host and collaborate on distributions directly. Drupal 7 inspired another wave of great distributions: Commerce Kickstart (a commerce distribution), Panopoly (a generic site building distribution), Opigno LMS (a distribution for learning management services), and more! Today, over 1,000 distributions are listed.

Most recently we've made another giant leap forward with Drupal 8. There are at least 3 important changes in Drupal 8 that make building and maintaining distributions much easier:

  1. Drupal 8 has vastly improved dependency management for modules, themes and libraries thanks to support for Composer.
  2. Drupal 8 ships with a new configuration management system that makes it much easier to share configurations.
  3. We moved a dozen of the most commonly used modules into Drupal 8 core (e.g. Views, WYSIWYG, etc), which means that maintaining a distribution requires less compatibility and testing work. It also enables an easier upgrade path.

Open Restaurant is a great example of a Drupal 8 distribution that has taken advantage of these new improvements. The Open Restaurant distribution has everything you need to build a restaurant website and uses Composer when installing the distribution.

More improvements are already in the works for future versions of Drupal. One particularly exciting development is the concept of "inheriting" distributions, which allows Drupal distributions to build upon each other. For example, Acquia Lightning could "inherit" the standard core profile – adding layout, media and workflow capabilities to Drupal core, and Open Social could inherit Lightning - adding social capabilities on top of Lightning. In this model, Open Social delegates the work of maintaining Layout, Media, and Workflow to the maintainers of Lightning. It's not too hard to see how this could radically simplify the maintenance of distributions.

The less effort it takes to build and maintain a distribution, the more distributions will emerge. The more distributions that emerge, the better Drupal can compete with a wide range of turnkey solutions in addition to new markets. Over the course of twelve years we have improved the underlying technology for building distributions, and we will continue to do so for years to come.

Making distributions commercially interesting

In 2010, after having built a couple of distributions at Acquia, I used to joke that distributions are the "most expensive lead generation tool for professional services work". This is because monetizing a distribution is hard. Fortunately, we have made progress on making distributions more commercially viable.

At Acquia, our Drupal Gardens product taught us a lot about how to monetize a single Drupal distribution through a SaaS model. We discontinued Drupal Gardens but turned what we learned from operating Drupal Gardens into Acquia Cloud Site Factory. Instead of hosting a single Drupal distribution (i.e. Drupal Gardens), we can now host any number of Drupal distributions on Acquia Cloud Site Factory.

This is why Nasdaq's offering is so interesting; it offers a powerful example of how organizations can leverage the distribution "as-a-service" model. Nasdaq has built a custom Drupal 8 distribution and offers it as-a-service to their customers. When Nasdaq makes money from their Drupal distribution they can continue to invest in both their distribution and Drupal for many years to come.

In other words, distributions have evolved from an expensive lead generation tool to something you can offer as a service at a large scale. Since 2006 we have known that hosted service models are more compelling but unfortunately at the time the technology wasn't there. Today, we have the tools that make it easier to deploy and manage large constellations of websites. This also includes providing a 24x7 help desk, SLA-based support, hosting, upgrades, theming services and go-to-market strategies. All of these improvements are making distributions more commercially viable.

Last week, I joined the mgmt hackathon, just after Config Management Camp Ghent. It helped me understand how mgmt actually works, and that understanding let me introduce two improvements into the codebase: Prometheus support, and an Augeas resource.

I will blog later about the Prometheus support; today I will focus on the Augeas resource.

Defining a resource

Currently, mgmt does not have a DSL, it only uses plain yaml.

Here is how you define an Augeas resource:

graph: mygraph
resources:
  augeas:
  - name: sshd_config
    lens: Sshd.lns
    file: "/etc/ssh/sshd_config"
    sets:
      - path: X11Forwarding
        value: no

As you can see, the augeas resource takes several parameters:

  • lens: the lens file to load
  • file: the path to the file that we want to change
  • sets: the paths/values that we want to change

Setting the file parameter creates a watcher, which means that each time you change that file, mgmt will check whether it is still aligned with what you want.


The code can be found there:

We are using the Go bindings for Augeas: Unfortunately, those bindings only support a recent version of Augeas, so we had to vendor them to make the build pass on Travis.

Future plans

Future plans for this resource include adding some parameters: probably an equivalent of Puppet’s “onlyif” parameter, and an “rm” parameter.

February 13, 2017

Nasdaq CIO and vice president Brad Peterson at the Acquia Engage conference showing the Drupal logo on Nasdaq's MarketSite billboard at Times Square NYC

Last October, I shared the news that Nasdaq Corporate Solutions has selected Acquia and Drupal 8 for its next generation Investor Relations and Newsroom Website Platforms. 3,000 of the largest companies in the world, such as Apple, Amazon, Costco, ExxonMobil and Tesla are currently eligible to use Drupal 8 for their investor relations websites.

How does Nasdaq's investor relations website platform work?

First, Nasdaq developed a "Drupal 8 distribution" that is optimized for creating investor relations sites. They started with Drupal 8 and extended it with both contributed and custom modules, documentation, and a default Drupal configuration. The result is a version of Drupal that provides Nasdaq's clients with an investor relations website out-of-the-box.

Next, Nasdaq decided to offer this distribution "as-a-service" to all of their publicly listed clients through Acquia Cloud Site Factory. By offering it "as-a-service", Nasdaq's customers don't have to worry about installing, hosting, upgrading or maintaining their investor relations site. Nasdaq's new IR website platform also ensures top performance, scalability and meets the needs of strict security and compliance standards. Having all of these features available out-of-the-box enables Nasdaq's clients to focus on providing their stakeholders with critical news and information.

Offering Drupal as a web service is not a new idea. In fact, I have been talking about hosted service models for distributions since 2007. It's a powerful model, and Nasdaq's Drupal 8 distribution as-a-service is creating a win-win-win-win. It's good for Nasdaq's clients, good for Nasdaq, good for Drupal, and in this case, good for Acquia.

It's good for Nasdaq's customers because it provides them with a platform that incorporates the best of both worlds; it gives them the maintainability, reliability, security and scalability that comes with a cloud offering, while still providing the innovation and freedom that comes from using Open Source.

It is great for Nasdaq because it establishes a business model that leverages Open Source. It's good for Drupal because it encourages Nasdaq to invest back into Drupal and their Drupal distribution. And it's obviously good for Acquia as well, because we get to sell our Acquia Site Factory Platform.

If you don't believe me, take Nasdaq's word for it. In the video below, which features Stacie Swanstrom, executive vice president and head of Nasdaq Corporate Solutions, you can see how Nasdaq pitches the value of this offering to their customers. Swanstrom explains that with Drupal 8, Nasdaq's IR Website Platform brings "clients the advantages of open source technology, including the ability to accelerate product enhancements compared to proprietary platforms".

Soon it will be a decade since we started the RadeonHD driver, where we pushed ATI to a point of no return, got a proper C coded graphics driver and freely accessible documentation out. We all know just what happened to this in the end, and i will make a rather complete write-up spanning multiple blog entries over the following months. But while i was digging out backed up home directories for information, i came across this...

It is a copy of the content of an email I sent to an executive manager at SuSE/Novell, a textfile called bridgmans_games.txt. This was written in July 2008, after the RadeonHD project gained what was called "Executive oversight". After the executive had seen several of John Bridgman's games first hand, he asked me to provide him with a timeline of some of the games he played with us. The below explanation covers only things that I knew this executive was not aware of, and it covers a year of RadeonHD development^Wstruggle, from July 2007 until April 2008. It took me quite a while to compile this, and it was a pretty tough task mentally, as it finally showed, unequivocally, just how we had been played all along. After this email was read by this executive, he and my department lead took me to lunch and told me to let this die out slowly, and to not make a fuss. Apparently, I should've counted myself lucky to see these sort of games being played this early on in my career.

This is the raw copy; only the names of some people who are not in the public eye have been redacted (you know who you are), and some other names were clarified. All changes are marked with [].

These are:
SuSE Executive manager: [EXEC]
SuSE technical project manager: [TPM]
AMD manager (from a different department than ATI): [AMDMAN]
AMD liaison: [SANTA] (as he was the one who handed me and Egbert Eich a 1 cubic meter box full of graphics cards ;))

[EXEC], i made a rather extensive write-up about the goings-on throughout 
the whole project. I hope this will not intrude too much on your time, 
please have a quick read through it, mark what you think is significant and 
where you need actual facts (like emails that were sent around...) Of 
course, this is all very subjective, but for most of this emails can be 
provided to back things up (some things were said on the weekly technical 
call only).

First some things that happened beforehand...

Before we came around with the radeonhd driver, there were two projects 

One was Dave Airlie who claimed he had a working driver for ATI Radeon R500
hardware. This was something he wrote during 2006, from information under 
NDA, and this nda didn't allow publication. He spent a lot of time bashing 
ATI for it, but the code was never seen, even when the NDA was partially 
lifted later on.

The other was a project started in March 2007 by Nokia's Daniel Stone and
Ubuntu's [canonical's] Matthew Garreth. The avivo driver. This driver was a
driver separate from the -ati or -radeon drivers, under the GPL, and written
from scratch by tracing the BIOS and tracking register changes [(only the
latter was true)]. Matthew and Daniel were only active contributors for the
first few weeks and quickly lost interest. The bulk of the development was
taken over by Jerome Glisse after that.

Now... Here is where we encounter John [Bridgman]...

Halfway july, when we first got into contact with Bridgman, he suddenly 
started bringing up AtomBIOS. When then also we got the first lowlevel 
hardware spec, but when we asked for hardware details, how the different 
blocks fitted together, we never got an answer as that information was said 
to not be available. We also got our hands on some of the atombios 
interpreter code, and a rudimentary piece of documentation explaining how to 
handle the atombios bytecode. So Matthias [Hopf] created an atombios 
disassembler to help us write up all the code for the different bits. And 
while we got some further register information when we asked for it, 
bridgman kept pushing atombios relentlessly.

This whining went on and on, until we, late august, decided to write up a
report of what problems we were seeing with atombios, both from an interface 
as from an actual implementation point of view. We never got even a single 
reply to this report, and we billed AMD 10 or so mandays, and [with] 
bridgman apparently berated, as he never brought the issue up again for the 
next few months... But the story of course didn't end there of course.

At the same time, we wanted to implement just one function from AtomBIOS:
ASICInit. This function brings the hardware into a state where
modesetting can happen. It replaces a full int10 POST, the standard, 
antique way of bringing up VGA on IBM hardware, so ASICInit seemed like a 
big step forward. John organised a call with the BIOS/fglrx people to 
explain to us what ASICInit offered and how we could use it. This is the 
only time that Bridgman put us in touch with any ATI people directly. 
The proceedings of the call were kind of amusing... After 10 minutes or so, 
one of the ATI people said "for fglrx, we don't bother with ASICInit, we 
just call the int10 POST". John then quickly stated that they might have to 
end the call because other people apparently wanted to use this 
conference room. The call went on for at least half an hour longer, and John 
from time to time stated this again. Whether there was truth to this or 
not, we don't know; it was a rather amusing coincidence though, and we of 
course never gained anything from the call.

Late August and early September, we were working relentlessly towards 
getting our driver into a shape that could be exposed to the public. 
[AMDMAN] had warned us that there was a big marketing push planned for the 
kernel summit in September, but we were never told any of this through our 
partner directly. So we were working like maniacs to get the driver into a 
state where we could present it at the X Developers' Summit in Cambridge.

Early September, we were stuck on several issues because we weren't getting
any relevant information from Bridgman. In a mail sent on September 3, [TPM] 
made the following statement: "I'm not willing to fall down to Mr. B.'s mode 
of working nor let us hold up by him. They might (or not) be trying to stop 
the train - they will fail." It was already very clear then that things 
were not being played straight.

Bridgman at this point was begging to see the code, so we created a copy of 
a (for us) working version and handed it over on September the 3rd as 
well. We got an extensive review of our driver on a very limited subset of 
the hardware, and we were mainly bashed for producing broken modes (monitor 
synced off). This of course had nothing to do with our use of direct 
programming; it came down to hardware [PLL] limits that we eventually had 
to go and find out ourselves. Bridgman was never able to provide us with 
suitable information on what the issue could be or what the right limits 
were, but the fact that the issue wasn't a matter of AtomBIOS versus 
direct programming didn't make him very happy.

So on the 5th of September, ATI went ahead with its marketing campaign, the 
one we weren't told about at all. They never really mentioned SUSE's 
role in it, and Dave Airlie even made a blog entry stating how he and Alex 
were the main actors on the X side. The SUSE role in all of this was 
severely downplayed everywhere, to the point where it became borderline 
insulting. It was also one of the first times where we really felt that 
Bridgman might be working more closely with Dave and Alex than with us.

We then had to rush off to the X Developers' Summit in Cambridge. When we 
met ATI's Matthew Tippett and Bridgman beforehand, it was clear that our 
role in this whole thing was being actively downplayed, and the attitude 
towards Egbert and me was predominantly hostile. The talk Bridgman and 
Tippett gave at XDS once again fully downplayed our effort on the driver. 
During our introduction of the radeonHD driver, John Bridgman handed Dave 
Airlie a CD with the register specs that went around the world, further 
reducing our relevance there. John stated, quite loudly, in the middle of 
our presentation, that he wanted Dave to be the first to receive this 
documentation. Plus, it became clear that Bridgman was very close to Alex.

The fact that we didn't catch all the supper/dinner events ourselves worked 
against us as well, as we were usually still hacking away feverishly at our 
driver. We also had to leave early on the night of the big dinner, as we had 
to get back to Nürnberg to catch the last two days of the Labs conference 
the day after. This apparently was a mistake... We later found a picture 
carrying the caption "avivo de-magicifying party". It shows Bridgman, 
Tippett, Deucher, and all the Red Hat and Intel people in the same room, 
playing with the (then competing) avivo driver. This picture was taken while 
we were on the plane back to Nürnberg. Another signal that, maybe, Mr 
Bridgman wasn't very committed to the radeonHD driver. Remarkably though, 
the main developer behind the avivo driver, Jerome Glisse, is not in the 
picture. He turned out to be quite fond of the radeonHD driver and 
immediately moved on to something else, as radeonHD was now achieving his 
goal.

On Monday the 17th, right before the release, ATI was throwing up hurdles... 
We suddenly had to put AMD copyright statements on all our code, a 
requirement never made for any AMD64 work, and as far as I understood 
things also invalid under German copyright law. This was a tough nut for me 
to swallow, as a lot of the code in radeonHD modesetting had started life 
5 years earlier in my unichrome driver. Then there was the fact that the 
AtomBIOS header didn't come with an acceptable license and copyright 
statement, and a similar situation for the AtomBIOS interpreter. The latter 
two gave Egbert a few extra hours of work that night.

I contacted the owner (Michael Larabel) to talk to him about the 
imminent driver release, and this is what he stated: "I am well aware of the 
radeonHD driver release today through AMD and already have some articles in 
the work for posting later today." The main open source graphics site had 
already been given a story by some AMD people, and we were apparently not 
supposed to be involved here. We managed to turn this article around there 
and then, so that it painted a more correct picture of the situation. And 
since then, we have been working very closely with Larabel to make sure that 
our message gets out correctly.

The next few months seemed somewhat uneventful again. We worked hard on 
making our users happy, on bringing our driver up from an initial 
implementation to a full-featured modesetting driver. We had to 
painstakingly drag more information out of John, but mostly found 
things out for ourselves. At this time the fact that Dave Airlie was still 
under NDA was mentioned quite often in calls, and John explained that this 
NDA situation was about to change. Bridgman also brought up that he had 
started the procedure to hire somebody to help him with documentation work 
so that the flow of information could be streamlined. We kind of assumed, 
from what we saw at XDS, that this would be Alex Deucher, but we were never 
told anything.

Matthias got in touch with AMD's GPGPU people through [AMDMAN], and we were 
eventually (see later) able to get the TCore code (which contains a lot of
very valuable information for us) out of this. Towards the end of October, I 
found an article online about AMD's upcoming RV670, and I pasted the link in 
our radeonhd IRC channel. John immediately fired off an email explaining 
about the chipset, apologising for not telling us earlier. He repeated some 
of his earlier promises of getting hardware sent out to us quicker. When the 
hardware release happened, John was the person making statements to the open 
source websites (messing up some things in the process), but [SANTA] was the 
person who eventually sent us a board.

Suddenly, around the 20th of November, things started moving in the radeon
driver. Apparently Alex Deucher and Dave Airlie had been working together 
since November 3rd to add support for R500 and R600 hardware to the radeon 
driver. I eventually dug out a statement on fedora-devel where Red Hat 
people, on the last day of October, were actively refusing to accept our 
driver. Dave Airlie stated the following: "Red Hat and Fedora are 
contemplating the future wrt to AMD r5xx cards, all options are going to be 
looked at, there may even be options that you don't know about yet.."

They used chunks of code from the avivo driver, chunks of code from the
radeonhd driver (including the bits where we had spent ages finding out what 
to do), and they implemented some parts of modesetting using AtomBIOS, just 
like Bridgman had always wanted. On the same day (the 20th) Alex posted a 
blog entry about being hired by ATI as an open source developer. Apparently, 
Bridgman had mentioned on the weekly phone call beforehand that Dave had 
found something when doing AtomBIOS. So Bridgman had been in close 
communication with Dave for quite a while already.

Our relative silence and safe working ground was shattered. Suddenly it was
completely clear that Bridgman was playing a double game. As diversion
mechanisms, John now suddenly provided us with the TCore code, suddenly
gave us access to the AMD NDA website, and told us on the phone that 
Alex was not doing any radeon work in his work time, only in his 
spare time. Surely we could not be against this, as surely we too would like 
to see a stable and working radeon driver for R100-R400. He also suddenly 
provided those bits of information Dave had been using in the radeon driver, 
some of which we had asked for before without ever getting an answer.

We quickly ramped up development ourselves and got a 1.0.0 driver 
release out about a week later. John in the meantime was expanding his 
marketing horizon: he did an interview with the Beyond3D website (with of 
course no mention of us), and he started frequenting some online forums 
(Phoronix), replying to user posts there and providing his own views on 
"certain" situations (he has since massively increased his time spent on 
that).
One interesting result of the competing project is that suddenly we were
getting answers to questions we had been asking for a long time. An example 
of such an issue is the card's internal view of where its own memory 
resides. We spent weeks asking around, not getting anywhere; then suddenly 
the registers were used in the competing driver, and within hours we got 
the relevant information in emails from Mr Bridgman (November 17 and 20, 
"Odds and Ends"). Bridgman later explained that Dave Airlie knew these 
registers from his "previous r500 work", and that Dave asked Bridgman for 
clearance to release them, which he got, after which we were informed about 
these registers as well. The relevant commit message in the radeon driver 
predates the email we received with the related information by many hours.

[AMDMAN] had put us in touch with the GPGPU people from AMD, and Matthias 
and one of the GPGPU people spent a lot of time emailing back and forth. But
suddenly, around the timeframe when everything else happened (competing
driver, Alex getting hired), John conjured up the code that the 
GPGPU people had had all along: TCore. This signalled to us that John had 
caught on to our plan to bypass him, and that he was now taking full control 
of the situation. It took about a month before John made big online promises 
about how this code could provide the basis for a DRM driver, and that it 
would become available to all soon. We managed to confirm, in a direct call 
with both the GPGPU people and Mr Bridgman, that the GPGPU people had no 
idea that John intended to make this code fully public straight away. The 
GPGPU people assumed that John's questions were aimed purely at handing us 
the code without us getting tainted, not that John intended to hand it out 
publicly immediately. To this day, TCore has not surfaced publicly, but we 
know that several people inside the X.org community have this code, and I 
seriously doubt that all of those people are under NDA with AMD.

Bridgman also ramped up the marketing campaign. He did an interview on the 
30th in which he broadly advertised the fact that a community member had 
just been hired to work with the community. John of course completely 
overlooked the SUSE involvement in everything. An attempt to rectify this 
with an interview of our own never materialised due to the heavy time 
constraints we were under.

On the 4th of December, a user came to us asking what he should do with a
certain RS600 chipset he had. We had heard from John that this chip was not
relevant to us, as it was not supposed to be handled by our driver (the way 
we remember the situation), but when we reported this to John, he claimed 
that he had thought this hardware never shipped and that it therefore was 
not relevant. The hardware of course did ship, to three different vendors, 
and Egbert had to urgently add support for it in the I2C and memory 
controller subsystems when users started complaining.

One very notable story of this time is how the bring-up of new hardware was
handled. I mentioned the RV670 before; we still hadn't received this 
hardware at this point, as [SANTA] was still trying to get his hands on it. 
What we did receive from [SANTA] on the 11th of December was the next 
generation hardware, which needed a lot of bring-up work: RV620/RV635. This 
made us hopeful that, for once, we could have at least basic support in our 
driver on the day the hardware got announced. But a month and a half later, 
when this hardware was launched, John still hadn't provided any information. 
I had quite a revealing email exchange with [SANTA] about this too, in which 
he wondered why John was stalling. The first bit of information that was 
useful to us was given to us on February the 9th, and we had to drag a lot 
of the register-level information out of John ourselves. Given the massive 
changes to the hardware, and the complications they induced, it of course 
took us quite some time to finish this work. And this fact was greedily 
abused by John during the bring-up time and afterwards. But what he always 
overlooks is that it took him close to two months to get us documentation to 
use even AtomBIOS successfully.

The week of December the 11th is also when Alex was fully assimilated into
ATI. He of course didn't do much in his first week at AMD, but in 
his second week he started working on the radeon driver again. In the weekly 
call John then reassured us that Alex was doing this work purely in his 
spare time, and that his task was helping us get the answers we needed. In 
all fairness, Alex did provide us with register descriptions more directly 
than John, but there was no way he was doing all this radeon driver work in 
his spare time. It would take John another month to admit it, at which point 
he treated Alex working on the radeon driver as an acquired right.

Bridgman now really started to spend a lot of time on the forums. He 
posted what I personally see as rather biased comparisons of the different 
drivers out there, and he of course beat the drums heavily for AtomBIOS. 
This is also the time when he promised the TCore drop.

The games then continued as usual for the next month or so, most of which is
already covered in the previous paragraphs.

One interesting side note is that both Alex and John were big on rebranding
AtomBIOS. They actively used the word "scripts" for the call tables inside
AtomBIOS, and were actively labelling C modesetting code as "legacy". Both 
labels go squarely against the actual nature of AtomBIOS versus C code.

Halfway through February, discussions started to happen again about the 
RS690 hardware. Users were having problems. After a while, it became clear 
that the RS690 had, all along, a display block called DDIA capable of 
driving a DVI digital signal. RS690 had long been considered fully 
supported, and suddenly a major display block sprang into life on us. 
Bridgman's excuse was the same as with the RS600: he thought this would 
never be used in the first place and that therefore we weren't really 
supposed to know about it.

On the 23rd and 24th of February, we did FOSDEM. As usual, we had a 
Developers Room there, for which I did most of the running around. We worked
with Michael Larabel from Phoronix to get the talks taped and put online. 
Bridgman gave us two surprises at FOSDEM... For one, he released 3D 
documentation to the public; we had learned about this just a few days 
before. We even saw another Phoronix statement about TCore being released 
there and then, but this never surfaced.

We also learned, on Friday evening, that the radeon driver was about to gain
Xvideo support for R5xx and up, through textured video code that Alex had 
been working on. Not only did they succeed in stealing the sunshine away 
from the actual hard work (organising FOSDEM, finding out and implementing 
bits that were never supposed to be used in the first place, etc.), they 
gave the users something quick and bling and showy, something we still think 
today was provided by others inside AMD or generated by a shader compiler. 
And this went to the competing project.

At FOSDEM itself Bridgman was of course full of stories. One of the most
remarkable ones, which I overheard while on my knees clearing up the 
developers room, was Bridgman talking to the Phoronix guy and some community 
members, stating that people at ATI actually had bets running on how long it 
would take Dave Airlie to implement R5xx 3D support in the radeon driver. 
He said this loudly, openly and with a lot of panache. But when this support 
eventually took more than a month, I took it up with him on the phone call, 
leading of course to his usual nervous laughing and another made-up cover 
story.

Right before FOSDEM, Egbert had a one-on-one phone call with Bridgman, 
hoping to clear some things up. This is where we first learned that Bridgman 
had sold AtomBIOS big time to Red Hat, but I think you now know this even 
better than I do, as I am pretty hazy on the details there. Egbert, if I 
remember correctly, had a phone call with Red Hat's Kevin Martin on the very 
same day. But since none of this was put in email and I was completely 
swamped with organising FOSDEM, I have no real recollection of what came out 
of that.

Alex then continued his work on radeon, while being our only real 
point of contact for any information. There were several instances where our 
requests for information resulted in immediate commits to the radeon driver, 
where issues were fixed or bits of functionality were added. Alex also ended 
up adding RENDER acceleration to the radeon driver, and when Bridgman was in
Nuernberg, we clearly saw how Bridgman sold this to his management: Alex was
working on bits that were meant to be ported directly to RadeonHD.

In March, we spent our time getting RV620/635 working, dragging information,
almost register by register, out of our friends. We learned about the 
release of the RS780 at CeBIT, meaning that we had missed yet another 
opportunity for same-day support. John had clearly stated, at the time of 
the RV620/635 launch, that "integrated parts are the only place where we are 
'officially' trying to have support in place at or very shortly after
And with that, we pretty much hit the time frame when you, [EXEC], got

One interesting new development is Kgrids. Kgrids is some code 
written by some ATI people that contains useful information for driver 
development, especially for the DRM. John Bridgman told the Phoronix 
author 2-3 weeks ago already that he had this, and that he was planning to 
release it immediately. The Phoronix author immediately informed me, but 
the release of this code has never happened so far. Last Thursday, Bridgman 
brought up Kgrids in the weekly technical phone call... When asked 
how long he had known about it, he admitted that they had known for 
several weeks already, which makes me wonder why we weren't informed 
earlier, and why we suddenly were informed then...

As said, the above, combined with what was already known to this executive, and the games he saw being played first hand from April 2008 through June 2008, marked the beginning of the end of the RadeonHD project.

I too disconnected myself more and more from development on this, with Egbert taking the brunt of the work. I instead spent more time on the next SuSE enterprise release, having fun doing something productive and with a future. Then, after FOSDEM 2009, when 24 people at the SuSE Nuernberg office were laid off, I was almost relieved to be amongst them.

Time flies.

February 12, 2017

The post PHP 7.2 to get modern cryptography into its standard library appeared first on

This actually makes PHP the first language, ahead of Erlang and Go, to get a secure crypto library in its core.

Of course, having the ability doesn't necessarily mean it gets used properly by developers, but this is a major step forward.

The vote for the Libsodium RFC has been closed. The final tally is 37 yes, 0 no.

I'll begin working on the implementation with the desired API (sodium_* instead of \Sodium\*).

Thank you to everyone who participated in these discussions over the past year or so and, of course, everyone who voted for better cryptography in PHP 7.2.

Scott Arciszewski


Source: php.internals: [RFC][Vote] Libsodium vote closes; accepted (37-0)

As a reminder, the Libsodium RFC:

Title: PHP RFC: Make Libsodium a Core Extension

Libmcrypt hasn't been touched in eight years (last release was in 2007), leaving openssl as the only viable option for PHP 5.x and 7.0 users.

Meanwhile, libsodium bindings have been available in PECL for a while now, and has reached stability.

Libsodium is a modern cryptography library that offers authenticated encryption, high-speed elliptic curve cryptography, and much more. Unlike other cryptography standards (which are a potluck of cryptography primitives; i.e. WebCrypto), libsodium is comprised of carefully selected algorithms implemented by security experts to avoid side-channel vulnerabilities.

Source: PHP RFC: Make Libsodium a Core Extension
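The centrepiece of the API described above is authenticated encryption: a ciphertext carries a tag, so any tampering is detected on decryption. As a rough illustration of that idea only, here is a toy encrypt-then-MAC sketch in Python using standard-library primitives. The helper names and the SHA-256 counter keystream are my own invention; this is NOT libsodium's construction (sodium_crypto_secretbox uses XSalsa20-Poly1305) and must not be used as real cryptography.

```python
import hashlib
import hmac
import secrets

def _keystream(key, nonce, length):
    # Toy counter-mode keystream built from SHA-256. For illustration only;
    # a real library uses a vetted stream cipher such as XSalsa20/XChaCha20.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key, mac_key, message):
    # Encrypt-then-MAC: XOR the message with the keystream, then tag
    # nonce + ciphertext with HMAC-SHA256.
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(message, _keystream(enc_key, nonce, len(message))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_sealed(enc_key, mac_key, blob):
    # Verify the tag in constant time before decrypting; reject forgeries.
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message forged or corrupted")
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, nonce, len(ct))))
```

The point of the pattern is the failure mode: flipping a single bit anywhere in the blob makes verification fail, so a decryption either returns the exact original message or raises.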

Pastebin is one of my favourite playgrounds. I'm monitoring the content of all pasties posted on this website. My goal is to find juicy data like configurations, database dumps, and leaks of credentials. Sometimes you can also find malicious binary files.

For sure, I knew that I'm not the only one interested in this content. Plenty of researchers and organizations like CERTs and SOCs are doing the same, but I was very surprised by the number of hits I got on my latest pastie:

Pastebin Hits

For the purpose of my last ISC diary, I posted some data and did not communicate the link by any means. Before posting the diary, I had a quick look at my pastie and it already had 105 unique views! It had been posted only a few minutes before.

Conclusion: think twice before posting data to pastebin. Even if you quickly delete your pastie, chances are it will already have been scraped by many robots (and mine! ;-))
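For readers curious what such a monitoring robot looks for, here is a minimal sketch of keyword-based classification of paste contents. The rule names and regexes are my own illustration, not the author's actual tooling:

```python
import re

# Illustrative detection rules; a real monitor would carry many more,
# tuned against false positives.
RULES = {
    "credentials": re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "db_dump": re.compile(r"(?i)\bINSERT INTO\b"),
}

def classify(paste_text):
    """Return the names of all rules that match a paste body."""
    return [name for name, rx in RULES.items() if rx.search(paste_text)]
```

A scraper would feed each newly published pastie's raw text through classify() and raise an alert whenever the returned list is non-empty.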

[The post Think Twice before Posting Data on Pastebin! has been first published on /dev/random]

I published the following diary on “Analysis of a Suspicious Piece of JavaScript“.

What to do on a cloudy lazy Sunday? You go hunting and review some alerts generated by your robots. Pastebin remains one of my favourite playground and you always find interesting stuff there. In a recent diary, I reported many malicious PE files stored in Base64 but, today, I found a suspicious piece of JavaScript code… [Read more]

[The post [SANS ISC Diary] Analysis of a Suspicious Piece of JavaScript has been first published on /dev/random]

February 11, 2017

After the PR beating WordPress took with the massive defacements of non-upgraded WordPress installations, it is time to revisit the core team's position that the REST API should be active for all and that no option should be provided to disable it (as per the "decisions not options" philosophy). I for one installed the "Disable REST API" plugin.


EDITED on 20170211: syntastic-perl6 configuration changes

If you’re a Vim user you probably use it for almost everything. Out of the box, Perl 6 support is rather limited. That’s why many people use editors like Atom for Perl 6 code.

What if, with a few plugins, you could configure Vim to be a great Perl 6 editor? I made the following notes while configuring Vim on my main machine running Ubuntu 16.04. The instructions should be trivially easy to port to other distributions or operating systems. Skip the applicable steps if you already have a working Vim setup (i.e. do not overwrite your .vimrc file).

I maintain my Vim plugins using pathogen, as it allows me to directly use git clones from GitHub. This is especially important for plugins under rapid development.
(If your .vim directory is a git repository, replace 'git clone' in the commands with 'git submodule add'.)

Basic vim Setup

Install vim with scripting support and pathogen. Create the directory where the plugins will live:
$ sudo apt-get install vim-nox vim-pathogen && mkdir -p ~/.vim/bundle

$ vim-addons install pathogen

Create a minimal .vimrc in your $HOME with at least this configuration (enabling pathogen). Lines starting with " are comments:

"Enable extra features (e.g. when run systemwide). Must be before pathogen
set nocompatible

"Enable pathogen
execute pathogen#infect()
"Enable syntax highlighting
syntax on
"Enable indenting
filetype plugin indent on

Additionally I use these settings (the complete .vimrc is linked at the end):

"Set line wrapping
set wrap
set linebreak
set nolist
set formatoptions+=l

"Enable 256 colours
set t_Co=256

"Set auto indenting
set autoindent

"Smart tabbing
set expandtab
set smarttab
set sw=4 " number of spaces for indenting
set ts=4 " show \t as 4 spaces and treat 4 spaces as \t when deleting

"Set title of xterm
set title

"Highlight search terms
set hlsearch

"Strip trailing whitespace for certain types of files
autocmd BufWritePre *.{erb,md,pl,pl6,pm,pm6,pp,rb,t,xml,yaml,go} :%s/\s\+$//e

"Override tab spacing for specific languages
autocmd Filetype ruby,puppet setlocal ts=2 sw=2

"Jump to the last position when reopening a file
au BufReadPost * if line("'\"") > 1 && line("'\"") <= line("$") |
    \ exe "normal! g'\"" | endif

"Add a coloured right margin for recent vim releases
if v:version >= 703
    set colorcolumn=80
endif

"Ubuntu suggestions
set showcmd    " Show (partial) command in status line.
set showmatch  " Show matching brackets.
set ignorecase " Do case insensitive matching
set smartcase  " Do smart case matching
set incsearch  " Incremental search
set autowrite  " Automatically save before commands like :next and :make
set hidden     " Hide buffers when they are abandoned
set mouse=v    " Enable mouse usage (all modes)

Install plugins

vim-perl for syntax highlighting:

$ git clone ~/.vim/bundle/vim-perl


vim-airline and themes for a status bar:
$ git clone ~/.vim/bundle/vim-airline
$ git clone ~/.vim/bundle/vim-airline-themes
In vim type :Helptags

In Ubuntu the 'fonts-powerline' package (sudo apt-get install fonts-powerline) installs fonts that enable nice glyphs in the statusbar (e.g. a line effect instead of '>'; see the screenshot at

Add this to .vimrc for airline (the complete .vimrc is attached):
"airline statusbar
set laststatus=2
set ttimeoutlen=50
let g:airline#extensions#tabline#enabled = 1
let g:airline_theme='luna'
"In order to see the powerline fonts, adapt the font of your terminal
"In Gnome Terminal: "use custom font" in the profile. I use Monospace regular.
let g:airline_powerline_fonts = 1


Tabular for aligning text (e.g. blocks):
$ git clone ~/.vim/bundle/tabular
In vim type :Helptags

vim-fugitive for Git integration:
$ git clone ~/.vim/bundle/vim-fugitive
In vim type :Helptags

vim-markdown for markdown syntax support (e.g. the README of your module):
$ git clone ~/.vim/bundle/vim-markdown
In vim type :Helptags

Add this to .vimrc for markdown if you don't want folding (the complete .vimrc is attached):
"markdown support
let g:vim_markdown_folding_disabled=1

syntastic-perl6 for Perl 6 syntax checking support. I wrote this plugin to add Perl 6 syntax checking support to syntastic, the leading Vim syntax checking plugin. See the 'Call for Testers/Announcement' here. Instructions can be found in the repo, but I'll paste them here for your convenience:

You need to install syntastic to use this plugin.
$ git clone ~/.vim/bundle/syntastic
$ git clone ~/.vim/bundle/syntastic-perl6

Type “:Helptags” in Vim to generate Help Tags.

Syntastic and syntastic-perl6 vimrc configuration (comments start with "):

"airline statusbar integration if installed. De-comment if installed
"set laststatus=2
"set ttimeoutlen=50
"let g:airline#extensions#tabline#enabled = 1
"let g:airline_theme='luna'
"In order to see the powerline fonts, adapt the font of your terminal
"In Gnome Terminal: "use custom font" in the profile. I use Monospace regular.
"let g:airline_powerline_fonts = 1

"syntastic syntax checking
let g:syntastic_always_populate_loc_list = 1
let g:syntastic_auto_loc_list = 1
let g:syntastic_check_on_open = 1
let g:syntastic_check_on_wq = 0
set statusline+=%#warningmsg#
set statusline+=%{SyntasticStatuslineFlag()}
set statusline+=%*

"Perl 6 support
"Optional comma separated list of quoted paths to be included to -I
"let g:syntastic_perl6lib = [ '/home/user/Code/some_project/lib', 'lib' ]
"Optional perl6 binary (defaults to perl6 in your PATH)
"let g:syntastic_perl6_exec = '/opt/rakudo/bin/perl6'
"Register the checker provided by this plugin
let g:syntastic_perl6_checkers = ['perl6']
"Enable the perl6 checker (disabled out of the box because of security reasons:
"'perl6 -c' executes the BEGIN and CHECK blocks of the code it parses. This should
"be fine for your own code. See:
let g:syntastic_enable_perl6_checker = 1

Screenshot of syntastic-perl6

YouCompleteMe fuzzy-search autocomplete:

$ git clone ~/.vim/bundle/YouCompleteMe

Read the YouCompleteMe documentation for the dependencies for your OS and for the switches for additional non-fuzzy support for languages like C/C++, Go and so on. If you just want fuzzy completion support for Perl 6, the default is fine. If someone is looking for a nice project, a native Perl 6 autocompleter for YouCompleteMe (instead of the fuzzy one) would be a great addition. You can install YouCompleteMe like this:
$ cd ~/.vim/bundle/YouCompleteMe && ./


That’s it. I hope my notes are useful to someone. The complete .vimrc can be found here.



Filed under: Uncategorized Tagged: Perl, perl6, vim

February 10, 2017

"Please, don't leave me! Hold on! Stay!"

Crushed by grief, I squeeze, without realising it, the frail fingers resting on the hospital bed. Long, heavy tears stream down my cheek, soaking the sheet.

She turns a tired gaze towards me, worn out by the pain.

"I love you," I say in an imploring voice.

She answers me with a blink of her eyes.

"I love you," murmurs a breath, the hint of a smile.

Suddenly, I feel at peace. My mind has cleared. In a calm, steady voice, I begin to speak.

"I have always been two with you. My life was built on us. I never imagined that one of us could leave before the other. Love, that abstract concept of the poets, has guided my every step, my every sigh. It is towards you that I have always walked; it is for you that I have always breathed."

Like a dam suddenly bursting, I break into sobs. My voice cracks.

"What will I do without you? How can I go on living? Don't leave me! Stay!"

"I... I promise you I'll stay," a weak voice stammers. "I'll stay as long as you want me to. I will only leave when you let me go. I promise, I'll stay..."

"My love..."

For hours, I kiss that now-wasted hand; I cry, I laugh.

"I love you! I love you, my love!"

Nothing matters more than the love that binds us.

A firm hand suddenly lands on my shoulder.

"Sir! Sir!"

Dazed, I turn around.

"Doctor? What..."

"I am sorry. There is nothing more to be done. We must prepare the body."

"What? But..."

Lost, I turn towards my beloved. Her eyes are closed; a very faint smile lights up her face.

"She's sleeping! She's only dozed off!"

In my fingers, her hand has turned icy and rigid.

"Come," the doctor says softly, leading me away. "Do you have any family to call?"


I barely hear the driver start up and turn around behind me. Under my feet, the familiar gravel of the driveway crunches and shifts. Mechanically, I have put in my key and opened the door. A dark silence greets me. My mouth is dry; my temples are throbbing from too much crying.

Without turning on the light, I cross the entrance hall and settle into the kitchen. Opening the tap, I pour myself a glass of water.

A shiver runs down my spine. A door slams. In the living-room cabinet, the crystal glasses begin to sing.

"Who's there?"

A window flies open and a whirlwind rushes into the room, wrapping me in the cold night air.

In my ear, a voice both near and far whispers:

"I promise, I'll stay..."


Photo by Matthew Perkins.

This text was published thanks to your regular support on Tipeee and Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium, or contact me.

This text is published under the CC-By BE license.

February 09, 2017


#include <ViewModels/MyListClass.h>
#include <ViewModels/DisplayViewModel.h>

qmlRegisterUncreatableType<MyListClass>( a_uri, 1, 0, "MyListClass",
         "Use access via DisplayViewModel instead");
qmlRegisterType<DisplayViewModel>( a_uri, 1, 0, "DisplayViewModel");


#define MY_DECLARE_QML_LIST(type, name, owner, prop) \
QQmlListProperty<type> name(){ \
   return QQmlListProperty<type>( \
               this, 0, &owner::count##type##List, \
               &owner::at##type##List); \
} \
static int count##type##List(QQmlListProperty<type> *property){ \
   owner *m = qobject_cast<owner *>(property->object); \
   return m->prop.size(); \
} \
static type *at##type##List( \
        QQmlListProperty<type> *property, int index){ \
   owner *m = qobject_cast<owner *>(property->object); \
   return m->prop[index]; \
}



#include <QObject>
#include <QtQml>
#include <ViewModels/MyListClass.h>
#include <Utils/MyQMLListUtils.h>

class DisplayViewModel : public QObject
{
    Q_OBJECT
    Q_PROPERTY( const QString title READ title WRITE setTitle NOTIFY titleChanged )
    Q_PROPERTY( const QList<MyListClass*> objects READ objects
                                          NOTIFY objectsChanged )
    Q_PROPERTY( QQmlListProperty<MyListClass> objectList READ objectList
                                              NOTIFY objectsChanged )

public:
    explicit DisplayViewModel( QObject *a_parent = nullptr );
    explicit DisplayViewModel( const QString &a_title,
                               QList<MyListClass*> a_objects,
                               QObject *a_parent = nullptr );
    const QString title()
        { return m_title; }
    void setTitle( const QString &a_title );
    const QList<MyListClass*> objects()
        { return m_objects; }
    Q_INVOKABLE void appendObject( MyListClass *a_object );
    Q_INVOKABLE void deleteObject( MyListClass *a_object );
    Q_INVOKABLE void reset();

    MY_DECLARE_QML_LIST(MyListClass, objectList, DisplayViewModel, m_objects)

signals:
    void titleChanged();
    void objectsChanged();

private:
    QString m_title;
    QList<MyListClass*> m_objects;
};


#include "DisplayViewModel.h"

DisplayViewModel::DisplayViewModel( const QString &a_title,
                                    QList<MyListClass*> a_objects,
                                    QObject *a_parent )
    : QObject ( a_parent )
    , m_title ( a_title )
    , m_objects ( a_objects )
{
    foreach ( MyListClass *mobject, m_objects ) {
        mobject->setParent( this );
    }
}

void DisplayViewModel::setTitle( const QString &a_title )
{
    if ( m_title != a_title ) {
        m_title = a_title;
        emit titleChanged();
    }
}

void DisplayViewModel::reset()
{
    // Assumption: reset disposes of the owned objects and empties the list.
    foreach ( MyListClass *mobject, m_objects ) {
        mobject->deleteLater();
    }
    m_objects.clear();
    emit objectsChanged();
}

void DisplayViewModel::appendObject( MyListClass *a_object )
{
    a_object->setParent( this );
    m_objects.append( a_object );
    emit objectsChanged();
}

void DisplayViewModel::deleteObject( MyListClass *a_object )
{
    if ( m_objects.contains( a_object ) ) {
        m_objects.removeOne( a_object );
        emit objectsChanged();
    }
}

#include <ViewModels/DisplayViewModel.h>
#include <ViewModels/MyListClass.h>

QList<MyListClass*> objectList;
for( int i = 0; i < 100; ++i ) {
    objectList.append( new MyListClass( i ) );
}
// The title string here is arbitrary; the constructor takes (title, objects, parent).
DisplayViewModel *viewModel = new DisplayViewModel( "objects", objectList );
viewModel->appendObject( new MyListClass( 101 ) );


import QtQuick 2.5
import MyPlugin 1.0

Repeater {
    property DisplayViewModel viewModel: DisplayViewModel { }
    model: viewModel.objectList
    delegate: Item {
        property MyListClass object: modelData
        Text {
            // bind a MyListClass property here, e.g. text: object.title
        }
    }
}

At Config Management Camp, James was once again presenting mgmt. He presented the project one year ago, on his blog. There are multiple ideas behind mgmt (as you can read on his blog):

  • Parallel execution
  • Event driven
  • Distributed topology

And all of that makes a lot of sense. I really like it. I do really think that this might become a huge deal in the future. But there are a couple of concerns I would like to raise.

It is James’ project

James has been developing it since 2013. His ideas are great, but the project is still in its infancy and not yet usable, even for the braver among us. The fact that it is so early, and that even early adopters have a lot to do, makes it difficult to contribute. Even though there are 20 code contributors to the project, James does most of the work.

I do like the fact that James knows Puppet very well. It means we speak the same language, he knows the problems Puppet users face, and he is one of the best people to challenge the config management community.

How to improve that: get involved, code, try… We need to bring the project to a more stable and usable state before we can draw in more and more contributors.

It is scary because it is fast

With traditional configuration management, when you screw something up, you still have time to fix your configuration before it is applied everywhere. Puppet running every 30 minutes is a good example: you have a chance to make emergency fixes before everyone gets them.

Even Ansible is an order of magnitude slower than mgmt. And that makes it scary.

How to improve that: we might be able to tell mgmt to do canary releases. That would be something really great for the future of mgmt. It is quite hard to do with traditional config management, but it might become native with mgmt. That would be awesome.

It is hard to understand

mgmt's internals are not as easy to understand as those of other tools. It can be scary at first sight. It uses directed acyclic graphs, which enable it to do things in parallel. Even if other config management tools use something close to that, with mgmt you need a deep understanding of them, especially when you want to contribute (which is the only thing you can do right now). It is not hard, it is just a price we do not have to pay with the other tools.

How to improve that: keep improving mgmt and fix the bugs in that graph, so that we are not badly affected by them and do not need to understand everything in order to contribute (e.g. to write new resource types or fix bugs in resources). You can work with Puppet while knowing nothing about its internals.
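mgmt's actual engine is event-driven and written in Go, so the following is only an illustration of why a directed acyclic graph enables parallelism: at each step, every resource whose dependencies are all satisfied can be applied concurrently. A minimal sketch (Python, with made-up resource names):

```python
from graphlib import TopologicalSorter

# Hypothetical resource graph: each resource maps to the set of
# resources it depends on (must run after).
deps = {
    "file:/etc/motd": {"pkg:banner"},
    "svc:nginx": {"pkg:nginx", "file:/etc/nginx.conf"},
    "file:/etc/nginx.conf": {"pkg:nginx"},
}

ts = TopologicalSorter(deps)
ts.prepare()

batches = []
while ts.is_active():
    # Everything returned by get_ready() has no unmet dependencies,
    # so a parallel engine could apply the whole batch at once.
    ready = sorted(ts.get_ready())
    batches.append(ready)
    ts.done(*ready)

print(batches)
```

Here the two packages form the first parallel batch, the two files the second, and the service runs last; traditional tools tend to walk such an ordering serially.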

Golang is challenging

mgmt is written in Go. Whether it is a great language or not is not what matters here. What matters is that in the Ops world, Go is not so popular. I am afraid that Go is not as extensible as Ruby, and the fact that you need to recompile it to extend it is also a pain.

How to improve that: nothing mgmt can do. The core open-source infrastructure tools nowadays are mainly written in Go. Sysadmins need to evolve here.

The AGPL license

I love copyleft. However, most of the world does not. This part of mgmt will be one of the things that slow down its adoption. Ansible is GPL-licensed, but the difference between Ansible and mgmt is that the latter is written in Go, as a library. You could use mgmt just as a lib, baked into other software. But the licensing makes that too hard.

For a lot of other projects, the license will be the first thing they see. They will never look deeper to find out whether mgmt fits their needs or not.

For now I am afraid mgmt would just become a thing in the Red Hat world, where by default lots of things are copyleft, so there would be no problem with including mgmt.

How to improve that: we need to speak with the community, and decide if we want to speed up the adoption, or to try to get more people to contribute.

The name

Naming things is hard. But mgmt is probably not the best name. It does not sell a dream, it does not stand out. It is impossible to do some internet searches about it.

How to improve that: we need to start thinking about a better name, that would reflect the community.


I am not sure what the future of mgmt will be in a container world. Not everything is moving to that world nowadays, but a lot of things actually are. I just don't know. I need to think further about it.

But, is there anything positive?

You could read the list above as a list of bad things. But I see it as my initial ideas about how we can bring the project to the next level, how we can make it more appealing to external contributors.

mgmt is one of the projects that I would qualify as unique and really creative. Creativity is a gift nowadays. It is just awesome, just give it a bit more time.

We look forward to seeing it become more stable, and to seeing a language that will take advantage of it and let us easily write mgmt catalogs (currently it is all YAML). Stay tuned!

Some of the largest brands in the world are emerging as leading sponsors and contributors of Drupal. Pfizer, for example, has been using Drupal to improve its internal content workflow processes. Not only is Pfizer a major user of Drupal, they are also making their Drupal improvements available for everyone's benefit, including their competitors. This kind of innovation and collaboration model is relatively unheard of and is less likely to happen with proprietary software.

Another great example is the City of Boston, which migrated its website to Drupal last year. Shortly after the launch, the city released the site's source code to the public domain. By open-sourcing the project, the City of Boston is challenging the prevailing model. Anyone can see the code that makes the site work, point out problems, suggest improvements, or use the code for their own city, town or organization.

The City of Boston isn't the only government agency that is changing their way of innovating. In 2012, the White House released the code behind "We the People", the Drupal-based application that allows the American people to submit petitions directly to the President of the United States. By releasing the code that supports "We the People", any government in the world can take advantage of the project and implement it in their own community.

Next, the international media group Hubert Burda Media employs a team of six Drupal developers that build and maintain Thunder, a Drupal 8 distribution that can be used by any of the 164 brands that Burda supports. Last year, Burda open-sourced Thunder, allowing competitors to benefit from Burda's development, know-how and best practices. As part of their work on Thunder, Burda is an active contributor to Drupal 8's media initiative. Burda is also inviting its competitors to contribute to Thunder.

Some may wonder what is beneficial about sharing innovation with competitors. Today, technology is becoming more and more complex and the rate of change is accelerating. It is becoming increasingly difficult for any one organization to build an entire solution and do it well. By contributing back and by working together, these organizations can keep a competitive edge over those that don't use open source and collaborate. What looks strange to some, is actually perfectly logical to others. Those that contribute to open source are engaging in a virtuous cycle that benefits their own projects. It is a tide that raises all boats; a model that allows progress to accelerate due to wider exposure and public input. It's a story that is starting to play out in every industry -- from pharmaceutical companies, to media and publishing, to government.

Challenge the prevailing model

As I wrote in my 2016 Acquia retrospective, I believe that the use of open source software has finally crossed the chasm -- most organizations don't think twice about using open source software. The next step is to encourage more organizations to not just use open source, but to contribute to it. Open source offers a completely different way of working, and fosters an innovation model that is not possible with proprietary solutions. Pfizer, the City of Boston, the White House and Burda are remarkable examples of how organizations benefit from not only using but contributing to open source.

In order to help people understand the power of this model we have to change the lens through which organizations see the world. It's hard to disrupt the status quo, but fortunately we now have powerful examples that highlight how great organizations are using open source to change their operating model.

If you want to help challenge the prevailing model at your own organization, here are the basic steps that your organization can implement today:

  1. Embrace open source in your organization and make it successful.
  2. Assess whether any of your customizations are truly custom or if they can be used by others.
  3. Contribute back your customizations to the open source project, advance it in the open and encourage others to contribute.

February 08, 2017

The post Server-side timings in the Chrome Devtools appeared first on

Paul Irish just announced this on Twitter and it's a seriously cool feature in the Chrome Devtools! The timings tab can interpret HTTP headers sent by the application and render them.

It looks like this in the Network Inspector of Chrome:

To test this, right click on this page, go to Inspect, then the Network tab, refresh the page, click the main resource that loaded and hit the Timings tab.

The magic is in the Server Timings block below. The browser interprets the special Server-Timing header and renders that in the timeline.

You can test this on this site as I quickly cheated with some .htaccess magic:

$ cat .htaccess
Header add Server-Timing 'cpu=0.009; "CPU", mysql=0.005; "MySQL", filesystem=0.006; "Filesystem"'

The format is very simple: the Server-Timing header contains key=value pairs with an optional description. The time is written in seconds, so the example above translates as:

cpu=0.009; "CPU time" ==> 9ms spent on CPU
mysql=0.005; "MySQL time" ==> 5ms spent on MySQL queries
filesystem=0.006; "Filesystem" ==> 6ms spent on the filesystem

Obviously, these are just arbitrary fixed values, but this opens a lot of opportunities for either the application or the webserver to include optional debug headers that can be used inside the browser to see how server time was spent.

If you want to include this in your own application, it's just a matter of setting the correct Server-Timing header with the correct format, Chrome interprets the rest.
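As a rough sketch of that idea (Python, with a hypothetical helper name and the same made-up timing values as above; this follows the header format shown in this post, which may differ from any later standardized Server-Timing syntax):

```python
def server_timing_header(timings):
    """Build a Server-Timing header value from {name: (seconds, description)}."""
    parts = []
    for name, (seconds, description) in timings.items():
        # One entry per measurement: key=seconds; "Description"
        parts.append('%s=%.3f; "%s"' % (name, seconds, description))
    return ", ".join(parts)

# Arbitrary example values, mirroring the .htaccess snippet above.
header = server_timing_header({
    "cpu": (0.009, "CPU"),
    "mysql": (0.005, "MySQL"),
    "filesystem": (0.006, "Filesystem"),
})
print(header)
# cpu=0.009; "CPU", mysql=0.005; "MySQL", filesystem=0.006; "Filesystem"
```

Your framework would then attach this string as the `Server-Timing` response header, and Chrome does the rest.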

Such a cool feature!


February 07, 2017

The post Review: Ubiquiti’s Amplifi HD, mesh WiFi networking done right? appeared first on

Our home network got an upgrade a few weeks ago when the Amplifi HD arrived. It consists of a base WiFi unit and 2 extra antennas to guarantee good coverage in your house.

Unlike regular "Access Point Extenders", these create a mesh WiFi network that offers a single SSID and seamless transfer between antennas throughout the house. It's supposed to be WiFi done right based on the proven technology of its parent company, Ubiquiti.

But is it?

Order & unboxing

I made the mistake of ordering the Amplifi HD in August of last year, when it was available in the US. Unfortunately, they didn't ship to the EU until a few weeks ago -- so it was a long wait.

But when it did finally arrive, the packaging and unboxing experience felt premium, much like an Apple product or high-end Android device.

For starters, it's a big box that contains all the hardware.

It's a heavy box, too. But there's a lot of hardware inside.

If you've ever unpacked an iPhone or high end Android device, this looks similar. Gone are the days where packaging is just packaging. This is your first hands-on experience with your purchase, the packaging counts and offers your first impressions.

Once unpacked, we see:

  • A base unit (the square white box)
  • 2 extra antennas / extenders
  • Power and network cords (in the black box)

Those white antennas deserve some extra attention. They're essentially the power of Amplifi, as you can plug more into outlets all around the house. They connect to either the base unit (best case scenario) or another antenna (worst case scenario) and automatically extend the reach and strength of your wireless network.

They don't have any UTP/network capacities, they only extend your network via WiFi. This means you just plug them in and leave them.

It also means you can't use an existing cable network to extend your WiFi network further.

Those antennas are attached via a powerful magnet, which is useful if one is placed in a location where a vacuum cleaner or a little kid could accidentally knock it off. Instead of breaking, it gracefully disconnects. It also allows you to tilt them ever so slightly in the direction you want.

Very powerful piece of technology. In theory, I could buy more antennas and spread them throughout the garden or attic and further enhance the range.

But, because of the way the power outlet is structured, there is a downside. Amplifi was originally designed and built for the US and only recently shipped to the EU. In Belgium, that makes those antennas look like they're clumsily pointed downwards.

Update: turns out, they don't officially ship to Belgium yet, only a handful of EU countries. This might explain a thing or two.

And for the close-up:

It's an aesthetic issue, because for the WiFi signal itself it doesn't seem to matter much whether it's pointing up or down.

Speed tests

Let's benchmark this.

I have a 200Mbps internet connection at home. With the default ISP-provided router (Telenet), I could only ever reach ~100Mbps. This was measured multiple times, the screenshot below is the highest it ever got.

With the Amplifi HD, using the exact same uplink, that download capacity doubled.

For raw throughput, the Amplifi HD doubled the speed of our home network.


I mostly got the Amplifi HD for range, to get better coverage in the garden and our 2nd floor -- where my "home office" is.

The ISP-provided router got a signal up there, but it never felt very strong. With the Amplifi I get all bars on both the laptop and mobile phones.

I did miscalculate how I should set up the Amplifi hardware. My original plan was: Base Unit -> Extender -> Extender. That would have allowed me to keep the base unit on the ground floor and place an extender on each floor of the house.

However, the documentation mentions that while this is possible, for best performance you're better off with an Extender <- Base Unit -> Extender setup.

So I redid a few cables and it's now running as such.

Aesthetics & design

I chose the Amplifi over other competitors in part because of its design. After all, the router usually sits somewhere in the living room (for best reception), so it might as well be something pretty to look at.

Forgive my amateurish video skills, but here's a small video of it in action -- in the dark -- to show the LED & display. Note the LED strip at the bottom can be configured in strength, and the display in the center is a touch screen you can use to cycle through various output options.

The small display can also display diagnostics, like which ports actually have connection and which don't.

Besides a clock display it can also show the current bandwidth (in Mbps) your internet connectivity is consuming. This updates in real-time and can give you some good insights when you're watching YouTube or Netflix to see the different bandwidth requirements each stream has.

In this example, I filmed it when running a simple speedtest (goes ~150Mbps down and 19Mbps up).

I know design is a matter of personal taste, but I very much prefer this look over the traditional Nighthawk routers people recommend for high-performance WiFi.

I'll take the Amplifi design over that any day of the week.

Setup & configuration via Android/iPhone app

The Amplifi doesn't have a web interface to configure the router. Well, it does actually have one, but it only allows you to configure it's static/DHCP network settings, nothing more.

There's a very powerful iOS & Android app, though. Since I have an iPhone, the following review assumes you have an iOS device too.

Before I continue, I feel I should point out that while the iOS app offers a lot of features, it doesn't encourage security. Since your mobile device is the one you use for setup, it's also the one you have to use to create a strong WPA2 key. You can't change it via the limited web interface.

You aren't exactly motivated to enter a strong passphrase if you have to type it on your mobile device.

Setup is very smooth nonetheless. The router and iPhone app communicate via bluetooth making it very easy to configure without having to guess/nmap the DHCP IP address and find the web interface.

Once inside the app you have the ability to configure the IP settings, see who's connected to the WiFi, pause/enable their internet access, enable or disable the guest WiFi (which gets put on a separate DHCP network), ...

There are limitations in the sense that you can't configure any firewall or VLAN'ing, but it's a home product after all -- nothing like the bigger brother Ubiquiti.

Feature updates & changelog

In the 2 weeks it's been running, Amplifi updated the base router firmware with new parental control options. The changelog is pretty accurate for keeping up with those features.

In the latest update, they introduced a limited parental control where you can group devices into different groups and assign time based policies for when internet connectivity is allowed.

This seems interesting for households with teenagers that want a more formal approach to limiting online time.


This doesn't come cheap, unfortunately. You can get these on (Amplifi HD) (affiliate link) for 429eur, VAT included.

Design and stability come at a price, and while ~400eur is a lot of money for a WiFi/router system, it has also become the most heavily used network in our household. Our mobile phones, laptops, multiple Chromecasts, ... all run on it, and lost or slow WiFi is incredibly frustrating.

In fact, since installing the Amplifi HD, I've had zero problems with casting to Chromecast anymore, a problem that frequently occurred before this setup.


There's good & there's room for improvement.

The good:

  • Excellent WiFi coverage
  • Nice looking design
  • Powerful iOS app for managing the router/guest network/parental control/...
  • iOS app gets updates
  • Very smooth update process of the base unit + extenders

Can be better:

  • No web interface for configuration
  • Grounding on power outlet of extenders causes it to look upside-down
  • Limited features in the app

Would I recommend it? If you care about stable WiFi, good throughput and a good looking WiFi box: yes.

If you're a power user that wants VLAN'ing, firewalling, a captive portal, ... you're better of looking at Ubiquiti for the same budget.

If you're from Belgium/EU and are looking at buying one of these, they're available online (and in stock) at (Amplifi HD) (affiliate link).


I sadly had to miss out on the FOSDEM event. The entire weekend was filled with me being apathetic, feverish and overall zombie-like. Yes, sickness can be cruel. It wasn't until today that I had the energy back to fire up my laptop.

Sorry for the crew that I promised to meet at FOSDEM. I'll make it up, somehow.

In "Pourquoi nous regardons les étoiles" ("Why we look at the stars"), I explained that, to me, humanity is a multicellular organism in the process of acquiring a nervous system (writing and the Internet) and, soon, a consciousness.

As an anecdote, it is interesting to note that the artificial-intelligence software MogIA predicted, by analyzing the social networks Twitter, Facebook and Google, that Trump would be elected, while the traditional media were convinced of Hillary Clinton's victory.

I am convinced that, although still in its infancy, a global consciousness is emerging on social networks.

And you have a crucial role to play in giving this consciousness a direction, in instilling in it the values you hold dear.

The "fake news" trap

Since Trump's election, there has been a debate about the sharing of "fake news" on social networks: hoaxes presented as real news which allegedly influenced American voters.

But isn't this fake news just as representative of our collective consciousness as any other information? Reality is a complex notion, and its representation is necessarily subjective.

After all, our European society was built on a book, the Bible, which contains so many contradictions and absurdities that the question of its truth should not even arise. Yet it was both the fruit and the shaper of our collective consciousness for several centuries.

If we want our collective consciousness to grow, we must not filter out false news; we must learn to be critical and to understand it for what it is: a legitimate part of ourselves.

There is no real or fake news, only different expressions of our shared perception. Every "real" news item is still filtered through the subjectivity of whoever wrote it.

Building a collective consciousness on Facebook

I therefore propose to analyze the influence that social networks have on us, and that we can have on them, starting with Facebook.

For each action we can take, I have identified three effects:

  • The effect we have on others.
  • The effect the social network has on us.
  • The latitude we give to advertisers.

Liking a Facebook page

On Facebook, "Like" is a misleading term. It would be more accurate to say "I publicly support". For example, by liking the Ploum page, you publicly support my work as a blogger.

The effect on others is relatively important, since it amounts to recommending Ploum to your friends. The more "Likes" a page has, the more credible and important it is considered, especially by the traditional media. Your "Like" therefore carries real weight (just like following me on Twitter: the number of followers is perceived as a measure of a person's importance).

The effect it has on you is very small, even nil, unless you click on "See first". If you don't, you will see almost none of that page's posts. The reason is simple: Facebook charges page owners to reach their own fans.

Facebook's business, then, is to encourage you to like my page and then to make me pay so that my posts reach you.

One important point: you hand over a huge share of your attention to advertisers if you like generic pages or pages tied to specific domains. If you like everything related to cycling, you will be flooded with cycling-related ads. It is by using this technique that Trump got elected despite a ridiculous campaign and a very limited budget.

So be vigilant and review all your "Likes", especially those you made at one time or another to enter a contest. Only like what you consider a genuine public endorsement.

Personally, I avoid brands, large corporations and generic concepts. I like people, little-known artists, and local organizations or businesses I actively want to promote.

Take the opportunity to like my page on Facebook and Twitter! Am I not a little-known artist?

Liking and resharing content on Facebook

By liking and resharing content, you have maximum effect on Facebook. Not only do you give weight to a piece of content, you also increase the probability that those around you will be exposed to it.

Beware, there is a catch: that weight goes to the content, not to the message above it. If you like a post of the type "This far-right site is scandalous", you give weight… to the site in question, and you help that site spread on Facebook.

It is therefore important not to share what outrages you, but rather sites, articles and videos you genuinely support.

The other side of the coin: you lay yourself bare to advertisers. If you like articles about cycling, they will eventually figure out that you like cycling, even though you have not liked any related page.

In short, be very careful with what you share, and be positive!

If you want to give "consciousness" to our Facebookian humanity, I encourage you to share articles, texts and videos that you find truly interesting, that have substance, that seem relevant to you.

Personally, I try to avoid "revelations" of the "what doctors/politicians/the media are hiding from you" type, and funny or shocking videos or images that carry no real reflection. I also avoid automated posts such as quizzes, polls or "which cat/actor/character are you?". I likewise try to avoid what provokes reactions but is ultimately anecdotal. I flee the manipulation of emotions.

By posting substantive articles or videos, you invite reflection and the exchange of ideas. By developing calm, positive arguments in the comments, you help our collective consciousness grow. That is precisely what I try to do, on my small scale, with the articles I write or share.

Who shapes those who shape us?

In conclusion, it appears that Facebook is above all a machine for shaping us. The influence Facebook has on us is maximal, while the influence we have on Facebook is minimal.

Minimal, but real!

If more and more of us use Facebook fully conscious of what we are doing, the effect will be tangible!

I understand the desire many of you have to leave Facebook, or never to join it. Unfortunately, we can no longer deny the importance this tool has taken on in shaping our humanity. To the point that I find it more and more representative of the "consciousness of humanity". So is it better to leave it entirely, or to try to shape it in our image? It is up to each of us to choose, consciously…


Photo by Frans de Wit.

This text was published thanks to your regular support on Tipeee and Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE license.

February 06, 2017

This is embarrassing. Blackmagic Design. Expensive video hardware. Streaming audio and video out of sync. Shitty image quality. In a studio. Prepared for hours.

Just yesterday, we did lots and lots better. Far, far, far better audio quality in the rooms where we actually controlled the audio. Tons better video quality in the rooms where we had some control of the lighting. In 40 rooms. Without someone handling the camera. Across a network that was not ours. Set up in four hours. With a pile of 120€ laptops. More live mixing features. Higher resolution. At FOSDEM.

Blackmagic, call us. We might be able to help you.

On Thursday, March 16, 2017 at 7 p.m., the 57th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: MySQL: InnoDB Cluster, followed by JSON Datatype (a double talk):

  1. MySQL InnoDB Cluster: when MySQL and High Availability become one
  2. MySQL JSON Datatype and X DevAPI: when SQL meets NoSQL

Theme: databases | development

Audience: developers | sysadmins | companies | students | …

Speaker: Frédéric Descamps (Oracle Belgium)

Venue: Université de Mons, Faculté Polytechnique, Site Houdain, Rue de Houdain 9, lecture hall 3 (see this map on the UMONS website, or the OSM map). Enter through the main door, at the back of the main courtyard, and follow the signs from there.

Attendance is free and only requires registration, by name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page. The session will be followed by a friendly drink (everything will be over by 10 p.m. at the latest).

The Jeudis du Libre in Mons also enjoy the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to consult the agenda and to subscribe to the mailing list so as to receive every announcement.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized on the premises of, and in collaboration with, the Mons universities and colleges involved in computer science education (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which promotes free software.

Description:

  • 1. MySQL InnoDB Cluster: when MySQL and High Availability become one:

High availability for MySQL databases is a fairly well-known problem, and many solutions exist, with varying degrees of success. Unfortunately, these solutions are often complex, external, and not always reliable.

With MySQL InnoDB Cluster, MySQL bundles a very easy-to-deploy High Availability solution into one and the same distribution. During this presentation I will detail the various components of InnoDB Cluster:

  • MySQL Group Replication
  • MySQL Router
  • MySQL Shell & Admin API.

I will close the presentation with a demonstration of the creation of a cluster in just a few minutes.

  • 2. MySQL JSON Datatype and X DevAPI: when SQL meets NoSQL

With MySQL 5.7, we added support for the JSON datatype. The new X DevAPI brings MySQL users the best of both worlds, SQL and NoSQL.

During this session I will introduce the Document Store and the latest developments in that area. Examples will back up the theory and illustrate our implementation.

Short bio: Frédéric Descamps is the MySQL Community Manager for EMEA & APAC. He joined the MySQL Community team in May 2016. “@lefred” worked as an OpenSource and MySQL consultant for more than 15 years. His favourite areas of expertise are High Availability and performance.

... but that doesn't mean the work is over.

One big job that needs to happen after the conference is to review and release the video recordings that were made. With several hundred videos to be checked and only a handful of people able to do so, review was a massive job that for the past three editions took several months; e.g., in 2016 the last video work was done in July, when the preparation of the 2017 edition had already started.

Obviously this is suboptimal, and therefore another solution was required. After working on it for quite a while (in my spare time), I came up with SReview, a video review and transcoding system written in Perl.

An obvious question, though, is why I wrote yet another system and did not use something that already existed. The short answer to that is "because what's there did not exactly do what I wanted it to". The somewhat longer answer also involves the fact that I felt like writing something from scratch.

The full story, however, is this: there isn't very much out there, and what does exist is flawed in some ways. I am aware of three other review systems that are or were used by other conferences:

  1. A bunch of shell scripts that were written by the DebConf video team and hooked into the penta database. Nobody but DebConf ever used it. It allowed review via an NFS share and a web interface, and required people to watch .dv files directly from the filesystem in a media player. For this and other reasons, it could only ever be used from the conference itself. If nothing else, that final limitation made it impossible for FOSDEM to use it, but even if that hadn't been the case, it was still too basic to ever be useful for a conference the size of FOSDEM.
  2. A review system used by the CCC "voc" team. I've never actually seen it in use, but I've heard people describe it. It involves a complicated setup of Samba servers, short MPEG transport stream segments, a FUSE filesystem, and kdenlive, which took someone several days to set up as an experiment back at DebConf15. Critically, important parts of it are also not licensed as free software, which to me rules it out for a tool in support of FOSDEM. Even if that wasn't the case, however, I'm still not sure it would be ideal; this system requires intimate knowledge of how it works from its user, which makes it harder for us to crowdsource the review to the speaker, as I had planned to.
  3. Veyepar. This one gets many things right, and we used it for video review at DebConf from DebConf14 onwards, as well as FOSDEM 2014 (but not 2015 or 2016). Unfortunately, it also gets many things wrong. Most of these can be traced back to the fact that Carl, as he freely admits, is not a programmer; he's more of a sysadmin type who also manages to cobble together a few scripts now and then. Some of the things it gets wrong are minor issues that would theoretically be fixable with a minimal amount of effort; others would be more involved. It is also severely underdocumented, and so as a result it is rather tedious for someone not very familiar with the system to be able to use it. On a more personal note, veyepar is also written in the wrong language, so while I might have spent some time improving it, I ended up starting from scratch.

Something all these systems have in common is that they try to avoid postprocessing as much as possible. This only makes sense; if you have to deal with loads and loads of video recordings, having to do too much postprocessing only ensures that it won't get done...

Despite the issues that I have with it, I still think that veyepar is a great system, and am not ashamed to say that SReview borrows many ideas and concepts from it. However, it does things differently in some areas, too:

  1. A major focus has been on making the review form be as easy to use as possible. While there is still room for improvement (and help would certainly be welcome in that area from someone with more experience in UI design than me), I think the SReview review form is much easier to use than the veyepar one (which has so many options that it's pretty hard to understand sometimes).
  2. SReview assumes that as soon as there are recordings in a given room sufficient to fill all the time that a particular event in that room was scheduled for, the whole event is available. It will then generate a first rough cut, and send a notification to the speaker in question, as well as the people who organized the devroom. The reviewer will then almost certainly be required to request a second (and possibly third or fourth) cut, but I think the advantage of that is that it makes the review workflow be more intuitive and easier to understand.
  3. Where veyepar requires one or more instances of per-state scripts to be running (each of which polls the database and starts a transcode or cut script as needed), SReview uses a single "dispatch" script, which needs to be run once for the whole system (if using an external scheduler) or once per core that may be used (if not), and which does all the required database polling. The use of an external scheduler seemed more appropriate, given that things like gridengine exist; gridengine is a job scheduler which allows one to submit a job to be run on any node in a cluster, along with the resources that this particular job requires, and which will then either find an appropriate node to run the job on, or put the job in a "pending" state until the required resources can be found. This allows me to more easily add extra encoding capacity when required, and also lets me allocate fewer resources to a particular part of the whole system, even while jobs are already running, without necessarily needing to abort jobs that might be using those resources.

The system seems to be working fine, although there's certainly still room for improvement. I'm thinking of using it for DebConf17 too, and will therefore probably work on improving it during DebCamp.

Additionally, the experience of using it for FOSDEM 2017 has given me ideas of where to improve it further, so it can be used more easily by different parties, too. Some of these have been filed as issues against a "1.0" milestone on github, but others are only newly formed in my gray matter and will need some thinking through before they can be properly implemented. Certainly, it looks like this will be something that'll give me quite some fun developing further.

In the meantime, if you're interested in the state of a particular video of FOSDEM 2017, have a look at the video overview page, which lists all talks along with their review/transcode status. Also, if you were a speaker or devroom organizer at FOSDEM 2017, please check your mailbox and review your talk! With your help, we should hopefully be able to release all our videos by the end of the week.

Update (2017-02-06 17:18): clarified my position on the qualities of some of the other systems after feedback from people who were a bit disappointed by my description of them... and which was fair enough. Apologies.

Update (2017-02-08 16:26): Fixes to the c3voc stuff after feedback from them.

February 05, 2017

Around me, many people are outraged at the various injustices, at the blatant and indisputable dishonesty of the politicians who govern us. And they wonder: "How is it that we tolerate this? Why are there no revolutions?"

The answer is simple: because we have too much to lose.

From the barely veiled, well-known corruption of our leaders to the inhuman injustices of our system, we denounce but do not act.

We are afraid of losing…

Thanks to increased productivity, our society could feed and house us comfortably without trouble. Yet we are all still kept in the superstition that one must work to earn the right to survive. We are afraid of poverty, even of asking for help, of being assisted.

Work is an increasingly scarce commodity? Work is the only way to survive?

These two beliefs are so deeply rooted that they make it hard to take the slightest risk to change things. Because any change could be worse. And bringing about change is an individual risk!

We have too much to lose…

A mortgage, a big car, a smartphone assembled in China, a laptop, brand-name shirts made by children in Bangladesh. Consumer society pushes us to find in purchases and ostentatious luxury an answer to all our ills.

We treat hyper-possession with compulsive purchases. We are not even interested in the objects themselves but in the mere act of buying, of owning. Otherwise, we would buy quality.

And the result is that we own an enormous amount of junk, poor-quality products that we do not use but that we pile up and refuse to throw away so as not to call our purchases into question. We own so much that any change frightens us. Would we be ready, like our great-grandparents, to take to the road carrying only a single suitcase? We worked so many hours to buy these goods that we store, often without using them once the novelty has worn off.

Taking risks

The more we own, the more we have to lose. Add to this that, in society's creed, any deviation from the norm is perceived as an incredible, vital risk. If you do not own your house, you risk being destitute in retirement. If you do not have a job, you are socially a pariah and will be on the street tomorrow. Asking friends or family for help would be the height of dishonour.

It is therefore preferable to perpetuate the system as it is, and above all to change nothing.

So we grumble at the blatant injustices, the failings. We cling to small perks, to pay raises.

We will vote for "change" while hoping with all our heart that nothing changes.

Revolutions

We must face the facts: revolutions are made by people who are ready to sacrifice their lives.

Yet we are not even ready to sacrifice our new television and our fuel card.

As long as we strive to protect our jobs and our small perks, even at the cost of our values and our own moral rules, we condemn ourselves to sustaining the system.

On the other hand, we can learn to buy responsibly, to stop sacrificing our values for a job, to break free from our alienations. We can learn to question what we buy and what we do to earn a living. We can learn to refuse to be manipulated.

Perhaps that is quite simply the next revolution: becoming aware of the consequences of our individual actions, taking back power over our lives instead of merely replacing, at regular intervals, those to whom we delegate power.


Photo by Albert.

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE license.

I published the following diary on “Many Malware Samples Found on Pastebin“. Pastebin is a wonderful website. I’m scraping all posted pasties (not only from and passing them through a bunch of regular expressions. As I said in a previous diary, it is a good way to perform open source intelligence. Amongst many configuration files, pieces of code with hardcoded credentials, dumps of databases or passwords, sometimes it pays off and you find more interesting data… [Read more]

[The post [SANS ISC Diary] Many Malware Samples Found on Pastebin has been first published on /dev/random]

February 04, 2017

FOSDEM isn't finished yet, but the first videos are already being transcoded and uploaded to our video archive. As of this writing, 43 videos have been released, with 7 more being transcoded. Speakers who had a talk on Saturday should have received an invitation by email to review their talks on our review system; if you had a talk yesterday and haven't checked your email yet, please do so, so that we can release videos as soon as possible. Those interested in the state of a particular talk can check out the overview page for up-to-date information.
FOSDEM was informed of possible attempts, at several FOSDEM locations, to harvest usernames and passwords. The official network names (SSIDs) on the campus during FOSDEM were "FOSDEM" and "FOSDEM-ancient". FOSDEM has not provided any other internet access. The official FOSDEM wireless network did NOT ask for any credentials. E.g. the use of the network name (SSID) "FOSDEM FreeWifi by Google" was neither authorised nor condoned by FOSDEM. This network presented a login page resembling a login page of Google. FOSDEM is taking action regarding these incidents. If you think you might be affected we advise you to take all…

I published the following diary on “Detecting Undisclosed Vulnerabilities with Security Tools & Features“.

I’m a big fan of OSSEC. This tool is an open-source HIDS and log management tool. Although often considered the “SIEM of the poor”, it integrates a lot of interesting features and is fully configurable to solve many of your use cases. My whole infrastructure has been monitored by OSSEC for years… [Read more]

[The post [SANS ISC Diary] Detecting Undisclosed Vulnerabilities with Security Tools & Features has been first published on /dev/random]

February 01, 2017

I got myself a bunch of Raspberry Pi 3s a short while ago:

Stack of Raspberry Pis

...the idea being that I can use those to run a few experiments on. After the experiment, I would need to be able to somehow remove the stuff that I put on them again. The most obvious way to do that is to reimage them after every experiment; but the one downside of the multi-pi cases that I've put them in is that removing the micro-SD cards from the Pis without removing them from the case, while possible, is somewhat fiddly. That, plus the fact that I cared more about price than about speed when I got the micro-SD cards makes "image all the cards of these Raspberry Pis" something I don't want to do every day.

So I needed a different way to reset them to a clean state, and I decided to use ansible to get it done. It required a bit of experimentation, but eventually I came up with this:

- name: install package-list fact
  hosts: all
  remote_user: root
  tasks:
  - name: install libjson-perl
    apt:
      pkg: libjson-perl
      state: latest
  - name: create ansible directory
    file:
      path: /etc/ansible
      state: directory
  - name: create facts directory
    file:
      path: /etc/ansible/facts.d
      state: directory
  - name: copy fact into ansible facts directory
    copy:
      dest: /etc/ansible/facts.d/allowclean.fact
      src: allowclean.fact
      mode: 0755

- name: remove extra stuff
  hosts: pis
  remote_user: root
  tasks:
  - apt: pkg={{ item }} state=absent purge=yes
    with_items: "{{ ansible_local.allowclean.cleanable_packages }}"

- name: remove package facts
  hosts: pis
  remote_user: root
  tasks:
  - file:
      path: /etc/ansible/facts.d
      state: absent
  - apt:
      pkg: libjson-perl
      state: absent
      autoremove: yes
      purge: yes
This ansible playbook has three plays. The first play installs a custom ansible fact (written in perl) that emits a json file which defines cleanable_packages as a list of packages to remove. The second uses that fact to actually remove those packages; and the third removes the fact (and the libjson-perl library we used to produce the JSON).

Which means, really, that all the hard work is done inside the allowclean.fact file. That looks like this:


#!/usr/bin/perl

use strict;
use warnings;
use JSON;

open PACKAGES, "dpkg -l|";

my @packages;

my %whitelist = (
...
);

while(<PACKAGES>) {
    next unless /^ii\s+(\S+)/;

    my $pack = $1;

    next if(exists($whitelist{$pack}));

    if($pack =~ /(.*):armhf/) {
        $pack = $1;
    }

    push @packages, $pack;
}

close PACKAGES;

my %hash;

$hash{cleanable_packages} = \@packages;

print to_json(\%hash);

Basically, we create a whitelist, and compare the output of dpkg -l against that whitelist. If a certain package is in the whitelist, then we don't add it to our "packages" list; if it isn't, we do.

The whitelist is just a list of package => 1 tuples. We just need the hash keys and don't care about the data, really; I could equally well have made it package => undef, I suppose, but whatever.

Creating the whitelist is not that hard. I did it by setting up one Raspberry Pi to the minimum that I wanted, running

dpkg -l|awk '/^ii/{print "\t\""$2"\" => 1,"}' > packagelist

and then adding the contents of that packagelist file in place of the ... in the above perl script.
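To see what that one-liner produces, here is a self-contained sketch fed with two made-up lines of `dpkg -l` output (the package names are invented for illustration; only the line starting with `ii` survives the filter):

```shell
# Two fake dpkg -l lines: one installed package (ii), one removed-but-configured (rc).
printf 'ii  base-files     9.9+deb  armhf  base system files\nrc  old-package    1.0      armhf  no longer installed\n' |
  awk '/^ii/{print "\t\""$2"\" => 1,"}'
# Prints a single whitelist entry:
#	"base-files" => 1,
```

Each emitted line is already in the `"name" => 1,` shape that the perl hash expects, so the file can be pasted in verbatim.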

Now whenever we apply the ansible playbook above, all packages (except for those in the whitelist) are removed from the system.

Doing the same for things like users and files is left as an exercise to the reader.
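As a hedged sketch of the user half of that exercise (the user name "experiment" is made up for illustration, not something from the original setup), an extra play in the same style might look like:

```yaml
- name: remove experiment users
  hosts: pis
  remote_user: root
  tasks:
  - user:
      name: experiment
      state: absent
      remove: yes
```

The `remove: yes` flag of the user module also deletes the account's home directory, which fits the "reset to a clean state" goal.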

Side note: ... is a valid perl construct. You can write it, and the compiler won't complain. You can't run it, though.

The post Standardising the “URL” appeared first on

You'd think that the concept of "a URL" would be pretty clearly defined by now, with the internet being what it is today. Well, turns out -- it isn't.

But Daniel Stenberg, from curl fame, is trying to fix that.

This document is an attempt to describe where and how RFC 3986 (86), RFC 3987 (87) and the WHATWG URL Specification (TWUS) differ. This might be useful input when trying to interop with URLs on the modern Internet.

This document focuses on network-using URL schemes (http, https, ftp, etc) as well as 'file'.
URL Interop

What really strikes me as odd is the interoperability comparison for each "fragment" in the URL:

Component     Value           Known interop issues exist
scheme        http            no
divider       ://             YES
userinfo      user:password   YES
hostname                      YES
port number   80              YES
path          index.html      YES
fragment      top             no
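The userinfo and port rows are easy to believe once you try splitting a URL by hand. As an illustration only (a naive split, not a compliant parser), a few lines of shell parameter expansion show how much guessing is involved:

```shell
# Naive URL splitting with shell parameter expansion; real parsers disagree
# on several of these steps, which is exactly what the table documents.
url='http://user:password@example.com:80/index.html#top'
rest=${url#*://}           # drop the scheme and the :// divider
hostport=${rest%%/*}       # everything up to the first slash
host=${hostport##*@}       # drop userinfo; ambiguous when '@' appears twice
host=${host%%:*}           # drop the port
echo "$host"               # → example.com
```

A crafted URL such as `http://good.com@evil.com/` already makes different implementations of that userinfo step disagree about which host is meant.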

It's amazing a "URL" even works.

I've said it before and I'll say it again: the internet is held together with duct tape. I hope this proposal gets somewhere, it'll make parsing URLs a whole lot easier and more reliable.

Source: URL Interop


January 31, 2017

The post htop Explained Visually appeared first on

This deserves a place next to Brendan Gregg's performance diagrams.

If you take top and put it on steroids, you get htop.

htop has an awesome visual interface that you can also interact with using your keyboard. The screen packs a lot of information which can be daunting to look at. I tried to find a nice infographic to explain what each number, value or color coded bars mean, but couldn’t find any. Hence I decided to make one myself over the Christmas break.

Source: htop Explained Visually · Code Ahoy


We proudly present the last set of speaker interviews. See you at FOSDEM this weekend! Andrew Savchenko: Quantum computing and post-quantum cryptography. A gentle overview Arun Thomas: RISC-V. Open Hardware for Your Open Source Software Benjamin Henrion (zoobab): Pieter Hintjens In Memoriam. For ZeroMQ and all the rest Bradley M. Kuhn: Understanding The Complexity of Copyleft Defense. After 25 Years of GPL Enforcement, Is Copyleft Succeeding? Dwayne Bailey: Continuous Localisation using FOSS tools. Building a fast responsive localisation process using open source tools Georg Greve: Let's talk about hardware: The POWER of open. How Open POWER is changing the game…

The post Implementing “Save For Offline” with Service Workers in the Browser appeared first on

I've been living under a rock for a while it seems, as I've heard the term Service Worker before but had no idea what it is, does or how it works.

This guide does a stellar job of explaining it and showing a practical implementation of Service Workers.

Lots of JavaScript code examples along the way.

If you're wondering what a service worker is, it's like a little alien that lives on your page [in the browser] and relays messages for you. It can detect when you have an Internet connection and when you don't, and can respond in different ways based on the response.

Source: Implementing "Save For Offline" with Service Workers


January 29, 2017

The post A change of RSS feeds appeared first on

For anyone subscribed via RSS feeds, here's a quick heads-up that the number of posts on this blog might go up significantly.

I plan to do more link blogging, where I link to external sites with a bit of commentary. I know, it's the early 2000s all over again.

That means you'll see short, link-based blogposts appear in your RSS feed. If you don't want that, there are a few alternative RSS feeds to subscribe to:

If you previously subscribed to the /feed/ URL, you'll get all the blogposts as well as the link posts. If that's not what you want, change to one of the RSS feeds above.

Take care!


This short story, illustrated by Alexei Kispredilov, was published in issue 4 of the magazine Amazing, which you can order right now via this link.

A slight metallic jolt tells me that the shuttle has just docked with the space station. I cannot suppress a feeling of pride, even of superiority, as I observe my fellow travellers. Whether it is this greenish triplet of alligodiles from Procyon Beta, this wiry, tentacled Orionian or this dark-skinned human couple, they are all mere travellers who will only pass through the station for a few hours.

Whereas I have been hired! I am an intern!
With my nostrils wide open, I savour the very particular smells of spaceport Omega 3000. A musty scent of blending, of mingling, of races, of species. An indefinable, mongrel odour, common to all spaceports. The smell of life in the galaxy.

"Tell me, young man, you wouldn't happen to have seen, on your shuttle, a lady by the name of Nathan Pasavan? I am certain she was supposed to be on your flight, but…"
Startled, I am pulled out of my reverie by an imposing lady with curly grey hair. Her lips seem to move faster than the speed at which her words reach me. "I am Nathan Pasavan," I say, trying to sound as confident as I can.
Her body seems to freeze for an instant, and she gives me an amused look. "You? You are the intern?"
"Yes, I was selected after the years of training, the exams and the physical tests. I am very proud to have been assigned to Omega 3000."
"But… You are a man! Still a boy, even. I wasn't expecting a young man. Well, after all, why not. How many spaceports have you visited so far?" "Well… One!"
"One! Which one?"
"This one…" I say, stammering a little.
She suddenly bursts out laughing.
"You're cute. Come on, intern, get to work!"
With a firm grip, she drags me through the maze of the station before planting a dripping brush in my hands.
"Here," she says, pointing at a door bearing an abstruse hieroglyph. "Get to work, intern! Show us that a man can do a real woman's job!"

Taken aback, I stand petrified, brush in hand. Behind me I hear the door close, while a pestilential stench violently assaults my nostrils. In the half-darkness I manage to make out an incredible carnage: a large toilet overflowing with a greenish matter, giving off nauseating fumes that shimmer at the edge of the visible spectrum. A Betelgeusian toilet, visibly poorly maintained and used by an army of slimy blobs in transit between Earth and their home planet.

Gripping the brush in my hand, I swallow hard. Is this how interns are welcomed? With the most degrading, the most demeaning tasks? Is it because I am a man?

Rolling up my sleeves, I let out a sigh before… Memories of my training surface. That shimmer is characteristic of the excrement of femâle Betelgeusians, a sexual caste that eats and excretes nothing but an organically friable foam. It's a trap! If I dip my brush in, the foam will colonize it and use the soap particles to feed itself and grow. That is certainly what happened in this toilet. The excrement of femâle Betelgeusians is, however, particularly sensitive to light. Following my intuition, I find the switch and flood the room with harsh white light. At once, the vile slimy matter starts crawling away towards the only dark exit: the toilet. Within seconds, the room shines like a new penny. The door opens and my mentor stares at me with a surprised smile.

"Not bad for a male," she tells me. "Welcome to Omega 3000. I am Yoolandia, head Madame Pipi of the whole spaceport. You are now trainee Madame Pipi." Solemnly, she hands me the traditional pink flowered duster coat and the coin saucer, the historic insignia of the office. Unable to suppress an immense feeling of pride, I immediately stand at attention.


"Alert! Alert! Situation in box 137."
My digital screen has just lit up and I leap up at once. I have only been on Omega 3000 for a few months, but my body has already acquired the reflexes of the job. Yoolandia looks me over with a mocking glance.

"Yet another mission our little genius will pull off with the jury's congratulations. You'll soon earn the official rank of Madame Pipi and put me in the shade if you keep this up!"

"Why am I not entitled to be called Monsieur Pipi?"

She sighs…

"The term has never been used. The office is too important for a man. Now go! Box 137 needs you!"

Turning on my heel, I plunge into the maze of the station. Why shouldn't men be allowed to hold high office? For the role of Madame Pipi is essential in a space station! When the races of the universe made contact, we very quickly realized that the concept of sex was extremely variable, even undefined, from one race to another. Excreting, on the other hand, seemed to be a constant of nature.

Human beings were the only race to offer separate toilets for the different sexes. While even sports competitions had long since abolished all categorization by sex, toilets remained, now as ever, a place of segregation. The rest of the galaxy found this fashion so eccentric that it decided to adopt it. But since most races are not divided into males and females, separate toilets had to be instituted for every case that might arise. The Amurian lepidopteran, for instance, goes through 235 changes of sex over the course of its existence, thirty of them within one single hour. Fortunately, it only travels in 17 of those states.

The role of Madame Pipi is therefore essential in a spaceport. It requires long years of study and an utterly rigorous physical training. In order, for example, to solve the case now awaiting me in box 137: a box normally reserved for superfemales, which an agendered Tronesque slug has entered by mistake.

The excrement of a Tronesque slug looks like transparent marbles that keep growing on contact with water unless they are dissolved first. By flushing, the Odarian superfemales have thus found themselves trapped among gigantic translucent spheres.

A fairly routine textbook case.


"Alert! Alert! Situation in box 59!"
I heave a deep sigh. All these problems seem so simple to solve, so artificial. The truth is, I am bored!


"Congratulations, Nathan!"
While kissing me on both cheeks, Yoolandia symbolically sticks a coin in my saucer. The director of spaceport Omega 3000 addresses the assembly in person.

"We are very proud, today, to name Nathan a certified Madame Pipi."
"Monsieur Pipi," I grumble under my breath.
"The coin Yoolandia has just placed in Nathan's saucer is a genuine prehistoric coin. This symbolic act recalls the historic importance of the office…"

But I am no longer listening. I am impatient to present my idea to Yoolandia, an idea that will revolutionize the office of Madame Pipi. While the director continues her harangue, I pull Yoolandia a little aside.
"So, are you ready for your speech?" she asks.
"Yes, and in fact I intend to present an incredible idea!"
"What idea?"
"Well, I have been thinking of abolishing toilet segregation…"
"How original," she says, bursting into a bitter laugh. "You think you're the first? Have you not understood that each type of excrement must be treated differently? That separation is necessary?"
"Precisely," I say. "I expected that objection. I have developed a very simple method that disposes of all types of excrement without producing the slightest undesirable reaction. A system based on pulsed light, vibrations and an assortment of Tronesque enzymes which…"
"You fool!"
As if by reflex, she has covered my mouth. There is terror in her eyes. I try to break free.

"No, be quiet," she says. "Don't you understand? Don't you see that you are killing our profession? Our prestige?"
"If you do that, it's the end of the Madames Pipi! You are certainly not the first to arrive at this solution. But those who found it quickly understood where their interest lay."
— Que veux-tu dire?
— Regarde-les, me dit-elle en pointant la foule hétéroclite des employé·e·s du spatioport. La moitié d’entre elleux nous obéissent. Iels sont plombier·e·s, nettoyeur·euse·s, spécialistes en sanitaires différenciés. Leur job dépend de nous! Tu ne peux pas simplifier la situation! Pense aux conséquences!
— Mais…
— Et maintenant, entonne la directrice dans son micro, je vous demande un tonnerre d’applaudissements pour Nathan, notre nouvelle Madame Pipi!
— Monsieur Pipi, murmuré-je machinalement !
Alors que je me dirige vers l’estrade, la voix de Yoolantia me parvient.
— Pense aux conséquences, Nathan. Pour ton emploi et ceux des autres.
À mi-chemin, je me retourne. Elle me darde de son regard perçant.
— N’oublie pas que tu es un mâle. Un mâle du box 227. Je suis une femelle du box 1. Alors, réfléchis bien aux conséquences…

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE license.

My old HP Color LaserJet 2820 is still working great, however it only supports being connected via ethernet or USB. As a workaround I've been using "Power Line Communication" to bridge directly from my router to the printer over the electrical wires.
Recently I've been revamping the home network and wanted to get rid of these last power line connections. Ideally I'd want the printer to join the wireless network.

The arrival of the Raspberry Pi 3 with built-in WiFi seemed like a good opportunity to implement this.
The Raspberry Pi is connected with its Ethernet port to the printer. The WiFi is connected to the local LAN, where I configured the router to give it a fixed IP address.

Approach 1: Cups printer server

I install the cupsys printing system on the Raspberry Pi and configure it to properly support the printer (using HPLIP). Besides that, I install a DHCP server to serve the printer an IP address on its own private network: the cable between the Raspberry Pi and the printer.
I change the configuration of the printer in Cups on the Raspberry Pi to make the printer shared on the LAN.
Now all other computers can just use that shared printer to print.
This works really well for printing, but it doesn't work at all for scanning.
When scanning, the scanning software (SANE) seems to want to connect directly to the printer, which of course isn't possible when the printer is on a different network and there is no routing.

Approach 2: Routing

I updated the Raspberry Pi to do routing, by enabling IP forwarding.
(uncomment the net.ipv4.ip_forward=1 line in /etc/sysctl.conf )
I also configure my router with an extra route to have it know about the network that is now reachable via the Raspberry Pi.
Now the printer can be directly communicated with. This allows me to configure the printer directly again on my Linux laptop.
Printing still works fine. However scanning still doesn't work.
When I tell the scanner software the IP address of the printer explicitly, it sometimes works, but most of the time it doesn't work at all.
My guess is that the scanner software relies on some communication that doesn't work outside its own network.
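Concretely, the routing approach boils down to something like this (a sketch with hypothetical addresses; adjust to your own subnets):

```
# On the Raspberry Pi: enable IP forwarding between wlan0 and eth0.
net.ipv4.ip_forward=1          # uncomment this line in /etc/sysctl.conf
sysctl -p                      # run as root to apply without rebooting

# On the home router: add a static route so the printer's private
# network (say 192.168.2.0/24) is reachable via the Pi's fixed WiFi
# address (say 192.168.1.2):
ip route add 192.168.2.0/24 via 192.168.1.2
```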

Approach 3: Proxy ARP

To learn more about the technicalities of proxy ARP, read the Wikipedia article. In summary, it is a trick that makes it seem to the rest of the network that the Raspberry Pi and the printer are the same machine.
In order for this to work I want the printer to have an address in the same network as the local net.
First remove the DHCP server installed for the previous approaches, then install a DHCP proxy. This is easily installed (apt-get install dhcp-helper) and configured (DHCPHELPER_OPTS="-b wlan0" in /etc/default/dhcp-helper).
There are multiple ways to enable proxy ARP. The easiest is to install parprouted (apt-get install parprouted) and configure it.
My /etc/network/interfaces now looks like this:

auto lo
iface lo inet loopback
auto eth0
allow-hotplug eth0
iface eth0 inet manual

allow-hotplug wlan0
iface wlan0 inet dhcp
  wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
  post-up /usr/sbin/parprouted eth0 wlan0
  post-down /usr/bin/killall /usr/sbin/parprouted
  post-up /sbin/ip addr add $(/sbin/ip addr show wlan0 | perl -wne 'm|^\s+inet (.*)/| && print $1')/32 dev eth0
  post-down /sbin/ifdown eth0

With proxy ARP in place the printer still works great, and finally the scanner also works in a stable way.

Alternative approaches

I could try hooking the printer to the Raspberry Pi via USB. As I've never used the printer in this way, I've not attempted this.

This morning in San Francisco, I check out from the hotel and walk to Bodega, a place I discovered last time I was here. I walk past a Chinese man swinging his arms slowly and deliberately, celebrating a secret of health us Westerners will never know. It is Chinese New Year, and I pass bigger groups celebrating and children singing. My phone takes a picture of a forest of phones taking pictures.

I get to the corner hoping the place is still in business. The sign outside asks “Can it all be so simple?” The place is open, so at least for today, the answer is yes. I take a seat at the bar, and I’m flattered when the owner recognizes me even though it’s only my second time here. I ask her if her sister made it to New York to study – but no, she is trekking around Colombia after helping out at the bodega every weekend for the past few months. I get a coffee and a hibiscus mimosa as I ponder the menu.

The man next to me turns out to be her cousin, Amir. He took a plane to San Francisco from Iran yesterday after hearing an executive order might get signed banning people with visas from seven countries from entering the US. The order was signed two hours before his plane landed. He made it through immigration. The fact sheet arrived on the immigration officers’ desks right after he passed through, and the next man in his queue, coming from Turkey, did not make it through. Needles and eyes.

Now he is planning to get a job, and get a lawyer to find a way to bring over his wife and 4-year-old child, who are now officially banned from following him for 120 days or more. In Iran he does business strategy and teaches at a university. It hits home really hard that we are not that different, he and I, and how undeservedly lucky I am that I won’t ever be faced with such a horrible choice. Paria, the owner, chimes in, saying that she’s an Iranian Muslim who came to the US 15 years ago with her family, and they all can’t believe what’s happening right now.

The church bell chimes a song over Washington Square Park and breaks the spell, telling me it’s eleven o’clock and time to get going to the airport.


January 28, 2017

The post Look before you paste from a website to terminal appeared first on

This is a clever way to trick you into copying hidden text from command-line examples. Luckily it's something that iTerm will warn you about when you paste a multi-line string or a string with a carriage return at the end.

The malicious code's color is set to that of the background, its font size is set to 0, it is moved away from the rest of the code, and it is made unselectable (the blue selection highlight doesn't reveal it), to make sure it works in all possible OSes, browsers and screen sizes.
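The terminal-side defence is easy to sketch. Here is a minimal illustration (the function name and character list are my own, not from the article) of the kind of check a terminal can run before letting a paste through:

```python
def looks_suspicious(pasted: str) -> bool:
    """Flag clipboard text that could execute immediately or hide
    content when pasted into a terminal."""
    # A newline or carriage return makes the shell run the command
    # the moment it is pasted, before you can inspect it.
    if "\n" in pasted or "\r" in pasted:
        return True
    # Zero-width and line-separator characters can hide extra
    # commands from the eye while remaining part of the paste.
    if any(ch in pasted for ch in ("\u200b", "\u200c", "\u2028")):
        return True
    return False

assert looks_suspicious("ls -al\nrm -rf /tmp/x") is True
assert looks_suspicious("echo hello") is False
```

A real terminal would then prompt for confirmation instead of silently forwarding the bytes to the shell.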

Source: Life plus Linux: Look before you paste from a website to terminal



January 27, 2017

The post Chrome 56 Will Aggressively Throttle Background Tabs appeared first on

If you rely on JavaScript's setInterval() in your code, chances are you're going to be in a world of pain in future Chrome releases.

When idle, your application's timers may be delayed for minutes. Aside from terrible hacks like playing zero-volume sounds, you have no control over this. Avoiding expensive CPU work is not a panacea: some applications must do significant work in the background, including syncing data, reading delta streams, and massaging said data to determine whether or not to alert the user.

Source: STRML: Chrome 56 Will Aggressively Throttle Background Tabs


The post Immutable Caching appeared first on

Before you implement Cache-Control: immutable, you had better have a good URL structure with versioning or some other method of cache invalidation.
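As an illustration of such a URL structure (the path and hash are hypothetical), a server handing out content-hashed assets like /static/app.3f9c2b.js could mark them immutable with an nginx rule along these lines:

```nginx
# The content hash in the filename changes on every deploy, so the
# URL itself is the cache invalidator; the response can therefore be
# cached for a year and marked immutable, and the browser will never
# revalidate it, not even on reload.
location /static/ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}
```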

When Firefox 49 shipped it contained the Cache-Control: Immutable feature to allow websites to hint which HTTP resources would never change. At almost the same time, Facebook began deploying the server side of this change widely. They use a URI versioning development model which works very well with immutable. This has made a significant impact on the performance of Facebook reloads with Firefox. It looks like other content providers will adopt it as well.

Source: Using Immutable Caching To Speed Up The Web ★ Mozilla Hacks – the Web developer blog


This Thursday, February 16, 2017, at 7 p.m., the 56th Mons session of the Belgian Jeudis du Libre will take place.

The topic of this session: QTaste, when software quality becomes vital

Theme: Programming|Development|Testing

Audience: Developers|Testers|Quality engineers|Students

Speakers: Laurent Vanboquestal (QSpin) & Patrick Guermonprez (IBA)

Venue: Technical campus (ISIMs) of the Haute Ecole en Hainaut, Avenue V. Maistriau, 8a, Salle Académique, 2nd building (see the map on the ISIMs website, and here on the Openstreetmap map).

Attendance is free and only requires your registration by name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also benefit from the support of our partners: CETIC, Normation, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to consult the agenda and to subscribe to the mailing list in order to systematically receive the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized on the premises of, and in collaboration with, the Mons colleges and university faculties involved in computer science education (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, active in promoting free software.

Description: QTaste is an application for automatically executing test scenarios against one or more systems, or parts of them, in order to produce reports containing the execution results. It was designed to interface with very diverse environments and maintains a separation between the part that communicates with the system and the scenario itself.

QTaste, initially developed for IBA, has been released as open source and integrates particularly well into continuous integration environments, using virtualization or various test labs. It also makes test coverage analyses easier.

During this presentation, Laurent Vanboquestal (QSpin) and Patrick Guermonprez (IBA) will explain how QTaste is used by the various teams at IBA to reduce costs and to ensure the correct operation of IBA's proton therapy systems.

Short bio:

Laurent Vanboquestal has been passionate about computing since early childhood. He has worked in very different environments and took part, in the early 2000s, in bringing the first interactive television using Java technology into production. Convinced of the effectiveness of automated testing, he joined QSpin in 2007 to develop a generic testing solution, which he later released as open source.

Patrick Guermonprez has been involved in software testing activities for more than 15 years. He is now Test Architect at IBA (Ion Beam Applications), where he defines the continuous-integration environment and tools. On a daily basis he keeps the test environment running and brings his expertise to developers setting up automated tests.

January 26, 2017

PGP, Pretty Good Privacy, is not just for sale on the Internet; it is both free of charge and open source. That means that anyone on this planet can download, compile and modify PGP's source code, and that anyone can put it on a phone.

Many people use such encryption software, for all kinds of reasons. Often without realizing it (https websites, your bank's services, online purchases, Whatsapp messages, and so on). Sometimes they do realize it (those tend to be the techies).

A few use it to encrypt their communication about criminal plans. But many others use it to secure their business data, privacy, personal data and so on. There are also said to be governments, such as ours for example, that use it extensively. Rumour has it all the military use it. I am convinced that your employer, the VRT, also uses it heavily and even mandates it. Any serious journalist nowadays also uses it for, let me hope, everything.

PGP is not illegal. Encrypting messages is not illegal. That the police therefore can no longer read them is not, in itself, illegal. There is absolutely nothing in Belgian law about the encryption of messages being illegal. There is absolutely nothing in Belgian law about software such as PGP being illegal.

Encrypting messages has existed for centuries. No, millennia. Perhaps even tens of thousands of years. Children do it and learn it. Maybe not very professionally. But some childishly simple encryption techniques are theoretically unbreakable. Such as the one-time pad.
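As an illustration (the code is mine, not from the broadcast), the one-time pad really is childishly simple: XOR the message with a pad of truly random bytes that is at least as long as the message and is never reused:

```python
import os

def otp_xor(data: bytes, pad: bytes) -> bytes:
    # Encryption and decryption are the same operation: XOR with the pad.
    assert len(pad) >= len(data), "the pad must be at least as long as the message"
    return bytes(b ^ k for b, k in zip(data, pad))

message = b"geheim bericht"
pad = os.urandom(len(message))        # truly random, used exactly once
ciphertext = otp_xor(message, pad)
assert otp_xor(ciphertext, pad) == message  # round-trips perfectly
```

The scheme is only unbreakable when the pad is truly random, kept secret, and used once; reuse it and the messages leak against each other.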

What's more, Belgian universities are known throughout the world for their mathematicians working on cryptography. The world's most important encryption standard, Rijndael, was developed by students in Leuven. That PGP software uses that encryption standard, too. It is now known as the Advanced Encryption Standard, AES, and is deployed almost everywhere, including by the military.

That some people in the justice system and the police find it inconvenient that some criminals use software like PGP in no way means that using PGP, or equipping phones with it, is in itself illegal. Not at all. So it is rather foolish to suggest in your news broadcast that that is the case. It is not. Especially in extreme times like these, it is dangerous for the media to suggest such a thing. Before you know it, ordinary citizens are frightened about all sorts of things again, and politicians start making stupid demands such as banning encryption.

Encrypting messages is a very important part of technology, needed to make that technology work safely and reliably. Losing the ability to encrypt messages and data would have truly catastrophic consequences.

Nor would it stop a single criminal from encrypting things anyway. Banning encryption would literally only harm innocent citizens. Criminals would actually benefit: it would suddenly become much easier for them to break into the systems of innocent citizens (who could no longer use encryption).

In these extreme times, think carefully about what you suggest on your media channel, the national news. As a techie, I find it particularly irresponsible of you. ALSO, and especially, if the intention was to dumb the news down as much as possible. As if our citizens could not handle being correctly informed about this matter.

With FOSDEM just around the corner, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers helps us make FOSDEM a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch. Food will be provided. If you have some spare time during the weekend and would like to be a part of the team that makes FOSDEM tick... We need…

I published the following diary on “IOC’s: Risks of False Positive Alerts Flood Ahead“.

Yesterday, I wrote a blog post which explained how to interconnect a Cuckoo sandbox and the MISP sharing platform. MISP has a nice REST API that allows you to extract useful IOC’s in different formats. One of them is the Suricata / Snort format. Example… [Read more]

[The post [SANS ISC Diary] IOC’s: Risks of False Positive Alerts Flood Ahead has been first published on /dev/random]

January 25, 2017

With the number of attacks that we are facing today, defenders are looking for more and more IOC’s (“Indicators of Compromise”) to feed their security solutions (firewalls, IDS, …). It has become impossible to manage all those IOC’s manually, and automation is the key. There are two main problems with this amount of data:

  1. How to share them in a proper way (remember: sharing is key).
  2. How to collect and prepare them to be shared.

Note that in this post I’m only considering the “technical” issues with IOC’s. There are many more issues, like their accuracy (which can differ from one environment to another).

To search for IOC’s, I’m using the following environment: A bunch of honeypots capture samples that, if interesting, are analyzed by a Cuckoo sandbox. To share the results with peers, a MISP instance is used.


In this case, a proper integration between Cuckoo and MISP is the key. It is implemented in both directions. The results of the Cuckoo analysis are enriched with IOC’s found in MISP. IOC’s found in the sample are correlated with MISP, and the event ID, description and level are displayed:

In the other direction, Cuckoo submits the results of its analyses to MISP:

Cuckoo 2.0 comes with ready-to-use modules to interact with the MISP REST API via the PyMISP Python module. There is one processing module (to search for existing IoC’s in MISP) and one reporting module (to create a new event in MISP). The configuration is very simple, just define your MISP URL and API key in the proper configuration files and you’re good to go:

# cd $CUCKOO_HOME/conf
# grep -A 2 -B 2 misp *.conf
processing.conf-enabled = yes
processing.conf-enabled = yes
processing.conf:url = https://misp.xxxxxxxxxx
processing.conf-apikey = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
maxioc = 100
reporting.conf-logname = syslog.log
reporting.conf-enabled = yes
reporting.conf:url = https://misp.xxxxxxxxxx
reporting.conf-apikey = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

But, in my environment, the default modules generated too many false positives in both directions. I patched them to support more configuration keywords, to better control what is exchanged between the two tools.

In the processing module:

only_ids = [yes|no]

If defined, only attributes with the “IDS” flag set to “1” will be displayed in Cuckoo.

ioc_blacklist =,,

This parameter allows you to define a (comma-delimited) list of IOC’s that you do NOT want in Cuckoo. Typically, you’ll add DNS servers and specific URLs here.

In the reporting module:

tag = Cuckoo

Specify the tag to be added to events created in MISP. Note that the tag must be created manually.

# Default distribution level:
# your_organization = 0
# this_community = 1
# connected_communities = 2
# all_communities = 3

The default distribution level assigned to the created event (default: 0)

# Default threat level:
# high = 1
# medium = 2
# low = 3
# undefined = 4

The default threat level assigned to the created event (default: 4)

# Default analysis level:
# initial = 0
# ongoing = 1
# completed = 2
analysis = 0

The default analysis status assigned to the created event (default: 0)

ioc_blacklist =,,,,

Again, a blacklist (comma separated) of IOC’s that you do NOT want to store in MISP.

Events are created in MISP with an “unpublished” status. This allows you to review them, manually add some IOC’s, merge different events, add tags or change default values.

The patched files for Cuckoo are available in my GitHub repository.

[The post Quick Integration of MISP and Cuckoo has been first published on /dev/random]

I published the following diary on “Malicious SVG Files in the Wild“.

In November 2016, the Facebook messenger application was used to deliver malicious SVG files to people [1]. SVG files (or “Scalable Vector Graphics”) are vector images that can be displayed in most modern browsers (natively or via a specific plugin). More precisely, Internet Explorer 9 supports the basic SVG feature sets and IE10 extended the support by adding SVG 1.1 support. In the Microsoft Windows operating system, SVG files are handled by Internet Explorer by default… [Read more]

[The post [SANS ISC Diary] Malicious SVG Files in the Wild has been first published on /dev/random]

January 24, 2017

Today Acquia was named a leader in the The Forrester Wave: Web Content Management Systems, Q1 2017. This report is especially exciting because for the first time ever Acquia was recognized as the leader for strategy and vision, ahead of every other vendor including Adobe and Sitecore.

I take great pride in Acquia's vision and strategy. As I wrote in my 2016 retrospective, I believe that the powerful combination of open source and cloud has finally crossed the chasm, and will continue to drive change in our industry.

Today, cloud is the starting point for any modern digital experience project, but this wasn't always the case. Acquia was one of the first organizations to explore Amazon Web Services in 2008 and to launch a commercial cloud product in 2009. At the time that was a bold move, but we have always seen the cloud as the best way to provide our customers with the necessary tools to build, deploy and manage Drupal.

Since its founding, Acquia has helped set the standard for cloud-based delivery of web content management. This year's Forrester report shows that many of our competitors are only now introducing a cloud delivery platform. Acquia's eight year commitment to the cloud is one of the reasons why we were deemed the leading vendor for strategy.

Second, organizations can no longer afford to ignore open source software. Acquia is the only open source leader in both the Forrester Wave and the Gartner Magic Quadrant for Web Content Management. There are a lot of inherent benefits to open source software, including speed of innovation, cost, quality, security and more. But the thing that really sets open source apart is freedom. This extends to many different users: for developers freedom means there are no limitations on integrations; for designers that means creative freedom, and for organizations that means freedom from vendor lock-in.

In addition to our commitment to open source and cloud, Forrester also highlighted Acquia's focus on APIs. This includes our many contributions to Drupal 8's API-first initiative, but it also includes our APIs for Acquia Cloud, Acquia Cloud Site Factory and Acquia Lift. APIs are important because they enable organizations to build more ambitious digital experiences. With Open APIs, organizations can use our software in the way they choose and integrate with different technology solutions.

This is why Acquia's solution -- using open source, cloud, and open APIs -- is so powerful: it prepares organizations for the current and evolving demands of digital experience delivery by allowing them to innovate faster than traditional solutions.

Thank you to everyone who contributed to this result! It's another great milestone for Acquia and Drupal.

January 23, 2017

The post Return of the Unauthenticated, Unfirewalled protocols appeared first on

People are starting to realise that having insecure, unauthenticated and unfirewalled services running on the internet is a bad idea. Who knew, right?

What started with a MongoDB ransom is quickly spreading to new services that are equally easy to "exploit".

Insecure Defaults

It's already been covered in great detail that service defaults are tremendously important. I'm the kind of sysadmin that likes to be explicit in server configs, not trusting default settings. Some argue that it's pointless and a waste of time, that system packages have good sane default values.

Well, here's a little packaging secret: the people doing the packaging aren't necessarily the ones creating the software. It's often volunteer work, providing good RPM or DEB packages for others to use. And while I'm certain packagers make a reasonable effort to choose secure default values, they might be missing important gotchas.

UX vs. Security

Case in point: if you install Memcached on a Red Hat or CentOS machine, it will by default listen to all interfaces.

Super convenient, as most users run their Memcached services on different servers. It's good UX. But as Memcached is an unauthenticated protocol, it also allows anyone to connect to your Memcached instance and read/write data to it, flush the cache, ... if it's not firewalled properly.
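Being explicit here is a one-line override. On Red Hat/CentOS the listen address can be set in /etc/sysconfig/memcached; a sketch (the surrounding values are the usual package defaults):

```
# /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1"   # bind to localhost only, not all interfaces
```

If other servers genuinely need access, bind to the internal interface instead and firewall the port to those hosts.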

Where should the trade-off be between UX and security? In order for a technology to become widely adopted, UX is arguably more important. However, as adoption grows, everyone quickly realises that security should have been the priority.

Securing data

MongoDB servers worldwide have been held ransom. Attackers have already moved on to other services like Redis & ElasticSearch.

I feel it's incredibly important to highlight these are data storage engines. They're databases. They hold your company's and clients' data. This is your business.

If you're going to secure any service, it had better be the ones holding all the data.

Next Up

As an attacker, what else can you hold ransom? Or phrased differently: what other services hold important customer data that is worth extracting?

I'll give you some ideas:

  • Cassandra
  • RabbitMQ, ActiveMQ, ZeroMQ, ...
  • Beanstalkd
  • Collectd
  • Logstash
  • Neo4j

Note that most of these services have support for authentication, it just needs to be configured explicitly.

Raising awareness

In a way, these ransom attacks are a good thing: they raise awareness to the acute problem that is an unauthenticated and unfirewalled service.

There's nothing wrong with a protocol that doesn't support authentication. It probably does so for good reason (simplicity, performance, ...).

There's also nothing wrong with unfirewalled services, some just need this (think webservers, mailservers, ...).

But there is a problem when those are combined and start to hold important company data.

It's almost as if an entire new generation of sysadmins has come online, started a few services here and there and called it a day. That's not what sysadmins do. That might be something developers do when they lack the time or resources to contact dedicated sysadmins.

Let's hope these recent attacks cause a shift in the way these services are managed and highlight the need for skilled sysadmins and managed services.


January 21, 2017

This is post 2 of 2 in the series Les pistes cyclables

I thought I had discovered the truth: bicycle paths were actually parking lots. But I realized that this wasn't true. Bicycle paths are quite simply roads like any other.

It is thus enough to paint arrows on the ground to proudly declare yourself a cycling-friendly municipality.

This system, of course, offers cyclists no protection whatsoever. Worse: according to the highway code, they are required to stay within this sometimes ill-defined lane, and cars are required to drive on it. This system is therefore worse for cyclists than no bicycle path at all! The colour changes nothing…

At most it brings to mind the blood spilled by cyclists who have been knocked down. That red surface, incidentally, has the peculiarity of being so slippery in wet weather that I avoid it at all costs. It would be far too dangerous!

A new idea then made its way into the heads of politicians eager to win over the cycling electorate without spending a cent. Cycle streets!

A cycle street is a street in which a car may not overtake a bicycle. Remember that, normally, a motorist may only overtake a cyclist while leaving at least one metre between the car and the bike. This initiative, however sympathetic, therefore does strictly nothing to encourage cycling.

Fortunately, cycle streets are rarely more than ten metres long. Let's not get carried away. These zones thus tend to confine cycling to a few rare places and reinforce motorists' conviction that, everywhere else, they are absolute kings.

Moral of the story: bicycle paths do not exist, they are simply roads! Or sometimes, sidewalks…

Cyclists are thus obliged to ride on a narrow sidewalk, endangering pedestrians. Because, remember, the use of bicycle paths (which do not exist) is mandatory!

But fortunately, bicycle paths do not exist. They are roads…

…or sidewalks!

The photos in this article and the next illustrate a cyclist's daily struggle through the municipalities of Grez-Doiceau (MR mayor), Chaumont-Gistoux (MR mayor, Ecolo in the majority), Mont-Saint-Guibert (Ecolo mayor) and Ottignies-Louvain-la-Neuve (Ecolo mayor).

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE license.

January 20, 2017

With only two weeks left until FOSDEM 2017, we have added some new interviews with our main track speakers, varying from a keynote talk about Kubernetes through cryptographic functions based on Keccak and the application delivery controller Tempesta FW to ethics in network measurements:

Alexander Krizhanovsky: Tempesta FW. Linux Application Delivery Controller
Brandon Philips: Kubernetes on the road to GIFEE
Deb Nicholson and Molly de Blanc: All Ages: How to Build a Movement
Gilles Van Assche: Portfolio of optimized cryptographic functions based on Keccak
Jason A. Donenfeld: WireGuard: Next Generation Secure Kernel Network Tunnel. Cutting edge crypto, shrewd kernel design, …
This is post 1 of 2 in the series Les pistes cyclables (cycle paths)

After criss-crossing Brabant wallon by bike, I had a revelation: cycle paths don't exist, they are quite simply car parks.

It's as simple as that, and it explains everything.

Should I blame this driver for putting my life at risk by forcing me to swerve abruptly back onto the carriageway with no visibility whatsoever?

Not if you consider that this lane is quite simply a car park. The public authorities themselves encourage this use, putting temporary signage in that space.

At least the space serves a purpose, and this use hardly reduces the parking available.

After all, if it works for temporary installations, why shouldn't it work for permanent ones?

The term "cycle path" is a nice joke, but let's be serious: who actually uses a bike to get around? While a road must be cleared immediately, our society regarding a traffic jam of a few hours as a disaster, cycle pa… sorry, parking lanes can stay blocked for days on end.

Of course a cyclist won't get through and will have to quite literally risk his life on a fast road to continue on his way. But he should have taken the car like everyone else.

More seriously, I am now convinced that cycle paths serve every purpose except letting bikes through. The proof? During certain events, drivers are explicitly asked not to park on the cycle paths.

Those signs would block a bike's passage, but we now know that cyclists don't really exist.

Unless cycle paths aren't car parks after all?

But then what would they be? I have a little idea… (to be continued)


The photos in this article and the next illustrate a cyclist's daily battle across the municipalities of Grez-Doiceau (MR mayor), Chaumont-Gistoux (MR mayor, with Ecolo in the majority), Mont-Saint-Guibert (Ecolo mayor) and Ottignies-Louvain-la-Neuve (Ecolo mayor).

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum: blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE licence.

January 19, 2017

The post Create a SOCKS proxy on a Linux server with SSH to bypass content filters appeared first on

Are you on a network with limited access? Is someone filtering your internet traffic, limiting your abilities? Well, if you have SSH access to any server, you can probably set up your own SOCKS5 proxy and tunnel all your traffic over SSH.

From that point on, what you do on your laptop/computer is sent encrypted to the SOCKS5 proxy (your SSH server) and that server sends the traffic to the outside.

It's an SSH tunnel on steroids through which you can easily pass HTTP and HTTPS traffic.

And it isn't even that hard. This guide is for Linux/macOS users who have direct access to a terminal, but the same logic applies to PuTTY on Windows too.

Set up SOCKS5 SSH tunnel

You set up a SOCKS5 tunnel in two essential steps. The first one is to build an SSH tunnel to a remote server.

Once that's set up, you can configure your browser to connect to the local TCP port that the SSH client has exposed, which will then transport the data through the remote SSH server.

It boils down to a few key actions:

  1. You open an SSH connection to a remote server. As you open that connection, your SSH client will also open a local TCP port, available only to your computer. In this example, I'll use local TCP port :1337.
  2. You configure your browser (Chrome/Firefox/...) to use that local proxy instead of directly going out on the internet.
  3. The remote SSH server accepts your SSH connection and will act as the outgoing proxy/VPN for that SOCKS5 connection.

To start such a connection, run the following command in your terminal.

$ ssh -D 1337 -q -C -N user@remote-host

What that command does is:

  1. -D 1337: open a SOCKS proxy on local port :1337. If that port is taken, try a different port number. If you want to open multiple SOCKS proxies to multiple endpoints, choose a different port for each one.
  2. -C: compress data in the tunnel, saves bandwidth
  3. -q: quiet mode, don't output anything locally
  4. -N: do not execute remote commands, useful for just forwarding ports
  5. user@remote-host: a placeholder for the remote SSH server you have access to

Once you run that, ssh will stay in the foreground until you CTRL+C it to cancel it. If you prefer to keep it running in the background, add -f to fork it to a background command:

$ ssh -D 1337 -q -C -N -f user@remote-host

Now you have an SSH tunnel between your computer and the remote host.
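The full invocation is easy to forget, so it can be wrapped in a tiny helper. A minimal sketch, assuming OpenSSH; the function name socks_cmd and the host user@remote-host are placeholders:

```shell
#!/bin/sh
# socks_cmd: print the ssh invocation for a background SOCKS5 tunnel.
# Both arguments are placeholders: pick your own port and server.
socks_cmd() {
    # -D: dynamic (SOCKS) forwarding on the given local port
    # -q/-C/-N/-f: quiet, compressed, no remote command, run in background
    printf 'ssh -D %s -q -C -N -f %s\n' "$1" "$2"
}

socks_cmd 1337 user@remote-host
# prints: ssh -D 1337 -q -C -N -f user@remote-host
```

Pipe the output to sh (or paste it into a terminal) to actually open the tunnel.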

Use SOCKS proxy in Chrome/Firefox

Next up: tell your browser to use that proxy. This is something that should be done per application as it isn't a system-wide proxy.

In Chrome, go to the chrome://settings/ screen and click through to Advanced Settings. Find the Proxy Settings.

In Firefox, go to Preferences > Advanced > Network and find the Connection settings. Set the SOCKS Host to localhost with port 1337.

From now on, your browser will connect to localhost:1337, which is picked up by the SSH tunnel to the remote server, which then connects to your HTTP or HTTPs sites.
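Chrome can also be pointed at the proxy from the command line instead of through the settings UI. --proxy-server is a real Chrome switch, but the binary name (google-chrome below) varies per platform, so treat this as a sketch:

```shell
#!/bin/sh
# Launch Chrome with all its traffic routed through the local SOCKS proxy.
BROWSER="google-chrome"                         # chromium, chrome.exe, ... depending on platform
FLAG='--proxy-server=socks5://localhost:1337'   # matches the tunnel port from earlier
if command -v "$BROWSER" >/dev/null 2>&1; then
    "$BROWSER" "$FLAG" &
else
    echo "Chrome not found; would run: $BROWSER $FLAG"
fi
```

Note that this only affects the Chrome instance launched with the flag, which makes it handy for testing without touching your normal browsing profile.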

Encrypted Traffic

This has some advantages and some caveats. For instance, most of your traffic is now encrypted.

What you send between the browser and the local SOCKS proxy is encrypted if you visit an HTTPS site and plain text if you visit an HTTP site.

What your SSH client sends between your computer and the remote server is always encrypted.

What your remote server does to connect to the requested website may be encrypted (if it's an HTTPS site) or may be plain text, in case of plain HTTP.

Some parts of your SOCKS proxy are encrypted, some others are not.

Bypassing firewall limitations

If you're somewhere with limited access, you might not be allowed to open an outbound SSH connection to a remote server on the standard port. But you only need to get one SSH connection going, any way you can, and you're good to go.

So as an alternative, run your SSH daemon on additional ports, like :80, :443 or :53: web and DNS traffic is usually allowed out of networks. Your best bet is :443, since traffic on that port is expected to be encrypted anyway, so there's less chance of deep packet inspection middleware blocking your connection for not following the expected protocol.

The chances of :53 working are rather slim, as most DNS is UDP based and TCP is only used for zone transfers or other rare DNS occasions.
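Making the SSH daemon listen on an extra port is a two-line change in its configuration. A sketch of /etc/ssh/sshd_config, assuming nothing else (such as a web server) already occupies :443 on that machine:

```
# /etc/ssh/sshd_config -- listen on the standard port and on :443
Port 22
Port 443
```

Restart the SSH daemon afterwards (the service is called ssh or sshd depending on the distribution), and keep your current session open while you verify you can still log in on both ports.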

Testing SOCKS5 proxy

Visit any "what is my IP" website and refresh the page before and after your SOCKS proxy configuration.

If all went well, your IP should change to that of your remote SSH server, as that's now the outgoing IP for your web browsing.

If your SSH tunnel is down, has crashed or was never started, your browser will kindly tell you that the SOCKS proxy is not responding.

If that's the case, restart the ssh command, try a different port or check your local firewall settings.
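That check can be scripted too. This sketch (hypothetical helper proxy_up, assuming nc is available) only probes whether anything is listening on the local port, which is enough to tell a dead tunnel from a misconfigured browser:

```shell
#!/bin/sh
# proxy_up: report whether anything listens on the local SOCKS port.
# "down" means the ssh tunnel crashed or was never started.
proxy_up() {
    port="${1:-1337}"
    if nc -z localhost "$port" 2>/dev/null; then
        echo "SOCKS proxy on port $port is up"
    else
        echo "SOCKS proxy on port $port is down"
    fi
}

proxy_up 1337
```

For an end-to-end check, curl can talk to the proxy directly: curl --socks5-hostname localhost:1337 https://ifconfig.me should print the remote SSH server's IP when the tunnel works.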
