Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

July 26, 2016

The post Why do we automate? appeared first on

Well, that's a stupid question: to save time, obviously!

I know, it sounds obvious. But there's more to it.

For the last few years, I've been co-responsible for determining our priorities at Nucleus: deciding what to automate and where to focus our development and sysadmin efforts. Which tasks do we automate first?

That turns out to be a rather complicated question, with many ifs and buts.

For a long time, I only looked at the time-saved metric to determine what we should do next. I should've been looking at many more criteria.

To save time

This is the most common reason to automate and it's usually the only factor that helps decide whether there should be an effort to automate a certain task.

Example: time consuming capacity planning

Task: every week someone has to gather statistics about the running infrastructure to calculate free capacity in order to purchase new capacity in time. This task takes an hour, every week.

Efforts to automate: it takes a developer two days of work to gather the info via APIs and create a weekly report to management.

Gain: the development efforts pay themselves back in about 16 weeks. Whether this is worth it or not depends on your organisation.
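As a sanity check, the break-even point in this example can be computed directly. A minimal sketch; the only inputs are the numbers from the example above:

```shell
# Break-even point for an automation effort: one-off development cost
# versus the recurring manual cost it replaces.
dev_hours=16            # two days of development (2 x 8 hours)
saved_hours_per_week=1  # the weekly task takes one hour

payback_weeks=$((dev_hours / saved_hours_per_week))
echo "$payback_weeks"   # 16
```

Anything that pushes the break-even point past the expected lifetime of the task is probably not worth automating on time savings alone.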


Source: XKCD: Automation

It's an image usually referenced when talking about automation, but it holds a lot of truth.

The "time gained" metric is multiplied by the number of people affected. If you can save 10 people 5 minutes every day, you've gained over four hours (roughly half a workday) every week.
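Spelled out, the arithmetic behind that multiplier looks like this (a quick sketch; the figures are the ones from the sentence above):

```shell
# Time gained scales with the number of people affected.
minutes_per_person_per_day=5
people=10
workdays_per_week=5

weekly_minutes=$((minutes_per_person_per_day * people * workdays_per_week))
weekly_hours=$(awk "BEGIN { printf \"%.1f\", $weekly_minutes / 60 }")

echo "$weekly_minutes"  # 250
echo "$weekly_hours"    # 4.2
```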

To gain consistency

Sometimes a task is very complicated but doesn't need to happen very often. There are checklists and procedures to follow, but it's always a human (manual) action.

Example: complicated migrations

Task: an engineer sometimes has to move e-mail accounts from one server to another. This doesn't happen very often but consists of a large number of steps where human error is easily introduced.

Efforts to automate: it may take a sysadmin a couple of hours to create a series of scripts to help automate this task.

Gain: the value in automating this is in the quality of the work. It guarantees a consistent method of migrations that everyone can follow and creates a common baseline for clients. They know what to expect and the quality of the results is the same every time.

To gain speed, momentum and velocity

There are times when things just take a really long time in between tasks. It's very easy to lose focus or forget about follow-up tasks because you get distracted in the meantime.

Example: faster server setups and deliveries

Task: An engineer needs to install a new Windows server. Traditionally, this takes many rounds of Windows Updates and reboots. Most software installations require even more reboots.

Efforts to automate: a combination of PXE servers or golden templates and a series of scripts or config management to get the software stack to a reasonable state. A sysadmin (or a team of them) can spend several days automating this.

Gain: the immediate gain is in peace of mind and speed of operations. It reduces the time of go-live from several hours to mere minutes. It allows an operation to move much faster and consider new installations trivial.

This same logic applies to automating deployments of code or applications. By taking away the burden of performing deploys, it becomes much cheaper and easier to deploy very simple changes instead of prolonging deploys and going for big waterfall-like go-lives with lots of changes at once.

To reduce boring or less fun tasks

If there's a recurring task that no one likes to do but is crucial to the organisation, it's probably worth automating.

Example: combining and merging related support tickets

Task: In a support department, someone is tasked to categorise incoming support tickets, merge the same tickets or link related tickets and distribute tasks.

Efforts to automate: A developer may spend several days writing the logic and algorithms to find and merge tickets automatically, based on pre-defined criteria.

Gain: A task that may be put on hold for too long because no one likes to do it, suddenly happens automatically. While it may not have been time consuming, the fact that it was put on hold too often impacts the organisation.

The actual improvement is to reduce the mental burden of having to perform those tasks in the first place. If your job consists of a thousand little tasks every day, it becomes easy to lose track of priorities.

To keep sysadmins and developers happy

Sometimes you automate things, not necessarily for any of the reasons above, but because your colleagues have signalled that it would be fun to automate it.

The tricky part here is assessing the value for the business. In the end, there should be value for the company.

Example: creating a dashboard with status reports

Task: Create a set of dashboards to be shown on monitors and TVs in the office.

Efforts to automate: Some hardware hacking with Raspberry Pi's, scripts to gather and display data and visualise the metrics and graphs.

Gain: More visibility in open alerts and overall status of the technology in the company.

Everyone who has dashboards knows the value they bring, but assessing whether they're worth the time and energy put into creating them is a very hard thing to do. How much time can you afford to spend creating them?

Improvements like these often come from colleagues. Listen to them and give them the time and resources to help implement them.

When to automate?

Given all these reasons on why to automate, this leaves the most difficult question of all: when to automate?

How and when do you decide whether something is worth automating? The time spent vs. time gained metric is easy to calculate, but how do you define the happiness of colleagues? How much is speed worth in your organisation?

Those are the questions that keep me up.


July 25, 2016

The post A new website layout, focussed on speed and simplicity appeared first on

Out with the old, in with the new!

After a couple of years I felt it was time for a refresh of the design of this blog. It's already been through a lot of iterations, as it usually goes with WordPress websites. It's so easy to download and install a theme you can practically switch every day.

But the downside of WordPress themes is also obvious: you're running the same website as thousands of others.

Not this time, though.

PS: if you're reading this in your RSS feed or mail client, consider clicking through to the website to see the actual results.

Custom theme & design

This time, I decided to do it myself. Well, sort of. The layout is based on the Bootstrap CSS framework by Twitter. The design is inspired by Troy Hunt's site. Everything else I hand-crafted, with varying degrees of success.

In the end, it's a WordPress theme that started out like this.



Pretty empty.

Focus on the content

The previous design was chosen with a single goal in mind: maximise advertisement revenue. There were distinct locations for Google AdSense banners in the sidebar and on the top.

This time, I'm doing things differently: fuck ads.

I'm throwing away around €1,000 per year in advertisement revenue, but what I'm gaining is more valuable to me: peace of mind. Knowing there are ads influences your writing and topic selection. You're all about the pageviews. More views = more money. You choose link-bait titles. You write as quickly as you can just to get the exclusive on a story, not always for the better.

So from now on, it's a much simpler layout: content comes first. No more ads. No more bloat.

Speed improvements

The previous site was -- rather embarrassingly -- loading over 100 resources on every pageview. From CSS to JavaScript to images to remote trackers to ... well, everything.


The old design: 110 requests with a total of 1.7MB of content. The page took more than 2 seconds to fully render.

With this new design, I focussed on getting as few requests as possible. And I think it worked.


Most pages load with 14 HTTP requests for a total of ~300KB. It also renders a lot faster.
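Put differently, using the before/after numbers quoted above, the reduction works out to:

```shell
# Relative improvement between the old and the new design.
old_requests=110; new_requests=14
old_kb=1700;      new_kb=300    # 1.7 MB versus ~300 KB

request_drop=$(awk "BEGIN { printf \"%.0f\", 100 * (1 - $new_requests / $old_requests) }")
size_drop=$(awk "BEGIN { printf \"%.0f\", 100 * (1 - $new_kb / $old_kb) }")

echo "${request_drop}% fewer requests"  # 87% fewer requests
echo "${size_drop}% less data"          # 82% less data
```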

There are still some requests being made that I'd like to get rid of, but they're well hidden inside plugins I use -- even though I don't need their CSS files.

A lot of the improvements came from not including the default Facebook & Twitter widgets but working with the Font Awesome icon set to render the same buttons & links, without 3rd party tools.

Social links

I used to embed the Twitter follow & Facebook share buttons on the site. It had a classic "like this page" box in the right column. But those are loaded from a Twitter/Facebook domain and do all sorts of JavaScript and AJAX calls in the background, all slowing down the site.

Not to mention the tracking: just by including those pieces of JavaScript I made every visitor involuntarily hand over their browsing habits to those players, all for their advertisement gains. No more.

To promote my social media, you can now find all necessary links in the top right corner -- in pure CSS.


Want to share a page on social media? Those links are embedded in the bottom, also in CSS.


While the main motivator was speed and reducing the number of HTTP requests, not exposing my visitors to tracking they didn't ask for feels like a good move.

Why no static site generator?

If I'm going for speed, why didn't I pick a static site generator like Jekyll, Hugo, or Octopress?

My biggest concern was comments.

With a statically generated site, I would have to embed some kind of 3rd party comment system like Disqus. I'm not a big fan for a couple of reasons:

  • Another 3rd party JavaScript/AJAX call that can be used for tracking
  • Comments are no longer a "part of" the website, in terms of SEO
  • I want to "own" the data: if all comments are moved to Disqus and they suddenly disappear, I've lost a valuable part of this website

So, no static generator for me.

I do however combine WordPress with a static HTML plugin (similar to Wordfence). For most visitors, this should feel like a static HTML page with fast response times. It also helps me against large traffic spikes so my server doesn't collapse.


I'm a bit of a font-geek. I was a fan of webfonts for all the freedom they offered, but I'm taking a step back now to focus on speed. You see, webfonts are rather slow.

An average webfont that isn't in the browser cache takes about 150-300ms to load. All that for some typography? Doesn't seem worth it.

Now I'm following GitHub's font choice.

font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol";

In short: it uses the OS default wherever possible. This site will look slightly different on Mac OS X vs. Windows.

For Windows, it looks like this.


And for Mac OSX:


Despite being a Linux and open source focussed blog, hardly any of my visitors use a Linux operating system -- so I've decided to ignore those special fonts for now.

Having said that, I do still use the Font Awesome webfonts for all the icons and glyphs you see. In terms of speed, I found it to be faster & more responsive to load a single webfont than to load multiple images and icons. And since I'm no frontend-guru, sprites aren't my thing.

Large, per-post cover images

This post has a cover image at the very top that's unique to this post. I now have the ability to give each post (or page) a unique feeling and design, just by modifying the cover image.

For most of my post categories I have sane defaults in case older posts don't have a custom header image. I like this approach, as it gives a sense of familiarity to each post. For instance, have a look at the design of these pages;

I like how the design can be changed for each post.

At the same time, I'm sacrificing a bit of my identity. My previous layouts all had the same theme for every page, creating (hopefully) a sense of familiarity and known ground. I'll have to see how this goes.

There's a homepage

This blog has always been a blog, pure and simple. Nothing more.

But as of today, there is an actual homepage! One that doesn't just list the latest blogposts.

I figured it was time for some kind of persona building and having a homepage to showcase relevant projects or activities might persuade more visitors to keep up with my online activities (aka: Twitter followers).

Feedback appreciated!

I'm happy with the current layout, but I want to hear what you think: is it better or worse?

There are a couple of things I'm considering but haven't quite decided on yet:

  • related posts: should they be shown below every post? They clutter the UI and I don't think anyone ever bothers clicking through.
  • cronweekly/syscast "advertisements": the previous layout had a big -- but ugly -- call-to-action for every visitor to sign up for cron.weekly or check out the SysCast podcast. Those are missing now, I'm not yet sure if -- and how -- they should return.

If there are pages that need some additional markup, I'm all ears. Ping me on Twitter with a link!


July 24, 2016

It’s that time of the year again where I humbly ask Autoptimize’s users to download and test the “beta”-version of the upcoming release. I’m not entirely sure whether this should be 2.0.3 (a minor release) or 2.1.0 (a major one), but I’ll let you guys & girls decide, OK?

Anyway, the following changes are in said new release;

  • Autoptimize now adds a small menu to the admin-toolbar (can be disabled with a filter) that shows the cache size and provides the possibility to purge the cache. A big thanks to Pablo Custo for his hard work on this nice feature!
  • If the cache size becomes too big, a mail will be sent to the site admin (pass `false` to `autoptimize_filter_cachecheck_sendmail` filter to disable or pass alternative email to the `autoptimize_filter_cachecheck_mailto` filter)
  • An extra tab is shown (can be hidden with a filter) with information about my upcoming premium power-ups and other optimization tools and services.
  • Misc. bugfixes & small improvements (see the commit-log on GitHub)

So, if you’re curious about Pablo’s beautiful menu or if you just want to help Autoptimize out, download the beta and provide me with your feedback. If all goes well, we’ll be able to push it (2.1.0?) out in the first half of August!

July 23, 2016

  • visited FOSDEM, gave a lightning talk about Buildtime Trend, and met Rouslan, Eduard, Ecaterina and many others
  • attended a Massive Attack concert in Paleis 12.
  • visited Mount Expo, the outdoor fair organised by KBF
  • saw some amazing outdoor films on the BANFF film festival
  • spent a weekend cleaning routes with the Belgian Rebolting Team in Comblain-La-Tour. On Sunday we did some climbing in Awirs, where I finished a 6b after trying a few times.
  • First time donating blood plasma
  • Climbing trip to Gorges du Tarn with Vertical Thinking : climbing 6 days out of 7 (one day of rain), doing multipitch Le Jardin Enchanté, sending a lot of 5, 6a and 6a+ routes, focusing on reading the route, looking for footholds and taking small steps.
  • Some more route cleaning with BRT, this time in Flône, removing loose rocks and preparing to open new routes.
  • went to DebConf16 in Cape Town, talking about 365 days of Open Source, and made a first contribution to Debian.
  • Visited South Africa and climbed in Rocklands/Cederberg

July 22, 2016

The post vsftpd on linux: 500 OOPS: vsftpd: refusing to run with writable root inside chroot() appeared first on

The following error can occur when you have just installed vsftpd on a Linux server and try to FTP to it.

Command:	USER xxx
Response: 	331 Please specify the password.
Command:	PASS ******************
Response: 	500 OOPS: vsftpd: refusing to run with writable root inside chroot()
Error:        	Critical error: Could not connect to server

This is caused by the fact that the home directory of the user you're connecting with is write-enabled. In normal chroot() situations, the chroot root directory needs to be read-only.

This means that in most useradd setups, where the home directory is created owned by and writeable for the user, the above error of "vsftpd: refusing to run with writable root inside chroot()" will be shown.

To fix this, set the allow_writeable_chroot option in the configuration.

$ cat /etc/vsftpd/vsftpd.conf
allow_writeable_chroot=YES

If that parameter is missing, just add it to the bottom of the config. Next, restart vsftpd.

$ service vsftpd restart

After that, FTP should run smoothly again.
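The fix can also be scripted. A minimal sketch, assuming a vsftpd version that supports the allow_writeable_chroot option; the demo writes to a local file, so on a real server you would point CONF at /etc/vsftpd/vsftpd.conf and restart the service afterwards:

```shell
# Append allow_writeable_chroot=YES to the config if it is not set yet.
# CONF defaults to a local demo file; override it on a real server.
CONF="${CONF:-./vsftpd.conf.demo}"
touch "$CONF"

if ! grep -q '^allow_writeable_chroot=' "$CONF"; then
    echo 'allow_writeable_chroot=YES' >> "$CONF"
fi

grep '^allow_writeable_chroot=' "$CONF"
# On a real server, follow up with: service vsftpd restart
```

The grep guard makes the script idempotent: running it twice adds the line only once.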

Alternatively: please consider using SFTP (file transfer over SSH) or FTPS (FTP over TLS) with a modified, non-writeable chroot.


July 21, 2016

The before and after of

Yesterday the City of Boston launched its new website on Drupal. Not only is Boston a city well known around the world, it has also become my home over the past 9 years. That makes it extra exciting to see the city of Boston use Drupal.

As a company headquartered in Boston, I'm also extremely proud to have Acquia involved. The site is hosted on Acquia Cloud, and Acquia led a lot of the architecture, development and coordination. I remember pitching the project in the basement of Boston's City Hall, so seeing the site launched less than a year later is quite exciting.

The project was a big undertaking as the old website was 10 years old and running on Tridion. The city's digital team, Acquia, IDEO, Genuine Interactive, and others all worked together to reimagine how a government can serve its citizens better digitally. It was an ambitious project as the whole website was redesigned from scratch in 11 months; from creating a new identity, to interviewing citizens, to building, testing and launching the new site.

Along the way, the project relied heavily on feedback from a wide variety of residents. The openness and transparency of the whole process was refreshing. Even today, the city keeps its roadmap public and is actively encouraging citizens to submit suggestions. This open process is one of the many reasons why I think Drupal is such a good fit.

Boston gov tell us what you think

More than 20,000 web pages and one million words were rewritten in a more human tone to make the site easier to understand and navigate. For example, rather than organize information primarily by department (as is often the case with government websites), the new site is designed around how residents think about an issue, such as moving, starting a business or owning a car. Content is authored, maintained, and updated by more than 20 content authors across 120 city departments and initiatives.

Boston gov tools and apps

The new site is absolutely beautiful, welcoming and usable. And, like any great technology endeavor, it will never stop improving. The City of Boston has only just begun its journey; I'm excited to see how it grows and evolves in the years to come. Go Boston!

Boston gov launch event
Last night there was a launch party to celebrate the launch. It was an honor to give some remarks about this project alongside Boston Mayor Marty Walsh (pictured above), as well as Lauren Lockwood (Chief Digital Officer of the City of Boston) and Jascha Franklin-Hodge (Chief Information Officer of the City of Boston).
This is post 39 of 39 in the Printeurs series.

Nellio, Eva, Max and Junior flee the sex-doll factory aboard a free automated taxi.

The taxi whisks us along at full speed.
— Junior, are you sure we won't be traced?
— Not if we use the free mode. The data is aggregated and anonymised. A leftover from an old law. And since the computer system works, nobody dares update it or poke around too much in the databases. On the other hand, if we buy anything at all in the tunnel, we'd be spotted immediately!

While he answers, he gazes in wonder at the metal fingers Max grafted onto him.

— Wow, to think I waited all this time to get an ear implant! It's amazing!
— It was necessary in order to install the finger-control software, Max adds. But the ear implant ships with a slight euphoria to dull the pain.
— By the way, Max, where are we going?
— I contacted FatNerdz on the network. He gave me the coordinates of the conglomerate's headquarters in the industrial zone.
— Can we really trust this FatNerdz, whom nobody has ever seen or knows?

Max seems to hesitate for a moment.

— To be honest, what worse can happen to us than being gunned down by explosive drones? And that's what will happen to us if we do nothing. There's clearly a fight going on to capture you, Nellio. We might as well clear all of this up once and for all…

I turn to Eva.

— Eva? Talk to me! Help us!

She fixes me with a cold, cruel stare.

— I think I know who FatNerdz is. I have no proof, but I have the deep conviction that I know him well. Too well, even…

I don't have time to express my surprise before the car suddenly slows down. All the windows lower and our seats automatically swivel to face outward. Junior barks an order at us in an incredibly authoritative tone.

— Above all, touch nothing, buy nothing! Keep your hands wedged under your buttocks.

Before our eyes, vending machines begin to file past, presenting all sorts of products: sugary bars, coloured drinks, alcohol, clothes, accessories…

— Junior, I say, a little ashamed to admit my ignorance, I've never taken the free tunnels. I've always been able to pay for individual rides…
— Lucky you! The free tunnels are free in name only. Used regularly, they end up costing the user far more than paying for individual rides outright. That's what makes the poor even poorer: they sell the only thing they have left, their personality and their free will, for an illusion of something free.

Holograms start dancing before my eyes; naked women and men wriggle, drink enticing beverages and languorously hold out spoonfuls of yoghurt or pieces of reconstituted fruit. I feel a mixture of appetite, sexual desire and cravings rising within me… Instinctively, I reach out towards a deliciously refreshing bottle of juice…

— No! Junior screams, slapping my arm hard. If you touch a single item, it will be charged to you via a retinal scan. Since financial transactions are closely monitored under the anti-terrorism laws, we'd be pulverised within the second! Hold on!

The car seems to be going slower and slower. This tunnel is endless.

— As long as we don't buy anything, the car slows down, Junior whispers to me. But there's a maximum duration. Hold on!

I close my eyes to ease my urges, but the synthetic pheromones tease my senses. My nerves are raw; I feel assaulted, flayed, violated. Desire rises within me; I want to scream; I bite my hands until they bleed. I…

Light!

— We're out!

The car picks up speed again. I breathe painfully. Big drops of sweat bead on my forehead. With his cybernetic hand, Junior strokes my shoulder.

— It must indeed be brutal if it's your first time. The problem is that when you're exposed to it as a child, you develop a kind of tolerance. Buying reflexes are the ones ingrained in early childhood. So advertisers find themselves in ever more violent competition to override those habits.

I turn to Eva, who seems to have remained impassive.

— Eva, you told me that you hadn't been exposed to advertising either. Even less than me! You told me your parents made enormous sacrifices for that.

She hesitates. Chews her lips. An awkward silence settles in, which Max breaks.
— Eva, maybe it's time to tell him the truth.
— I don't know if he's ready to hear it…

I scream!

— Damn it, I'm being manipulated, hunted and tracked; I have every right to know what's happening to me! Shit, Eva, I sincerely believed I could count on you.
— You have always been able to count on me, Nellio. Always! I lied to you about only one thing: my origin.
— Then tell me everything!
— I thought what you saw at the Toy & Sex factory was enough.
— Well, no! It made everything even more confusing for me! Why are those new-generation inflatable dolls made in your image?

Max emits a sound that, if he had a biological larynx, would probably resemble a cough.

— Nellio, Eva continues softly. Those dolls are not made in my image.
— But…
— I'm the one who is…

A tremendous explosion suddenly rings out. The car is blown over and thrown violently onto its side. Bursts of gunfire can be heard.

— They've spotted us, I yell!
— No, Junior replies. If that were the case, we'd already be dead. It's most likely an attack.

The four of us are tangled together, head over heels. Max tries to extricate himself from the vehicle. His feet and knees crush my ribs, but the pain remains bearable.

— Oh shit, an attack, I sigh, raising my hand to my bloodied forehead. Those damned Islamic sultanate militants again!
— Or police officers on assignment, Junior adds with a sardonic smile.
— Huh?
— Yes, when there aren't enough attacks, small ones get staged to justify the budgets. Sometimes they're local initiatives. Sometimes the orders come straight from the top, to push through laws or measures. Either way, it keeps people consuming news and keeps the télépass busy.

Max's voice reaches us from outside.

— Say, are you going to get a move on? They're gunning down everyone on the other side of the street. But they may well come and pick off the survivors of the explosion.
— After you, I say to Junior with a blasé air, happy to finally live through an explosion in which I'm not the primary target.


Photo by Oriolus.

Thank you for taking the time to read this freely paid post. Feel free to support me with a few milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE licence.

July 20, 2016

A bit of history One thing that never ceases to amaze me is how Activiti is being used in some very large organisations at some very impressive scales. In the past, this has led to various optimizations and refactorings, amongst which was the async executor – replacement for the old job executor. For the uninitiated: these executors handle […]

July 19, 2016

FOSDEM 2017 will take place at ULB Campus Solbosch on Saturday 4 and Sunday 5 February 2017. Further details and calls for participation will be announced in the coming weeks and months. Have a nice summer!
We now invite proposals for main track presentations, developer rooms, stands and lightning talks. FOSDEM offers open source and free software developers a place to meet, share ideas and collaborate. Renowned for being highly developer-oriented, the event brings together some 5000+ geeks from all over the world. The seventeenth edition will take place on Saturday 4th and Sunday 5th February 2017 at the usual location: ULB Campus Solbosch in Brussels. Main Tracks Previous editions have featured main tracks centered around security, operating system development, community building, and many other topics. Presentations are expected to be 50 minutes long and…


July 14, 2016

Notes from Implementing Domain Driven Design, chapter 2: Domains, Subdomains and Bounded Contexts (p58 and later only)

  • User interface and service-oriented endpoints are within the context boundary
  • Domain concepts in the UI form the Smart UI Anti-Pattern
  • A database schema is part of the context if it was created for it and not influenced from the outside
  • Contexts should not be used to divide developer responsibilities; modules are a more suitable tactical approach
  • A bounded context has one team that is responsible for it (while teams can be responsible for multiple bounded contexts)
  • Access and identity is its own context and should not be visible at all in the domain of another context. The application services / use cases in the other context are responsible for interacting with the access and identity generic subdomain
  • Context Maps are supposedly real cool

July 13, 2016

The post Highly Critical Remote Code Execution patch for Drupal (PSA-2016-001) appeared first on

Update: patch released, see updates below.

For everyone running Drupal, beware: today a highly critical patch is going to be released.

There will be multiple releases of Drupal contributed modules on Wednesday July 13th 2016 16:00 UTC that will fix highly critical remote code execution vulnerabilities (risk scores up to 22/25). These contributed modules are used on between 1,000 and 10,000 sites. The Drupal Security Team urges you to reserve time for module updates at that time because exploits are expected to be developed within hours/days. Release announcements will appear at the standard announcement locations. PSA-2016-001

Important to know is that the Drupal core isn't affected.

Drupal core is not affected. Not all sites will be affected. You should review the published advisories on July 13th 2016 to see if any modules you use are affected. PSA-2016-001

The vulnerability is an "arbitrary PHP code execution" one, meaning anyone could use it to execute PHP code of their own on the server. In most environments, PHP isn't limited in what it can and cannot do, so allowing arbitrary PHP execution is just as dangerous as a Bash remote code execution exploit. Make sure to keep an eye on the patch!

Update 13/07/2016

3 modules have been updated:

Get patching!

Here's the diff for the Coder module:

$ diff -r coder_upgrade/scripts/ \
< if (!script_is_cli()) {
<   // Without proper web server configuration, this script can be invoked from a
<   // browser and is vulnerable to misuse.
<   return;
< }
< /**
<  * Returns boolean indicating whether script is being run from the command line.
<  *
<  * @see drupal_is_cli()
<  */
< function script_is_cli() {
<   return (!isset($_SERVER['SERVER_SOFTWARE']) && (php_sapi_name() == 'cli' || (is_numeric($_SERVER['argc']) && $_SERVER['argc'] > 0)));
< }

Here's the diff for the RESTWS module:

$ diff -r restws.module restws.module
<         'page arguments' => array($resource, 'drupal_not_found'),
>         'page arguments' => array($resource),
<         'page arguments' => array($resource, 'drupal_not_found'),
>         'page arguments' => array($resource),
<           'page arguments' => array($resource, 'drupal_not_found'),
>           'page arguments' => array($resource),
<  *
<  * @param string $resource
<  *   The name of the resource.
<  * @param string $page_callback
<  *   The page callback to pass through when the request is not handled by this
<  *   module. If no other pre-existing callback is used, 'drupal_not_found'
<  *   should be passed explicitly.
<  * @param mixed $arg1,...
<  *   Further arguments that are passed through to the given page callback.
< function restws_page_callback($resource, $page_callback) {
> function restws_page_callback($resource, $page_callback = NULL) {
<   // Fall back to the passed $page_callback and pass through more arguments.
<   $args = func_get_args();
<   return call_user_func_array($page_callback, array_slice($args, 2));
>   if (isset($page_callback)) {
>     // Further page callback arguments have been appended to our arguments.
>     $args = func_get_args();
>     return call_user_func_array($page_callback, array_slice($args, 2));
>   }
>   restws_terminate_request('404 Not Found');

The post Highly Critical Remote Code Execution patch for Drupal (PSA-2016-001) appeared first on

July 11, 2016

Earlier today I updated my performance-centric TwentyTwelve child theme to fix a problem with the mobile navigation (TwentyTwelve changed the menu button from an h3 to a button element, which required updating the navigation JS that 2012.FFWD inlines). You can download the update here.

This update “officially” marks the end-of-life of this child theme. Although a lot of optimizations can be done at the theme level, I prefer focusing on tools like my own Autoptimize, which not only optimizes code spit out by the theme but also any CSS/JS introduced by plugins or widgets.

The post How To Get Pokémon Go on iPhone Outside US appeared first on

In case you missed it, the world is going crazy over Pokémon Go. But, it's -- apparently -- only for the US, UK or Australia. That shouldn't stop you from getting the game, though!

These steps get you the game on an iPhone outside of the supported regions.

Important update: the Pokémon Go app will get full access to your Google Account. It can read all your email, see all your browsing history and see all your contacts. If you do not want this, do not install the app.

It gets full access to your Google account.

Here's what full access means:

But the iOS app will look harmless.

If you can live with this, follow the steps below to get the game on your phone.

If at any point you change your mind, you can revoke the app's permissions to your Google Account in Google's Security settings.

Get a new e-mail address

One of the steps is to create a new Apple ID, and that requires a unique e-mail address. If you're nerdy enough to have your own domain name, create a new alias that points to your main address.

If you don't have that, use a throwaway e-mail service that gives you one-time-use e-mail addresses.

Sign out of the App Store

Open the App Store, scroll to the bottom of the Featured tab and sign out of your account by clicking on the "Apple ID: email address" button.


Click your e-mail address, then choose "sign out".


Go to the Australian App Store

Follow this link:

The easiest way is to open this blog post on your iPhone and tap the link; it'll prompt you to open the App Store.


Alternatively, if you have a Mac with handoff enabled, open the above link in Safari and continue the session on your iPhone.

Once the App Store opens, you'll get a message that the game isn't available in your own store and that you should switch to the Australian one. Click Change Store to proceed.


Now, on to the fun stuff!

Search for Pokémon Go

Once you're in the correct app store, search for Pokémon Go.


Install the app. It'll prompt you to create a new account.


Now, on to create your new account.

Create a new App Store ID for Pokémon Go

Create a new Apple ID with the e-mail address you chose in step 1. Choose Australia as the region.


As with everything, accept the terms and conditions.


When you are prompted for your billing information, simply choose None.


The billing address needs to be valid, so I suggest you go with this address (I randomly chose it and it seems to be valid).

  • Title: Mr.
  • First name: John
  • Last name: Doe
  • Address: 301 Dogville Avenue
  • Postcode: 7000
  • City: Hobart
  • State: TAS
  • Phone: 123-456789

At the next step, Apple will send a validation e-mail to your e-mail address.


Go to it and confirm the address.

Search for Pokémon Go and install it

While still in the App Store, search for the game again, log in with your e-mail address and password, and install away!

Cleanup: log back into your original account

As a last task, you'll want to go back to the Featured page in the app store, scroll all the way to the bottom and log back in to your original account.


If you have concerns along the lines of 'what will happen when the game officially launches in Belgium? Will I lose my progress in the game?', I can't say for sure, but since you have to log into your Google account in the game -- same as with Ingress -- it appears to be tied to your Google account, not your Apple device.

If this guide didn't work for you, there's also an excellent YouTube walk-through available.


July 10, 2016

Apparently there are, once again, a few detectives who believe the rule of law is not meant for them; that, like Judge Dredd, they can go about wiretapping hospitals, doctors and care providers such as psychologists.

Anything goes when it comes to propping up their parallel constructions. To them, the law is just in the way. Who needs that, anyway? Laws? Pfft. Dredd doesn't bother with those. Judge Dredd is the law. What nonsense.

Did they have any indication that the doctor was in on the conspiracy? No, there was none. For why, then, was the Order of Physicians never informed? Wiretapping that hospital was simply and utterly illegal.

I sincerely hope these detectives receive a heavy prison sentence and, on top of that, are never again allowed to work as detectives.

We don't need that here. Go play policeman in the UK. While it still exists. Bunch of bunglers.

July 07, 2016

In one of my recent blog posts, I articulated a vision for the future of Drupal's web services, and at DrupalCon New Orleans, I announced the API-first initiative for Drupal 8. I believe that there is considerable momentum behind driving the web services initiative. As such, I want to provide a progress report, highlight some of the key people driving the work, and map the proposed vision from the previous blog post onto a rough timeline.

Here is a bird's-eye view of the plan for the next twelve months:

  • 8.2 (Q4 2016): new REST API capabilities, Waterwheel initial release
  • 8.3 (Q2 2017): new REST API capabilities, JSON API module
  • Beyond 8.3 (2017+): GraphQL module?, entity graph iterator?

New REST API capabilities

Wim Leers (Acquia) and Daniel Wehner (Chapter Three) have produced a comprehensive list of the top priorities for the REST module. We're introducing significant REST API advancements in Drupal 8.2 and 8.3 in order to improve the developer experience and extend the capabilities of the REST API. We've been focused on configuration entity support, simplified REST configuration, translation and file upload support, pagination, and last but not least, support for user login, logout and registration. All this work starts to address differences between core's REST module and various contributed modules like Services and RELAXed Web Services. More details are available in my previous blog post.

Many thanks to Wim Leers (Acquia), Daniel Wehner (Chapter Three), Ted Bowman (Acquia), Alex Pott (Chapter Three), and others for their work on Drupal core's REST modules. Though there is considerable momentum behind efforts in core, we could always benefit from new contributors. Please consider taking a look at the REST module issue queue to help!

Waterwheel initial release

As I mentioned in my previous post, there has been exciting work surrounding Waterwheel, an SDK for JavaScript developers building Drupal-backed applications. If you want to build decoupled applications using a JavaScript framework (e.g. Angular, Ember, React, etc.) that use Drupal as a content repository, stay tuned for Waterwheel's initial release later this year.

Waterwheel aims to facilitate the construction of JavaScript applications that communicate with Drupal. Waterwheel's JavaScript library allows JavaScript developers to work with Drupal without needing deep knowledge of how requests should be authenticated against Drupal, what request headers should be included, and how responses are molded into particular data structures.

The Waterwheel Drupal module adds a new endpoint to Drupal's REST API allowing Waterwheel to discover entity resources and their fields. In other words, Waterwheel intelligently discovers and seamlessly integrates with the content model defined on any particular Drupal 8 site.

A wider ecosystem around Waterwheel is starting to grow as well. Gabe Sullice (Aten Design Group), creator of the Entity Query API module, has contributed an integration of Waterwheel which opens the door to features such as sorts, conditions and ranges. The Waterwheel team welcomes early adopters as well as those working on other REST modules such as JSON API and RELAXed or using native HTTP clients in JavaScript frameworks to add their own integrations to the mix.

Waterwheel is currently the work of Matt Grill (Acquia) and Preston So (Acquia), who are developing the JavaScript library, and Ted Bowman (Acquia), who is working on the Drupal module.

JSON API module

In conjunction with the ongoing efforts in core REST, parallel work is underway to build a JSON API module which embraces the JSON API specification. JSON API is a particular implementation of REST that provides conventions for resource relationships, collections, filters, pagination, and sorting, in addition to error handling and full test coverage. These conventions help developers build clients faster and encourages reuse of code.

Thanks to Mateu Aguiló Bosch, Ed Faulkner and Gabe Sullice (Aten Design Group), who are spearheading the JSON API module work. The module could be ready for production use by the end of this year and included as an experimental module in core by 8.3. Contributors to JSON API are meeting weekly to discuss progress moving forward.

Beyond 8.3: GraphQL and entity graph iterator

While these other milestones are either certain or in the works, there are other projects gathering steam. Chief among these is GraphQL, a query language I highlighted in my Barcelona keynote, which allows clients to tailor the responses they receive based on the structure of the requests they issue.

One of the primary outcomes of the New Orleans web services discussion was the importance of a unified approach to iterating Drupal's entity graph; both GraphQL and JSON API require such an "entity graph iterator". Though much of this is still speculative and needs greater refinement, eventually, such an "entity graph iterator" could enable other functionality such as editable API responses (e.g. aliases for custom field names and timestamp formatters) and a unified versioning strategy for web services. However, more help is needed to keep making progress, and in absence of additional contributors, we do not believe this will land in Drupal until after 8.3.

Thanks to Sebastian Siemssen, who has been leading the effort around this work, which is currently available on GitHub.

Validating our work and getting involved

In order to validate all of the progress we've made, we need developers everywhere to test and experiment with what we're producing. This means stretching the limits of our core REST offerings, trying out JSON API for your own Drupal-backed applications, reporting issues and bugs as you encounter them, and participating in the discussions surrounding this exciting vision. Together, we can build towards a first-class API-first Drupal.

Special thanks to Preston So for contributions to this blog post and to Wim Leers for feedback during its writing.

July 06, 2016

The post The Bash For Loop, The First Step in Automation on Linux appeared first on

I believe mastering the for loop in Bash on Linux is one of the fundamentals for Linux sysadmins (and even developers!) that takes your automation skills to the next level. In this post I explain how they work and offer some useful examples.

Update 07/06/2016: lots of critique on Reddit (granted: well deserved), so I updated most of the examples on this page for safer/saner defaults.

Let me first start by saying something embarrassing. For the first 4 or 5 years of my Linux career -- which is nearing 10 years of professional experience -- I never used loops in Bash scripts. Or at the command line.

The thing is, I was a very fast mouse-clicker. And a very fast copy/paster. And a good search & replacer in vim and other text editors. Quite often, that got me to a working solution faster than working out the quirky syntax, testing, bugfixing, ... of loops in Bash.

And, to be completely honest, if you're managing just a couple of servers, I think you can get away with not using loops in Bash. But once you master them, you'll wonder why you didn't learn Bash for loops sooner.

The Bash For Loop: Example

First, let me show you the most basic -- and the one you'll see most often -- form of a Bash loop.

#!/bin/bash
for i in 1 2 3 4 5; do
  echo "counter: $i"
done

If you execute such a script, the output looks like this.

$ ./
counter: 1
counter: 2
counter: 3
counter: 4
counter: 5

Pretty basic, right? Here's what it breaks down to.


The first part, #!/bin/bash, is the shebang, sometimes called a hashbang. It indicates which interpreter will be used to parse the rest of the script. In short, it's what makes this a Bash script.

The rest is where the Bash for loop actually comes in.

  1. for: indicates that this is a loop, and that you'd like to iterate (or "go over") multiple items.
  2. i: a placeholder for a variable, which can later be referenced as $i. i is often used by developers to iterate over an array or a hash, but it can be named almost anything (*). For clarity, it could also have been named counter; referencing it later would then be done with $counter, with a dollar sign.
  3. in: a keyword, indicating the separator between the variable i and the collection of items to run over.
  4. 1 2 3 4 5: whichever comes between the in keyword and the ; delimiter is the collection of items you want to run through. In this example, the collection "1 2 3 4 5" is considered a set of 5 individual items.
  5. do: this keyword marks the point where the loop body starts. The code that follows will be executed n times, where n is the number of items in the collection, in this case a set of 5 digits.
  6. echo "counter: $i": this is the code inside the loop, that will be repeated -- in this case -- 5 times. The $i variable is the individual value of each item.
  7. done: this keyword indicates that the code that should be repeated in this loop, has finished.

(*) Technically, the variable can't be just anything, as there are limitations on which characters can be used in a variable name, but that's beyond the scope here. Stick to alphanumerics without spaces or special chars, and you're probably safe.

Lots of text, isn't it?

Well, look at the screenshot again and just remember the different parts of the for loop. And remember also that this isn't limited to actual "scripts"; it can be collapsed to a single line for use at the command line, too.

$ for i in 1 2 3 4 5; do echo "counter: $i"; done

The same breakdown occurs there.


There is one important difference, though. Right before the last done keyword, there is a semicolon ; to indicate that the command ends there. In the Bash script that isn't needed, because the done keyword is placed on a new line, which also ends the command above it.

Actually, that very first example I showed you? It can be rewritten without a single ;, by simply putting each part on its own line.

for i in 1 2 3 4 5
do
  echo "counter: $i"
done

You can pick whichever style you prefer or find most readable/maintainable.

The result is exactly the same: a set of items is looped and for each occurrence, an action is taken.

Values to loop in Bash

Looping variables isn't very exciting in and of itself, but it gets very useful once you start to experiment with the data to loop through.

For instance:

$ for file in *; do echo "$file"; done

Granted, you can just run ls and get the same output, but here you use the * glob inside your for statement, which is essentially the same as $(ls) but with safe output. The glob is expanded before the actual for loop runs, and the result is used as the collection of items to iterate over.
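To see the difference, here's a small sketch (the directory and filenames are made up): a filename containing a space survives globbing as a single item, where word-splitting the output of $(ls) would have cut it in two.

```shell
#!/bin/bash
# Globbing keeps "my file.txt" as one item; word-splitting $(ls) output
# would have produced "my" and "file.txt". Filenames here are made up.
dir=$(mktemp -d)
touch "$dir/my file.txt" "$dir/other.txt"
for f in "$dir"/*; do
  echo "item: $f"
done
rm -r "$dir"
```

Run it and you'll see exactly two items, space and all.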

This opens up a lot of opportunities, especially if you take the seq command in mind. With seq you can generate sequences at the CLI.

For instance:

$ seq 25 30

That generates the numbers 25 through 30. So if you'd like to loop from 1 to 255, you can do this:

$ for counter in $(seq 1 255); do echo "$counter"; done

When would you use that? Maybe to ping a couple of IPs, or to connect to multiple remote hosts and fire off a few commands.

$ for counter in $(seq 1 255); do ping -c 1 "10.0.0.$counter"; done
PING ( 56(84) bytes of data.
PING ( 56(84) bytes of data.

Now we're talking.

Ranges in Bash

You can also use some of the built-in Bash primitives to generate a range, without using seq. The code below does exactly the same as the ping example above.

$ for counter in {1..255}; do ping -c 1 10.0.0.$counter; done

More recent Bash versions (Bash 4.x at least) can also extend this syntax to change the step by which each integer increments. By default it's always +1, but you can make that +5 if you like.

$ for counter in {1..255..5}; do echo "ping -c 1 10.0.0.$counter"; done
ping -c 1
ping -c 1
ping -c 1
ping -c 1

Looping items like this can allow you to quickly automate an otherwise mundane task.
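If you're stuck on a Bash older than 4.x without the {start..end..step} syntax, seq can produce the same stepped sequence; its argument order is first, increment, last. A small sketch with a shorter range:

```shell
#!/bin/bash
# seq FIRST INCREMENT LAST: 1 to 16 in steps of 5 gives 1 6 11 16.
for counter in $(seq 1 5 16); do
  echo "counter: $counter"
done
```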

Chaining multiple commands in the Bash for loop

You obviously aren't limited to a single command in a for loop, you can chain multiple ones inside the for-loop.

for i in 1 2 3 4 5; do
  echo "Hold on, connecting to 10.0.1.$i"
  ssh root@"10.0.1.$i" uptime
  echo "All done, on to the next host!"

Or, at the command line as a one-liner:

$ for i in 1 2 3 4 5; do echo "Hold on, connecting to 10.0.1.$i"; ssh root@"10.0.1.$i" uptime; echo "All done, on to the next host"; done

You can chain multiple commands with the ; semicolon; the last "command" will be the done keyword to indicate that you're, well, done.
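One related trick, sketched below with the test command: chaining with && instead of ; runs the next command only when the previous one succeeded, and || only when it failed.

```shell
#!/bin/bash
# && fires only on success (exit code 0), || only on failure.
for dir in /etc /nonexistent; do
  test -d "$dir" && echo "$dir exists" || echo "$dir is missing"
done
```

Careful though: in a && b || c, the c branch also runs when b itself fails, so for anything non-trivial use a proper if statement inside the loop.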

Bash for-loop examples

Here are a couple of "bash for loop" examples. They aren't necessarily the most useful ones, but they show some of the possibilities.

For each user on the system, write their password hash to a file named after them


$ for username in $(awk -F: '{print $1}' /etc/passwd); do grep "^$username:" /etc/shadow | awk -F: '{print $2}' > "$username.txt"; done


for username in $(awk -F: '{print $1}' /etc/passwd)
do
  grep "^$username:" /etc/shadow | awk -F: '{print $2}' > "$username.txt"
done

Rename all *.txt files to remove the file extension


$ for filename in *.txt; do mv "$filename" "${filename%.txt}"; done


for filename in *.txt
do
  mv "$filename" "${filename%.txt}"
done

Use each line in a file as an IP to connect to


$ for ip in $(cat ips.txt); do ssh root@"$ip" yum -y update; done


for ip in $(cat ips.txt)
do
  ssh root@"$ip" yum -y update
done
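A caveat with $(cat ips.txt): the file's contents get word-split and glob-expanded. A while read loop avoids that; here's a sketch with dummy data (the IPs and the temporary file are invented for the example, and echo stands in for the ssh command):

```shell
#!/bin/bash
# Read the file line by line; IFS= read -r keeps each line intact.
ips_file=$(mktemp)
printf '%s\n' '' '' > "$ips_file"   # dummy input
while IFS= read -r ip; do
  echo "connecting to $ip"   # stand-in for: ssh root@"$ip" yum -y update
done < "$ips_file"
rm "$ips_file"
```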

Debugging for loops in Bash

Here's one way I really like to debug for loops: just echo everything. This is also a great way to "generate" a static Bash script, by capturing the output.

For instance, in the ping example, you can do this:

$ for counter in {1..255..5}; do echo "ping -c 1 10.0.0.$counter"; done

That will echo each ping statement. You can then capture that output, write it to another Bash file and keep it for later (or modify it manually if you're struggling with the Bash loop -- been there, done that).

$ for counter in {1..255..5}; do echo "ping -c 1 10.0.0.$counter"; done >
$ more
ping -c 1
ping -c 1
ping -c 1

It may be primitive, but this gets you a very long way!
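Another debugging aid worth knowing is Bash's trace mode: bash -x (or set -x inside a script) prints every command, after expansion and prefixed with +, on stderr before running it.

```shell
# Trace mode shows each expanded command on stderr before executing it.
bash -x -c 'for i in 1 2; do echo "counter: $i"; done'
```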


July 05, 2016

I've been tweaking the video review system which we're using here at debconf over the past few days so that videos are being published automatically after review has finished; and I can happily announce that as of a short while ago, the first two files are now visible on the meetings archive. Yes, my own talk is part of that. No, that's not a coincidence. However, the other talks should not take too long ;-)

Future plans include the addition of a video RSS feed, and showing the videos on the debconf16 website. Stay tuned.

July 01, 2016

Consider these rather simple relationships between classes:

Continuing on this subject, here are some code examples.

Class1 & Class2: Composition
An instance of Class1 cannot exist without an instance of Class2.

A typical example of composition is a Bicycle and its Wheels, Saddle and HandleBar: without these, the Bicycle is no longer a Bicycle but just a Frame. It can no longer function as a Bicycle.

The moment to stop deliberating between composition and aggregation is whenever you catch yourself saying: without the other thing, the first thing can't work in our software.

Note that you must consider this in the context of Class1. You use aggregation or composition based on how Class2 exists in relation to Class1.

Class1 with QScopedPointer:

#ifndef CLASS1_H
#define CLASS1_H

#include <QObject>
#include <QScopedPointer>
#include <Class2.h>

class Class1: public QObject
{
    Q_OBJECT
    Q_PROPERTY( Class2* class2 READ class2 WRITE setClass2 NOTIFY class2Changed)
public:
    Class1( QObject *a_parent = nullptr )
        : QObject ( a_parent ) {
        // Don't use QObject parenting on top here
        m_class2.reset( new Class2() );
    }
    Class2* class2() {
        return;
    }
    void setClass2 ( Class2 *a_class2 ) {
        Q_ASSERT( a_class2 != nullptr ); // Composition can't set a nullptr!
        if ( != a_class2 ) {
            m_class2.reset( a_class2 );
            emit class2Changed();
        }
    }
signals:
    void class2Changed();
private:
    QScopedPointer<Class2> m_class2;
};

#endif // CLASS1_H

Class1 with QObject parenting:

#ifndef CLASS1_H
#define CLASS1_H

#include <QObject>
#include <Class2.h>

class Class1: public QObject
{
    Q_OBJECT
    Q_PROPERTY( Class2* class2 READ class2 WRITE setClass2 NOTIFY class2Changed)
public:
    Class1( QObject *a_parent = nullptr )
        : QObject ( a_parent )
        , m_class2 ( nullptr ) {
        // Make sure to use QObject parenting here
        m_class2 = new Class2( this );
    }
    Class2* class2() {
        return m_class2;
    }
    void setClass2 ( Class2 *a_class2 ) {
        Q_ASSERT( a_class2 != nullptr ); // Composition can't set a nullptr!
        if ( m_class2 != a_class2 ) {
            // Make sure to use QObject parenting here
            a_class2->setParent( this );
            delete m_class2; // Composition can never be nullptr
            m_class2 = a_class2;
            emit class2Changed();
        }
    }
signals:
    void class2Changed();
private:
    Class2 *m_class2;
};

#endif // CLASS1_H

Class1 with RAII:

#ifndef CLASS1_H
#define CLASS1_H

#include <QObject>

#include <Class2.h>

class Class1: public QObject
{
    Q_OBJECT
    Q_PROPERTY( Class2* class2 READ class2 CONSTANT)
public:
    Class1( QObject *a_parent = nullptr )
        : QObject ( a_parent ) { }
    Class2* class2()
        { return &m_class2; }
private:
    Class2 m_class2;
};
#endif // CLASS1_H

Class3 & Class4: Aggregation

An instance of Class3 can exist without an instance of Class4. A typical example of aggregation is a Bicycle and its Driver or Passenger: without the Driver or Passenger it is still a Bicycle, and it can still function as a Bicycle.

Here the rule of thumb flips: it's aggregation whenever you can say that, without the other thing, the first thing can still work in our software.


#ifndef CLASS3_H
#define CLASS3_H

#include <QObject>

#include <QPointer>
#include <Class4.h>

class Class3: public QObject
{
    Q_OBJECT
    Q_PROPERTY( Class4* class4 READ class4 WRITE setClass4 NOTIFY class4Changed)
public:
    Class3( QObject *a_parent = nullptr );
    Class4* class4() {
        return; // QPointer resets to nullptr when the aggregate is deleted
    }
    void setClass4 ( Class4 *a_class4 ) {
        if ( m_class4 != a_class4 ) {
            m_class4 = a_class4;
            emit class4Changed();
        }
    }
signals:
    void class4Changed();
private:
    QPointer<Class4> m_class4;
};
#endif // CLASS3_H

Class5, Class6 & Class7: Shared composition
An instance of Class5 and/or an instance of Class6 cannot exist without an instance of Class7 that is shared by Class5 and Class6. When one of Class5 or Class6 can exist without the shared instance while the other can not, use QWeakPointer for the one that can.


#ifndef CLASS5_H
#define CLASS5_H

#include <QObject>
#include <QSharedPointer>

#include <Class7.h>

class Class5: public QObject
{
    Q_OBJECT
    Q_PROPERTY( Class7* class7 READ class7 CONSTANT)
public:
    // Take the QSharedPointer itself so Class5 and Class6 share one
    // reference count; building two shared pointers from the same raw
    // pointer would delete it twice.
    Class5( const QSharedPointer<Class7> &a_class7, QObject *a_parent = nullptr )
        : QObject ( a_parent )
        , m_class7 ( a_class7 ) { }
    Class7* class7()
        { return; }
private:
    QSharedPointer<Class7> m_class7;
};
#endif // CLASS5_H


#ifndef CLASS6_H
#define CLASS6_H

#include <QObject>
#include <QSharedPointer>

#include <Class7.h>

class Class6: public QObject
{
    Q_OBJECT
    Q_PROPERTY( Class7* class7 READ class7 CONSTANT)
public:
    Class6( const QSharedPointer<Class7> &a_class7, QObject *a_parent = nullptr )
        : QObject ( a_parent )
        , m_class7 ( a_class7 ) { }
    Class7* class7()
        { return; }
private:
    QSharedPointer<Class7> m_class7;
};
#endif // CLASS6_H

Interfaces with QObject


#include <QObject>

// Don't inherit QObject here (you'll break multiple-implements)
class FlyBehavior {
public:
    virtual ~FlyBehavior() {}
    Q_INVOKABLE virtual void fly() = 0;
};
Q_DECLARE_INTERFACE( FlyBehavior, "be.codeminded.Flying.FlyBehavior/1.0" )


#include <QObject>
#include <Flying/FlyBehavior.h>

// Do inherit QObject here (this is a concrete class)
class FlyWithWings: public QObject, public FlyBehavior
{
    Q_OBJECT
    Q_INTERFACES( FlyBehavior )
public:
    explicit FlyWithWings( QObject *a_parent = nullptr ): QObject ( a_parent ) {}
    ~FlyWithWings() {}

    virtual void fly() Q_DECL_OVERRIDE;
};

It’s official, nginx is a heap of donkey dung. I replaced it with ye olde apache:

sudo service nginx stop
sudo apt-get -y purge nginx
sudo apt-get -y install apache2 apachetop libapache2-mod-php5
sudo apt-get -y autoremove
sudo service apache2 restart


The post Ye Olde Apache appeared first on

June 29, 2016

So since Autoptimize 2.0.0 got released half a year ago, minified files are no longer re-minified, which can yield important performance gains. Or that, at least, is the goal. But as checking whether a file is minified is non-trivial, AO falls back to a simpler check: does the filename indicate the file is minified? So for example whatever-min.js and thisone_too.min.css would be considered minified and simply aggregated, whereas not_minified.js would get minified. Mr Clay's Minify (which is used by WP Minify, BWP Minify and W3 Total Cache, and whose core minification components are in Autoptimize as well) applies the same logic.

But apparently plugins often lie about their JS and CSS: some files claim to be minified but clearly are not, while others (even WordPress core files) are minified but lack the min suffix in their name. Lying like that is kind of stupid: saying your file is minified when in fact it is not offers you no advantages. And not confirming in the filename that your file is minified when it is saves you 4 characters, but I suspect you were just being lazy, sloppy or tired, no?

So, ladies and gentlemen, can we agree on the following:

  1. Ideally you ship your plugin/theme with minified JS & CSS.
  2. If your files are minified, you confirm that in the filename by adding the “.min”-suffix, and minification plugins will skip them.
  3. If your files are not minified, you don't include the “.min”-suffix in the filename, allowing those minification plugins to minify them.
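The convention boils down to a simple filename test. Here's a rough shell sketch of the heuristic (the is_minified helper and the exact pattern list are mine for illustration, not Autoptimize's actual code):

```shell
#!/bin/bash
# Hypothetical helper mirroring the described heuristic: filenames
# ending in .min.js / -min.js (or the .css equivalents) are skipped.
is_minified() {
  case "$1" in
    *.min.js|*-min.js|*.min.css|*-min.css) return 0 ;;
    *) return 1 ;;
  esac
}

for f in whatever-min.js thisone_too.min.css not_minified.js; do
  if is_minified "$f"; then
    echo "$f: already minified, just aggregate"
  else
    echo "$f: minify first"
  fi
done
```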

For a more detailed overview of how to responsibly load minified JS/ CSS in WordPress, I’ll happily point you to Matt Cromwell’s excellent article on the subject.

I had planned to do some work on NBD while here at debcamp. Here's a progress report:

Task Concept Code Tested
Change init script so it uses /etc/nbdtab rather than /etc/nbd-client for configuration
Change postinst so it converts existing /etc/nbd-client files to /etc/nbdtab
Change postinst so it generates /etc/nbdtab files from debconf
Create systemd unit for nbd based on /etc/nbdtab
Write STARTTLS support for client and/or server

The first four are needed to fix Debian bug #796633, of which "writing the systemd unit" was the one that seemed hardest. The good thing about debcamp, however, is that experts are aplenty (thanks Tollef), so that part's done now.

What's left:

  • Testing the init script modifications that I've made, so as to support those users who dislike systemd. They're fairly straightforward, and I don't anticipate any problems, but it helps to make sure.
  • Migrating the /etc/nbd-client configuration file to an nbdtab(5) one. This should be fairly straightforward, it's just a matter of Writing The Code(TM).
  • Changing the whole debconf setup so it writes (and/or updates) an nbdtab(5) file rather than a /etc/nbd-client shell snippet. This falls squarely into the "OMFG what the F*** was I thinking when I wrote that debconf stuff 10 years ago" area. I'll probably deal with it somehow. I hope. Not so sure how to do so yet, though.

If I manage to get all of the above to work and there's time left, I'll have a look at implementing STARTTLS support into nbd-client and nbd-server. A spec for that exists already, there's an alternative NBD implementation which has already implemented it, and preliminary patches exist for the reference implementation, so it's known to work; I just need to spend some time slapping the pieces together and making it work.

Ah well. Good old debcamp.

What feelings does the name Drupal evoke? Perceptions vary from person to person; where one may describe it in positive terms as "powerful" and "flexible", another may describe it negatively as "complex". People describe Drupal differently not only as a result of their professional backgrounds, but also based on what they've heard and learned.

If you ask different people what Drupal is for, you'll get many different answers. This isn't a surprise because over the years, the answers to this fundamental question have evolved. Drupal started as a tool for hobbyists building community websites, but over time it has evolved to support large and sophisticated use cases.

Perception is everything

Perception is everything; it sets expectations and guides actions and inactions. We need to better communicate Drupal's identity, demonstrate its true value, and manage its perceptions and misconceptions. Words do lead to actions. Spending the time to capture what Drupal is for could energize and empower people to make better decisions when adopting, building and marketing Drupal.

Truth be told, I've been reluctant to define what Drupal is for, as it requires making trade-offs. I have feared that we would make the wrong choice or limit our growth. Over the years, it has become clear that not defining what Drupal is used for leaves more people confused even within our own community.

For example, because Drupal evolved from a simple tool for hobbyists into a more powerful digital experience platform, many people believe that Drupal is now "for the enterprise". While I agree that Drupal is a great fit for the enterprise, I personally never loved that categorization. It's not just large organizations that use Drupal. Individuals, small startups, universities, museums and non-profits can be equally ambitious in what they'd like to accomplish, and Drupal can be an incredible solution for them.

Defining what Drupal is for

Rather than using "for the enterprise", I thought "for ambitious digital experiences" was a good phrase to describe what people can build using Drupal. I say "digital experiences" because I don't want to confine this definition to traditional browser-based websites. As I've stated in my Drupalcon New Orleans keynote, Drupal is used to power mobile applications, digital kiosks, conversational user experiences, and more. Today I really wanted to focus on the word "ambitious".

"Ambitious" is a good word because it aligns with the flexibility, scalability, speed and creative freedom that Drupal provides. Drupal projects may be ambitious because of the sheer scale (e.g. The Weather Channel), their security requirements (e.g. The White House), the number of sites (e.g. Johnson & Johnson manages thousands of Drupal sites), or specialized requirements of the project (e.g. the New York MTA powering digital kiosks with Drupal). Organizations are turning to Drupal because it gives them greater flexibility, better usability, deeper integrations, and faster innovation. Not all Drupal projects need these features on day one -- or need to know about them -- but it is good to have them in case you need them later on.

"Ambitious" also aligns with our community's culture. Our industry is in constant change (responsive design, web services, social media, IoT), and we never look away. Drupal 8 was a very ambitious release; a reboot that took one-third of Drupal's lifespan to complete, but maneuvered Drupal to the right place for the future that is now coming. I have always believed that the Drupal community is ambitious, and believe that attitude remains strong in our community.

Last but not least, our adopters are also ambitious. They are using Drupal to transform their organizations digitally, leaving established business models and old business processes in the dust.

I like the position that Drupal is ambitious. Stating that Drupal is for ambitious digital experiences however is only a start. It only gives a taste of Drupal's objectives, scope, target audience and advantages. I think we'd benefit from being much more clear. I'm curious to know how you feel about the term "for ambitious digital experiences" versus "for the enterprise" versus not specifying anything. Let me know in the comments so we can figure out how to collectively change the perception of Drupal.

PS: I'm borrowing the term "ambitious" from the Ember.js community. They use the term in their tagline and slogan on their main page.

June 28, 2016

June 27, 2016

Captain: What happen?
Mechanic: Somebody set up us the bomb!

So yeah, my blog was off the air for a couple of days. So what happened?

This is what /var/log/nginx/error.log told me:

2016/06/27 08:48:46 [error] 22758#0: *21197
connect() to unix:/var/run/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client:, server:, request: "GET /wuala-0 HTTP/1.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host:

So I asked Doctor Google “connect() to unix:/var/run/php5-fpm.sock failed (11: resource temporarily unavailable)” and got this answer from StackOverflow:

The issue is the socket itself; its problems under high load are well known. Please consider using a TCP/IP connection instead of the unix socket. For that you need to make these changes:

  • in php-fpm pool configuration replace listen = /var/run/php5-fpm.sock with listen =
  • in /etc/nginx/php_location replace fastcgi_pass unix:/var/run/php5-fpm.sock; with fastcgi_pass;

followed by a careful application of

sudo /etc/init.d/php-fpm restart
sudo /etc/init.d/nginx restart

Tl;dr version: don’t use a Unix socket, use an IP socket. For great justice!
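Concretely, the two changes amount to something like this (port 9000 and the file paths are typical defaults, not necessarily what your distribution uses):

```nginx
# /etc/php5/fpm/pool.d/www.conf — switch the pool from a Unix socket to TCP
; listen = /var/run/php5-fpm.sock
listen =

# /etc/nginx/php_location — point fastcgi_pass at the same TCP address
# fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_pass;
```

Then restart both php-fpm and nginx as shown above. The trade-off: a TCP socket handles backlog pressure more gracefully, at the cost of a little loopback overhead.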

I leave you with this classic:

The post The Website Was Down appeared first on

June 26, 2016

Little Britain is an amalgamation of the terms ‘Little England‘ and ‘Great Britain’, and is also the name of a Victorian neighbourhood and a modern street in London. Says Wikipedia. It’s also what I think will remain of Great Britain in a few years. Maybe defacto already in a few days or weeks. But okay.

This is not a big problem. More serious problems are geopolitical. I do think Russia will gain from not having England (in the end not Great Britain, but just England) in the European Union: it'll make the UK's (or England's) voice in NATO sound like less a part of one bloc. To remain significant, the EU bloc will have to find a new way.

I propose forming a European military. Initially, make NATO part of it. The idea would be that each country in Western Europe can join this military alliance based on negotiated contribution criteria. Let's learn from our mistakes and allow countries to leave and be kicked out: especially if a country doesn't contribute enough to the alliance, it should be kicked out (temporarily).

That allows for England or Little Britain to keep its geopolitical relevance, yet allows for the EU member states to exchange economy-currency into military-currency and vice versa. Let’s show some European teeth. But let’s also remain calm and intelligent.

Meanwhile we can slow down NATO's slide into becoming a geopolitical football aimed at Russia. This Cold-War 2.0 nonsense isn't benefiting world peace. Keeping the world of humans in as much peace as possible should nowadays be NATO's only goal. I hope there is still some time before any big war starts, to stop it from happening at all. We have so much technology, happiness and growth to give to the world of humans. Let us not waste it in a big, stupid worldwide conflict.



June 24, 2016


A perfectly safe, fully segregated cycle path linking Ottignies to Brussels in only 16 km? All of it entirely financed by taxpayer money?

A dream?

In fact, it is already a reality, one you have already financed to the tune of several billion euros.

Just one small problem remains: the taxpayers who financed this marvel are forbidden from using it.

Because this wonderful cycle path is the route of the future RER, a project that has already swallowed billions of euros of public money for a result that would, at best, be usable in 2024. The more realistic forecasts, however, expect the RER around 2030. If it is ever finished at all, and not already obsolete before it even enters service.

From Ottignies to Brussels (Boitsfort station), there is thus a genuine tarmac road: smooth, flat, without a single climb and without any traffic. This road, in perfect condition, never comes closer than three metres to the railway tracks and is always separated from them by a kerb and at least a minimal screen of vegetation. We have dubbed it the VER, Vélo Express Régional.


Click to see the animation

The cycling association Gracq very recently announced that some of its members were using certain sections of the VER. The reaction of Infrabel, the track operator, was swift: access to this road is strictly forbidden and supposedly even dangerous.

So this road, in perfect condition, is supposed to sit unused and deteriorate pointlessly for at least a decade.

To get to the bottom of it, five cyclists decided to ride from Ottignies to Boitsfort on a day of general strike: Stéphane, Nils, Natacha, Yves and yours truly.


Proof that the idea is in the air: we were preparing our action while none of us knew about Gracq's very similar one.

The verdict is clear: only the section between the Genval and La Hulpe stations (2 km) has not yet been developed. Getting through there is strictly impossible without coming dangerously close to the tracks or crossing them (the roadway having been built on the other side of the tracks). It is therefore imperative to leave the VER before Genval station and rejoin it at La Hulpe station, implying a 15-minute detour.

The rest of the trip can be done in complete safety on a wide, open road. Two stretches of about a hundred metres are sand and dirt but remain rideable on a mountain bike, the first at Profondsart and the second inside Boitsfort station itself.


Muddy stretch at Profondsart

The total? A VER of a little over 16 km on absolutely flat terrain. For a trained cyclist, the trip can be done in half an hour. And for those who prefer to take their time and admire the very pleasant surroundings, 45 to 50 minutes seems an absolute maximum. As long as the Genval to La Hulpe link is not finished, a short hour seems a reasonable time, even for a novice cyclist.


Part of the route is even covered

Another unforeseen obstacle: a stretch of broken glass in Rixensart station, which tore yours truly's tyre and forced him to turn back while the four others continued towards Boitsfort.

But nothing will convince you better than a short video (missing only the last few kilometres).

So, is it dangerous?

Yes, clearly. Having to make the detour between Genval and La Hulpe, through streets open to car traffic and without cycle lanes, is certainly the most dangerous part of the trip. A danger cyclists face every day, but one that could from now on be avoided thanks to the VER.

Outside the Genval/La Hulpe section, the trains always remain at a good distance and cannot in any way pose the slightest danger.

Is it legal?

No. Although there was no material damage and no victims, the action we undertook is illegal.

Is this illegality justifiable?

Following Gracq's action, Infrabel's reaction was swift: concrete blocks were deliberately placed to bar access to cyclists. Does that reaction strike you as responsible and useful?


Infrabel cannot stand the intolerable competition from bicycles

Can political leaders who claim to fight for mobility and for lower emissions legitimately decide that cyclists have no right to be protected and must under no circumstances benefit from the VER?

Will those politicians not be morally responsible if a cyclist is run over by a car because he chose to respect the ban on using the VER and rides in the middle of roads designed for cars?

Does a democratic state that financed the VER with taxpayer money have the right to forbid those same taxpayers from using it?

Should we not, on the contrary, finish the Genval/La Hulpe link as quickly as possible and inaugurate a magnificent greenway along which a genuine local economy could spring up: refreshment stands for thirsty cyclists, repair workshops, meeting rooms and co-working spaces?

Creativity knows no limits. All that remains is to finish the work already done.

Ladies and gentlemen politicians, you have today the opportunity to turn the greatest of Belgium's useless public works, a genuine squandering of public money (the RER), into a formidable ecological and economic investment, the VER.

Ladies and gentlemen politicians, one push is all it takes to finish the VER. The ball is in your court!


Cover photo: start of the VER from the Jassans bridge in Ottignies. Photos and videos by Stéphane Vandeneede and myself.

Thank you for taking the time to read this freely-paid post. Feel free to support me with a few milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE licence.

I’m happy to announce the immediate availability of Maps 3.7. This feature release brings some minor enhancements.

  • Added rotate control support for Google Maps (by Peter Grassberger)
  • Changed coordinate display on OpenLayers maps from long-lat to lat-long (by Peter Grassberger)
  • Upgraded Google marker cluster library to its latest version (2.1.2) (by Peter Grassberger)
  • Upgraded Leaflet library to its latest version (0.7.7) (by Peter Grassberger)
  • Added missing system messages (by Karsten Hoffmeyer)
  • Internal code enhancements (by Peter Grassberger)
  • Removed broken custom map layer functionality. You no longer need to run update.php for full installation.
  • Translation updates by TranslateWiki


Since this is a feature release, there are no breaking changes, and you can simply run composer update, or replace the old files with the new ones.

Beware that as of Maps 3.6, you need MediaWiki 1.23 or later, and PHP 5.5 or later. If you choose to remain with an older version of PHP or MediaWiki, use Maps 3.5. Maps works with the latest stable versions of both MediaWiki and PHP, which are the versions I recommend you use.

June 23, 2016


In every age, the young have rebelled against the old in order to move forward a society that conservatives, by their very nature, want to keep frozen.

Youth always wins in the end, even if it sometimes takes several generations of young people to get an idea accepted, with possible steps backwards along the way. Ultimately, it is just a matter of patience.

But today there is one problem for which, unfortunately, we no longer have time to wait: saving our planet.

We no longer have the luxury of debating and letting conservatism painfully come around to the idea that, hey, maybe the planet's resources are finite. We can no longer afford to spend fifteen years learning to put plastic waste into blue bags just to feel we are doing something for the environment.

We must act radically, today and now. We must fundamentally rethink everything in our society that destroys, or justifies destroying, the planet.

And one of the main sources of destruction is clearly identified: employment! Nobody dares say it, or even think it, because it is a pillar of our society and of our identity.

For what is the real problem we face? We consume and we produce too much! It is that simple: our whole model of society is based on producing more in order to consume more, and consuming more in order to produce more.

And since we are ever more productive, producing more with less labour, we have no choice but to increase consumption.

Biodegradable packaging, emission reductions, building insulation and even well-meaning marches for the environment are just that: good intentions, pious wishes.

All the speeches, all the political decisions and all the "green" technologies can do nothing more than slightly slow down the inevitable, as long as we fail to realise that the one and only problem is our relationship to work.

Because a job is ultimately nothing more than taking part of the planet's resources and turning it into something else, producing waste along the way.

As long as we persist in wanting to "create jobs", we will consume, we will pollute, we will destroy the planet.

Yet, far from questioning this root cause, we have reached the supreme hypocrisy of "creating green jobs". The message of the green parties is that "being ecological creates jobs".

We try to make cars pollute a little less per kilometre driven, even rigging the tests to pretend they do, while the one real problem is that we drive far too many kilometres to... get to work. Kilometres that require ever wider roads, attracting ever more drivers who are slowed down ever more and thus pollute even more.

We can no longer afford to "pollute less". We can no longer accept that the labels "ecological" or "green" be stuck onto anything slightly less polluting than the competition. We must radically change our way of life so as to stop polluting altogether, or even to regenerate the planet.

Questioning work stirs up deep fears: nobody will do anything any more, people will be idle, civilisation will collapse.

But is even the worst-case scenario not preferable to the outcome towards which we are inexorably heading?

For if we look at what people do outside work, whether volunteering, making art, helping each other, doing crafts or sport, a clear pattern emerges: these activities do very little damage to the planet (with the exception of a few motor sports and hunting).

Work, by contrast, is an activity rarely done with pleasure, whose very essence is to destroy the planet or to encourage its destruction through consumption.

In the worst and most frightening of futures, a leisure society would bring inequality, widespread impoverishment or even the collapse of civilisation, possibly garnished with famines, epidemics and war. We agree that this disaster scenario is improbable, but let us consider the worst.

We can see that, for humanity, this disaster scenario is not fatal. A new civilisation will always end up being reborn.

Whereas by continuing to work, to create jobs and to glorify work, we are perhaps destroying our planet for good.

Out of fear of uncertainty, we prefer to offer our children a near-certainty: that of being one of the last generations of human beings.

Humanity can recover from any catastrophe. Except one. The loss of its only planet.

It is urgent to rid ourselves of employment as fast as possible. To stop trying to negotiate with worried conservatives, and to act without regard for their opinion. We must join forces today, for we will not get a second chance.

So? How do we stop feeding the system?


Photo by Alan Cleaver.

Thank you for taking the time to read this freely-paid post. Feel free to support me with a few milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE licence.

The post Podcast: curl, libcurl and the future of the web appeared first on

I recorded a new episode of the SysCast podcast earlier this week, with Daniel Stenberg.

He's the author and maintainer of the curl project and we talk about curl & libcurl, HTTP/3, IETF and standards, OpenSSL vs LibreSSL and where the web is heading.

If you've got an interest in the web, HTTP and standards, this one's for you.

The post Podcast: curl, libcurl and the future of the web appeared first on

June 22, 2016

Most VMware appliances (vCenter Appliance, VMware Support Appliance, vRealize Orchestrator) have the so called VAMI: the VMware Appliance Management Interface, generally served via https on port 5480. VAMI offers a variety of functions, including "check updates" and "install updates". Some appliances offer to check/install updates from a connected CD iso, but the default is always to check online. How does that work?

VMware uses a dedicated website to serve the updates. Each appliance is configured with a repository URL. The PRODUCT-ID is a hexadecimal code specific to the product: vRealize Orchestrator uses 00642c69-abe2-4b0c-a9e3-77a6e54bffd9, VMware Support Appliance uses 92f44311-2508-49c0-b41d-e5383282b153, and vCenter Server Appliance uses 647ee3fc-e6c6-4b06-9dc2-f295d12d135c. The VERSION-ID contains the current appliance version with ".latest" appended.

The appliance checks for updates by retrieving /manifest/manifest-latest.xml from the repository URL. This XML contains the latest available version (fullVersion includes the build number), pre- and post-install scripts, the EULA, and a list of updated rpm packages. Each entry includes a path that can be appended to the repository URL and downloaded. The update procedure downloads the manifest and rpms, verifies checksums on the downloaded rpms, executes the preInstallScript, runs rpm -U on the downloaded rpm packages, executes the postInstallScript, displays the exit code and prompts for reboot.

With this information, you can set up your own local repository (for cases where internet access is impossible from the virtual appliances), or you can even execute the procedure manually. Be aware that a manual update would be unsupported. Using a different repository is supported by a subset of VMware appliances (e.g. VCSA, VRO) but not all (VMware Support Appliance).
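Since the procedure above boils down to "fetch XML, resolve relative locations, download", the manifest-parsing step can be sketched in a few lines. This is only an illustration: the element names, sample values and repository URL below are invented for the example and do not match any real VMware manifest.

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal manifest, mirroring the structure described above;
# the element names and values are assumptions, not the real schema.
SAMPLE_MANIFEST = """<update>
  <version>7.2.0</version>
  <fullVersion>7.2.0.21000</fullVersion>
  <rpmList>
    <rpm><location>package/foo-1.0.rpm</location></rpm>
    <rpm><location>package/bar-2.0.rpm</location></rpm>
  </rpmList>
</update>"""

def parse_manifest(xml_text, repo_url):
    """Extract the advertised version and the absolute rpm download URLs."""
    root = ET.fromstring(xml_text)
    version = root.findtext("version")
    # Each rpm entry carries a path relative to the repository URL.
    rpms = [repo_url.rstrip("/") + "/" + loc.text
            for loc in root.iter("location")]
    return version, rpms

# repo.example.com is a placeholder for the appliance's configured repository.
version, rpms = parse_manifest(
    SAMPLE_MANIFEST, "https://repo.example.com/PRODUCT-ID/7.2.0.latest")
print(version)   # → 7.2.0
print(rpms[0])   # → https://repo.example.com/PRODUCT-ID/7.2.0.latest/package/foo-1.0.rpm
```

From there, a mirror script would simply download each resolved URL into the local repository before running the (unsupported) manual rpm -U step.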

June 21, 2016

I sent an internal note to all of Acquia's 700+ employees today and decided to cross-post it to my blog because it contains a valuable lesson for any startup. One of my personal challenges — both as an Open Source evangelist/leader and entrepreneur — has been to learn to be comfortable with not being understood. Lots of people didn't believe in Open Source in Drupal's early days (and some still don't). Many people didn't believe Acquia could succeed (and some still don't). Something is radically different in software today, and the world is finally understanding and validating that some big shifts are happening. In many cases, an idea takes years to gain general acceptance. Such is the story of Drupal and Acquia. Along the way it can be difficult to deal with the naysayers and rejections. If you ever have an idea that is not understood, I want you to think of my story.


This week, Acquia got a nice mention on Techcrunch in an article written by Jake Flomenberg, a partner at Accel Partners. For those of you who don't know Accel Partners, they are one of the most prominent venture capital investors and were early investors in companies like Facebook, Dropbox, Slack, Etsy, Atlassian, Kayak and more.

The article, called "The next wave in software is open adoption software", talks about how the enterprise IT stack is being redrawn atop powerful Open Source projects like MongoDB, Hadoop, Drupal and more. Included in the article is a graph that shows Acquia's place in the latest wave of change to transform the technology landscape, a place showing our opportunity is bigger than anything before, as the software industry migrated from mainframes to client-server, then SaaS/PaaS, and now to what Flomenberg dubs the age of Open Adoption Software.

Waves of software adoption

It's a great article, but it isn't new to any of us per se – we have been promoting this vision since our start nine years ago and we have seen over and over again how Open Source is becoming the dominant model for how enterprises build and deliver IT. We have also shown that we are building a successful technology company using Open Source.

Why then do I feel compelled to share this article, you ask? The article marks a small but important milestone for Acquia.

We started Acquia to build a new kind of company with a new kind of business model, a new innovation model, all optimized for a new world. A world where businesses are moving most applications into the cloud, where a lot of software is becoming Open Source, where IT infrastructure is becoming a metered utility, and where data-driven services make or break business results.

We've been steadily executing on this vision; it is why we invest in Open Source (e.g. Drupal), cloud infrastructure (e.g. Acquia Cloud and Site Factory), and data-centric business tools (e.g. Acquia Lift).

In my 15+ years as an Open Source evangelist, I've argued with thousands of people who didn't believe in Open Source. In my 8+ years as an entrepreneur, I've talked to thousands of business people and dozens of investors who didn't understand or believe in Acquia's vision. Throughout the years, Tom and I have presented Acquia's vision to many investors – some have bought in and some, like Accel, have not (for various reasons). I see more and more major corporations and venture capital firms coming around to Open Source business models every day. This trend is promising for new Open Source companies; I'm proud that Acquia has been a part of clearing their path to being understood.

When former skeptics become believers, you know you are finally being understood. The Techcrunch article is a small but important milestone because it signifies that Acquia is finally starting to be understood more widely. As flattering as the Techcrunch article is, true validation doesn't come in the form of an article written by a prominent venture capitalist; it comes day-in and day-out by our continued focus and passion to grow Drupal and Acquia bit by bit, one successful customer at a time.

Building a new kind of company like we are doing with Acquia is the harder, less-traveled path, but we always believed it would be the best path for our customers, our communities, and ultimately, our world. Success starts with building a great team that not only understands what we do, but truly believes in what we do and remains undeterred in its execution. Together, we can build this new kind of company.

Dries Buytaert
Founder and Project Lead, Drupal
Co-founder and Chief Technology Officer, Acquia

These are currently the popular search terms on my blog:

  • blog amedee be
    Yeah, that’s this blog.
  • localhost
    Which used to be my IRC handle a looooong time ago.
  • upgrade squeeze to wheezy sed -i
    Sometimes I blog about Ubuntu, or Linux in general.
  • guild wars bornem
    Okay, I have played Guild Wars, but not very often, and I have been in Bornem, but the combination???
  • giftige amedeeamedee giftig
    Wait, I am toxic???
  • orgasme
    Ehhhh… dunno why people come looking for orgasms on my blog.
  • telenet service
    I used to blog about bad service I got a couple of times from Telenet.
  • taxipost 2007
  • ik bond ixq

The post Popular Search Terms appeared first on

June 18, 2016

The standard WordPress RSS feeds don't include a post's featured image. The code below adds the medium-format thumbnail to each item in an RSS2 standards-compliant manner by inserting it as an enclosure.

add_action('rss2_item', 'add_enclosure_thumb');
function add_enclosure_thumb() {
  global $post;
  if (has_post_thumbnail($post->ID)) {
    $thumbUrl = get_the_post_thumbnail_url($post->ID, "medium");

    // Determine the mime type from the file extension.
    if ((substr($thumbUrl, -4) === "jpeg") || (substr($thumbUrl, -3) === "jpg")) {
      $mimeType = "image/jpeg";
    } else if (substr($thumbUrl, -3) === "png") {
      $mimeType = "image/png";
    } else if (substr($thumbUrl, -3) === "gif") {
      $mimeType = "image/gif";
    } else {
      return; // unknown type: skip the enclosure
    }

    // Map the URL back to a filesystem path to get the byte size.
    $thumbSize = filesize(WP_CONTENT_DIR . str_replace(WP_CONTENT_URL, '', $thumbUrl));

    // RSS 2.0 names the byte-size attribute "length".
    echo "<enclosure url=\"" . $thumbUrl . "\" length=\"" . $thumbSize . "\" type=\"" . $mimeType . "\" />\n";
  }
}
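The resulting feed item then carries an enclosure element along these lines (URL and byte size are made-up example values; note that RSS 2.0 names the byte-size attribute `length`):

```xml
<enclosure url="https://example.com/wp-content/uploads/2016/06/photo-300x200.jpg"
           length="24680" type="image/jpeg" />
```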

A more advanced & flexible approach would be to add support for the media RSS namespace, but the above suffices for the purpose I have in mind.

June 16, 2016

As a general rule, I try not to include new features in angular-gettext: small is beautiful and for the most part I consider the project as finished. However, Ernest Nowacki just contributed one feature that was too good to leave out: translation parameters.

To understand what translation parameters are, consider the following piece of HTML:

<span translate>Last modified: {{post.modificationDate | date : 'yyyy-MM-dd HH:mm'}} by {{}}.</span>

The resulting string that needs to be handled by your translators is both ugly and hard to use:

msgid "Last modified: {{post.modificationDate | date : 'yyyy-MM-dd HH:mm'}} by {{}}."

With translation parameters you can add local aliases:

<span translate
      translate-params-date="post.modificationDate | date : 'yyyy-MM-dd HH:mm'"
      translate-params-author="">
    Last modified: {{date}} by {{author}}.
</span>

With this, translators only see the following:

msgid "Last modified: {{date}} by {{author}}."

Simply beautiful.

You’ll need angular-gettext v2.3.0 or newer to use this feature.

More information in the documentation:

Comments | More on | @rubenv on Twitter

June 15, 2016

And if so, why haven't they done so yet?

Contrary to what many people think, containers are not new; they have been around for more than a decade. They have only recently become popular with a larger part of our ecosystem. Some people think containers will eventually take over.

IMHO it is all about application workloads. When I wrote about a decade of open source virtualization 8 years ago, we looked at containers as the solution for running a large number of isolated instances of something on a machine. And by large we meant hundreds or more instances of Apache; this was one of the example use cases for an ISP that wanted to give a secure but isolated platform to its users. One container per user.

The majority of enterprise use cases, however, were full VMs. Partly because we were still consolidating existing services onto VMs and weren't planning on changing the deployment patterns yet, but mainly because most organisations didn't need to run 100 similar or identical instances of an application or a service. They were going from 4 bare-metal servers to 40-something VMs, but they had not yet reached the point of needing to run hundreds of them. Software architecture had just moved from fat-client applications that talked directly to bloated relational databases containing business logic, to web-enabled multi-tier applications. In those days, when you suggested running 1 Tomcat instance per VM because VMs were cheap and it would make management easier ("oops, I shut down the wrong Tomcat instance"), people gave you very weird looks.

Software architectures are slowly changing. Today the new breed of applications is small, single-function and dedicated, and it interacts frequently with its peers; combined, they provide functionality similar to one big fat application of 10 years ago. But when you look at the market, that new breed is a minority. A modern application might consist of 30-50 really small ones, all with different deployment speeds. And unlike 10 years ago, when we had to fight hard to be able to build separate dev, acceptance and production platforms, people now consider that practice normal. So today we do get environments that quickly grow to 100+ instances, requiring similar CPU power as before, and the use case for containers as we proposed it in the early days is slowly becoming a common one.

So yes, containers might take over... but before that happens, a lot of software architectures will need to change, a lot of elephants will need to be sliced, and that is usually what blocks cloud, container, agile and devops adoption.

The first law of Serge van Ginderachter, which would be myself, is

One has more problems with anti-virus software than with the viruses themselves.

Originally stated in 2006, in Dutch.

One of the prominent features of the recent Activiti 5.21.0 release is ‘secure scripting’. The way to enable and use this feature is documented in detail in the Activiti user guide. In this post, I’ll show you how we came to its final implementation and what it’s doing under the hood. And of course, as it […]

June 14, 2016

In my latest SXSW talk, I showed a graphic of each of the major technology giants to demonstrate how much of our user data each company owned.

Microsoft linkedin data

I said they won't stop until they know everything about us. Microsoft just bought LinkedIn, so here is what happened:

Data ownership

By acquiring the world's largest professional social network, Microsoft gets immediate access to data from more than 433 million LinkedIn members. Microsoft fills out the "social graph" and "interests" circles. There is speculation over what Microsoft will do with LinkedIn over time, but here is what I think is most likely:

  • With LinkedIn, Microsoft could build out its Microsoft Dynamics CRM business to reinvent the sales and marketing process, helping the company compete more directly with SalesForce.
  • LinkedIn could allow Microsoft to implement a "Log in with LinkedIn" system similar to Facebook Connect. Microsoft could turn LinkedIn profiles into a cross-platform business identity to better compete with Google and Facebook.
  • LinkedIn could allow Microsoft to build out Cortana, a workplace-tailored digital assistant. One scenario Microsoft referenced was walking into a meeting and getting a snapshot of each attendee based on his or her LinkedIn profile. This capability will allow Microsoft to better compete against virtual assistants like Google Now, Apple Siri and Amazon Echo.
  • LinkedIn could be integrated in applications like Outlook, Skype, Office, and even Windows itself. Buying LinkedIn helps Microsoft limit how Facebook and Google are starting to get into business applications.

Data is eating the world

In the past I wrote that data, not software, is eating the world. The real value in technology comes less and less from software and more and more from data. As most businesses move applications into the cloud, a lot of software is becoming free, IT infrastructure is becoming a metered utility, and data is what really makes or breaks business results. Here is one excerpt from my post: "As value shifts from software to the ability to leverage data, companies will have to rethink their businesses. In the next decade, data-driven, personalized experiences will continue to accelerate, and development efforts will shift towards using contextual data." This statement is certainly true in Microsoft / LinkedIn's case.

Microsoft linkedin graphs
Source: Microsoft.

If this deal shows us anything, it's about the value of user data. Microsoft paid more than $60 per registered LinkedIn user. The $26.2 billion price tag values LinkedIn at about 91 times earnings, and about 7 percent of Microsoft's market cap. This is a very bold acquisition. You could argue that this is too hefty a price tag for LinkedIn, but this deal is symbolic of Microsoft rethinking its business strategy to be more data and context-centric. Microsoft sees that the future for them is about data and I don't disagree with that. While I believe acquiring LinkedIn is a right strategic move for Microsoft, I'm torn over whether or not Microsoft overpaid for LinkedIn. Maybe we'll look back on this acquisition five years from now and find that it wasn't so crazy, after all.

June 13, 2016

I'm working on getting even more moving parts automated; those who use Jenkins frequently probably also have a love-hate relationship with it.

The love comes from the flexibility, stability and power you get from it, the hate from its UI. If you've ever had to create a new Jenkins job or even a pipeline based on one that already existed, you've gone through the horror of click-and-paste errors, and you know where the hate breeds.

We've been trying to automate this with different levels of success: we've puppetized the XML jobs, we've used the Buildflow Plugin (reusing the same job for different pipelines is a bad idea..), we've played with JJB, running into issues with some plugins (Promoted Builds), and most recently we have put our hope in the Job DSL.

While toying with the DSL I ran into a couple of interesting behaviours. Imagine you have an entry like this, which is supposed to replace ${foldername} with the content of the variable and actually pick up the correct upstream job:

  cloneWorkspace('${foldername}/dashing-dashboard-test', 'Successful')

You generate the job, look inside the Jenkins UI to verify what the build result was .. save the job and run it .. success ..
Then a couple of runs later that same job gives an error ... It can't find the upstream job to copy the workspace from. You once again open up the job in the UI, look at it .. save it, run it again and then it works .. a typical case of a Heisenbug ..

When you start looking closer at the XML of the job, you notice ..

  <parentJobName>${foldername}/dashing-dashboard-test</parentJobName>

obviously wrong .. I should have used double quotes ..

But why doesn't it look wrong in the UI? That's because the UI autoselects the first option from its autogenerated pull-down list, which actually contains the right upstream workspace I wanted to trigger (that will teach me to use 00 as a prefix for the folder name for all my tests..)
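For reference, the corrected DSL line simply uses double quotes, so Groovy interpolates the variable at generation time instead of emitting it literally into the XML:

```groovy
// Double quotes: Groovy expands ${foldername} before the XML is generated.
cloneWorkspace("${foldername}/dashing-dashboard-test", 'Successful')
```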

So when working with the DSL .. review the generated XML .. not just whether the job works ..

I was playing around with Easy Digital Downloads (because this) and I chose EUR as currency, but I wanted the price to also be displayed in USD. Obviously there's a premium add-on for that, but as I don't want to purchase stuff just yet, I concocted an alternative myself. Here's the resulting snippet of code that shows the price in USD for shops with EUR currency and the price in EUR when the shop is in USD:

function edd_curconv_init() {
	$curpos = edd_get_option( 'currency_position', 'before' );
	$curcur = strtolower( edd_get_currency() );
	if ( in_array( $curcur, array( "eur", "usd" ) ) ) {
		// EDD applies this filter when formatting an amount with its currency sign.
		$filtername = "edd_" . $curcur . "_currency_filter_" . $curpos;
		add_filter( $filtername, "edd_eur_dollar_conv", 10, 3 );
	}
}
add_action( 'init', 'edd_curconv_init' );

function edd_eur_dollar_conv( $formatted, $currency, $price ) {
	$rate = 1.13; // hardcoded EUR->USD rate for illustration; use a live rate in real life
	if ( $currency === "EUR" ) {
		$outprice = $price * $rate;
		$outrate  = "USD";
	} elseif ( $currency === "USD" ) {
		$outprice = $price / $rate;
		$outrate  = "EUR";
	}
	if ( ! empty( $outprice ) ) {
		$formatted .= " ( ~ " . edd_currency_filter( round( $outprice, 2 ), $outrate ) . ")";
	}
	return $formatted;
}

This obviously lacks the features and robustness of that Currency Converter add-on, so (don’t) use (unless) at your own risk.

We just released Activiti version 5.21.0! This release contains some quite important bug fixes; more specifically, it fixes some cases where the end time was not set for activities in a process definition under certain conditions. A concurrency bug was discovered when using delegateExpressions together with field injection. Make sure to read the updated documentation section […]

June 10, 2016

From the day we started Acquia, we had big dreams: we wanted to build a successful company, while giving back to the Open Source community. Michael Skok was our first investor in Acquia and instrumental in making Acquia one of the largest Open Source companies in the world, creating hundreds of careers for people passionate about Open Source. This week, Michael and his team officially announced a new venture firm called _Underscore.VC. I'm excited to share that I joined _Underscore.VC as a syndicate lead for the "Open Source _Core".

I'm very passionate about Open Source and startups, and want to see more Open Source startups succeed. In my role as the syndicate lead for the Open Source _Core, I can help other Open Source entrepreneurs raise money, get started and scale their companies and Open Source projects.

Does that mean I'll be leaving Drupal or Acquia? No. I'll continue as the lead of the Drupal project and the CTO of Acquia. Drupal and Acquia continue to be my full-time focus. I have been advising entrepreneurs and startups for the last 5+ years, and have been a moderately active angel investor the past two years. Not much, if anything, will change about my day-to-day. _Underscore.VC gives me a better platform to advise and invest, give back and help others succeed with Open Source startups. It's a chance to amplify the "do well and do good" mantra that drives me.

Mautic and the power of syndicates

While Michael, the _Underscore.VC team and I have been working on _Underscore.VC for quite some time, I'm excited to share that on top of formally launching this week, they've unveiled a $75 million fund, as well as our first seed investment. This first investment is in Mautic, an Open Source marketing automation company.

Mautic is run by David Hurley, whom I've known since he was a community manager at Joomla!. I've had the opportunity to watch David grow for many months. His resourcefulness in founding and building the Mautic product and Open Source community impressed me.

The Mautic investment is a great example of _Underscore.VC's model in action. Unlike a traditional firm, _Underscore.VC co-invests with a group of experts, called a syndicate, or in the case of _Underscore.VC a "_Core". Each _Core has one or more leads that bring companies into the process and gather the rest of the investors to form a syndicate.

As the lead of the Open Source _Core, I helped pull together a group of investors with expertise in Open Source business models, marketing automation, and SaaS. The list of people includes Larry Augustin (CEO of SugarCRM), Gail Goodman (CEO of Constant Contact), Erica Brescia (Co-Founder and COO of Bitnami), Andrew Aitken (Open Source Lead at Wipro) and more. Together with _Underscore.VC, we made a $600,000 seed investment in Mautic. In addition to the funding, Mautic will get access to a set of world-class advisors invested in helping them succeed.

I personally believe the _Underscore.VC model has the power to transform venture capital. Having raised over $180 million for Acquia, I can tell you that fundraising is no walk in the park. Most investors still don't understand Open Source business models. To contrast, our Open Source _Core group understands Open Source deeply; we can invest time in helping Mautic acquire new customers, recruit great talent familiar with Open Source, partner with the right companies and navigate the complexities of running an Open Source business. With our group's combined expertise, I believe we can help jumpstart Mautic and reduce their learnings by one to two years.

It's also great for us as investors. By combining our operating experience, we hope to attract entrepreneurs and startups that most investors may not get the opportunity to back. Furthermore, the _Core puts in money at the same valuation and terms as _Underscore.VC, so we can take advantage of the due diligence horsepower that _Underscore.VC provides. The fact that _Underscore.VC can write much larger checks is also mutually beneficial to the _Core investor and the entrepreneur; it increases the chances of the entrepreneur succeeding.

If you're starting an Open Source business, or if you're an angel investor willing to co-invest in the Open Source _Core, feel free to reach out to me or to get in touch with _Underscore.VC.

June 09, 2016

This morning I was reminded that, 4 years ago, I was looking for a project to get some experience with Java, C or C++.
Looking back, I started working on Getback GPS, an Android app (learning some Java), and later on another project called Buildtime Trend, which gave me some Python and JavaScript experience.
So in 4 years, I started 2 Open Source projects, learned 3 new programming languages, and picked up some other technologies and frameworks along the way.

I can say I learned a lot over the last few years, on a technical level, but it also made me realise that it is possible to learn new things if you set your mind to it. You just have to start doing it: try things, fail, learn from it, try again, read a tutorial, look for questions and answers (e.g. on Stack Overflow), go to conferences, talk to experienced people, join a project that uses the technology you want to learn.

And this is not limited to technology. Want to learn a musical instrument? How to make a cake? How to become a great speaker? Learn to swim longer or faster?

This is all possible. You just have to start doing it and practice. Taking small steps at the start. Allow yourself to fail, but learn from it and improve. You might need some guidance or coaching, or take a course to give you a headstart.

I'm not saying it won't be hard, sometimes you keep failing, stop making progress and you get frustrated. And that's a time to take a step back, monitor your progress, examine the goals you have set yourself. Are you doing it the right way? Can it be done differently? Do you have all the required skills to make progress? Maybe you need to practise something else first?

Anyway, keep the end goal in mind, take small steps and enjoy the journey. Enjoying what you are doing or achieving is an important motivator.
If you set your mind to it, you can learn anything you want.

Which reminds me of this video on how to learn anything in 20 hours:

June 06, 2016

Open Torrent Tracker List (2016)

If you've ever downloaded a torrent, chances are you've cursed at the slow download speeds. That could be your ISP throttling the connection (thanks, Telenet), but it could also be that the trackers or peers you're using are just slow or unresponsive.

Since torrents aren't only used for illegal downloads, I figured I'd share a list of known good public trackers. If your Linux ISO downloads are slow, add these to the mix and you should see a significant speedup.

Some of these are provided by the OpenBitTorrent initiative, others are community-supported trackers.

To add any of these, edit the properties of your torrent and add the trackers listed above.

In the case of uTorrent, edit the properties of the torrent and just copy/paste the list above into the Trackers list.

If all goes well, your torrent client should show a list of trackers it's using.


The result should be a significantly faster download because it can find more peers.

If you're still suffering from slow downloads, look into using a VPN or seedbox that downloads the torrents for you, so you can download it via ssh or another protocol.


The Gotthard Base Tunnel, under construction for the last 17 years, was officially opened last week. This is the world's longest and deepest railroad tunnel, spanning 57 kilometers from Erstfeld to Bodio, Switzerland, underneath the Swiss Alps. To mark its opening, Switzerland also launched a multilingual multimedia website celebrating the project's completion. I was excited to see they chose to build their site on Drupal 8! The site is a fitting digital tribute to an incredible project and launch event. Congratulations to the Gotthard Base Tunnel team!



We believed that everything was property, that every atom belonged to the first person to claim it.

But we forgot that matter has always existed, that it was handed down to us and that we will hand it down in turn, regardless of the transactions, the sales and the purchases. We are only its temporary custodians.

We believed that everything could be bought and sold. That to survive, one had to buy, and therefore sell in order to earn enough to buy.

But we forgot that, sometimes, we do not even have enough to buy the bare necessities. So we punished those who were in that situation, we accused them and we convinced ourselves that we would never be like them. We split humanity in two.

We believed that we had to earn more in order to live more and own more. That we had no choice. That we had to sell our bodies, our intelligence or else objects. Or sell ideas to help others sell more. Or teach others the best way to sell.

But we forgot that a choice is something you make. That accepting a job farther away but better paid in order to consume more is a choice. That accepting a job that pushes others to consume is a choice. We refused to see that each of us was responsible for our own work, for the impact it had on the world.

We believed that owning was our ultimate goal, that we had to amass, buy, consume.

But we forgot that objects have no master. That at best they can give us a hint of joy when we use them for a few minutes or a few hours. And that, the rest of the time, they clutter our lives, make us unhappy and convince us to buy even more.

We believed that property brought freedom. That an owner could enjoy his possessions as he pleased without worrying about the consequences.

But we forgot that borders and boundary lines are only virtual demarcations. That we own one single planet, which suffers globally from each of our actions.

We believed that ideas were property. That even seeds and the genome had to be patented. That sharing amounted to stealing.

But we forgot that an idea that is not shared freezes and is forgotten. That living things care nothing for our patents. That by trying to control property, we could only stop thinking.

We believed we enjoyed ownership.

But we forgot that we are only borrowing every molecule, every day, from the future.

We believed we had no choice and had to buy freedom.

But we forgot that freedom is, above all, making choices. Our choices.


Photo by Stefano Corso.

Thank you for taking the time to read this freely priced post. Take the liberty of supporting me with a few milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

In an earlier blog post, I looked at the web services solutions available in Drupal 8 and compared their strengths and weaknesses. That blog post was intended to help developers choose between different solutions when building Drupal 8 sites. In this blog post, I want to talk about how to advance Drupal's web services beyond Drupal 8.1 for the benefit of Drupal core contributors, module creators and technical decision-makers.

I believe it is really important to continue advancing Drupal's web services support. There are powerful market trends that oblige us to keep focused on this: integration with diverse systems having their own APIs, the proliferation of new devices, the expanding Internet of Things (IoT), and the widening adoption of JavaScript frameworks. All of these depend to some degree on robust web services.

Moreover, newer headless content-as-a-service solutions (e.g. Contentful, Backand and CloudCMS) have entered the market and represent a widening interest in content repositories enabling more flexible content delivery. They provide content modeling tools, easy-to-use tools to construct REST APIs, and SDKs for different programming languages and client-side frameworks.

In my view, we need to do the following, which I summarize in each of the following sections: (1) facilitate a single robust REST module in core; (2) add functionality to help web services modules more easily query and manipulate Drupal's entity graph; (3) incorporate GraphQL and JSON API out of the box; and (4) add SDKs enabling easy integration with Drupal. Though I shared some of this in my DrupalCon New Orleans keynote, I wanted to provide more details in this blog post. I'm hoping to discuss this and revise it based on feedback from you.

One great REST module in core

While core REST can be enabled with only a few configuration changes, the full extent of possibilities in Drupal is only unlocked either when leveraging modules which add to or work alongside core REST's functionality, such as Services or RELAXed, or when augmenting core REST's capabilities with additional resources to interact with (by providing corresponding plugins) or using other custom code.

Having such disparate REST modules complicates the experience. These REST modules have overlapping or conflicting feature sets, which are shown in the following table.

| Feature | Core REST | RELAXed | Services | Ideal core REST |
| --- | --- | --- | --- | --- |
| Content entity CRUD | Yes | Yes | Yes | Yes |
| Configuration entity CRUD | Create resource plugin (issue) | Create resource plugin | Yes | Yes |
| Custom resources | Create resource plugin | Create resource plugin | Create Services plugin | Possible without code |
| Custom routes | Create resource plugin or Views REST export (GET) | Create resource plugin | Configurable route prefixes | Possible without code |
| Translations | Not yet (issue) | Yes | Create Services plugin | Yes |
| Revisions | Create resource plugin | Yes | Create Services plugin | Yes |
| File attachments | Create resource plugin | Yes | Create Services plugin | Yes |
| Authenticated user resources (log in/out, password reset) | Not yet (issue) | No | User login and logout | Yes |

I would like to see a convergence where all of these can be achieved in Drupal core with minimal configuration and minimal code.

Working with Drupal's entity graph

Recently, a discussion at DrupalCon New Orleans with key contributors to the core REST modules, maintainers of important contributed web services modules, and external observers led to a proposed path forward for all of Drupal's web services.

Web services entity graph
A visual example of an entity graph in Drupal.

Buried inside Drupal is an "entity graph" over which different API approaches like traditional REST, JSON API, and GraphQL can be layered. These varied approaches all traverse and manipulate Drupal's entity graph, with differences solely in the syntax and features made possible by that syntax. Unlike core's REST API which only returns a single level (single entity or lists of entities), GraphQL and JSON API can return multiple levels of nested entities as the result of a single query. To better understand what this means, have a look at the GraphQL demo video I shared in my DrupalCon Barcelona keynote.
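For illustration, here is a hypothetical GraphQL query (the field names are invented for this example, not the actual schema of Drupal's GraphQL module) that fetches an article together with its author and image in a single round trip, where core REST would need one request per entity:

```graphql
# Hypothetical query: one request traverses two levels of the entity graph.
{
  node(id: 2) {
    title
    author {
      name
    }
    image {
      url
      alt
    }
  }
}
```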

What we concluded at DrupalCon New Orleans is that Drupal's GraphQL and JSON API implementations require a substantial amount of custom code to traverse and manipulate Drupal's entity graph, that there was a lot of duplication in that code, and that there is an opportunity to provide more flexibility and simplicity. Therefore, it was agreed that we should first focus on building an "entity graph iterator" that can be reused by JSON API, GraphQL, and other modules.

This entity graph iterator would also enable manipulation of the graph, e.g. for aliasing fields in the graph or simplifying the structure. For example, the difference between Drupal's "base fields" and "configured fields" is irrelevant to an application developer using Drupal's web services API, but Drupal's responses leak this internal distinction by prefixing configured fields with field_ (see the left column in the table below). By the same token, all fields, even if they carry single values, expose the verbosity of Drupal's typed data system by being presented as arrays (see the left column in the table below). While there are both advantages and disadvantages to exposing single-value fields as arrays, many developers prefer more control over the output or the ability to opt into simpler outputs.

A good Drupal entity graph iterator would simplify the development of Drupal web service APIs, provide more flexibility over naming and structure, and eliminate duplicate code.

Current core REST (shortened response):

	{
	  "nid": [
	    { "value": "2" }
	  ],
	  "title": [
	    { "value": "Lorem ipsum" }
	  ],
	  "field_product_number": [
	    { "value": "35" }
	  ],
	  "field_image": [
	    {
	      "target_id": "2",
	      "alt": "Image",
	      "title": "Hover text",
	      "width": "210",
	      "height": "281",
	      "url": ""
	    }
	  ]
	}

Ideal core REST (shortened response):

	{
	  "nid": "2",
	  "title": "Lorem ipsum",
	  "product_number": {
	    "value": 35
	  },
	  "image": {
	    "target_id": 2,
	    "alt": "Image",
	    "title": "Hover text",
	    "width": 210,
	    "height": 281,
	    "url": ""
	  }
	}

GraphQL and JSON API in core

We should acknowledge simultaneously that the wider JavaScript community is beginning to embrace different approaches, like JSON API and GraphQL, which both enable complex relational queries that require fewer requests between Drupal and the client (thanks to the ability to follow relationships, as mentioned in the section concerning the entity graph).

While both JSON API and GraphQL are preferred over traditional REST due to their ability to provide nested entity relationships, GraphQL goes a step further than JSON API by facilitating explicitly client-driven queries, in which the client dictates its data requirements.

I believe that GraphQL and JSON API in core would be a big win for those building decoupled applications with Drupal, and these modules can use existing foundations in Drupal 8 such as the Serialization module. Furthermore, Drupal's own built-in JavaScript-driven UIs could benefit tremendously from GraphQL and JSON API. I'd love to see them in core rather than as contributed modules, as we could leverage them when building decoupled applications backed by Drupal or exchanging data with other server-side implementations. We could also "eat our own dog food" by using them to power JavaScript-driven UIs for block placement, media management, and other administrative interfaces. I can even see a future where Views and GraphQL are closely integrated.

Web services rest json graphql
A comparison of different API approaches for Drupal 8, with amended and simplified payloads for illustrative purposes.

SDKs to consume web services

While a unified REST API and support for GraphQL and JSON API would dramatically improve Drupal as a web services back end, we need to be attentive to the needs of consumers of those web services as well by providing SDKs and helper libraries for developers new to Drupal.

An SDK could make it easy to retrieve an article node, modify a field, and send it back without having to learn the details of Drupal's particular REST API implementation or the structure of Drupal's underlying data storage. For example, this would allow front-end developers to not have to deal with the details of single- versus multi-value fields, optional vs required fields, validation errors, and so on. As an additional example, incorporating user account creation and password change requests into decoupled applications would empower front-end developers building these forms on a decoupled front end such that they would not need to know anything about how Drupal performs user authentication.

As starting points for JavaScript applications, native mobile applications, and even other back-end applications, these SDKs could handle authentication against the API and the juggling of correct routes to resources, without the front-end developer needing an understanding of those nuances.
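To show the kind of boilerplate such an SDK would hide, here is a minimal sketch in plain JavaScript (the base URL and node ID are invented; it relies only on Drupal 8 core REST serving entities at their canonical path with a `_format=json` query parameter):

```javascript
// Assumption: a Drupal 8 site at this URL with the REST module enabled.
const base = 'https://example.com';

// Build the core REST URL for a node.
function nodeUrl(nid) {
  return `${base}/node/${nid}?_format=json`;
}

// Retrieve a node as JSON; an SDK would wrap this, plus authentication,
// CSRF token handling for writes, and error normalization.
async function getNode(nid) {
  const res = await fetch(nodeUrl(nid), {
    headers: { Accept: 'application/json' },
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```

Every consumer currently reimplements some variant of this by hand; centralizing it in an SDK is what lets front-end developers stay ignorant of Drupal's routing and serialization details.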

In fact, at Acquia we're now in the early stages of building the first of several SDKs for consuming and manipulating data via Drupal 8's REST API. Waterwheel (previously Hydrant), a new generic helper library intended for JavaScript developers building applications backed by Drupal, is the work of Acquia's Matt Grill and Preston So, and it is already seeing community contributions. We're eager to share our work more widely and welcome new contributors.


I believe that it is important to have first-class web services in Drupal out of the box in order to enable top-notch APIs and continue our evolution to become API-first.

In parallel with our ongoing work on shoring up our REST module in core, we should provide the underpinnings for even richer web services solutions in the future. With reusable helper functionality that operates on Drupal's entity graph available in core, we open the door to GraphQL, JSON API, and even our current core REST implementation eventually relying on the same robust foundation. Both GraphQL and JSON API could also be promising modules in core. Last but not least, SDKs like Waterwheel that empower developers to work with Drupal without learning its complexities will further advance our web services.

Collectively, these tracks of work will make Drupal uniquely compelling for application developers within our own community and well beyond.

Special thanks to Preston So for contributions to this blog post and to Moshe Weitzman, Kyle Browning, Kris Vanderwater, Wim Leers, Sebastian Siemssen, Tim Millwood, Ted Bowman, and Mateu Aguiló Bosch for their feedback during its writing.

June 04, 2016

Dear Opa,

We just got the news that you passed away while we were in flight from Boston to Amsterdam. We landed an hour ago, and now I'm writing you this letter on the train from Amsterdam to Antwerp. We were on our way to come visit you. We still will.

I wish I could have had one last drink with you, chat about days gone by, and listen to your many amazing life stories. But most of all, I wanted to thank you in person. I wanted to thank you for making a lasting mark on me.

I visited you in the hospital two months ago, but I never had the courage to truly say goodbye or to really thank you. I was hoping I'd see you again. I'm in tears now because I feel you might never know how important you were to me.

I can't even begin to thank you for everything you've taught me. The way you invented things -- first in your job as an engineer and researcher, and later in automating and improving your home. The way you taught me how to sketch -- I think of you each time I draw something. The way you shared your knowledge and insight and how you always kept reading and learning -- even as recent as 2 months ago you asked me to bring you a book on quantum physics. The way you cooked and cared for Oma every single day and the way you were satisfied with a modest, but happy family life. The way you unconditionally loved all your grandchildren, no matter what choices we made -- with you we never had to live up to expectations, yet you encouraged us to make most out of our talents.

There are no words. No words at all for how you impacted my life and how you helped me become the person I've become. Few adults have the opportunity to really get to know their grandparents. I have been lucky to have known you for 37 years. Thank you for our time together. Your impact on me is deep, and forever. You made your mark.




We heart opa

A consumer can spend his money only once. But he spends it subject to inflation. This means that when we bought a good in June 2006, say a car, the money spent is, according to the EUCPI2005 index, about 17.95% cheaper by 2016. Purely on the basis of inflation. And that was during a decade with years of near-deflation (= unprecedented). Still, we end up with almost 18% depreciation over ten years.

What happens economically when the consumer pays with privacy? In the future, part of that car will be paid for by giving up privacy: its price will drop precisely because all kinds of organizations will busy themselves with the consumer's location data (and who knows what else). The consumer pays that part with what I will call privacy-currency.

My own insight is that privacy-currency can, admittedly, be spent multiple times; old data steadily loses value. But services that deal in privacy-currency have often secured a long-lasting stream from their consumer. By that I mean a sensor (a smartphone with a pervasive app, a thermostat that hangs on the wall for years, a set-top box that, thanks to a monopoly, records TV viewing habits for years) that "sells" the same consumer privacy to the company not once but over and over again.

Such a sensor can only be installed once. The market sees to it that a piece of privacy data is extracted as cheaply as possible. Recording a consumer's TV viewing habits five times is of no use to the market: the market will pick the most efficient recorder, which will then sell the data to the others.

To me this means that privacy-currency will undergo inflation. The currency becomes worth less and less. Installing a sensor has a certain price today (you have to sell your silly product to someone), but it will yield less and less in the future.

Furthermore, it costs the consumer more and more to give up his or her privacy: you lose options with insurers, you lose job opportunities, you lose friendships and you will be bullied or called to account. Typically, older people are therefore more attached to their privacy. They sell their privacy at an ever higher price. Their technological illiteracy still sidesteps this for now, but everyone knows that won't last.

That gives two vectors pushing privacy-currency into inflation: the market makes a sensor worth less out of efficiency considerations, and the consumer makes a sensor less desirable through a small but not nonexistent increase in knowledge of technology (and its ills).

Companies that stake their value on accumulating privacy-currency will go bankrupt in the medium term. Even money is a better focus. The financial sector has, after all, traditionally done well.

June 03, 2016

The post I started a podcast for sysadmins and developers: SysCast appeared first on

In preparation for the launch of SysCast, the screencasting site where you can learn about Linux and open source, I started a podcast: the SysCast podcast!

I've been playing with this idea for a couple of months and, having been a guest on a number of podcasts (on HTTP/2 and DevOps), I decided I wanted to start my own.

As a result, the SysCast podcast was born!


The first 3 episodes have been recorded and are available online:

  1. The Caddy webserver, with Matt Holt
  2. An introduction to Docker, with Nils de Moor
  3. Managing secrets with Vault, with Seth Vargo

In terms of content, I try to find a solid mix between Linux, open source, web development and system administration. Expect a mix of both Dev and Ops topics. You might even call it DevOps.

I'm an avid listener of podcasts on my daily commute or when going shopping, trying to fill every bit of spare time with an interesting podcast so I can learn new things (something about being obsessively efficient). My goal is to make SysCast fit into that category, too.

Want to subscribe to updates? There are a couple of ways:

There are plenty of podcasting apps out there; if you search for SysCast in any of them, it should pop up.

A very big thanks to my first 3 guests and to everyone who's been listening and sending in feedback! I'd love to hear what other topics you would like to hear about or which interesting guests I could bring on the show.

I've already got some interesting guests lined up for the next few weeks, too!

So as of now, I can add Podcaster to my Twitter bio. Geek status++.


I first heard Débruit a couple of weeks ago while dozing off listening to Lefto’s late-nite show on local radio station Studio Brussels, and the set was so good that I wanted to wake up to listen more carefully.

Débruit is a French producer (apparently currently living in Brussels), who seamlessly merges electronica with African and Middle-Eastern influences and collaborations. He just released “Débruit & Istanbul”, an album based on his Europalia-commissioned explorations of Istanbul in 2015.

The video below is from the great Boiler Room series, and although it doesn’t feature his latest work, it is just as diverse and exciting as what I heard on the radio:

Watch this video on YouTube.

June 02, 2016

The battle for the marketing cloud just got way more interesting. This week, Salesforce announced its acquisition of Demandware for $2.8B in cash. It will enable Salesforce to offer a "Commerce Cloud" alongside its sales and marketing solutions.

The large platform companies like Oracle and Adobe are trying to own the digital customer experience market from top to bottom by acquiring and integrating together tools for marketing, commerce, customer support, analytics, mobile apps, and more. Oracle's acquisition of Eloqua, SAP's acquisition of hybris and Salesforce's acquisition of ExactTarget were earlier indicators of market players consolidating SaaS apps for customer experience onto their platforms.

In my view, the Demandware acquisition is an interesting strategic move for Salesforce that aligns them more closely as a competitor to marketing stack mega-vendors such as Adobe, Oracle and IBM. Adding a commerce solution to its suite makes it easier for Salesforce's customers to build an integrated experience and see what their customers are buying. There are advantages to integrated solutions that have a single system of record about the customer. The Demandware acquisition also makes sense from a technology point of view; there just aren't many Java-based commerce platforms that are purely SaaS-based, that can operate at scale, and that are for sale.

However, we've also seen this movie before. When big companies acquire smaller, innovative companies, over time the innovation goes away in favor of integration. Big companies can't innovate fast enough, and the suite lock-in only benefits the vendor.

There is a really strong case to be made for a best-of-breed approach where you choose and integrate the best software from different vendors. This is a market that simply changes too much and too fast for any organization to buy into a single mega-platform. From my experience talking to hundreds of customer organizations, most prefer an open platform that integrates different solutions and acts as an orchestration hub. An open platform ultimately presents more freedom for customers to build the exact experiences they want. Open Source solutions, like Drupal, that have thousands of integrations, allow organizations to build these experiences in less time, with a lower overall total cost of ownership, more flexibility and faster innovation.

Adobe clearly missed out on buying Demandware, after it missed out on buying hybris years ago. Demandware would have fit in Adobe's strategy and technology stack. Now Adobe might be the only mega-platform that doesn't have an embedded commerce capability. More interestingly, there don't appear to be large independent commerce operators left to buy.

I continue to believe there is a great opportunity for new independent commerce platforms, especially now Salesforce and Demandware will spend the next year or two figuring out the inevitable challenges of integrating their complex software solutions. I'd love to see more commerce platforms emerge, especially those with a modern micro-services based architecture, and an Open Source license and innovation model.


The reasons behind the failure of Google+ and of Google's social endeavours.

With 90% of the worldwide web search market, 1 billion people using Android phones, 1 billion monthly visitors on YouTube and 900 million Gmail users, it is hard for an Internet user to get around Google.

So when Google decided to enter social networking in 2011, nobody rated Twitter's and Facebook's chances very highly.

Yet Google Buzz, the attempt to compete with Twitter, was a resounding failure, and Google+, Google's equivalent of Facebook, remains highly controversial and rather little used, even though it is integrated with most smartphones sold in the world today!

And while it is unthinkable for a brand or a celebrity not to have a Facebook page or a Twitter account, what about a Google+ page? Aren't most of them created merely to tick a box?

Finding a person, the foundation of a social network

Could Google+ be so technically inferior to its competitors that, even when imposed, it is so little used? On the contrary: some, including the author of these lines, consider Google+ technically more polished and richer than Facebook: support for asymmetric relationships between people, easy grouping of friends into "circles", better control over permissions, ...

But then why do even the most die-hard Google aficionados instinctively turn to Twitter and Facebook?

The answer most often given is that everybody is on Facebook, and users go where the others are. Facebook supposedly had the advantage of being the first to benefit from this network effect at large scale.

But that ignores the fact that Google already has the enormous reservoirs of users that are Gmail, Android and YouTube. If it were only a matter of reaching critical mass, Google+ could have been an instant success.

A social network is, after all, just a group of people with links between them. And that network can only be built from the people. The first feature of a social network is precisely that: finding a person, an indispensable step before creating a link. The primary motivation for adding a friend on Facebook is not to see their holiday pictures; it is to stay in touch. The holiday pictures are only a consequence!

Some Facebook users don't even use the activity feed. Others have never opened the messaging feature. But they all have one thing in common: they are confident they can find almost anyone on Facebook. Even the "Jean Dupont" I am looking for will stand out among his namesakes thanks to our mutual friends, his interests, his photos or his description.

On Twitter there is no possible doubt, thanks to the unique handle that Jean Dupont will have very easily given me.

Google+, an asocial network?

Google, on the other hand, completely lost sight of the basic feature: "finding a person". Google+ immediately focused on the consequences (having an activity feed, sharing photos, chatting) while forgetting the primary purpose of such a product: staying in touch. By the very admission of engineers working on the project, the imperative was always to "build a new feature".

Whether on my phone or in Gmail, typing "Jean Dupont" gives me dozens of entries, some of which are duplicates and others namesakes. Who owns this phone number attached to a "Jean" that was probably imported from my SIM card at some point? Is it Jean Dupont's old number? A new number, on the contrary? Or a namesake's?

From Gmail, it is impossible to send an email to some people I am nevertheless in contact with on Google+! And while the innovations of Google Inbox have greatly improved the situation, it remains far from perfect!

Google Inbox suggests the same person to me twice. Which one should I pick?

A telling detail: a person's profile picture varies from one Google product to another, or even, within Gmail and Google Inbox, from one email to the next! Some old Google+ profile pictures, deleted long ago, sometimes reappear as if by magic in an email. But most of the time, no image is displayed at all. It is therefore impossible for me to confidently associate a person with a single profile picture, unlike on Twitter or Facebook.

Facebook understood this well: on that platform, a contact changing their profile picture is a major event that is prominently highlighted.

Hangouts and Contacts, heavy failures.

On Android, the Hangouts app is incredibly slow when it comes to starting a conversation with a new contact. Sometimes it simply cannot find the contact, or fails to associate the phone number with the person's profile, leaving me no choice but to send a Hangouts message instead of an SMS. At other times it pushes "suggestions" of Google+ profiles I don't know while hiding the ones I do.

Hangouts suggests Marie to me 5 times, all of which are one and the same person on G+!

Before March 2016, the web interface of Google Contacts had never had a complete overhaul since it went live. Google never even bothered to develop an Android application for managing contacts.

While this new version is reassuring in that this part of Google has not been completely abandoned, it is nevertheless very frustrating: it is a purely cosmetic change, with no real new features and no better integration with the other Google products.

The Google Contacts interface, left unchanged for years.

It is as if Google considered that unifying and managing a contact list was of no interest whatsoever. Google contented itself with building the features of a social network while forgetting what is, in my view, the very foundation of social interaction: getting in touch with a specific person.

A feature Google left, perhaps deliberately, to the smartphone manufacturers. With a rather catastrophic result.

3 identical Vincents, 2 different Vincents, and again 3 identical Vincents. Thanks, Samsung!

A neglect Google is paying for dearly, including in messaging, where, despite a comfortable dominant position, Gmail and Hangouts were quickly overtaken by WhatsApp.

The despair of incomprehension

Does WhatsApp offer an incredible, new or particularly useful feature?

No. The main characteristic of WhatsApp is that it finds which of my friends use WhatsApp, based on the numbers stored on my phone. Whether on Facebook, Twitter or WhatsApp, I can therefore trust that I will easily find a given person. Yes, Google is very good at making me explore, at suggesting new people to me. That is precisely what delights the Google+ aficionados. But most of the time, I simply want to reach a specific person as quickly as possible.

With its new version, Google+ in fact seems to be gradually giving up on the social aspect, focusing instead on the discovery of new content, topics and interests.

The launch of yet another social product, Google Spaces, and yet another chat application, Google Allo, confirm Google's total incomprehension of social. Rather than stepping back and trying to find the roots of the problem, the American giant launches dozens of applications hoping to stumble upon success by chance. Throw everything at the wall and see what sticks...

But in doing so, it only creates additional places where one might have to look for a person. It makes finding a specific person even more complex.

Perhaps because, in the culture of Google's engineers, one only ever searches for solutions to problems, for information. Not for people. Never for people.

That would explain everything: Google cannot build a social network because it is, quite simply, profoundly asocial.


Photo by Thomas Hawk.

Thank you for taking the time to read this pay-what-you-want post. Feel free to support me with a few milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

June 01, 2016

Last week I attended the 2016 edition of the PHP Unconference Europe, taking place in Palma De Mallorca. This post contains my notes from various conference sessions. Be warned, some of them are quite rough.

Overall impression

Before getting to the notes, I’d like to explain the setup of the unconference and my general impression.

The unconference is two days long, not counting associated social events before and afterwards. The first day started with people discussing in small groups which sessions they would like to have, either by leading them themselves or just by attending. These session ideas were written down and put up on papers on the wall. We then went through them one by one, with someone explaining the idea behind each session and one or more presenters / hosts being chosen. The final step of the process was to vote on the sessions. For this, each person got two “sticky dots” (what are those things called anyway?), which they could either both put onto a single session, or split and vote on two sessions.

On each day we had 4 such sessions, with long breaks in between to promote interaction between the attendees.

Onto my notes for individual sessions:

How we analyze your code

Analysis and metrics can be used for tracking progress and for analyzing the current state. This talk focused on the current state, in particular on finding:

  • Which code is important
  • Probably buggy code
  • Badly tested code
  • Untested code

Finding the core (kore?): code rank (like Google's PageRank): importance flows to classes that are depended upon (fan-in). Qafoo Quality Analyzer. Reverse code rank: classes that depend on lots of other classes (fan-out).
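As a sketch of the idea (not Qafoo's actual implementation), a PageRank-style code rank over a class dependency graph takes only a few lines of Python; the class names and edges below are made up:

```python
# Hypothetical dependency graph: "A depends on B" means importance flows A -> B.
deps = {
    "Controller": ["Service", "Logger"],
    "Service": ["Repository", "Logger"],
    "Repository": ["Logger"],
    "Logger": [],
}

def code_rank(deps, damping=0.85, iterations=50):
    """PageRank-style importance: classes with high fan-in rank highest."""
    n = len(deps)
    rank = {c: 1.0 / n for c in deps}
    for _ in range(iterations):
        new = {c: (1 - damping) / n for c in deps}
        for src, targets in deps.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Dangling class: spread its rank evenly (standard PageRank fix).
                for c in deps:
                    new[c] += damping * rank[src] / n
        rank = new
    return rank

ranks = code_rank(deps)
print(max(ranks, key=ranks.get))  # prints: Logger (highest fan-in)
```

Reverse code rank is the same computation with every edge flipped, which surfaces the high fan-out classes instead.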

Where do we expect bugs? Typically where code is hard to understand. We can look at method complexity: cyclomatic complexity, NPath complexity. Line Coverage exists, Path Coverage is being worked on. Parameter Value Coverage. CRAP (Change Risk Anti-Patterns).
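For reference, the CRAP score combines cyclomatic complexity with test coverage; a minimal sketch of the CRAP1 formula:

```python
def crap_score(complexity: int, coverage: float) -> float:
    """CRAP1 metric: comp^2 * (1 - coverage)^3 + comp,
    with coverage given as a fraction between 0.0 and 1.0."""
    return complexity ** 2 * (1.0 - coverage) ** 3 + complexity

# An untested complex method is far riskier than a well-tested one:
print(crap_score(10, 0.0))   # 110.0
print(crap_score(10, 0.95))  # ~10.01
```

The cubic term means coverage quickly drives the score down towards the raw complexity, which matches the intuition that complex but well-tested code is acceptable.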

Excessive coupling is bad. Incoming and outgoing dependencies. Different from code rank in that only direct dependencies are counted. Things that are depended on a lot should be stable and well tested (essentially the Stable Dependencies Principle).

Qafoo Quality Analyzer can be used to find dependencies across layers when they are in different directories. Very limited at present.

When finding highly complex code, don’t immediately assume it is bad. There are valid reasons for high complexity. Metrics can also be tricked.

The evolution of web application architecture

How systems interact with each other. Starting with simple architecture, looking at problems that arise as more visitors arrive, and then seeing how we can deal with those problems.

Users -> Single web app server -> DB

Next step: Multiple app servers + load balancers (round robin + session caching server)
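The round-robin part of that setup is trivial to sketch; the server names below are hypothetical, and real balancers (HAProxy, nginx) add health checks and session handling on top, which is why the session caching server is needed:

```python
import itertools

APP_SERVERS = ["app-1", "app-2", "app-3"]  # hypothetical pool

def make_round_robin(servers):
    """Hand out servers in rotation: request N goes to server N mod len(pool)."""
    pool = itertools.cycle(servers)
    return lambda: next(pool)

next_server = make_round_robin(APP_SERVERS)
print([next_server() for _ in range(4)])  # ['app-1', 'app-2', 'app-3', 'app-1']
```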

Launch of a shopping system resulted in the app going down: the master DB got too many writes, because every "cache was hit" event was logged to it.

Different ways of caching: entities, collections, full pages. Cache invalidation is hard, lots of dependencies even in simple domains.

When too many writes: sharding (split data across multiple nodes), vertical (by columns) or horizontal (by rows). Loss of referential integrity checking.
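As a rough illustration of horizontal sharding, rows can be routed by hashing their key; the shard names here are made up, and real systems also need resharding and cross-shard query logic:

```python
import hashlib

SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2"]  # hypothetical nodes

def shard_for(user_id: str) -> str:
    """Route a row to a shard by hashing its key. sha1 is stable across
    processes, unlike Python's built-in hash(), which is randomized per run."""
    digest = hashlib.sha1(user_id.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# The same key always lands on the same shard:
print(shard_for("alice") == shard_for("alice"))  # True
```

Since a foreign key may now point at a row on another node, the database can no longer enforce referential integrity, which is the trade-off mentioned above.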

Complexity with relational database systems -> NoSQL: sharding, multi master, cross-shard queries. Usually no SQL or referential integrity, though those features are already lost when using sharding.

Combination of multiple persistence systems: problems with synchronization. Transactions are slow. Embrace eventual consistency. Same updating strategies can be used for caches.

Business people often know SQL, but not NoSQL query languages.

Queues can be used to pass data asynchronously to multiple consumers. Following data flow of an action can be tricky. Data consistency is still a thing.
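The asynchronous hand-off described here can be sketched with the standard library; a production setup would use a broker such as RabbitMQ or Kafka, and the event shape below is invented:

```python
from queue import Queue

# One queue per consumer: the producer fans each event out to all of them.
consumer_queues = [Queue(), Queue()]

def publish(event):
    """Asynchronous hand-off: the producer never waits for a consumer."""
    for q in consumer_queues:
        q.put(event)

publish({"type": "order_placed", "order_id": 42})

# Each consumer can now process the event at its own pace:
print([q.get() for q in consumer_queues])
```

Because consumers apply the event whenever they get to it, the data is only eventually consistent, which is the tracing and consistency difficulty noted above.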

Microservices: separation of concerns at the service and team level. Can simplify things via an optimal tech stack per service. They also make things more complicated: you need automated deployment, orchestration, eventual consistency, failure handling.

Boring technology often works best, especially at the beginning of a project. Start with the simplest solution that works. Take team skills into account.

How to fuck up projects

Before the project

  • Buzzword first design
  • Mismatching expectations: huge customer expectations, no budget
  • Fuzzy ambitious vocabulary, directly into the contract (including made up words)
  • Meetings, bad mood, no eye contact
  • No decisions (no decision making process -> no managers -> saves money)
  • Customer Driven Development: customer makes decisions
  • Decide on environment: tools, mouse/touchpad, 1 big monitor or 2 small ones, JIRA, etc
  • Estimates: should be done by management

During the project

  • Avoid ALL communication, especially with the customer
  • If communication cannot be avoided: mix channels
  • Responsibility: use group chats and use “you” instead of specific names (cc everyone in mails)
  • Avoid issue trackers, this is what email and Facebook are for
  • If you cannot avoid issue trackers: use multiple or have one ticket with 2000 notes
  • Use ALL the programming languages, including PHP-COBOL
  • Do YOUR job, but nothing more
  • Only pressure makes diamonds: coding on the weekend
  • No breaks so people don’t lose focus
  • Collect metrics: Hours in office, LOC, emails answered, tickets closed

Completing the project

  • 3/4 projects fail: we can’t do anything about it
  • New features? Outsource
  • Ignore the client when they ask about the completed project
  • Change the team often, fire people on a daily basis
  • Rotate the customer’s contact person


  • No VCS. FTP works. Live editing on production is even better
  • Encoding: emojis in function names, umlauts in file names. Mix encodings, also in MySQL
  • Agile is just guidelines, change goals during sprints often
  • Help others fuck up: release it as open source
  • git blame-someone-else

The future of PHP

This session started with some words from the moderator, who mainly talked about performance, portability and future adoption of, or moving away from, PHP.

  • PHP now fast enough to use many PHP libraries
  • PHP now better for long running tasks (though still no 64 bit for windows)
  • PHP now has an Abstract Syntax Tree

The discussion that followed was primarily about the future of PHP in terms of adoption. The two languages most mentioned as competitors were Javascript and Java.

Java, because it is very hard to get PHP into big enterprises, where people tend to cling to Java. A point made several times about this is that such choices have very little to do with technical sensibility, and are instead influenced by the education system, the languages already used, newness/hipness and the HiPPO. Most people also don't have the relevant information to make an informed choice, and don't make the effort to look it up because they already have a preference.

Javascript is a competitor because web-based projects, be it with a backend in PHP or in another language, need more and more Javascript, with no real alternatives. It was mentioned several times that not having alternatives is bad. Having multiple JS interpreters is cool; JS being the only choice for browser programming is not.

Introduction to sensible load testing

In this talk the speaker explained why it is important to do realistic load testing, and how to avoid common pitfalls. He explained how JMeter can be used to simulate real user behavior during peak load times. Preliminary slides link.

Domain Objects: not just for Domain Driven Design

This session was hard to choose, as it coincided with “What to look for in a developer when hiring, and how to test it”, which I also wanted to attend.

The Domain Objects session introduced what Value Objects are, and why they are better than long parameter lists and passing around values that might be invalid. While sensible enough, it was all very basic, with unfortunately no new information for me whatsoever. I'm thinking it would have been better to do this as a discussion, partly because the speaker was clearly very inexperienced, and gave most of the talk with his arms crossed in front of him. (Speaker, if you are reading this, please don't be discouraged; practice makes perfect.)
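For illustration (in Python rather than the session's PHP), the core idea is that a value object validates once at construction and is immutable afterwards, so callers can never hold an invalid value; the `EmailAddress` type here is a made-up example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EmailAddress:
    """Immutable value object: validated once, at construction."""
    value: str

    def __post_init__(self):
        if "@" not in self.value:
            raise ValueError(f"not a valid email address: {self.value!r}")

# Callers receive a type that is guaranteed valid, instead of a bare string:
addr = EmailAddress("jane@example.com")
print(addr.value)  # jane@example.com
```

A function signature like `send_welcome(addr: EmailAddress)` then replaces a loose string parameter and the repeated validation that goes with it.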

Performance monitoring

I was only there for the second half of this session, during which two performance monitoring tools were presented: Tideways by Qafoo, and Instana.


Back in 2006 I wrote a blog post about Linux troubleshooting. Bert Van Vreckem pointed out that it might be time for an update ..

There's not that much that has changed .. however :)

Everything is a DNS Problem

Everything is a Fscking DNS Problem
No really, Everything is a Fscking DNS Problem
If it's not a fucking DNS Problem ..
It's a Full Filesystem Problem
If your filesystem isn't full
It is a SELinux problem
If you have SELinux disabled
It might be an ntp problem
If it's not an ntp problem
It's an arp problem
If it's not an arp problem...
It is a Java Garbage Collection problem
If you ain't running Java
It's a natting problem
If you are already on IPv6
It's a Spanning Tree problem
If it's not a spanning Tree problem...
It's a USB problem
If it's not a USB Problem
It's a sharing IRQ Problem
If it's not a sharing IRQ Problem
But most often .. its a Freaking Dns Problem !


May 31, 2016

This Thursday, June 16, 2016 at 7 pm, the 50th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: Tryton, a free framework for business applications

Theme: Enterprise Resource Planning (ERP)

Audience: programmers | company managers | students

Speaker: Cédric Krier (B2CK SPRL)

Venue: Université de Mons, Faculté Polytechnique, Site Houdain, Rue de Houdain 9, lecture hall 3 (see this map on the UMONS website, or the OSM map). Enter through the main door, at the back of the cour d'honneur, and follow the signs from there.

Attendance is free and only requires registration under your name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page. The session will be followed by a friendly drink (everything will be over by 10 pm at the latest).

The Jeudis du Libre in Mons also enjoy the support of our partners: CETIC, Normation, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to consult the agenda and to subscribe to the mailing list in order to automatically receive the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized in the premises of, and in collaboration with, Mons universities and colleges involved in computer science education (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description:

Tryton is a platform for developing business applications (Enterprise Resource Planning, ERP) under the GPL-3+ license. Thanks to its set of modules, which grows with every release, it covers a good number of business needs out of the box; those that are missing can be filled in thanks to its modular architecture. Written in Python with a three-tier architecture, the system can be used with PostgreSQL, SQLite or MySQL.

The talk will cover the following topics:

  • History and governance of the project
  • Software architecture
  • A tour of a few modules: purchases, sales, accounting and stock
  • Demonstration: creating a simple module