Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

May 28, 2016

The Drupal community is very special because of its culture of adapting to change, determination and passion, but also its fun and friendship. It is a combination that is hard to come by, even in the Open Source world. Our culture enabled us to work through really long, but ground-breaking release cycles, which also prompted us to celebrate the release of Drupal 8 with 240 parties around the world.

Throughout Drupal's 15-year history, that culture has served us really well. As the larger industry around us continues to change -- see my DrupalCon New Orleans keynote for recent examples -- we have been able to evolve Drupal accordingly. Drupal has not only survived massive changes in our industry; it has also helped drive them. Very few open source projects are 15 years old and continue to gain momentum.

Drupal 8 is creating new kinds of opportunities for Drupal. For example, who could have imagined that Lufthansa would be using Drupal 8 to build its next-generation in-flight entertainment system? Drupal 8 changes the kind of end-user experiences people can build, how we think about Drupal, and what kind of people we'll attract to our community. I firmly believe that these changes are positive for Drupal, increase Drupal's impact on the world, and grow the opportunity for our commercial ecosystem.

To seize the big opportunity ahead of us and to adjust to the changing environment, it was the Drupal Association's turn to adapt and carefully realign its strategic focus.

Over the last couple of years, the Drupal Association invested heavily to support the development and release of Drupal 8. Now that Drupal 8 is released, the Drupal Association's Board of Directors made the strategic decision to shift some focus from the "contribution journey" to the "evaluator's adoption journey" -- without compromising our ability to build and maintain the Drupal software. The Drupal Association will reduce its efforts on Drupal.org's collaboration tools and expand its efforts to grow Drupal's adoption and to build a larger ecosystem of technology partners.

We believe this is not only the right strategic focus at this point in Drupal 8's lifecycle, but also a necessary decision. While the Drupal Association's revenues continued to grow at a healthy pace, we invested heavily, and exhausted our available reserves supporting the Drupal 8 release. As a result, we have to right-size the organization, balance our income with our expenses, and focus on rebuilding our reserves.

In a blog post today, we provide more details on why we made these decisions and how we will continue to build a healthy long-term organization. The changes we made today help ensure that Drupal will gain momentum for decades to come. We could not make this community what it is without the participation of each and every one of you. Thanks for your support!

May 27, 2016

In the XAML world it’s very common to use the MVVM pattern. I will explain how to use the technique in a similar way with Qt and QML.

The idea is to not have too much code in the view component. Instead we have declarative bindings and move most if not all of our view code to a so called ViewModel. The ViewModel will sit in between the actual model and the view. The ViewModel typically has one to one properties for everything that the view displays. Manipulating the properties of the ViewModel alters the view through bindings. You typically don’t alter the view directly.

In our example we have two list models, two texts and one button: available-items, accepted-items, available-count, accepted-count and the button. Pressing the button moves stuff from available to accepted. It should be a simple example.

First the ViewModel.h file. The class will have a property for more or less everything the view displays:


#include <QAbstractListModel>
#include <QObject>

class ViewModel : public QObject
{
	Q_OBJECT

	Q_PROPERTY(QAbstractListModel* availableItems READ availableItems NOTIFY availableItemsChanged)
	Q_PROPERTY(QAbstractListModel* acceptedItems READ acceptedItems NOTIFY acceptedItemsChanged)
	Q_PROPERTY(int available READ available NOTIFY availableChanged)
	Q_PROPERTY(int accepted READ accepted NOTIFY acceptedChanged)

public:
	explicit ViewModel( QObject *parent = 0 );
	~ViewModel() { }

	QAbstractListModel* availableItems()
		{ return m_availableItems; }

	QAbstractListModel* acceptedItems()
		{ return m_acceptedItems; }

	int available()
		{ return m_availableItems->rowCount(); }

	int accepted()
		{ return m_acceptedItems->rowCount(); }

	Q_INVOKABLE void onButtonClicked( int availableRow );

signals:
	void availableItemsChanged();
	void acceptedItemsChanged();
	void availableChanged();
	void acceptedChanged();

private:
	QAbstractListModel* m_availableItems;
	QAbstractListModel* m_acceptedItems;
};


The ViewModel.cpp implementation of the ViewModel. This is of course a simple example. The idea is that ViewModels can be quite complicated while the view.qml remains simple:

#include <QStringListModel>

#include "ViewModel.h"

ViewModel::ViewModel( QObject *parent ) : QObject ( parent )
{
	QStringList available;
	QStringList accepted;

	available << "Two" << "Three" << "Four" << "Five";
	accepted << "One";

	m_availableItems = new QStringListModel( available, this );
	emit availableItemsChanged();

	m_acceptedItems = new QStringListModel( accepted, this );
	emit acceptedItemsChanged();
}

void ViewModel::onButtonClicked( int availableRow )
{
	QModelIndex availableIndex = m_availableItems->index( availableRow, 0, QModelIndex() );
	QVariant availableItem = m_availableItems->data( availableIndex, Qt::DisplayRole );

	int acceptedRow = m_acceptedItems->rowCount();

	m_acceptedItems->insertRows( acceptedRow, 1 );

	QModelIndex acceptedIndex = m_acceptedItems->index( acceptedRow, 0, QModelIndex() );
	m_acceptedItems->setData( acceptedIndex, availableItem );
	emit acceptedChanged();

	m_availableItems->removeRows( availableRow, 1, QModelIndex() );
	emit availableChanged();
}

The view.qml. We’ll try to have as little JavaScript code as possible; the idea is that the coding itself is done in the ViewModel. The view should contain only view code (styling, UI, animations, etc.). The import URL and version are defined by the use of qmlRegisterType in the main.cpp file, below:

import QtQuick 2.0
import QtQuick.Controls 1.2

import be.codeminded.ViewModelExample 1.0

Rectangle {
    id: root
    width: 640; height: 320

    property var viewModel: ViewModel { }

    Rectangle {
        id: left
        anchors.left: parent.left
        width: parent.width / 2

        ListView {
            id: leftView
            anchors.left: parent.left
            anchors.right: parent.right

            delegate: rowDelegate
            model: viewModel.availableItems
        }

        Text {
            id: leftText
            anchors.left: parent.left
            anchors.right: parent.right
            anchors.bottom: parent.bottom
            height: 20
            text: viewModel.available
        }
    }

    Rectangle {
        id: right
        anchors.left: left.right
        anchors.right: parent.right

        ListView {
            id: rightView
            anchors.left: parent.left
            anchors.right: parent.right

            delegate: rowDelegate
            model: viewModel.acceptedItems
        }

        Text {
            id: rightText
            anchors.left: parent.left
            anchors.right: parent.right
            anchors.bottom: parent.bottom
            height: 20
            text: viewModel.accepted
        }
    }

    Component {
        id: rowDelegate
        Rectangle {
            width: parent.width
            height: 20
            color: ListView.view.currentIndex == index ? "red" : "white"
            Text { text: 'Name: ' + display }
            MouseArea {
                anchors.fill: parent
                onClicked: parent.ListView.view.currentIndex = index
            }
        }
    }

    Button {
        id: button
        anchors.left: parent.left
        anchors.right: parent.right
        anchors.bottom: parent.bottom
        height: 20
        text: "Accept item"
        onClicked: viewModel.onButtonClicked( leftView.currentIndex )
    }
}

A main.cpp example. The qmlRegisterType defines the URL to import in the view.qml file:

#include <QGuiApplication>
#include <QQuickView>
#include <QtQml>
#include <QAbstractListModel>

#include "ViewModel.h"

int main(int argc, char *argv[])
{
	QGuiApplication app(argc, argv);

	// Register the ViewModel type before loading any QML that uses it
	qmlRegisterType<ViewModel>("be.codeminded.ViewModelExample", 1, 0, "ViewModel");

	// Load the view from the resource file and show it
	QQuickView view;
	view.setSource(QUrl(QStringLiteral("qrc:/view.qml")));
	view.show();

	return app.exec();
}

The qmake project file. Obviously you should use CMake nowadays. But oh well:

QT += quick
SOURCES += ViewModel.cpp main.cpp
HEADERS += ViewModel.h
RESOURCES += project.qrc

And a project.qrc file:

<RCC version="1.0">
<qresource prefix="/">
<file>view.qml</file>
</qresource>
</RCC>

May 26, 2016

I’m happy to announce the immediate availability of Maps 3.6. This feature release brings marker clustering enhancements and a number of fixes.

These parameters were added to the display_map parser function to allow for greater control over marker clustering. They are only supported together with Google Maps.

  • clustergridsize: The grid size of a cluster in pixels
  • clustermaxzoom: The maximum zoom level at which a marker can be part of a cluster
  • clusterzoomonclick: Whether the default behavior of clicking on a cluster is to zoom in on it
  • clusteraveragecenter: Whether the cluster location should be the average of all its markers
  • clusterminsize: The minimum number of markers required to form a cluster


  • Fixed missing marker cluster images for Google Maps
  • Fixed duplicate markers in OpenLayers maps
  • Fixed URL support in the icon parameter


Many thanks to Peter Grassberger, who made the listed fixes and added the new clustering parameters. Thanks also go to Karsten Hoffmeyer for miscellaneous support and to TranslateWiki for providing translations.


Since this is a feature release, there are no breaking changes, and you can simply run composer update, or replace the old files with the new ones.

There are, however, compatibility changes to keep in mind. As of this version, Maps requires PHP 5.5 or later and MediaWiki 1.23 or later. composer update will not give you a version of Maps incompatible with your version of PHP, though it is presently not checking your MediaWiki version. Fun fact: this is the first bump in minimum requirements since the release of Maps 2.0, way back in 2012.



May 25, 2016

Every now and then I get asked how to convince one's team members that Pair Programming is worthwhile. Often the people asking, or those I did pair programming with, are obviously enthusiastic about the practice and willing to give it plenty of chance, yet not really convinced that it actually is worth the time. In this short post I share how I look at it, in the hope that it is useful to you personally, and in convincing others.

Extreme Programming

The cost of Pair Programming

Suppose you are new to the practice and doing it very badly. You have one person hogging the keyboard and not sharing their thoughts, with the other paying more attention to Twitter than to the development work. In this case you basically spend twice the time for the same output. In other words, the development cost is multiplied by two.

Personally I find it tempting to think about Pair Programming as doubling the cost, even though I know better. How much more total developer time you need is unclear, and really depends on the task. The more complex the task, the less overhead Pair Programming will cause. What is clear, is that when your execution of the practice is not pathologically bad, and when the task is more complicated than something you could trivially automate, the cost multiplication is well below two. An article on c2 wiki suggests 10-15% more total developer time, with the time elapsed being about 55% compared to solo development.
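To make those estimates concrete, here is the back-of-the-envelope arithmetic for a hypothetical task that would take one developer 40 hours (the 15% and 55% figures are the c2 wiki estimates quoted above; the 40-hour task size is purely an assumption for illustration):

```python
solo_hours = 40.0                 # assumed solo effort for the task

pair_total = solo_hours * 1.15    # ~15% more total developer time
pair_elapsed = solo_hours * 0.55  # calendar time at ~55% of solo

# The pair spends ~46 developer-hours in total,
# but the task is done after ~22 elapsed hours instead of 40.
print(round(pair_total, 1))    # 46.0
print(round(pair_elapsed, 1))  # 22.0
```

So the "extra cost" is six developer-hours, in exchange for the work landing in roughly half the calendar time, before counting any of the quality benefits discussed below.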

If these are all the cost implications you think about with regards to Pair Programming, it’s easy to see how you will have a hard time justifying it. Let’s look at what makes the practice actually worthwhile.

The cost of not Pair Programming

If you do Pair Programming, you do not need a dedicated code review step, because Pair Programming is a continuous application of review. Not only do you skip the dedicated review step, the quality of the review goes up, as communication is much easier and the feedback loops involved are shortened. With dedicated review, the reviewer will often have a hard time understanding all the relevant context and intent. Questions get asked and issues get pointed out. Some time later, the author of the change, who in the meantime has been working on something else, needs to get back to the reviewer, presumably forcing two mental context switches. When you are used to such a process, it is easy to become blind to this kind of waste unless you pay deliberate attention to it. Pair Programming eliminates this waste.

The shorter feedback loops and easier communication also help you with design questions. You have a fellow developer sitting next to you who you can bounce ideas off, and who is even up to speed with what you are doing. How great is that? Pair Programming can be a lot of fun.

The above two points make Pair Programming more than pay for itself, in my opinion, though it offers a number of additional benefits. You gain true collective ownership and build shared commitment. There is knowledge transfer, and Pair Programming is an excellent way of onboarding new developers. You gain higher quality, both internal, in the form of better design, and external, in the form of fewer defects. While those benefits are easy to state, they are by no means insignificant, and deserve thorough consideration.

Give Pair Programming a try

As with most practices there is a reasonable learning curve, which will slow you down at first. Such investments are needed to become a better programmer and contribute more to your team.

Many programmers are more introverted and find the notion of having to pair program rather daunting. My advice when starting is to begin with short sessions. Find a colleague you get along with reasonably well and sit down together for an hour. Don’t focus too much on how much you got done. Rather than setting some performance goal with an arbitrary deadline, focus on creating a habit such as doing one hour of Pair Programming every two days. You will automatically get better at it over time.

If you are looking for instructions on how to Pair Program, there is plenty of google-able material out there. You can start by reading the Wikipedia page. I recommend paying particular attention to the listed non-performance indicators. There are also many videos, be it conference talks or dedicated explanations of the basics.

Such disclaimer

I should note that while I have some experience with Pair Programming, I am very much a novice compared to those who have done it full time for multiple years, and can only guess at the sage incantations these mythical creatures would send your way.

Extreme Pair Programming


May 24, 2016

So although I am taking things rather slowly, I am in fact still working on Power-Ups for Autoptimize, focusing on the one most people were asking for: critical CSS. The Critical CSS Power-Up will allow one to add “above the fold”-CSS for specific pages or types of pages.

The first screenshot shows the main screen (as a tab in Autoptimize), listing the pages for which Critical CSS is to be applied:

The second screenshot shows the “edit”-modal (which is almost the same when adding new rules) where you can choose what rule to create (based on URL or on WordPress Conditional Tag), the actual string from the URL or Conditional Tag, and a textarea to copy/paste the critical CSS:


The next step will be to contact people who already expressed interest in beta-testing Power-Ups, getting feedback from them to improve and hopefully make “Autoptimize Critical CSS” available somewhere in Q3 2016 (but no promises, of course).

Last week I attended the I T.A.K.E. unconference in Bucharest. This unconference is about software development, and has tracks such as code quality, DevOps, craftsmanship, microservices and leadership. In this post I share my overall impressions as well as the notes I took during the unconference.

Conference impression

This was my first attendance of I T.A.K.E., and I had not researched in much detail what the setup would look like, so I did not really know what to expect. What surprised me is that most of the unconference is actually pretty much a regular conference. For the majority of the two days, there were several tracks in parallel, with talks on various topics. The unconference part is limited to two hours each day, during which there is an open space.

Overall I enjoyed the conference and learned some interesting new things. Some talks were a bit underwhelming quality-wise, with speakers not properly using the microphone, code on slides in such a quantity that no one could read it, and speakers looking at their slides the whole time rather than connecting with the audience. The parts I enjoyed most were the open space, conversations during coffee breaks, and a little pair programming. I liked I T.A.K.E. more than the recent CraftConf, though less than SoCraTes, which perhaps is a high standard to set.

Keynote: Scaling Agile

Day one started with a keynote by James Shore (who you might know from Let’s Code: Test-Driven JavaScript) on how to apply agile methods when growing beyond a single team.

The first half of the talk focused on how to divide work amongst developers, be it between multiple teams, or within a team using “lanes”. The main point that was made is that one wants to minimize dependencies between groups of developers (so people don’t get blocked by things outside of their control), and therefore the split should happen along feature boundaries, not within features themselves. This of course builds on the premise that the whole team picks up a story, and not some subset or even individuals.


A point that caught my interest is that while collective ownership of code within teams is desired, sharing responsibility between teams is more problematic. The reason for this being that supposedly people will not clean up after themselves enough, as it’s not their code, and rather resort to finger-pointing to the other team(s). As James eloquently put it:

My TL;DR for this talk is basically: low coupling, high cohesion 🙂

Mutation Testing to the rescue of your Tests

During this talk, one of the first things the speaker said is that the only goal of tests is to make sure there are no bugs in production. This very much goes against my point of view, as I think the primary value is that they allow refactoring with confidence, without which code quality suffers greatly. Additionally, tests provide plenty of other advantages, such as documenting what the system does, and forcing you to pay a minimal amount of attention to certain aspects of software design.

The speaker continued to ask about who uses test coverage, and had a quote from Uncle Bob on needing 100% test coverage. After another few minutes of build up to the inevitable denunciation of chasing test coverage as being a good idea, I left to go find a more interesting talk.

Afterwards, during one of the coffee breaks, I talked with some people who had joined the talk 10 minutes or so after it started and had actually found it interesting. Apparently the speaker got to the actual topic of the talk, mutation testing, and presented it as a superior metric. I did not know about mutation testing before and recommend you have a look at the Wikipedia page about it if you do not know what it is. It automates an approximation of what you do when trying to determine which tests are valuable to write. As with code coverage, one should not focus on the metric itself, and merely use it as the tool that it is.
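The core idea can be sketched in a few lines (a toy illustration of my own in Python, not how real mutation-testing tools such as PIT or mutmut are implemented):

```python
# Mutation testing in miniature: apply a small change (a "mutant") to the
# code under test and check whether the test suite notices. A surviving
# mutant suggests the tests do not actually exercise that behaviour.
def add(a, b):
    return a + b

def add_mutant(a, b):
    return a - b  # the mutation: '+' flipped to '-'

def weak_suite(fn):
    return fn(0, 0) == 0   # passes for both versions: the mutant survives

def strong_suite(fn):
    return fn(2, 3) == 5   # fails for the mutant: the mutant is killed

print(weak_suite(add), weak_suite(add_mutant))      # True True
print(strong_suite(add), strong_suite(add_mutant))  # True False
```

The fraction of mutants killed is the metric: unlike line coverage, it measures whether the tests would actually catch a behavioural change, not merely whether the lines ran.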


Raising The Bar

A talk on Software Craftsmanship that made me add The Coding Dojo Handbook to my to-read list.

Metrics For Good Developers

  • Metrics are for developers, not for management.
  • Developers should be able to choose the metrics.
  • Metrics to get a real measure of quality, not just “it feels like we’re doing well”
  • Measuring the number of production defects.
  • Make metrics visible.
  • Sometimes it is good to have metrics for individuals and not the whole team.
  • They can be a feedback mechanism for self improvement.

Open Space

The Open Space is a two hour slot which puts the “un” in unconference. It starts by having a market place, where people propose sessions on topics of their interest. These sessions are typically highly interactive, in the form of self-organized discussions.

Open Space: Leadership

This session started by people writing down things they associate with good leadership, and then discussing those points.

Two books were mentioned, the first being The Five Dysfunctions of a Team.

The second book was Leadership and the One Minute Manager: Increasing Effectiveness Through Situational Leadership.

Open Space: Maintenance work: bad and good

This session was about finding reasons to dislike doing maintenance work, and then finding out how to look at it more positively. My input here was that a lot of the negative things, such as having to deal with crufty legacy code, can also be positive, in that they provide technical challenges absent in greenfield projects, and that you can refactor a mess into something nice.

I did not stay in this session until the very end, and unfortunately cannot find any pictures of the whiteboard.

Open Space: Coaching dojo

I had misheard what this was about and thought the topic was “Coding Dojo”. Instead we did a coaching exercise focused on asking open-ended questions.

Are your Mocks Mocking at You?

This session was spread over two time slots, and I only attended the first part, as during the second one I had some pair programming scheduled. One of the first things covered in this talk was an explanation of the different types of Test Doubles, much like in my recent post 5 ways to write better mocks. The speakers also covered the differences between inside-out and outside-in TDD, and ended (the first time slot) with JavaScript peculiarities.

Never Develop Alone : always with a partner

In this talk, the speaker, who has been doing full-time pair programming for several years, outlined the primary benefits provided by, and challenges encountered during, pair programming.

Benefits: more focus / less distractions, more confidence, rapid feedback, knowledge sharing, fun, helps on-boarding, continuous improvement, less blaming.

Challenges: synchronization / communication, keyboard hogging


  • Ping-Pong TDD
  • Time boxing
  • Multiple keyboards
  • Pay attention and remind your pair if they don’t
  • Share your thoughts
  • Be open to new ideas and accept feedback
  • Mob programming

Live coding: Easier To Change Code

In this session the presenter walked us through some typical legacy code, and then demonstrated how one can start refactoring (relatively) safely. The code made me think of the Gilded Rose kata, though it was more elaborate/interesting. The presenter started by adding a safety net in the form of golden master tests and then proceeded with incremental refactoring.
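In case you have not encountered golden master tests before, the idea can be sketched as follows (a toy Python example of my own, with a made-up `legacy_price` stand-in; the session's actual code was far more elaborate):

```python
# Golden-master testing: record the legacy code's current output for a
# broad set of inputs once, then assert it stays identical while refactoring.
def legacy_price(quality, days_left):
    # stand-in for hard-to-understand legacy logic
    return max(0, quality - (2 if days_left <= 0 else 1))

# Step 1: capture the "golden master" before touching anything.
golden = {(q, d): legacy_price(q, d) for q in range(0, 5) for d in (-1, 0, 1)}

# Step 2: after each refactoring step, verify behaviour is unchanged.
assert all(legacy_price(q, d) == out for (q, d), out in golden.items())
print("behaviour unchanged")
```

The point is that you get a safety net without having to understand the legacy code first; characterization happens mechanically, and understanding comes during the incremental refactoring.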

Is management dead?

Uncle Abraham certainly is most of the time! (Though when he is not, he approves of the below list.)

  • Many books on Agile, few on Agile management
  • Most common reasons for failure of Agile projects are management related
  • The Agile Manifesto includes two management principles
  • Intrinsic motivation via Autonomy, Mastery, Purpose and Connection
  • Self-organization: fully engaged, making own choices, taking responsibility
  • Needed for self-organization: skills, T-shaped, team players, collocation, long-lived team
  • Amplify and dampen voices
  • Lean more towards delegation to foster self-organization (levels of delegation)


Visualizing codebases

This talk was about how to extract and visualize metrics from codebases. I was hoping it would include various code-quality-related metrics, but alas, the talk only covered file-level details and simple line counts.

May 22, 2016

We are good. We show it by combining our respect for privacy with security. Knowledge is indispensable for that. I argue for investing in technical people who master both.

Our government should not put everything into millions for fighting computer crime; it should also invest in better software.

Belgian companies sometimes make software. They must be encouraged, steered, to do the right thing.

I would like to see our centre for cybersecurity encourage companies to make good, and therefore secure, software. We must also invest in repression. But we must invest just as much in high quality.

We sometimes think that, ah, we are too small. But that is not true. If we decide that here, in Belgium, software must be good, then that creates a market that will adapt to what we want. The key is to remain steadfast.

When we say that a – b is welcome here, or not, we give shape to technology.

I expect no less from my country. Give shape.

May 21, 2016

Recently, I came across some code of a web application that, on brief inspection, was vulnerable to XSS and SQL injection attacks: the SQL queries and the HTML output were not properly escaped, and the input variables were not sanitized. After a bit more reviewing I made a list of measures and notified the developer, who quickly fixed the issues.

I was a bit surprised to come across code that was so insecure, yet took the author only a few hours to drastically improve with a few simple changes. I started wondering why the code wasn't of better quality in the first place. Did the developer not know about vulnerabilities like SQL injection and how to prevent them? Was it time pressure that kept him from writing safer code?
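To illustrate the SQL injection part, here is a minimal sketch of my own (Python with the standard-library sqlite3 module, not the reviewed application's code) showing why parameterized queries matter:

```python
import sqlite3

# Set up a tiny in-memory database with two users.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

user_input = "nobody' OR '1'='1"  # a classic injection attempt

# Unsafe: the input is pasted into the SQL text itself.
unsafe = "SELECT name FROM users WHERE name = '%s'" % user_input
print(len(conn.execute(unsafe).fetchall()))  # 2 -> the injection matched every row

# Safe: the driver passes the input strictly as data, never as SQL.
rows = conn.execute("SELECT name FROM users WHERE name = ?", (user_input,))
print(len(rows.fetchall()))  # 0 -> no user is literally named that
```

The fix is exactly this mechanical: never build query strings from user input; always use placeholders (or a query builder / ORM that does so for you).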

Anyway, there are a few guidelines to write better and safer code.

Educate yourself

As a developer you should familiarize yourself with possible vulnerabilities and how to avoid them. There are plenty of books and online tutorials covering this. A good starting point is the Top 25 Most Dangerous Software Errors list. Reading security-related blogs and going to conferences (or watching talks online) is useful as well.

Use frameworks and libraries

Almost every language has a framework for web applications (Drupal, Symfony (PHP), Spring (Java), Django (Python), ...) that provides tools and libraries for creating forms, sanitizing input variables, properly escaping HTML output, handling cookies, checking authorization, managing users and privileges, database-object abstraction (so you don't have to write your own SQL queries), and much more.
Those frameworks and libraries are used by a lot of applications and developers, so they are tested much more thoroughly than code you write yourself, and bugs are found more quickly.
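For the XSS side, output escaping is equally mechanical once you let a library do it. A tiny sketch of my own, using Python's standard library as a neutral illustration (in PHP, `htmlspecialchars` or your framework's templating layer plays the same role):

```python
from html import escape

user_input = '<script>alert("xss")</script>'

# Never interpolate raw user input into HTML...
unsafe_html = "<p>Hello " + user_input + "</p>"

# ...always escape it first, so the browser renders it as text, not markup.
safe_html = "<p>Hello " + escape(user_input) + "</p>"
print(safe_html)  # <p>Hello &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

Templating engines that escape by default turn this from a discipline you must remember into the path of least resistance.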

It is also important to regularly update the libraries and frameworks you use, so that you get the latest bug and security fixes.

Code review

Two pairs of eyes see more than one. Have your code reviewed by a coworker, and use automated tools to check your code for vulnerabilities. Most IDEs have code-checking tools, or you can run them in a Continuous Integration (CI) environment like Jenkins, Travis CI, or Circle CI to check your code during every build.
A lot of online code-checking tools exist that can check your code every time you push to your version control system.
There is no silver bullet here, but a combination of manual code review and automated checks will help to spot vulnerabilities sooner.

Test your code

Code-checking tools can't spot every bug, so testing your code is important as well. You will need automated unit tests, integration tests, and so on, so you can test your code during every build in your CI environment.
Writing good tests is an art and takes time, but more tests means fewer bugs remaining in your code.

Coding style

While not directly a measure against vulnerabilities, using a coding style that is common for the programming language you are using makes your code more readable for you, for reviewers, and for future maintainers. Better readability makes it easier to spot bugs, maintain code and avoid new bugs.

I guess there are many more ways to improve code quality and reduce vulnerabilities. Feel free to leave a comment with your ideas.

May 20, 2016

May 19, 2016

My colleague Henk Van Der Laak made an interesting tool that checks your code against the QML coding conventions. It uses the abstract syntax tree of Qt 5.6's internal parser and a visitor design pattern.

It has a command line, but being developers ourselves we of course want an API too. Then we can integrate it into our development environments without having to use popen!

So this is how to use that API:

// Uses Qt's private QQmlJS parser API
#include <private/qqmljsengine_p.h>
#include <private/qqmljslexer_p.h>
#include <private/qqmljsparser_p.h>
#include <private/qqmljsast_p.h>

#include <QFileInfo>
#include <QDebug>

// Parse the code (a_filename is the file being checked, code its contents)
QQmlJS::Engine engine;
QQmlJS::Lexer lexer(&engine);
QQmlJS::Parser parser(&engine);

QFileInfo info(a_filename);
bool isJavaScript = info.suffix().toLower() == QLatin1String("js");
lexer.setCode(code, 1, !isJavaScript);
bool success = isJavaScript ? parser.parseProgram() : parser.parse();
if (success) {
    // Check the code by walking the AST with the tool's visitor
    QQmlJS::AST::UiProgram *program = parser.ast();
    CheckingVisitor checkingVisitor(a_filename);
    program->accept(&checkingVisitor);
    foreach (const QString &warning, checkingVisitor.getWarnings()) {
        qWarning() << qPrintable(warning);
    }
}

May 18, 2016

This is a time of transition for the Drupal Association. As you might have read on the Drupal Association blog, Holly Ross, our Executive Director, is moving on. Megan Sanicki, who has been with the Drupal Association for almost 6 years, and was working alongside Holly as the Drupal Association's COO, will take over Holly's role as the Executive Director.

Open source stewardship is not easy, but in the 3 years Holly was leading the Drupal Association, she led with passion, determination and transparency. She operationalized the Drupal Association and built a team that truly embraces its mission to serve the community, growing that team by over 50% during her tenure. She established a relationship with the community that wasn't there before, allowing the Drupal Association to help in new ways like supporting the Drupal 8 launch, providing test infrastructure, implementing the Drupal contribution credit system, and more. Holly also matured DrupalCon, expanding its reach to more users with conferences in Latin America and India. She also executed the Drupal 8 Accelerate Fund, which allowed direct funding of key contributors to help lead Drupal 8 to a successful release.

Holly did a lot for Drupal. She touched all of us in the Drupal community. She helped us become better and work closer together. It is sad to see her leave, but I'm confident she'll find success in future endeavors. Thanks, Holly!

Megan, the Drupal Association staff and the Board of Directors are committed to supporting the Drupal project. In this time of transition, we are focused on the work that Drupal Association must do and looking at how to do that in a sustainable way so we can support the project for many years to come.

Cache Enabler – WordPress Cache is a new page-caching kid on the WordPress plugin block by the Switzerland-based KeyCDN. It’s based in part on Cachify (which has a strong user base in Germany) but seems less complex/flexible. What makes it unique, though, is that it allows one to serve pages with WebP images (which are not supported by Safari, MS IE/Edge or Firefox) instead of JPEGs to browsers that support WebP. To be able to do that, you’ll need to also install Optimus, an image optimization plugin that plugs into a freemium service by KeyCDN (you’ll need a premium account to convert to WebP though).

I did some tests with Cache Enabler and it works great together with Autoptimize out of the box, especially after the latest release (1.1.0), which also hooks into AO’s autoptimize_action_cachepurged action to clear Cache Enabler’s cache if AO’s cache gets purged (to avoid having pages in cache that refer to deleted autoptimized CSS/JS files).

Just not sure I agree with this text on the plugin’s settings page:

Avoid […] concatenation of your assets to benefit from parallelism of HTTP/2.

because based on previous tests by smarter people than me concatenation of assets can still make (a lot of) sense, even when on HTTP/2 :-)

Damned, QML is inconsistent! Things have a content, data or children. And apparently they can all mean the same thing. So how do we know if something is a child of something else?

After a failed stackoverflow search I gave up on copy-paste coding and invented the damn thing myself.

function isChild( a_child, a_parent ) {
	if ( a_parent === null || a_parent === undefined ) {
		return false
	}

	var tmp = ( a_parent.hasOwnProperty("content") ? a_parent.content
		: ( a_parent.hasOwnProperty("children") ? a_parent.children : a_parent.data ) )

	if ( tmp === null || tmp === undefined ) {
		return false
	}

	for (var i = 0; i < tmp.length; ++i) {
		if ( tmp[i] === a_child ) {
			return true
		} else if ( isChild( a_child, tmp[i] ) ) {
			return true
		}
	}

	return false
}
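The same check can be exercised outside QML with plain JavaScript objects; here's a self-contained sketch (the tree of objects is hypothetical, made up for illustration):

```javascript
// isChild, restated so this snippet runs standalone (e.g. under node).
function isChild(a_child, a_parent) {
    if (a_parent === null || a_parent === undefined) {
        return false;
    }

    // QML items expose their children as "content", "children" or "data".
    var tmp = a_parent.hasOwnProperty("content") ? a_parent.content
            : a_parent.hasOwnProperty("children") ? a_parent.children
            : a_parent.data;

    if (tmp === null || tmp === undefined) {
        return false;
    }

    for (var i = 0; i < tmp.length; ++i) {
        if (tmp[i] === a_child || isChild(a_child, tmp[i])) {
            return true;
        }
    }

    return false;
}

// Hypothetical tree: root -> branch -> leaf
var leaf = { children: [] };
var branch = { children: [leaf] };
var root = { children: [branch] };

console.log(isChild(leaf, root));   // true: found two levels down
console.log(isChild(root, branch)); // false: root is not below branch
```

Note the recursion: a grandchild still counts as a child, which is usually what you want when walking a QML item tree.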

May 17, 2016

The async Puppet pattern

I'm pretty sure this isn't tied to Puppet and is probably widely used by everyone else, but it only occurred to me recently what the structural benefits of this pattern are.

Async Puppet: stop fixing things in one Puppet run

This has always been a bit of a debated topic, both for me internally as well as in the Puppet community at large: should a Puppet run be 100% complete after the first run?

I'm starting to back away from that idea, having spent countless hours optimising my Puppet code for the "one-puppet-run-to-rule-them-all" scenario. It's much easier to gradually build your Puppet logic in steps, each step activating once the previous one has reached its final state.

What I'm mostly seeing this scenario shine in is the ability to automatically add monitoring from within your Puppet code. There's support for Nagios out of the box, and I contributed to the zabbixapi ruby gem to facilitate managing Zabbix hosts and templates from within Puppet.

Monitoring should only be added to a server when there's something to monitor. And there's only something to monitor once Puppet has done its thing and caused state on the server to be as expected.

Custom facts for async behaviour

So here's a pattern I particularly like. There are many alternatives to this one, but it's simple, straightforward and super easy to understand -- even for beginning Puppeteers.

  1. A first Puppet run starts and installs Apache with all its vhosts
  2. The second Puppet run starts and gets a fact called "apache_vhost_count", a simple integer that counts the number of vhosts configured
  3. When that fact is a positive integer (aka: there are vhosts configured), monitoring is added

This pattern takes 2 Puppet runs to be completely done: the first gets everything up-and-running, the second detects that there are things up-and-running and adds the monitoring.

Monitoring wrappers around existing Puppet modules

You've probably done this: you get a cool module from Forge (Apache, MySQL, Redis, ...), you implement it and want to add your monitoring to it. But how? It's not cool to hack away in the modules themselves, those come via r10k or puppet-librarian.

Here's my take on it:

  1. Create a new module, call it "monitoring"
  2. Add custom facts in there, called has_mysql, has_apache, ... for all the services you want
  3. If you want to go further, create facts like apache_vhost_count, mysql_databases_count, ... to count the specific instance of each service, to determine if it's being used or not.
  4. Use those facts to determine whether to add monitoring or not:
    if ($::has_apache > 0) and ($::apache_vhost_count > 0) {
      @@zabbix_template_link { "zbx_application_apache_${::fqdn}":
        ensure   => present,
        template => 'Application - PHP-FPM',
        host     => $::fqdn,
        require  => Zabbix_host[$::fqdn],
      }
    }
Is this perfect? Far from it. But it's pragmatic and it gets the job done.

The facts are easy to write and understand, too.

Facter.add(:apache_vhost_count) do
  confine :kernel => :linux
  setcode do
    if File.exists? "/etc/httpd/conf.d/"
      Facter::Util::Resolution.exec('ls -l /etc/httpd/conf.d | grep \'vhost-\' | wc -l')
    end
  end
end
It's mostly bash (which most sysadmins understand) -- and very little Ruby (which few sysadmins understand).

The biggest benefit I see to it is that whoever implements the modules and creates the server manifests doesn't have to toggle a parameter called enable_monitoring (been there, done that) to decide whether or not that particular service should be monitored. Puppet can now figure that out on its own.

Detecting Puppet-managed services

Because some services are installed because of dependencies, the custom facts need to be clever enough to understand when they're being managed by Puppet. For instance, when you install the package "httpd-tools" because it contains the useful htpasswd tool, most package managers will automatically install the "httpd" (Apache) package, too.

Having that package present shouldn't trigger your custom facts to automatically enable monitoring, it should probably only do that when it's being managed by Puppet.

A very simple workaround (up for debate whether it's a good one) is to have each Puppet module write a simple marker file to /etc/puppet-managed.

$ ls /etc/puppet-managed
apache mysql php postfix ...
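In Puppet terms, each module just declares a marker file; a minimal sketch (paths and resource names are illustrative, not taken from the original post):

```puppet
# Hypothetical sketch: the apache module marks itself as Puppet-managed.
file { '/etc/puppet-managed':
  ensure => directory,
}

file { '/etc/puppet-managed/apache':
  ensure  => file,
  require => File['/etc/puppet-managed'],
}
```

Every module repeats the second resource with its own name, so the directory listing doubles as an inventory of what Puppet actually manages on that box.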

Now you can extend your custom facts with the presence of that file to determine if A) a service is Puppet managed and B) if monitoring should be added.

Facter.add(:has_apache) do
  confine :kernel => :linux
  setcode do
    if File.exists? "/sbin/httpd"
      if File.exists? "/etc/puppet-managed/apache"
        # Apache installed and Puppet managed
        true
      else
        # Apache is installed, but isn't Puppet managed
        false
      end
    else
      # Apache isn't installed
      false
    end
  end
end
(example explicitly split up in order to add comments)

You may also be tempted to use the defined() function (see the manual) to check if Apache has been defined in your Puppet code and then add monitoring. However, that's dependent on the resource order in which it's evaluated.

Your code may look like this:

if ( defined(Service['httpd']) ) {
   # Apache is managed by Puppet, add monitoring ?
}
Puppet's manual explains the big caveat though:

Puppet depends on the configuration’s evaluation order when checking whether a resource is declared.

In other words: if your monitoring code is evaluated before your Apache code, that defined() will always return false.

Working with facter circumvents this.

Again, this pattern isn't perfect, but it allows for a clean separation of logic and -- if your team grows -- an easier way to separate responsibilities for the monitoring team and the implementation team to each have their own modules with their own responsibilities.


May 16, 2016

Last year around this time, I wrote that The Big Reverse of the Web would force a major re-architecture of the web to bring the right information, to the right person, at the right time, in the right context. I believe that conversational interfaces like Amazon Echo are further proof that the big reverse is happening.

New user experience and distribution platforms only come along every 5-10 years, and when they do, they cause massive shifts in the web's underlying technology. The last big one was mobile, and the web industry adapted. Conversational interfaces could be the next user experience and distribution platform – just look at Amazon Echo (aka Alexa), Facebook's messenger or Microsoft's Conversation-as-a-Platform.

Today, hardly anyone questions whether to build a mobile-optimized website. A decade from now, we might be saying the same thing about optimizing digital experiences for voice or chat commands. The convenience of a customer experience will be a critical key differentiator. As a result, no one will think twice about optimizing their websites for multiple interaction patterns, including conversational interfaces like voice and chat. Anyone will be able to deliver a continuous user experience across multiple channels, devices and interaction patterns. In some of these cross-channel experiences, users will never even look at a website. Conversational interfaces let users disintermediate the website by asking anything and getting instant, often personalized, results.

To prototype this future, my team at Acquia built a fully functional demo based on Drupal 8 and recorded a video of it. In the demo video below, we show a sample supermarket chain called Gourmet Market. Gourmet Market wants their customers to not only shop online using their website, but also use Echo or push notifications to do business with them.

We built an Alexa integration module to connect Alexa to the Gourmet Market site and to answer questions about sale items. For example, you can speak the command: "Alexa, ask Gourmet Market what fruits are on sale today". From there, Alexa makes a call to the Gourmet Market website, finds what is on sale in the specified category, and pulls only the needed information related to your request.

On the website's side, a store manager can tag certain items as "on sale", and Alexa's voice responses will automatically and instantly reflect those changes. The store manager needs no expertise in programming -- Alexa composes its response by talking to Drupal 8 using web service APIs.

The demo video also shows how a site could deliver smart notifications. If you ask for an item that is not on sale, the Gourmet Market site can automatically notify you via text once the store manager tags it as "On Sale".

From a technical point of view, we've had to teach Drupal how to respond to a voice command, otherwise known as a "Skill", coming into Alexa. Alexa Skills are fairly straightforward to create. First, you specify a list of "Intents", which are basically the commands you want users to run in a way very similar to Drupal's routes. From there, you specify a list of "Utterances", or sentences you want Echo to react to that map to the Intents. In the example of Gourmet Market above, the Intents would have a command called GetSaleItems. Once the command is executed, your Drupal site will receive a webhook callback on /alexa/callback with a payload of the command and any arguments. The Alexa module for Drupal 8 will validate that the request really came from Alexa, and fire a Drupal Event that allows any Drupal module to respond.
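To make that concrete, a Skill's interaction model at the time was a small JSON intent schema plus a list of sample utterances. The sketch below is hypothetical (the slot name and type are invented for illustration), but follows the Alexa Skills Kit format of the day:

```json
{
  "intents": [
    {
      "intent": "GetSaleItems",
      "slots": [
        { "name": "Category", "type": "LIST_OF_CATEGORIES" }
      ]
    }
  ]
}
```

Each line of the accompanying sample-utterances file then maps a sentence to an Intent, e.g. "GetSaleItems what {Category} are on sale today", and the resolved Intent name and slot values are what arrive in the webhook payload on the Drupal side.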

It's exciting to think about how new user experiences and distribution platforms will change the way we build the web in the future. As I mentioned in my DrupalCon New Orleans keynote, the Drupal community needs to put some thought into how to design and build multichannel customer experiences. Voice assistants, chatbots and notifications are just one part of the greater equation. If you have any further thoughts on this topic, please share them in the comments.


Redis: OOM command not allowed when used memory > ‘maxmemory’

If you're using Redis, you can find your application logs start to show the following error messages:

$ tail -f error.log
OOM command not allowed when used memory > 'maxmemory'

This can happen every time a WRITE operation is sent to Redis to store new data.

What does it mean?

The OOM command not allowed when used memory > 'maxmemory' error means that Redis was configured with a memory limit and that particular limit was reached. In other words: its memory is full, it can't store any new data.

You can see the memory values by using the redis CLI tool.

$ redis-cli -p 6903
127.0.0.1:6903> info memory
# Memory
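On an instance that has hit its limit, the output contains fields along these lines (the values here are illustrative for a 3GB-capped instance; the configured limit itself can be read with redis-cli config get maxmemory):

```
used_memory:3221225472
used_memory_human:3.00G
used_memory_peak:3221225472
used_memory_peak_human:3.00G
```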

If you run a Redis instance with a password on it, change the redis-cli command to this:

$ redis-cli -p 6903 -a your_secret_pass

The info memory command remains the same.

The example above shows a Redis instance configured to run with a maximum of 3GB of memory and consuming all of it (=used_memory counter).

Fixing the OOM command problem

There are 3 potential fixes.

1. Increase Redis memory

Probably the easiest to do, but it has its limits. Find the Redis config (usually somewhere in /etc/redis/*) and increase the memory limit.

 $ vim /etc/redis/6903.conf
maxmemory 3gb

Somewhere in that config file, you'll find the maxmemory parameter. Modify it to your needs and restart the Redis instance afterwards.

2. Change the cache invalidation settings

Redis is throwing the error because it can't store new items in memory. By default, the "cache invalidation" setting is set pretty conservatively, to volatile-lru. This means it'll remove a key with an expire set using an LRU algorithm.

This can cause items to be kept in memory even when new items need to be stored. In other words: if your Redis instance is full, it won't just throw away the oldest items (like Memcached would).

You can change this to a couple of alternatives:

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached? You can select among five behavior:
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key accordingly to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys->random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
# Note: with all the kind of policies, Redis will return an error on write
#       operations, when there are not suitable keys for eviction.
#       At the date of writing this commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort

In that very same Redis config file (somewhere in /etc/redis/*), there's also an option called maxmemory-policy.

The default is:

$ grep maxmemory-policy /etc/redis/*
maxmemory-policy volatile-lru

If you don't really care about the data in memory, you can change it to something more aggressive, like allkeys-lru.

$ vim /etc/redis/6903.conf
maxmemory-policy allkeys-lru

Afterwards, restart your Redis again.
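Alternatively, the policy can be changed on a running instance without a restart (this sketch assumes the same local instance on port 6903 as in the examples above; to survive a restart, you still need to update the config file):

```shell
# Change the eviction policy at runtime, no restart needed.
redis-cli -p 6903 config set maxmemory-policy allkeys-lru

# Confirm the active policy.
redis-cli -p 6903 config get maxmemory-policy
```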

Keep in mind though that this can mean Redis removes items from its memory that haven't been persisted to disk just yet. This is configured with the save parameter, so make sure you look at these values too to determine a correct "max memory" policy. Here are the defaults:

#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#   Note: you can disable saving at all commenting all the save lines.

save 900 1
save 300 10
save 60 10000

With the above in mind, setting a different maxmemory-policy could mean data loss in your Redis instance!

3. Store less data in Redis

I know, stupid 'solution', right? But ask yourself this: is everything you're storing in Redis really needed? Or are you using Redis as a caching solution and just storing too much data in it?

If your SQL queries return 10 columns but realistically you only need 3 of those on a regular basis, just store those 3 values -- not all 10.


May 14, 2016

I love street photography. Walking and shooting. Walking, talking and shooting. Slightly pushing me out of my comfort zone looking for that one great photo.

Street life

Street photography is all fun and games until someone pulls out a handgun. The anarchy sign in the background makes these shots complete.


For more photos, check out the entire album.

The day Google Chrome disables HTTP/2 for nearly everyone: May 31st, 2016

If you've been reading this blog for a while (or have been reading my rants on Twitter), you'll probably know this was coming already. If you haven't, here's the short version.

The Chromium project (whose end result is the Chrome browser) has switched the negotiation protocol by which it decides whether to use HTTP/1.1 or the newer HTTP/2 on May 31st, 2016 (the original date was May 15th, 2016).

Update: Chrome 51 is released and should be rolled out everywhere in a couple of hours/days. If HTTP/2 stops working for you, it's probably because NPN is now disabled.

That in and of itself isn't a really big deal, but the consequences unfortunately are. Previously (as in: before May 31st, 2016), a protocol named NPN was used -- Next Protocol Negotiation. This wasn't a very efficient protocol, but it got the job done.

There's a newer negotiation protocol in town called ALPN -- Application-Layer Protocol Negotiation. This is a more efficient version with more future-oriented features. It's a good decision to switch from NPN to ALPN, there are far more benefits than there are downsides.

However, on the server side -- the side which runs the webservers that in turn run HTTP/2 -- there's a rather practical issue: to support ALPN, you need at least OpenSSL 1.0.2.

So what? You're a sysadmin, upgrade your shit already!

I know. It sounds easy, right? Well, it isn't. Just for comparison, here's the current (May 2016) state of OpenSSL on Linux.

Operating System      OpenSSL version
CentOS 5              0.9.8e
CentOS 6              1.0.1e
CentOS 7              1.0.1e
Ubuntu 14.04 LTS      1.0.1f
Ubuntu 16.04 LTS      1.0.2g
Debian 7 (Wheezy)     1.0.1e
Debian 8 (Jessie)     1.0.1k

As you can tell from the list, there's a problem: out of the box, only the latest Ubuntu 16.04 LTS (out for less than a month) supports OpenSSL 1.0.2.
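To check what a given machine ships (assuming the openssl binary is installed and on your path):

```shell
# ALPN support requires OpenSSL 1.0.2 or newer; anything older can only offer NPN.
openssl version
```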

Upgrading OpenSSL packages isn't a trivial task, either. Since just about every other service links against the OpenSSL libraries, they too should be re-packaged (and tested!) to work against the latest OpenSSL release.

On the other hand, it's just a matter of time before distributions have to upgrade as support for OpenSSL 1.0.1 ends soon.

Support for version 1.0.1 will cease on 2016-12-31. No further releases of 1.0.1 will be made after that date. Security fixes only will be applied to 1.0.1 until then.

OpenSSL Release Strategy

To give you an idea of the scope of such an operation, on a typical LAMP server (the one powering the blogpost you're now reading), the following services all make use of the OpenSSL libraries.

$ lsof | grep libssl | awk '{print $1}' | sort | uniq

A proper OpenSSL upgrade would cause all of those packages to be recreated too. That's a hassle, to say the least. And truth be told, it probably isn't just repackaging, but potentially changing the code of each application to be compatible with the newer or changed APIs in OpenSSL 1.0.2.

Right now, the simplest way to run HTTP/2 on a modern server (that isn't Ubuntu 16.04 LTS) would be to run a Docker container based on Ubuntu 16.04, and run your webserver inside of it.
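A minimal sketch of that container-based workaround (the package choice and file names are hypothetical, not from the original post):

```dockerfile
# Ubuntu 16.04 ships OpenSSL 1.0.2g, so nginx built against it can offer ALPN.
FROM ubuntu:16.04
RUN apt-get update && \
    apt-get install -y nginx && \
    rm -rf /var/lib/apt/lists/*
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 443
CMD ["nginx", "-g", "daemon off;"]
```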

I don't blame Google for switching protocols and evolving the web, but I'm sad to see that as a result of it, a very large portion of Google Chrome users will have to live without HTTP/2, once again.

Before May 15th, 2016 -- a Google Chrome user would see this in its network inspector:


After May 31st, it'll be old-skool HTTP/1.1.


It used to be that enabling HTTP/2 in Nginx was a very simple operation, but in order to support Chrome it'll be a bit more complicated from now on.
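For reference, the nginx configuration side stays as simple as it was (certificate paths hypothetical); the complication is entirely in the OpenSSL that nginx is linked against:

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
}
```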

This change also didn't come out of the blue: Chrome had disabled NPN back in 2015, but quickly undid that change when the impact became clear. We knew, since the end of 2015, that this change was coming -- we were given 6 months' time to get support for ALPN going, but given the current state of OpenSSL packages, that was too little time.

If you want to keep track of the state of Red Hat (Fedora, RHEL & CentOS) upgrades, here's some further reading: RFE: Need OpenSSL 1.0.2.

As I'm mostly a CentOS user, I'm unaware of the state of Debian or Ubuntu OpenSSL packages at this time.


May 13, 2016

As we all know, Qt has types like QPointer and QSharedPointer, and we know about its object trees. So when do we use what?

Let’s first go back to school, and remember the difference between composition and aggregation. Most of you probably remember drawings like this?

It taught us when to use composition, and when to use aggregation:

  • Use composition when the user can’t exist without the dependency. For example, a Human can’t exist without a Head, unless it ceases to be a human. You could also model Arm, Hand, Finger and Leg as aggregates, but it might not make sense in your model (for a patient in a hospital, perhaps it does?)
  • Use aggregation when the user can exist without the dependency: a car without a passenger is still a car in most models.

This model in the picture will for example tell us that a car’s passenger must have ten fingers.

But what does this have to do with QPointer, QSharedPointer and Qt’s object trees?

The first situation is a shared composition. Both Owner1 and Owner2 can’t survive without Shared (composition, filled diamonds). For this situation you would typically use a QSharedPointer<Shared> at Owner1 and Owner2:

If there is no other owner, then it’s probably better to just use Qt’s object trees and setParent() instead. Note that for example QML’s GC is not very well aware of QSharedPointer, but does seem to understand Qt’s object trees.

The second situation is shared users. User1 and User2 can stay alive when Shared goes away (aggregation, empty diamonds). In this situation you typically use a QPointer<Shared> at User1 and at User2. You want to be aware when Shared goes away: QPointer<Shared>’s isNull() will become true after that has happened.

The third situation is a mixed one. In this case you could, at Owner, use a QSharedPointer<Shared> or a parented raw QObject pointer (using setParent()), but a QPointer<Shared> at User. When Owner goes away and its destructor (due to the parenting) deletes Shared, User can check for it using the previously mentioned isNull() check.

Finally if you have a typical object tree, then use QObject’s infrastructure for this.



In this post I share 5 easy ways to write better mocks that I picked up over the years. These will help you write tests that break less, are easier to read, are more IDE friendly, and are easier to refactor. The focus is on PHPUnit and PHP, yet most of the techniques used, and principles touched upon, are also applicable when using different languages and testing frameworks.

Before we get down to it, some terminology needs to be agreed upon. This post is about Test Doubles, which are commonly referred to as Mocks. It’s somewhat unfortunate that Mock has become the common name, as it also is a specific type of Test Double. I will use the following, more precise, terminology for the rest of the post:

  • Test Double: General term for test code that stands in for production code
  • Stub: A Test Double that does nothing except for returning hardcoded values
  • Fake: A Test Double that has real behavior in it, though does not make any assertions
  • Spy: A Test Double that records calls to its methods, though does not make assertions
  • Mock: A Test Double that makes assertions

1. Reference classes using ::class

This one is really simple. Instead of calling $this->getMock( 'KittenRepository' ), use the ::class keyword added in PHP 5.5: $this->getMock( KittenRepository::class ). This avoids your tests breaking when you rename or move your class using a decent editor. Typos are immediately apparent, and navigating to the class or interface becomes easier.

2. Don’t bind to method names when you don’t have to

Imagine you are testing some code that uses a PSR-3 compliant logger. This logger has a general log method that takes a log level, and a specific method for each of the log levels, such as warning and info. Even in cases where you want to test the specific log level being used, the code under test can use either log or the more specific method. Which one is used is an internal implementation detail of the production code, and something the test preferably does not know about. Consider this Mock:

$logger->expects( $this->never() )->method( 'log' );

If your production code changes to use a more specific method, the test will no longer be correct. In this case you might not even notice, as the test does not further rely on the behavior of the created Test Double. Another cost to consider is that you have a string reference to a method name, which, amongst other things, breaks refactoring.

In a number of cases this is easily avoided using the not so well known anything PHPUnit method. Want to verify your logger is never invoked in a given situation?

$logger->expects( $this->never() )->method( $this->anything() );

Want to test what happens when the repository your code uses throws an exception?

$repository->expects( $this->any() )->method( $this->anything() )
	->willThrowException( new RuntimeException() );

This approach only works in some situations. In others you will either need to bear the cost of binding to implementation details, change your test to be state based, or resort to more complicated workarounds.

3. Don’t bind to call count when you don’t have to

When constructing a Stub or a Fake, it’s easy to turn it into a Mock as well. This is very similar to binding to method calls: your test becomes aware of implementation details.

The previous code snippet shows you a very simple Stub. It’s a logger that always throws an exception. Note the use of the any method, as opposed to the once method in this snippet:

$repository->expects( $this->once() )->method( $this->anything() )
	->willThrowException( new RuntimeException() );

If you are not intentionally creating a Mock, then don’t make assertions about the call count. It’s easy to add the assertion in nearly all cases, yet it does not come for free.

4. Encapsulate your Test Double creation

When constructing a Test Double via the PHPUnit Test Double API, you get back an instance of PHPUnit_Framework_MockObject_MockObject. While you know that it is also an instance of the class or interface that you fed into the Mock API, tools need a little help (before they are able to help you in return). One way of doing this is extracting the Test Double creation into its own method, and using a return type hint. If you are still using PHP 5.x, you can add a DocBlock with @return KittenRepository to achieve the same effect.

private function newThrowingRepository(): KittenRepository {
    $repository = $this->getMock( KittenRepository::class );

    $repository->expects( $this->any() )->method( $this->anything() )
        ->willThrowException( new RuntimeException() );

    return $repository;
}
Now tools will stop complaining that you are giving a MockObject to code expecting a KittenRepository.

This extraction has two additional benefits. Firstly, you hide the details of Test Double construction from your actual test method, which now no longer knows about the mocking framework. Secondly, your test method becomes more readable, as it is no longer polluted by details on the wrong level of abstraction. That brings us to clean functions and test methods in general, which is out of scope for this particular blog post.

5. Create your own Test Doubles

While often it’s great to use the PHPUnit Test Double API, there are plenty of cases where creating your own Test Doubles yields significant advantages.

To create your own Test Doubles, simply implement the interface as you would do in production code. For stubs, there is not much to do, just return the stub value. Some tools even allow you to automatically create these. Spies are also quite simple, just create a field of type array to store the calls to a method, and provide a getter.

Remember how we do not want to bind to our production code’s choice of logger method? If we want to assert something different than no calls being made, the PHPUnit Test Double API is not of much help. It is however easy to create a simple Spy.

class LoggerSpy extends \Psr\Log\AbstractLogger {
	private $logCalls = [];

	public function log( $level, $message, array $context = [] ) {
		$this->logCalls[] = [ $level, $message, $context ];
	}

	public function getLogCalls(): array {
		return $this->logCalls;
	}
}
Since AbstractLogger provides the specific logging methods such as warning and info and has them call log, all calls end up looking the same to the test using the Spy.

The test that uses the Spy needs to make its own assertions on the spied upon method calls. If certain assertions are common, you can place them in your Spy. Since the Spy itself does not invoke these assertions, it remains a Spy and does not become a Mock. You can even use PHPUnit to do the actual assertions.

class MailerSpy implements Mailer {
	private $testCase;
	private $sendMailCalls = [];

	public function __construct( PHPUnit_Framework_TestCase $testCase ) {
		$this->testCase = $testCase;
	}

	public function sendMail( EmailAddress $recipient, array $templateArguments = [] ) {
		$this->sendMailCalls[] = func_get_args();
	}

	public function assertMailerCalledOnceWith( EmailAddress $expectedEmail, array $expectedArguments ) {
		$this->testCase->assertCount( 1, $this->sendMailCalls, 'Mailer should be called exactly once' );
		$this->testCase->assertEquals( [ $expectedEmail, $expectedArguments ], $this->sendMailCalls[0] );
	}
}


Creating your own Test Doubles completely sidesteps the problem of referencing method names using strings. I’ve yet to see a tool that understands the Test Doubles created by PHPUnit. Your IDE won’t find them when you search for all implementors of an interface, making refactoring, discovery and navigation harder.

A related advantage is that lack of magic not only makes the code easier to understand to tools, but also to developers. You do not need knowledge of the PHPUnit Test Double API to understand and modify your own Test Doubles.

In my projects I put Test Doubles into tests/Fixtures. Since I have a dedicated class for each Test Double, it’s easy to reuse them. And the tests in which I use them focus on what they want to test, without being polluted with Test Double creation code.

Wrapping up

Treat your tests as first class code. Avoid not needed dependencies, respect encapsulation as much as you can, try not to use magic, and keep things simple. The ::class keyword, the any and anything methods, encapsulated Test Double construction, and creating your own Test Doubles, are all things that can help with this.

May 12, 2016

DrupalCon New Orleans comes at an important time in the history of Drupal. Now that Drupal 8 has launched, we have a lot of work to do to accelerate Drupal 8's adoption as well as plan what is next.

In my keynote presentation, I shared my thoughts on where we should focus our efforts in order for Drupal to continue its path to become the leading platform for assembling the world's best digital experiences.

Based on recent survey data, I proposed key initiatives for Drupal, as well as shared my vision for building cross-channel customer experiences that span various devices, including conversational technologies like Amazon Echo.

You can watch a recording of my keynote (starting at 3:43) or download a copy of my slides (162 MB).

Take a look, and as always feel free to leave your opinions in the comments!

Did you know that often the majority of the time spent generating an HTML page goes to a few dynamic/uncacheable/personalized parts? Pages are only sent to the client after everything is rendered. Well, what if Drupal could start sending a page before it’s fully finished? What if Drupal could let the browser start downloading CSS, JS, and images, and even make the majority of the page available for interaction while Drupal generates those uncacheable parts? Good news — it can!

This is where Drupal 8’s improved architecture comes in: it allows us to do Facebook BigPipe-style rendering. You can now simply install the BigPipe module for Drupal 8.0 and see the difference for yourself. It has been included in Drupal 8.1 as an optional module (still marked experimental for now).

In this session, we will cover:

  • The ideas behind BigPipe
  • How to use BigPipe on your Drupal site
  • How Drupal is able to do this (high-level technical information)
  • How you can make your code BigPipe-compatible (technical details)
  • What else is made possible thanks to the improved architecture in Drupal 8 (ESI etc.), without any custom code

(We start with the basics/fundamentals, and gradually go deeper into how it works. Even non-Drupal people should be able to follow along.)
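The basic streaming idea behind BigPipe can be sketched in a few lines of Python. This is a toy illustration, not Drupal's actual implementation: the cacheable page shell is flushed immediately with a placeholder, and the slow, personalized part is streamed afterwards as a JavaScript patch.

```python
import time


def render_cart():
    time.sleep(0.01)  # pretend this needs an uncacheable database round-trip
    return "3 items"


def render_page():
    # 1. The shell goes out first: the browser can already fetch CSS/JS
    #    and paint most of the page around the placeholder.
    yield "<html><body><p>Cached article text</p><div id='cart'>loading…</div>"
    # 2. The expensive part renders afterwards and is patched in client-side.
    yield f"<script>replace('cart', '{render_cart()}')</script></body></html>"


html = "".join(render_page())
assert "3 items" in html
```

A real server would flush each yielded chunk to the socket as it is produced, which is what lets the browser start working before the page is complete.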

May 10, 2016

I was getting old yesterday, with pessimism taking over. But then there’s that Git pull request on your open source project, from an Argentinian developer you don’t know at all. And you discuss the idea and build on it together, step by step, and the merged result enriches not only your little software project but also you personally. Because it reminds you that this, too, is the web: a place where people collaborate out of nothing but the selfless desire to improve things. Thanks for reminding me, Pablo!

May 09, 2016

Gerry McGovern on the echo chambers of the web and our diminishing attention span:

They call it the World Wide Web. It may be worldwide in its physical reach, but is it leading to a worldwide culture, or a sense that we are citizens of the world? […] in many countries today […], we see the emergence of a new hyper-tribalism led by populist, strongman, authoritarian figures. It’s like we’re going back to the Nineteenth Century rather than advancing forward into the 21st. […] There are indications that the Web is a web of the like-minded. A Web where we search for what we’re interested in and ignore the rest. […] For a great many, the Web does not expand horizons, or change minds or attitudes. Instead, it reinforces existing attitudes and intentions.

This is a sad realization for those of us whom Stephen Fry described as “early netizens”:

I and millions of other early ‘netizens’ as we embarrassingly called ourselves, joined an online world that seemed to offer an alternative human space, to welcome in a friendly way (the word netiquette was used) all kinds of people with all kinds of views. We were outside the world of power and control. […] So we felt like an alternative culture; we were outsiders.

Pessimism is taking over; I must be getting old.

May 06, 2016

The post Podcast: Devops, SSL & HTTP2 op HTTP Café appeared first on

(Regular readers: this post is, by exception, in Dutch as it talks about a Dutch podcast.)

A while ago I was a guest on the HTTP Café podcast with Jelle & Koen. We talked about life as a "Devops" (what a term), the use of SSL, HTTP/2 and other nerdy topics.

Apart from the blunder I made about the Caddy web server (I mistakenly said it was written in NodeJS; it is in fact written in Go), I'm happy with the result.

If you're interested in a Dutch-language podcast for a change, you can listen to episode 17 here:

Feedback or remarks are, of course, very welcome.

Just a few weeks ago, I received a phone call inviting me to give a TEDx talk in Liège. Honoured, I immediately rushed to accept.

Time to prepare was short, but I was both proud and motivated. Under the enigmatic title « Changer le monde sans travailler » ("Changing the world without working"), I decided to talk about basic income.

Thanks to the active collaboration of my partner, I produced a text I was quite happy with, which I then set about studying frantically. The text led up to the subject of basic income and contained a rather confrontational moment in which I explicitly accused anyone living, directly or indirectly, off advertising of contributing to overconsumption and to the planet's ills. However terrifying the prospect was for me, I hoped to dare to attack the audience directly, and the organisation of the conference itself, for being, in the end, nothing but a big advertising billboard in the service of its sponsors.

So on April 8, I found myself, somewhat stressed, backstage in Liège's extraordinary philharmonic hall, getting to know the other speakers, each as fascinating as the next.

As an anecdote: just before going on stage, I was chatting with speaker Steven Laureys. His face looked familiar. I was sure I had seen him before.

And suddenly it clicked: he had written several articles on consciousness in the magazine Athéna, articles that had fascinated me and inspired, among other things, my post « Qu'est-ce que la conscience ».

But I barely had time to savour this unexpected encounter before it was time to go on stage and play my part.

Until the moment when…

A memory blank! The unthinkable!

You saw it, didn't you? Come on, between 8:10 and 8:30!

For what felt like an eternity, I stared into the void and improvised. Then I landed back on my feet and carried on with my text.

Horror. As I recited, I realised I had skipped precisely the key moment, the shock moment, of my talk.

My partner, who was sitting in the second row and knew the text better than I did, hesitated to prompt me, but seeing how quickly I recovered, she assumed I had deliberately toned the text down.

As I left the stage, I was congratulated by the organisers and then by the audience. Everyone seemed happy. The memory lapse had gone all but unnoticed. For the audience, that eternity of silence had been nothing more than a brief pause.

But inside, I was seething with anger. The talk I had meant to be confrontational had been amputated and thereby turned into an indictment that was pertinent, perhaps, but bland and consensual.


Trying to shrug off my disappointment, I took a seat in the hall to enjoy the other speakers, whose diversity spanned every style and every taste. Listening to the audience, I noticed that the most acclaimed speakers were also the most disliked. Every member of the audience had their own preference, their own vision.

But above all, despite the many TEDx talks I had watched online, I discovered something the web had never conveyed to me: some speakers made the audience vibrate. They weren't necessarily the best speakers, nor the most polished. But they spoke from the gut, they exposed themselves, they shared a unique experience. They transported me.

Something I had not done. Something I had never even imagined doing.

I had approached the experience like all my talks: an intellectual idea to present.

And I had not succeeded as well as I would have liked. I had made a mistake.

I loved the TEDx Liège experience. The organisers were perfect, the atmosphere among the speakers was incredible, and I made some fascinating acquaintances.

So yes, I want to do another TEDx. I want to relive that experience.

But this time, I want to expose myself, to lay myself bare. I want to talk about a subject that touches me deeply and emotionally, no longer an intellectual theory.

The theme imposed itself immediately. I want to give a TEDx talk in which I explain why I experience advertising as a suffocation, an absolute control over human creativity, and how I experiment with pay-what-you-want pricing, as both a creator and an audience member, to foster creation and artistic freedom.

So thank you, TEDx Liège, for letting me live this experience! And if you are organising a TEDx and looking for a speaker who wants to improve, I volunteer!

Thank you for taking the time to read this freely-paid post. Feel free to support me with a few milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE licence.

This Thursday, May 19, 2016 at 7 p.m., the 49th Mons session of Belgium's Jeudis du Libre will take place.

The topic of this session: PostgreSQL and streaming replication

Theme: databases | sysadmin | community

Audience: DBAs | sysadmins | companies | students

Speaker: Stefan Fercot

Venue: HEPH Condorcet, Chemin du Champ de Mars, 15 – 7000 Mons – Auditoire Bloc E – at the back of the parking lot (see this map on Openstreetmap; NOTE: the entrance, located in the corner of a very large parking lot, is hard to spot from the main road. The building is not the one used for the other sessions).

Attendance is free and only requires registration by name, preferably in advance via the page, or at the door. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also enjoy the support of our partners: CETIC, Normation, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to consult the agenda and subscribe to the mailing list so you always receive the announcements.

As a reminder, the Jeudis du Libre are intended as forums for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organised in the premises of, and in collaboration with, the Mons universities and colleges involved in IT education (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, active in promoting free software.

Description: In a hospital environment, Stefan Fercot and his team manage, day to day, the server infrastructure hosting the solution developed by their company. This computerised patient record is a central element for the care teams of many institutions. Keeping the application fast and highly available is therefore very important, and PostgreSQL, running on CentOS/RedHat servers, is used to that end.

PostgreSQL is an open source relational database engine used around the world in many sectors: web hosting, hospitals, banks, … Among its most fervent users is Skype.

The talk will cover the following topics:

  • Installation
  • A brief explanation of the internals
  • Backup methods (dumps, point-in-time recovery, streaming replication)
  • Streaming replication: feedback from its use in a cluster-type environment
  • Some figures on the data volumes managed

May 05, 2016

I’m in agreement with what Richard has to say in this specific context.


The post Apple prepares iOS to move to IPv6-only networks appeared first on

Update 11/05/2016: the original message in this article was wrong; please read the updates below.

Some pretty big news for the adoption of IPv6 today from Apple for the iOS platform (iPhone, iPad and I'm guessing Apple Watch too).

At WWDC 2015 we announced the transition to IPv6-only network services in iOS 9. Starting June 1, 2016 all apps submitted to the App Store must support IPv6-only networking. Most apps will not require any changes because IPv6 is already supported by NSURLSession and CFNetwork APIs.

If your app uses IPv4-specific APIs or hard-coded IP addresses, you will need to make some changes. Learn how to ensure compatibility by reading Supporting IPv6 DNS64/NAT64 Networks and watch Your App and Next Generation Networks.

Supporting IPv6-only Networks

If your app still uses IPv4-only endpoints (APIs, etc.), those will most likely have to be modified to also support IPv6. As of June 1st 2016, it looks like iOS apps will prefer IPv6 DNS requests and may eventually stop querying for IPv4 records in NSURLSession calls.

There's a bit of speculation going on here, as the news from Apple was very low on details.

However, "supporting an IPv6-only network" sure sounds like it'll be IPv6-only, without support for IPv4 inside your iOS apps. For WiFi or 3G/4G/LTE connections, I'm pretty sure they won't be dropping IPv4 support any time soon; this only affects your Swift/iOS code.

Apple has been giving IPv6 a 25ms timing benefit in requests over IPv4 for a while now, preparing their IPv6-only move.

Update 6/5/2016

Some good & bad news: I got some of the details wrong (that's the bad news), but the good news is there's still a very active push for IPv6 by Apple: your applications need to be ready to go IPv6-only. That means your remote endpoints need to support both IPv4 and IPv6. But it doesn't look like Apple will disable IPv4 entirely, they're just adding all the capabilities and support to be able to make that switch in a couple of months/years.

The crucial bits in Apple's announcement are in the sentence "Starting June 1, 2016 all apps submitted to the App Store must support IPv6-only networking.". They "must support IPv6-only networking", so adding support for it in your application is sufficient.

Update 11/05/2016

I should've read more about this: Supporting IPv6 DNS64/NAT64 Networks.

IPv6 and App Store Requirements

Compatibility with IPv6 DNS64/NAT64 networks will be an App Store submission requirement, so it is essential that apps ensure compatibility. The good news is that the majority of apps are already IPv6-compatible. For these apps, it’s still important to regularly test your app to watch for regressions. Apps that aren’t IPv6-compatible may encounter problems when operating on DNS64/NAT64 networks. Fortunately, it’s usually fairly simple to resolve these issues, as discussed throughout this chapter.
Supporting IPv6 DNS64/NAT64 Networks

So: you don't need to have IPv6 support on all your servers; you just need to make all your network calls through a high-level networking framework like NSURLSession, NSURLRequest, Cocoa, ... so that they can transparently query for IPv4/IPv6 records.
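The same address-family-agnostic pattern exists outside Apple's frameworks. As an illustrative sketch (not Apple's API), resolving with AF_UNSPEC lets the resolver return AAAA and/or A records, so identical code works on IPv6-only DNS64/NAT64 networks and on dual-stack ones; hard-coding AF_INET (IPv4) is exactly what breaks apps on IPv6-only networks.

```python
import socket


def resolve_any(host, port):
    # Each entry is (address family, textual address). On a NAT64 network
    # the resolver can synthesize IPv6 addresses even for IPv4-only
    # endpoints, and this code keeps working unchanged.
    return [(family, sockaddr[0])
            for family, _, _, _, sockaddr in socket.getaddrinfo(
                host, port, socket.AF_UNSPEC, socket.SOCK_STREAM)]


addresses = resolve_any("localhost", 443)
assert len(addresses) >= 1
```

A connecting client would then try the returned addresses in order, regardless of family, rather than assuming IPv4.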

I was wrong.


May 04, 2016

The post Security week: 2x High Severity OpenSSL vulnerability & critical ImageMagick flaw appeared first on

OpenSSL high severity vulnerabilities

The OpenSSL team has released details of the previously announced security vulnerabilities: two issues with severity "high" have been disclosed.

The details were posted to the mailing list.

The first bug is an out-of-bounds memory write, potentially allowing a malicious certificate to write data into memory areas used by other applications.

If an application deserializes untrusted ASN.1 structures containing an ANY field, and later reserializes them, an attacker may be able to trigger an out-of-bounds write.

This has been shown to cause memory corruption that is potentially exploitable with some malloc implementations.

Applications that parse and re-encode X509 certificates are known to be vulnerable.

OpenSSL Security Advisory

The second vulnerability could allow a man-in-the-middle (MITM) decryption of traffic. If you have access in between the server performing TLS encryption and the client connecting to it, this would allow you to decrypt that traffic.

A MITM attacker can use a padding oracle attack to decrypt traffic when the connection uses an AES CBC cipher and the server support AES-NI.

OpenSSL Security Advisory

For both fixes, an openssl update has been released. Packages are available for all Linux distributions, so it's time to update and restart all dependent services (or, when in doubt, restart the entire server).

ImageMagick: Remote Code Execution

The second vulnerability this week comes from the ImageMagick application, used widely in web applications for processing images: generating thumbnails, resizing, rotating, and so on. ImageMagick is used in both PHP and Ruby applications worldwide.

There are multiple vulnerabilities in ImageMagick, a package commonly used by web services to process images. One of the vulnerabilities can lead to remote code execution (RCE) if you process user submitted images. The exploit for this vulnerability is being used in the wild.

A number of image processing plugins depend on the ImageMagick library, including, but not limited to, PHP’s imagick, Ruby’s rmagick and paperclip, and nodejs’s imagemagick.

If an attacker can upload an image to your webserver (think forum or blog avatars, webshop product photos, blog uploads, ...), ImageMagick would parse it and could trigger a remote code execution, offering the attacker a shell or backdoor on your server.

The quickest fix here is to apply an ImageMagick policy file, which prevents the exploitable image formats from being used. Place this in the file /etc/ImageMagick/policy.xml:

  <policy domain="coder" rights="none" pattern="EPHEMERAL" />
  <policy domain="coder" rights="none" pattern="URL" />
  <policy domain="coder" rights="none" pattern="HTTPS" />
  <policy domain="coder" rights="none" pattern="MVG" />
  <policy domain="coder" rights="none" pattern="MSL" />

Updated packages for ImageMagick will surely follow soon.


Come join us this Thursday, May 12, in Arlon!

For this Jeudi du Libre, Jonathan Basse, CTO of Data Essential, will talk about his company and its services around the Elastic platform (also still known as the ELK stack, for ElasticSearch, Logstash, Kibana) and, time permitting, answer any questions you may have about Hadoop.

Rémi Laurent, for his part, will present a few concrete use cases of the ELK stack, with a quick introduction to installation, to configuring simple filters, to generating visualisations, and to a few common plugins.

About Data Essential:
Data Essential's mission is to enable its customers to build new kinds of applications, emphasising fast "big data" and cloud-style infrastructure. Data Essential aims to make the analysis of complex data accessible to everyone and to simplify life by providing smart products and innovative services.

Data Essential is a startup built around a mature team providing integration and support, working along the following axes:

  • Big Data & Analytics
  • Modern Data Architecture
  • Native Cloud Infrastructure

Practical information

  • Thursday, May 12, 2016 at 7 p.m.
  • InforJeunes Arlon – 1st floor
  • 31, place Didier, 6700 Arlon
  • For details and registration, it's here!
  • For further information, contact the 6×7 ASBL (non-profit).

ElasticSearch, Logstash, and Kibana

Today's third-party applications increasingly depend on web services to retrieve and manipulate data, and Drupal offers a range of web services options for API-first content delivery. For example, a robust first-class web services layer is now available out-of-the-box with Drupal 8. But there are also new approaches to expose Drupal data, including Services and newer entrants like RELAXed Web Services and GraphQL.

The goal of this blog post is to enable Drupal developers in need of web services to make an educated decision about the right web services solution for their project. This blog post also sets the stage for a future blog post, where I plan to share my thoughts about how I believe we should move Drupal core's web services API forward. Getting aligned on our strengths and weaknesses is an essential first step before we can brainstorm about the future.

The Drupal community now has a range of web services modules available in core and as contributed modules sharing overlapping missions but leveraging disparate mechanisms and architectural styles to achieve them. Here is a comparison table of the most notable web services modules in Drupal 8:

Feature | Core REST | RELAXed | Services
Content entity CRUD | Yes | Yes | Yes
Configuration entity CRUD | Create resource plugin (issue) | Create resource plugin | Yes
Custom resources | Create resource plugin | Create resource plugin | Create Services plugin
Custom routes | Create resource plugin or Views REST export (GET) | Create resource plugin | Configurable route prefixes
Renderable objects | Not applicable | Not applicable | Yes (no contextual blocks or views)
Translations | Not yet (issue) | Yes | Create Services plugin
Revisions | Create resource plugin | Yes | Create Services plugin
File attachments | Create resource plugin | Yes | Create Services plugin
Shareable UUIDs (GET) | Yes | Yes | Yes
Authenticated user resources (log in/out, password reset) | Not yet (issue) | No | User login and logout

Core RESTful Web Services

Thanks to the Web Services and Context Core Initiative (WSCCI), Drupal 8 is now an out-of-the-box REST server with operations to create, read, update, and delete (CRUD) content entities such as nodes, users, taxonomy terms, and comments. The four primary REST modules in core are:

  • Serialization is able to perform serialization by providing normalizers and encoders. First, it normalizes Drupal data (entities and their fields) into arrays with a particular structure. Any normalization can then be sent to an encoder, which transforms those arrays into data formats such as JSON or XML.
  • RESTful Web Services allows HTTP methods to be performed on existing resources, including but not limited to content entities and views (the latter facilitated through the "REST export" display in Views), and custom resources added through REST plugins.
  • HAL builds on top of the Serialization module and adds the Hypertext Application Language normalization, a format that enables you to design an API geared toward clients moving between distinct resources through hyperlinks.
  • Basic Auth allows you to include a username and password with request headers for operations requiring permissions beyond that of an anonymous user. It should only be used with HTTPS.
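The normalize-then-encode pipeline described above can be illustrated in miniature. This is a sketch, not Drupal's real (PHP) API: the function names are hypothetical, while the nested "list of value objects" shape mirrors what core's normalizers produce for entity fields.

```python
import json


def normalize(node):
    # Normalizer: turn an entity and its fields into plain arrays/dicts
    # with a predictable structure.
    return {
        "nid": [{"value": node["id"]}],
        "title": [{"value": node["title"]}],
    }


def encode(normalized):
    # Encoder: turn any normalized structure into a data format such as JSON.
    return json.dumps(normalized)


doc = encode(normalize({"id": 1, "title": "Hello world"}))
assert json.loads(doc)["title"][0]["value"] == "Hello world"
```

Splitting the two steps is what lets one normalization feed several encoders (JSON, XML, HAL, ...), which is exactly the flexibility the Serialization module is after.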

Core REST adheres strictly to REST principles in that resources directly match their URIs (accessible via a query parameter, e.g. ?_format=json for JSON) and in the ability to serialize non-content into JSON or XML representations. By default, core REST also includes two authentication mechanisms: basic authentication and cookie-based authentication.

While core REST provides a range of features with only a few steps of configuration, there are several reasons why other options, available as contributed modules, may be a better choice. Limitations of core REST include the lack of support for configuration entities as well as the inability to include file attachments and revisions in response payloads. With your help, we can continue to improve and expand core's REST support.

RELAXed Web Services

As I highlighted in my recent blog post about improving Drupal's content workflow, RELAXed Web Services is part of a larger suite of modules handling content staging and deployment across environments. It is explicitly tied to the CouchDB API specification and, when enabled, yields a REST API that operates like the CouchDB REST API. This enables integration with CouchDB client-side libraries such as PouchDB, which makes an offline-enabled Drupal possible, synchronizing content once the client regains connectivity. Moreover, people new to Drupal with exposure to CouchDB will immediately understand the API, since the endpoints are robustly documented.

RELAXed Web Services depends on core's REST modules and extends its functionality by adding support for translations, parent revisions (through the Multiversion module), file attachments, and especially cross-environment UUID references, which make it possible to replicate content to Drupal sites or other CouchDB compatible services. UUID references and revisions are essential to resolving merge conflicts during the content staging process. I believe it would be great to support translations, parent revisions, file attachments, and UUID references in core's RESTful web services — we simply didn't get around to them in time for Drupal 8.0.0.


Services

Since RESTful Web Services are now incorporated into Drupal 8 core, relevant contributed modules have either been superseded or have gained new missions extending core REST functionality. In the case of Services, a popular Drupal 7 module for providing Drupal data to external applications, the module has evolved considerably for its upcoming Drupal 8 release.

With Services in Drupal 8 you can assign a custom name to your endpoint to distinguish your resources from those provisioned by core and also provision custom resources similar to core's RESTful Web Services. In addition to content entities, Services supports configuration entities such as blocks and menus — this can be important when you want to build a decoupled application that leverages Drupal's menu and blocks system. Moreover, Services is capable of returning renderable objects encoded in JSON, which allows you to use Drupal's server-side rendering of blocks and menus in an entirely distinct application.

At the time of this writing, the Drupal 8 version of Services module is not yet feature-complete: there is no test coverage, no content entity validation (when creating or modifying), no field access checking, and no CSRF protection, so caution is important when using Services in its current state, and contributions are greatly appreciated.


GraphQL

GraphQL, originally created by Facebook to power its data fetching, is a query language that enables fewer requests and leaner responses. Rather than tightly coupling responses to a predefined schema, GraphQL overturns this common practice by letting the client's request explicitly tailor the response, so the client receives only what it needs: no more and no less. To accomplish this, client requests and server responses share a common shape. GraphQL doesn't fall into the same category as the web services modules that expose a REST API, and as such it is absent from the table above.

GraphQL shifts responsibility from the server to the client: the server publishes its possibilities, and the client publishes its requirements instead of receiving a response dictated solely by the server. In addition, information from related entities (e.g. both a node's body and its author's e-mail address) can be retrieved in a single request rather than successive ones.
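As a sketch of what such a client-specified query looks like (the field names here are illustrative, not necessarily the Drupal GraphQL module's exact schema), a client could fetch a node's body and its author's e-mail address in one request:

```graphql
{
  node(id: 1) {
    body
    author {
      mail
    }
  }
}
```

The response mirrors the query's shape exactly, containing only those fields; nothing the client did not ask for is sent.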

Typical REST APIs tend to be static (or versioned, in many cases, e.g. /api/v1) in order to facilitate backwards compatibility for applications. However, in Drupal's case, when the underlying content model is inevitably augmented or otherwise changed, schema compatibility is no longer guaranteed. For instance, when you remove a field from a content type or modify it, Drupal's core REST API is no longer compatible with those applications expecting that field to be present. With GraphQL's native schema introspection and client-specified queries, the API is much less opaque from the client's perspective in that the client is aware of what response will result according to its own requirements.

I'm very bullish on the potential for GraphQL, which I believe makes a lot of sense in core in the long term. I featured the project in my Barcelona keynote (demo video), and Acquia also sponsored development of the GraphQL module (Drupal 8 only) following DrupalCon Barcelona. The GraphQL module, created by Sebastian Siemssen, now supports read queries, implements the GraphiQL query testing interface, and can be integrated with Relay (with some limitations).


Conclusion

For most simple REST API use cases, core REST is adequate, but it can fall short for more complex ones. Depending on your use case, you may need more off-the-shelf functionality without writing a resource plugin or custom code, such as support for configuration entity CRUD (Services); for revisions, file attachments, translations, and cross-environment UUIDs (RELAXed); or for client-driven queries (GraphQL).

Special thanks to Preston So for contributions to this blog post and to Moshe Weitzman, Kyle Browning, Kris Vanderwater, Wim Leers, Sebastian Siemssen, Tim Millwood and Ted Bowman for their feedback during its writing.

May 03, 2016

May 02, 2016

Recently I was discussing TLS everywhere with some people, and we got onto the Let's Encrypt initiative. I had to admit I had only tested it some time ago (just for "fun"), but I suddenly looked at it from a different angle: while the most common use case is to install/run the letsencrypt client on a node and let it configure that node directly, that is something I didn't want to deal with. I still think proper web server configuration should happen through cfgmgmt, and not through another process (and the same goes for key/cert distribution, something for a different blog post maybe).

So suppose you're (pushing|pulling) your web server configuration automatically from $cfgmgmt, but you want to use/deploy TLS certificates signed by Let's Encrypt. What can you do? Well, the good news is that you aren't forced to let the letsencrypt client touch your configuration at all: you can use the "certonly" option to just generate the private key locally, send the CSR, and get the signed cert back (along with the whole chain). One thing to know about Let's Encrypt is that its validation/verification process isn't the one you see at most companies providing CA/signing services: as there is no ID/paperwork verification (or similar), the only validation for the domain/subdomain you want a certificate for happens over an HTTP request (basically: create a file containing a challenge, serve a request from their ACME server[s] retrieving that file, and validate its content).

So what are our options? The letsencrypt documentation mentions several plugins: manual (requires you to place the challenge-answer file on the web server yourself, then launch the validation process), standalone (doesn't work if you already have an httpd/nginx process running, as there will be a port conflict), and webroot (works fine, as it just writes the file itself under /.well-known/ in the DocumentRoot).

The webroot plugin seems easy, but as said, we don't want to install letsencrypt on the web server[s] at all. Even worse, suppose (and that's the case I had in mind) that you have multiple web nodes configured CDN-style: you don't want to distribute the challenge file to all the nodes for validation (when using the "manual" plugin), yet you'd have to do it on all of them, as you don't know in advance which one will be verified by the ACME server.

So what about something centralized (where you'd run the letsencrypt client locally) for all your certs (including some with SANs), in a transparent way? I thought of something like this:

Single Letsencrypt node

The idea would be to:

  • use a central node (VM, Docker container, make-your-choice-here) to run the letsencrypt client
  • have the ACME server transparently hit one of the web servers without any file being changed/uploaded there
  • have that server answer the GET request for the challenge file by using the central letsencrypt node as a backend
  • the ACME server is then happy, and the signed certificates automatically become available on the centralized letsencrypt node.

The good news is that this is possible, and even really easy to implement, through ProxyPass (for the httpd/Apache web server) or proxy_pass (for nginx-based setups).

For example, in the httpd vhost .conf file for our domain (served by three nodes in our example) we can just add this (the central-letsencrypt-node hostname below is a placeholder for wherever you run the letsencrypt client):

<Location "/.well-known/">
    ProxyPass "http://central-letsencrypt-node/.well-known/"
</Location>

So now, once that's in place everywhere, you can generate the cert for that domain on the central letsencrypt node (assuming that httpd is running on that node, that it's reachable from the "frontend" nodes, and that /var/www/html is indeed the DocumentRoot (default) for httpd on that node):

letsencrypt certonly --webroot --webroot-path /var/www/html --manual-public-ip-logging-ok --agree-tos --email -d

The same applies if you run nginx instead (let's assume this for the other nodes): you just have to add a snippet in your vhost .conf file (before the location / definition too):

location /.well-known/ {
        proxy_pass http://central-letsencrypt-node;  # placeholder for your central node
}

And then on the central node do the same thing, but note that you can add multiple -d flags for multiple SubjectAltName entries in the same cert:

letsencrypt certonly --webroot --webroot-path /var/www/html --manual-public-ip-logging-ok --agree-tos --email -d -d

Transparent, smart, easy to do, and even something you can deploy only when you need to renew, then remove to get back to the initial config files (if you don't want those ProxyPass directives active all the time)

One more thing worth knowing: once you have proper TLS in place, it's usually better to transparently redirect all requests hitting your http server to the https version. Most people will do that (next example for httpd/apache) like this:

   RewriteEngine On
   RewriteCond %{HTTPS} !=on
   RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]

That's good, but when you renew the certificate you'll want to be sure that GET requests for /.well-known/* keep working over plain http (from the ACME server), so we can tune those rules a little (RewriteCond directives are cumulative, so the request will not be redirected if the URL starts with /.well-known):

   RewriteEngine On
   RewriteCond $1 !^.well-known
   RewriteCond %{HTTPS} !=on
   RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]

Different syntax, but the same principle for nginx (again a snippet, not the full configuration file for that server/vhost):

location /.well-known/ {
        proxy_pass http://central-letsencrypt-node;  # placeholder for your central node
}
location / {
        rewrite        ^ https://$server_name$request_uri? permanent;
}

Hope you'll have found this useful, especially if you don't want to deploy letsencrypt everywhere but still want to use it to generate your keys/certs locally. Once done, you can distribute/push/pull (depending on your cfgmgmt) those files, and don't forget to also implement proper monitoring for certificate validity, and automation around renewal too (consider that your homework)

Felt good about explaining my work last time. For no reason. I guess I’m happy, or I no longer feel PGO’s pressure or something. Having to be politically correct all the time sucks. Making technically and architecturally good solutions is what drives me.

Today I explained the visitor pattern. We want to parse Klartext in such a way that we can present its structure in an editing component. It’s the same component for which I utilized an LRU last week. We want to visualize significant lines like tool changes, but also make cycles foldable like SciTE does with source code, and a whole lot of other stuff that I can’t tell you because of teh secretz. Meanwhile these files are, especially when generated using cad-cam software, amazingly huge.

Today I had some success explaining visitor by using the Louvre as the thing that is “visitable” (the AST) and a Japanese guy who wants to collect state (photos) as a visitor of fine arts. Hoping my good-taste solutions (not my words, it’s how Matthias Hasselmann describes my work at Nokia) will once again yield a certain amount of success.

ps. I made sure that all the politically-correcting categories are added to this post. So if you’d filtered away the condescending and controversial posts from my blog, you could have protected yourself from being in total shock now (because I used the sexually tinted word “sucks”, earlier). Guess you didn’t. Those categories have been in place on my blog’s infrastructure for many years. They are like the Körperwelten (Bodyworlds) exhibitions; you don’t have to visit them.

In a recent post we talked about how introducing outside-in experiences could improve the Drupal site-building experience by letting you immediately edit simple configuration without leaving the page. In a follow-up blog post, we provided concrete examples of how we can apply outside-in to Drupal.

The feedback was overwhelmingly positive. However, there were also some really important questions raised. The most common concern was the idea that the mockups ignored "context".

When we showed how to place a block "outside-in", we placed it on a single page. However, in Drupal a block can also be made visible for specific pages, types, roles, languages, or any number of other contexts. The flexibility this provides is one place where Drupal shines.

Why context matters

For the sake of simplicity and focus we intentionally did not address how to handle context in outside-in in the last post. However, incorporating context into "outside-in" thinking is fundamentally important for at least two reasons:

  1. Managing context is essential to site building. Site builders commonly want to place a block or menu item that will be visible on not just one but several pages or to not all but some users. A key principle of outside-in is previewing as you edit. The challenge is that you want to preview what site visitors will see, not what you see as a site builder or site administrator.
  2. Managing context is a big usability problem on its own. Even without outside-in patterns, making context simple and usable is an unsolved problem. Modules like Context and Panels have added lots of useful functionality, but all of it happens away from the rendered page.

The ingredients: user groups and page groups

To begin to incorporate context into outside-in, Kevin Oleary, with input from yoroy, Bojhan, Angie Byron, Gábor Hojtsy and others, has iterated on the block placement examples that we presented in the last post, to incorporate some ideas for how we can make context outside-in. We're excited to share our ideas and we'd love your feedback so we can keep iterating.

To solve the problem, we recommend introducing 3 new concepts:

  1. Page groups: re-usable collections of URLs, wildcards, content types, etc.
  2. User groups: reusable collections of roles, user languages, or other user attributes.
  3. Impersonation: the ability to view the page as a user group.

Page groups

Most sites have some concept of a "section" or "type" of page that may or may not equate to a content type. A commerce store, for example, may have a "kids" section with several product types that share navigation or other blocks. Page groups adapt to this by creating reusable "bundles" of content, consisting either of a certain type (e.g. all research reports), of manually curated lists of pages (e.g. a group that includes /home, /contact us, and /about us), or a combination of the two (similar to the Context module, though Context never provided an in-place UI).

User groups

User groups would combine multiple user contexts like role, language, location, etc. Example user groups could be "Authenticated users logged in from the United States", or "Anonymous users that signed up to our newsletter". The goal is to combine the massive number of potential contexts into understandable "bundles" that can be used for context and impersonation.

Impersonation

As mentioned earlier, a challenge is that you want to preview what site visitors will see, not what you see as a site builder or site administrator. Impersonation allows site builders to switch between different user groups, so a page can be previewed as that type of user would see it.

Using page groups, user groups and impersonation

Let's take a look at how we use these 3 ingredients in an example. For the purpose of this blog post, we want to focus on two use cases:

  1. I'm a site builder working on a life sciences journal with a paywall and I want to place a block called "Download report" next to all entities of type "Research summary" (content type), but only to users with the role "Subscriber" (user role).
  2. I want to place a block called "Access reports" on the main page, the "About us" page, and the "Contact us" page (URL based), and all research summary pages, but only to users who are anonymous users.

Things can get more complex but these two use cases are a good starting point and realistic examples of what people do with Drupal.

Step #1: place a block for anonymous users

Let's assume the user is a content editor, and the user groups "Anonymous" and "Subscriber" as well as the page groups "Subscriber pages" and "Public pages" have already been created for her by a site builder. Her first task is to place the "Access reports" block and make it visible only for anonymous users.

Place a block for anonymous users

First the editor changes the impersonation to "Anonymous" then she places the block. She is informed about the impact of the change.

Step #2: place a block for subscribers

Our editor's next task is to place the "Download reports" block and make it visible only for subscribers. To do that she is going to want to view the page as a subscriber. Here it's important that this interaction happens smoothly, and with animation, so that changes that occur on the page are not missed.

Place a block for subscribers

The editor changes the impersonation to "Subscribers". When she does, the "Access reports" block is hidden, as it is not visible to subscribers. When she places the "Download report" block and chooses the "Subscriber pages" page group, she is notified about the impact of the change.

Step #3: see if you did it right

Once our editor has finished step one and two she will want to go back and make sure that step two did not undo or complicate what was done in step one, for example by making the "Download report" block visible for Anonymous users or vice versa. This is where impersonation comes in.

Confirm you did it right

The anonymous users need to see the "Access reports" block and subscribers need to see the "Download report" block. Impersonation lets you see what that looks like for each user group.


The idea of combining a number of contexts into a single object is not new; both Context and Panels do this. What is new here is that when you bring this to the front end with impersonation, you can make a change that has broad impact while seeing it exactly as your user will.

April 30, 2016

I’ve created a new release of the “NameStuff” mod for Supreme Commander Forged Alliance, now titled “Name All The Things“. So no, this is not a post where I rant about good variable naming 🙂

The mod automatically sets names for your units, which can change based on their status. For instance, damage can cause a unit to be labeled “angry”, while idle units can be labeled “lazy” or “useless”. These names are visible to you, and to the people you are playing with. It is a so called UI-mod, which means you can have it enabled for yourself, without other players in the game having it or needing to enable it as well.

A game with the Name All The Things mod

The mod was created in 2014 by Sheeo, at least if the details in the mod info file are correct. Going by other clearly incorrect information in that file, that might well not be the case. Anyway, the credits for creating this mod do not go to me, and the original author has my thanks for creating this fun mod.

New features in Name All The Things 2.0.0

  • All texts are now configurable at the top of the file
  • Out of fuel units will be “hungry”
  • Workers and non-workers now have different IDLE messages
  • You can now set special names that only show up for UEF units

Furthermore, the mod does now have an icon and a README. The code also got cleaned up a bit, so it should now be easier to understand and modify further.

The default list of “base-names” got changed to those of my FAF friends, or names otherwise funny to them. Many of the other name parts also got changed, for instance a full health unit will no longer be “happy unit” but rather “such a unit”.

You can get the new version here. I’d like to get it into the FAF mod vault, unfortunately that thing still is ALL of the broken.

What is up next?

When doing a game with thousands of units, the mod can cause some lag. An obvious solution for this is to only give names to the oldest 500-ish units alive. I had a go at implementing this, though I found it’s harder to do than I expected.

Some other ideas

  • Make it easier to configure the mod. Ideally in the game/lobby, else via config files (so no more lua editing)
  • Name buildings
  • Take veterancy into account (Idea by Such a Figaro)
  • Do something with ACU names
  • Special names for hover tanks and arty: “Such Floaty”, “Floaty Crap”, etc.

From my point of view the code interacts with black boxes and is next to impossible to test. If someone can point me to where I can find the actual interface definition of unit, that’d be much appreciated. Knowing which information is available is of course very helpful when looking for cool new things that this mod could do.

Your ideas and suggestions are welcome. If you’ve got new prefixes to add, or have a great new feature in mind, please share your ideas. If you have code modifications, you can submit them as a patch.


Name All The Things

April 29, 2016


If cycling doesn't interest you at all, you may not be aware of a debate currently raging within the professional peloton: should disc brakes be allowed on competition bikes?

Maybe cycling doesn't interest you, but this anecdote is interesting for more than one reason, because it illustrates very well our inability to rationally evaluate a danger, and the influence emotional media can have on political decision-making processes.

In the end, it shows us that we are not looking for safety, only for an illusion of it.

Disc brakes, what are they?

The purpose of a brake is to slow down or even stop a vehicle. Most of the time, this is done by converting kinetic energy into heat.

On most bikes until a few years ago, a brake consisted of two pads pinching the rim. By rubbing against the pads, the rim slows down while heating up.


A rim brake, by Bart Heird.

Then came disc brakes: the principle is exactly the same, but instead of applying the pad to the rim, it is applied to a disc fixed at the center of the wheel and designed specifically for this purpose.


A disc brake, by Jamis Bicycle Canada.


The advantages are numerous:

  • Unlike the rim, the disc is very thin and is not subject to constant mechanical torsion. It is therefore possible to apply precise pressure. On a rim that is even slightly out of true, braking is fairly unpredictable: the brake can rub without being applied, or fail to engage properly when you brake. No such problem with a disc.
  • The disc generally stays much cleaner than the rim (which runs through mud and dust), which allows better braking in all weather.
  • The disc is designed solely for braking. It is therefore possible to choose the most suitable material. The rim, by contrast, has to meet mechanical constraints of strength and lightness; braking quality is secondary.
  • The disc is designed to dissipate the heat generated; the rim is not. Under too long a braking effort, the rim can heat up so much that the tire peels off. (This is what happened to Beloki in 2003, forcing Armstrong into a now-famous off-road excursion.)

The result is that a disc brake delivers consistent, constant braking whatever the weather, speed and road surface. A cyclist equipped with disc brakes has a degree of control that rim brakes cannot match.

Disc brakes in competition

Disc brakes have thus conquered every domain of cycling, starting with mountain biking and cyclocross. All of them? No: not road cycling.

The reasons? First of all, disc brakes are heavier and less aerodynamic, which matters particularly in this discipline. But the professionals also fear that a disc could cause nasty injuries in pileup crashes, where riders end up stacked on top of one another.

The Union Cycliste Internationale had nevertheless decided to allow them provisionally, for gradual testing in 2015 and then 2016.

Everything seemed to be going well until the rider Fran Ventoso cut himself in a crash during the famous Paris-Roubaix race. His injury is impressive and, according to him, was caused by a brake disc. The greatest caution is warranted here, since the rider himself did not see that it was a disc, and no rider equipped with disc brakes crashed or reported having been touched in that sector.

Nevertheless, photos of the injury went around the web, and testimonies comparing discs to razor blades or butcher's slicers quickly went viral.

Is this proof, then, that disc brakes are dangerous and must be banned?

Analyzing the danger

As always, human beings are quick to seize on the anecdotes that suit them in order to convince themselves. But if we analyze the problem rationally, a very different reality emerges.

A bicycle is, by nature, made of components that can be particularly dangerous: a chain, toothed sprockets, very thin metal spokes on wheels spinning at high speed. None of these has ever been considered a problem; they are part of cycling. A video on Facebook seems to show that a brake disc is not particularly sharp. At most there is a risk of burns if you touch it right after a very long braking effort.

During the 2015-2016 test period, professional road cycling thus had one, and only one, accident (potentially) involving a disc brake.

Over the same period, races saw a significant number of major accidents involving motorbikes or cars belonging to the race organization. The most absurd is surely that of Greg Van Avermaet, then leading his race, who was knocked into the ditch by a television motorbike. The rider in second place, Adam Yates, passed Van Avermaet without seeing him and crossed the finish line convinced he had come second. But the most dramatic accident remains the death of the rider Antoine Demoitié, struck in the head by an organization motorbike after a harmless fall.

A modern bicycle race is indeed a profusion of motorized vehicles trying to thread their way between the bikes. With serious consequences: not a Tour de France goes by without at least one rider being knocked down by a vehicle.

If the physical safety of the riders were really a concern, the use of vehicles during bicycle races would be severely reconsidered. That is in fact what many riders are asking for, but with no echo from the federation or the media. After all, the television motorbikes are the sole motivation of the sponsors who pay the riders' salaries…

What is at stake in the debate

Today, a single statistically anecdotal injury will potentially push back the arrival of disc brakes in the professional peloton by several years, for the simple reason that the photos are impressive.

Yet it is obvious that, for a lone cyclist, disc brakes greatly improve safety. They have also been used successfully for years at the highest level in mountain biking and cyclocross. Is road cycling an exception? Don't the obvious safety gains of better braking outweigh the risk of getting cut?

Having no racing experience, I am in no position to judge.

At most I can note that professional cyclists fought for years against the mandatory helmet, now an indisputable safety item. The opposition was such that a transition period had to be established, during which riders could get rid of their helmet upon reaching the final climb of a race.

Shouldn't we also consider the example they set, at a time when promoting cycling over the car is becoming a societal issue?

Following the buzz around the particularly impressive photos of Ventoso's injury, I have heard of individuals refusing to buy a leisure bike with disc brakes, or even believing that discs were now going to be banned on all bikes. Organizers of friendly amateur races are also talking about banning discs. Banning a technology that could potentially prevent accidents! Forbidding amateurs, most of whom ride their bikes in daily traffic, from having disc brakes if they want to take part in "sportives", those half-ride, half-friendly-competition events.

Resistance to change

Seen from this angle, the implications and stakes of this story go far beyond a nasty cut. It illustrates the extent to which human beings are permanently fighting against change, whatever form it may take.

In the social-media narrative, the following proposition seems logical: "A professional cyclist in a very particular race cuts himself and thinks his injury was caused by brakes. All the bikes in the world should therefore now use less effective brakes."

Our perception of danger is completely skewed by the media (in this case a photo of an injury), by the narrative (the use of analogies with razor blades), and completely irrational (motorbikes and cars are familiar, so they don't appear dangerous; the accident is a one-off case; etc.).

Under the fallacious pretext of supposed risks, we generally refuse to face the risks we are already running, for the simple reason that we want to wallow in our comfortable, outdated immobility. We exaggerate the risks brought by anything new. And we reject the innovations that could bring us real safety.

In the end, human beings are not looking for safety at all. They are looking for the illusion of it. So aren't our politicians giving us exactly what we are looking for?

Isn't the fact that bikes will now brake less well because of one bloody photo on social networks a marvelous analogy, an extraordinary summary of the whole security policy we have been putting in place over the last few decades?


Photo by photographer.

Thank you for taking the time to read this freely-paid post. Feel free to support me with a few milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

For the ones who didn’t find the LRU in Tracker’s code (and for the ones who were lazy).

Let’s say we will have instances of something called a Statement. Each of those instances is relatively big. But we can recreate them relatively cheap. We will have a huge amount of them. But at any time we only need a handful of them.

The ones that are most recently used are most likely to be needed again soon.

First you make a structure that will hold some administration of the LRU:

typedef struct {
	Statement *head;
	Statement *tail;
	unsigned int size;
	unsigned int max;
} StatementLru;

Then we make the user of a Statement (a view or a model). I’ll be using a Map here; in Qt you can for example use QMap for this. Usually I want relatively fast access based on a key. You could also loop the stmt_lru each time to find the instance you want in useStatement, based on something in the Statement itself. That would rid you of the overhead of a map.

class StatementUser {
	void useStatement(KeyType key);
	StatementLru stmt_lru;
	Map<KeyType, Statement*> stmts;
	StatementFactory stmt_factory;
};

Then we add the members prev and next to the private fields of the Statement class: we’ll make a circular doubly linked list.

class Statement: QObject {
	Statement *next;
	Statement *prev;
};

Next we initialize the LRU:

	stmt_lru.max = 500;
	stmt_lru.size = 0;		

Then we implement using the statements:

void StatementUser::useStatement(KeyType key)
{
	Statement *stmt;

	if (!stmts.get (key, &stmt)) {

		stmt = stmt_factory.createStatement(key);

		stmts.insert (key, stmt);

		/* So the ring looks a bit like this: *
		 *                                    *
		 *    .--tail  .--head                *
		 *    |        |                      *
		 *  [p-n] -> [p-n] -> [p-n] -> [p-n]  *
		 *    ^                          |    *
		 *    `- [n-p] <- [n-p] <--------'    */

		if (stmt_lru.size >= stmt_lru.max) {
			Statement *new_head;

		/* We reached max-size of the LRU stmt cache. Destroy current
		 * least recently used (stmt_lru.head) and fix the ring. For
		 * that we take out the current head, and close the ring.
		 * Then we assign head->next as new head. */

			new_head = stmt_lru.head->next;
			auto to_del = stmts.find (stmt_lru.head);
			stmts.remove (to_del);
			delete stmt_lru.head;
			stmt_lru.head = new_head;
		} else {
			if (stmt_lru.size == 0) {
				stmt_lru.head = stmt;
				stmt_lru.tail = stmt;
			}
			stmt_lru.size++;
		}

	/* Set the current stmt (which is always new here) as the new tail
	 * (new most recent used). We insert current stmt between head and
	 * current tail, and we set tail to current stmt. */

		stmt->next = stmt_lru.head;
		stmt_lru.head->prev = stmt;

		stmt_lru.tail->next = stmt;
		stmt->prev = stmt_lru.tail;
		stmt_lru.tail = stmt;

	} else {
		if (stmt == stmt_lru.head) {

		/* Current stmt is least recently used, shift head and tail
		 * of the ring to efficiently make it most recently used. */

			stmt_lru.head = stmt_lru.head->next;
			stmt_lru.tail = stmt_lru.tail->next;
		} else if (stmt != stmt_lru.tail) {

		/* Current statement isn't most recently used, make it most
		 * recently used now (less efficient way than above). */

		/* Take stmt out of the list and close the ring */
			stmt->prev->next = stmt->next;
			stmt->next->prev = stmt->prev;

		/* Put stmt as tail (most recent used) */
			stmt->next = stmt_lru.head;
			stmt_lru.head->prev = stmt;
			stmt->prev = stmt_lru.tail;
			stmt_lru.tail->next = stmt;
			stmt_lru.tail = stmt;
		}

	/* if (stmt == tail), it's already the most recently used in the
	 * ring, so in this case we do nothing of course */
	}

	/* Use stmt */
}

In case StatementUser and Statement form a composition (StatementUser owns the Statements, which is what makes most sense), don’t forget to delete the instances in the destructor of StatementUser. In the example’s case we used heap objects. You can loop the stmt_lru or the map here:

	Map<KeyType, Statement*>::iterator i;
	for (i = stmts.begin(); i != stmts.end(); ++i) {
		delete i.value();
	}
April 28, 2016

Recently, some people started to ask for proper IPv6/AAAA records for some of our public mirror infrastructure.

The reason is that a lot of people are now using IPv6 wherever possible, and from a CentOS point of view we should ensure that everybody can get content over (legacy) IPv4 and IPv6. Funny that I call IPv4 "legacy" while we still have to admit that it's the default everywhere, even in 2016, with the available pools now exhausted.

While we already had AAAA records for some of our public nodes, I started to "chase" proper native IPv6 connectivity for our other nodes. That's where I had to get in touch with all our valuable sponsors. First thing to say: we'd like to thank them all for their support for the CentOS Project over the years; it wouldn't have been possible to deliver multiple terabytes of data per month without their sponsorship!

WRT IPv6 connectivity, that's where the results of my quest really differed: while some DCs support IPv6 natively, and even answer you within 5 minutes when you ask for a /64 subnet to be allocated, some others aren't IPv6-ready at all. In the worst case the answer was "nothing ready and no plan for that"; sometimes the answer was something like "it's on the roadmap for 2018/2019".

The good news is that ~30% of our mirror nodes now have IPv6 connectivity, so the next step is to test our various configurations (distributed by Puppet) and then our GeoIP redirection too (done at the PowerDNS level for such records, for which we'll then also add proper AAAA records)

Hopefully we'll have that tested and then announced soon, and also for other public services that we're providing to you.

Stay tuned for more info about IPv6 deployment within our infrastructure!

The one big question I get asked over and over these days is: "How is Drupal 8 doing?". It's understandable. Drupal 8 is the first new version of Drupal in five years and represents a significant rethinking of Drupal.

So how is Drupal 8 doing? With less than half a year since Drupal 8 was released, I'm happy to answer: outstanding!

As of late March, over 60,000 Drupal 8 sites had been counted. Looking back at the first four months of Drupal 7, about 30,000 sites had been counted. In other words, Drupal 8 is being adopted twice as fast as Drupal 7 was in its first four months following release.

As we near the six-month mark since releasing Drupal 8, the question "How is Drupal 8 doing?" takes on more urgency for the Drupal community with a stake in its success. For the answer, I can turn to years of experience: while the number of new Drupal projects typically slows down in the year leading up to the release of a new version, adoption of the newest version takes up to a full year before we see the number of new projects really take off.

Drupal 8 is in the middle of an interesting point in its adoption cycle. This is the phase where customers are looking for budgets to pay for migrations. This is the time when people focus on learning Drupal 8 and its new features. This is when the modules that extend and enhance Drupal need to be ported to Drupal 8; and this is the time when Drupal shops and builders are deep in the three- to six-month sales cycle it takes to sell Drupal 8 projects. This is often a phase of uncertainty, but all of this is happening now, and every day there is less and less uncertainty. Based on my past experience, I am confident that Drupal 8 will be adopted at "full force" by the end of 2016.

A few weeks ago I launched the Drupal 2016 product survey to take the pulse of the Drupal community. I plan to talk about the survey results in my DrupalCon keynote in New Orleans on May 10th, but in light of this blog post I felt the results for one of the questions were worth sharing and commenting on sooner:

Survey drupal adoption

Over 1,800 people have answered that question so far. People were allowed to pick up to 3 answers from a list for the single question. As you can see in the graph, the top two reasons people give for not having upgraded to Drupal 8 yet are (1) that they are waiting for contributed modules to become available and (2) that they are still learning Drupal 8. The results from the survey confirm what we see with every release of Drupal: it takes time for the ecosystem, both the technology and the people, to come along.

Fortunately, many of the most important modules, such as Rules, Pathauto, Metatag, Field Collection, Token, Panels, Services, and Workbench Moderation, have already been ported and tested for Drupal 8. Combined with the fact that many important modules, like Views and CKEditor, moved to core, I believe we are getting really close to being able to build most websites with Drupal 8.

The second reason people cited for not jumping onto Drupal 8 yet was that they are still learning Drupal 8. One of the great strengths of Drupal has long been the willingness of the community to share its knowledge and teach others how to work with Drupal. We need to stay committed to educating builders and developers who are new to Drupal 8, and DrupalCon New Orleans is an excellent opportunity to share expertise and learn about Drupal 8.

What is most exciting to me is that less than 3% answered that they plan to move off Drupal altogether, and therefore won't upgrade at all. Non-response bias aside, that is an incredible number as it means the vast majority of Drupal users plan to eventually upgrade.

Yes, Drupal 8 is a significant rethinking of the Drupal we all knew and loved for so long. It will take time for the Drupal community to understand Drupal 8's new design and capabilities and how to harness that power, but I am confident Drupal 8 is the right technology at the right time, and the adoption numbers so far back that up. Expect Drupal 8 adoption to start accelerating.

Stephen Fry wrote an insightful critique about what the web was and what it has become:

The real internet [as opposed to AOL] was that Wild West where anything went, shit in the streets and Bad Men abounding, no road-signs and no law, but ideas were freely exchanged, the view wasn’t ruined by advertising billboards and every moment was very very exciting.


I and millions of other early ‘netizens’ as we embarrassingly called ourselves, joined an online world that seemed to offer an alternative human space, to welcome in a friendly way (the word netiquette was used) all kinds of people with all kinds of views. We were outside the world of power and control. Politicians, advertisers, broadcasters, media moguls, corporates and journalists had absolutely zero understanding of the net and zero belief that it mattered. So we felt like an alternative culture; we were outsiders.

Those very politicians, advertisers, media moguls, corporates and journalists who thought the internet a passing fad have moved in and grabbed the land. They have all the reach, scope, power and ‘social bandwidth’ there is. Everyone else is squeezed out — given little hutches, plastic megaphones and a pretence of autonomy and connectivity. No wonder so many have become so rude, resentful, threatening and unkind. […]

The radical alternative now must be to jack out of the matrix, to go off the grid. […]

I live in a world without Facebook, and now without Twitter. I manage to survive too without Kiki, Snapchat, Viber, Telegram, Signal and the rest of them. I haven’t yet learned to cope without iMessage and SMS. I haven’t yet turned my back on email and the Cloud. I haven’t yet jacked out of the matrix and gone off the grid. Maybe I will pluck up the courage. After you …

While not off the grid yet, Stephen Fry blogs on WordPress and his blog uses my own little Autoptimize plugin. Let that be my proudest boast of the day. Sorry Stephen …

April 27, 2016

Last week, I secretly reused my own LRU code in the model of the editor of a CNC machine (it works with truly huge files, hence needs a statement editor). I rewrote my own code, of course: it's Qt-based, not GLib, so it wouldn't have worked in its original form anyway. But it's the same principle. Don't tell Jürg, who helped me write that back then.

Extra points and free beer for people who can find it in Tracker’s code.
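For the curious, the LRU principle itself fits in a few lines. Here is a generic sketch in Python (an illustration of the idea only, not the Qt-based or Tracker code):

```python
from collections import OrderedDict

class LRUCache:
    """A minimal LRU cache: evicts the least-recently-used entry when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key not in self._data:
            return default
        self._data.move_to_end(key)          # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # evict the least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")            # "a" is now the most recently used
cache.put("c", 3)         # capacity exceeded: evicts "b"
print(list(cache._data))  # ['a', 'c']
```

The same structure works for an editor's statement cache: statements are the values, and only the most recently visited ones stay in memory.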

April 26, 2016

The post Yum Update: DB_RUNRECOVERY Fatal error, run database recovery appeared first on

If for some reason your server's disk I/O fails during a yum or RPM operation, you can see the following error whenever you run yum or rpm:

# yum update
rpmdb: page 18816: illegal page type or format
rpmdb: PANIC: Invalid argument
rpmdb: Packages: pgin failed for page 18816
error: db4 error(-30974) from dbcursor->c_get: DB_RUNRECOVERY: Fatal error, run database recovery
rpmdb: PANIC: fatal region error detected; run recovery
error: db4 error(-30974) from dbcursor->c_close: DB_RUNRECOVERY: Fatal error, run database recovery
No Packages marked for Update
rpmdb: PANIC: fatal region error detected; run recovery
error: db4 error(-30974) from db->close: DB_RUNRECOVERY: Fatal error, run database recovery
rpmdb: PANIC: fatal region error detected; run recovery
error: db4 error(-30974) from db->close: DB_RUNRECOVERY: Fatal error, run database recovery
rpmdb: File handles still open at environment close
rpmdb: Open file handle: /var/lib/rpm/Packages
rpmdb: Open file handle: /var/lib/rpm/Name
rpmdb: PANIC: fatal region error detected; run recovery
error: db4 error(-30974) from dbenv->close: DB_RUNRECOVERY: Fatal error, run database recovery

The fix is, thankfully, rather easy: move the RPM database's stale Berkeley DB files out of the way, rebuild the database and let yum re-download the mirrors' file lists.

# mv /var/lib/rpm/__db* /tmp/
# rpm --rebuilddb
# yum clean all

The commands above are safe to run. If for some reason they do not fix the problem, you can get the files back from the /tmp path where they were moved.


The post Nginx 1.10 brings HTTP/2 support to the stable releases appeared first on

A very small update was sent to the nginx-announce mailing list today. And I do mean very small:

Changes with nginx 1.10.0 --- 26 Apr 2016

*) 1.10.x stable branch.

Maxim Dounin
[nginx-announce] nginx-1.10.0

At first, you wouldn't think much of it.

However, this new release includes support for HTTP/2. That means from now on, HTTP/2 is available in the stable Nginx releases and you no longer need the "experimental" mainline releases.

$ nginx -V
nginx version: nginx/1.10.0
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: ... --with-http_v2_module

This is very good news for the adoption of HTTP/2! If you're running Nginx from their official repositories and you have SSL/TLS enabled, I suggest you go ahead and enable HTTP/2 right now.
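Enabling it is typically a one-word change to the listen directive of your TLS server block. A sketch (the server name and certificate paths are placeholders; adapt them to your own setup):

```nginx
server {
    # adding "http2" to the listen directive is all nginx 1.10 needs
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
}
```

After a `nginx -s reload`, a recent curl can confirm the protocol with `curl -I --http2 https://example.com` (assuming your curl was built with HTTP/2 support).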

If you're new to HTTP/2 and want to learn more about it, here are some resources I created:

Exciting news!

Update: Alan kindly reminded me of the impending doom happening on May 15th, when Chrome disables NPN support. In short: having the http2 option enabled won't help if your OS ships an OpenSSL older than 1.0.2, because Chrome will then only negotiate HTTP/2 via ALPN, which those older OpenSSL versions don't support.


April 25, 2016


As an eternal optimist, I am confident that the vast majority of humanity is benevolent. We wish nothing but happiness, for ourselves and for others.

But then, how do we explain the multiplication of conflicts, wars, quarrels and violence?

My answer is quite simple: because we are not selfish enough, and because our various cultures push us to "think of others first".

"So what?" you might say, looking at me in surprise and tapping your temple with your index finger. Contrary to what you might believe, having good intentions for others does nothing but pave the road to hell, to paraphrase the proverb. The solution? Let's be selfish, and let's stop trying to think for others!

A small introductory example

Marie gave Jean a box of pralines. They have just eaten them together. Only one is left in the box. Marie really wants it. But above all, she wants to please Jean.

— Here, take the last one!

Jean does not want the praline at all: he knows it is a liqueur-filled one, and he hates those. However, he does not want to offend Marie, nor show that his refusal is purely selfish.

— No thanks, it's yours.
— I insist, you had fewer than I did!
— Really, I couldn't!
— It would be a shame to throw it away!
— All right, then…

Moral: Marie and Jean are both frustrated, yet both are convinced they frustrated themselves for the other's good. Which had the opposite effect!

Can we generalize from this example? Yes, I think so!

A hypocritical benevolence

The problem with an altruistic society is that it becomes virtually impossible to express one's own desire, since doing so is perceived as selfish. It also becomes impossible to tell a well-intentioned person that their intention did not have the intended effect.

It follows that altruists are, by construction, forced to live their own pleasures vicariously. In our case, that is Marie forcing Jean to eat the praline she would have liked to have herself.

The praline may seem anecdotal, but replace the chocolate with morality and we have the very source of conflict and fanaticism. If a man thinks it is unhealthy for his children to be exposed to pornography, he will campaign to ban pornography across all of society, in order to protect all children! Opponents of same-sex marriage campaign, in their own words, for the good of everyone and of society. They are therefore essentially altruists.

An extreme example: religious extremists only ever seek to save lost souls, even if that means torturing and killing them a little along the way. But it's for their own good.

The inevitable frustration

But those altruist bastards do even worse!

Unconsciously frustrated by their unfulfilled desires, they come to hate the egoists, who never asked anyone for anything.

Without realizing it, they demand that everyone make the same sacrifice they did. Or, at the very least, they want to be recognized for their sacrifice.

Some go so far as to claim that they derive their happiness from the happiness of others! This rhetoric is paradoxical: if the statement is true, then the altruist is in fact deeply selfish. And since selfishness is unacceptable to the altruist, it follows that the claim is hypocritical.

In short, altruists impose their worldview on others and cannot stand those who look after themselves.

Conflicts

You will object that if everyone were selfish, there would be even more conflicts, because desires are inevitably sometimes incompatible.

But I think the opposite. Any normally constituted human being is capable of accepting a frustration, as long as it is conscious and justified.
— I want the last praline.
— So do I.
— You ate more of them than I did.
— True, I'll leave it to you this time.

Selfishness improves communication and transparency. Counter-intuitively, it is much easier to trust an egoist: he is not trying to please us, it is enough for his interests to be aligned with ours. Frustration, for its part, gets verbalized and rationalized: "I wanted the last praline, but it is only fair that Marie got to eat it."

Selfishness and frankness thus lead to fewer misunderstandings. The conflicts that remain are, at least, clearly identified and negotiable.


But the real reason I abhor altruists runs much deeper.

How can you bring harmony to the world if you are not in harmony with yourself? How can you listen to others if you are incapable of listening to yourself? How can you satisfy the desires of those you love if you are frustrated yourself?

Altruism is essentially morbid.

You want to change the world? Make others happy? Bring happiness to those close to you?

Charity begins at home! Work on being happy, on your own happiness, and stop thinking in other people's place.


Photo by Lorenzoclick.

Thank you for taking the time to read this freely priced post. Feel free to support me with a few milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

In March, I did a presentation at SxSW that asked the audience a question I've been thinking about a lot lately: "Can we save the open web?".

The web is centralizing around a handful of large companies that control what we see, limit creative freedom, and capture a lot of information about us. I worry that we risk losing the serendipity, creativity and decentralization that made the open web great.

The open web closing

While there are no easy answers to this question, the presentation started a good discussion about the future of the open web, the role of algorithms in society, and how we might be able to take back control of our personal information.

I'm going to use my blog to continue the conversation about the open web, since it impacts the future of Drupal. I'm including the video and slides (PDF, 76 MB) of my SxSW presentation below, as well as an overview of what I discussed.

Here are the key ideas I discussed in my presentation, along with a few questions to discuss in the comments.

Idea 1: An FDA-like organization to provide oversight for algorithms. While an "FDA" in and of itself may not be the most ideal solution, algorithms are nearly everywhere in society and are beginning to impact life-or-death decisions. I gave the example of an algorithm for a self-driving car having to decide whether to save the driver or hit a pedestrian crossing the street. There are many other life-or-death examples of how unregulated technology could impact people in the future, and I believe this is an issue we need to begin thinking about now. What do you suggest we do to make the use of algorithms fair and trustworthy?

Idea 2: Open standards that will allow for information-sharing across sites and applications. Closed platforms like Facebook and Google are winning because they're able to deliver a superior user experience driven by massive amounts of data and compute power. For the vast majority of people, ease-of-use will trump most concerns around privacy and control. I believe we need to create a set of open standards that enable drastically better information-sharing and integration between websites and applications so independent websites can offer user experiences that meet or exceeds that of the large platforms. How can the Drupal community help solve this problem?

Idea 3: A personal information broker that allows people more control over their data. In the past, I've written about the idea for a personal information broker that will give people control over how, where and for how long their data is used, across every single interaction on the web. This is no small feat. An audience member asked an interesting question about who will build this personal information broker -- whether it will be a private company, a government, an NGO, or a non-profit organization? I'm not really sure I have the answer, but I am optimistic that we can figure that out. I wish I had the resources to build this myself as I believe this will be a critical building block for the web. What do you think is the best way forward?

Ultimately, we should be building the web that we want to use, and that we want our children to be using for decades to come. It's time to start to rethink the foundations, before it's too late. If we can move any of these ideas forward in a meaningful way, they will impact billions of people, and billions more in the future.

Of course the web is not doomed, but despite the fact that web performance is immensely important (think impact on mobile experience, think impact on search engine ranking, think impact on conversion) the web keeps getting fatter, as witnessed by this graph from mobiforge;


Yup; your average web page is now the same size as the Doom installer. From the original mobiforge article;

Recall that Doom is a multi-level first person shooter that ships with an advanced 3D rendering engine and multiple levels, each comprised of maps, sprites and sound effects. By comparison, 2016’s web struggles to deliver a page of web content in the same size. If that doesn’t give you pause you’re missing something.

There are some interesting follow-up remarks & hopeful conclusions in the original article, but still: over 2 megabytes for a web page? Seriously? Think about what that does to your bounce rate, especially knowing that Google Analytics systematically underestimates bounce rate on slow pages, because people leave before even being seen by your favorite web stats solution.

So, you might want to reconsider if you really should:

  • push high resolution images to all visitors because your CMO says so (“this hero image does not look nice on my iPad”)
  • push custom webfonts just because corporate communications say so (“our corporate identity requires the use of these fonts”)
  • use angular.js (or react.js or any other client-side JS framework, for that matter) because the CTO says so (“We need MVC, and the modularity and testability are great for developers”)

Because on the web faster is always better, and being slower will always cost you in the end, even if you might not (want to) know it.

April 22, 2016

Just read this on PPK’s blog;

Have you ever felt that you have no talent whatever? […] Cherish [that] impostor syndrome. Don’t trust people who don’t have it.

If you want to know what happens in the […], you’ll have to read his blogpost, but it is pretty insightful!

April 20, 2016

Today, Drupal 8.1 has been released and it includes BigPipe as an experimental module.

Six months ago, on the day of the release of Drupal 8, the BigPipe contrib module was released.

So BigPipe was first prototyped in contrib, then moved into core as an experimental module.

Experimental module?

Quoting d.o/core/experimental:

Experimental modules allow core contributors to iterate quickly on functionality that may be supported in an upcoming minor release and receive feedback, without needing to conform to the rigorous requirements for production versions of Drupal core.

Experimental modules allow site builders and contributed project authors to test out functionality that might eventually be included as a stable part of Drupal core.

With your help (in other words: by testing), we can help BigPipe “graduate” as a stable module in Drupal 8.2. This is the sort of module that needs wider testing because it changes how pages are delivered, so before it can be considered stable, it must be tested in as many circumstances as possible, including the most exotic ones.

(If your site offers personalization to end users, you are encouraged to enable BigPipe and report issues. There is zero risk of data loss. And when the environment — i.e. web server or (reverse) proxy — doesn’t support streaming, then BigPipe-delivered responses behave as if BigPipe was not installed. Nothing breaks, you just go back to the same perceived performance as before.)
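For those wondering what "changes how pages are delivered" means in practice: the cheap, cacheable parts of the page are flushed immediately, and the expensive personalized parts are streamed in afterwards, replacing placeholders. A generic sketch of that delivery model (illustrative only, not Drupal's actual implementation):

```python
import time

def expensive_personalized_block():
    """Stands in for a slow, uncacheable, per-user computation."""
    time.sleep(0.1)
    return '3 items in your cart'

def render_page():
    # 1. Send the cacheable page skeleton right away, with a placeholder
    #    where the personalized block will go.
    yield '<html><body><h1>My site</h1>'
    yield '<div id="cart">loading...</div>'
    # 2. Compute the slow part while the browser is already rendering.
    cart_html = expensive_personalized_block()
    # 3. Stream a small script that fills in the placeholder in place.
    yield ('<script>document.getElementById("cart").innerHTML = %r;'
           '</script>' % cart_html)
    yield '</body></html>'

# A streaming web server would flush each chunk as it is yielded;
# joining them here just shows the final markup the browser ends up with.
page = ''.join(render_page())
```

This is also why a non-streaming web server or proxy neutralizes the technique: if the chunks are buffered and sent at once, the result is simply the fully rendered page.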

About 500 sites are currently using the contrib module. With the release of Drupal 8.1, hopefully thousands of sites will test it. [1] [2]

Please report any issues you encounter! Hopefully there won’t be many. I’d be very grateful to hear about success stories too — feel free to share those as issues too!


Of course, documentation is ready too:

What about the contrib module?

The BigPipe contrib module is still available for Drupal 8.0, and will remain available.

  • 1.0-beta1 was released on the same day as Drupal 8.0.0
  • 1.0-beta2 was released on the same day as Drupal 8.0.1, and made it feature-complete
  • 1.0-beta3 contained only improved documentation
  • 1.0-rc1 brought comprehensive test coverage, which was the last thing necessary for BigPipe to become a core-worthy module — the same day as the work continued on the core issue:
  • 1.0 was tagged today, on the same day as Drupal 8.1.0

Going forward, I’ll make sure to tag releases of the BigPipe contrib module matching Drupal 8.1 patch releases, if they contain BigPipe fixes/improvements. So, when Drupal 8.1.3 is released, BigPipe 1.3 for Drupal 8.0 will be released also. That makes it easy to keep things in sync.


When you upgrade from Drupal 8.0 to Drupal 8.1, and you were using the BigPipe module on your 8.0 site, then follow the instructions in the 8.1.0 release notes:

If you previously installed the BigPipe contributed module, you must uninstall and remove it before upgrading from Drupal 8.0.x to 8.1.x.

  1. Note there is also the BigPipe demo module (d.o/project/big_pipe_demo), which makes it easy to simulate the impact of BigPipe on your particular site. 

  2. There’s also a live demo: 

Today is another big day for Drupal as we just released Drupal 8.1.0. Drupal 8.1.0 is an important milestone as it is a departure from the Drupal 7 release schedule where we couldn't add significant new features until Drupal 8. Drupal 8.1.0 balances maintenance with innovation.

On my blog and in presentations, I often talk about the future of Drupal and where we need to innovate. I highlight important developments in the Drupal community, and push my own ideas to disrupt the status quo. People, myself included, like to talk about the shiny innovations, but it is crucial to understand that innovation is only one piece of how we grow Drupal's success. What can't be forgotten is the maintenance, the bug fixing, the work on and our test infrastructure, the documentation writing, the ongoing coordination, and the processes that allow us to crank out stable releases.

We often recognize those who help Drupal innovate or introduce novel things, but today I'd like us to praise those who maintain and improve what already exists, what was innovated years ago. So much of what makes Drupal successful is the "daily upkeep". The seemingly mundane and unglamorous effort that goes into maintaining Drupal has a tremendous impact on the daily lives of hundreds of thousands of Drupal developers, millions of Drupal content managers, and billions of people who visit Drupal sites. Without that maintenance there would be no stability, and without stability, no room for innovation.

April 19, 2016

The post Staying up-to-date on open source announcements & security issues via Twitter appeared first on

For those who follow me on Twitter it's probably no surprise: I'm pretty active there. I enjoy the interactions, the platform for sharing links and the way it keeps me up-to-date on technical news. To help with the latter, I built two Twitter bots that keep me informed about open source announcements & security vulnerabilities.

Both are built on top of the data aggregation that's happening in MARC, the mailing list archive.

If you're into Twitter, you can follow these accounts:

If I'm missing any news sources, let me know.

For the last one, @foss_security [1], I even enabled push notifications. Several 0-days have already been disclosed there (including the latest remote code execution vulnerability in git), so I find it worth keeping an eye on that one.

I'm not planning on making RSS feeds for those, as the current implementation was a really quick 30-minute hack on top of MARC that easily allowed me to send tweets using the Twitter API [2].

No spam, no advertising, no bullshit: just the content on both accounts.

If you're into that kind of thing, follow @oss_announce and @foss_security.

[1] There's also an existing account, called @oss_security (mine is named @foss_security, with an extra 'f'), but that one only pipes everything from the oss-security mailing list to Twitter, including replies and discussions, no other data sources.
[2] Built with dg/twitter-php library


April 18, 2016

We’ve just released Activiti 5.20.0. This is a bugfix release (mainly fixing the bug around losing message and signal start event subscriptions on process definition redeployment, see ACT-4117). Get it from the download page or of course via Maven.

April 17, 2016

The post Bash on Windows: a hidden bitcoin goldmine? appeared first on

Bash on Windows is available as an insider preview; nothing generally available and nothing final (so this behaviour will hopefully still change). Processes started in Bash do not show up in the Windows Task Manager and can be used to hide CPU-intensive workers, like bitcoin miners.

I ran the sysbench tool inside Bash (which is just an apt-get install sysbench away) to stress-test the CPU.

$ sysbench --test=cpu --cpu-max-prime=40000 run

That looks like this:


The result: my test VM went to 100% CPU usage, but there is no (easy?) way to see that from within Windows.

The typical task manager reports 100% CPU usage, but can't show the cause of it.


The task history, which can normally show which processes used how much CPU, memory or bandwidth, stays blank.


There's a details tab with more specific process names etc., but it's not shown there either.


And the performance tab clearly shows a CPU increase as soon as the benchmark is started.


To me, this shows an odd duality with the "Bash on Windows" story. I know it's beta/alpha and things will probably change, but I can't help but wonder: if this behaviour remains, Bash will become a perfect place to hide your Bitcoin miners or malware.

You can see your server or desktop is consuming 100% CPU, but finding the source can prove to be very tricky for Windows sysadmins with little Linux knowledge.

Update: the BashOnWindows team has confirmed and fixed this issue, so we should expect a fix in the next release!


April 16, 2016

The Belgian Marechaussee has done good work. The scum has largely been rounded up. That must have taken real detective work. Yet there turned out to be few infringements, and the population has given up few freedoms.

In other words: the work done was targeted.

The greater the result and the fewer the freedoms surrendered, the higher the quality of our services.

We're going to keep this little country of ours free.

April 15, 2016