Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

May 02, 2016

Felt good about explaining my work last time. For no reason. I guess I’m happy, or I no longer feel PGO’s pressure or something. Having to be politically correct all the time sucks. Making technically and architecturally good solutions is what drives me.

Today I explained the visitor pattern. We want to parse Klartext in such a way that we can present its structure in an editing component. It’s the same component for which I utilized an LRU last week. We want to visualize significant lines like tool changes, but also make cycles foldable like SciTE does with source code, and a whole lot of other stuff that I can’t tell you because of teh secretz. Meanwhile these files are, especially when generated using CAD/CAM software, amazingly huge.

Today I had some success explaining visitor using the Louvre as the thing that is “visitable” (the AST) and a Japanese tourist who wants to collect state (photos) as a visitor of fine arts. Hoping my good-taste solutions (not my words, it’s how Matthias Hasselmann describes my work at Nokia) will once again yield a certain amount of success.
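For readers who want the analogy in code: below is a minimal, self-contained C++ sketch of the pattern. The names (Artwork, Painting, Photographer) are mine and purely illustrative; the real Klartext AST of course looks nothing like this.

#include <iostream>
#include <vector>

class Painting;
class Sculpture;

/* The visitor: collects state (photos) while the structure is walked. */
class Visitor {
public:
	virtual ~Visitor() {}
	virtual void visit(const Painting &p) = 0;
	virtual void visit(const Sculpture &s) = 0;
};

/* The visitable: each node type dispatches to the right overload. */
class Artwork {
public:
	virtual ~Artwork() {}
	virtual void accept(Visitor &v) const = 0;
};

class Painting : public Artwork {
public:
	void accept(Visitor &v) const override { v.visit(*this); }
};

class Sculpture : public Artwork {
public:
	void accept(Visitor &v) const override { v.visit(*this); }
};

/* The Japanese tourist: gathers photos without the museum having
 * to know anything about photography. */
class Photographer : public Visitor {
public:
	void visit(const Painting &) override { photos++; }
	void visit(const Sculpture &) override { photos++; }
	int photos = 0;
};

int main()
{
	std::vector<Artwork*> louvre { new Painting, new Sculpture };
	Photographer tourist;
	for (const Artwork *piece : louvre)
		piece->accept(tourist);
	std::cout << tourist.photos << " photos taken\n";
	for (Artwork *piece : louvre)
		delete piece;
	return 0;
}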

PS. I made sure that all the politically correcting categories are added to this post. So if you’d filtered away the condescending and controversial posts from my blog, you could have protected yourself from being in total shock now (because I used the sexually tinted word “sucks” earlier). Guess you didn’t. Those categories have been in place on my blog’s infrastructure for many years. They are like the Körperwelten (Body Worlds) exhibitions; you don’t have to visit them.

In a recent post we talked about how introducing outside-in experiences could improve the Drupal site-building experience by letting you immediately edit simple configuration without leaving the page. In a follow-up blog post, we provided concrete examples of how we can apply outside-in to Drupal.

The feedback was overwhelmingly positive. However, there were also some really important questions raised. The most common concern was the idea that the mockups ignored "context".

When we showed how to place a block "outside-in", we placed it on a single page. However, in Drupal a block can also be made visible for specific pages, types, roles, languages, or any number of other contexts. The flexibility this provides is one place where Drupal shines.

Why context matters

For the sake of simplicity and focus we intentionally did not address how to handle context in outside-in in the last post. However, incorporating context into "outside-in" thinking is fundamentally important for at least two reasons:

  1. Managing context is essential to site building. Site builders commonly want to place a block or menu item that will be visible on not just one but several pages or to not all but some users. A key principle of outside-in is previewing as you edit. The challenge is that you want to preview what site visitors will see, not what you see as a site builder or site administrator.
  2. Managing context is a big usability problem on its own. Even without outside-in patterns, making context simple and usable is an unsolved problem. Modules like Context and Panels have added lots of useful functionality, but all of it happens away from the rendered page.

The ingredients: user groups and page groups

To begin to incorporate context into outside-in, Kevin Oleary, with input from yoroy, Bojhan, Angie Byron, Gábor Hojtsy and others, has iterated on the block placement examples that we presented in the last post, to incorporate some ideas for how we can make context outside-in. We're excited to share our ideas and we'd love your feedback so we can keep iterating.

To solve the problem, we recommend introducing 3 new concepts:

  1. Page groups: reusable collections of URLs, wildcards, content types, etc.
  2. User groups: reusable collections of roles, user languages, or other user attributes.
  3. Impersonation: the ability to view the page as a user group.

Page groups

Most sites have some concept of a "section" or "type" of page that may or may not equate to a content type. A commerce store, for example, may have a "kids" section with several product types that share navigation or other blocks. Page groups adapt to this by creating reusable "bundles" of content, consisting either of a certain type (e.g. all research reports), of manually curated lists of pages (e.g. a group that includes /home, /contact us, and /about us), or of a combination of the two (similar to the Context module, though Context never provided an in-place UI).

User groups

User groups would combine multiple user contexts like role, language, location, etc. Example user groups could be "Authenticated users logged in from the United States", or "Anonymous users that signed up to our newsletter". The goal is to combine the massive number of potential contexts into understandable "bundles" that can be used for context and impersonation.

Impersonation

As mentioned earlier, a challenge is that you want to preview what site visitors will see, not what you see as a site builder or site administrator. Impersonation allows site builders to switch between different user groups, so that the page can be previewed as that type of user.

Using page groups, user groups and impersonation

Let's take a look at how we use these 3 ingredients in an example. For the purpose of this blog post, we want to focus on two use cases:

  1. I'm a site builder working on a life sciences journal with a paywall and I want to place a block called "Download report" next to all entities of type "Research summary" (content type), but only to users with the role "Subscriber" (user role).
  2. I want to place a block called "Access reports" on the main page, the "About us" page, and the "Contact us" page (URL based), and all research summary pages, but only for anonymous users.

Things can get more complex but these two use cases are a good starting point and realistic examples of what people do with Drupal.

Step #1: place a block for anonymous users

Let's assume the user is a content editor, and the user groups "Anonymous" and "Subscriber" as well as the page groups "Subscriber pages" and "Public pages" have already been created for her by a site builder. Her first task is to place the "Access reports" block and make it visible only for anonymous users.


Place a block for anonymous users

First the editor changes the impersonation to "Anonymous", then she places the block. She is informed about the impact of the change.

Step #2: place a block for subscribers

Our editor's next task is to place the "Download reports" block and make it visible only for subscribers. To do that she is going to want to view the page as a subscriber. Here it's important that this interactions happens smoothly, and with animation, so that changes that occur on the page are not missed.


Place a block for subscribers

The editor changes the impersonation to "Subscribers". When she does, the "Access reports" block is hidden, as it is not visible for subscribers. When she places the "Download report" block and chooses the "Subscriber pages" page group, she is notified about the impact of the change.

Step #3: see if you did it right

Once our editor has finished steps one and two, she will want to go back and make sure that step two did not undo or complicate what was done in step one, for example by making the "Download report" block visible for anonymous users or vice versa. This is where impersonation comes in.


Confirm you did it right

The anonymous users need to see the "Access reports" block and subscribers need to see the "Download report" block. Impersonation lets you see what that looks like for each user group.

Summary

The idea of combining a number of contexts into a single object is not new; both Context and Panels do this. What is new here is that when you bring this to the front end with impersonation, you can make a change that has broad impact while seeing it exactly as your user will.

April 30, 2016

I’ve created a new release of the “NameStuff” mod for Supreme Commander Forged Alliance, now titled “Name All The Things”. So no, this is not a post where I rant about good variable naming 🙂

The mod automatically sets names for your units, which can change based on their status. For instance, damage can cause a unit to be labeled “angry”, while idle units can be labeled “lazy” or “useless”. These names are visible to you and to the people you are playing with. It is a so-called UI mod, which means you can have it enabled for yourself without other players in the game having it or needing to enable it as well.

A game with the Name All The Things mod

The mod was created in 2014 by Sheeo, at least if the details in the mod info file are correct. Going on other clearly incorrect information, that might well not be the case. Anyway, the credits for creating this mod do not go to me, and the original author has my thanks for creating this fun mod.

New features in Name All The Things 2.0.0

  • All texts are now configurable at the top of the file
  • Out-of-fuel units will be “hungry”
  • Workers and non-workers now have different IDLE messages
  • You can now set special names that only show up for UEF units

Furthermore, the mod now has an icon and a README. The code also got cleaned up a bit, so it should now be easier to understand and modify further.

The default list of “base-names” got changed to those of my FAF friends, or names otherwise funny to them. Many of the other name parts also got changed, for instance a full health unit will no longer be “happy unit” but rather “such a unit”.

You can get the new version here. I’d like to get it into the FAF mod vault; unfortunately, that thing still is ALL of the broken.

What is up next?

In a game with thousands of units, the mod can cause some lag. An obvious solution for this is to only give names to the oldest 500-ish units alive. I had a go at implementing this, though I found it’s harder to do than I expected.

Some other ideas

  • Make it easier to configure the mod, ideally in the game/lobby, otherwise via config files (so no more Lua editing)
  • Name buildings
  • Take veterancy into account (idea by Such a Figaro)
  • Do something with ACU names
  • Special names for hover tanks and arty: “Such Floaty”, “Floaty Crap”, etc.

From my point of view the code interacts with black boxes and is next to impossible to test. If someone can point me to where I can find the actual interface definition of a unit, that’d be much appreciated. Knowing which information is available is of course very helpful when looking for cool new things that this mod could do.

Your ideas and suggestions are welcome. If you’ve got new prefixes to add or a great new feature in mind, please share. If you have code modifications, you can submit them as a patch.

Screenshots

Name All The Things

April 29, 2016


If you have no interest in cycling at all, you may not be aware of a debate currently raging within the professional peloton: should disc brakes be allowed on racing bikes?

Cycling may not interest you, but this anecdote is worth telling for more than one reason, because it perfectly illustrates our inability to evaluate a danger rationally, and the influence that emotion-driven media can have on political decision-making.

In the end, it shows us that we are not looking for safety, only for an illusion of it.

Disc brakes: what are they?

The purpose of a brake is to slow down or even stop a vehicle. Most of the time, this is done by converting kinetic energy into heat.

On most bicycles until a few years ago, a brake consisted of two pads that pinched the rim. By rubbing against the pads, the rim slows down while heating up.


A rim brake, by Bart Heird.

Then came disc brakes: the principle is exactly the same, but instead of applying the pads to the rim, they are applied to a disc fixed at the centre of the wheel and designed specifically for braking.


A disc brake, by Jamis Bicycle Canada.

The advantages are numerous:

  • Unlike the rim, the disc is very thin and is not subject to constant mechanical torsion, so precise pressure can be applied. On a rim that is even slightly out of true, braking is fairly unpredictable: the brake can rub without being applied, or fail to engage properly when you do brake. No such problem with a disc.
  • The disc generally stays much cleaner than the rim (which rolls through mud and dust), which allows better braking in all weather.
  • The disc is designed solely for braking, so the most suitable material can be chosen. The rim, for its part, has to meet mechanical constraints of strength and lightness; braking quality is secondary.
  • The disc is designed to dissipate the heat it generates; the rim is not. During overly long braking, the rim can heat up so much that the tyre comes off. (This is what happened to Beloki in 2003, forcing Armstrong into a now-famous off-road detour.)

The result is that a disc brake provides consistent, predictable braking whatever the weather, the speed and the road surface. A cyclist equipped with disc brakes has a degree of control that rim brakes cannot match.

Disc brakes in competition

Disc brakes have therefore conquered every domain of cycling, starting with mountain biking and cyclocross. Every one? No: not road cycling.

The reasons? First of all, disc brakes are heavier and less aerodynamic, factors that matter particularly in this discipline. But the professionals also fear that a disc could cause nasty injuries in a pile-up, when riders in the peloton fall on top of one another.

The International Cycling Union had nevertheless decided to allow them provisionally, for gradual testing in 2015 and then 2016.

Everything seemed to be going well until the rider Fran Ventoso cut himself in a crash during the famous Paris-Roubaix race. His injury is gruesome and, according to him, was caused by a brake disc. The strongest conditional is required here: the rider himself did not see that it was a disc, and no rider equipped with disc brakes crashed or reported being touched in that sector.

Nevertheless, the photos of the injury went around the web, and testimonies comparing the discs to razor blades or butcher's slicers quickly went viral.

Is this, then, proof that disc brakes are dangerous and must be banned?

Analysing the danger

As always, human beings are quick to seize on whichever anecdotes suit them in order to convince themselves. But if we analyse the problem rationally, a very different reality emerges.

A bicycle is, by nature, made of components that can be particularly dangerous: a chain, toothed sprockets, very thin metal spokes on wheels spinning at high speed. None of these has ever been considered a problem; they are part of cycling. A video on Facebook seems to show that a brake disc is not particularly sharp. At most, there is a risk of burns if you touch one right after a very long braking effort.

During the 2015-2016 test period, professional road cycling thus saw one, and only one, accident (potentially) involving a disc brake.

Over the same period, races saw a significant number of major accidents involving motorbikes or cars that were part of the race organisation. The most absurd is surely that of Greg Van Avermaet, who was leading a race when a television motorbike knocked him into the ditch. The rider in second place, Adam Yates, passed Van Avermaet without seeing him and crossed the finish line convinced he had finished second. But the most tragic accident remains the death of the rider Antoine Demoitié, struck in the head by an organisation motorbike after a harmless fall.

A modern bike race is in fact a swarm of motor vehicles trying to weave their way between the bicycles, with serious consequences: not a Tour de France goes by without at least one rider being knocked down by a vehicle.

If the riders' physical safety were really a concern, the use of vehicles in bike races would be severely reconsidered. That is, incidentally, what many riders are asking for, without any echo from the federation or the media. After all, the television motorbikes are the sole motivation of the sponsors who pay the riders' salaries…

What is at stake in the debate

Today, a single, statistically anecdotal injury may push the arrival of disc brakes in the professional peloton back by several years, for the simple reason that the photos are spectacular.

Yet it is obvious that, for a lone cyclist, disc brakes greatly improve safety. They have also been used successfully for years at the highest level in mountain biking and cyclocross. Is road cycling an exception? Don't the obvious safety gains of better braking outweigh the risk of getting cut?

Having no racing experience, I am in no position to judge.

At most I can note that professional riders fought for years against compulsory helmets, a safety measure that is beyond dispute today. The opposition was such that a transition period had to be established, during which riders could discard their helmet on reaching the final climb of a race.

Shouldn't we also consider the example they set, at a time when promoting cycling over the car is becoming a societal issue?

Following the viral spread of the particularly graphic photos of Ventoso's injury, I have heard of individuals refusing to buy a touring bike with disc brakes, or even believing that discs would now be banned on all bicycles. Organisers of friendly amateur races are also talking about banning discs. Banning a technology that could potentially prevent accidents! Forbidding amateurs, most of whom ride their bikes in daily traffic, from having disc brakes if they want to take part in "sportives" that are half ride, half friendly competition.

Resistance to change

Seen from this angle, the implications and the stakes of this story are far greater than one nasty cut. But it illustrates how permanently human beings fight against change, whatever form it takes.

In the social-media narrative, the following proposition seems logical: "A professional cyclist in a very particular race cuts himself and thinks his injury was caused by brakes. All the bikes in the world should therefore now use less effective brakes."

Our perception of danger is completely distorted by the media (in this case a photo of an injury), by the narrative (the use of analogies with razor blades), and is completely irrational (motorbikes and cars are familiar, so they do not appear dangerous; the accident is a one-off; and so on).

Under specious pretexts of supposed risks, we generally refuse to face the risks we already run, for the simple reason that we want to wallow in our comfortable, outdated immobility. We exaggerate the risks brought by anything new. And we refuse the innovations that could bring us real safety.

In the end, human beings are not looking for safety at all; they are looking for the illusion of it. So aren't our politicians giving us exactly what we are looking for?

Isn't the fact that bikes will now brake less well, because of one bloody photo on social networks, a marvellous analogy, an extraordinary summary of the entire security policy we have been putting in place over the last few decades?

 

Photo by photographer.

Thank you for taking the time to read this pay-what-you-want post. Feel free to support me with a few milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

For the ones who didn’t find the LRU in Tracker’s code (and for the ones who were lazy).

Let’s say we will have instances of something called a Statement. Each of those instances is relatively big. But we can recreate them relatively cheaply. We will have a huge amount of them. But at any time we only need a handful of them.

The ones that are most recently used are most likely to be needed again soon.

First you make a structure that will hold some administration of the LRU:

typedef struct {
	Statement *head;   /* least recently used end of the ring */
	Statement *tail;   /* most recently used end of the ring */
	unsigned int size; /* current number of cached statements */
	unsigned int max;  /* capacity: beyond this, evict the head */
} StatementLru;

Then we make the user of a Statement (a view or a model). I’ll be using a Map here; in Qt you can use QMap for this, for example. Usually I want relatively fast access based on a key. Alternatively, you could loop over the stmt_lru each time in useStatement and find the instance you want based on something in Statement itself; that would rid you of the overhead of a map.

class StatementUser
{
public:
	StatementUser();
	~StatementUser();
	void useStatement(KeyType key);
private:
	StatementLru stmt_lru;
	Map<KeyType, Statement*> stmts;
	StatementFactory stmt_factory;
};

Then we add the members prev and next to the private fields of the Statement class: we’ll make a circular doubly linked list.

class Statement : public QObject {
	Q_OBJECT
    ...
private:
	/* StatementUser manipulates these links directly */
	friend class StatementUser;
	Statement *next;
	Statement *prev;
};

Next we initialize the LRU:

StatementUser::StatementUser()
{
	stmt_lru.max = 500;
	stmt_lru.size = 0;
	stmt_lru.head = nullptr;
	stmt_lru.tail = nullptr;
}

Then we implement using the statements:

void StatementUser::useStatement(KeyType key)
{
	Statement *stmt;

	if (!stmts.get (key, &stmt)) {

		stmt = stmt_factory.createStatement(key);

		stmts.insert (key, stmt);

		/* So the ring looks a bit like this: *
		 *                                    *
		 *    .--tail  .--head                *
		 *    |        |                      *
		 *  [p-n] -> [p-n] -> [p-n] -> [p-n]  *
		 *    ^                          |    *
		 *    `- [n-p] <- [n-p] <--------'    */

		if (stmt_lru.size >= stmt_lru.max) {
			Statement *new_head;

		/* We reached max-size of the LRU stmt cache. Destroy current
		 * least recently used (stmt_lru.head) and fix the ring. For
		 * that we take out the current head, and close the ring.
		 * Then we assign head->next as new head. */

			new_head = stmt_lru.head->next;
			/* Our map is keyed by KeyType, so finding the key of
			 * the evicted head means a lookup by value (with QMap,
			 * QMap::key() does exactly that, with a linear scan). */
			stmts.remove (stmts.key (stmt_lru.head));
			delete stmt_lru.head;
			stmt_lru.size--;
			stmt_lru.head = new_head;
		} else {
			if (stmt_lru.size == 0) {
				stmt_lru.head = stmt;
				stmt_lru.tail = stmt;
			}
		}

	/* Set the current stmt (which is always new here) as the new tail
	 * (new most recent used). We insert current stmt between head and
	 * current tail, and we set tail to current stmt. */

		stmt_lru.size++;
		stmt->next = stmt_lru.head;
		stmt_lru.head->prev = stmt;

		stmt_lru.tail->next = stmt;
		stmt->prev = stmt_lru.tail;
		stmt_lru.tail = stmt;

	} else {
		if (stmt == stmt_lru.head) {

		/* Current stmt is least recently used, shift head and tail
		 * of the ring to efficiently make it most recently used. */

			stmt_lru.head = stmt_lru.head->next;
			stmt_lru.tail = stmt_lru.tail->next;
		} else if (stmt != stmt_lru.tail) {

		/* Current statement isn't most recently used, make it most
		 * recently used now (less efficient way than above). */

		/* Take stmt out of the list and close the ring */
			stmt->prev->next = stmt->next;
			stmt->next->prev = stmt->prev;

		/* Put stmt as tail (most recent used) */
			stmt->next = stmt_lru.head;
			stmt_lru.head->prev = stmt;
			stmt->prev = stmt_lru.tail;
			stmt_lru.tail->next = stmt;
			stmt_lru.tail = stmt;
		}

	/* if (stmt == tail), it's already the most recently used in the
	 * ring, so in this case we do nothing of course */
	}

	/* Use stmt */

	return;
}

In case StatementUser and Statement form a composition (StatementUser owns Statement, which is what makes most sense), don’t forget to delete the instances in the destructor of StatementUser. In the example’s case we used heap objects. You can loop over the stmt_lru or the map here.

StatementUser::~StatementUser()
{
	Map<KeyType, Statement*>::iterator i;
	for (i = stmts.begin(); i != stmts.end(); ++i) {
		delete i.value();
	}
}
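To make the ring mechanics concrete, here is a small usage sketch of the class above (taking KeyType to be an int, purely for illustration):

void example()
{
	StatementUser user;

	user.useStatement(1); /* miss: 1 is created and becomes tail (MRU) */
	user.useStatement(2); /* miss: 2 becomes tail, 1 is now head (LRU) */
	user.useStatement(1); /* hit on the head: the ring is rotated so 1
	                       * becomes tail again and 2 becomes head,
	                       * meaning 2 gets evicted first once the cache
	                       * reaches stmt_lru.max statements. */
}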

April 28, 2016

Recently, some people started to ask for proper IPv6/AAAA records for some of our public mirror infrastructure, like mirror.centos.org and msync.centos.org.

The reason is that a lot of people now use IPv6 wherever possible, and from a CentOS point of view we should ensure that everybody can get content over (legacy) ipv4 and ipv6. Funny that I call ipv4 "legacy", as we have to admit that it's still the default everywhere, even in 2016 with the available pools now exhausted.

While we already had some AAAA records for some of our public nodes (like www.centos.org, for example), I started to "chase" after proper and native ipv6 connectivity for our nodes. That's where I had to get in touch with all our valuable sponsors. The first thing to say is that we'd like to thank them all for their support for the CentOS Project over the years: it wouldn't have been possible to deliver multiple terabytes of data per month without their sponsorship!

WRT ipv6 connectivity, that's where the results of my quest really differed: while some DCs support ipv6 natively, and even answer you within 5 minutes when you ask for a /64 subnet to be allocated, some others still aren't ipv6 ready. In the worst case the answer was "nothing ready and no plan for that"; sometimes the answer was something like "it's on the roadmap for 2018/2019".

The good news is that ~30% of our nodes behind msync.centos.org now have ipv6 connectivity, so the next step is to test our various configurations (distributed by puppet) and then also our GeoIP redirection (done at the PowerDNS level for such records, for which we'll then also add proper AAAA records).

Hopefully we'll have that tested and announced soon, along with the other public services that we provide to you.

Stay tuned for more info about ipv6 deployment within centos.org!

The one big question I get asked over and over these days is: "How is Drupal 8 doing?". It's understandable. Drupal 8 is the first new version of Drupal in five years and represents a significant rethinking of Drupal.

So how is Drupal 8 doing? With less than half a year since Drupal 8 was released, I'm happy to answer: outstanding!

As of late March, Drupal.org counted over 60,000 Drupal 8 sites. Looking back at the first four months of Drupal 7, about 30,000 sites had been counted. In other words, Drupal 8 is being adopted twice as fast as Drupal 7 had been in its first four months following the release.

As we near the six-month mark since releasing Drupal 8, the question "How is Drupal 8 doing?" takes on more urgency for the Drupal community with a stake in its success. For the answer, I can turn to years of experience and say that while the number of new Drupal projects typically slows down in the year leading up to the release of a new version, adoption of the newest version takes up to a full year before we see the number of new projects really take off.

Drupal 8 is in the middle of an interesting point in its adoption cycle. This is the phase where customers are looking for budgets to pay for migrations. This is the time when people focus on learning Drupal 8 and its new features. This is when the modules that extend and enhance Drupal need to be ported to Drupal 8, and this is the time when Drupal shops and builders are deep in the three-to-six-month sales cycle it takes to sell Drupal 8 projects. This is often a phase of uncertainty, but all of this is happening now, and every day there is less and less uncertainty. Based on my past experience, I am confident that Drupal 8 will be adopted at "full force" by the end of 2016.

A few weeks ago I launched the Drupal 2016 product survey to take the pulse of the Drupal community. I plan to talk about the survey results in my DrupalCon keynote in New Orleans on May 10th, but in light of this blog post I felt the results for one of the questions were worth sharing and commenting on sooner:

(chart: survey results on why sites haven't upgraded to Drupal 8 yet)

Over 1,800 people have answered that question so far. People were allowed to pick up to 3 answers to the single question from a list of answers. As you can see in the graph, the top two reasons people give for not having upgraded to Drupal 8 yet are (1) that they are waiting for contributed modules to become available and (2) that they are still learning Drupal 8. The results from the survey confirm what we see with every release of Drupal: it takes time for the ecosystem, both the technology and the people, to come along.

Fortunately, many of the most important modules, such as Rules, Pathauto, Metatag, Field Collection, Token, Panels, Services, and Workbench Moderation, have already been ported and tested for Drupal 8. Combined with the fact that many important modules, like Views and CKEditor, moved to core, I believe we are getting really close to being able to build most websites with Drupal 8.

The second reason people cited for not jumping onto Drupal 8 yet was that they are still learning Drupal 8. One of the great strengths of Drupal has long been the willingness of the community to share its knowledge and teach others how to work with Drupal. We need to stay committed to educating builders and developers who are new to Drupal 8, and DrupalCon New Orleans is an excellent opportunity to share expertise and learn about Drupal 8.

What is most exciting to me is that less than 3% answered that they plan to move off Drupal altogether and therefore won't upgrade at all. Non-response bias aside, that is an incredible number, as it means the vast majority of Drupal users plan to upgrade eventually.

Yes, Drupal 8 is a significant rethinking of Drupal from the version we all knew and loved for so long. It will take time for the Drupal community to understand Drupal's new design and capabilities and how to harness that power, but I am confident Drupal 8 is the right technology at the right time, and the adoption numbers so far back that up. Expect Drupal 8 adoption to start accelerating.

Stephen Fry wrote an insightful critique about what the web was and what it has become:

The real internet [as opposed to AOL] was that Wild West where anything went, shit in the streets and Bad Men abounding, no road-signs and no law, but ideas were freely exchanged, the view wasn’t ruined by advertising billboards and every moment was very very exciting.

[…]

I and millions of other early ‘netizens’ as we embarrassingly called ourselves, joined an online world that seemed to offer an alternative human space, to welcome in a friendly way (the word netiquette was used) all kinds of people with all kinds of views. We were outside the world of power and control. Politicians, advertisers, broadcasters, media moguls, corporates and journalists had absolutely zero understanding of the net and zero belief that it mattered. So we felt like an alternative culture; we were outsiders.

Those very politicians, advertisers, media moguls, corporates and journalists who thought the internet a passing fad have moved in and grabbed the land. They have all the reach, scope, power and ‘social bandwidth’ there is. Everyone else is squeezed out — given little hutches, plastic megaphones and a pretence of autonomy and connectivity. No wonder so many have become so rude, resentful, threatening and unkind. […]

The radical alternative now must be to jack out of the matrix, to go off the grid. […]

I live in a world without Facebook, and now without Twitter. I manage to survive too without Kiki, Snapchat, Viber, Telegram, Signal and the rest of them. I haven’t yet learned to cope without iMessage and SMS. I haven’t yet turned my back on email and the Cloud. I haven’t yet jacked out of the matrix and gone off the grid. Maybe I will pluck up the courage. After you …

While not off the grid yet, Stephen Fry blogs on WordPress and his blog uses my own little Autoptimize plugin. Let that be my proudest boast of the day. Sorry Stephen …

April 27, 2016

Last week, I secretly reused my own LRU code in the model of the editor of a CNC machine (it has truly huge files and needs a statement editor). I rewrote my own code, of course. It’s Qt based, not GLib, and wouldn’t have worked in its original form anyway. But it’s the same principle. Don’t tell Jürg, who helped me write that back then.

Extra points and free beer for people who can find it in Tracker’s code.

April 26, 2016

Yum Update: DB_RUNRECOVERY Fatal error, run database recovery

If for some reason your server's disk I/O fails during a yum or RPM manipulation, you can see the following error whenever you run yum or rpm:

# yum update
...
rpmdb: page 18816: illegal page type or format
rpmdb: PANIC: Invalid argument
rpmdb: Packages: pgin failed for page 18816
error: db4 error(-30974) from dbcursor->c_get: DB_RUNRECOVERY: Fatal error, run database recovery
rpmdb: PANIC: fatal region error detected; run recovery
error: db4 error(-30974) from dbcursor->c_close: DB_RUNRECOVERY: Fatal error, run database recovery
No Packages marked for Update
rpmdb: PANIC: fatal region error detected; run recovery
error: db4 error(-30974) from db->close: DB_RUNRECOVERY: Fatal error, run database recovery
rpmdb: PANIC: fatal region error detected; run recovery
error: db4 error(-30974) from db->close: DB_RUNRECOVERY: Fatal error, run database recovery
rpmdb: File handles still open at environment close
rpmdb: Open file handle: /var/lib/rpm/Packages
rpmdb: Open file handle: /var/lib/rpm/Name
rpmdb: PANIC: fatal region error detected; run recovery
error: db4 error(-30974) from dbenv->close: DB_RUNRECOVERY: Fatal error, run database recovery

The fix is, thankfully, rather easy: move the RPM database's __db* files out of the way, rebuild the database, and let yum re-download the mirrors' file lists.

$ mv /var/lib/rpm/__db* /tmp/
$ rpm --rebuilddb
$ yum clean all

The commands above are safe to run. If for some reason they do not fix the problem, you can get the files back from the /tmp path where they were moved.


Nginx 1.10 brings HTTP/2 support to the stable releases

A very small update was sent to the nginx-announce mailing list today. And I do mean very small:

Changes with nginx 1.10.0 --- 26 Apr 2016

*) 1.10.x stable branch.

---
Maxim Dounin
http://nginx.org/
[nginx-announce] nginx-1.10.0

At first, you wouldn't think much of it.

However, this new release includes support for HTTP/2. That means from now on, HTTP/2 is available in the stable Nginx releases and you no longer need the "experimental" mainline releases.

$ nginx -V
nginx version: nginx/1.10.0
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: ... --with-http_v2_module

This is very good news for the adoption of HTTP/2! If you're running Nginx from their official repositories and you have SSL/TLS enabled, I suggest you go ahead and enable HTTP/2 right now.
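For reference, if your build includes the http_v2 module (as in the nginx -V output above), enabling it is typically a one-word change to the listen directive in your TLS server block. A minimal sketch, with placeholder hostname and certificate paths:

server {
    listen 443 ssl http2;    # "http2" is the only addition needed
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
}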

If you're new to HTTP/2 and want to learn more about it, here are some resources I created:

Exciting news!

Update: Alan kindly reminded me of the impending doom happening on May 15th, when Chrome disables NPN support. In short: having the http2 option enabled won't help if your OS ships an OpenSSL too old to support ALPN, which Chrome will then require for HTTP/2.


This Thursday, 19 May 2016 at 19:00, the 49th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: PostgreSQL and streaming replication

Theme: databases | sysadmin | community

Audience: DBAs | sysadmins | companies | students

Speaker: Stefan Fercot

Venue of this session: HEPH Condorcet, Chemin du Champ de Mars, 15 – 7000 Mons – Auditorium 2 (G01) on the ground floor (see this map on the Openstreetmap site; NOTE: the entrance is barely visible from the main road; it sits in the corner formed by a very large car park).

Participation is free; you only need to register, by name and preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also enjoy the support of our partners: CETIC, Normation, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, do not hesitate to consult the agenda and to subscribe to the mailing list so as to receive the announcements systematically.

As a reminder, the Jeudis du Libre are intended as spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month, and are organised on the premises of, and in collaboration with, Mons universities and colleges involved in training IT professionals (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: In the hospital sector, Stefan Fercot and his team are responsible for the day-to-day management of the server infrastructure hosting the solution developed by their company. This computerised patient record is a central element for the care teams of many institutions. Keeping the application performant and highly available is therefore very important. It is to that end that PostgreSQL, running on CentOS/RedHat servers, is used.

PostgreSQL is an open source relational database engine used worldwide in many sectors: web hosting, hospitals, banks… Among its most fervent users is notably Skype.

The talk will cover the following topics:

  • Installation
  • A brief explanation of the internals
  • Backup methods (dumps, point-in-time recovery, streaming replication)
  • Streaming replication: feedback from use in a cluster-type environment
  • Some figures on the data volumes managed

April 25, 2016


As an eternal optimist, I am confident that the immense majority of humanity is benevolent. We wish only happiness for ourselves and for others.

But then, how do we explain the multiplication of conflicts, wars, quarrels and violence?

My answer is quite simple: because we are not selfish enough, and because our various cultures push us to "think of others first".

"So what?" you might say, looking astonished and tapping your temple with your index finger. Contrary to what you might think, having good intentions for others only paves the road to hell, to paraphrase the proverb. The solution? Let's be selfish, and let's stop trying to think for others!

A small introductory example

Marie gave Jean a box of pralines. They have just eaten them together. Only one is left in the box. Marie really wants it. But above all, she wants to please Jean.

— Here, take the last one!

Jean does not want the praline at all: he knows it contains alcohol, and he hates that. However, he does not want to offend Marie, nor show that his refusal is purely selfish.

— No thanks, it's yours.
— I insist, you had fewer than I did!
— Really, I couldn't!
— It would be a shame to throw it away!
— All right, then…

Moral: Marie and Jean are both frustrated, yet each is convinced they frustrated themselves for the other's good. Which had the opposite effect!

Can this example be generalised? Yes, I think so!

A hypocritical benevolence

The problem with an altruistic society is that it becomes virtually impossible to express your own desire, since that desire is perceived as selfish. It also becomes impossible to tell a well-intentioned person that their intention did not have the intended effect.

It follows that altruists are, by construction, forced to live their own pleasures vicariously. In our case, it is Marie forcing Jean to eat the praline she would have liked to have herself.

The praline may seem anecdotal, but replace the chocolate with morality and we have the very source of conflict and fanaticism. If a man thinks it is unhealthy for his children to be exposed to pornography, he will campaign to ban pornography across the whole of society, in order to protect all children! Opponents of same-sex marriage campaign for, in their own words, the good of everyone and of society. They are therefore essentially altruists.

An extreme example: religious extremists only ever seek to save lost souls, even if that means torturing and killing them a little along the way. But it is for their own good.

The inevitable frustration

But these altruist bastards do even worse!

Unconsciously frustrated by their unfulfilled desires, they come to hate the selfish, who never asked anyone for anything.

Without realising it, they demand that everyone make the same sacrifice they did. Or, at the very least, they want to be recognised for their sacrifice.

Some go so far as to claim that they derive their happiness from the happiness of others! That rhetoric is paradoxical, for if the statement is true, then the altruist is in fact profoundly selfish. And since selfishness is not acceptable to the altruist, it follows that the claim is hypocritical.

In short, altruists impose their vision of the world on others and cannot stand those who look after themselves.

Conflicts

You will object that if everyone were selfish there would be even more conflicts, since desires are, inevitably, sometimes incompatible.

But I think the opposite, because any normally constituted human being is able to accept a frustration, as long as it is conscious and justified.
— I want the last praline.
— So do I.
— You ate more of them than I did.
— True, I'll leave it to you this time.

Selfishness improves communication and transparency. Counter-intuitively, it is much easier to trust a selfish person: they are not trying to please us, so it is enough that their interests are aligned with ours. The frustration, for its part, is verbalised and rationalised: "I wanted the last praline, but it is fair that Marie got to eat it."

Selfishness and frankness thus lead to fewer misunderstandings. The conflicts that remain are at least clearly identified and negotiable.

Harmony

But the real reason I abhor altruists runs much deeper.

How do you expect to bring harmony to the world if you are not in harmony with yourself? How do you expect to listen to others if you are not able to listen to yourself? How can you satisfy the desires of those you love if you are frustrated yourself?

Altruism is essentially morbid.

You want to change the world? Make others happy? Bring happiness to those close to you?

Charity begins at home! Work on being happy, on your own happiness, and stop thinking in other people's place.

 

Photo by Lorenzoclick.

Thank you for taking the time to read this pay-what-you-want post. Feel free to support me with a few milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

In March, I did a presentation at SxSW that asked the audience a question I've been thinking about a lot lately: "Can we save the open web?".

The web is centralizing around a handful of large companies that control what we see, limit creative freedom, and capture a lot of information about us. I worry that we risk losing the serendipity, creativity and decentralization that made the open web great.


The open web closing

While there are no easy answers to this question, the presentation started a good discussion about the future of the open web, the role of algorithms in society, and how we might be able to take back control of our personal information.

I'm going to use my blog to continue the conversation about the open web, since it impacts the future of Drupal. I'm including the video and slides (PDF, 76 MB) of my SxSW presentation below, as well as an overview of what I discussed.

Here are the key ideas I discussed in my presentation, along with a few questions to discuss in the comments.

Idea 1: An FDA-like organization to provide oversight for algorithms. While an "FDA" in and of itself may not be the most ideal solution, algorithms are nearly everywhere in society and are beginning to impact life-or-death decisions. I gave the example of an algorithm for a self-driving car having to decide whether to save the driver or hit a pedestrian crossing the street. There are many other life-or-death examples of how unregulated technology could impact people in the future, and I believe this is an issue we need to begin thinking about now. What do you suggest we do to make the use of algorithms fair and trustworthy?

Idea 2: Open standards that will allow for information-sharing across sites and applications. Closed platforms like Facebook and Google are winning because they're able to deliver a superior user experience driven by massive amounts of data and compute power. For the vast majority of people, ease-of-use will trump most concerns around privacy and control. I believe we need to create a set of open standards that enable drastically better information-sharing and integration between websites and applications, so independent websites can offer user experiences that meet or exceed those of the large platforms. How can the Drupal community help solve this problem?

Idea 3: A personal information broker that allows people more control over their data. In the past, I've written about the idea for a personal information broker that will give people control over how, where and for how long their data is used, across every single interaction on the web. This is no small feat. An audience member asked an interesting question about who will build this personal information broker -- whether it will be a private company, a government, an NGO, or a non-profit organization? I'm not really sure I have the answer, but I am optimistic that we can figure that out. I wish I had the resources to build this myself as I believe this will be a critical building block for the web. What do you think is the best way forward?

Ultimately, we should be building the web that we want to use, and that we want our children to be using for decades to come. It's time to start to rethink the foundations, before it's too late. If we can move any of these ideas forward in a meaningful way, they will impact billions of people, and billions more in the future.

Of course the web is not doomed, but despite the fact that web performance is immensely important (think impact on mobile experience, think impact on search engine ranking, think impact on conversion), the web keeps getting fatter, as witnessed by this graph from mobiforge;

(graph: web page size, revisited — mobiforge)

Yup; your average web page is now the same size as the Doom installer. From the original mobiforge article;

Recall that Doom is a multi-level first person shooter that ships with an advanced 3D rendering engine and multiple levels, each comprised of maps, sprites and sound effects. By comparison, 2016’s web struggles to deliver a page of web content in the same size. If that doesn’t give you pause you’re missing something.

There are some interesting follow-up remarks & hopeful conclusions in the original article, but still, over 2 megabytes for a web page? Seriously? Think about what that does to your bounce rate, especially knowing that Google Analytics systematically underestimates bounce rate on slow pages, because people leave before even being seen by your favorite webstats solution?

So, you might want to reconsider if you really should:

  • push high resolution images to all visitors because your CMO says so (“this hero image does not look nice on my iPad”)
  • push custom webfonts just because corporate communications say so (“our corporate identity requires the use of these fonts”)
  • use angular.js (or react.js or any other JS client-side framework for that matter) because the CTO says so (“We need MVC and the modularity and testability are great for developers”)

Because on the web faster is always better and being slower will always cost you in the end, even if you might not (want to) know.

April 22, 2016

Just read this on PPK’s blog;

Have you ever felt that you have no talent whatever? […] Cherish [that] impostor syndrome. Don’t trust people who don’t have it.

If you want to know what happens in the […], you’ll have to read his blogpost, but it is pretty insightful!

April 20, 2016

Today, Drupal 8.1 has been released and it includes BigPipe as an experimental module.

Six months ago, on the day of the release of Drupal 8, the BigPipe contrib module was released.

So BigPipe was first prototyped in contrib, then moved into core as an experimental module.

Experimental module?

Quoting d.o/core/experimental:

Experimental modules allow core contributors to iterate quickly on functionality that may be supported in an upcoming minor release and receive feedback, without needing to conform to the rigorous requirements for production versions of Drupal core.

Experimental modules allow site builders and contributed project authors to test out functionality that might eventually be included as a stable part of Drupal core.

With your help (in other words: by testing), we can help BigPipe “graduate” as a stable module in Drupal 8.2. This is the sort of module that needs wider testing because it changes how pages are delivered, so before it can be considered stable, it must be tested in as many circumstances as possible, including the most exotic ones.

(If your site offers personalization to end users, you are encouraged to enable BigPipe and report issues. There is zero risk of data loss. And when the environment — i.e. web server or (reverse) proxy — doesn’t support streaming, then BigPipe-delivered responses behave as if BigPipe was not installed. Nothing breaks, you just go back to the same perceived performance as before.)

About 500 sites are currently using the contrib module. With the release of Drupal 8.1, hopefully thousands of sites will test it. [1] [2]

Please report any issues you encounter! Hopefully there won’t be many. I’d be very grateful to hear about success stories as well — feel free to share those as issues too!

Documentation

Of course, documentation is ready too:

What about the contrib module?

The BigPipe contrib module is still available for Drupal 8.0, and will remain available.

  • 1.0-beta1 was released on the same day as Drupal 8.0.0
  • 1.0-beta2 was released on the same day as Drupal 8.0.1, and made it feature-complete
  • 1.0-beta3 contained only improved documentation
  • 1.0-rc1 brought comprehensive test coverage, which was the last thing necessary for BigPipe to become a core-worthy module — the same day as the work continued on the core issue: https://www.drupal.org/node/2469431#comment-10899308
  • 1.0 was tagged today, on the same day as Drupal 8.1.0

Going forward, I’ll make sure to tag releases of the BigPipe contrib module matching Drupal 8.1 patch releases, if they contain BigPipe fixes/improvements. So, when Drupal 8.1.3 is released, BigPipe 1.3 for Drupal 8.0 will be released also. That makes it easy to keep things in sync.

Upgrading?

When you upgrade from Drupal 8.0 to Drupal 8.1, and you were using the BigPipe module on your 8.0 site, then follow the instructions in the 8.1.0 release notes:

If you previously installed the BigPipe contributed module, you must uninstall and remove it before upgrading from Drupal 8.0.x to 8.1.x.


  [1] Note there is also the BigPipe demo module (d.o/project/big_pipe_demo), which makes it easy to simulate the impact of BigPipe on your particular site.

  [2] There’s also a live demo: http://bigpipe.demo.wimleers.com/

Today is another big day for Drupal as we just released Drupal 8.1.0. Drupal 8.1.0 is an important milestone as it is a departure from the Drupal 7 release schedule where we couldn't add significant new features until Drupal 8. Drupal 8.1.0 balances maintenance with innovation.

On my blog and in presentations, I often talk about the future of Drupal and where we need to innovate. I highlight important developments in the Drupal community, and push my own ideas to disrupt the status quo. People, myself included, like to talk about the shiny innovations, but it is crucial to understand that innovation is only a piece of how we grow Drupal's success. What can't be forgotten is the maintenance, the bug fixing, the work on Drupal.org and our test infrastructure, the documentation writing, the ongoing coordination and the processes that allow us to crank out stable releases.

We often recognize those who help Drupal innovate or introduce novel things, but today, I'd like us to praise those who maintain and improve what already exists and that was innovated years ago. So much of what makes Drupal successful is the "daily upkeep". The seemingly mundane and unglamorous effort that goes into maintaining Drupal has a tremendous impact on the daily life of hundreds of thousands of Drupal developers, millions of Drupal content managers, and billions of people that visit Drupal sites. Without that maintenance, there would be no stability, and without stability, no room for innovation.

April 19, 2016

Staying up-to-date on open source announcements & security issues via Twitter

For those who follow me on Twitter it's probably no surprise: I'm pretty active there. I enjoy the interactions, the platform for sharing links, and the way it keeps me up to date on technical news. To help with the latter, I built 2 Twitter bots that keep me informed about open source announcements & security vulnerabilities.

Both are built on top of the data aggregation that's happening in MARC, the mailing list archive.

If you're into Twitter, you can follow these accounts: @oss_announce for open source announcements and @foss_security for security vulnerabilities.

If I'm missing any news sources, let me know.

For the last one, @foss_security [1], I even enabled push notifications. Several 0days have already been disclosed there (including the latest remote code execution vulnerability in git), so I find it worth keeping an eye on that one.

I'm not planning on making RSS feeds for those, as the current implementation was a really quick 30-minute hack on top of MARC that easily allowed me to send tweets using the Twitter API [2].

No spam, no advertising, no bullshit: just the content on both accounts.

If you're into that kind of thing, follow @oss_announce and @foss_security.

[1] There's also an existing account, called @oss_security (mine is named @foss_security, with an extra 'f'), but that one only pipes everything from the oss-security mailing list to Twitter, including replies and discussions, no other data sources.
[2] Built with dg/twitter-php library


April 18, 2016

We’ve just released Activiti 5.20.0. This is a bugfix release (mainly fixing the bug around losing message and signal start event subscriptions on process definition redeployment, see ACT-4117). Get it from the download page or of course via Maven.

April 17, 2016

The post Bash on Windows: a hidden bitcoin goldmine? appeared first on ma.ttias.be.

Bash on Windows is available as an insider preview; it is not generally available and not final (so this behaviour will hopefully still change). Processes started in Bash do not show up in the Windows Task Manager and can be used to hide CPU-intensive workers, like bitcoin miners.

I ran the sysbench tool inside Bash (which is just an apt-get install sysbench away) to stress test the CPU.

$ sysbench --test=cpu --cpu-max-prime=40000 run

That looks like this:

[screenshot: sysbench running inside Bash]

The result: my test VM went to 100% CPU usage, but there is no (easy?) way to see that from within Windows.

The typical task manager reports 100% CPU usage, but can't show the cause of it.

[screenshot: Task Manager reporting 100% CPU usage]

The task history, which can normally show which processes used how much CPU, memory or bandwidth, stays blank.

[screenshot: the task history staying blank]

There's a details tab with more specific process names etc., it's not shown there either.

[screenshot: the details tab]

And the performance tab clearly shows a CPU increase as soon as the benchmark is started.

[screenshot: the performance tab showing the CPU spike]

To me, this shows an odd duality in the "Bash on Windows" story. I know it's beta/alpha and things will probably change, but I can't help but wonder: if this behaviour remains, Bash will become a perfect place to hide bitcoin miners or malware.

You can see your server or desktop is consuming 100% CPU, but finding the source can prove to be very tricky for Windows sysadmins with little Linux knowledge.
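Ironically, the quickest way to track down the culprit is from inside the Bash environment itself, with the usual Linux tools (assuming procfs works well enough in the preview). A quick check, for example:

$ ps aux --sort=-%cpu | head -n 5

That lists the five most CPU-hungry processes, sysbench included, something the Windows-side tools above can't show.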

Update: this has been confirmed and fixed by the BashOnWindows team, so we should expect it to be resolved in the next release!

The post Bash on Windows: a hidden bitcoin goldmine? appeared first on ma.ttias.be.

April 16, 2016

The Belgian Marechaussee has done good work. The scum has largely been rounded up. That must have taken real detective work. Yet there turned out to be few infringements, and the population surrendered few freedoms.

In other words: the work was targeted.

The higher the results achieved with the fewer freedoms surrendered, the higher the quality of our services.

We are going to keep this little country free.

April 15, 2016

April 14, 2016


I love walking alone at night through deserted streets. Cutting through a dark wood. Breathing the night air. I feel good.

Afraid?

I don't really know fear. I trust my ability to defend myself. I trust my burst of speed. Above all, I trust the fact that assaulting me is a very serious act, one society condemns completely. Attackers are therefore relatively few, and they must be truly desperate. What's more, a possible assault would have very little chance of leaving lasting damage. And even if it did, I know I would have the support of society as a whole, and that my attacker would be unanimously condemned.

So I feel safe; I am lucky to live in a secure place.

Because I am a man.

If I turn into a woman, the world is immediately different. Physically, I am less likely to be stronger or faster than a potential attacker. I am forced to treat every man as a potential aggressor. The looks, the remarks, even the kindest ones, remind me at every moment that in the eyes of males I am, above all, sexual prey.

Whether the context is professional, intellectual or recreational, the first remarks will always concern my looks and, implicitly, my fuckability. And if the conversation moves beyond sexual considerations, it will mostly be to discuss my potential role as wife and mother.

Potential attackers don't need to be cornered or desperate to act against me: it is enough that their sexual urges run a little too strong.

What's more, I know that society barely condemns them. If I am the victim of a rape, the police will start by making me feel guilty, asking why I was walking in that place in that outfit. Why I was alone. As a rape victim, I am partly to blame. And, forever, impure and destroyed for life in the eyes of men, even the most well-meaning ones.

In the best case, my attacker will be punished as if he had given me a few blows. If he is ever found.

So I do not feel safe; I live in fear. Whether in the street, at a professional event, a party, a concert. Without any embarrassment, men standing a short distance away talk about my cleavage, about the sexual positions they are going to try with me or with another woman, about whether or not I am fuckable. They don't even hide it; it doesn't occur to them that I can hear them. They are even convinced they are being kind and attentive by pointing out my physical qualities. And I am forced to feel flattered, for fear of being seen as a prude, of being excluded from their circles and, in the professional case, of watching career opportunities evaporate.

A vast majority of men and women criticise me, openly or without realising it, because I try to grant myself the same freedoms of speech and action as men. They find me weird, disturbing, shocking. To women, I am not feminine or sensitive enough. A simple swear word or a sexual innuendo takes on an unsuspected magnitude in my mouth. I am not allowed to say them, only to laugh along when they come from a man.

Complaining is out of the question: my audience will immediately point out how lucky I am to live in a country where women already have so many rights, that I am exaggerating, that in Saudi Arabia it is far worse but that at least the men there get some peace (coarse laughter). Yet these rights I enjoy are so fragile... Every day we must fight to make them exist, to make them real, and the fight is all the harder because most people are convinced the battle is over, that equality has been achieved.

Worse, some men go so far as to feel oppressed because I try to enjoy the same rights as they do. It is true that after millennia of privilege, a return to equality must feel like oppression.

Today, I am afraid to be what I am, afraid of losing the rights women have fought for, and despite everything I must hide this fear and be beautiful and strong in order to move forward.

Then again, luckily, I do not know this fear. Because I am a man.

But often, shame grips me at the thought of living in a supposedly free world where more than half the population is forced to live in fear of the other half.

 

Photo by Nicola Albertini.

Thank you for taking the time to read this freely-paid post. Feel free to support me with a few milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

So now and again people try out Autoptimize in a … clueless manner, see things breaking, deactivate it immediately (no problem up to that point) and post a bad review. In general I will patiently reply that they can fix almost any issue themselves using the settings screen, that there's info in the FAQ, and I'll even troubleshoot for them if they're kind of nice. This gentle and helpful approach has, in the past, already resulted in updated, raving reviews and can only be highly recommended.

But sometimes I'm just in a bad mood, and I post a rant like this one;

Zoukspot
★☆☆☆☆ You’re doing it wrong

If I could review your review, the title would be “you’re doing it wrong”. Because, you are indeed doing it wrong @zoukspot, on multiple levels.

First and foremost; optimizing JavaScript can be tricky and it is not uncommon to have to adjust AO (or any JS-optimizer) configuration to fix things. So if you have a problem in your context (beaverbuilder) and you don’t configure AO to fix that problem but simply deactivate AO instead, then you’re doing it wrong.

But you're doing it wrong on another level as well:
1. If something (say a plugin) doesn't work, you're supposed to look at the controls (in this case, the settings page) to see if you can fix it there (spoiler: I'm 100% sure this is fixable without having to deactivate AO entirely).
2. If you can’t find your way around those controls (the settings-page) you’re supposed to look in the manual (in this case; the FAQ). RTFM, as they used to say in the days of yore.
3. If the manual (that great FAQ) doesn’t help you fix your stuff, you could ask customer service (in this case; the forum) for assistance. I tend to “hang out” there quite often and am very responsive.

If you’ve gone through these steps and you still can’t get your something (a plugin) working and you feel you haven’t received the support you think you’re entitled to, then (and as far as I’m concerned, only then, but I might be biased) you can post a negative review about how bad that piece of junk broke your site.

Now I can’t know for sure if you had a look at the settings page or the FAQ (although I would very much doubt that), but I know for a fact that you haven’t sought help for your problem. So why, might I ask, are you posting a 3-star review if you clearly did no effort to look for a solution for your problem yourself?

Based on all of this, I can only rate your review with 1 star. I will gladly reconsider my review if you reconsider yours.

(sorry, bad day at the office and kind of frustrated about pointless reviews like these. you’re not the first, you won’t be the last, but you happened to be in the wrong place at the wrong time. no hard feelings. well, not a lot of them anyway ;-) )

But all will be better tomorrow and I’ll be patient and helpful once again, I promise!

April 13, 2016

Hello Kettle and Pentaho fans!

Yes indeed, we've got another present for you in the form of a new Pentaho release: version 6.1.
This predictable, steady flow of releases has, in my opinion, pushed the popularity of PDI/Kettle over the years, so it's great that we manage to keep this up.

The image above shows the evolution of PDI download counts over the years on SourceForge only.

 

There's actually a ton of really nice stuff to be found in version 6.1, so for a more complete recap I'm going to refer you to my friend and PDI product manager Jens on his blog.

However, there are a few favorite PDI topics I would like to highlight…

Dynamic ETL

Doing dynamic ETL has been on my mind for a long time. In fact, we started working on this idea in the summer of 2010 to have something to show for at the Pentaho Community Meetup of that year in the beautiful Cascais (Portugal). Back then I remember getting a lot of blank stares and uncomprehending grunts from the audience when I presented the idea. However, over the last couple of years dynamic ETL (or ETL metadata injection) has been a tremendous driver for solving the really complex cases out there in many areas like Big Data, data ingestion and archiving, IoT and many more. For a short video explaining a few driving principles behind the concept, see here:

More comprehensive material on the topic can be found here.

In any case, I'm really happy to see us keep up the continued investment in making metadata injection better and more widely supported. So in version 6.1 we're adding support for a bunch of new steps, including:

  • Stream Lookup (!!!)
  • S3 Input and Output
  • Metadata Injection (try to keep your sanity while you’re wrestling with this recursive puzzle)
  • Excel Output
  • XML Output
  • Value Mapper
  • Google Analytics

It's really nice to see these new improvements drive solutions across the Pentaho stack, helping out with Streamlined Data Refinery, auto-modeling and much more. Jens has a tutorial on his blog with step-by-step instructions, so make sure to check it out!

Data Services

PDI data services is another of those core technologies which frankly take time to mature and be accepted by the larger Pentaho community. However, I strongly feel that these technologies set us apart from what anyone else in the DI/ETL market is doing. In this case, simply being able to run standard SQL against a Kettle transformation is a game changer. As you can tell, I'm very happy to see the following advances piled on top of the improvements of the last couple of releases:

  • Extra Parameter Pushdown Optimization for Data Services – You can improve the performance of your Pentaho data service through the new Parameter Pushdown optimization technique. This technique is helpful if your transformation contains any step that should be optimized, including input steps like REST where a parameter in the URL could limit the results returned by a web service.
  • Driver Download for Data Services in Pentaho Data Integration – When connecting to a Pentaho Data Service from a non-Pentaho tool, you previously needed to manually download a Pentaho Data Service driver and install it. Now in 6.1, you can use the Driver Details dialog in Pentaho Data Integration to download the driver.
  • Pentaho Data Service as a Build Model Source – You can use a Pentaho Data Service as the source in your Build Model job entry, which streamlines the ability to generate data models when you are working with virtual tables.

 

Virtual Data Sets overview in 6.1

Other noteworthy PDI improvements

As always, the change-list for even the point releases like 6.1 is rather large but I just wanted to pick 2 improvements that I really like:

  • JSON Input: we made it a lot faster and the step can now handle large files (hundreds of MBs) with 100% backward compatibility
  • The transformation and job execution dialogs have been cleaned up!
The new run dialog in 6.1

I hope you’re all as excited as I am to see these improvements release after release after release…

As usual, please keep giving us feedback on the forums or through our JIRA case tracking system. This helps us keep our software stable in an ever-changing ICT landscape.

Cheers,
Matt

April 12, 2016

The post What happens when you run “rm -fr /” on a Linux machine? appeared first on ma.ttias.be.

I made a quick video to show what exactly happens when you type rm -fr / on a Linux machine.

Spoiler: nothing, actually. At least, not until you use the --no-preserve-root flag.
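On distributions that ship a recent GNU coreutils, that failsafe looks roughly like this:

$ rm -fr /
rm: it is dangerous to operate recursively on '/'
rm: use --no-preserve-root to override this failsafe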

Big, very big, warning: do. not. run. these. commands. on. your. servers. Some distributions prevent you from running the 'rm', but older distributions will happily delete all your files.

This was a nice test in screen capture and basic video editing; there isn't much to it otherwise.

Sidenote: while making this video, I learned the hard way that Vagrant auto-mounts its project directories. And when you tell rm to be recursive, it'll be recursive. Woops..

The post What happens when you run “rm -fr /” on a Linux machine? appeared first on ma.ttias.be.

April 11, 2016

The post Video: HTTP/2 for PHP Developers at FOSDEM 2016 appeared first on ma.ttias.be.

One of my most recent presentations about HTTP/2, given at FOSDEM 2016 in February, is available for viewing.

It was a lot of fun to give a presentation at a conference with so many great speakers and open source contributors.

There's a bit of background noise (as always on FOSDEM videos), but it's pretty doable. It also gets better about 10 minutes in, once everyone has found their seat.

The slides are available online too, in case they're not very visible on the video.

I hope to submit a new talk next year, too!

The post Video: HTTP/2 for PHP Developers at FOSDEM 2016 appeared first on ma.ttias.be.

Next Monday, OSGeo.be will be organising its first GeoDevEvening. The theme will be web mapping, and especially OpenLayers 3. There will be two speakers: Pierre Marchand of Atelier Cartographique will give an overview of web mapping tools. Later, we will take a deeper dive with Thomas Gratier, one of the authors of the OpenLayers 3 Beginner's Guide.

OpenLayers 3 is a JavaScript library for web mapping, published under the BSD license. It bundles a set of functionalities for creating both lightweight web mapping applications and heavier ones for advanced use. It can display geographic features (markers, lines, polygons) with custom styles, usually on top of background map services from various sources (including OpenStreetMap). It handles interactions such as selecting and drawing geographic entities, displaying and measuring geographic coordinates, and managing different representations of the Earth through cartographic projections.

The event will take place at BEL in Brussels (those who were at FOSS4G.be will remember the place) next Monday, 18 April, from 19:00 until 22:00. The event is free. You can register here.

Hope to see many of you in Brussels!

About OSGeoDevEvening:
OSGeoDevEvening is an event where developers and researchers can learn and speak about a specific technology. We are always looking for speakers for the next events, so if you want to give a presentation about R, GRASS, QGIS, PostGIS... please contact us.

April 10, 2016

April 08, 2016

Earlier this month, the international media group Hubert Burda Media (about 2.5 billion in annual revenue, more than 10,000 employees, and more than 300 titles) released its Drupal 8 distribution, Thunder. Thunder includes custom modules specifically tailored to the needs of professional publishers.

This is great news for three reasons: (1) I've long been a believer in Drupal distributions, (2) I believe that publishers shouldn't compete through CMS technology, but through strong content and brands, and (3) Thunder is based on Drupal 8.

Distributions enable Drupal to compete against a wide range of turnkey solutions, as well as break into new markets. The number of vertical distributions that can be created is nearly limitless. Thunder is a great example of that.

Professional publishing is one of the industries that has faced the most extensive business model disruption because of the web. Many companies are feeling pressure on their revenue streams, yet you'll find that some companies still focus their efforts on building proprietary, custom CMS platforms as a way to differentiate. This doesn't have to be the case – I've long believed that Drupal (and open source, more generally) can give publishers endless ways to differentiate themselves at much lower costs.

The following video gives an overview of the Thunder approach:

Custom features for publishers

Thunder adds a range of publisher-centric Drupal modules to Drupal 8 core. Specifically, Burda added integrations with audience "circulation" counting tools and ad servers, as well as single sign-on (SSO) support across multiple sites. They've also developed a theme which implements infinite scrolling.

Thunder users also benefit from a range of channel- and feature-specific enhancements through collaboration with industry partners. The following extensions are already available or in the final stages of development:

  • Riddle.com provides an easy-to-use editor for interactive content. The data from the resulting polls and quizzes is available to the publisher.
  • nexx.tv offers a video CMS and its video player. And Microsoft will support the video solution with 100,000 free video streams per month through its Azure cloud.
  • Facebook will provide a module for integrated publishing to their Instant Articles, exclusively for Thunder users.

Smart collaboration

I admire the approach Burda is taking to bring publishers, partners and developers together from throughout the industry to develop the best open-source CMS platform for publishers.

At the core is a team of publishing experts and developers led by Ingo Rübe, CTO for Burda's German publishing operations, and initiator of Thunder. This team will also be responsible for coordinating the continuous development and enhancement of Thunder.

Under Thunder's policy, all features provided by industry partners must be offered for free or with a freemium model; in other words, a significant part of the functionality has to be provided at no cost at all. Smaller publishers will likely benefit from this approach, as they will be able to use a full-fledged publishing solution that is continuously enhanced and maintained by larger partners.

Big brands are already using Thunder

Although Thunder is still in public beta, Burda has migrated three brands to Thunder. The German edition of Playboy (about 2M monthly visits) was the first to move at the end of 2015. The fashion brand InStyle (about 1.8M monthly visits) and gardening website "Mein schöner Garten" (about 1.5M monthly visits) are also running on Thunder. Most of the other German Burda brands are planning to adopt Thunder in the next 12 months. This includes at least 20 brands such as Elle.de and Bunte.de, which have more than 20 million monthly visits each.

You can download Thunder from https://www.drupal.org/project/thunder.

April 07, 2016

This Thursday, 21 April 2016 at 19:00, the 48th Mons session of Belgium's Jeudis du Libre will take place.

The topic of this session: WordPress, more than a blogging tool

Theme: Web

Audience: developers, web designers, curious technophiles

Speaker: Romain Carlier (Reaklab)

Location: Campus technique (ISIMs) of the Haute Ecole en Hainaut, Avenue V. Maistriau 8a, Salle Académique, 2nd building (see the map on the ISIMs website, and here on OpenStreetMap).

Participation is free and only requires registering by name, preferably in advance, or at the door. Please indicate your intention by registering via http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also enjoy the support of our partners: CETIC, Normation, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to check the agenda and subscribe to the mailing list to automatically receive the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organised on the premises of, and in collaboration with, Mons-based Hautes Écoles and university faculties involved in computer science education (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: You have all read it at least once: WordPress is a blog engine. But for many years it has also been an increasingly complete CMS, one that today powers more than 25% of the web. In all its diversity, the web obviously, and thankfully, doesn't stop at blogs. Discover WordPress through the eyes of an agency specialised in building on top of it, and familiarise yourself with the terms themes, plugins, hooks, filters, actions, CPTs, metaboxes and shortcodes.

April 06, 2016

I've been writing a lot about what I believe is important for the future of Drupal, but now it is your turn. After every major release of Drupal I do a "product management survey" to get your ideas and data on what to focus on for future releases of Drupal (8.x/9).

The last time we had such a survey was after the release of Drupal 7, six months into the development of Drupal 8. I presented the results at DrupalCon London in 2011. The results informed the Drupal community at large, but were also the basis for defining initiatives for Drupal 8. This time around, I'm hoping for similar impact, but also some higher-level strategic thinking about how Drupal should respond to various market trends.

Take the State of Drupal 2016 survey now

It shouldn't take more than 10-15 minutes to fill out the survey. We'd like to hear from everyone who cares about Drupal: content managers, site owners, site builders, module developers, front-end developers, people selling and marketing Drupal, etc. Whether you are a Drupal expert or just getting started with Drupal, every voice counts! Best of all, with Drupal 8's new 6-month release cycle, we can act on the results of this survey much sooner than in the past.

I will be presenting the results during my DrupalCon New Orleans keynote (the video recording of the keynote, the presentation slides and the survey results will be downloadable from my blog afterwards). Please tell us what you think about Drupal; your feedback will shape future versions of Drupal.

April 05, 2016

April 04, 2016

At DrupalCon Mumbai I sat down for several hours with the Drupal team at Pfizer to understand the work they have been doing on improving Drupal's content management features. They built a set of foundational modules that advance Drupal's content workflow capabilities: from content staging to multi-site content staging, better auditability, offline support, and several key user experience improvements like full-site preview. In this post, I want to shine a spotlight on some of Pfizer's modules, and kick off an initial discussion around the usefulness of including some of these features in core.

Use cases

Before jumping to the technical details, let's talk a bit more about the problems these modules are solving.

  1. Cross-site content staging — In this case you want to synchronize content from one site to another. The first site may be a staging site where content editors make changes. The second site may be the live production site. Changes are previewed on the stage site and then pushed to the production site. More complex workflows could involve multiple staging environments like multiple sites publishing into a single master site.
  2. Content branching — For a new product launch you might want to prepare a version of your site with a new section featuring the new product. The new section would introduce several new pages, updates to existing pages, and new menu items. You want to be able to build out the updated version in a self-contained 'branch' and merge all the changes as a whole when the product is ready to launch. In an election scenario, you might want to prepare multiple sections; one for each candidate that could win.
  3. Preview your site — When you're building out a new section on your site for launch, you want to preview your entire site, as it will look on the day it goes live. This is effectively content staging on a single site.
  4. Offline browse and publish — Here is a use-case that Pfizer is trying to solve. A sales rep goes to a hospital and needs access to information when there is no wi-fi or a slow connection. The site should be fully functional in offline mode and any changes or forms submitted, should automatically sync and resolve conflicts when the connection is restored.
  5. Content recovery — Even with confirmation dialogs, people delete things they didn’t want to delete. This case is about giving users the ability to “undelete” or recover content that has been deleted from their database.
  6. Audit logs — For compliance reasons, some organizations need all content revisions to be logged, with the ability to review content that has been deleted and connect each action to a specific user so that employees are accountable for their actions in the CMS.

Technical details

All these use cases share a few key traits:

  1. Content needs to be synchronized from one place to another, e.g. from workspace to workspace, from site to site or from frontend to backend
  2. Full revision history needs to be kept
  3. Content revision conflicts need to be tracked

Much of this started as a single module: Deploy. The Deploy module was first created by Greg Dunlap for Drupal 6 in 2008, for a customer of Palantir. In 2012, Greg handed over maintainership to Dick Olsson. Dick continued to improve the Deploy module for Al Jazeera while working at Node One. Later, Dave Hall created a second Drupal 7 version with more significant improvements based on feedback from different users. Today, both Dick and Dave work for Pfizer and have continued to fold lessons learned into the Drupal 8 version of the module. After years of experience working on the Deploy module and various redesigns, the team has extracted the functionality into a set of modules:

Pfizer content workflow improvements modules

Multiversion

This module does three things: (1) it adds revision support for all content entities in Drupal, not just the nodes and block content supported by core; (2) it introduces the concept of parent revisions, so you can create different branches of your content or site; and (3) it keeps track of conflicts in the revision tree (e.g. when two revisions share the same parent). Many of these features complement the ongoing improvements to Drupal's Entity API.

Replication

Built on top of the Multiversion module, this lightweight module reads the revision information stored by Multiversion, uses it to determine which revisions are missing from a given location, and lets you replicate content between a source and a target location. The next two modules, Workspace and RELAXed Web Services, depend on the Replication module.

Workspace

This module enables single-site content staging and full-site previews. The UI lets you create workspaces and switch between them. With the Replication module, different workspaces on the same site can behave like different editorial environments.

RELAXed Web Services

This module facilitates cross-site content staging. It provides a more extensive REST API for Drupal with support for UUIDs, translations, file attachments and parent revisions — all important to solve unique challenges with content staging (e.g. UUID and revision information is needed to resolve merge conflicts). The RELAXed Web Services module extends the Replication module and makes it possible to replicate content from local workspaces to workspaces on remote sites using this API.

In short, Multiversion provides the "storage plumbing", whereas Replication, Workspace, and RELAXed Web Services, provide the "transport plumbing".

Deploy

Historically, the Deploy module has taken care of everything from bottom to top related to content staging. But for Drupal 8, Deploy has been rewritten to just provide a UI on top of the Workspace and Replication modules. This UI lets you manage content deployments between workspaces on a single site, or between workspaces across sites (if used together with the RELAXed Web Services module). The maintainers of the Deploy module have put together a marketing site with more details on what it does: http://www.drupaldeploy.org.

Trash

To handle use case #5 (content recovery) the Trash module was implemented to restore entities marked as deleted. Much like a desktop trash or recycle bin, the module displays all entities from all supported content types where the default revision is flagged as deleted. Restoring creates a newer revision, which is not flagged as deleted.

Synchronizing sites with a battle-tested API

When a Drupal site installs and enables RELAXed Web Services, it will look and behave like the REST API of CouchDB. This is a pretty clever trick, because it lets us leverage the CouchDB ecosystem of tools. For example, you can use PouchDB, a JavaScript implementation of CouchDB, to provide a fully-decoupled offline database in the web browser or on a mobile device. Using the same API design as CouchDB also gives you "streaming replication" with instant updates between the backend and frontend. This is how Pfizer implements offline browse and publish.
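As a rough illustration (the endpoint prefix, hostname, workspace name and credentials here are all assumptions, not documented specifics): once the module is enabled, a workspace should answer the same kind of database-info request a CouchDB server would.

$ curl -u admin:admin http://example.com/relaxed/live

If that returns a CouchDB-style database document, any CouchDB-compatible client, PouchDB included, can replicate against it.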

Switching between multiple workspaces

This animated gif shows how a content creator can switch between multiple workspaces and get a full-site preview on a single site. In this example the Live workspace is empty, while a lot of content has been added on the Stage workspace.

Deploying a workspace

This animated gif shows how a workspace is deployed from 'Stage' to 'Live' on a single site. In this example the Live workspace is initially empty.

Conclusion

Drupal 8.0 core packed many great improvements, but we didn't focus much on advancing Drupal's content workflow capabilities. As we think about Drupal 8.x and beyond, it might be good to move some of our focus to features like content staging, better auditability, offline support, full-site preview, and more. If you are a content manager, I'd love to hear what you think about better supporting some or all of these use cases. And if you are a developer, I encourage you to take a look at these modules, try them out and let us know what you think.

Thanks to Tim Millwood, Dick Olsson and Nathaniel Catchpole for their feedback on the blog post. Special thanks to Pfizer for contributing to Drupal.

For many years, the talks at SMWCon, the biyearly Semantic MediaWiki conference, have been recorded. These recordings have ended up in various locations, though they are always linked from the conference program pages on the wiki. Perhaps long overdue, we now have a dedicated YouTube channel for our videos.

The talks from the recent SMWCon Fall 2015 in Barcelona have now been made available via this new channel. Many thanks to the organizers of SMWCon Fall 2015 for the effort, and to BennyBeat for the recording and editing of the talks.

To help you find the video with the most cats, here it is 🙂

April 02, 2016

I’m happy to announce the immediate availability of Maps 3.5 and Semantic Maps 3.3.

Both of these are feature releases that bring new functionality, enhancements and bugfixes. They do not contain any breaking changes, and the installation process remains unchanged. Upgrading is thus easy and always recommended.

New in Maps 3.5

  • Added egMapsGMaps3Language setting (by James Hong Kong and Karsten Hoffmeyer)
  • Added osm-mapquest layer for OpenLayers (by Bernhard Krabina)
  • Added license label to display on “Special:Version” (by Karsten Hoffmeyer)
  • Improved Mobile Frontend support (by James Hong Kong)
  • Added missing Leaflet system messages (by Karsten Hoffmeyer)

New in Semantic Maps 3.3

  • Added userparam support for the map result formats (by James Hong Kong)
  • Made Google Maps initialization more robust (by Karlpietsch)
  • Added missing system messages (by Karsten Hoffmeyer)

Both extensions have now also been tested with PHP 7 and the latest development version of MediaWiki. These are the last releases that will support PHP older than 5.5 and MediaWiki older than 1.23.

I’d like to thank everyone who contributed, which includes the people listed above, the translators on TranslateWiki and everyone that reported issues. Contributions are always welcome. For good starting points, have a look at newcomer tasks for Maps and newcomer tasks for Semantic Maps.

April 01, 2016

The post ssh fatal: Access denied for user by PAM account configuration [preauth] appeared first on ma.ttias.be.

This was an interesting issue I encountered on a Linux machine. I renamed a user in /etc/passwd, but forgot to rename its entry in /etc/shadow.

The result was that every login attempt ended up logging the following and immediately closing the SSH connection for the user.

sshd[22715]: fatal: Access denied for user username by PAM account configuration [preauth]
sshd[22723]: fatal: Access denied for user username by PAM account configuration [preauth]
sshd[23144]: fatal: Access denied for user username by PAM account configuration [preauth]

So a reminder to myself: if you rename a user in /etc/passwd, also rename it in /etc/shadow.

If you encounter this error too, check if the user that's trying to log in has a shadow-entry. It doesn't need a password, but it needs a corresponding entry in /etc/shadow.
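A quick way to verify, with 'username' standing in for the account in question:

$ grep '^username:' /etc/shadow || echo 'no shadow entry for username'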

The post ssh fatal: Access denied for user by PAM account configuration [preauth] appeared first on ma.ttias.be.

Some people don't like the vulture credit companies preying on the working poor.

As seen in the tram in Riga, Latvia.

OK, as laser cut and hung up in the tram in Riga, Latvia...

March 31, 2016

The post Running Bash on Ubuntu on Windows: Demo appeared first on ma.ttias.be.

Microsoft surprised everyone last night when they announced that Ubuntu is running natively in Windows 10. That means we get all the Ubuntu/Linux perks in the Windows world.

Microsoft has also released a demo of Ubuntu/Bash running on Windows.

What follows are screen captures of Bash running inside Windows. If we combine this with Microsoft porting OpenSSH to the platform too, we suddenly have an environment we can manage with a Unix mindset. That means:

  1. Git pre/post commit hooks written in bash (see the sketch after this list)
  2. A native package manager (apt)
  3. All our Linux scripts and tools working within Windows
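To make the first point concrete, here's a minimal, purely hypothetical pre-commit hook in bash, the kind of thing that could now run on Windows too:

$ cat .git/hooks/pre-commit
#!/bin/bash
# example hook: refuse commits that still contain TODO markers
if git diff --cached | grep -q 'TODO'; then
    echo "commit blocked: resolve the TODO items first"
    exit 1
fi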

Screenshots of Bash on Windows

As provided by the demo, here are some screen captures of Bash and Ubuntu running on Windows.

Very impressive stuff. It covers Linux binaries (ELF), mountpoints, apt installing, running a WEBrick-like webserver, ...

[screenshot: lsb_release output in Ubuntu on Windows]

[screenshot: running a native Linux ELF binary]

[screenshot: mountpoints]

[screenshot: compiling with gcc]

[screenshot: installing packages with apt]

[screenshot: serving a web app with rackup]

I'm curious about the quirks and caveats Ubuntu/Bash on Windows will bring, but I'm excited about the possibilities here!

If you want to know more, check out the video.

But now I can't decide if I should categorise this post as "Windows" or "Linux". Confusing times ahead!

The post Running Bash on Ubuntu on Windows: Demo appeared first on ma.ttias.be.

March 30, 2016

The post Ruby gem search: show all remote versions appeared first on ma.ttias.be.

Here's a quick reminder on how to use Ruby gems at the command line to search for remote packages and show all their versions.

This example searches for a gem called "wkhtmltoimage-binary". By default, it will tell you which is the latest version of that particular gem.

$ gem search wkhtmltoimage-binary

*** REMOTE GEMS ***
wkhtmltoimage-binary (0.12.2)

If you want to search all available versions, because you might need a very specific one, you can query RubyGems with --remote --all.

$ gem search wkhtmltoimage-binary --remote --all

*** REMOTE GEMS ***
wkhtmltoimage-binary (0.12.2, 0.12.1, 0.12, 0.11.0.1.1, 0.11.0.1)

And as a bonus, if you want to install a specific version, you can do so with the -v parameter.

$ gem install wkhtmltoimage-binary -v 0.11.0.1
Fetching: wkhtmltoimage-binary-0.11.0.1.gem (100%)
Successfully installed wkhtmltoimage-binary-0.11.0.1
...
1 gem installed

Done.

The post Ruby gem search: show all remote versions appeared first on ma.ttias.be.

The authoring experience improvements we made in Drupal 8 appear to be well-received, but that doesn't mean we are done. With semantic versioning in Drupal 8, we can now make more UX changes in follow-up releases of Drupal 8. So now is a good time to start moving Drupal's user experience forward.

The goal of this post is to advance the conversation we started over a month ago in a blog post talking about the concept of turning Drupal outside-in. In today's blog post, we'll show concrete examples of how we could make site building in Drupal easier by embracing the concept of outside-in. We hope to provide inspiration to designers, core contributors and module maintainers for how we can improve the user experience of Drupal 8.2 and beyond.

What is outside-in?

In Drupal you often have to build things from the ground up. If you want to make a list of events, you need to wade through 20 different menu options to reach a user interface with configuration options like "boolean" and "float", build a content type, create content, and then build a view. Essentially you need to understand everything before you can build anything.

In an "outside-in" experience – or what Kevin OLeary (Director of Design on my team at Acquia) calls Literal UI – you can click anything on the page, edit its configuration in-place, and watch it take effect immediately.

Over the past few years Drupal has adopted a more outside-in approach, the most obvious example being Drupal 8's in-place editing. It enables you to edit what you see in an uninterrupted workflow with faster preview, and removes the need to visit the administration backend before you can start editing.

To evaluate how outside-in can be applied more broadly in order to make Drupal easier to use, Kevin created a few animated gifs.

Example #1: editing menu items

The current inside-out experience

Editing menu options in Drupal has already become more outside-in with contextual links. The contextual links take away some of the guesswork of finding the proper administration UI, but once you arrive at the configuration page there is still a lot of stuff to take in, only some of which is relevant to this task.


The current inside-out experience for editing a menu item in Drupal

The current inside-out experience for editing a menu item: adding a menu link and changing its position involves 2 separate administration pages, 4 page refreshes and more than 400 words of text on the UI.

Anyone familiar with Drupal will recognize the pattern above: you go to an administration UI, make some changes, then you go back to the page to see if it worked. This context switching creates what UX people call "cognitive load". On an administration page you need to remember what was on the site page, and vice versa.

To complete this task you need to:

  1. Find the contextual link for the menu (not simple since it's on the far side of the page)
  2. Choose the correct option "edit menu"
  3. Find and click the "add link" button
  4. Type the menu link name
  5. Type the menu link path beginning with a forward slash
  6. Understand that this change needs to be saved
  7. Scroll to the bottom of the page
  8. Save the link
  9. Find the link in the list of links
  10. Understand that dragging up/down in this abstraction is equivalent to moving right/left in that actual page
  11. Scroll to the bottom of the page
  12. Save the menu

The problem is not just that there are too many pages, clicks, or words; it's that each step away from the actual page introduces new opportunities for confusion, error and repetition. In user testing, we have seen users who were unable to find the contextual link, to understand which option to choose, to find the "add link" button, to add a path, to drag-and-drop links, or to save before leaving the UI. When these things happen in context, feedback about whether you are "doing it right" is immediate.

The proposed outside-in experience


The proposed outside-in experience for editing a menu item in Drupal

The proposed outside-in experience for editing a menu item. Rather than moving out of context to an administration page to get the job done, configuration happens right on the page, where you see the effect of each action. You start adding a menu item simply by selecting the menu on the page. Both the menu item and the item's path are in focus to reinforce the connection. As you type, a path is proposed from the link text.

Now all you need to do is:

  1. Click the menu on the page
  2. Find and click the "add link" button
  3. Type the name of the menu item
  4. Revise the menu item's path if needed
  5. Drag the link to its new location
  6. Close the drawer

One important aspect of this approach is that all actions that produce a visible change have bi-directional control and bi-directional feedback. In other words, if you can drag something in the configuration drawer you should also be able to drag it on the page, and the changes should happen simultaneously.

Example #2: adding a block to a page

The current inside-out experience

The process of placing a block on a page can be difficult. Once you discover where to go to add a block, which is in itself a challenge, the experience requires a lot of reading and learning, as well as trial and error.


The current inside-out experience for adding a block

The current inside-out experience for adding a block. Not all steps are shown.

To complete this task you need to:

  1. Figure out where to go to place a block
  2. Go to /block-layout
  3. Go to /display-regions to find out where the region is on the page
  4. Go back to /block-layout
  5. Find the region and click "add block"
  6. Find the block you want to place and click "place block"
  7. Configure the block
  8. Read about how blocks can appear on multiple pages and for different content types and roles
  9. Read what content types are
  10. Read what roles are
  11. Read what a "path" is
  12. Read how to find the path for a page
  13. Go back to the page and get its path
  14. Go back to /block-layout and add the path to the visibility settings
  15. Drag your block to the position where you want it
  16. If your blocks are arranged horizontally, learn that "up and down" in the block layout UI will mean "left and right" on the actual page
  17. Find the "back to site" link
  18. Go back to the page to see if you did it right

Eventually you'll use what you just learned, but Drupal makes you learn it first instead of just showing what is immediately necessary. Both the task and the learning can be simplified by bringing the configuration closer to the object you are configuring.

The proposed outside-in experience


The proposed outside-in experience for adding a block

The proposed outside-in experience for adding a block. Everything happens on the actual page rather than on an administration page. Places where things can be added appear on hover. On click they show thumbnails that you can filter with autocomplete.

Now all you need to do is:

  1. Choose where to place the block
  2. Find the block you want to place
  3. Place the block
  4. Close the drawer

The "plus" button, the drop target (blue dotted rectangle) and the autocomplete are all commonly understood patterns used by other software. The task requires no or little explanation as the interactions reveal the process. By starting with selecting the location of where to place the block, we avoid the need for drag-and-drop and the complexity of dragging a block on a page that requires scrolling.

Principles and patterns

These examples show the principle that rather than taking the site builder to a separate administration backend, the experience should begin with the actual page and show how the changes will be seen by the end-user. The patterns shown here are less important. For example, the animated gifs above show two different approaches to the options panel and there are certainly others. The important thing is not yet where the panel comes from or how it looks, but that the following criteria are met:

  • An option panel is triggered by direct interaction.
  • When triggered it only shows options for what you selected.
  • It primarily shows configuration options that produce a visible change to the page. More complex options could be accessed through an 'Advanced options' link.

The ideas in this post are meant to provide some food for thought and become the seed of some patches for Drupal 8.2. Rather than dive right into development, we're looking to the Drupal community to take these designs as a starting point to prototype and user-test more outside-in experiences in Drupal.

The next post in this series will cover outside-in experiences with more complex contexts and the problem of "leaky abstractions". If you don't want to miss the next blog post, make sure to subscribe. In the meantime, I'd love to hear what you think!

Special thanks to Kevin OLeary for advocating outside-in thinking. Thanks to Preston So, Gábor Hojtsy, Angie Byron, Bojhan Somers and Roy Scholten for their feedback.

March 29, 2016

The post nginx-1.9.13: no longer retry non-idempotent upstream requests by default appeared first on ma.ttias.be.

Release 1.9.13 of Nginx, out just a couple of hours ago, fixes a long-standing bug: non-idempotent requests could be resent to another upstream, potentially causing multiple form submits, multiple deletes, ...

In simpler terms: a POST request sent to Nginx could be sent to multiple upstreams if for some reason the connection to the initial upstream got closed. That upstream may very well have received that POST request and modified data as a result. But Nginx would resend that POST to a second upstream either way.

You can imagine the hard-to-debug errors that come as a result of that.

Either way, Nginx 1.9.13 is available now and changes that default behaviour so it no longer repeats POST, LOCK or PATCH requests.

*) Change: non-idempotent requests (POST, LOCK, PATCH) are no longer
passed to the next server by default if a request has been sent to a
backend; the "non_idempotent" parameter of the "proxy_next_upstream"
directive explicitly allows retrying such requests.

[nginx-announce] nginx-1.9.13
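If your setup actually relied on the old retry behaviour, the changelog says you can explicitly opt back in per location. A minimal sketch (the file path and upstream name are made up for illustration):

$ vim /etc/nginx/conf.d/example.conf
...
location / {
    proxy_pass http://backend;
    # explicitly allow retrying POST, LOCK and PATCH on the next upstream
    proxy_next_upstream error timeout non_idempotent;
}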

It's interesting that this took so long to implement.

The post nginx-1.9.13: no longer retry non-idempotent upstream requests by default appeared first on ma.ttias.be.

March 28, 2016

The post certdiff appeared first on ma.ttias.be.

Here's a new tool I just open sourced: certdiff.

It's a very simple bash script that solves an annoying problem for me. If I want to diff two certificate files, I can't just run the diff tool on them. After all, certificates look like this:

-----BEGIN CERTIFICATE-----
MIIFFDCCA/ygAwIBAgISAYYqv7v7f0f38l+hU0J5/iCwMA0GCSqGSIb3DQEBCwUA
MEoxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MSMwIQYDVQQD
ExpMZXQncyBFbmNyeXB0IEF1dGhvcml0eSBYMTAeFw0xNjAyMTUwODQ4MDBaFw0x
...
OxSCq8jsCMKYcRanY/CKgYkzENLMtKZnPkLTDfl8IbVGXgHxTMZmU3XoV+qdK9A0
ko99LNSWpHNKF2kEm/UXGVmfFfBovRCmejGZzjzS0VCtlO86oYcVKZYAOBMz21Eq
UKiESWnXfaQbCnmcOB3P+LLRg8uIluQVozktJ6FIgsR3WXvV/qiP/tnjMX3BsOtW
4CPevXtyb/k=
-----END CERTIFICATE-----

If you diff two of those similar files, it would just report a difference on every line -- but you still wouldn't know which data is different.

Or in other words: if you're renewing or reinstalling a certificate, are you sure your domain name, SANs, ... are all the same? You can parse the certificates with openssl and manually check them. Or, you can use certdiff.
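For reference, the manual openssl route looks something like this, per certificate:

$ openssl x509 -in cronweekly.com/cert.pem -noout -subject -dates
subject= /CN=cronweekly.com
notBefore=Feb 15 08:48:00 2016 GMT
notAfter=May 15 08:48:00 2016 GMT

Doable for one certificate, tedious for a side-by-side comparison; certdiff does that comparison for you: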

$ certdiff cronweekly.com/cert.pem sysca.st/cert.pem
subject= /CN=cronweekly.com           |  subject= /CN=sysca.st
notBefore=Feb 15 08:48:00 2016 GMT    |  notBefore=Feb 15 09:11:00 2016 GMT
notAfter=May 15 08:48:00 2016 GMT     |  notAfter=May 15 09:11:00 2016 GMT

This example compares the SSL certificates for cronweekly.com (the open source mailing list you should subscribe to) and syscast.

The result is much cleaner and shows the differences in the actual certificate data, not just in its encoded form.

Certdiff is available as open source on GitHub as mattiasgeniar/certdiff.

The post certdiff appeared first on ma.ttias.be.

March 27, 2016

The post Postfix: cannot update mailbox /var/mail for user, File too large appeared first on ma.ttias.be.

I ran into a problem while maintaining MARC, the open source mailing list archive, which hosts some very large groups. Especially the linux-kernel archive, with almost 10.000 messages per month.

Some of those messages stopped getting parsed, with these errors in the logs.

Mar 23 21:51:42 ma postfix/local[14420]: 3874942816: to=,
  relay=local, delay=0.42, delays=0.41/0/0/0.01, dsn=5.2.2,
  status=bounced (cannot update mailbox /var/mail/linux-kernel for
  user linux-kernel. error writing message: File too large)

Mar 23 21:51:59 ma postfix/local[14420]: 6288D42816: to=,
  relay=local, delay=0.33, delays=0.32/0/0/0.01, dsn=5.2.2,
  status=bounced (cannot update mailbox /var/mail/linux-kernel for 
  user linux-kernel. error writing message: File too large)

This message got triggered when the mailbox reached 50MB in size. This is, in fact, a default limit in Postfix called mailbox_size_limit. Here's the default:

$ postconf -d | grep mailbox_size_limit
mailbox_size_limit = 51200000

In order to increase the limit, edit /etc/postfix/main.cf and increase the size.

$ vim /etc/postfix/main.cf
...
mailbox_size_limit = 5120000000

This changes the limit from 50MB to 5.000MB. Now reload Postfix and your new mailbox limit is in effect.

$ service postfix reload
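To confirm the new value is active (postconf without -d prints the effective configuration rather than the defaults):

$ postconf mailbox_size_limit
mailbox_size_limit = 5120000000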

Done.

The post Postfix: cannot update mailbox /var/mail for user, File too large appeared first on ma.ttias.be.

The post Optimize the size of .PNG images automatically on your webserver with optipng appeared first on ma.ttias.be.

A couple of weeks ago I wrote a blog post on the technical aspects of SEO. One of the readers translated it into French and reported an additional tip: the images in that post were way too big.

François was even kind enough to include a screenshot with the savings I missed out on.

[screenshot: the image savings reported by image_optim]

I used to have a workflow where I ran every image through ImageOptim to compress it as much as possible. But that's a manual step that's too easy to skip.

So here's a way to automate it. These scripts run automatically, compressing your files for you so you never have to think about them again. They use the optipng tool in the background.

Why even do this?

Now, normally, optimizing images is part of the build or release process. Before your images even hit your servers, they're in their most optimal form. However, reality isn't always that easy. If you run a dynamic site like WordPress or Drupal, anyone can upload images, and they don't necessarily have the same requirements for images as you do.

This technique catches those uploads, encodes them in a better format and nobody notices.

Nobody but you: your server uses less bandwidth and consumes less disk space. This technique also processes asynchronously: there is no delay in the uploads; images are processed a while after they're uploaded.

Install optipng

You start by installing the optipng tool.

On Red Hat or CentOS:

$ yum install optipng

On Ubuntu / Debian:

$ apt-get install optipng

Now, let's automate this.

Automatically optimise your new images every hour

Friendly reminder: before running any of these tools, make sure you have a back-up to restore from!

Update: just to be very clear, this only optimises new images. It searches for them every hour and processes them. It does not keep processing the same images over and over. The search hardly demands any server resources.

The optipng tool itself is pretty simple to use. The most basic command is optipng image.png, which optimises the image called 'image.png' and replaces it on your server (so you lose the original!).

We can add some additional parameters that further reduce the size, increase the compression ratio and strip all the meta info you don't need from the PNG file. Here's my current command for the most optimal size.

$ optipng -o7 -f4 -strip all -quiet image.png

This takes the file called 'image.png', runs all sorts of optimisations over it, strips metadata and overwrites the file with the new and improved version.
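If you first want to see what you would gain without touching the file, optipng also has a simulation mode that processes the image but writes nothing:

$ optipng -o7 -f4 -strip all -simulate image.png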

Now, to run this automatically on your server, all you need is the following cronjob. Let it run every hour, and it'll take all PNG files modified in the last hour and optimise them for you.

$ find htdocs/wp-content/ -mmin -60 -name '*.png' -print0 | xargs -0 optipng -o7 -f4 -strip all -quiet -preserve

This essentially does these things:

  1. Find all *.png files in the 'htdocs/wp-content' directory that were modified in the last hour (this is the directory where WordPress uploads its content)
  2. For each of those found files, run the optipng command on them

Add that to your crontab and you never have to think about it again.

$ crontab -l
0 * * * * find ~/htdocs/wp-content/ -mmin -60 -name '*.png' -print0 | xargs -0 optipng -o7 -f4 -strip all -quiet -preserve

All done!

Optimise all your existing images

If you're only thinking about activating this after you have a bunch of files, you may want to run this tool over all your images -- not just the ones modified in the last hour.

I'll repeat myself, but here goes: before you run this over all your files, have a back-up! These settings replace the original file, so you no longer have the original.

$ find ~/htdocs/wp-content/ -name '*.png' -print0 | xargs -0 optipng -o7 -f4 -strip all -quiet -preserve

That command searches for all '*.png' files in the ~/htdocs/wp-content/ folder and runs the optipng command over all of them.

The post Optimize the size of .PNG images automatically on your webserver with optipng appeared first on ma.ttias.be.

Tourists, please give up an extra bit of privacy, so that we at Thomas Cook can track you and your buying behaviour even more and even better.

We tend to be less flexible when it comes to our privacy. But we will have no choice but to give up part of our privacy. For our own safety, so that governments can screen who is on which flight. We already saw that evolution after the attacks of September 11, 2001 in New York.

How exactly does it help that Thomas Cook knows who will be on which flight? The security services already know that anyway. How does it help when someone sets off a bomb in the departure hall, long before any identity check? And how has the post-9/11 security theatre prevented any attack at all? What evidence is there that it has helped? What evidence is there that the extreme increase in surveillance helps?

This smells strongly of cheap political recuperation of a piece of human misery. Jan Dekeyser, CEO of Thomas Cook Belgium, wants to be able to track you and your buying behaviour. So that he can make a lot of money off your privacy. Even more money. And he makes that call now that people are afraid and willing to give up their privacy. Talk about timing.

Apparently we now also have to hand our Facebook profiles over to the CEO of Thomas Cook and his marketing department. He says so literally in his opinion piece:

That can change. By now almost everyone has a smartphone, a Facebook profile or a Twitter account. It is thus perfectly possible to take away travellers' concerns through personal communication and to inform them directly about how their journey will proceed.

So soon you probably won't be able to be a Thomas Cook customer anymore unless you have a Facebook profile that he and his company have full access to. Will we even still be allowed into the airport unless he and his companies know everything about you, so that he can map you and your buying behaviour completely?

When will it finally become clear that no amount of surveillance on top of what already exists will prevent attacks? On the contrary, the needless suspicion will only make innocent citizens angry and distrustful of their government.

So no, Jan Dekeyser, CEO of Thomas Cook Belgium, I am not giving up an extra bit of privacy to you and your marketing people. Our security services already know perfectly well who is on which flight. Those databases already exist.

Give the people responsible for passing information on to the right services better training, increase staffing and budgets, and work with the information that is already there. In other words: if you're a civil servant in a security role, do your job.

But a Thomas Cook really does not need to know even more about our private lives.

March 25, 2016

First of all, after Brussels I still stand by this:

[on encryption] First of all, as long as there is no need, nothing should be done. I don't immediately see a real necessity to be able to crack into everyone's electronics. It wasn't needed before; today there are neither more nor fewer dangerous madmen, and electronics adds little to their arsenal. So it isn't needed now either.

I also stand behind this:

Oh, and to our security services: well done catching those guys from Molenbeek and Vorst. Good job

Don't let yourself get stressed. Like all people, you perform badly when you are stressed. Hold the line. Molenbeek and Vorst, by the way, was magnificent work. Well done.

We will find that balance.

A couple of days ago a WP YouTube Lyte user asked me if Featured Video Plus and WP YouTube Lyte were compatible. It took me a day to find the answer (I first said “no”), but Featured Video Plus actually has a filter (get_the_post_video_filter) that allows one to override the code used to display the featured video. After a bit of trial and error, this is what I came up with:

add_filter('get_the_post_video_filter', 'lyte_featured_video');
function lyte_featured_video($in) {
	// ID of the post being rendered.
	$post_id = get_the_ID();
	// Featured video metadata as stored by Featured Video Plus.
	$meta = empty( $post_id ) ? false : get_post_meta( $post_id, '_fvp_video', true );
	// Only take over rendering for YouTube, and only if WP YouTube Lyte is active.
	if ( ! empty( $meta ) && 'youtube' === $meta['provider'] && function_exists( 'lyte_parse' ) ) {
		return lyte_parse( $meta['full'] );
	}
	return $in;
}

The checks on $post_id and $meta being non-empty keep this from acting up on posts without a featured video; anything fancier is something you should be able to add, no?

March 24, 2016

I recently ran into this problem on a PHP server and figured I'd share the solution.

The logs were being filled with this alert:

[24-Mar-2016 Europe/Brussels] PHP Warning:  Zend OPcache can't be temporary enabled (it may be only disabled till the end of request) in Unknown on line 0
[24-Mar-2016 Europe/Brussels] PHP Warning:  Zend OPcache can't be temporary enabled (it may be only disabled till the end of request) in Unknown on line 0
[24-Mar-2016 Europe/Brussels] PHP Warning:  Zend OPcache can't be temporary enabled (it may be only disabled till the end of request) in Unknown on line 0

The interesting bit here is that OPcache was already enabled by default. The default php.ini contained this value:

$ grep opcache /etc/php.ini
opcache.enable = 1

The warning started when I tried to enable the OPcache a second time, this time from within the PHP-FPM pool configuration.

$ grep opcache /etc/php-fpm.d/*
php_admin_value[opcache.enable] = on

It appears that PHP isn't amused when you try to enable the OPcache when it's already enabled. It does appear to be just a cosmetic issue, though: it doesn't actually impact the functionality of the PHP code itself.

Either way, in this case the fix was to just remove the OPcache setting from PHP-FPM and keep it enabled in the default php.ini file.
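
In practice that came down to commenting out (or deleting) that line in the pool configuration and reloading PHP-FPM. A minimal sketch, assuming the pool file is /etc/php-fpm.d/www.conf and PHP-FPM runs under systemd:

$ sed -i 's/^php_admin_value\[opcache.enable\]/;&/' /etc/php-fpm.d/www.conf
$ systemctl reload php-fpm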

The post PHP Warning: Zend OPcache can’t be temporary enabled appeared first on ma.ttias.be.

So you went and installed mtr via the Homebrew package manager, only to find it doesn't really work? Bummer. Good thing there's a fix.

$ mtr ma.ttias.be
host: ma.ttias.be
mtr: unable to get raw sockets.

But the sudo variant does work:

$ sudo mtr ma.ttias.be
  1.|-- 10.200.200.200             0.0%     1   14.4  14.4  14.4  14.4   0.0
  2.|-- 10.0.1.1                   0.0%     1   17.7  17.7  17.7  17.7   0.0
  3.|-- ge-0-0-24-605.rtr1.ant1.n  0.0%     1   14.7  14.7  14.7  14.7   0.0
  4.|-- ge-0-0-20-2.csw01.ocs.ant  0.0%     1   15.5  15.5  15.5  15.5   0.0
  5.|-- ma.ttias.be                0.0%     1   15.5  15.5  15.5  15.5   0.0

mtr needs raw sockets to send its probes, and only root is allowed to open those; that's why the plain invocation fails. Having to sudo every time is a bit of a pain, though, so we can use the magic setuid bit, which makes mtr run as its owner (root) no matter who starts it.

$ which mtr
/usr/local/sbin/mtr

$ sudo chown root /usr/local/sbin/mtr

$ sudo chmod 4755 /usr/local/sbin/mtr

Do the chown first: changing a file's owner clears any setuid bit that is already set, so setting the bit has to happen afterwards.

Now, next time you execute mtr, it'll just work. It's like magic.
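
If you want to double-check that the setuid bit stuck, look for the 's' in the owner permissions (the group and the elided fields below are illustrative; yours may differ):

$ ls -l /usr/local/sbin/mtr
-rwsr-xr-x  1 root  admin  ...  /usr/local/sbin/mtr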

The post Mac OSX: mtr: unable to get raw sockets appeared first on ma.ttias.be.

In May last year I saw/heard this Four Tet DJ set, during which a crazy jazz tune on white-label vinyl heated things up (with the needle jumping due to dancing people bumping into the decks). That crazy tune turns out to be by Moses Boyd, a young British drummer, and his band; it now seems to have finally been released and was just “premiered” by Gilles Peterson:

(embedded YouTube video)

Jazz, my friends, is very much alive!

2015 was an amazing year: I learned a lot and made some progress in my Open Source development and climbing activities.
Buildtime Trend keeps growing, with Angular and a project open-sourced by Facebook as new users. Along the way I improved my Python and JavaScript skills, setting up a CherryPy based service on Heroku, backed by a Celery/RabbitMQ task queue to make the service more responsive.

I have no real resolutions for 2016; I'll just spend my time on climbing and Open Source software development, learning new skills along the way and putting them into practice:

  • use Ansible (or another configuration management tool) to provision a Vagrant based development environment for Buildtime Trend
  • start using Django to add user management to Buildtime Trend as a Service
  • learn how to climb safely in less equipped areas using friends, nuts and other mobile protection
  • apply the lessons learned from "The Rock Warrior's Way" while climbing
  • visit a few conferences: FOSDEM, 33C3 and Newline, and maybe some more: LinuxTag, DebConf, FossAsia, KeenCon, ...
  • do some improvements to my house

Plenty to do in 2016! I wish everyone a joyful year full of insights and opportunities to learn and improve.
And remember, it's all about enjoying the journey!


Here are some highlights from 2015:
  • Celebrated New Year in Lisbon.
  • Reached a 365 day commit streak contributing to Open Source projects, with 2300+ commits over that period.
  • visited FOSDEM 2015, another great weekend of Open Source enthusiasts meeting and sharing knowledge in Brussels, with over 500 speakers and 5,000-10,000 visitors. Happy to meet Eduard and Ecaterina again, who came over from Romania, and many others.
  • Buildtime Trend was mentioned in the Black Duck newsletter 
  • Buildtime Trend made it to the top 3 of hot projects on OpenHub
  • Reached the most active developers top 10 on OpenHub
  • Released Buildtime Trend v0.2 and launched a Heroku app hosting the service.
  • Visited Cork and Dublin with Sofie, attending Jeroen's PhD graduation ceremony and meeting Rouslan and his friends.
  • Attended Newline 0x05 and did two talks: Buildtime Trend: Visualise what's trending in your build process and What I learned from 365 days of contributing to Open Source projects
  • Ended my commit streak after 452 days
  • Went on a climbing trip to Gorges du Tarn
  • Flashed my first 6a lead climbing on rock.
  • Traveled to the US, East coast this time, visiting Washington DC, meeting Lieven H and Wim, exploring New York City with Tine, Lukas and Marie-Hélène.
  • One-day climbing trip to Freyr with Peter.
  • And another climbing trip to Beez with Lieven V, Ben, Patrick and others.
  • 4th blood donation, convinced Tine to join me for her first donation! Well done!
  • Deployed a Celery/RabbitMQ based worker to Buildtime Trend as a Service on Heroku, taking some load off the webservice and improving the response times.
  • Climbing trip to La Bérarde, with Bleau, doing my first multipitches (Li Maye laya and Pin Thotal) with Mariska and lead climbing a few 6a+ routes. Weather was great, atmosphere was superb, climbing was fun!
  • Went to Fontainebleau for the birthday party of Andreas. Great fun, nice people, lots of routes. Finished my first red route in Fontainebleau.
  • Travelled to California and made a road trip from San Francisco to Yosemite, over Tioga Pass to the Mojave Desert, Red Rock Canyon, Las Vegas, Zion National Park, Bryce Canyon and the Grand Canyon, flying back from Phoenix to San Francisco. I did some climbing and hiking, and took a climbing course on using cams and nuts. On the way I met a lot of nice people, with whom I had interesting conversations.
  • Released Buildtime Trend v0.3
  • Finished online Stanford University course Algorithms: Design and Analysis, Part 1
  • Read The Rock Warrior's Way, a must read for any climber
  • Visited 32C3 in Hamburg, 4 days of lectures, writing software and talking to other developers. It was amazing, next year again!

I was excited to travel to India a few months ago for DrupalCon, an area where we have a really big opportunity for the growth of Drupal. In keeping with tradition, here are the slides and video from my keynote presentation. You can watch the recording of my keynote (starting at 20:15) or download a copy of my slides (PDF, 158 MB).

The main areas of focus for the talk included Drupal's rapid growth and progress in India, key technology trends driving the future of the web, and how Drupal is responding to these trends. As a call-to-action, I encouraged Drupalists in India to form grassroots communities locally, to become a part of the larger Drupal community conversation, and to port modules from Drupal 7 to Drupal 8 to accelerate its adoption.

Have a look and as always, feel free to leave your thoughts in the comments!

March 23, 2016

- A child's balloon:
you inflate it, play with it for a while, then it pops and you throw it away. You inflate another one, maybe even a different colour. The kid plays with it... till it breaks, then you throw it away...

- An inflatable castle:
you inflate it, play with it for a while, deflate it, move it around, inflate it again. If it gets a hole in it, you patch the hole.

- The tyres on your kid's bike:
you inflate them, the kid rides on them, and if they start losing air, you fix them asap so you can keep using them.

All three are really valuable use cases for rubber with air in it; in a way they are all equally valuable, but they serve different purposes, different use cases.

Now think about this the next time you spin up a container that runs a database and an application server, and that your users ssh into.
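
To make that concrete, here's a minimal Docker sketch (the image names, password and MySQL data path are illustrative assumptions, not a recommendation):

# the balloon: a throwaway container, gone the moment it exits
$ docker run --rm alpine echo pop

# the tyre: state lives in a volume outside the container, so the
# container itself can be replaced or "patched" while the data survives
$ docker volume create appdata
$ docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret -v appdata:/var/lib/mysql mysql:5.7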

It's not just another VirtualMachine

When people sign up for Configuration Management Camp, we ask them what community room they are mostly interested in.
We ask this question because we have rooms in different sizes and we don't want to put communities with 20 people showing interest in a 120 seat room and we don't want to put a community with 200 people in a 60 seat room.

But it also gives us the opportunity to build some very interesting graphs of the potential evolution of these communities.

So looking at the figures... the overall community is obviously growing: from 350, to 420, to just short of 600 people registered now.

The Puppet community is not the biggest anymore; that spot now belongs to the Ansible community room. And all but the CFEngine community are growing.

One more thing: the organisation team discussed several times whether we should rebrand the event. We opted not to. Infracoders.eu could have been an alternative name, but we decided to stick with the name that is already known. The content will evolve, but Config Management Camp will stay the place where people who care about Infrastructure as Code and infrastructure automation meet.