Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

December 10, 2016

This summer I had the opportunity to present my practical fault detection concepts and hands-on approach as conference presentations.

First at Velocity and then at SRECon16 Europe. The latter page also contains the recorded video.


If you're interested at all in tackling non-trivial timeseries alerting use cases (e.g. working with seasonal or trending data), this video should be useful to you.

It's basically me trying to convey in a concrete way why I think the big-data and math-centered algorithmic approaches come with a variety of problems that make them unrealistic and unfit, whereas the real breakthroughs happen when tools recognize the symbiotic relationship between operators and software, and focus on supporting a collaborative, iterative process for managing alerting over time. There should be a harmonious relationship between operator and monitoring tool, leveraging the strengths of both sides, with minimal factors harming the interaction. From what I can tell, bosun is pioneering this concept of a modern alerting IDE and is far ahead of other alerting tools in terms of providing high alignment between alerting configuration, the infrastructure being monitored, and individual team members, all of which are moving targets, often fast-moving ones. In my experience this results in high signal/noise alerts and a happy team. (According to Kyle, the bosun project leader, my take is a useful one.)

That said, figuring out the tool and using it properly has been, and remains, rather hard. I know many who would rather not fight the learning curve. Recently the bosun team has been making strides at making it easier for newcomers - e.g. reloadable configuration and Grafana integration - but there is lots more to do. Part of the reason is that some of the UI tabs aren't implemented for non-OpenTSDB databases, and integrating Graphite, for example, into the tag-focused system that is bosun is bound to be a bit weird. (That's on me.)

For an interesting juxtaposition, we released Grafana v4 with alerting functionality, which approaches the problem from the opposite end: simplicity and a unified dashboard/alerting workflow first, more advanced alerting methods later. I'm doing what I can to make the ideas of both projects converge, or at least make the projects take inspiration from each other and combine the good parts. (Just as I hope to bring the ideas behind graph-explorer into Grafana, eventually…)

Note: One thing that somebody correctly pointed out to me is that I've been inaccurate with my terminology. Basically, machine learning and anomaly detection can be as simple or complex as you want to make them. In particular, what we're doing with our alerting software (e.g. bosun) can rightfully also be considered machine learning, since we construct models that learn from data and make predictions. It may not be what we think of at first, and indeed, even a simple linear regression is a machine learning model. So most of my critique was more about the big data approach to machine learning, rather than machine learning itself. As it turns out, the key to applying machine learning successfully is tooling that assists the human operator in every possible way, which is what IDEs like bosun do and how I should have phrased it.

December 09, 2016

But judges must apply the law, not try to make laws themselves. The separation of powers applies in all directions.

Imperator Bart has a point.

Among the most important initiatives for Drupal 8's future is the "API-first initiative", whose goal is to improve Drupal's web services capabilities for building decoupled applications. In my previous blog posts, I evaluated the current state of, outlined a vision for, and mapped a path forward for Drupal's web services solutions.

In the five months since my last update, web services in Drupal have moved forward substantially in functionality and ease of use. This blog post discusses the biggest improvements.

An overview of the key building blocks to create web services in Drupal. Out of the box, Drupal core can expose raw JSON structures reflecting its internal storage, or it can expose them in HAL. Note: Waterwheel has an optional dependency on JSON API if JSON API methods are invoked through Waterwheel.js.

Core REST improvements

With the release of Drupal 8.2, the core REST API now supports important requirements for decoupled applications. First, you can now retrieve configuration entities (such as blocks and menus) as REST resources. You can also conduct user registration through the REST API. The developer experience of core REST has improved as well, with simplified REST configuration and better response messages and status codes for requests causing errors.

Moreover, you can now access Drupal content from decoupled applications and sites hosted on other domains thanks to opt-in cross-origin resource sharing (CORS) support. This is particularly useful if you have a JavaScript application hosted directly on AWS, for instance, while your Drupal repository is hosted on a separate platform. Thanks to all this progress, you can now build more feature-rich decoupled applications with Drupal.

All of these improvements are thanks to the hard work of Wim Leers (Acquia), Ted Bowman (Acquia), Daniel Wehner (Chapter Three), Clemens Tolboom, José Manuel Rodríguez Vélez, Klaus Purer, James Gilliland (APQC), and Gabe Sullice (Aten Design Group).

JSON API as an emerging standard

JSON API is a specification for REST APIs in JSON which offers several compelling advantages over the current core REST API. First, JSON API provides a standard way to query not only single entities but also all relationships tied to that entity and to perform query operations through query string parameters (for example, listing all tags associated with an article). Second, JSON API allows you to fetch lists of content entities (including filtering, sorting and paging), whereas the only options for this currently available in core are to issue multiple requests, which is undesirable for performance, or to query Drupal views, which require additional work to create.
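As a rough sketch of what this looks like in practice (the /jsonapi/node/article path and the field names below are hypothetical and depend on how the module exposes your content model), a single request can include related entities, sort and page through a collection using nothing but query string parameters:

$url = 'https://example.com/jsonapi/node/article'
	. '?include=field_tags'   // pull the related tag entities into the same response
	. '&sort=-created'        // newest articles first
	. '&page[limit]=10';      // simple paging
$articles = json_decode(file_get_contents($url), true);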

Moreover, JSON API is increasingly common in the JavaScript community due to its adoption by developers creating REST APIs in JSON and by members of both the Ruby on Rails and Ember communities. In short, the momentum outside of Drupal currently favors JSON API over HAL. It's my belief that JSON API should become an experimental core module, and it may one day potentially even supersede HAL, Views-based REST endpoints, and more. Though Views REST exports will always be valuable for special needs, JSON API is suitable for more common tasks due to query operations needing no additional configuration. All this being said, we should discuss and evaluate the implications of prioritizing JSON API.

Thanks to the efforts of Mateu Aguiló Bosch (Lullabot) and Gabe Sullice (Aten Design Group), the JSON API module in Drupal 8 is quickly approaching a level of stability that could put it under consideration as a core experimental module.

OAuth2 bearer token authentication

One of the issues facing many decoupled applications today is the lack of a robust authentication mechanism when working with Drupal's REST API. Currently, core REST only has two options available out of the box, namely cookie-based authentication, which is unhelpful for decoupled applications on domains other than the Drupal site, and basic authentication, which is less secure than other mechanisms.

Due to its wide acceptance, OAuth2 seems a logical next step for Drupal sites that need to authenticate requests. Because it is more secure than what is available in core REST's basic authentication, OAuth2 would help developers build more secure decoupled Drupal architectures and allow us to deprecate the other less secure approaches.

It would make sense to see an OAuth2 solution such as Simple OAuth, the work of Mateu Aguiló Bosch (Lullabot), as an experimental core module, so that we can make REST APIs in Drupal more secure.
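To give an idea of what this buys decoupled clients, a request would then simply carry the access token in a standard Authorization header (a minimal sketch; the route and token are placeholders):

$accessToken = 'TOKEN-OBTAINED-FROM-THE-OAUTH2-SERVER';
$context = stream_context_create(['http' => [
	'method' => 'GET',
	'header' => "Authorization: Bearer $accessToken\r\n" . // OAuth2 bearer token
	            "Accept: application/json\r\n",
]]);
$node = json_decode(file_get_contents('https://example.com/node/1?_format=json', false, $context), true);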

Waterwheel, Drupal's SDK ecosystem

The API-first initiative's work goes beyond making improvements on the Drupal back end. Acquia released Waterwheel.js 1.0 as open-source, the first iteration of a helper SDK for JavaScript developers. The Waterwheel.js SDK helps JavaScript developers perform requests against Drupal without requiring extensive knowledge of how Drupal's REST API functions. For example, the Waterwheel module allows you to benefit from resource discovery, which makes your JavaScript application aware of Drupal's data model. Waterwheel.js also integrates easily with the JSON API contributed module.

Thanks to Kyle Browning (Acquia), the Waterwheel.swift SDK allows developers of applications for Apple devices such as iPads, iPhones, Apple Watches, and Apple TVs to more quickly build their Drupal-powered applications. To learn more about the Waterwheel ecosystem, check the blog posts by Preston So (Acquia).

Conclusion

Thanks to the hard work of the many contributors involved, the API-first initiative is progressing with great momentum. In this post, we arrived at the following conclusions:

  • The JSON API and Simple OAuth modules could be candidates for inclusion in Drupal 8 core — this is something we should discuss and evaluate.
  • We should discuss where support for HAL and JSON API stops and starts, because it helps us focus our time and effort.

If you are building decoupled applications with Drupal 8, we would benefit significantly from your impressions of the core REST API and JSON API implementations now available. What features are missing? What queries are difficult or need work? How can we make the APIs more suited to your requirements? Use the comments section on this blog post to tell us more about the work you are doing so we can learn from your experiences.

Of course, if you can help accelerate the work of the API-first initiative by contributing code, reviewing and testing patches, or merely by participating in the discussions within the issues, you will not only be helping improve Drupal itself; you'll be helping improve the experience of all developers who wish to use Drupal as their ideal API-first back end.

Special thanks to Preston So for contributions to this blog post and to Wim Leers, Angie Byron, Ted Bowman and Chris Hamper for their feedback during the writing process.

Among cyclists, there is a saying that the ideal number of bikes to have in your garage is N+1, where N is the number of bikes you currently own.

Yes, there is always a good reason to buy a new bike: to have a road bike, a mountain bike, a gravel bike. But if N+1 is the ideal number, the minimum number is 2. Especially if you use your bike daily.

Because there is only one thing worse for a cyclist than having to pedal at night in the cold and the rain: being forced to use a car!

I personally discovered this problem when, after a broken derailleur, I learned that I would go ten days without pedalling because the part was not in stock.

The next day, I had found a second-hand bike.

A bit dated but visibly barely used, this second mount plays its role of reservist admirably. I hop on it when my proud steed is in for maintenance, when it reveals a slow puncture in the morning and I don't have the courage to change the inner tube before going to work, or when I simply feel like a change of scenery.

After two years of this rhythm, I had a strange intuition.

In two years, my main bike, which I maintain and pamper, had gone through several overhauls. It had seen multiple punctures and several sets of tyres. Chains came and went, bearings had to be replaced and cables kept seizing up.

None of this seemed to affect my reservist. I had changed an inner tube once, but the chain and the tyres were still the original ones despite much more cursory, even non-existent, maintenance.

This observation led me to a merciless explanation: although older, my reservist was of distinctly better quality. New bikes with new tyres are fragile and made to encourage consumption.

Intuitive, isn't it?

And yet completely wrong. Because the use of a bike is not measured in time spent in the garage but in kilometres travelled.

As I have told you before, I love Strava. Each of my rides is recorded in the app along with the bike used for the occasion. I even note when I replace a part so that I know the exact mileage of each component.

I know, for example, that on my main bike a good chain lasts 2,500 km while a bad one wears out after 1,500 km. The rear tyres last between 800 and 1,600 km; easily three times that at the front. On top of that, my use of the bike requires a trip to the workshop every 2,000 km on average. And the bike is approaching 8,000 km.

My spare bike, on the other hand, has only just passed 500 km!

So it is perfectly normal that I have had no trouble, no maintenance, no work on this bike: it simply has not yet covered a third of the distance at which the wear of certain components starts to show.

My intuition was completely misleading.

This anecdote lends itself to a general moral: we often let ourselves get carried away by intuitions, by similarities. Since the two bikes sit side by side in the garage all the time, it is natural to compare them. Taking the spare bike out from time to time gave me the impression that the two bikes lived almost similar lives. Admittedly, I thought I used it three or four times less. But the numbers reveal that it is fifteen times less!

When you have convictions and intuitions, confront them with the facts, with the real numbers.

Many of our problems come from the fact that we blindly follow our intuitions, even when the numbers demonstrate the absurdity of our beliefs. Even today, politicians advocate austerity to relaunch the economy, or advocate expelling foreigners to free up jobs, where every known example has demonstrated the absurd pointlessness and counter-productivity of such measures.

Worse: today, there are still people who vote for those politicians.

To those people, go ahead and tell the story of my second bike.

 

Photo by AdamNCSU.

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE licence.

Two of the lights in our living room are Philips Hue lights.
I especially like them for their ability to reach 2000K color temperature (very warm white light), which gives a nice mellow yellowish white light.

We control the lights via normal light switches contained in the light fixture.
Due to the way Philips Hue lights work, this implies that whenever we turn the lights on again they go back to the default setting of around 2700K (normal warm white).

In order to fix this once and for all I wrote a little program in rust that does this:

  • poll the state of the lights every 10 seconds (there is no subscribe/notify :( )
  • keep the light state in memory
  • if a light changes from unreachable (usually due to being turned off by the switch) to reachable (turned on by the switch), restore the hue to the last state before it became unreachable
Code is in rust (of course) and can be found at https://github.com/andete/hue_persistence.
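If you just want the gist without reading Rust, the logic boils down to something like the following sketch (written here in PHP purely for illustration; the bridge address and API username are placeholders, and the real implementation is the Rust program linked above):

$bridge = '192.168.0.10';
$user   = 'your-api-username';
$last   = [];  // last known state per light id

while (true) {
	$lights = json_decode(file_get_contents("http://$bridge/api/$user/lights"), true);
	foreach ($lights as $id => $light) {
		$state = $light['state'];
		if (!$state['reachable']) {
			// Switched off at the wall: remember that, but keep the last good settings.
			if (isset($last[$id])) {
				$last[$id]['reachable'] = false;
			}
			continue;
		}
		if (isset($last[$id]) && !$last[$id]['reachable']) {
			// The light just became reachable again: restore brightness and colour temperature.
			$restore = ['on' => true];
			if (isset($last[$id]['bri'])) { $restore['bri'] = $last[$id]['bri']; }
			if (isset($last[$id]['ct']))  { $restore['ct']  = $last[$id]['ct']; }
			$ctx = stream_context_create(['http' => [
				'method'  => 'PUT',
				'header'  => "Content-Type: application/json\r\n",
				'content' => json_encode($restore),
			]]);
			file_get_contents("http://$bridge/api/$user/lights/$id/state", false, $ctx);
		}
		$last[$id] = $state;  // remember the current (reachable) state
	}
	sleep(10);  // the bridge offers no subscribe/notify, so poll
}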

Just run it on a local server (I use my shuttle xs36v) and it automatically starts working.
Keep in mind that it can take up to a minute for the Hue system to detect that a light is reachable again.

A short video demoing the system in action:


December 07, 2016

As a freelancer I have seen many companies and many situations, and I have worked with many Project Managers, Product Owners, Scrum Masters and god knows what other names the HR department came up with.

What is most important, in my experience, is that the people leading the team try to focus the group on as few common goals as possible during one sprint. The more stories or goals the team has to finish, the more individualism creeps in and the fewer things get done (with done being defined by your definition of done).

Put differently, you should try to make your team work the way a soccer team plays: you try to score three, four or five goals per sprint, but you do this as a team.

Example: when a story isn't finished at the end of the sprint, it's the team's fault, not the fault of the one guy who worked on it. The team has to find a solution. If it's always the same guy being lazy, that's not about the project's or the team's results; you or HR deal with that guy later. Doing so is outside of Scrum's scope and it's not for your retrospective discussion. But do sanction the team for not delivering.

Another example is not to put too many stories on the task board, and yet to keep stories as small and/or well defined as possible. Too many or too large stories will result in every individual picking a different story (or a subtask of the larger story) to be responsible for. At the end of the sprint none of these will be really finished. And the developers in the team won't care about the other features being developed during the sprint. So they will dislike having to review other people's features. They'll have difficulty finding a reviewer. They won't communicate with each other. They'll become careless and dispassionate.

Your planning caused that. That makes you a bad team player. The coach is part of the team.

If you encountered this bug but are not using Autoptimize, leave a comment below or contact me here. Your info can help us understand whether the regression has an impact outside of Autoptimize as well!


Gotta love Sarah, but there's a small bug in Vaughan (WordPress 4.7) that breaks (part of) the CSS when Autoptimized. If you have a theme that supports custom backgrounds (e.g. TwentySixteen) and you set that custom background in the Customizer, the URL ends up escaped (well, they wp_json_encode() it actually) like this;

body{background-image: url("http:\/\/localhost\/wordpress\/wp-content\/uploads\/layerslider\/Full-width-demo-slider\/left.png");}

Which results in the Autoptimized CSS for the background-image being broken because the URL is not recognized as such. The bug has been confirmed already and the fix should land in WordPress 4.7.1.

If you’re impacted and can’t wait for 4.7.1 to be released, there are 2 workarounds:
1. simple: disable the “Aggregate inline CSS”-option
2. geeky: use this code snippet to fix the URL before AO has a chance to misinterpret it;

add_filter('autoptimize_filter_html_before_minify', 'fix_encoded_urls');
function fix_encoded_urls($htmlIn) {
	// Only act when the Customizer's custom-background CSS is present.
	if ( strpos($htmlIn, "body.custom-background") !== false ) {
		// Find background-image URLs whose slashes were escaped by wp_json_encode().
		preg_match_all("#background-image:\s?url\(?[\"|']((http(s)?)?:\\\/\\\/.*)[\"|']\)?#", $htmlIn, $badUrls);
		if ($badUrls) {
			foreach ($badUrls[1] as $badUrl) {
				// Swap the escaped URL for its unescaped version so AO recognizes it.
				$htmlIn = str_replace($badUrl, stripslashes($badUrl), $htmlIn);
			}
		}
	}
	return $htmlIn;
}

December 06, 2016

This is post 42 of 42 in the Printeurs series

In parallel with the adventures of Nellio, Eva and the others, a strange character carries on with his life: ex-worker 689.

I can't get over it. I can't believe my eyes. I was sure I had killed him. Yet today the younger one came back.

For sleep cycle after sleep cycle I have been cloistered in a comfortable apartment. The older one explained to me that it was to protect me. As a worker, I am a key witness in the trial he wants to bring. Against whom? I am not sure I understand… The system, the rich, the powerful, the foremen. No matter: my power works all the better when it is confronted with naive, blissful idealism.

From time to time, the older one comes to visit me to question me, to paint a picture of life in the factory. Given his reactions to my initial revelations, I preferred not to tell him everything. He might not believe me.

But when the door opened today, I nearly fainted from surprise. The younger one was there, smiling, completely immune to my power. He said he was amnesiac, and I managed to hide my confusion, to pretend I did not know him.

Does this world obey other laws? Are death and pain not the ultimate weapons I thought I had mastered? For the first time, doubt gains on me, overwhelms me. My power is withering. I am trembling!

Yet everything had been running like clockwork. The older one was well and truly convinced that the younger one had fallen from the balloon by accident. Once the younger one was gone, the older one proved much more vulnerable to my power. Or so I believed…

I miss the foremen, who were so easy to manipulate, I…

612 stands in front of me. He smiles blissfully and wraps me in his soothing gaze. Lowering my eyes, I see that I am a child. Mechanically, my thumb has slipped into my mouth.

— Life is full of mystery. It does not stop at the workshop and the foremen. One day, one of you will discover it. One day, he will unravel the mysteries of life and set us free…

I scream, I hurl myself at 612, shouting. With a dull crack, my adult body smashes against the walls of the apartment. The pain wakes me, reassures me. O you, my old friend, my faithful companion, the one who will accompany me to death and beyond, the one who will make me fight, who will wake me, O you, pain…

612's skull bursts all around me. Blood streams down the walls, sticky shreds of brain drip from the ceiling and, everywhere, 612's face floats, murmuring:
— You are noble!

At my feet, the floor is strewn with the corpses of the workers I knew. I recognize every face, every number. The bodies decompose, the smell grabs me by the throat and, suddenly, each dead body gives birth, in an explosion of pus and putrid flesh, to a bleeding, screaming baby. Turning their heads towards me, the babies begin to crawl. In their little hands they hold the toys, the electronic devices, the tools that I made. They devour them before crawling on and touching, one by one, every object in the apartment.

One of the babies, half man, half foetus, strokes the curtain, which at once becomes gorged with blood. Another seizes the entertainment tablet, which decomposes into putrefied flesh. The most frightening one suddenly floats up to the ceiling before swallowing the smart light bulb, which turns into millions of buzzing flies.

The clothes I am wearing begin to scream, to flash with long bolts of pain.

Doubled over, I start to vomit under the sordid sniggering of the babies, whose faces become covered in wrinkles and white beards.

How long did I remain in a coma, stretched out in the middle of the room? Hours? Days?

When I woke up, everything seemed dreadfully normal. But in the pit of my stomach I felt a new, distressing emotion. A non-physical fear. My life is not in danger, I have no need to defend myself and yet I am afraid, I am trembling. I want to forget! What if the older one decided to send me back to the factory? I have failed in my mission! I will probably be demoted to the bottom of the ladder; I will become once more the worker I have always been.

Knees trembling, I try to straighten up and pull myself together. I have no choice, I must go on, I must climb every rung. To stop is to fall. To climb, again and again: that is my destiny.

But to go where? Towards what summits? Perhaps that is the question that must not be asked, for only ignorance will allow me to continue.

By what miracle is the younger one still alive? I have no idea and I do not need to know. I just have to grab the next rung and climb, on and on. I must crush the older one, I must use him and throw him away. Only at that price will I not fall.

Taking a deep breath, I regain my calm. My power is back, I can feel it! It never left me. The younger one is still alive? No matter, I will kill him a second time. Or ten times, a hundred times, a thousand times if I must! For my power is back, and nothing and no one will be able to stop my dazzling ascent.

Nothing! Not even those old-man-faced babies that now crawl everywhere I look, silently belching out terrified pouts, touching every object, every piece of furniture, every tool with their sticky hands.

But I am keeping an eye on them. Because from now on they too will feel the full extent of my power.

 

Photo by Antoine Skipper.

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE licence.

December 03, 2016

Just picked up on Worldwide FM: James Brandon Lewis Trio featuring Anthony Pirog and Nicholas Ryan Gant (aka @ghetto_falsetto). Kind of mellow-y, which I'm not always into, but the duet of Lewis' tenor sax and Gant's falsetto scatting makes for a great mix, keeping the feeling fresh and real.

(Embedded YouTube video.)

And if you want something more in-your-face, have a go at "No Filter" by the same trio. Just tenor sax, bass & drums, but you'll be headbanging as if you were on the front row of a metal festival.

December 02, 2016

It's over! The 4th edition of Botconf has just finished and I'm in the train back to Belgium writing the daily wrap-up. Yesterday, the reception was organized in a very nice place (the "Chapelle de la Trinité"). Awesome place, awesome food, interesting chats as usual. To allow people to recover smoothly, the day started a little bit later, with some doses of caffeine to help.

The day was kicked off by Alberto Ortega with a presentation called "Nymaim Origins, Revival and Reversing Tales". Nymaim is a malware family discovered in 2013, mainly used to lock computers and drop ransomware. It was known to be highly obfuscated and to use anti-analysis techniques (anti-VM, string decryption on demand, anti-dumping, DGA, campaign timer). During the analysis of the code, a nice list of artefacts was found to detect virtualized environments (ex: "#*INTEL – 6040000#*" to detect the CPU used by VMware guests). Alberto reviewed the different obfuscation techniques. In the code obfuscation, they found a "craft_call" function to dynamically calculate a return address based on an operation with two hard-coded parameters. The campaign timer is also classic for Nymaim: there is a date in the code (20/11/2016) which prevents the malware from executing after this date (the system date must be changed to allow the malware to run, which is not easy in a sandbox). Network traffic is encrypted with different layers. The first one is encrypted with RC4, with a static key and a variable salt for each request/response. The DNS resolution is also obfuscated: domains are resolved using a homemade algorithm, and the "A" records returned are NOT the actual IP addresses (Google DNS is used). To know the real IP addresses, the algorithm must be applied. A DGA is of course used as well; it relies on a PRNG based on the Xorshift algorithm, seeded with the current system time and a fixed seed. Nymaim is used to perform banking fraud. The way this feature is configured is similar to Gozi/ISFB (which will be covered later today). They use redirects to the injects panel. A nice review of the malware, with an amazing reversing job to understand all the features.

Then Jose Miguel Esparza and Frank Ruiz presented "Rough Diamonds in Banking Botnets". Criminals keep improving their backends: C&Cs and control panels. They made a nice review of the current landscape, but this talk was flagged as TLP:RED so no more information.

After a coffee break, we restarted with Maciej Kotowicz who presented "ISFB, Still Live and Kicking". Also named Gozi2/Ursnif, ISFB is a malware family which appeared in 2014 and is, still today, one of the most popular bankers on the market. Why this name? The "ISFB" string was found in debugging instructions in the code. Targets are numerous (all over the world). The dropper handles persistence, injects the worker, sets up IPC and (new!) downloads the second stage. It also has anti-VM tricks: it uses GetCursorInfo() to get the movement of the mouse. No movement, no execution! It also enumerates devices. Maciej then reviewed more deeply how the malware works, its configuration and registry keys. Communications to phone home are based on the Tor network, a P2P network and, of course, a DGA.

The next slot was assigned to Margarita Louca with a non-technical talk: "Challenges for a cross-jurisdictional botnet takedown". Margarita being from Europol, she also asked for her presentation to be treated as TLP:RED.

The afternoon started with the two last talks. Kurtis Armour presented "Preventing File-Based Botnet Persistence and Growth". The goal of this talk was education about the threat landscape and the layers of protection. Most of his talk focused on post-exploitation (only on classic computers, no IoT). Botnet delivery mechanisms are based on social engineering (tricking users into visiting unsafe sources so the attackers can monetise them) or on browsers and third-party apps (exploit kits), which was already covered. Dropped code can be file based or memory based. Starting from the fact that code is made to be executed by the computer, Kurtis reviewed how such code can be executed on a Windows computer. The code can be a binary, shellcode or a script, and it must be dropped on the victim by a dropper; there are many ways to achieve that: via HTML, JavaScript, ZIP, EXE, DLL, macros, PowerShell, VBA, VBS, PDF, etc. The talk was mainly defensive and Kurtis explained with several examples how to prevent (or at least reduce the chances of) code execution. First golden rule: "No admin rights!". But we can also use pieces of software to reduce the attack surface; a good example is LAPS from Microsoft. Windows Script Files are a pain: don't allow their execution. This can be achieved by replacing the default association (wscript.exe -> notepad.exe). Microsoft Office files are well known to be (ab)used to distribute macros. Starting with Office 2016, more granularity has been introduced via GPOs. An interesting document about the security of macros has been published (source):

Office Macros Security

PowerShell is a nice tool for system administrators but it is also very dangerous: it runs from memory and can download & execute code from remote systems. PowerShell is also harder to block. Execution policies can be implemented but are easy to bypass. PowerShell v5 to the rescue? It has improved logging and security features (but don't forget to uninstall the previous versions). Application whitelisting to the rescue? (The talk focused on AppLocker because it is available by default and can be managed via GPOs.) It can be used to perform an inventory of the applications, to protect against unwanted applications and to increase software standardisation. Another technique is to restrict access to writable directories like %APPDATA%, but there are many others. .hta files are nasty and are executed via a specific interpreter (mshta.exe). There are many controls that could be implemented, but they are really a pain to deploy. There was an interesting question from somebody in the audience: who is using such controls (or at least a few of them)? Only five people raised their hand!

And finally, Magal Baz & Gal Meiri closed the event with another talk about Dridex: "Dridex Gone Phishing". After a short introduction about Dridex (or a recap: who doesn't know this malware?), they explained in detail how the banking fraud works, from the infection to the fraudulent transaction. You have a token from your bank? 2FA? Good, but it's not a problem for Dridex. It infects the browser and places hooks in the output and input functions of the browser, and all data is duplicated (sent to the bank and to the attacker). This is fully transparent to the user. A good way to protect yourself is to rename your browser executable (ex: "firefox_clean.exe" instead of "firefox.exe"). Why? Dridex has a list of common browser process names (hashed) and compromises the browser based on this list.

As usual, there was a small closing session with some announcements. A few numbers about the 2016 edition? 325 attendees, 4 workshops, 25 talks (out of 48 proposals). And what about the 2017 edition? It should be organized in Montpellier, another nice French city, between the 5th and 8th of December.

 

[The post Botconf 2016 Wrap-Up Day #3 has been first published on /dev/random]

This Thursday, December 15, 2016 at 7 PM, the 54th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: Hardware design using CLaSH

Exceptionally, the talk will be given in English.

Theme: Programming|Development

Audience: developers|companies|students

Speaker: Jan Kuper (University of Twente & QBayLogic)

Venue for this session: HEPH Condorcet, Chemin du Champ de Mars, 15 – 7000 Mons – Auditorium 2 (G01) on the ground floor (see this map on Openstreetmap; NOTE: the entrance is barely visible from the main road, it is located in the corner formed by a very large parking lot).

Participation is free and only requires registration by name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also benefit from the support of our partners: CETIC, Normation, MeaWeb and Phonoid.

If you are interested in this monthly cycle, feel free to consult the agenda and to subscribe to the mailing list so as to systematically receive the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized in premises of, and in collaboration with, the Mons universities and colleges involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: Mainstream hardware design languages such as VHDL and Verilog have poor abstraction mechanisms, and many attempts have been made to design so-called High Level Synthesis languages. Most of these languages take an imperative perspective whereas CλaSH, on the other hand, starts from a functional perspective and is based on the functional language Haskell. That starting point offers high-level abstraction mechanisms such as polymorphism, type derivation and higher-order functions. Besides, it offers a direct simulation environment: every CλaSH specification is an executable program. During the presentation we will illustrate this with several examples, such as elementary computational architectures, filters for signal processors and a simple processor.

Keywords: Functional HDL|FPGA design

Target audience: people who are familiar with programming and interested in using platforms other than just a traditional processor. Some knowledge of hardware design is handy but not necessary.

Short bio: Jan Kuper studied Logic and Mathematics and did his PhD under Henk Barendregt on the foundations of mathematics and computer science (1994). He worked in the areas of logic of language, theoretical computer science, and mathematical methods for architecture design. Currently he works at the Embedded Systems group of the University of Twente, where he initiated the development of CλaSH. His lecturing experience comprises philosophical and mathematical logic, imperative and functional programming languages, and the design of digital architectures. Together with Christiaan Baaij, he recently started the company QBayLogic to apply formal design methodologies to FPGA design.

I recently re-installed this system to use it as a local server.

Installation of Debian 8 went smoothly. I chose not to install a graphical environment as I only want to attach a screen in emergency situations.

The system boots fine but then the screen goes into power-save mode. The system is still reachable remotely.

After some searching on the internet [1] the solution is to blacklist the graphics driver of the system.

Create a file called /etc/modprobe.d/blacklist.conf and add the following to it:

blacklist gma500_gfx

This tells the system not to load the driver. Then run:

sudo update-initramfs -u

This updates the boot RAM disk to reflect the change.

Reboot the system and enjoy a working console.

Picture or it didn't happen:

PHP 7.1 has been released, bringing some features I was eagerly anticipating and some surprises that had gone under my radar.

New iterable pseudo-type

This is the feature I'm most excited about, perhaps because I had no clue it was in the works. In short, iterable allows for type hinting in functions that just loop through their parameter values, without restricting the type to either array or Traversable, or not having any type hint at all. This partially solves one of the points I raised in my Missing in PHP 7 series post Collections.
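A quick sketch of what this enables (the function name is just for illustration):

function sumOfValues(iterable $values): int {
	$sum = 0;
	foreach ($values as $value) {
		$sum += $value;
	}
	return $sum;
}

sumOfValues([1, 2, 3]);                    // accepts a plain array
sumOfValues(new ArrayIterator([1, 2, 3])); // as well as any Traversable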

Nullable types

I also already addressed this feature in the Missing in PHP 7 post Nullable return types. What somehow escaped my attention is that PHP 7.1 comes not just with nullable return types, but also with new syntax for nullable parameters.
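In other words, both parameters and return values can now be declared nullable with a leading question mark (a small illustrative sketch; the User class is made up):

class User {}

function findUser(?int $id): ?User {
	if ($id === null) {
		return null; // null is accepted as an argument and as a return value
	}
	// Look the user up; return null when nothing matches.
	return new User();
}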

Intent revealing

Other new features that I'm excited about are the Void Return Type and Class Constant Visibility Modifiers. Both of these help with revealing the author's intent, reduce the need for comments and make it easier to catch bugs.
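A small illustration of both (the class is made up):

class Counter {
	// Class constants can now carry visibility modifiers.
	private const STEP = 1;
	public const NAME = 'counter';

	private $value = 0;

	// void states explicitly that nothing is returned.
	public function increment(): void {
		$this->value += self::STEP;
	}
}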

A big thank you to the PHP contributors that made these things possible and keep pushing the language forwards.

For a full list of new features, see the PHP 7.1 release announcement.

December 01, 2016

The second day is over, so here is my daily wrap-up! After some welcome cups of coffee, it started sharp at 9AM with Christiaan Beek who spoke about ransomware: "Ransomware & Beyond". When I read the title, my first reaction was "What can be said in a conference like Botconf about ransomware?". I was wrong! After a short review of the ransomware landscape, Christiaan went deeper of course. It's a fact: the amount of ransomware really exploded in 2016, with new versions and new techniques to attack and/or evade detection mechanisms. Good examples are Petya, which encrypts the MBR, or Mamba, which performs FDE ("Full Disk Encryption"). We also see today "RaaS" or "Ransomware as a Service". Not only residential customers are targeted but also business users (do you remember the nice story of the hospital in the US which was infected?). Why does ransomware remain so successful?

  • It’s an organised crime with affiliate program
  • Open source code available
  • Buying is easy (aaS)
  • Customer satisfaction

What does Christiaan mean by "customer satisfaction"? On many forums, we can find thankful messages from victims towards the attackers for providing the decryption key once they paid the ransom. This could be compared to the Stockholm syndrome. We also see services like "Ransomware for dummies", where people are able to customise the warning messages and thus their campaign. An important remark was that, sometimes, ransomware is used, like DDoS, as a lure to keep the security team's attention while another attack is going on below the radar.

Then, Christiaan explained how he tried to implement machine learning to help in the detection/categorisation of samples, but it stopped working when attackers started to use PowerShell. The memory analysis approach remains interesting but consumes a lot of resources. There are interesting initiatives to improve our daily fight against ransomware: McAfee released a tool called "Ransomware Interceptor" that gives good results. A nice initiative is the website nomoreransom.org which compiles a lot of resources (how to report an incident, how to decrypt some ransomware: tools are available). What about the future? According to Christiaan, it's scary! Ransomware will target new devices like home routers or… your cars! Cars look like a nice target when their CAN bus is directly available from the entertainment system! Be prepared! A very nice talk to start the day!

The second talk was about Moose! "Attacking Linux/Moose 2.0 Unraveled an EGO MARKET" by Olivier Bilodeau and Masarah Paquet-Clouston. Yes, Moose is back! It was already covered during Botconf 2015. So, it started with a quick recap: Moose infects routers and IoT devices running an embedded Linux with a BusyBox userland. Once infected, the worm tries to spread by brute forcing the credentials of new potential victims. The installed payload is a proxy service used to reach social media websites. So, the question was: "What are the attackers doing with this botnet? How do they monetize it?". The next step was to attack the botnet to better understand the business behind it. To achieve this, a specific honeypot environment was deployed: a real machine (based on Linux ARM) together with other components like Cowrie. By default Cowrie has no telnet support, so they contributed telnet support to the project. The next step was to break communications with the C&C. This was also explained; it was based on mitmproxy. What about their findings?

  • The botnet remains stealthy (no x86 version), no ad fraud, spam or DDoS, no persistence mechanism
  • It is constantly adapting
  • It uses obfuscated C&C IP addresses (XORd with a static value in the binary)
  • Updated bot enrolment process (with more correlation checks)
  • Protocols updated (real HTTP and less easy to spot)

As the botnet has no direct victims, how is it used? Social media fraud is the key! This kind of fraud lets you buy "clicks" or "followers" for a few bucks. Instagram is the main targeted social network, with 86% of the bot traffic. Masarah explained that this business is in fact very lucrative. But who are the buyers of those services?

Buy Instagram Followers

Based on the people followed by Moose fake accounts, we may find:

  • Business related accounts (like e-cigarette vendors)
  • Business-related accounts represented by one individual (ex: people looking for business)
  • Celebrities expecting more visibility.

So, indeed, no direct victims, but some get fooled by the "false popularity". What about the prices? They were analysed and there are differences: Instagram is cheaper than LinkedIn. Who is behind Moose? They identified 7 IP addresses, 3 languages used (Dutch, English and Spanish) and, at least, 7 web interfaces. It was a nice talk with a good balance between technical and social stuff. I liked it. If you're interested, two interesting links about Moose:

  • IOC’s are available here
  • The paper is available here

Then, John Bambenek came on stage to present "Tracking Exploit Kits". The first question addressed by John was "Why track exploit kits?". Indeed, when one is brought down, another comes alive, and law enforcement services are overloaded. They are one of the major ways to infect computers (the other one being spam). An EK can be seen as an ecosystem; many people work around it: operators, exploit writers, traffic generators, sellers, mules, carders, … There is only one way to protect yourself against EKs: patch, patch and patch again. 0-day vulnerabilities are rarely used by EKs, so you don't have a good excuse not to install patches. Also, the analysis of an EK may reveal very interesting stuff to better understand how they are operated. The second part of the talk covered techniques (and tools) to track EKs. Geo-blacklisting is a common feature used by most EKs (not only countries but also sensitive organisations are blacklisted, like AV vendors or companies doing research), but VPNs are not always properly detected, so use a VPN! As each EK has its own set of exploits to test, you must tune your sandbox to be vulnerable to the EK you are analysing. Note that many EK pages can be identified via patterns (regexes are very useful in this case). Then, John covered the landing pages and how to decode them. He also mentioned the tool ekdeco, which can be very helpful during this task. If you're interested in IOCs, the following URL has many related to exploit kits: https://github.com/John-Lin/docker-snort/blob/master/snortrules-snapshot-2972/rules/exploit-kit.rules. Again, a very nice talk for the first part of the day.

The next topic covered DDoS botnets and how to track them. Ya Liu's presentation was called "Improve DDoS Botnet Tracking With Honeypots". In fact, Ya's research was based on PGA or "Packet Generation Algorithm". During his research, he found that a botnet family can be identified by inspecting how packets are generated. It was a nice research effort with a huge data set: 30+ botnet families, 6000+ tracked botnets, 250K+ targets (it started in 2014). Ya's goal is to collect packets during attacks and, based on the analysis, to map them to the right family. He explained how packets are generated and how identification is possible thanks to bad programming. Then the algorithm to analyse the data and extract useful information was reviewed. An interesting approach, but the main issue is to find enough good packets.

The first half-day ended with Angel Villegas, who presented "Function Identification and Recovery Signature Tool" or, in short, "FIRST". The idea behind this project is to improve the daily life of malware researchers by automating boring stuff. We don't like boring tasks, right? Here again, the goal is to analyse a sample and tag it with the right family. FIRST is an IDA Pro plugin which extracts interesting information from functions found in the malware samples (opcodes, architecture, syscalls, …) and sends it to a server, which will check the information against its database to help identify the malware. Why does it work? Because people are lazy and don't want to reinvent the wheel, so the chances of finding shared functions in multiple samples are good. As a kick-off for the project, there is a public server available (first-plugin.us). It can be used as a first repository, but you can also start your own server. Nice project, but less interesting for me.

After the lunch break, Tom Ueltschi came on stage to present "Advanced Incident Detection and Threat Hunting using Sysmon (and Splunk)". Tom is a regular speaker at Botconf and, this year again, he came with nice stuff that can be re-used by most of us. First, he made an introduction about detection techniques. It is very important to deploy the right tools to collect interesting data from your infrastructure. Basically, we have two types:

  • NBD or network-based detection
  • HBD or host-based detection

For Tom, the best tools are Bro as NBD and Sysmon + Splunk as HBD. I agree with his choices. Why Sysmon? Basically, it's free (part of the Sysinternals suite) and, being developed by Microsoft, it is easy to deploy and integration is smooth. It logs to the Windows Event Logs. Tom's research started with a presentation he attended at RSA with the author of Sysmon. The setup is quite simple:

Sysmon > Windows Event Logs > Splunkforwarder > Splunk

Just one remark: take care of your Splunk license! You may quickly collect tons of events! It is recommended to filter them to just collect the ones interesting for your use cases. The second part of the presentation covered such use cases. How to start? The SANS DFIR poster is often a good starting point. Here are some examples covered by Tom:

  • Tracking rogue svchost.exe: it is always executed from the same path and MUST be executed with a '-k' parameter. If that's not the case, it is suspicious.
  • Detection of Java Adwind RAT
  • Advanced Hancitor (referring to Didier Stevens’s SANS diary)

It is also possible to perform automatic malware analysis, to detect keyloggers and password stealers, and to spot lateral movement (why does a workstation have flows to another one over TCP/445?). It is also interesting to keep an eye on parent/child process relationships. A lot of other interesting tips were reviewed by Tom. For me, it was the best talk of the day! If you're working in a blue team, have a look at his slides asap.

Sébastien Larinier & Alexandra Toussaint presented "How Does Dridex Hide Friends?". They presented the findings of an investigation they performed at one of their customers, who suffered a major bank fraud (€800K!). The customer was "safe" and had 2FA activated, but was compromised nonetheless and asked them to find out how. They reviewed the classic investigation process, which started with a disk image that was analysed: what they found, how, the artefacts, etc. The first conclusion was that the computer was infected by Dridex. But 6 days later, another suspicious file was created which dropped a RAT on the computer. A nice example of a real infection (I was surprised that the customer agreed to be used as an example for the talk).

And the afternoon continued with "A Tete-a-Tete with RSA Bots" by Jens Frieß and Laura Guevara. Here again, the idea of the talk was to automate the analysis of encrypted communications between a bot and its C&C. When you are facing encryption every day, it may quickly become a pain. The good news for the research was that there are not a lot of changes in encrypted communications across malware families. Also, bad guys are following best practices and rely on standard encryption libraries. The analysis process used to decrypt communications was reviewed: it is based on a fake C&C with a rogue certificate, a DNS server to spoof requests and the injection of a DLL with cryptographic functions.

After a quick coffee break, the next presentation was "Takedown client-server botnets the ISP-way" by Quảng Trần, who works for a Vietnamese ISP. Takedowns are important countermeasures in fighting botnets. Why is it so important from an ISP perspective? Because they need to protect their customers and their network, and also respond to requests from local law enforcement agencies. Quang explained the technique used to bring a botnet down: once the IPs & domains have been identified, the ISP can re-buy expired domains or perform domain / hosting termination. In this case, the major issue is that some ISPs are not always reactive. But for an ISP, it can be interesting to maintain the DNS, monitor the traffic, reroute it or perform DPI. The technique used is a sinkhole server that can serve multiple botnets via their own protocols and mimic their C&Cs via a set of plugins. With this technique, they can fake the C&C and send termination commands to the bots to bring the botnet down in a safe way. Interesting remark during the Q&A session: this kind of operation is illegal in many countries!

For the next talk, the topic was again "automation" and "machine learning". Sebastian Garcia presented "Detecting the Behavioural Relationships of Malware Connections". The idea of his research is not to rely only on IOCs (the only real information we have today to fight malware and botnets), because they do not perform well enough against new malware. Sebastian's idea was to put more focus on network flows. Sometimes that can help, but not always. What about checking the behaviour? He explained his project, which indeed produces nice graphs:

Traffic Behaviour

It was not easy for me to follow this talk but the information shared looked very interesting. More information is available here.

The last talk had an attractive title: "Analysis of Free Movies and Series Websites Guided by Users Search Terms" by Martin Clauß and Luis Alberto Benthin Sanguino. The idea is to evaluate the risk of visiting FMS websites. FMS means "Free Movies and Series". They are juicy targets to deliver malicious content to the visitors (millions of people visit such websites). The idea was to collect the URLs visited while browsing such websites and check whether they are malicious or not. Classic targets are Flash Players, but there are many others. How to reach FMS websites? Just search via Google: search for your favourite singer or the latest Hollywood movie. The first x results were crawled and analysed via online tools like VT, Sucuri, ESET, etc. The results are quite interesting. As you can imagine, a lot of websites redirect to malicious content. Based on the 18K pages analysed, 7% were malicious (and 10% of the domains). Other interesting stats were reported, like which singer returned the largest number of malicious pages or who is most targeted (Spanish speakers). Nice talk to finish the day.

Before the official reception, a lightning talks session was organised with 10 talks of 3 minutes each about nice topics. That's enough for today, see you tomorrow with more content!

[The post Botconf 2016 Wrap-Up Day #2 has been first published on /dev/random]

En tant qu’humains, nous fonctionnons avec des objectifs. La mesure liée à cet objectif, que j’appelle « observable », peut pervertir complètement le système au point de le détourner de son objectif initial. J’ai introduit le concept dans « Méfiez-vous des observables » mais sachez que, dans notre société, l’observable par défaut est l’argent.

L’amour, le couple et le restaurant

Comme je l’ai expliqué à travers la fable des voitures en Observabilie, nous vivons dans un monde dont certains aspects sont difficiles à quantifier. Nous nous rabattons alors sur des « observables », faciles à mesurer. Mais si une observable est même très légèrement décorrélée de l’aspect original, le fait de mesurer va amplifier cette décorrélation jusqu’à l’absurde.

« Toute utilisation d’une observable imparfaitement liée à un objectif va tendre à maximiser la décorrélation entre cette observable et l’objectif »

Prenons un exemple simple : si vous souhaitez augmenter l’amour dans votre couple, l’amour est une donnée difficilement mesurable. Par contre, si votre conjoint remarque que vous vous invitez mutuellement plus souvent au restaurant lorsque vous êtes amoureux, vous pouvez décider de prendre le nombre de sorties gastronomiques par mois comme observable de votre amour.

Après quelques temps, vous serez sans même en avoir conscience encouragé à aller au restaurant le plus possible. Idéalement tous les soirs !

Serez-vous pour autant plus amoureux ? Dans le meilleur des cas, rien n’aura changé. Dans le pire, vous pourriez même détruire votre couple par cette absurde obsession des restaurants, vous prendrez du poids et dilapiderez vos économies.

Cela semble évident dans ce cas de figure mais pourtant nous le reproduisons en permanence avec une observable presqu’aussi absurde que le nombre de sorties au restaurant par mois. Une observable devenue universelle. L’argent !

Money, our main observable

Our entire society, all of our values, push us to maximise money.

One person claims that their priority in life is their children’s education and will do everything to… earn money in order to pay for a private school, even if that means working abroad and seeing their children only once a month.

Another wants to live peacefully in a quiet spot and will, as a consequence, work very hard in a big city for decades in order to… “earn enough to stop working”.

These cases are of course not universal, and there are plenty of occasions when we turn down a financial gain. But it is amusing to note that those occasions will be carefully weighed and will have to be justified at great length. By default, earning money is the most attractive option. We push the perversity as far as quantifying anything and everything, including our happiness, in financial equivalents.

Money has become such a universal observable that even happiness is measured in money. Did you know that in the United States, getting rid of the daily commute between home and work is worth, in terms of happiness, a salary increase of 40,000 dollars a year? So happiness can be quantified in dollars? Isn’t it shocking that one of the main arguments for suicide prevention is… the cost of a suicide to society ($849,878 in Canada. How precise!)? One life lost matters little. But if it costs something, then we must act!

If we want to reduce the number of suicides, it would only be to save money!

Is it even possible to convert the pain of a suicide into a financial loss? And what conclusion should we draw if, by chance, the calculation had shown that a suicide is a net gain for society?

Is the disappearance of bees worrying? No, not really. But it is enough to state that pollinators perform work valued at between 2 and 5 billion euros per year in France to get the audience’s attention. Add that they create 1.4 billion jobs worldwide and the trick is done. Without pollinating insects we would literally starve. But that is not what matters. What would be serious is losing billions of euros and jobs…

Even lack of sleep is monetised, estimated at 411 billion dollars a year for the American economy.

This, by the way, is the trap ecologists fell into when they claimed that ecology was more economical and would create jobs. It is enough to answer them that, in that case, the market will naturally move towards the most ecological solution and that there is absolutely no need to intervene.

Just like counting restaurant outings, money is a very convenient observable and, what is more, a universal one. With a few very rare exceptions, every human being today uses money, and it can be converted into any other currency.

The impossibility of multiple goals

Popular wisdom teaches us that if you chase two hares, you catch neither. And, unconsciously, every human and every human institution applies this principle by maximising one single goal.

If several goals are stated, the whole system will optimise the main goal through its observable. If that also happens to achieve the other goals, all the better. If not, well, by definition, a secondary goal will give way to the main one. It follows that any secondary goal is useless: if it is achieved, it is by pure luck.

Yet, as we have just seen, the default observable is money. The default goal therefore becomes getting rich.

One can also notice that people whose main goal is clearly not getting rich stand out in our society. Since money is merely a means of subsistence for them, they earn a strict minimum and devote themselves to a goal they have consciously chosen. They seem rebellious, alternative, surprising. Ironically, stating that you want to earn money is often frowned upon. Earning money is our one and only goal, but it has to be hidden; we have to be hypocrites.

Without a very strong and very clear direction setting an observable other than money, any project will automatically turn towards profit. At best the project will become commercial; at worst its members will tear each other apart, each trying to gain as much, or lose as little, money as possible.

Creating a project whose observable is not money therefore requires permanent work to assert a main goal and the observable associated with it.

If the assertion of that goal is not strong enough, the money observable will take over again. If the common observable is missing or fuzzy, individuals will fall back on their personal observable. Very often, that will be money. Individual greed will destroy the project or, at the very least, divert it from its original intention.

In the world of business and companies the question does not even arise: since the purpose of a company is to make money, any other goal will be gradually reduced and corrupted at the first sign of conflict between that secondary goal and the goal of making money. Ecology, organic food and social concerns are striking examples: from laudable secondary goals, they have become mere marketing arguments, sometimes hiding utterly cynical practices. In the best case, these secondary goals have become pious wishes that give the workers a clear conscience.

The ultimate absurdity: GDP

The prize for the absurdity of observables goes to the largest of our institutions: the state, for which the main observable has also become money, through the measurement of GDP.

I will not detail the absurdity of GDP; others have done it better than I could. Suffice it to know that if you pay me 50€ to dig a hole and I pay you the same amount to fill it back in, we have increased GDP by 100€ even though nothing, absolutely nothing, has changed in the world. Neither the hole (which has been filled back in), nor our respective bank accounts.

Yet this measure is now the one that controls absolutely everything else. The most striking example comes from Greece: while the crisis has pushed countless Greeks into utter destitution, the suicide rate is at an all-time high and public health is deteriorating rapidly, nobody really cares.

But let the Greek government announce that it might take measures that could impact the GDP of neighbouring countries, and the entire political class is suddenly indignant. You had better get used to it: your only usefulness in such a system is to make GDP grow.

Identify your interlocutor’s goal

Once this principle is well understood, a whole universe that seemed absurd suddenly becomes logical. You only need to identify the real goal of the person you are dealing with. The sole objective of a politician, for example, is to be re-elected. Every action he takes is taken with the one and only aim of maximising his observable: the votes received at the next election.

Public money will therefore only ever be spent in two possible ways: either because it gives visibility to the politician who made the decision, or because it benefits him directly or indirectly. This is what I have called the “evaporation loop”.

Any employee paid per unit of time (hour, week, month, …) will have as their sole objective to justify the time they spend. If the workload shrinks to the point of disappearing, the employee will do everything, even unconsciously, to invent complexity that justifies that time. Conversely, anyone paid a fixed fee will have as their sole objective to spend as little time on the job as possible.

Any press outlet financed by advertising will optimise its operation to maximise its audience’s exposure to advertising. If that audience is measured in “clicks”, then the outlet will turn into a click-generating machine, whatever the sincere ideals of the people who make it up.

Note that none of this is either positive or negative. It is simply a mechanical fact and, in my view, an inescapable one.

“Any human organisation naturally tends towards maximising the profit of the people who control it.”

If your goals are aligned with those of the person you are dealing with, everything goes smoothly. If, for example, you want to organise a village dance, you ask for a subsidy and you offer a politician the chance to become the “patron” of the event and give a speech there, your goals will be aligned and you will more than likely get the subsidy.

And you, what is your observable?

Is money the one and only universal observable? Perhaps. In any case, it is today the most common and the most widely used. We therefore have to take it into account without rejecting it wholesale. Building a society without money seems to me an unattainable utopia, and probably not a desirable one.

At the individual level, however, very few of us consider money the sole driving force of our lives. Yet, out of convenience, we surrender to it. We work more in order to earn more. We put off taking risks that could make us lose money.

Confronted with this reality, we tend to cover it up. To brandish secondary goals and declarations of intent. To deceive ourselves.

But then, what is the observable of our real personal goals, the ones we have never taken the trouble to explore, to become conscious of?

Because if we want to change the world and change ourselves, we have to set ourselves a real main goal, with an observable worthy of it.

 

Photo by Glenn Halog.

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE licence.

November 30, 2016

This is already the fourth edition of the Botconf security conference, fully dedicated to fighting malware and botnets. Since the first edition, the event location has changed every year, which allowed me to visit nice cities in France. After Nantes, Nancy and Paris, the conference invaded Lyon. I arrived yesterday evening and missed the workshops (four of them were organised the day before the conference). The first day started smoothly with coffee and pastries. What about this edition? Tickets had been sold out for a while (300 people registered) and the organisation remains top-notch. Eric Freysinnet did the opening session with the classic information about the event. About the content, there is something important at Botconf: some talks contain sensitive information and cannot be disclosed publicly. The organisers take the TLP protocol seriously. Each speaker is free to refuse to be recorded or to have his/her presentation covered on Twitter or blogs (this is part of their social media policy). Of course, I’ll respect this. Eric’s last remark: “No bad stuff on the wifi, there are some people from French LE in the room!” 😉

After Eric’s introduction, the first slot was assigned to Jean-Michel Picot from Google (with the collaboration of many other Googlers) who presented “Locky, Dridex, Recurs: the Evil Triad”. He presented how those malicious codes are seen from a Gmail perspective. As said above, I respect the speaker’s choice: this talk was flagged as TLP:RED. It was interesting but (too?) short, and some questions from the audience remained unanswered by the speaker/Google. More information about their research can be found here. For me, nothing was really TLP:RED; nothing sensitive was disclosed.

The next talk, “Visiting the Bear’s Den”, was given by Jean-Ian Boutin, Joan Calvet and Jessy Campos (the only speaker present on stage). The “Sednit” group (also known as APT28, Fancy Bear or Sofacy) has been active since 2004. It is very active and famous. The analysis of Sednit started with a mistake from the developers: they forgot to set the “forgot” field on bit.ly. The URL shortener service was indeed used to propagate their malicious links. Basically, Sednit uses an ecosystem based on tens of software components (RATs, backdoors, keyloggers, etc.). To explain how Sednit works, Jessy had the good idea of telling the story of a regular user called Serge. Serge works for a juicy company and could be a nice target.

09:30 AM, Monday morning: Serge arrives at the office and reads his emails. In one of them, there is a link with a typo and an ID to identify the target. Serge then meets SEDKIT (the exploit kit). On the landing page, his browser details are disclosed to select the best exploit to infect him. Serge’s computer is vulnerable and he visits the SEDNIT Exploit Factory. The exploit downloads a payload and Serge now meets SEDUPLOADER. The dropper uses anti-analysis techniques, drops the payload, performs privilege escalation and implements persistence.

10:00 AM: Serge is a victim. SEDRECO deployment.

02:00 PM: Serge meets XAGENT (the modular backdoor). At this step, Jessy explained how communications are performed with the C&C. It works via SMTP. Messages are sent to created or stolen mailboxes and the C&C retrieves the messages:

Victim > SMTPS > Gmail.com < POP3S < C&C > SMTPS > Gmail.com < POP3S < Victim

What happens during the next three days? Modules are executed to steal credentials (via Mimikatz!) and registry hives are collected. Note that multiple backdoors can be installed for redundancy purposes. Finally, lateral movement occurs (pivoting).

Friday, 11:00 AM: long-term persistence is implemented via a rogue DLL called msi.dll, which is used by Microsoft Office. When an Office component is started, the rogue DLL is loaded and then loads the original one (they offer the same functions). Finally, the DOWNDELPH bootkit was explained. This was a very interesting talk with a lot of information. For more details, two links: the research paper and the Sednit IOCs if you are interested in hunting.

After the lunch break (which was very good as usual at Botconf), Vladimir Kropotov and Fyodor Yarochkin presented “LURK – The Story about Five Years of Activity”. Sednit was covered in the morning and this talk had a similar structure, but about “LURK”. It is a banking trojan targeting mainly Russia, and the group behind it has been active for a few years. The research was based on the analysis of proxy logs. LURK was first detected in 2011 and was the first to have a payload residing only in memory (no traces, no persistence). It is easy to identify because malicious URLs contain the following strings:

  • /GLMF (text/html)
  • /0GLMF (application/3dr)
  • /1GLMF (application/octet-stream)

Those can easily be detected with a simple regular expression. Vladimir and Fyodor reviewed the different waves of attacks and how they infected victims from 2012 to 2014. A specific mention went to the “ADDPERIOD” abuse, a flag set on a domain during registration: if the owner cancels the domain during that period, the price can be refunded. Websites were infected via memcached cache poisoning or via an extra module added to the Apache web server. Note that, in 2012, no antivirus on VT was able to flag LURK samples as malicious (score: 0). The talk ended with a video recorded by the Russian police when the LURK owners were arrested.
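
Coming back to the URL patterns listed above, here is a minimal sketch of my own (not code shown during the talk, and the log line format is just an assumption) of how such a regular expression could be applied to proxy log lines in PHP:

$pattern = '#/[01]?GLMF\b#';

$logLine = 'GET http://example.com/0GLMF HTTP/1.1 200 application/3dr';

// preg_match() returns 1 when a LURK-style path is present in the line
if ( preg_match( $pattern, $logLine ) ) {
    echo "Possible LURK URL: $logLine\n";
}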

Then Andrey Kovalev and Evgeny Sidorov presented “Browser-based Malware: Evolution and Prevention”. This is not a new type of attack. MitB (“Man in the Browser”) is an attack where a hooked interaction changes the information on the rendered page (like adding some JavaScript code). Targets are banking sites, mail providers or social networks. In 2013, there was already a talk about this topic by Thomas Siebert from GData. There are many drawbacks with this kind of attack:

  • Browser upgrades may break the hook
  • App containers make the injection more difficult
  • A web injector process has to be on the target system
  • They leave evidence (IOCs)
  • AV software is getting better at detecting them.

So, can we speak about MitB-NG or MitB 2.0? They are performed via malware or adware browser extensions, WFP or remote proxies. Don’t forget VPN endpoints or Tor exit nodes that can inject code! The next part of the talk was dedicated to the Eko backdoor and SmartBrowse. So, what about detection and prevention? CSP (“Content Security Policy”), usually used to make XSS harder, can be useful. JavaScript validation can also be implemented in code: the JS checks the page integrity (like file hashes). Conclusions: there are new challenges ahead and the extension stores should implement more checks. Finally, browser developers should pay more attention to mechanisms that protect users from rogue extensions (with signatures), and AV vendors should give more focus to extensions and not only to droppers.
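
As a concrete illustration of the CSP idea mentioned above (a minimal sketch of my own, not code from the talk; the policy itself is just an example), a PHP application could emit such a policy as a response header:

// Only allow scripts that come from our own origin; inline scripts and
// scripts pulled from third-party hosts will be blocked by the browser,
// which makes MitB-style injection much more visible.
header( "Content-Security-Policy: default-src 'self'; script-src 'self'" );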

The next talk was titled “Language Agnostic Botnet Detection Based on ESOM and DNS” and is the result of research performed by a team: Urs Enliser, Christian Dietz, Gabi Drei and Rocco Mäandrisch. The talk started with the motivations for performing such research. Most malware uses a DGA or “Domain Generation Algorithm”. If this algorithm can be reversed (sometimes it is), we are able to generate the list of all domains and build useful lists of IOCs. The approach presented here was different: why not use ESOM (“Emergent Self-Organising Maps”) to try to detect malicious domains amongst all the DNS traffic? The analysis was based on the following steps:

  • Extract language features from domain names in data samples
  • Run ESOM training
  • Categorise domain names from real live traffic with ESOM map

Here is an example of ESOM output:

SOM Example

The magic behind ESOM was explained but, for newbies like me, it was difficult to follow. I’d like to get a deeper introduction to this topic. The research is still ongoing but it looks promising.
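
To make the first step (extracting language features from domain names) a bit more concrete, here is an illustrative sketch of my own; the actual feature set used by the researchers is not described in this wrap-up, so these features are only assumptions:

// Rough "language features" for a domain label: length, digit ratio,
// vowel ratio and character entropy (random-looking DGA names tend to
// have high entropy and few vowels).
function domainFeatures( string $domain ): array {
    $label = explode( '.', $domain )[0];
    $length = strlen( $label );

    $entropy = 0.0;
    foreach ( count_chars( $label, 1 ) as $count ) {
        $p = $count / $length;
        $entropy -= $p * log( $p, 2 );
    }

    return [
        'length'      => $length,
        'digit_ratio' => preg_match_all( '/[0-9]/', $label ) / $length,
        'vowel_ratio' => preg_match_all( '/[aeiou]/i', $label ) / $length,
        'entropy'     => $entropy,
    ];
}

print_r( domainFeatures( 'xjw3k9qzt1.example' ) );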

The afternoon coffee break was welcome before the last set of presentations. Victor Acin and Raashid Bhat came on stage to present “Vawtrak Banking Trojan: A Threat to the Banking Ecosystem”. Vawtrak has been in the wild for a while. It performs MitB attacks (see above) and is very modular and decentralised. It was previously known as Neverquest. The malware internals were reviewed (LZMAT compression, 32 and 64 bit versions, XOR encoding, etc.) as well as the configuration (C&C servers, botnet configuration, version, signing keys). By default, it injects itself into explorer.exe and iexplore.exe and then into the child processes. After a review of the infection process and all the techniques used, the speakers reviewed the communications with the C&C. Vawtrak is a modular trojan: multiple plugins provide an opcode / API interface. Some statistics were disclosed and look very impressive:

  • Used by 2 groups
  • 85k botnet infections
  • Top-5: US, Canada, UK, India, France
  • 2.5M credentials exfiltrated
  • 82% of infections are in the USA
  • 4000 identified IOC’s
  • Win7 is the most affected OS
  • 2058 infections on Windows Server 2008

Then, Wayne Crowder presented “Snoring Is Optional: The Metrics and Economics of Cyber Insurance for Malware Related Claims”. Not a technical talk at all, but very interesting! Wayne’s exercise was to talk about botnets not from a technical point of view but from a cyber-insurance point of view. An interesting statistic reported by Wayne:

If cybercrime had been a US company in 2014 it would have been the 2nd largest…

From an insurance perspective, more malware samples mean more risk of data leaks and more costs for the company. Insurance drives safety and security in many domains, and it will follow this trend in cyber security for sure. What’s covered? Data theft (PII, cards, health data), malware, DDoS, hacking, business interruption, phishing, extortion, mistakes. What must a good policy cover?

  • Legal damages
  • Crisis
  • PCI (or similar) fines
  • Availability
  • Extortion
  • “cyber related perils” (who said “IoT”?)

Cyber Risks

The talk was clearly based on the US market and Wayne gave a lot of statistics. But keep in mind that this could change in the EU zone with the new notification law coming in 2018 (GDPR). Wayne also reviewed several cases where cyber insurance was involved (with nice names like Sony, Target, hospital ransomware, …). Note that not everything is covered (brand, reputation, state-sponsored hacking). Very interesting, and I’m curious to see how the cyber-insurance market will evolve in the (near) future.

The last talk was “Hunting Droids from the Inside” by Lukas Siewierski, also from Google. This was given again under TLP:RED.

That’s all for the first day, stay tuned for a new wrap-up. Don’t forget that some talks are streamed live on YouTube.

[The post Botconf 2016 Wrap-Up Day #1 was first published on /dev/random]

November 27, 2016

I’m happy to announce the immediate availability of the Final Rush Pro 5 map for Supreme Commander Forged Alliance Forever. During the past few weeks I’ve been reworking version 4 of the map, and have added many new features, fixed some bugs and improved balance in the team vs team modes.

final-rush-pro-5-large

Version 4 has a Game Mode setting with Paragon Wars, Survival Versus, Normal and 4 different difficulties of Survival Classic. Version 5 has a dedicated Survival Difficulty setting, so it’s now possible to change the difficulty for Survival Versus. Furthermore, a ton of new lobby options have been added that allow changing the delay of the various tech level waves, their frequency, how quickly the units should gain health, how often random events should happen, etc. This gives you much greater control over the difficulty in all survival modes, and adds a lot of replayability by enabling alteration of the nature of the challenge the map provides.

The team vs team modes, Survival Versus and Paragon Wars, both had some serious balance issues. In Survival Versus, random events and bounty hunters would attack a random player. While that works fine in Survival Classic, in the modes with two teams it makes things unfairly harder for one team. I’ve observed this several times, where suddenly one team gets whacked by a few random events even though they were doing better than the other team. In this new version, random events and bounty hunters target a random player from each team.

In Paragon Wars, the issue was that the civilian base protecting the Paragon Activator would be randomly constructed. Within a certain bounding box, the Paragon Activator and a bunch of defensive structures would spawn. This means the Activator could be at the far side of the bounding box, and the defenses mostly on the other side, making it a lot easier for one team to approach the Activator than for the other. Now the base is entirely symmetrical (using a circular layout) and spawns at the exact center of the map.

Version 4 had 6 working lobby options, while version 5 has 23. Besides the new difficulty related options, it is now possible to turn off aspects of the game. For instance, you can now completely disable random events, MMLs and “aggression tracking” (punishing of fast tech, high eco and aggressive ACU placement).

Some significant bugs were fixed, most notably Paragon Wars not working correctly when playing with fewer than 8 people. You can now play it 2v2, 4v1 or however else you see fit. Another thing that was fixed is the Auto Reclaim option, so there is no more need for the Vampire mod. Unlike the mod, this option allows you to specify how much resources you should get, all the way from none to over 9000% (yes, that is an actual value you can select).

Another mod that is no longer needed is the FinalRushPro3 itself. You now just need the map, and it will function properly without any mods. Due to removing integration with the FinalRushPro3 mod, the special UI is no longer present. In a lot of cases it did not work properly anyway and just took up space, and I’ve not gotten around to making a better replacement yet.

You can download the map as a zip. Unfortunately the FAF map vault infrastructure is rather broken, so I’ve not been able to upload the map to the vault. Hopefully this gets resolved soon.

For a full list of changes, see the readme. I got a number of ideas for future enhancements, which might become part of a version 5.1, 5.2, etc. Feel free to submit your own feature requests!

If you’re interested in how I went about creating version 5 from a technical point of view, see my post on Refactoring horrible Lua code.

November 26, 2016

Our keyserver is now accepting submissions for the FOSDEM 2017 keysigning event. The annual PGP keysigning event at FOSDEM is one of the largest of its kind. With more than one hundred participants every year, it is an excellent opportunity to strengthen the web of trust. For instructions on how to participate in this event, see the keysigning page. Key submissions close a week before FOSDEM (27 January), to give us some time to generate and distribute the list of participants. Remember to bring a printed copy of this list to FOSDEM.

Etymologically, democracy means power exercised by the people. It stands in opposition to aristocracy, where power is held by a minority.

In today’s society, we have to admit that what we call democracy is not one. Real power is still held by a minority.

But, unlike a traditional hereditary aristocracy, the modern aristocracy is now chosen by the people through the electoral process. We live in a democratically representative aristocracy, the electoral process allowing us to dress ourselves up with the title of “democracy”.

The weaknesses of elections

As I pointed out in « Et si on tuait le parti pirate ? », getting elected and being an elected official are two fundamentally different, even antagonistic, jobs. Those who master the art of getting elected will conquer power and hold on to it, whatever their actions. Our system is therefore completely subservient to the weaknesses of the chosen electoral process.

Two qualities are essential to get elected: popularity and access to money, money making it possible to buy popularity through election campaigns. Elections therefore favour the emergence of wealthy figures, representing the interests of other wealthy people and masters in the art of appearances or of diverting attention with a programme as useless as it is dishonest.

The methods used to compute electoral results themselves carry decisive weight. The French two-round system, for instance, tends to kill off any candidate who favours compromise or innovation, to the benefit of the one who is unacceptable to 49% of voters but good enough for 51%.

In the United States, the electoral college system allows the election of a president who is less popular than his opponent. Twice (Bush vs Gore in 2000 and Trump vs Clinton in 2016), the winner only became so by carrying the state of Florida in a highly suspect manner.

What we call democracy is therefore a set of rules that hands power to whoever can best exploit them, legally or illegally.

The future of democracy

As I describe in my posts « Il faudra la construire sans eux » and « Obéir, lire, écrire, les trois apprentissages de l’humain », I think we have reached a turning point in history.

After aristocracy and democratically representative aristocracy (more commonly called “democracy”), it is time to reinvent a new form of governance.

In my view, this post-democracy will be built on the technological tools of its time, namely the Internet and the blockchain. Experiments in liquid democracy are opening the way in that direction.

However, this (r)evolution is not here yet and we have to live with the system in place. How can we bring new ideas into a system structurally built to favour conservatism?

The candidate Jean-Luc Mélenchon, for example, promises that, if elected, he will convene a constituent assembly and then resign. As an idea, it is obviously magnificent. But Mr Mélenchon remains a traditional politician from a traditional party, seeking above all to defend values. That defence of values could conflict with the establishment of a new system.

The laprimaire.org experiment

Trying to use modern tools to disrupt, even slightly, the system in place is exactly what laprimaire.org is attempting to do for the 2017 French presidential election.

The principle is quite simple: a set of rules fixed in advance to designate a single candidate, a candidate coming from the net and chosen by all the citizens who wish to take part.

In early 2016, any French citizen could declare themselves a candidate. To be selected for the second round, a candidate had to gather the support of 500 other French citizens. An arbitrary barrier, admittedly, but one easily cleared by anyone with the motivation to genuinely become a candidate.

Out of 215 declared candidates, 16 passed the 500-supporter mark. All the citizens registered on laprimaire.org were then asked to choose their 5 favourite candidates among the 16 qualified ones, through a process based on “majority judgment”.

Of these 5 candidates, only one will finally be chosen as the laprimaire.org candidate to actually run in the presidential election. But the organisers set a minimum threshold of 100,000 registered citizens before taking the adventure any further.

There are currently nearly 97,000. So, if you are a French citizen, you know what you have to do to see a presidential candidate emerge not from a party but from an innovative citizen-driven process.

Even if this candidate has no chance of winning, their mere presence on the ballot will be formidable publicity for the fact that, yes, the Internet now makes a new mode of governance possible. That we are hungry for questioning, for new ideas, for exploration. That we want to take our destiny into our own hands!

Laprimaire.org is certainly not perfect, but it is a real experiment that innovates, that tries, and that is open to all political tendencies. Being Belgian, I cannot take part, but I warmly recommend that my French readers register.

Hurry, you only have two weeks left!

 

Photo by Will Keightley.

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE licence.

November 25, 2016

Back to school | Dries | Fri, 11/25/2016 - 16:58

Last week I presented at the University of Antwerp, my alma mater. I was selected to be the 2016/2017 ambassador of the alumni and was asked to talk about my career and work. Presentations like this are a bit surreal because I still feel like I have a lot to learn and accomplish. Deep down I'll always be searching for something more. I want my life and career to be meaningful and creative, and full of laughter and friends. This presentation was very special as it was attended by my parents, friends from high school and college, professors whose classes I attended 20 years ago, and the university's rector or chancellor, Herman Van Goethem. It was great to laugh and catch up with old friends and family, and it felt meaningful to share some of my lessons learned with a group of young students.

Antwerp university presentation
The university's rector or chancellor, Herman Van Goethem, introducing me.
Antwerp university presentation
My parents sitting on the front row.
Antwerp university presentation
Antwerp university presentation
Antwerp university presentation
Antwerp university presentation
Me with some of my friends from high school that I hadn't seen in 20 years!


I published the following diary on isc.sans.org: “Free Software Quick Security Checklist“.

Free software (open source or not) is interesting for many reasons. It can be adapted to your own needs, it can be easily integrated within complex architectures, but the most important reason remains, of course, the price. Even if there are many hidden costs related to “free” software. In case of issues, a lot of time may be spent searching for a solution or diving into the source code (and everybody knows that time is money!)… [Read more]

 

[The post [SANS ISC Diary] Free Software Quick Security Checklist was first published on /dev/random]

November 24, 2016

No? How many politically or socially relevant blog posts have I written lately? And that while my self-appointed goal is to inform some young programmer or other about all the things I happen to run into.

Anyway. I still want to write about how to safely bring a QObject that lives in a QSharedPointer to QML. And back. There are proper methods for that. Without all sorts of nonsense like what is explained here.

I would also like to write down a decent version of this and this. It is coming together quite nicely in the new Heidenhain software. Their editor is going to be ridiculously good. But I have to disappoint Chris Loos of Materialise: Walter Scheepens has decided that I cannot open source it for now.

I, the little pig, must wait patiently for what is to come.

Moral philosopher Katleen Gabriels presented her book ‘Onlife, Hoe de digitale wereld je leven bepaalt’ yesterday. So I went to have a look. I was impressed.

Her presentation was balanced; she sounded well informed. The debate with, among others, Sven Gatz, Pedro De Bruyckere and Karel Verhoeven was actually quite decent too. So I am very curious about the book.

After a Spinoza expert, we now also have a moral philosopher who will occupy herself with, among other things, Internet of Things matters. I think it is a good thing that the country's philosophers are finally getting involved. Consumers do not believe us techies anyway when we tell them that all the junk they bought is utter rubbish. So maybe they will listen a bit to the nation's philosophers? I don't think so. But the situation will not get any worse because of it either, will it?

Anyway. Fellow nerds, contact Katleen and tell her about the unethical things you have run into. Who knows, she might work your story into a talk or a next book? You never know. Besides, it was about time you came down from that attic anyway.

Both Domain Driven Design and architectures such as the Clean Architecture and Hexagonal are often talked about. It’s hard to go to a conference on software development and not run into one of these topics. However it can be challenging to find good real-world examples. In this blog post I’ll introduce you to an application following the Clean Architecture and incorporating a lot of DDD patterns. The focus is on the key concepts of the Clean Architecture, and the most important lessons we learned implementing it.

The application

The real-world application we’ll be looking at is the Wikimedia Deutschland fundraising software. It is a PHP application written in 2016, replacing an older legacy system. While the application is written in PHP, the patterns followed are by and large language agnostic, and are thus relevant for anyone writing object oriented software.

I’ve outlined what the application is and why we replaced the legacy system in a blog post titled Rewriting the Wikimedia Deutschland fundraising. I recommend you have a look at least at its “The application” section, as it will give you a rough idea of the domain we’re dealing with.

A family of architectures

Architectures such as Hexagonal and the Clean Architecture are very similar. At their core, they are about separation of concerns. They decouple from mechanisms such as persistence and frameworks, and instead focus on the domain and high level policies. A nice short read on this topic is Uncle Bob’s blog post on the Clean Architecture. Another recommended post is Hexagonal != Layers, which explains how just creating a bunch of layers is missing the point.

The Clean Architecture

cleanarchitecture

The arrows crossing the circle boundaries represent the allowed direction of dependencies. At the core is the domain. “Entities” here means Entities as in Domain Driven Design, not to be confused with ORM entities. The domain is surrounded by a layer containing use cases (sometimes called interactors) that form an API that the outside world, such as a controller, can use to interact with the domain. The use cases themselves only bind to the domain and certain cross cutting concerns such as logging, and are devoid of binding to the web, the database and the framework.

class CancelDonationUseCase {
    private /* DonationRepository */ $repository;
    private /* Mailer */ $mailer;

    public function cancelDonation( CancelDonationRequest $r ): CancelDonationResponse {
        $this->validateRequest( $r );

        $donation = $this->repository->getDonationById( $r->getDonationId() );
        $donation->cancel();
        $this->repository->storeDonation( $donation );

        $this->sendConfirmationEmail( $donation );

        return new CancelDonationResponse( /* ... */ );
    }
}

In this example you can see how the UC for canceling a donation gets a request object, does some stuff, and then returns a response object. Both the request and response objects are specific to this UC and lack both domain and presentation mechanism binding. The stuff that is actually done is mainly interaction with the domain through Entities, Aggregates and Repositories.

$app->post(
    '/cancel-donation',
    function( Request $httpRequest ) use ( $factory ) {
        $requestModel = new CancelDonationRequest(
            $httpRequest->request->get( 'donation_id' ),
            $httpRequest->request->get( 'update_token' )
        );

        $useCase = $factory->newCancelDonationUseCase();
        $responseModel = $useCase->cancelDonation( $requestModel );

        $presenter = $factory->newNukeLaunchingResultPresenter();
        return new Response( $presenter->present( $responseModel ) );
    }
);

This is a typical way of invoking a UC. The framework we’re using is Silex, which calls the function we provided when the route matches. Inside this function we construct our framework agnostic request model and invoke the UC with it. Then we hand over the response model to a presenter to create the appropriate HTML or other such format. This is all the framework bound code we have for canceling donations. Even the presenter does not bind to the framework, though it does depend on Twig.

If you are familiar with Silex, you might already have noticed that we’re constructing our UC differently than you might expect. We decided to go with our own top level factory, rather than using the dependency injection mechanism provided by Silex: Pimple. Our factory internally actually uses Pimple, though this is not visible from the outside. With this approach we gain nicer access to service construction, since we can have a getLogger() method with a LoggerInterface return type hint, rather than accessing $app['logger'] or some such, which forces us to bind to a string and leaves us without a type hint.
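
A minimal sketch of that idea (class and service names here are illustrative, not our real factory; it assumes Pimple 3.x and a PSR-3 logger):

use Pimple\Container;
use Psr\Log\LoggerInterface;
use Psr\Log\NullLogger;

class TopLevelFactory {
    private /* Container */ $container;

    public function __construct() {
        $this->container = new Container();
        $this->container['logger'] = function() {
            return new NullLogger(); // real code would wire up a real logger
        };
    }

    // Callers get a return type hint instead of binding to the 'logger' string
    public function getLogger(): LoggerInterface {
        return $this->container['logger'];
    }
}

Calling $factory->getLogger() then gives IDE support and a checked return type, while the string key stays an internal detail of the factory.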

use-case-list

This use case based approach makes it very easy to see, at a glance, what our system is capable of.

use-case-directory

And it makes it very easy to find where certain behavior is located, or to figure out where new behavior should be put.

All code in our src/ directory is framework independent, and all code binding to specific persistence mechanisms resides in src/DataAccess. The only framework bound code we have are our very slim “route handlers” (kinda like controllers), the web entry point and the Silex bootstrap.

For more information on the Clean Architecture I can recommend Robert C. Martin’s NDC 2013 talk. If you watch it, you will hopefully notice how we slightly deviated from the UseCase structure as he presented it. This is because PHP is an interpreted language and thus does not need certain interfaces that are beneficial in compiled languages.

Lesson learned: bounded contexts

By and large we started with the donation related use cases and then moved on to the membership application related ones. At some point, we had a Donation entity/aggregate in our domain, and a bunch of value objects that it contained.

class Donation {
    private /* int|null */            $id
    private /* PersonalInfo|null */   $personalInfo
    /* ... */
}

class PersonalInfo {
    private /* PersonName */          $name
    private /* PhysicalAddress */     $address
    private /* string */              $emailAddress
}

As you can see, one of those value objects is PersonalInfo. Then we needed to add an entity for membership applications. Like donations, membership applications require a name, a physical address and an email address. Hence it was tempting to reuse our existing PersonalInfo class.

class MembershipApplication {
    private /* int|null */            $id
    private /* PersonalInfo|null */   $personalInfo
    /* ... */
}

Luckily a complication made us realize that going down this path was not a good idea. This complication was that membership applications also have a phone number and an optional date of birth. We could have forced code sharing by doing something hacky like adding new optional fields to PersonalInfo, or by creating a MorePersonalInfo derivative.

Approaches such as these, while resulting in some code sharing, also result in creating binding between Donation and MembershipApplication. That’s not good, as those two entities don’t have anything to do with each other. Sharing what happens to be the same at present is simply not a good idea. Just imagine that we did not have the phone number and date of birth in our first version, and then needed to add them. We’d either end up with one of those hacky solutions, or need to refactor code that has nothing to do (apart from the bad coupling) with what we want to modify.

What we did was rename PersonalInfo to Donor and introduce a new Applicant class.

class Donor {
    private /* PersonName */          $name
    private /* PhysicalAddress */     $address
    private /* string */              $emailAddress
}

class Applicant {
    private /* PersonName */          $name
    private /* PhysicalAddress */     $address
    private /* EmailAddress */        $email
    private /* PhoneNumber */         $phone
    private /* DateTime|null */       $dateOfBirth
}

These names are better since they are about the domain (see ubiquitous language) rather than some technical terms we needed to come up with.

Amongst other things, this rename made us realize that we were missing some explicit boundaries in our application. The donation related code and the membership application related code were mostly independent from each other, and we agreed this was a good thing. To make it more clear that this is the case, and to highlight violations of that rule, we decided to reorganize our code to follow the strategic DDD pattern of Bounded Contexts.

contexts-directory

This mainly consisted of reorganizing our directory and namespace structure, and a few instances of splitting some code that should not have been bound together.

Based on this we created a new diagram to reflect the high level structure of our application. This diagram, and a version with just one context, are available for use under CC-0.

Clean Architecture + Bounded Contexts

Lesson learned: validation

A big question we had near the start of our project was where to put validation code. Do we put it in the UCs, or in the controller-like code that calls the UCs?

One of the first UCs we added was the one for adding donations. This one has a request model that contains a lot of information, including the donor’s name, their email, their address, the payment method, payment amount, payment interval, etc. In our domain we had several value objects for representing parts of donations, such as the donor or the payment information.

class Donation {
    private /* int|null */            $id
    private /* Donor|null */          $donor
    private /* DonationPayment */     $payment
    /* ... */
}

class Donor {
    private /* PersonName */          $name
    private /* PhysicalAddress */     $address
    private /* string */              $emailAddress
}

Since we did not want to have one object with two dozen fields, and did not want to duplicate code, we used the value objects from our domain in the request model.

class AddDonationRequest {
    private /* Donor|null */          $donor
    private /* DonationPayment */     $payment
    /* ... */
}

If you’ve been paying attention, you’ll have realized that this approach violates one of the earlier outlined rules: nothing outside the UC layer is supposed to access anything from the domain. If value objects from the domain are exposed to whatever constructs the request model, i.e. a controller, this rule is violated. Apart from this abstract objection, we got into real trouble by doing this.

Since we started doing validation in our UCs, this usage of objects from the domain in the request necessarily forced those objects to allow invalid values. For instance, if we’re validating the validity of an email address in the UC (or a service used by the UC), then the request model cannot use an EmailAddress which does sanity checks in its constructor.

We thus refactored our code to avoid using any of our domain objects in the request models (and response models), so that those objects could contain basic safeguards.
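
The refactored request model then holds only primitive values (a sketch with illustrative field names, not the exact production class):

class AddDonationRequest {
    private /* string */ $donorFirstName;
    private /* string */ $donorLastName;
    private /* string */ $donorEmailAddress;
    private /* string */ $paymentType;
    private /* int */    $amountInEuroCents;
    private /* int */    $paymentIntervalInMonths;
    /* ... */
}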

We made a similar change by altering which objects get validated. At the start of our project we created a number of validators that worked on objects from the domain. For instance a DonationValidator working with the Donation Entity. This DonationValidator would then be used by the AddDonationUseCase. This is not a good idea, since the validation that needs to happen depends on the context. In the AddDonationUseCase certain restrictions apply that don’t always hold for donations. Hence having a general looking DonationValidator is misleading. What we ended up doing instead is having validation code specific to the UCs, be it as part of the UC, or when too complex, a separate validation service in the same namespace. In both cases the validation code would work on the request model, i.e. AddDonationRequest, and not bind to the domain.
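
Such a use case specific validation service could look roughly like this (a sketch with assumed getter names and an assumed minimum-amount policy, not our actual validator):

class AddDonationValidator {
    public function validate( AddDonationRequest $request ): array {
        $violations = [];

        // Policy-based checks work on the request model, not on the domain
        if ( $request->getAmountInEuroCents() < 100 ) {
            $violations[] = 'amount_too_low';
        }

        if ( !filter_var( $request->getDonorEmailAddress(), FILTER_VALIDATE_EMAIL ) ) {
            $violations[] = 'email_address_invalid';
        }

        return $violations; // an empty array means the request is valid
    }
}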

After learning these two lessons, we had a nice approach for policy-based validation. That’s not all validation that needs to be done though. For instance, if you get a number via a web request, the framework will typically give it to you as a string, which might thus not be an actual number. As the request model is supposed to be presentation mechanism agnostic, certain validation, conversion and error handling needs to happen before constructing the request model and invoking the UC. This means that often you will have validation in two places: policy based validation in the UC, and presentation specific validation in your controllers or equivalent code. If you have a string to integer conversion, number parsing or something internationalization specific, in your UC, you almost certainly messed up.

Closing notes

You can find the Wikimedia Deutschland fundraising application on GitHub and see it running in production. Unfortunately the code of the old application is not available for comparison, as it is not public. If you have questions, you can leave a comment, or contact me. If you find an issue or want to contribute, you can create a pull request.

As a team we learned a lot during this project, and we set a number of firsts at Wikimedia Deutschland, or the wider Wikimedia movement for that matter. The new codebase is the cleanest non-trivial application we have, or that I know of in PHP world. It is fully tested, contains less than 5% framework bound code, has strong strategic separation between both contexts and layers, has roughly 5% data access specific code and has tests that can be run without any real setup. (I might write another blog post on how we designed our tests and testing environment.)

Many thanks to my colleagues Kai Nissen and Gabriel Birke for being pretty awesome during our rewrite project.

Last year we rewrote the Wikimedia Deutschland fundraising software. In this blog post I’ll give you an idea of what this software does, why we rewrote it and the outcome of this rewrite.

The application

Our fundraising software is a homegrown PHP application. Its primary functions are donations and membership applications. It supports multiple payment methods, needs to interact with payment providers, supports submission and listing of comments and exchanges data with another homegrown PHP application that does analysis, reporting and moderation.

fun-app

The codebase was originally written in a procedural style, with most code residing directly in files (i.e., not even in a global function). There was very little design, and completely separate concerns such as presentation and data access were mixed together. As you can probably imagine, this code was highly complex and very hard to understand or change. There was unused code, broken code, features that might not be needed anymore, and mysterious parts whose purpose even the guru who had maintained the codebase for the last few years could not explain. This mess, combined with the complete lack of a specification and unit tests, made development of new features extremely slow and error prone.

derp-code

Why we rewrote

During the last year of the old application’s lifetime, we did refactor some parts and tried adding tests. In doing so, we figured that rewriting from scratch would be easier than trying to make incremental changes. We could start with a fresh design, add only the features we really need, and perhaps borrow some reusable code from the less horrible parts of the old application.

They did it by making the single worst strategic mistake that any software company can make: […] rewrite the code from scratch. —Joel Spolsky

We were aware of the risks involved with a rewrite of this nature and that such rewrites often fail. One big reason we did not decide against rewriting is that we had a period of 9 months during which no new features needed to be developed. This meant we could freeze the old application and avoid parallel development and the feature race it would have caused. Additionally, we set some constraints: we would only rewrite this application and leave the analysis and moderation application alone, and we would do a pure rewrite, avoiding the addition of new features into the new application until the rewrite was done.

How we got started

Since we had no specification, we tried visualizing the conceptual components of the old application, and then identified the “commands” they received from the outside world.

old-fun-code-diagram

Creating the new software

After some consideration, we decided to try out The Clean Architecture as a high level structure for the new application. For technical details on what we did and the lessons we learned, see Implementing the Clean Architecture.

The result

With a team of 3 people, we took about 8 months to finish the rewrite successfully. Our codebase is now clean and much, much easier to understand and work with. It took us over two man years to do this clean up, and presumably an even greater amount of time was wasted in dealing with the old application in the first place. This goes to show that the cost of not working towards technical excellence is very high.

We’re very happy with the result. For us, the team that wrote it, it’s easy to understand, and the same seems to be true for other people based on feedback we got from our colleagues in other teams. We have tests for pretty much all functionality, so can refactor and add new functionality with confidence. So far we’ve encountered very few bugs, with most issues arising from us forgetting to add minor but important features to the new application, or misunderstanding what the behavior should be and then correctly implementing the wrong thing. This of course has more to do with the old codebase than with the new one. We now have a solid platform upon which we can quickly build new functionality or improve what we already have.

The new application is the first one at Wikimedia (Deutschland) to be deployed on, and written in, PHP 7. Even though it was not an explicit goal of the rewrite, the new application has ended up with better performance than the old one, in part due to the use of PHP 7.

Near the end of the rewrite we got an external review performed by thePHPcc, during which Sebastian Bergmann, whom you might know from PHPUnit, looked for code quality issues in the new codebase. The general result was a thumbs up, which we took the creative license to translate into this totally non-Sebastian-approved image:

You can see our new application in action in production. I recommend you try it out by donating 🙂

Technical statistics

These are some statistics, for fun. They were compiled after we did our rewrite, and were not used during development at all. As with most software metrics, they should be taken with a grain of salt.

In this visualization, each dot represents a single file. The size represents the Cyclomatic complexity while the color represents the Maintainability Index. The complexity is scored relative to the highest complexity in the project, which in the old application was 266 and in the new one is 30. This means that the red on the right (the new application) is a lot less problematic than the red on the left. (This visualization was created with PhpMetrics.)

fun-complexity

Global access in various Wikimedia codebases (lower is better). The rightmost is the old version of the fundraising application, and the one next to it is the new one. The new one has no global access whatsoever. LLOC stands for Logical Lines of Code. You can see the numbers in this public spreadsheet.

global-access-stats

Static method calls, often a big source of global state access, were omitted, since the tools used count many false positives (i.e. alternative constructors).

The differences between the projects can be made more apparent by visualizing them in another way. Here you have the number of lines per global access, represented on a logarithmic scale.

lloc-per-global

The following stats have been obtained using phploc, which counts namespace declarations and imports as LLOC. This means that for the new application some of the numbers are very slightly inflated.

  • Average class LLOC: 31 => 21
  • Average method LLOC: 4 => 3
  • Cyclomatic Complexity / LLOC : 0.39 => 0.10
  • Cyclomatic Complexity / Number of Methods: 2.67 => 1.32
  • Global functions: 58 => 0
  • Total LLOC: 5517 => 10187
  • Test LLOC: 979 => 5516
  • Production LLOC: 4538 => 4671
  • Classes: 105 => 366
  • Namespaces: 14 => 105

This is another visualization created with PhpMetrics that shows the dependencies between classes. Dependencies are static calls (including to the constructor), implementation, extension and type hinting. The application’s top level factory can be seen at the top right of the visualization.

fun-dependencies

November 23, 2016

As I pointed out in « Il faudra le construire sans eux », humanity has gone through several stages linked to writing. Each stage has changed our awareness of ourselves and the relationship we have with reality.

In the beginning

During prehistory, humanity is in its early childhood. Without writing, information is fluid, subject to interpretation. Transmission happens in a fuzzy way, interpreted and adapted. Truth does not exist; everything is open to interpretation.

The term “prehistory” is no accident. Humans are indeed not aware of being part of history, of an evolution. They live in the present moment.

This situation is comparable to that of a newborn who reacts by reflex to external stimuli, but without being conscious of its own existence or of an external reality.

Obeying, the first apprenticeship

With the invention of writing appears the notion of an immutable external reality: truth. What is written cannot be modified and, unlike memories or oral transmission, allows the creation of an immutable truth. Like any major technological innovation, writing is sometimes perceived as magical, superhuman.

In the same way, a child understands that adults hold knowledge that he does not possess. He therefore learns to obey authority.

For humanity, writing is an instrument of authority that is both geographical (writings travel) and temporal (writings are handed down). The written word is therefore perceived as the ultimate, indisputable truth. As with a child being told stories, fiction and imagination are incomprehensible concepts.

Reading, the second lesson

The printing press causes a genuine upheaval. For the first time, written works become accessible to the greatest number. Humans learn to read.

Reading makes one aware that every written text must be interpreted. Faced with different, sometimes contradictory sources, the reader understands that he must reconstruct the truth using his critical mind. The printing of the Bible, for example, had the birth of Protestantism as a direct consequence.

While the printing press made it possible to spread information, only a minority controlled what was printed. This understanding of the fundamental role of writing also led to copyright, a censorship tool created for the sole purpose of making sure that nothing that was printed challenged the system (and not to protect authors).

As the French Revolution showed us, copyright was not a sufficient safeguard. Technology prevailed; supreme authority is rejected in favor of debate and consensus. At least in appearance. The individual becomes a citizen. He no longer accepts arbitrary authority, only the authority he has chosen (or has the impression of having chosen). This is the advent of representative democracy and of its direct attribute, the press.

The child learns to disobey, to question authority and to form its own opinion. It is not yet autonomous, but often rebellious. This is adolescence.

Writing, the third lesson

With the arrival of the Internet and social networks, writing, being published and being read suddenly come within reach of any individual.

Whether it is to comment on a football match, share a photo or this very text, every human being now has the ability to be read by all of humanity.

The adolescent has grown up and shares his opinion with others. He enriches the collective and is himself enriched by these exchanges. He becomes the creator of his own consciousness, a consciousness he perceives as fitting into a network of other consciousnesses, sometimes quite different from his own: humanity.

Authority, even chosen authority, is no longer tolerable. With every human being conscious, he wants to be able to decide directly, to be master of his own destiny while still fitting into society. It is a mode of governance that remains to be invented, but whose first signs can be glimpsed in liquid democracy.

In my view, a global consciousness will therefore only be reached when every human is fully aware of his own free will and of his contribution to humanity. When he has the will to be master of his destiny, together with the maturity to fit into a society he co-constructs.

The imbalance of writing without reading

Obeying, reading, writing. Three stages indispensable to the creation of an autonomous and responsible consciousness.

The problem is that, for authority, it is preferable to keep humans at the stage of obedience, of belief in a single immutable and indisputable truth. Representative democracy is then only a facade: it suffices to keep voters obedient by not teaching them how to "read".

With the spread of social networks, humans gain direct access to writing without ever having learned to "read", to be critical. They are confronted with debates, with diverging opinions, with challenges to their beliefs for the first time in their lives.

As I explain in « Le coût de la conviction », this confrontation is often violent and leads to rejection. The people concerned will simply scream their opinion while plugging their ears.

The most sensible solution would be to patiently try to bring everyone up to the same level, to teach reading and critical thinking.

The dangerous appeal of authoritarianism

Unfortunately, we are inclined to fall back into our old habits. Those who write tend to consider themselves superior and do not want others to be able to learn to read.

Pointing the finger at Facebook and Google for the propagation of "fake news" goes exactly in that direction. Facebook and Google are enthroned as holders of the ultimate truth, as the media once were. By explicitly asking them to treat "fake news" and "real news" differently, we give them the power to control what is true and what is not; we ask them to control reality.

Is that desirable?

I think it would be a genuine catastrophe. On the contrary, we must constantly be confronted with contradictory information, with news that contradicts our beliefs, with reflections that dismantle our convictions.

We must realize that the media to which we have delegated this power over truth are deceitful and manipulative, even unconsciously. They jealously guard writing and have thus become obstacles to the evolution of humanity.

If we want to escape blind obedience, we must learn to criticize and to accept our mistakes. We must learn to read and to write!

Disobey, read, write…

 

Photo by David Fielke.

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE license.

(an update of an older post, now complete up to vSphere 6.5) Your Linux runs on a VMware VM, but on which ESXi version? You can see for yourself: run "dmidecode" and look at lines 10, 11 and 12.
ESX 2.5 - BIOS Release Date: 04/21/2004 - Address: 0xE8480 - Size: 97152 bytes
ESX 3.0 - BIOS Release Date: 04/17/2006 - Address: 0xE7C70 - Size: 99216 bytes
ESX 3.5 - BIOS Release Date: 01/30/2008 - Address: 0xE7910 - Size: 100080 bytes
ESX 4 - BIOS Release Date: 08/15/2008 - Address: 0xEA6C0 - Size: 88384 bytes
ESX 4U1 - BIOS Release Date: 09/22/2009 - Address: 0xEA550 - Size: 88752 bytes
ESX 4.1 - BIOS Release Date: 10/13/2009 - Address: 0xEA2E0 - Size: 89376 bytes
ESXi 5 - BIOS Release Date: 01/07/2011 - Address: 0xE72C0 - Size: 101696 bytes
ESXi 5.1 - BIOS Release Date: 06/22/2012 - Address: 0xEA0C0 - Size: 89920 bytes
ESXi 5.5 - BIOS Release Date: 07/30/2013 - Address: 0xEA050 - Size: 90032 bytes
ESXi 6 - BIOS Release Date: 09/30/2014 - Address: 0xE9A40 - Size: 91584 bytes
ESXi 6.5 - BIOS Release Date: 04/05/2016 - Address: 0xEA580 - Size: 88704 bytes
NB These DMI properties are set at boot time. Even if your VM gets live-migrated to a host running a different vSphere version, your VM will keep the values it got from the host it booted on. What you see is the vSphere version of the host your VM booted on. It is the VM power-on that matters, so a guest OS reboot will not regenerate the DMI properties. A guest OS shut down on the other hand will also power off the VM, and the next power-on will regenerate the DMI properties.
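
If you would rather not count output lines, a quick filter over the BIOS section gives the same information (a sketch, assuming a dmidecode version that labels these fields "Release Date", "Address" and "Runtime Size"):

sudo dmidecode --type bios | grep -E 'Release Date|Address|Size'
# Compare the reported release date, address and size against the table above.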

November 18, 2016

At a customer, we are using GitSlave as a way to build and release multiple releases at the same time. GitSlave is like git submodules, except that you always get the latest version of the submodules (no commits needed in the super repository).

For CI, we are using Jenkins. A Jenkins plugin was developed for it. Unfortunately, it was written in a way that would make the work of open-sourcing that module too big (IP was all over the place in the test suite). In addition to that, the integration with Jenkins was not as good as with the Git plugin.

The multi scm plugin is not an option, as the list of repos to clone is in a file and sometimes changes.

Fortunately, we will eventually drop that proprietary module and switch to the new Pipeline plugin. I would therefore like to share a Jenkinsfile script that reads the .gitslave configuration file and clones every repository. Please note that it takes some parameters: gitBaseUrl and superRepo (the code below also references a branch variable). It clones the repositories in multiple threads.

lock('checkout') {
    def checkouts = [:]
    def threads = 8

    stage('Super Repo') {
        checkout([$class: 'GitSCM',
                branches: [[name: branch]],
                doGenerateSubmoduleConfigurations: false,
                extensions: [[$class: 'CleanCheckout'], [$class: 'ScmName', name: 'super']],
                submoduleCfg: [],
                userRemoteConfigs: [[url: "${gitBaseUrl}/${superRepo}"]]])

        def repos = readFile('.gitslave')
        def reposLines = repos.readLines()
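        // Distribute the repositories listed in .gitslave round-robin over the
        // checkout threads; each thread gets its own list of checkouts to perform.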
        for (i = 0; i < threads; i++){
            checkouts["Thread ${i}"] = []
        }
        def idx = 0
        for (line in reposLines) {
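            // Each .gitslave line is expected to hold a quoted relative URL and a quoted path,
            // e.g. "../myrepo.git" "myrepo"; the substring calls below strip the surrounding
            // quotes (and the ../ prefix of the URL) before the URL is appended to gitBaseUrl.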
            def repoInfo = line.split(' ')
            def repoUrl = repoInfo[0]
            def repoPath = repoInfo[1]
            def curatedRepoUrl = repoUrl.substring(4, repoUrl.length()-1)
            def curatedRepoPath = repoPath.substring(1, repoPath.length()-1)
            echo("Thread ${idx%threads}")
            checkouts["Thread ${idx%threads}"] << [path: curatedRepoPath, url: "${gitBaseUrl}/${curatedRepoUrl}"]
            idx++
        }
    }
    stage('GitSlave Repos') {
        def doCheckouts = [:]
        for (i = 0; i < threads; i++){
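            // Copy the loop variable so each closure below captures its own thread index
            // rather than the final value of i after the loop ends.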
            def j = i
            doCheckouts["Thread ${j}"] = {
                for (co in checkouts["Thread ${j}"]) {
                    retry(3) {
                        checkout([$class: 'GitSCM',
                                branches: [[name: branch]],
                                doGenerateSubmoduleConfigurations: false,
                                extensions: [[$class: 'RelativeTargetDirectory', relativeTargetDir: co.path], [$class: 'CleanCheckout'], [$class: 'ScmName', name: co.path]],
                                submoduleCfg: [],
                                userRemoteConfigs: [[url: co.url]]])
                    }
                }
            }
        }
        parallel doCheckouts
    }
}

That script is under CC0.

Why do we run in a fixed number of threads and not everything in parallel? First, Blue Ocean can only display 100 threads, and running several operations across all the repos would end up with more than 100 threads. Second, it is probably not wise to run with that much parallelism anyway.

What is still to be done? We do not delete repositories that disappear from the .gitslave file.

Here is the result:

GitSlave in Jenkins Pipeline

PS: GitSlave is named GitSlave. I do not like the name -- and kudos to the Jenkins team for dropping the Master/Slave terminology as part of the Jenkins 2.0 release.

This article has been written by Julien Pivotto and is licensed under a Creative Commons Attribution 4.0 International License.
Drupal 8 turns one! Dries Fri, 11/18/2016 - 06:49

Tomorrow is the one year anniversary of Drupal 8. On this day last year we celebrated the release of Drupal 8 with over 200 parties around the world. It's a project we worked on for almost five years, bringing the work of more than 3,000 contributors together to make Drupal more flexible, innovative, scalable, and easier to use.

To celebrate tomorrow's release-versary, I wanted to look back at a few of the amazing Drupal 8 projects that have launched in the past year.

1. NBA.com

The NBA is one of the largest professional sports leagues in the United States and Canada. Millions of fans around the globe rely on the NBA's Drupal 8 website to livestream games, read stats and standings, and stay up to date on their favorite team. Drupal 8 will bring you courtside, no matter who you're rooting for.

2. Nasdaq

Nasdaq Corporate Solutions has selected Drupal 8 as the basis for its next generation Investor Relations Website Platform. IR websites are where public companies share their most sensitive and critical news and information with their shareholders, institutional investors, the media and analysts. With Drupal 8, Nasdaq Corporate Solutions will be providing companies with the most engaging, secure, and innovative IR websites to date.

3. Hubert Burda Media

For more than 100 years, Hubert Burda Media has been Germany's premier media company. Burda is using Drupal 8 to expand their traditional business of print publishing to reach more than 52 million readers online. Burda didn't stop there: the media company also open sourced Thunder, a distribution for professional publishers built on Drupal 8.

4. Jurassic World

Drupal 8 propels a wide variety of sites, some of Jurassic proportion. Following the release of the blockbuster film, Jurassic World built its digital park on Drupal 8. Jurassic World offers fans games, video, community forums, and even interactive profiles of all the epic dinosaurs found on Isla Nublar.

5. WWF

The World Wide Fund for Nature has been a leading conservation organization since its founding in 1961. WWF's mission is to protect our planet and Drupal 8 is on their team. WWF UK uses Drupal 8 to engage the community, enabling users to adopt, donate and join online. From pole to pole, Drupal 8 and WWF are making an impact.

6. YMCA Greater Twin Cities

The YMCA is one of the leading non-profit organizations for youth development, healthy living, and social responsibility. The YMCA serves more than 45 million people in 119 countries. The team at YMCA Greater Twin Cities turned to Drupal 8 to build OpenY, a platform that allows YMCA members to check in, set fitness goals, and book classes. They even hooked up Drupal to workout machines and wearables like Fitbit, which enables visitors to track their workouts from a Drupal 8 powered mobile app. The team at Greater Twin Cities also took advantage of Drupal 8's built-in multilingual capabilities so that other YMCAs around the world can participate. The YMCA has set a new personal record, and is a great example of what is possible with Drupal 8.

7. Jack Daniels

The one year anniversary of Drupal 8 is cause for celebration, so why not raise a glass? You might try Jack Daniels and their Drupal 8 website. Jack Daniels has been making whiskey for 150 years and you can get your fill with Drupal 8.

8. Al Jazeera Media Network

Al Jazeera is the largest news organization focused on the Middle East, and broadcasts news and current affairs 24 hours a day, 7 days a week. Al Jazeera required a platform that could unify several different content streams and support a complicated editorial workflow, allowing network wide collaboration and search. Drupal 8 allowed Al Jazeera to do that and then some. Content creators can now easily deliver critical news to their readers in real time.

9. Alabama.gov

From Boston to LA and even Australia, Drupal is supporting the digital needs of governments around the globe. Alabama is leading the way with Drupal 8. Alabama.gov puts its citizens first, and demonstrates how open source can change the way the public sector engages online.

10. Box

Box has been a leader in the technology industry since its founding in 2005. Box takes advantage of Drupal 8 and the improved features made available right out-of-the-box. Bad puns aside, companies like Box are using Drupal 8's new features and improved user interface to build the best digital experiences yet.

11. Habitat for Humanity

The historic nonprofit Habitat for Humanity doesn't just build houses for those in need; they build habitat.org on Drupal 8. Habitat for Humanity provides affordable housing for communities in over 70 countries around the world. You can discover their impact through the "Where we Build" interactive map, donate, and volunteer, all on their Drupal 8 site.

12. Obermeyer

Obermeyer and Drupal 8 will take you into new territory. The ski wear company offers seamless end to end commerce integration, providing both new and loyal customers a great shopping experience. Let Obermeyer's Drupal 8 integration with Drupal Commerce keep you warm because winter is coming ...

Happy 1st birthday Drupal 8!

I can't think of a better way to celebrate Drupal 8's one year anniversary than by sharing some incredible experiences that are being created with Drupal 8. Whether the project is big or small, features dinosaurs, or spreads awareness for an important cause, I'm proud to say that Drupal 8 is supporting an amazing array of projects. In my recent keynote at DrupalCon Dublin, I explained why the why of Drupal is so important. After one year of Drupal 8, it's clear how powerful our collective purpose, projects, and passions can be.

Thank you to everyone who has continued to contribute to Drupal 8! I can't wait for another year of exciting projects. Special thanks to Paul Johnson for crowdsourcing great examples that I wouldn't have known about otherwise.

Comments

Dimitar (not verified):

Great examples, great sites! @Dries, you're a Bulls fan? No Celtics gif? :)

November 18, 2016
Clint Randall (not verified):

Hey Dries! So we saw your highlights of sites built on D8 and the thought occurred to me to send you a link to the YMCA scheduling app we also built in D8. :)

YMCA of Greater Williamson County - counter.ymcagwc.org

Thanks!

Clint Randall

November 18, 2016
Margaret Plett (not verified):

Yay for one year of Drupal 8! Not sure if you've heard, but we're hosting a "celebration" tomorrow at http://www.drupal8day.com/. We're spending the day hosting 17 free, live, sessions to share information about Drupal 8 for all who are interested. Kind of like a virtual Drupal Camp. I heard from Steve Burge yesterday that we have more than 1,700 registrations already!

November 18, 2016
Dries:

I wasn't aware of "Drupal 8 Day". I like the idea of doing virtual DrupalCamps. I'll help promote it today.

November 19, 2016
Margaret Plett (not verified):

Thanks Dries!

November 19, 2016
Elijah Lynn (not verified):

That is great you recorded all the sessions. I look forward to watching some of them!

And I definitely look forward to more virtual camps!

November 21, 2016
Anoop John (not verified):

Happy birthday to Drupal 8 and congratulations to the best free software community in the world.

November 19, 2016
Ankur Singh (not verified):

Fantastic achievements so far. Congratulations Drupal 8 on turning one. We would like to see some more amazing involvement with Drupal 8.

Dries, you are doing simply great. Keep it up. Special thanks to its community.

November 21, 2016

November 17, 2016

I published the following diary on isc.sans.org: “Example of Getting Analysts & Researchers Away“.

It is well known that bad guys implement pieces of code to defeat security analysts and researchers. Modern malware has VM evasion techniques to detect as soon as possible whether it is executed in a sandbox environment. The same applies to web services like phishing pages or C&C control panels… [Read more]

[The post [SANS ISC Diary] Example of Getting Analysts & Researchers Away has been first published on /dev/random]

November 16, 2016

The normal way of version control when you come back from leave: hello dear colleague. This is the latest state of affairs, that feature is in that feature branch and that other feature is in that other feature branch. The differences are all neatly in git, and you can quickly see at a glance what has changed.

The ClearCase way of version control when you come back from leave: hello dear colleague. This is whatever. Your whatever is whatever. And whatever the config_spec is, is whatever. By the way, you will spend the whole day figuring out whatever it takes to get the code to compile again. Because if you whatever the whatever, files get eclipsed, or something, whatever. You also have to fill in a timestamp on that config_spec, because whatever the whatever whatevers. Yes, yes. It whatevers quite a bit, doesn't it. Whatever. ClearCase is great. IBM is doing a fine job. Whatever. I mean, burp. No. Whatever.

What is this shit? Was ClearCase made to torment programmers or something? This simply cannot be real. My god.

November 15, 2016

The NV-A today on De Afspraak (our very own dezinformatsiya):

The convention on the rights of the child versus the freedom of speech of the princess of Belgium. That, to counter the love for the royal family of a liberal, Herman De Croo.

Free debate, it really is a beautiful thing.

Isn't it?

November 14, 2016

I like cleaning git history, at least in feature branches. The goal is a set of logical commits without other cruft that can be cleanly merged into master. This is easily achieved with git rebase and force pushing to the feature branch on GitHub.

Today I had a little accident and found myself in this situation:

  • I accidentally ran git push origin -f instead of my usual git push origin -f branchname or git push origin -f HEAD
  • This meant that I not only overwrote the branch I wanted to update, but also, by accident, a feature branch (called httpRefactor in this case) to which a colleague had been force pushing various improvements that I did not have on my computer. My colleague is on the other side of the world, so I didn't want to wait until he woke up. (If you can talk to someone who has the commits, just have them re-force-push; that's quite a bit easier than this.) It looked something like this:
$ git push origin -f
  <here was the force push that succeeded as desired>
+ 92a817d...065bf68 httpRefactor -> httpRefactor (forced update)

Oops! So I wanted to reset the branch on GitHub to what it should be, and it would also be nice to update the local copy on my computer while we're at it. Note that the commit (or rather the abbreviated hash) on the left refers to the commit that was the latest version on GitHub, i.e. the one I did not have on my computer. A little strange if you're too accustomed to git diff and git log output showing hashes you have in your local repository.

Normally in a git repository, objects dangle around until git gc is run, which clears any commits except those reachable from a branch or tag. I figured the commit was probably still in the GitHub repo (either because it's dangling, or because there's a non-public reference to it, such as a remote branch); I just needed a way to attach a regular branch to it (either on GitHub, or by fetching it to my computer, attaching the branch there, and re-force-pushing). So step one was finding it on GitHub.

The first obstacle was that GitHub wouldn't recognize the abbreviated hash anymore: going to https://github.com/raintank/metrictank/commit/92a817d resulted in a 404 "commit not found".

Now, we use CircleCI, so I could find the full commit hash in the CI build log. Once I had it, I could see that https://github.com/raintank/metrictank/commit/92a817d2ba0b38d3f18b19457f5fe0a706c77370 showed it. An alternative way to open a view of the dangling commit is to use the reflog syntax. Git reflog is a pretty sweet tool that often comes in handy when you've made a bit too much of a mess in your local repository, but it also works on GitHub: if you navigate to https://github.com/raintank/metrictank/tree/httpRefactor@{1} you will be presented with the commit the branch head was at before the last change, i.e. the missing commit, 92a817d in my case.
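
As an aside, the local-repository equivalent of that trick looks something like this (a sketch; the branch name is illustrative):

git reflog httpRefactor              # list the previous positions of the branch head
git branch recover httpRefactor@{1}  # attach a new branch to the previous position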

Then came the problem of re-attaching a branch to it. Running git fetch --all on my laptop doesn't seem to fetch dangling objects, so I couldn't bring the object in.

Then I tried to create a tag for the non-existent object. I figured the tag may not reference an object in my repo, but it would on GitHub, so if I could only create the tag, manually if needed (it seems to be just a file containing a commit hash), and push it, I should be good. So:

~/g/s/g/r/metrictank ❯❯❯ git tag recover 92a817d2ba0b38d3f18b19457f5fe0a706c77370
fatal: cannot update ref 'refs/tags/recover': trying to write ref 'refs/tags/recover' with nonexistent object 92a817d2ba0b38d3f18b19457f5fe0a706c77370
~/g/s/g/r/metrictank ❯❯❯ echo 92a817d2ba0b38d3f18b19457f5fe0a706c77370 > .git/refs/tags/recover
~/g/s/g/r/metrictank ❯❯❯ git push origin --tags
error: refs/tags/recover does not point to a valid object!
Everything up-to-date

So this approach won’t work. I can create the tag, but not push it, even though the object exists on the remote.

So I was looking for a way to attach a tag or branch to the commit on GitHub, and then I found one. With the view of the needed commit open, click the branch dropdown, which you typically use to switch the view to another branch or tag. If you type a name that does not match any existing branch, it will let you create a branch with that name. So I created recover.

From then on, it’s easy.. on my computer I went into httpRefactor, backed my version up as httpRefactor-old (so I could diff against my colleague’s recent work), deleted httpRefactor, and set it to the same commit as what origin/recover is pointing to, pushed it out again, and removed the recover branch on GitHub:

~/g/s/g/r/metrictank ❯❯❯ git fetch --all
(...)
~/g/s/g/r/metrictank ❯❯❯ git checkout httpRefactor
~/g/s/g/r/metrictank ❯❯❯ git checkout -b httpRefactor-old
Switched to a new branch 'httpRefactor-old'
~/g/s/g/r/metrictank ❯❯❯ git branch -D httpRefactor
Deleted branch httpRefactor (was 065bf68).
~/g/s/g/r/metrictank ❯❯❯ git checkout recover
HEAD is now at 92a817d... include response text in error message
~/g/s/g/r/metrictank ❯❯❯ git checkout -b httpRefactor
Switched to a new branch 'httpRefactor'
~/g/s/g/r/metrictank ❯❯❯ git push -f origin httpRefactor
Total 0 (delta 0), reused 0 (delta 0)
To github.com:raintank/metrictank.git
 + 065bf68...92a817d httpRefactor -> httpRefactor (forced update)
~/g/s/g/r/metrictank ❯❯❯ git push origin :recover                                                                                                                                            ⏎
To github.com:raintank/metrictank.git
 - [deleted]         recover

And that was that… If you’re ever in this situation and you don’t have anyone who can do the force push again, this should help you out.
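
For completeness, once the recover branch exists on the remote, a shorter equivalent of those last steps (a sketch, assuming httpRefactor is not the currently checked-out branch) would be:

git fetch origin
git branch -f httpRefactor origin/recover   # point the local branch at the recovered commit
git push -f origin httpRefactor
git push origin :recover                    # delete the temporary recover branch on GitHub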

November 13, 2016

Dear friends,

I just came back from PCM16, the 9th annual edition of our European Pentaho Community Meetup. We had close to 200 registrations for this event, of which about 150 showed up, making this the biggest one so far. Even though veterans of the conference like myself really appreciate the warmth of previous locations like Barcelona and Cascais, I have to admit we got a great venue in Antwerp this year, with 2 large rooms, great catering, and top-notch audiovisual support in a nice part of the city center. (Free high-speed Antwerp city WiFi, yeah!)

Content-wise everything was more than OK, with back-to-back presentations on a large variety of subjects and, I'm happy to say, lots of Kettle-related stuff as well.

For an in-depth recap of the content, see here for the technical track and here for the other sessions.

Personally I was touched by the incredibly positive response from the audience after my presentation on the state of the PDI unit testing project. However, the big bomb was dropped when Hiromu Hota from Hitachi Research America started to present a new "WebSpoon" project. You could almost hear everyone think: "Oh no, not another attempt at making a new Kettle web interface". However, 2 minutes into the presentation everyone in the audience started to realize that it was the real, original Spoon with all the functionality it has, ported 1:1 to a web browser on your laptop, thin device, phone or tablet. Applause spontaneously erupted, Twitter exploded and people couldn't stop talking about it until I left the PCM crowd a day later. Now I'm obviously very happy we managed to keep the WebSpoon project a secret for the past few months, and it's impossible to thank Hota-san enough for traveling all the way to present at our event this weekend.

My heartfelt thanks also go out to Bart Maertens and the whole know.bi crew for making PCM16 a wonderful experience and an unforgettable event!

See you all at PCM17!

Matt

November 12, 2016

If you take the time to think about it, our life is an incredible equilibrium, a formidable symbiosis between billions of cells.

Each one is unique, perishable, replaceable, and yet all of them depend on the others to exist. The cells only exist thanks to the higher entity they form: a human being!

Let this fragile harmony between billions of individuals break and, at once, the entire body suffers. Let a few cells decide to stop collaborating, to enter a selfish, uncontrolled growth, and the whole person is struck by cancer.

When the imbalance becomes too great, when the symphony turns into wrong notes, the time to die has come.

Death means the end of cooperation. The cells still live, but they can no longer count on the others. They are doomed to disappear, to rot.

The body becomes rigid, cold. The cells that make it up die out little by little. Some do not want to accept it, but it is too late. Consciousness has gone, the body is doomed.

Agony turns into rest, the death rattle into silence, convulsions into stillness. The clammy, stifling heat has become a draft of air. Life has gone.

Wait!

If you take the time to think about it, humanity is an incredible equilibrium, a formidable symbiosis between billions of individuals.

Let a few people decide to stop collaborating, to hoard at the expense of others, and the entire planet is struck by cancer.

But unlike a body, humanity has not yet reached the stage of consciousness. We can still save it, treat the terrible cancers that are eating away at it.

If we can count on one another, if we can collaborate, consider every human being as an indispensable cell of one single body, then humanity will survive.

To survive, we probably have no other choice than to create, and to become, the consciousness of humanity.

 

To my grandma, who passed away on October 15, 2016. Photo by Sarahwynne.

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE license.

Last month, I was abroad with my trusty old camera, but without its SD cards. Since the old camera has an SD-only slot, which does not accept SDHC (let alone SDXC) cards, I cannot use it with cards larger than 2GiB. Today, such cards are no longer being manufactured. So, I found myself with a few options:

  1. Forget about the camera, just don't take any photos. Given the nature of the trip, I did not fancy this option.
  2. Go on eBay or some such, and find a second-hand 2GiB card.
  3. Find a local shop, and buy a new camera body.

While option 2 would have worked, the lack of certain features on my old camera had meant that I'd been wanting to buy a new camera body for a while, but it just hadn't happened yet; so I decided to go with option 3.

The Nikon D7200 is the latest model in the Nikon D7xxx series of cameras, a DX-format ("APS-C") camera that is still fairly advanced. Slightly cheaper than the D610, the cheapest full-frame Nikon camera (which I considered for a moment until I realized that two of my three lenses are DX-only lenses), it is packed with a similar set of features. It can shoot photos at shutter speeds of 1/8000th of a second (twice as fast as my old camera), and its sensor can be set to ISO speeds of up to 102400 (64 times as much as the old one) -- although for the two modes beyond 25600, the sensor is switched to black-and-white only, since the amount of color available in such lighting conditions is already very low.

A camera that is not only ten years more recent than the old one, but also targeted at a more advanced user profile, took some getting used to at first. For instance, it took a few days until I had tamed the camera's autofocus system, which is much more advanced than the old one's, so that it would focus on the things I wanted it to focus on rather than just whatever object happened to be closest.

The camera shoots photos at up to twice the resolution in both dimensions (which works out to four times as many megapixels as the old body), which is not something I'm unhappy about. It also turns out that a DX camera with a 24-megapixel sensor ends up taking photos with a digital resolution much higher than the optical resolution of my lenses, so I don't think more than 24 megapixels would be all that useful.

The builtin WiFi and NFC communication options are a nice touch, allowing me to use Nikon's app to take photos remotely, and see what's going through the lens while doing so. Additionally, the time-lapse functionality is something I've used already, and which I'm sure I'll be using again in the future.

The new camera is definitely a huge step forward from the old one, and while the price over there was a few hundred euros higher than it would have been here, I don't regret buying the new camera.

The result is nice, too:

DSC_1012

All in all, I'm definitely happy with it.

November 11, 2016

November 10, 2016

Content and Commerce: a big opportunity for Drupal Dries Thu, 11/10/2016 - 16:08

Last week Acquia announced a partnership with Magento. I wanted to use this opportunity to explain why I am excited about this. I also want to take a step back and share what I see as a big opportunity for Drupal, Acquia and commerce platforms.

State of the commerce market

First, it is important to understand one of the most important market trends in online commerce: consumers are demanding better experiences when they shop online. In particular, commerce teams are looking to leverage vastly greater levels of content throughout the customer's shopping journey - editorials, lookbooks, tutorials, product demonstration videos, mood videos, testimonials, etc.

At the same time, commerce platforms have not added many tools for rich content management. Instead they have been investing in capabilities needed to compete in the commerce market: order management systems (OMS), omnichannel shopping (point of sale, mobile, desktop, kiosk, etc), improved product information management (PIM) and other vital commerce capabilities. The limited investment in content management capabilities has left merchants looking for better tools to take control of the customer experience, something that Drupal addresses extremely well.

To overcome the limitations that today's commerce platforms have with building content-rich shopping experiences, organizations want to integrate their commerce platform with a content management system (CMS). Depending on the situation, the combined solution is architected for either system to be "the glass", i.e. the driver of the shopping experience.

Lush.com is a nice example of a content-rich shopping experience built with Drupal and Drupal Commerce.

Drupal's unique advantage for commerce

Drupal is unique in its ability to easily integrate into ambitious commerce architectures in precisely the manner the brand prefers. We are seeing this first hand at Acquia. We have helped many customers implement a "Content for Commerce" strategy where Acquia products and Drupal were integrated with an existing commerce platform. Those integrations spanned commerce platforms including IBM WebSphere Commerce, Demandware, Oracle/ATG, SAP/hybris, Magento and even custom transaction platforms. Check out Quicken (Magento), Puma (Demandware), Motorola (Broadleaf Commerce), Tesla (custom to order a car, and Shopify to order accessories) as great examples of Drupal working with commerce platforms.

We've seen a variety of approaches to "Content for Commerce" but one thing that is clear is that a best-of-breed approach is preferred. The more complex demands may end up with IBM WebSphere Commerce or SAP/hybris. Less demanding requirements may be solved with Commerce Tools, Elastic Path or Drupal Commerce, while Magento historically has fit in between.

Additionally, having to rip and replace an existing commerce platform is not something most organizations aspire to do. This is true for smaller organizations who can't afford to replace their commerce platform, but also for large organizations who can't afford the business risk to forklift a complex commerce backend. Remember that commerce platforms have complex integrations with ERP systems, point-of-sales systems, CRM systems, warehousing systems, payment systems, marketplaces, product information systems, etc. It's often easier to add a content management system than to replace everything they have in place.

This year's "State of Retailing Online" series asked retailers and brands to prioritize their initiatives for the year. Just 16% of respondents prioritized a commerce re-platform project while 41-59% prioritized investments to evolve the customer experience including content development, responsive design and personalization. In other words, organizations are 3 times more likely to invest in improving the shopping experience than in switching commerce platforms.

The market trends, customer use cases and survey data make me believe that (1) there are hundreds of thousands of existing commerce sites that would prefer to have a better shopping experience and (2) that many of those organizations prefer to keep their commerce backend untouched while swapping out the shopping experience.

Acquia's near-term commerce strategy

There is a really strong case to be made for a best-of-breed approach where you choose and integrate the best software from different vendors. Countless point solutions exist that are optimized for narrow use cases (e.g. mobile commerce, marketplaces and industry specific solutions) as well as solutions optimized for different technology stacks (e.g. Reaction Commerce is JavaScript-based, Magento is PHP-based, Drupal Commerce is Drupal-based).

A big part of Acquia's commerce strategy is to focus on integrating Drupal with multiple commerce platforms, and to offer personalization through Lift. The partnership with Magento is an important part of this strategy, and one that will drive adoption of both Drupal and Magento.

There are over 250,000 commerce sites built with Magento and many of these organizations will want a better shopping experience. Furthermore, given the consolidation seen in the commerce platform space, there are few, proven enterprise solutions left on the market. This has increased the market opportunity for Magento and Drupal. Drupal and Magento are a natural fit; we share the same technology stack (PHP, MySQL) and we are both open source (albeit using different licenses). Last but not least, the market is pushing us to partner; we've seen strong demand for Drupal-Magento integration.

We're keen to partner with other commerce platforms as well. In fact, Acquia has existing partnerships with SAP/hybris, Demandware, Elastic Path and Commerce Tools.

Conclusion

Global brands are seeing increased opportunity to sell direct to consumers and want to build content-rich shopping journeys, and merchants are looking for better tools to take control of the customer experience.

Most organizations prefer best of breed solutions. There are hundreds of thousands of existing commerce sites that would like to have more differentiation enabled by a stronger shopping experience, yet leave their commerce capabilities relatively untouched.

Drupal is a great fit. Its power and flexibility allow it to be molded to virtually any systems architecture, while vastly improving the content experience of both authors and customers along the shopping journey. I believe commerce is evolving to be the next massive use case for Drupal and I'm excited to partner with different commerce platforms.

Special thanks to Tom Erickson and Kelly O'Neill for their contributions to this blog post.

Comments

Richard Jones (not verified):

Thanks for taking the time to post this Dries. The Magento announcement last week caused a few waves in the Drupal community and I think it's very important for you to put this into context.

Having been heavily involved in some of the Drupal Commerce examples you mention in this post I have a strong belief that Drupal Commerce with Drupal as a single destination content and commerce platform is a very powerful and widely underrated solution. Since iKOS joined Inviqa I've had the benefit of working with some of the best Magento engineers around and this has allowed me to experience first-hand where the shortfalls in content management are in these mainstream commerce platforms. I have also seen enough to know just how far we can push Drupal Commerce.

I like the phrasing you use - determining which platform is the "glass" when combining two systems. It's an important decision and often not an obvious one. Drupal 8 now being a capable decoupled system makes this a little easier as we can benefit from the full strengths of Drupal whilst maintaining the well respected back office capabilities of Magento.

I have huge respect for what Ryan, Bojan and the team have done with Commerce 2. It's so important for people to realise that the split of Commerceguys from Platform.sh did not and does not mean the death of Drupal Commerce - slow or otherwise. It's understandable that this massive effort in rewriting commerce and addressing many of the things we learnt from D7 has taken a long time with a small core team. It's now time for those of us maintaining the supporting modules to step up and complete the picture.

As a long term partner of Acquia, Commerceguys and Magento we have interesting times ahead. Our approach is always to look at the business value our clients get from their systems rather than to dictate the solution that meets our interests. As you rightly point out, replacing a commerce platform is not always possible, so we need to be flexible in how we operate. I expect we will have the opportunity to bring in Drupal Commerce solutions in the future as well as continuing our efforts to bring Drupal and Magento closer together.

More case studies and success stories will benefit all of us against the strong competition in the market.

November 10, 2016
Ryan Szrama (not verified):

Thanks, Rich - we really appreciate your support. : )

November 11, 2016
Ryan Szrama (not verified):

Thanks for sharing, Dries! My perspective on the partnership is colored by my long history of promoting native eCommerce solutions within the Drupal community (and with Commerce 2.x we're pushing hard to be more competitive with dedicated commerce applications out of the box), but I think your strategy w/ respect to winning over existing Magento users is clear / commendable and still points to the opportunity for Drupal Commerce.

I would've shared this assessment even before your partnership was announced for the key reason you mention in the post: re-platforming can be a tough pill to swallow, and even if you are committed to it (that 16% still represents thousands of merchants), the fewer integrations you have to rewrite between your eCommerce application and the various back-end services you depend on for order management, fulfillment, reporting, etc. the better. Every dollar you save on what is purely a cost (i.e. rewriting integrations, absent scalability or other performance issues) you can invest in future revenue by creating a more appealing, engaging experience on the front end.

That said, maintaining separate applications side-by-side will itself increase your technology costs (e.g. hosting, software updates, developer training, etc.), so I believe in a future where Drupal Commerce is sufficient for more use cases and easily scales to meet the demands of the largest merchants. It's great for digital commerce, for event sites, for selling user generated content, etc. (essentially, for verticals where the content *is* the product) - and it's a decent foundation for building robust D2C brand sites (e.g. lush.co.uk or the more recent obermeyer.com) and highly custom business requirements. However, we know that we need to be more competitive both out of the box (esp. with regards to merchandising) and as an ecosystem (esp. with regards to order management / fulfillment), so we've prioritized addressing both of those during Commerce 2.x development.

While making Drupal Commerce an easy replacement for a third party eCommerce application is a long term goal, the short term opportunity is for merchants to use Drupal Commerce as a front-end for third party applications just as you are suggesting. Meaningful integrations will require structured representation of commerce data within Drupal, and there's no reason they shouldn't take advantage of the relevant Commerce modules and libraries, which were designed to be used in part and not just as a whole. For sites requiring currency / address localization, I'd almost say it's a requirement, and the libraries we built to handle this have been humming along in production environments for some time now. : )

November 11, 2016
Erlend Strømsvik (not verified):

You mention costs and software updates as something which argues against "separate applications side-by-side". I would argue those are absolutely minimal. To host a commerce solution you already need a huge stack of software/applications. One more does not add much to the total cost. That's even more true if one adds the costs of development and not only the continued maintenance costs.

The cost of developer training depends solely on the longevity of the skills learned. If skills learned for one library/framework are viable for the next 5, 10 or 15 years, then that has to be factored into the equation. From personal experience I can say that the cost of having to relearn a framework is huge. There is a major undertaking both upfront and for several years afterwards when new bugs/problems surface. I've been following UberCart for a long time. We ventured into Drupal Commerce and felt a bit blindsided by that experience. Now Commerce2 is fronted as a "total rewrite", which really makes one wonder what has changed in the real world - tax rules (only tax rates have changed), discount calculations, price handling, currency handling, payment handling, order registrations (backend), order handling (backend), and more have not changed for the last 15-20 years. The presentation of information has changed quite a lot of course, and so has user interaction, but the backend side of commerce has not changed.

Our fear, based on what we experienced going from UberCart to Commerce, is that going from Commerce to Commerce2 the backend again changes (a lot). Those of us who develop and maintain large commerce sites with a myriad of backend integrations (logistics, accounting, third party pricing systems, gift cards, etc) are faced with a huge undertaking the day we have/want to upgrade a commerce site (UberCart -> Commerce -> Commerce2). The "upgrade" project is so large that we end up facing the same project costs as when we initially set up the commerce site. The result is that the customer might conclude that the best thing is to put out a request for proposals for a new site. "A new site" is a pretty accurate description of every major Drupal upgrade, and especially if the site is a commerce site with UberCart/Commerce/Commerce2.

We have just begun nibbling at D8. For us the most important thing when considering how to get out of the current Pressflow+UberCart and Drupal+Commerce stack will be how decoupled the commerce backend is from Drupal. If the next iteration of Drupal again requires a huge amount of work on the commerce backend, then that's not a good option at all. In my opinion, an upgrade of the presentation of information (Drupal) should not touch the way prices are calculated or when tax is applied, nor how orders are stored or handled, just to give an example. Except for a few, most of our customers never touch an order in Drupal+Commerce. Orders and payments are handled and processed in their own ERP system. Some of those systems are 10+ years old. Some are even nearing 20 years. They still work and there is absolutely no need to upgrade or change system.

Drupal + Magento is something we looked at earlier; at the time we were in the midst of D7 + Commerce development. It will however be quite interesting to compare a D8 + Commerce2 stack and a D8 + Magento stack next year. It will come down to how reusable current Commerce code/modules are with Commerce2, and what the planned life cycles of Commerce2 and the current version of Magento will be.

November 17, 2016
Justin (not verified):

Great post Dries.

I'm flattered to see our innovative Drupal + Magento work with Quicken mentioned by you. Quicken is one of the earliest and largest Magento 2 stores launched in the world. We have built headless commerce with Drupal many times now, most recently with Quicken.com (headless Magento 2) and BenefitCosmetics.com (headless Hybris).

We are also thrilled to be partnering with you and Acquia on building this Drupal Magento integration by porting the work we have already done for many enterprise clients and bringing it into the Acquia stack. Exciting times!

TAG has been integrating best-of-breed commerce solutions with Drupal for many years. We recently shared our analysis of where we think the content and commerce space is going: https://medium.com/third-grove/the-death-of-drupal-commerce-as-an-ecomm…

We recently contributed the connectors we developed for Drupal and Hybris back to the community (https://www.drupal.org/project/hybris). These connectors -- like our Quicken work connecting Drupal and Magento -- are used today in production at enterprise-scale to power awesome shopping experiences with Drupal and headless Hybris. As our sponsorship of one of Drupal's six core maintainers (catch) shows, giving back to Drupal is very important to Third & Grove. And it's something we will continue to do as we continue our journey in the content and commerce space.

November 12, 2016
Dries:

In my mind, the opportunity for Drupal Commerce is (1) to be an abstraction/integration layer for a variety of different commerce-related services, (2) to provide merchant-optimized tools for creating content-rich shopping experiences and (3) to help deliver those shopping experiences to different channels (i.e. be the "glass").

Organizations use a variety of different commerce tools and often have different teams. Drupal Commerce could bring much of that together and provide a unified user interface and merchant-optimized workflow to make building content-rich shopping experiences better, faster, cheaper.

So in a way, Drupal Commerce would be a layer that sits above the Hybris-Drupal integration module. The Hybris-Drupal integration module (thanks Third & Grove) allows you to import products from Hybris into Drupal as standard Drupal entities and keep them synced, while the Drupal Commerce modules allow you to use those product entities to build shopping experiences. A Magento-Drupal module could be used to import products from Magento, and for the merchants it would be invisible and irrelevant whether these products come from Hybris or Magento. It doesn't have to be limited to product information -- Drupal Commerce could also help unify analytics, promotions, shopping carts, etc.

Thoughts?

November 14, 2016
Justin (not verified):

Thanks! Third & Grove also wrote modules for Drupal-Magento that bring in product information into Drupal, along with a variety of other features (hopefully releasing soon...).

In our recent projects with headless Hybris and headless Magento we haven't found a need for the Drupal Commerce components. We have found that Drupal itself is the most compelling unification layer. Our typical approach has been to keep in the commerce platform all the things commerce engines are good at (pricing, promotions, payments, etc) and to focus Drupal on what it's good at (content and engaging digital experiences). And herein lies the tension (we feel) with trying to use DC as the unification layer - DC is a commerce engine.

1. Abstraction layer - Because of Drupal 7's (and even more Drupal 8's) robust Entity (and other) APIs we have not found a need to leverage what the Drupal Commerce (DC) modules provide. Components like products and promotions become regular Drupal entities, and checkout flows tend to be custom so they are just Drupal page building. It's possible a Drupal Commerce component focused exclusively on checkout might be useful, at least as a starting point that can be customized.

2. Merchant optimized tools - Many of these tools are content driven, so they are interacting with the website through the lens of content, not commerce platform components. Because the work of commerce is pushed to the commerce platform, Drupal Commerce's components are less focused on the glass, and thus less at play here. A good example is content personalization, which includes products, because once they are in Drupal they are regular Drupal nodes. One potential area where DC may be able to play an important role is price personalization.

3. Different channels - Say for example you have a headless Magento store that is powering commerce experiences on a website (Drupal) and in a mobile app (iOS). In our typical approach other channels would be going to Magento, not Drupal, in order to calculate discounts, get prices, and process payment. If the app needed some of the marketing content around products (that typically would be stored in Drupal) they would go right to Drupal for that (which has great support for this use case). In my opinion going through Drupal to talk to the commerce engine wouldn't be ideal from a performance, complexity, or level of implementation effort perspective.

Granted, most of our clients are enterprise-scale, but in our projects we have had more success with a strict delineation between content information and commerce data. Our clients tend to organize their teams in this way as well - there tends to be a content team and a separate ecommerce/fulfillment team, so each only has to be trained on, and log into, one system.

November 14, 2016
Anoop John (not verified):

It would be good to see some strong case studies of Drupal + Magento as marketing collateral for pushing the integration into the market. It would also be good to see a bit of analysis on what the best fits are for the different scenarios ecommerce merchants face. Maybe a decision tree for different business use cases, aimed at business decision makers in the ecommerce space.

November 14, 2016
Justin (not verified):

I agree! So much so that this is in the works ;) Third & Grove will be releasing information along these lines in three weeks. I will update here when we do.

November 14, 2016
Dries:

Robert Douglass from Platform.sh (formerly Commerce Guys) didn't like the integration strategy that I shared in this blog post. He strongly believes Drupal benefits from a native commerce solution like Drupal Commerce. In response, he wrote 8 blog posts to make the case for Drupal Commerce. I read through the blog posts and extracted the 4 main reasons why he believes the world should care more about Drupal Commerce. I'm including them below along with my thoughts.

Reason 1: the world needs a true Open Source ecommerce solution, not a solution that is proprietary (e.g. Hybris or Demandware) or has a dual-licensing model (e.g. Magento).

While Open Source idealism is important to many in the Drupal community, it is not a primary driver to convince customers, digital agencies, system integrators or industry analysts about Drupal Commerce. If we want more organizations to use Drupal Commerce and more organizations to invest in Drupal Commerce, we should focus on how Drupal Commerce is (or can be) a significantly better solution for their needs.

Reason 2: user management, authentication, content modeling, creation, editing and rendering, rules engines, rich asset management, and multichannel publishing are all problems Drupal has already solved. This makes building a Drupal-native commerce solution much easier than building a commerce platform from scratch.

The effort required to build a commerce platform doesn't really concern customers or organizations who are building commerce websites with Drupal. It would have been much more useful to look at this from the customer's point of view; be it their merchants or the IT organization implementing and managing the commerce website.

A more convincing approach could have been to weigh the advantages of a monolithic approach (e.g. a consistent and unified user experience for merchants) against the advantages of a decoupled approach (e.g. the shopping experience and commerce backend usually have different upgrade and innovation cycles, different teams and different hosting requirements).

While a monolithic architecture like Drupal Commerce avoids duplication of functionality and can more easily leverage Drupal's strong content management capabilities, it might not be preferred. Most organizations will favor a decoupled commerce architecture over a unified merchant experience. The often complex demands of a commerce backend favor a decoupled architecture.

Now, one day this might be achieved with two Drupal instances; one Drupal instance using Drupal Commerce that acts as the commerce backend and a second Drupal instance that provides the shopping experience, connected through web services APIs. In such a setup, having two Drupal systems talk to one another might offer real practical advantages (e.g. same technology stack, easier integration, etc). This should be something to strive for.

Reason 3: Drupal will never become the premier tool for making specialized, vertically targeted distributions without Drupal Commerce.

I don't understand this argument. First, it assumes that a commercial SaaS platform is the only way to make Drupal distributions successful. Second, there exist many solutions that can be used to add billing capabilities to a SaaS platform. The billing system most likely doesn't require deep integration with the Drupal distribution.

Reason 4: Drupal needs to have a native commerce solution in order to be a good multi-site platform for organizations like J&J.

I don't understand why a monolithic approach is a requirement for Drupal to be a good multi-site platform. Universal Music, for example, has successfully standardized on a Drupal-Magento stack for hundreds of their artist websites. Almost all of Acquia's customers that manage hundreds of Drupal sites use Drupal alongside adjacent technologies, including commerce platforms, marketing automation platforms, enterprise search, customer relationship management software, and more. Why would commerce be different?

Conclusion

If we want more organizations to adopt Drupal Commerce or contribute meaningfully to Drupal Commerce's development, we have to provide them with a very compelling reason to do so -- one that gives them an unfair advantage compared to alternative technologies. I wish Robert had helped all of us in the Drupal community understand Drupal Commerce's unique advantage in the market; one that catches the attention of customers, digital agencies, system integrators and industry analysts. I hope we all keep working towards that, or at least get better at articulating it.

November 30, 2016
Django Beatty (not verified):

Not sure I'd agree with the positioning of Drupal Commerce at the low end, particularly given that we used it to re-platform from a huge ATG install across Royal Mail Group (using the separate instances integrated via services approach as you mention in your reply to Robert).

The other key use case for Drupal Commerce was Eurostar, which required deep integration with content along with personalisation and segmentation which we couldn't find in any non-bespoke system. This was similar to an airline booking system (if more complex) and the checkout flow needed to adjust depending on various contexts (eg upsell tickets to a festival in your destination at the time you are traveling, part pay with various types of points, fewer steps when bookings are in high demand, A/B checkout flows).

Both of these examples are going back a very long way and doubtless the product landscape has improved (although I'd imagine finding Hybris devs is still a challenge and ATG still clunky and prohibitively expensive even for enterprise), but I'd hope that Drupal Commerce as a solution hasn't gone backwards with the advent of Drupal 8. I would position Commerce as for where you precisely don't want a separate 'web shop' (for which there are many solutions such as Magento etc.), but instead want a seamless context-driven experience between content and commerce. My expectation has always been that Drupal 8 and Commerce 2 would cement that and allow easier 'omni-channel' use via native services.

November 30, 2016


Last week Acquia announced a partnership with Magento. I wanted to use this opportunity to explain why I am excited about this. I also want to take a step back and share what I see as a big opportunity for Drupal, Acquia and commerce platforms.

State of the commerce market

First, it is important to understand one of the most important market trends in online commerce: consumers are demanding better experiences when they shop online. In particular, commerce teams are looking to leverage vastly greater levels of content throughout the customer's shopping journey - editorials, lookbooks, tutorials, product demonstration videos, mood videos, testimonials, etc.

At the same time, commerce platforms have not added many tools for rich content management. Instead they have been investing in capabilities needed to compete in the commerce market: order management systems (OMS), omnichannel shopping (point of sale, mobile, desktop, kiosk, etc.), improved product information management (PIM) and other vital commerce capabilities. The limited investment in content management capabilities has left merchants looking for better tools to take control of the customer experience, something that Drupal addresses extremely well.

To overcome the limitations that today's commerce platforms have with building content-rich shopping experiences, organizations want to integrate their commerce platform with a content management system (CMS). Depending on the situation, the combined solution is architected for either system to be "the glass", i.e. the driver of the shopping experience.

Lush.com is a nice example of a content-rich shopping experience built with Drupal and Drupal Commerce.

Drupal's unique advantage for commerce

Drupal is unique in its ability to easily integrate into ambitious commerce architectures in precisely the manner the brand prefers. We are seeing this first hand at Acquia. We have helped many customers implement a "Content for Commerce" strategy where Acquia products and Drupal were integrated with an existing commerce platform. Those integrations spanned commerce platforms including IBM WebSphere Commerce, Demandware, Oracle/ATG, SAP/hybris, Magento and even custom transaction platforms. Check out Quicken (Magento), Puma (Demandware), Motorola (Broadleaf Commerce), Tesla (custom to order a car, and Shopify to order accessories) as great examples of Drupal working with commerce platforms.

We've seen a variety of approaches to "Content for Commerce" but one thing that is clear is that a best-of-breed approach is preferred. The more complex demands may end up with IBM WebSphere Commerce or SAP/hybris. Less demanding requirements may be solved with Commerce Tools, Elastic Path or Drupal Commerce, while Magento historically has fit in between.

Additionally, having to rip and replace an existing commerce platform is not something most organizations aspire to do. This is true for smaller organizations who can't afford to replace their commerce platform, but also for large organizations who can't afford the business risk to forklift a complex commerce backend. Remember that commerce platforms have complex integrations with ERP systems, point-of-sales systems, CRM systems, warehousing systems, payment systems, marketplaces, product information systems, etc. It's often easier to add a content management system than to replace everything they have in place.

This year's "State of Retailing Online" series asked retailers and brands to prioritize their initiatives for the year. Just 16% of respondents prioritized a commerce re-platform project while 41-59% prioritized investments to evolve the customer experience including content development, responsive design and personalization. In other words, organizations are 3 times more likely to invest in improving the shopping experience than in switching commerce platforms.

The market trends, customer use cases and survey data make me believe that (1) there are hundreds of thousands of existing commerce sites that would prefer to have a better shopping experience and (2) that many of those organizations prefer to keep their commerce backend untouched while swapping out the shopping experience.

Acquia's near-term commerce strategy

There is a really strong case to be made for a best-of-breed approach where you choose and integrate the best software from different vendors. Countless point solutions exist that are optimized for narrow use cases (e.g. mobile commerce, marketplaces and industry specific solutions) as well as solutions optimized for different technology stacks (e.g. Reaction Commerce is JavaScript-based, Magento is PHP-based, Drupal Commerce is Drupal-based).

A big part of Acquia's commerce strategy is to focus on integrating Drupal with multiple commerce platforms, and to offer personalization through Lift. The partnership with Magento is an important part of this strategy, and one that will drive adoption of both Drupal and Magento.

There are over 250,000 commerce sites built with Magento and many of these organizations will want a better shopping experience. Furthermore, given the consolidation seen in the commerce platform space, there are few, proven enterprise solutions left on the market. This has increased the market opportunity for Magento and Drupal. Drupal and Magento are a natural fit; we share the same technology stack (PHP, MySQL) and we are both open source (albeit using different licenses). Last but not least, the market is pushing us to partner; we've seen strong demand for Drupal-Magento integration.

We're keen to partner with other commerce platforms as well. In fact, Acquia has existing partnerships with SAP/hybris, Demandware, Elastic Path and Commerce Tools.

Conclusion

Global brands are seeing increased opportunity to sell direct to consumers and want to build content-rich shopping journeys, and merchants are looking for better tools to take control of the customer experience.

Most organizations prefer best of breed solutions. There are hundreds of thousands of existing commerce sites that would like to have more differentiation enabled by a stronger shopping experience, yet leave their commerce capabilities relatively untouched.

Drupal is a great fit. Its power and flexibility allow it to be molded to virtually any systems architecture, while vastly improving the content experience of both authors and customers along the shopping journey. I believe commerce is evolving to be the next massive use case for Drupal and I'm excited to partner with different commerce platforms.

Special thanks to Tom Erickson and Kelly O'Neill for their contributions to this blog post.

We were satisfied with having learned to read and to obey. After Brexit, Trump's victory comes as a painful reminder that we must learn to think together. To learn to doubt and to build. To write, to think and to question our comfortable little routine.

The power of writing

As I explained in « Il faudra le construire sans eux », I believe humanity moves to the rhythm of its information technologies.

Writing gave us the concepts of history, of truth, of transmission. With the appearance of writing also came centralized authority, power. Humans learned to obey in order to cooperate.

But reading remained the privilege of the powerful.

The printing press completely upended that structure by teaching humanity to read. From then on everyone, or almost everyone, could read, learn and discover new ideas.

Power changed as a result. It took the misleading name of democracy but remained concentrated in the hands of a minority: those who could write. Write the laws. Write the truth as it was supposed to be, through the press and then the media.

With the Internet, every human suddenly found themselves able to write. Every human now had the power to shape the truth, their truth.

But, like the kings and emperors before them, presidents and other leaders were unable to perceive this change, a change that destabilized them and could only dethrone them.

Bloated with impunity and arrogance, the political class wallowed in a reality it shaped with the complicity of the media and the press. Wasn't reality under their control?

They shuffled money and the world economy, they shaped a reality they believed to be universal, and they were so disconnected from other realities that they could no longer imagine those realities even existed.

It is therefore no surprise that they did not see the rest of humanity learning to write and to do without them.

A revolt? A revolution!

Whenever attempts at change emerged, those in power smothered them or, confident, let them die so they could serve as examples. The technique was simple: convince themselves and everyone else that change was impossible, doomed to fail. That the answers already given were beyond discussion.

In the United States, for example, a real progressive movement tried to carry Bernie Sanders to the presidency. The man inspired passion and, like Michael Moore, I am convinced he would have won against any opponent. He embodied the hope of a generation that had learned to read and now wanted to start writing.

But Hillary Clinton managed to convince all the supporters that such a change was impossible, that writing was nice, but that reading was enough for the majority.

The young, the intellectuals, the thinkers criticized Hillary Clinton but bowed to the reality that had been so thoroughly instilled in them. Dreaming is lovely, but the polls and the media are always right. Politicians are not perfect, but they know what is good for us.

At the time, I said that the Democrats would bitterly regret having destroyed Sanders, who was their best chance. Then I bowed to that reality myself. I was convinced that Brexit would not pass, that Trump would be defeated. I was a perfect little representative of that educated, indignant fringe that resigns itself.

But none of these considerations reached the parts of the population that had always been forgotten, despoiled. The masses who suddenly learned to write before even knowing how to read.

In a masterful lesson, those masses refused to bow to the injunctions of a reality they did not understand anyway. Impervious to reason and to logic, they demonstrate that there is no need to submit to an imposed reality.

The resignation of the millennials

If the millennials and the young were the only ones voting, Brexit would not have passed and Bernie Sanders would probably be president. Unfortunately, the millennials have also learned to "be realistic", not to indulge in their dreams if those dreams are bigger than a nice car and a detached house.

Deep down, the millennials would have liked to change the world, but without taking the risk of disturbing their snug little comfort.

These millennials, of whom I am one, are guilty of having forgotten the masses who do not have the good fortune of being educated, who have nothing left to lose. The progressive and ecological revolts against the powers that be are, fundamentally, egocentric and selfish. We fight for our ideal without taking other realities into account.

Locked into our confirmation bubbles by the algorithms of social networks, we forgot the part of humanity that could not read, could not write.

A part of humanity that, ironically, found itself a champion who personifies everything we should be uniting against: unscrupulous profit, pathological egotism, hatred, rejection of difference and disrespect for others.

Rebuilding our dreams

If we are to take only one lesson from Trump's election, it is that the old world is dead. That we cannot trust politicians, the media, the polls. It is essential to switch off our television for good, to refuse to consult media financed by the state and by advertising, to learn to stop selling off our emotions.

It is crucial to stop elevating "journalism" to the rank of a rebellious tool in the service of democracy. That was true in the 18th century, and the media, like the politicians, try to keep us in that outdated nostalgia while pushing us to consume. The rare exceptions are easy to recognize: they have neither advertising nor public subsidies.

Let's face it: politicians and journalists know no more than we do. On the contrary, they are smug, puffed up with their own ignorance! Yet they are convinced that they are right, that they control reality. We can no longer even give them the benefit of the doubt, and we must accept that these roles have become essentially harmful.

But neither can we keep wallowing in bubbles that reinforce our convictions. We need to listen to others, to confront them.

Will we have the cowardice to bequeath the world as it is today to our children because we limit ourselves to what is "realistic"? Will we have the intellectual honesty to tell them that we preferred to spend our energy paying for a house and a fuel card rather than making the world a tiny bit better?

It is urgent that we learn again, together, to read, to write and, above all, to dream and to disobey, even at the risk of losing everything. Isn't the world shaped by those who have nothing left to lose?

 

Photo by Joel Penner.

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE license.

November 09, 2016

A question that has come up a couple of times already is how Autoptimize and its cache work. So let's do some copy/pasting of what I replied earlier on the wordpress.org support forum:

  1. AO intercepts the HTML created by WordPress for a request (using the output buffer)
  2. all references to JS (and CSS) are extracted from the HTML
  3. all original references to JS (and CSS) are removed from the HTML (the code in the original files is left as is, AO never changes those files)
  4. all JS (and CSS) is aggregated (JS in one string, CSS in as many strings as there were media types)
  5. the md5-hash (mathematical/ cryptographic function that generates a quasi-unique string based on another string) of the aggregated JS (and CSS) is calculated
  6. using the md5 AO checks if a cached file with that md5 exists and if so continues to step 8
  7. if no cached file is found, the JS (and CSS) is minified and cached in a new file, with the md5 as part of the filename
  8. the links to the autoptimized JS (and CSS) file in cache are injected in the HTML
  9. the HTML is minified (but not cached in Autoptimize)
  10. the HTML is returned to WordPress (where it can be cached by a page cache and sent to the visitor)

This is especially interesting if you want to understand why the cache size can “explode”; if in step 4 the code is even a bit different from previous requests, the md5-hash in step 5 will be different so the file will not be found in cache (step 6) and the code will be re-minified (which is relatively expensive) and cached (step 7).
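To make steps 5 to 7 a bit more tangible, here is a minimal PHP sketch of an md5-keyed minify cache. This is not Autoptimize's actual code; the function names, the cache path and the toy minifier below are made up purely for illustration.

<?php
// Sketch of an md5-keyed minify cache (illustrative only, not AO's real code).
function get_minified_file($aggregated_code, $cache_dir = '/tmp/ao-cache/') {
  if (!is_dir($cache_dir)) {
    mkdir($cache_dir, 0755, true);
  }

  $hash = md5($aggregated_code);             // step 5: quasi-unique key for this exact code
  $file = $cache_dir . $hash . '.js';

  if (is_file($file)) {                      // step 6: cache hit, reuse the existing file
    return $file;
  }

  $minified = toy_minify($aggregated_code);  // step 7: expensive minification only on a miss
  file_put_contents($file, $minified);
  return $file;                              // step 8 would then link to this file in the HTML
}

// Toy stand-in for a real minifier library.
function toy_minify($code) {
  return preg_replace('/\s+/', ' ', $code);
}

The important property is that any change in the aggregated code, however small, produces a different hash and therefore a new cache file, which is exactly how the cache size can “explode”.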

And that, my friends, is how Autoptimize works.

I’m happy to announce the first release candidate for Maps 4.0. Maps is a MediaWiki extension to work with and visualize geographical information. Maps 4.0 is the first major release of the extension since January 2014, and it brings a ton of “new” functionality.

First off, this blog post is about a release candidate, meant to gather feedback and not suitable for usage in production. The 4.0 release itself will be made one week from now if no issues are found.

Almost all features from the Semantic Maps extension got merged into Maps, with the notable omission of the form input, which now resides in Yaron Koren's Page Forms extension. I realized that spreading the functionality over both Maps and Semantic Maps was hindering development and making things more difficult for users than needed. Hence Semantic Maps is now discontinued, with Maps containing the coordinate datatype, the map result formats for each mapping service, the KML export format and distance query support. All these features will automatically enable themselves when you have Semantic MediaWiki installed, and can be explicitly turned off with a new egMapsDisableSmwIntegration setting.

The other big change is that, after 7 years, the default mapping service was changed from Google Maps to Leaflet. The reason for this change is that Google recently started requiring an API key to be obtained and specified for its maps to work on new websites. This left some users confused when they first installed the Maps extension and got a non-functioning map, even though the API key is mentioned in the installation instructions. Google Maps is of course still supported, and you can make it the default again on your wiki via the egMapsDefaultService setting.

Another noteworthy change is the addition of the egMapsDisableExtension setting, which allows for disabling the extension via configuration, even when it is installed. This has often been requested by those running wiki farms.
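To make this concrete, here is a minimal sketch of what these settings could look like in LocalSettings.php, assuming Maps is already installed and loaded. The setting names are the ones mentioned above; the 'googlemaps3' service identifier and the example values are assumptions on my part, so check the Maps documentation for the exact identifiers for your version.

<?php
// LocalSettings.php (sketch); place these after Maps has been loaded.

// Make Google Maps the default service again (Leaflet is the new default).
// 'googlemaps3' is the assumed identifier for the Google Maps service.
$egMapsDefaultService = 'googlemaps3';

// Explicitly turn off the Semantic MediaWiki integration that Maps
// enables automatically when SMW is installed.
$egMapsDisableSmwIntegration = true;

// Disable the extension entirely via configuration (handy on wiki farms),
// even though the code remains installed.
$egMapsDisableExtension = true;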

For a full list of changes, see the release notes. Also check out the new features in Maps 3.8, Maps 3.7 and Maps 3.6 if you have not done so yet.

Upgrading

Since this is a major release, please be aware of the breaking changes; you might need to change configuration or content inside your wiki. Update your mediawiki/maps version in composer.json to ~4.0@rc (or ~4.0 once the real release has happened) and run composer update.

Beware that as of Maps 3.6, you need MediaWiki 1.23 or later, and PHP 5.5 or later. If you choose to remain with an older version of PHP or MediaWiki, use Maps 3.5. Maps works with the latest stable versions of both MediaWiki and PHP, which are the versions I recommend you use.

November 08, 2016

A plan for media management in Drupal 8
Dries, Tue, 11/08/2016 - 10:23

Today, when you install Drupal 8.2, the out-of-the-box media handling is very basic. For example, you can upload and insert images in posts using a WYSIWYG editor, but there is no way to reuse files across posts, there is no built-in media manager, no support for "remote media" such as YouTube videos or tweets, etc. While all of these media features can be added using contributed modules, it is not ideal.

This was validated by my "State of Drupal 2016 survey" which 2,900 people participated in; the top two requested features for the content creator persona are richer image and media integration and digital asset management (see slide 44 of my DrupalCon New Orleans presentation).

This led me to propose a "media initiative" for Drupal 8 at DrupalCon New Orleans. Since then a dedicated group of people worked on a plan for the Drupal 8 media initiative. I'm happy to share that we now have good alignment for that initiative. We want to provide extensible base functionality for media handling in core that supports the reuse of media assets, media browsing, and remote media, and that can be cleanly extended by contributed modules for various additional functionality and integrations. That is a mouthful so in this blog post, I'll discuss the problem we're trying to solve and how we hope to address that in Drupal 8.

Problem statement

While Drupal core provides basic media capabilities, contributed modules have to be used to meet the media management requirements of most websites. These contributed modules are powerful — look at Drupal's massive adoption in the media and entertainment market — but they are also not without some challenges.

First, it is hard for end-users to figure out what combination of modules to use. Even after the right modules are selected, the installation and configuration of various modules can be daunting. Fortunately, there are a number of Drupal distributions that select and configure various contributed modules to offer better out-of-the-box experience for media handling. Acquia maintains the Lightning distribution as a general purpose set of components including media best practices. Hubert Burda Media built the Thunder distribution and offers publishers strong media management capabilities. MD Systems created the NP8 distribution for news publishers which also bundles strong media features. While I'm a big believer in Drupal distributions, the vast majority of Drupal sites are not built with one of these distributions. Incorporating some of these media best practices in core would make them available to all end-users.

Second, the current situation is not ideal for module developers either. Competing solutions and architectures exist for how to store media data and how to display a library of the available media assets. The lack of standardization means that developers who build and maintain media-related modules must decide which of the competing approaches to integrate with, or spend time and effort integrating with all of them.

The current plan

In a way, Drupal's media management today is comparable to the state of multilingual in Drupal 7; it took 22 or more contributed modules to make Drupal 7 truly multilingual and some of those provided conflicting solutions. Multilingual in Drupal 7 was challenging for both end-users and developers. We fixed that in Drupal 8 by adding a base layer of services in Drupal 8 core, while contributed modules still cover the more complex scenarios. That is exactly what we hope to do with media in a future version of Drupal 8.

The plan for the Drupal 8 media initiative is to provide extensible base functionality for media handling in core that supports the reuse of media assets, media browsing, and remote media, and that can be cleanly extended by contributed modules for various additional functionality and integrations.

In order to do so, we're introducing a media entity type which supports plugins for various media types. We're currently aiming to support images and YouTube videos in core, while contributed modules will continue to provide more, like audio, Facebook, Twitter, etc. To facilitate media reuse, WYSIWYG image embedding will be rebuilt using media entities and a media library will be included to allow selecting from pre-existing media.

We consider this functionality to be the minimum viable product for media in Drupal 8 core. The objective is to provide a simple media solution to make Drupal 8 easy to use out of the box for basic use cases. This would help users of sites large and small.

Media library prototype
A work-in-progress prototype of the proposed media library.

Expected timeline and call for help

We believe this could be achieved in a relatively short time — to be included in Drupal 8.3 or Drupal 8.4 as experimental modules. To help make this happen, we are looking for organizations to help fund two dedicated code sprints. The existing contributors are doing an amazing job but dedicated in-person sprints would go a long way to make the plans actually happen. If you are willing to help fund this project, let me know! Looking to help with the implementation itself? The media team meets at 2pm UTC every Wednesday. I also recommend you follow @drupalmedia for updates.

I tried to make a list of all people and organizations to thank for their work on the media initiative but couldn't. The Drupal 8 initiative borrows heavily from years of hard work and learnings on media related modules from many people and organizations. In addition, there are many people actively working on various aspects of the Drupal 8 media initiative. Special thanks to everyone who has contributed now and in the past. Also thank you to Gábor Hojtsy, Alex Bronstein and Janez Urevc for their contributions to this blog post.

Comments

Andreas (not verified):

Thanks for the update, Dries! I am following the discussion around the Media Initiative on d.o and I have one question: Will the Media entity you mention correspond to the Media entity we now have through contrib? In other words, do I have to worry about the Media library I am building up right now in Lightning, because Media handling is improved and eventually changed in core in half a year or a year? Do I have to worry about compatibility?

I would be happy to hear your opinion on this one.

Best,
Andreas

November 08, 2016
Rich Allen (not verified):

I second this question. I'd love some information on a migration path, and whether there will be a preferred contrib module that will make the migration easier.

November 08, 2016
Gábor Hojtsy (not verified):

Andreas, the plan linked in the post outlines the sub-issues, especially the 8.3 plan at https://www.drupal.org/node/2825215 which links to https://www.drupal.org/node/2801277 which is the issue to figure out how to move Media Entity contrib module into core. We are not making this up again, but we may not have all of Media Entity contrib's functionality in core at the end. Please join the discussion on that issue to get involved.

November 08, 2016
Andreas (not verified):

Thanks Gábor for pointing to the relevant issues. For non-developers it's sometimes hard to understand what the consequences of decisions made at the technical layer will be. That's why I posted my question here.

Even if not all functionality moves into core, it will most likely continue living in contrib. That sounds as if it is safe to build on Media Entity now, as long as there is no heavy customization involved.

November 09, 2016
Adam Balsam (not verified):

Hi Andreas. We (the maintainers of Lightning) are committed to an update path from our current media entities to whatever ends up in core - and we're working with those involved in the media initiative.

Tangentially related, the same holds true for migrating to core's Content Moderation from our current implementation of Workbench Moderation once that becomes stable.

November 10, 2016
Paul Driver (not verified):

Like others, I am getting ready to start using Drupal 8, and with media being a client expectation it is helpful to know that this is coming along soon. Thank you for clarifying that the Media Entity module is the way to go in the meantime.

November 10, 2016
Alain (not verified):

Thanks for the article. We are struggling to implement robust and complete file management for large public websites at the moment. You are now calling for a sprint effort, but are the requirements already clear and complete? I'd be happy to help with this...

Thanks

November 10, 2016
Gábor Hojtsy (not verified):

The plan for 8.3 is posted at https://www.drupal.org/node/2825215, and the sub-issues are being used to plan the exact details. The high-level goals are agreed on, as documented in this post; the details are still being figured out. We are not using a waterfall-type approach where we would have detailed requirements before doing any implementation; we are approaching this in an agile way instead.

November 10, 2016
Gregg Short (not verified):

What about including PDF files in the base core library handling? There are a lot of PDF files that need wrangling. Just make these easy ...

November 10, 2016
Dries:

The current plan is to handle PDF files through a contributed module. For the MVP we're focused on image support as that is the most important use case. It's good to add YouTube support to core as part of the MVP, as it is probably the number two use case _and_ it's healthy to have one remote media example. Let's revisit adding support for PDF files after we completed the MVP.

November 10, 2016
knibals (not verified):

Hi Dries! Why YouTube support to core? Why not Dailymotion? Or some other provider?

I mean, should we tie Drupal to a particular company, be it Google? I would respectfully suggest being as abstract as possible for media management (PDF, images, audio and other "local" documents), and leaving remote media management pluggable, but in contrib modules.

Thx for listening!
POF

November 16, 2016
Karl Kaufmann (not verified):

Thanks for bringing this to the forefront. Like Rich Allen and others posting above, I make heavy use of Media in Drupal 7, and am very interested in the migration path.

I'd love to help in any way I can, but most of my strengths are in design and front-end. Since development time is very valuable, how about doing something similar to what was done with the Rules module--crowdfunding development?

Not all developers, and the organizations that employ them, have the resources to give toward community projects, so crowdfunding could help to move it along.

November 10, 2016
AFowle (not verified):

Embedding a single image in a node is one very important use. It could be done by uploading inline or selecting from a media library.

There are scores of requests all over the web for a gallery feature in Drupal which you do not seem to be addressing. It needs a bit more dressing up than browsing a media library, but I would have thought there was much in common in the implementation. Would it be possible to implement a simple library from the outset?

I like the implementation of D6 with the gallery2 project - both sadly unsupported now, so adding urgency to my request. D7 seems to have come and gone without a decent gallery solution that is easy to use. The D6/G2 combination offered:

  1. Hierarchical galleries (I want one per member, with sub-galleries)
  2. Flexible permissions
  3. Bulk uploading
  4. Space quotas per user
  5. Ease of use for non-technical end users (bit of a pig for the webmaster to configure sometimes though)

I don't have the skills to contribute code to something like that, but I would add to a chipin or similar.

November 11, 2016
Gábor Hojtsy (not verified):

There are several of those things that would indeed be good to implement out of the box, whether it's Facebook or Twitter post embedding, audio file support or galleries. What we are looking at is how we can first improve the core system significantly with some of the common things that *all* of these will need anyway. Once we have delivered that successfully, we should definitely look at next steps and priorities. If we tried to solve everything at once, that would likely not result in any progress for Drupal users for a much longer time, so we chose to make smaller steps available to users sooner and then continue to work on it based on feedback and further priorities.

November 11, 2016


November 07, 2016

The post Laravel: The only supported ciphers are AES-128-CBC and AES-256-CBC with the correct key lengths appeared first on ma.ttias.be.

I just started working on a new Laravel project (it's been a while), and this had me googling. For a second, I thought I had missing PHP extensions (like 'mcrypt') or other cryptographic configs I needed to tweak.

But sometimes a solution is so obvious, you miss it.

So I found myself hitting this error on a new Laravel project, without any configuration whatsoever. Just did the composer install and browsed to the app:

$ tail -f laravel.log
[2016-11-07 15:48:13] local.ERROR: exception 'RuntimeException' with message
  'The only supported ciphers are AES-128-CBC and AES-256-CBC with the correct
   key lengths.' in htdocs/bootstrap/cache/compiled.php:13261

Stack trace:
#0 htdocs/bootstrap/cache/compiled.php(7739): Illuminate\Encryption\Encrypter->__construct('xxxx...', 'AES-256-CBC')
#1 htdocs/bootstrap/cache/compiled.php(1374): Illuminate\Encryption\EncryptionServiceProvider->Illuminate\Encryption\{closure}(Object(Illuminate\Foundation\Application), Array)
#2 htdocs/bootstrap/cache/compiled.php(1330): Illuminate\Container\Container->build(Object(Closure), Array)
#3 htdocs/bootstrap/cache/compiled.php(1908): Illuminate\Container\Container->make('encrypter', Array)
#4 htdocs/bootstrap/cache/compiled.php(1431): Illuminate\Foundation\Application->make('Illuminate\\Cont...')
#5 htdocs/bootstrap/cache/compiled.php(1408): Illuminate\Container\Container->resolveClass(Object(ReflectionParameter))
#6 htdocs/bootstrap/cache/compiled.php(1394): Illuminate\Container\Container->getDependencies(Array, Array)
#7 htdocs/bootstrap/cache/compiled.php(1330): Illuminate\Container\Container->build('App\\Http\\Middle...', Array)
#8 htdocs/bootstrap/cache/compiled.php(1908): Illuminate\Container\Container->make('App\\Http\\Middle...', Array)
#9 htdocs/bootstrap/cache/compiled.php(2426): Illuminate\Foundation\Application->make('App\\Http\\Middle...')
#10 htdocs/public/index.php(58): Illuminate\Foundation\Http\Kernel->terminate(Object(Illuminate\Http\Request), Object(Illuminate\Http\Response))
#11 {main}

Every time, the cause has been that my .env file had unsupported characters in the APP_KEY environment variable, which triggers that "The only supported ciphers are AES-128-CBC and AES-256-CBC with the correct key lengths" error.

So as a reminder to my future self: alphanumeric characters, no spaces, no dashes, nothing fancy.

Aka:

$ cat .env
...
APP_KEY=l58ZVK24IpxHd4ms82U46tOxvdVK24IpxH

As soon as you include dashes or other "special" characters, things inevitably b0rk. And my default random-character-generator includes some dashes, obviously.
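If you need a quick way to generate a key that sticks to those rules, a throwaway snippet like the one below does the job. This is just a sketch: php artisan key:generate remains the canonical way to set APP_KEY, and the 32-character length assumes the default AES-256-CBC cipher (AES-128-CBC expects 16).

<?php
// Generate a 32-character alphanumeric APP_KEY: no spaces, no dashes, nothing fancy.
// random_int() needs PHP 7 (or the random_compat polyfill on older versions).
$alphabet = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';

$key = '';
for ($i = 0; $i < 32; $i++) {
  $key .= $alphabet[random_int(0, strlen($alphabet) - 1)];
}

echo "APP_KEY={$key}\n";  // paste this line into your .env file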


The post Apache HTTP authentication in .htaccess appeared first on ma.ttias.be.

There are plenty of guides that describe this already, but I found it frustratingly hard to find a clear, concise write-up of what's needed to get simple username/password HTTP authentication working with just .htaccess code.

So, here's the short version.

Create an htpasswd file

This is a simple file that holds the username and (encrypted) password of the users you want to allow access.

$ htpasswd -c /var/www/vhosts/site.tld/passwd/htaccess.passwd mattias

The above will create a new file at that location and configure a user called "mattias", you'll be prompted (twice) for your password.

The -c parameter creates a new file; if you want to add users to an existing passwd file, use this:

$ htpasswd /var/www/vhosts/site.tld/passwd/htaccess.passwd jasper

In the end, the file looks like this:

$ cat /var/www/vhosts/site.tld/passwd/htaccess.passwd
mattias:$apr1$656eUsUz$305AHL.2PAC.U2UTBdlql0

That's an encrypted password. To reset it, run the above command again for the same username; it'll overwrite the password.

If you want to use the password file above (because it's quick and easy copy/pasting): that encrypted password in plain text is "nucleus", so you can log in with the user "mattias" and password "nucleus".
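If you'd rather not shell out to the htpasswd binary, you can also generate entries yourself. The sketch below appends a user with a bcrypt hash, which Apache 2.4 accepts; on Apache 2.2 or older, stick with the apr1 format that htpasswd produces. The file path and credentials are just the example values from above.

<?php
// Append a user to an htpasswd-style file using a bcrypt ($2y$) hash.
// Bcrypt entries are understood by Apache 2.4; older versions need apr1/htpasswd.
$file     = '/var/www/vhosts/site.tld/passwd/htaccess.passwd';
$username = 'jasper';
$password = 'nucleus';

$line = $username . ':' . password_hash($password, PASSWORD_BCRYPT) . "\n";
file_put_contents($file, $line, FILE_APPEND | LOCK_EX);

echo "Added {$username} to {$file}\n";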

HTTP authentication in .htaccess

Now, once you have the file, add the following in your .htaccess. This snippet of code will work for an Apache 2.2 and Apache 2.4 configuration. Just make sure you change the path to point to your htpasswd file.

$ cat .htaccess
<IfModule mod_authn_file.c>
  # For Apache 2.4 or later
  AuthType Basic
  AuthName "Please provide username and password"
  AuthBasicProvider file
  AuthUserFile "/var/www/vhosts/site.tld/passwd/htaccess.passwd"
  Require valid-user
</IfModule>

<IfModule mod_auth.c>
  # For Apache 2.2 or lower
  AuthType Basic
  AuthName "Please provide username and password"
  AuthUserFile "/var/www/vhosts/site.tld/passwd/htaccess.passwd"
  Require valid-user
</IfModule>

Next time you visit your site, you'll be prompted for an HTTP username and password.


November 06, 2016

While generating some random test data with some particular properties (all positive, highly autocorrelated, uniformly distributed), I accidentally stumbled upon a nice way to generate paraphs (signature flourishes) using R.


Try it yourself in R:

November 05, 2016

I published the following diary on isc.sans.org: “Full Packet Capture for Dummies”.

When a security incident occurs and must be investigated, the Incident Handler's Holy Grail is a network capture file. It contains all communications between the hosts on the network. This metadata is already a goldmine: source and destination IP addresses, ports, time stamps. But if we can also access the full packets with their payload, it is even more interesting. We can extract binary files from packets, replay sessions, extract IOCs and much more. [Read more]

[The post [SANS ISC Diary] Full Packet Capture for Dummies has been first published on /dev/random]

November 04, 2016

This is post 41 of 42 in the Printeurs series

Nellio, Eva, Max and Junior have piled into a rather unusual car.

— Good grief, this is a cop car!
— Yes indeed, says Junior with a disarming smile. I have the advantage of knowing all their little weak spots.

Using his newly grafted hand, Junior taps away at the screen and, with one press, confirms the destination. Instructions appear but, curiously, the car does not move. Junior then grabs a curious circular object and…

— Hold on tight, guys, Junior is at the controls!
— Junior, don't tell me you're driving this thing in manual mode!
— Yes indeed, my friend. Elite police cars are manual above all, to avoid any jamming or easy hacking. I learned to drive, even if, I'll admit, this is the first time I'm doing it without an avatar!

Max turns towards me.
— In manual mode, cars don't identify their occupants. Don't ask me why, probably some old software bug. But it will let us save time and reach the conglomerate's headquarters before we're spotted.
— And once we're there, what do we do?

Silence falls in the cabin for a moment.
— We improvise, Eva replies in a very soft voice. But I don't think we will be left with much choice.

A cold distance seems to have settled between Eva and me. I can no longer read her, understand her. Facing me is a stranger, an unknown woman. I would like to reach out, ask her for an explanation, show her some empathy, but all I meet is her evasive gaze, her tense brow, her tight lips.

Clumsily, I make a gesture towards her, I touch her. She flinches but does not turn around. I have the strange feeling of being both victim and culprit, of having to apologize after being humiliated.

— Eva, I say softly. Could you explain to me what I am the only one not to understand? What are these masturbation devices made in your likeness?

— Masturbation, c’est peut-être pas le mot que j’aurais employé, nous interrompt Max. Avec le projet Eva, la différence entre la masturbation et le coït devient vraiment ténue. Il s’est toujours dit que les réels progrès technologiques se faisaient d’abord dans l’industrie du porno. Nous n’avions juste pas pensé à appliquer la loi de Turing à cette règle.

— Tu veux dire, fais-je en déglutissant, que l’on pourra considérer l’intelligence artificielle comme douée de raisonnement le jour où les humains feront l’amour à un robot sans le savoir ?

— Quelques choses dans le genre, oui. Et j’avoue que, de ce côté là, le projet Eva est incroyablement innovant et surprenant. Je ne suis pas sûr que les créateurs se soient vraiment rendu compte de ce qu’ils faisaient.

— Ils ne se sont rendu compte de rien, je le garantis, intervient brusquement Eva. Ils n’ont fait que suivre mes instructions. Du moins au début… Avant…

Max semble aussi surpris que moi. Nous n’avons pas le temps d’esquisser un geste qu’une brusque secousse nous projette les uns contre les autres dans la voiture.

— Ah oui, accrochez-vous, nous lance Junior, hilare ! Le mode manuel est généralement un peu moins fluide et beaucoup plus dangereux que la conduite automatique traditionnelle !

Une embardée particulièrement violente projette mon front contre le nez d’Eva. Son visage reste fixe, sans émotion mais une goutte de sang perle le long de sa narine et vient s’étaler sur sa lèvre supérieure. Machinalement, Eva porte un doigt à sa bouche, le frotte et le contemple longuement avant de me jeter un regard étonné, comme apeuré.

— Du sang ! Est-ce que tu…

— Non, fais-je en me dépêtrant de la situation inconfortable dans laquelle je suis tombée. Je n’ai rien. Il s’agit de ton sang.

— Mon sang ? Mon sang ?

Sa surprise me parait étrange mais Junior, concentré sur sa route, ne me laisse pas le temps d’investiguer.

— Waw, ça revient vite la conduite manuelle. Désolé pour les chocs mais y’a du traffic. J’espère qu’on ne va pas tomber sur un autre attentat !

Tout en donnant de violents coups de volant, il continue à grommeler dans sa barbe.

— Saleté de califat. On aurait dû les atomiser depuis longtemps.

Max éclate de rire.

— Parce que tu penses vraiment que ce califat islamique est derrière ces attentats ?

— Bien sûr, qui d’autre ?

— N’importe qui ! Quoi de plus facile que de créer un ennemi virtuel qui aurait tous les attributs que la majorité déteste mais qui séduiraient les plus psychopathes d’entre nous ?

Junior sursaute.

— Hein ? Mais quel serait l’intérêt ?

— Facile, continue Max de sa voix douce et convaincante. Un ennemi commun qui unit le peuple sans discussion, qui fait que tout le monde se serre les coudes. Sans compter que les attentats créent beaucoup d’emplois : les morts qu’il faut remplacer, les dégâts à réparer, les policiers pour sécuriser encore plus les périmètres. Ton boulot, tu le dois principalement aux attentats !

— Tu voudrais dire que le califat serait inventé de toutes pièces ? Mais comment seraient recrutés les terroristes ? Ça n’a pas de sens !

— Le califat existe, intervient brutalement Eva. Son existence pose d’ailleurs de plus gros problèmes que de simples attentats. Il est le reliquat animal de l’humanité, cette partie sauvage qui est en chacun de nous et que nous refusons de voir, ce fragment de notre inconscient collectif qui nous transforme en bêtes féroces ne pensant qu’à tuer et à copuler pour propager nos gênes.

— Copuler ? Au califat ? Ils sont plutôt rigoristes là-bas, fais-je avec un pauvre sourire.

Eva me darde de son regard noir.

— Justement ! Le rigorisme n’est que l’apanage des pauvres, la façade. Les puissants, eux, disposent de harems. Le califat est encore à un stade de l’évolution où le mâle tente de multiplier les femelles afin de les ravir aux autres mâles. Cette compétition accrue entre les mâles les rend fous et prêt à n’importe quel acte insensé, même au suicide. Quand aux femmes, elles ne sont que du cheptel et traitées comme tel.

— Heureusement que nous n’en sommes plus là. L’égalité entre les hommes et les femmes…

Je n’ai pas le temps de terminer ma phrase qu’Eva m’interrompt, au bord de l’hystérie. Elle hurle, sa voix résonne comme une sirène dans l’étroit habitacle de la voiture.

— Tu penses que nous avons progressé ? Que nous valons mieux qu’eux ? La vue de l’usine ne t’a pas suffi ? Pourquoi crois-tu que j’ai été fabriquée ?

Je reste un instant bouche bée. Le silence s’est brutalement installé.

— Fabriquée ? Que veux-tu dire Eva ?

Photo par Andy Rudorfer.

Ce texte est a été publié grâce à votre soutien régulier sur Tipeee et sur Paypal. Je suis @ploum, blogueur, écrivain, conférencier et futurologue. Vous pouvez me suivre sur Facebook, Medium ou me contacter.

Ce texte est publié sous la licence CC-By BE.

November 03, 2016

This is an introductory post about the Inventory in Ansible, in which I look at the current design and implementation, some of its internals, and where I hope to spark some discussion and ideas on how it could be improved or extended. A recent discussion at Devopsdays Ghent last week rekindled my interest in this topic, with some people actively showing interest in participating. Part of that interest is about building a standard inventory tool with an API and a frontend, similar to what Vincent Van der Kussen started (and to lots of other, often now abandoned, projects) but going much further. Of course, that exercise would be pointless without looking at which parts need to happen upstream and which parts fit better in a separate project, or not at all. That's also why my initial call to people interested in this didn't focus on immediately bringing the discussion to one of the Ansible mailing lists.


In its current state, the Ansible Inventory – at the time of writing, 2.2 was just released – hasn't changed in how it is modelled since its initial inception during the early 0.x releases. This post tries to explain a bit how it was designed, how it works, and what its limits might be. I might oversimplify some details whilst focusing on the internal model and how data is kept in data structures.


Whilst most people tend to see the inventory as just the host list and a way to group hosts, it is much more than that, as parameters to each host – inventory variables – are also part of the inventory. Initially, those group and host variables were implemented as vars plugins, and whilst the documentation still seems to imply this, it hasn't been true since a major bugfix and update to the inventory in the 1.7 release, now over two years ago; this part is now a fixed piece of the ansible.inventory code. As far as I know, nobody seems to use custom vars plugins. I'd argue that managing variables – the parameters to playbooks and roles – is the most important part of the inventory. Structurally, the inventory model comes down to this:

The inventory basically is a list of hosts, each of which can be a member of one or more groups, and where each group can be a child of one or more other (parent) groups. One can define variables on each of those hosts and groups, with different values for each of them. Groups and hosts inherit variables from their parents.

The inventory is mostly pre-parsed at the beginning of an Ansible run. After that, you can consider the inventory as a set of groups in which hosts live, and each host has a set of variables with one specific value attached to it. A common misunderstanding that comes up on the mailing list every now and then is thinking that a host can have a different value for a specific variable depending on which group was used to target that host in a playbook. Ansible doesn't work like that. Ansible takes a hosts: definition and calculates a list of hosts; in the end, which exact group was used to get to that host no longer matters. Before hosts are touched and variables are used, those variables are always resolved down to the host. Assigning different values to different groups is how you manage them, but in the end you could choose to never use group_vars, put everything yourself, manually, in host_vars, and get the same end result.

Now, if a host is a member of multiple groups, and the same variable is defined in (some of) those groups, the question is: which value will prevail? Variable precedence in Ansible is quite a beast, and can be quite complex or at least daunting to both new and experienced users. The Ansible docs overview doesn't explain all the nifty details, and I couldn't even find an explanation of how precedence works within group_vars. The short story is: the more specific, further down the tree, wins. A child group wins over a parent group, and host vars always win over group vars.

When host kid is a member of a father group, and that father group is a member of a grandfather group, then kid will inherit variables from both grandfather and father. Father can overrule a value from grandfather, and kid can overrule both father and grandfather if he wants. Modern family values.
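
To make that inheritance concrete, here is a tiny Python sketch of the merge order described above. It is a toy model, not Ansible's actual implementation, and the variable values are made up:

# Toy illustration of "more specific wins"; not Ansible's real code.
grandfather_vars = {"dns": "10.0.0.1", "ntp": "pool.ntp.org"}
father_vars = {"dns": "10.0.1.1"}              # overrules grandfather
kid_host_vars = {"ntp": "ntp.example.local"}   # overrules both groups

resolved = {}
for scope in (grandfather_vars, father_vars, kid_host_vars):
    resolved.update(scope)    # least specific first, most specific last

print(resolved)   # {'dns': '10.0.1.1', 'ntp': 'ntp.example.local'}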

There are also two special default groups: all and ungrouped. The former contains all groups that are not defined as a child group of another group; the latter contains all hosts that are not made a member of any group.


But what if I have two application groups, app1 and app2, which are not parent-child related, and both define the same variable? In this case, app1 and app2 live on the same level and have the same 'depth'. Which one prevails depends on the alphanumerical sorting of both names – IIRC – but I'm not even sure of the details.


That depth parameter is actually an internal attribute of the Group object. Each time a group is made a member of a parent group, it gets the depth of its parent + 1, unless its depth was already bigger than that newly calculated depth. The special all group has a depth of 0, app1 and app2 both have a depth of 1, and app21 gets a depth of 2. For a variable defined in all of those groups, the value in app21 will be inherited by node2, whilst node1 will get the value from either app1 or app2, which is more or less undefined. That's one of the reasons why I recommend not defining the same variables in multiple group “trees”, where a group tree is one big category of groups. It's already hard to manage groups within a specific sub tree whilst keeping precedence (and hence depth) in mind; it's totally impractical to track that amongst groups from different sub trees.
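
A simplified sketch of that depth bookkeeping, again a toy model rather than the real Group object, using the example groups above:

# Toy model of group depth; not Ansible's actual Group class.
depths = {"all": 0}

def add_child(parent, child):
    # A child sits one level below its parent, unless it is already deeper.
    depths[child] = max(depths.get(child, 0), depths[parent] + 1)

add_child("all", "app1")
add_child("all", "app2")
add_child("app2", "app21")
print(depths)   # {'all': 0, 'app1': 1, 'app2': 1, 'app21': 2}

# Group vars are applied in order of increasing depth, so deeper groups win.
# app1 and app2 share depth 1: which one wins for node1 is a mere tie-breaker.
for group in sorted(depths, key=depths.get):
    print(group, depths[group])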

Oh, if you want to generate a nice graphic of your inventory to see how it lays out, Will Thames maintains a nice project (https://github.com/willthames/ansible-inventory-grapher) that does just that. As long as graphviz manages to generate a graph small enough to fit on a sheet of paper whilst remaining readable, you probably have a small inventory.

Unless you write your own application to manage all this and write a dynamic inventory script to export the data to Ansible, you are stuck with mostly the INI-style hosts files and the YAML group_vars and host_vars files to manage those groups and variables. If you need to manage lots of hosts and lots of applications, you probably end up with lots of categories to organise these groups, and then it's very easy to lose any overview of how those groups are structured and how variables are inherited across them. It becomes hard to predict which value will prevail for a particular host, which doesn't help to ensure consistency. If you were to change default values in the all group while some hosts – but not all – have an overruling value defined in child groups, you would suddenly change values for a bunch of hosts. That might be your intention, but when managing hundreds of different groups, you know such a mistake can easily happen when all you have are the basic Ansible inventory files.

Ansible tends (or at least often used) to recommend not doing such complicated things: “better remodel how you structure your groups”. But I think managing even moderately large infrastructures quickly places complex demands on how you structure your data model. Different levels of organisation (sub trees) are often needed in this concept. Many of us need to manually create intersections of groups, such as app1 ~ development => app1-dev, to be able to manage different parameters for different applications in different environments. This quickly becomes cumbersome with the INI and YAML file based inventory, as it simply doesn't scale. Maybe we need a good pattern for dynamic intersections implemented upstream? Yes, there is hosts: app1&dev, but that is parsed at run time, and you can't assign vars to such an intersection. An interesting approach is how Saltstack – which doesn't have the notion of groups in the way Ansible does – lets you target hosts using grains, a kind of dynamic group that filters hosts based on facts. Perhaps this project does something similar?
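
As a crude illustration of what materialising such an intersection could look like, a dynamic inventory script might compute something along these lines (a sketch with made-up group and host names, not an existing Ansible feature):

# Sketch: materialise the intersection of two groups as a group of its
# own, so that group_vars could be attached to it. Names are made up.
groups = {
    "app1": ["node1", "node2", "node3"],
    "dev": ["node2", "node3", "node9"],
}

groups["app1-dev"] = sorted(set(groups["app1"]) & set(groups["dev"]))
print(groups["app1-dev"])   # ['node2', 'node3']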

Putting more logic into how we manage the inventory could be beneficial in several ways. Some ideas:

Besides scaling the current model through better tooling, it could for example allow writing simpler roles and playbooks, e.g. where the only lookup plugin we would need for with_ iterations is the standard with_items, as the other ones are really there to handle more complex data and generate lists from it. That could become part of the inventory, where the real infrastructure-as-data is modelled, giving a better decoupling of code (roles) and config (inventory). Possibly even a way to really describe infrastructure as a higher-level data model, describing a multi-tier application, that gets translated into specific configurations and actions at the node level.

How about tracking the source of (the value of) a variable? That would allow distinguishing between a variable that inherits a default and keeps following that default when it changes in a more generic group (e.g. the DNS servers for the DC), and a variable that is merely instantiated from a default and does not change afterwards when that default does (e.g. the Java version a team starts out with during the very first development deploy).

Where are gathered facts kept? For some, just caching them as currently happens is more than enough. For others, especially for more specific custom facts – I'd call those run time values – that have a direct relationship with inventory parameters, it might make more sense to keep them in the inventory. Think about the version of your application: you configure that as a parameter, but how do you know whether that version is also the one that is currently deployed? One might need to keep track of the deployed version (a runtime fact) and compare it to what was configured to be deployed (an inventory parameter).

An often-needed pattern when deploying clusters is the concept of a specific cluster group. I've seen roles act on the master node of a cluster by running conditionally with when: inventory_hostname == groups['myapplicationcluster'][0].

How many of you are aware that the order of the node list of a group was reversed somewhere between two 1.x releases? It was initially sorted in reverse alphanumerical order.

This should nicely illustrate the problem of relying on undocumented and untested behaviour. This pattern also makes you hard-code an inventory group name in a role, which I think is ugly and makes your role less portable. Doing a run_once can solve a similar problem, but what if your play targets a group of multiple clusters instead of one specific cluster, and you need to run_once per cluster? Perhaps we need to introduce a special kind of group that can handle the concept of a cluster? Metadata on groups, hosts and their variables?

Another pattern that is hard to implement in Ansible is what Puppet solves with its exported resources. Take the list of applications that are deployed on multiple (application) hosts, and put it in a list so we can deploy a load balancer on one specific node. Or monitoring, or ACLs to access those different applications. As Ansible in the end just manages vars per host, doing things amongst multiple hosts is hard. You can't solve everything with a delegate_to.

At some point, we might want to parse the code (the roles) that will be run, to import a list of the variables that are needed, and build some front-end so the user can instantiate his node definitions and parameterize the right vars in the inventory.

How about integrating a well-managed inventory with external sources? Currently, that happens by defining the Ansible inventory as a directory and combining INI files with dynamic inventory scripts. I won't get started on the merits of dir.py, but let's say we really need to redesign that into something more clever that integrates multiple sources, keeps track of metadata, etc. William Leemans started something along these lines after a discussion at Loadays in 2014, implementing specific external connectors.

Perhaps as a side note, I have also been thinking a lot about versioning: versioning of the deploy code, and especially of roles. I want to be able to track the life cycle of an application, its different versions, and the life cycle and versions of its deployment. Imagine I start a business where I offer hosted Gitlab to multiple customers. Each of those customers potentially has multiple organisations, and hence multiple Gitlab instances. Some of them want to always run the latest version, whilst some want to stay for ages on the same enterprisey first install, and others are somewhere in between. Some might even have different DTAP environments – Gitlab might be a bad example here, as not all customers will do custom Gitlab development, but you get the idea. In the end you really have LOTS of Gitlab instances to manage. Will you manage them all with one single role? At some point the role needs development. Needs testing. A new version of the role needs to be used in production, for one specific customer's instance. How can I manage these different role versions in the Ansible ecosystem? The inventory sounds like the central source of information to do that in.

Lots of ideas. Lots of things to discuss. I fully expect to hear about many other use cases I never thought of, or would never even need. And so as not to re-invent the wheel, insights from other tools are very welcome here!

November 02, 2016

When I analyzed the data collected during the last BruCON edition, I had the idea of correlating the timeslots assigned to talks with the amount of Internet traffic. First, a big disclaimer: my goal is not to judge the popularity of a speaker or the quality of his/her presentation, but rather to investigate whether network usage can reveal interesting facts.

Measuring the bandwidth is not a good indicator: some people used BitTorrent clients, others were downloading big files in the background. I think it is more relevant to count the number of sessions. The first step was to extract the relevant data. I decided to focus only on HTTP traffic (TCP 80 & 443). Only public destination IP addresses have been kept (e.g. connections to the wall of sheep are not included). All sessions with their timestamps have been extracted and indexed by Splunk:

HTTP Connections

Then, I grouped the connections into slots of 30 minutes and exported the data to a CSV file:

source="/opt/splunk/var/run/splunk/csv/httpbrucon.csv" index="brucon" | timechart span=30m count

Finally, I exported the schedule from sched.brucon.org and correlated both with Excel:

Excel Correlation

And the graph showing traffic per talk:

Connections VS. Talks
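
For those who would rather script that correlation step than do it in Excel, a minimal Python sketch could bin the exported session timestamps into 30-minute slots and look up the count for the slot in which each talk started. The file names and column names below are assumptions for illustration, not the actual BruCON exports:

import csv
from collections import Counter
from datetime import datetime

def slot_of(ts):
    # Round a timestamp down to its 30-minute slot.
    return ts.replace(minute=ts.minute - ts.minute % 30, second=0, microsecond=0)

# Count HTTP sessions per 30-minute slot (assumes ISO-formatted timestamps).
sessions = Counter()
with open("http_sessions.csv") as f:          # hypothetical Splunk export
    for row in csv.DictReader(f):
        sessions[slot_of(datetime.fromisoformat(row["timestamp"]))] += 1

# Print the session count for the slot in which each talk started.
with open("schedule.csv") as f:               # hypothetical schedule export
    for row in csv.DictReader(f):
        start = datetime.fromisoformat(row["start"])
        print(row["title"], sessions[slot_of(start)])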

So now, how to interpret those numbers? A peak of traffic can be read both ways: when the speaker shows a nice slide or explains something awesome, attendees will often share it on social networks. But, on the other hand, bored people (or those lost in overly complex slides) will be tempted to surf the web while waiting for the end of the presentation. Based on the feedback received about some talks, both situations are present in my results (again, I won't disclose which ones).

This model is not perfect. Besides the regular talks, there were also workshops, and they could generate a significant amount of connections too. An idea to improve the reporting would be to restrict the analysis to connections made from the wireless access points located in the main room…

[The post Popularity of a Talk VS. Internet Usage? has been first published on /dev/random]

What is CockroachDB? CockroachDB is a project I’ve been keeping an eye on for a while. It’s an open-source, Apache 2 licensed, database system (Github link). At its core it’s a key-value store that scales horizontally. But what makes it really interesting for us is that 1) it supports SQL by using the Postgres wire protocol and 2) it has full […]

Puppet, in its new parser (long known as «future parser»), allows you to do things like loops, lambda functions, ... And that was great. It was great because those were things we needed. We needed that so much that the community just went for its own variant, in ruby (create_resources). Being able to remove create_resources is great.

However, as Daniele stated, Puppet has pretty much reached feature completeness. That means we have made Puppet work for all our needs. Even Puppet 3.x was feature complete. You can, and could, build anything with Puppet.

Puppet, Inc. is running in all kinds of directions now and has started implementing new features everywhere. Instead of allowing us to do more and more crazy things with the Puppet DSL, they should start asking the question: what is the real value for the users? It looks like the game now is to throw out as many new features as possible -- but hey, that is not what we expect from Puppet.

Puppet has got to a point where we can actually stop learning it and focus on all the other tasks. The job nowadays is not to automate everything -- because everything is already automated. We want a Puppet that is transparent, fast, silent. We want a Puppet that just works.

There have been lots of bad moves in the last years. One of the latest examples is the closed-source app orchestration, which breaks Puppet's spirit of describing the final state. Strangely enough, I never needed such a thing. Instead of adapting tools to everyone, maybe some time should be spent explaining to people HOW to design infrastructures. The fact that you need the app orchestrator maybe means that you are doing something wrong.

One other bad move is the Puppet server metrics. It was such a great idea to expose metrics with the graphite API. Obviously, in the monitoring world, everyone speaks graphite. The bad move was to decide that the open source users don't deserve it. Yes, the same users that write and contribute the real value of Puppet: the open source modules on github and the forge.

What actually triggered this article is a new puppet function called return. Let's go over a quick example:

class bar {
  notify {'foo':}
  return()
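  # evaluation of this class stops at the return() above, so the
  # notify {'bar':} below never makes it into the catalog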
  notify {'bar':}
}
include bar

And here is the result:

Notice: Compiled catalog for localhost in environment production in 0.08 seconds
Notice: foo
Notice: /Stage[main]/Bar/Notify[foo]/message: defined 'message' as 'foo'
Notice: Applied catalog in 0.03 seconds

Great!

This is the kind of feature that adds complexity for nothing: all it does is make manifests harder to debug. It is really hard to find use cases for that return function. And if you have one -- you might as well think twice.

Puppet has lost its soul. Where reading Puppet manifests used to be easy and really intuitive, you now have to deal with complex functions, and at first sight you barely know what will end up in the catalog.

I kinda miss my past parser.

This article has been written by Julien Pivotto and is licensed under a Creative Commons Attribution 4.0 International License.

November 01, 2016

Like I mentioned a few months ago, here we are again. More things equals more crap on the Internet. Not more utility. No. More crap. It’s only crap. With lower case ‘c’. The crap of the crap programmers of the crap is not worth wasting an expensive capital letter on.

Time to require CE marking for all that crap. Enough is enough.

The eighth BruCON edition is already over! Don't expect a wrap-up because I just don't have the time: I'm always keeping an eye on the attendees' bits & bytes! Based on the first feedback I received from attendees and speakers, it was another good edition but, from a network point of view, it was harder. Indeed, the venue does not provide any network service at all and we have to build a temporary network from scratch. The ISP that provides us with the pipe to the Internet was not able to help us and we had to find an alternative. We found one, but it was extremely expensive for us (keep in mind that BruCON is a non-profit organization) and, worse, the quality was not there. When we deployed the network, we had only 25% of the ordered bandwidth (ouch!). The ISP installed an emergency backup line over a 4G connection and I spent half a day configuring the load-balancing between the two lines and some QoS to prioritize traffic. At certain times, we still had up to 15% packet loss on the main link… Our apologies for the poor network quality! Fortunately, more and more people don't trust wireless networks and use their mobile phones or portable access points to access the Internet.

First some high level stats about the network usage:

High Level Statistics

About the traffic: we collected 193 GigaBytes of PCAP files. 528 unique devices (based on their MAC addresses) connected to the wireless network and got an IP address. We did not play MitM to inspect encrypted protocols (we respect your privacy). Edition after edition, we see that more and more people are using VPNs, which is good! Here is the top-20 of detected MIME types:

Count MIME Type
1034509 application/pkix-cert
94885 text/plain
74707 text/html
61321 image/jpeg
34892 image/png
31268 image/gif
20709 text/json
19209 application/ocsp-response
14323 application/ocsp-request
9370 application/xml
5054 application/vnd.ms-cab-compressed
4304 application/javascript
2717 application/x-debian-package
2250 application/font-woff
1761 image/svg+xml
1734 image/x-icon
1608 application/x-gzip
675 video/mp4
675 application/zip
519 application/x-bzip2

Our attendees communicated with 115,930 unique IP addresses across the wild Internet. Here is a global map:

BruCON Traffic Map

Of course, we had our wall of sheep running to collect all pictures and interesting credentials. Even if attendees use VPN connections, some of them regularly fail to protect their network communications.

Wall of Sheep

We collected 68,848 images and 119 credentials. Amongst the classic IMAP or SNMP accounts, we found that some security products are not so secure by default. Two attendees were running the GFI LANguard tool, which communicates over HTTP with its central servers:

GFI LANguard HTTP

About DNS, 129,883 unique A requests were performed. Here is the top-30 of queried hosts:

Count FDQN
119314 google.com
93601 brucon.org
90383 t.co
57564 apple.com
42841 microsoft.com
29479 facebook.com
27399 vmware.com
27043 g.co
27038 softwareupdate.vmware.com
23418 capgemini.com
22252 gstatic.com
21127 dns.msftncsi.com
20532 www.google.co
20425 www.google.com
19952 wall.brucon.org
19405 pool.ntp.org
19375 push.apple.com
18844 auth.gfx.ms
16919 live.com
14999 twitter.com
13942 avast.com
13443 zabbix.countercept.mwr
12814 dropbox.com
11006 www.googleapis.com
10680 doubleclick.net
10346 nucleus.be
9952 teredo.ipv6.microsoft.com
9863 google.be
9706 sz.local
9205 corp.capgemini.com

Interesting top queries: WPAD, AD, ISATAP. WPAD is amazing: so easy to abuse to play MitM. Some detected samples:

wpad, wpad.hogeschool-wvl.be, wpad.nl.capgemini.com, wpad.corp.capgemini.com,
wpad.home, wpad.howest.be, wpad.brucon.org,  wpad.be.capgemini.com,
wpad.capgemini.com, wpad.bnl.capgemini.com,  wpad.capgemini.be, wpad.capgemini.nl,
wpad.fantastig.lan , wpad.webde.local, wpad.eu.thmulti.com, wpad.soglu.internal,
wpad.ctg.com, wpad.sogeti.be, wpad.united.domain, wpad.fictile.lan,
wpad.telenet.be, wpad.eu.didata.local

DNS traffic remains one of my favorite sources of intelligence! Many devices are corporate ones and constantly keep trying to “phone home”. Here is a list of companies that were present (well, their devices were) at BruCON:

  • Cap Gemini
  • Ernst & Young
  • Sogeti
  • PWC
  • ING
  • CTG
  • Hogeschool West-Vlaanderen
  • MWR
  • Nucleus
  • Limes Security

It’s always interesting to extract the downloaded PE files. We captured 268 unique PE files. Not really malicious, but some of them were really suspicious. We detected the following signatures:

  • 5 x Win32.Trojan.WisdomEyes.16070401.9500.9997
  • 1 x Trojan.Agentb.akq
  • 2 x Win32/Bundled.Toolbar.Google.D potentially unsafe
  • 1 x Posible_Worm32
  • 1 x Win32.Application.OpenCandy.G
  • 1 x Trojan-Clicker.Win32.Agent!O

A special mention to the guy who downloaded a malicious ‘BitTorrent.exe’ (22bc69ed880fa239345d9ce0b1d12c62). Do you really need to download such files at a security conference?

From a security point of view, we did not face any incident. Only one device was blacklisted during the conference. As usual, some folks spent time to bring p0rn pictures on the wall of sheep. Besides the classic [smurf|avatar|manga|hulk] p0rn, we have a winner who used furnitureporn.com! Taste and colors are not always the same! 🙂

We already have nice and fun ideas to implement during the next edition. We will expect your packets again in 2017!

 

[The post Debriefing the BruCON Network has been first published on /dev/random]

October 31, 2016

I got into the paint business
Dries, Tue, 11/01/2016 - 00:07

I got into the paint business last week! Sherwin-Williams (SHW) is a 150 year old company selling paints and coatings. I first got interested in Sherwin-Williams 9 months ago, and have been watching the stock since.

A business selling paint sounds boring, but Sherwin-Williams might have one of the best business models in the world. The reason for my interest is that over the past 10 years Sherwin-Williams' annualized return was 17%. Put differently, someone who invested $10,000 in Sherwin-Williams 10 years ago, would have seen that money grow to $50,000 today. The past 20 years, between 1995 and 2016, Sherwin-Williams returned on average 15% per year and the past 30 years, between 1985 and 2016, Sherwin-Williams returned on average 15% per year as well. In other words, if you invested $10,000 in Sherwin-Williams 30 years ago, it would have grown to be over $700,000 today. When it comes to investing, boring can be good. People will always paint things, in good times and bad times, and they don't mind paying extra for quality.

Shw vs spy
The past 10 years, Sherwin-Williams has significantly outperformed the S&P 500. The past months, Sherwin-Williams has fallen about 20% from its all-time high.

Past performance is no guarantee of future results. I've been waiting to invest in Sherwin-Williams for about 9 months, and finally started half a position last week at $249 a share. The share price of Sherwin-Williams declined by about 21% from its all-time high, in part because of a disappointing third-quarter earnings report. At $249, Sherwin-Williams might not be a screaming buy -- the P/E of 21 remains relatively high. If you're a more patient investor, it might be worth waiting a bit longer. The price could fall further, especially if the Federal Trade Commission blocks the Valspar acquisition or if the market corrects more broadly. At the same time, there is no guarantee the stock will go lower, and Sherwin-Williams might not trade 21% off its all-time high for long. If the stock were to drop another 10% or more, I would almost certainly buy more and cost-average into a full position.

Understanding past returns

The first thing I like to do is study how a company achieved such returns.

Between 2006 and 2016, Sherwin-Williams grew revenues 4% annually from 7.8 billion to 11.7 billion. Earnings grew faster than revenues as the result of profit margins improving from 7.4% to 9.5%. Specifically, earnings grew 6.8% annually from 576 million in 2006 to 1.1 billion in 2016.

Earnings growth of 6.8% isn't something to write home about. Fortunately, earnings growth isn't the whole story. What makes Sherwin-Williams an interesting business is that it gushes cash flow to the tune of 1.1 billion a year on 11.7 billion of sales. The company used its cash flow to buy back 31% of its shares between 2006 and 2016. This has allowed earnings per share to grow by an additional 3.8% annually.

Shw components
Three components working together to improve earnings per share: revenue growth, margin expansion and share count reduction.

Next we can add in the dividend. Sherwin-Williams has a history of raising its dividend for 38 years. The past 10 years, Sherwin-Williams had an average dividend yield of 1.6%.

So we have 6.8% earnings per share growth as the result of growing sales and margin expansion, and 3.8% earnings per share growth thanks to a reduction in share count. This equates to 10.6% earnings per share growth. Add a 1.6% dividend yield, and investors are looking at 12.2% annual returns. That is a great return, especially for a 150 year old business selling paint.

In addition to revenue growth, margin expansion, a significant share count reduction and a small dividend, Sherwin-Williams also saw significant price to earnings (P/E) expansion. In 2006, Sherwin-Williams was trading at a P/E of 14 while it is trading at a P/E of 21 today. On average, the P/E expanded by 4.1% per year. While some of the P/E expansion could be justified by the quality of the sales improving (higher margins), it is fair to say that the P/E got ahead of itself.

Modeling future returns

The second thing I like to do is evaluate a company's future prospects.

The reason Sherwin-Williams has been so successful over the past decade is because it had all these components working together -- sales growth, margin expansion, share count reduction, a small dividend and a growing valuation. Understanding the different components is important to help us evaluate Sherwin-Williams' prospects.

Because of Sherwin-Williams' healthy cash flow, I believe the company should be able to keep growing revenues (through acquisitions), keep retiring shares, and grow its dividend for the foreseeable future. I'm more concerned about the margins and valuation components. Sherwin-Williams won't be able to improve margins forever -- in fact, an increase in input costs could have a negative impact on margins. And despite a 22% drop in price and growing earnings, the P/E remains high at 21.

For illustration purposes, let's assume that Sherwin-Williams' earnings per share will grow 6% for the next 5 years instead of the 10.6% they grew over the last decade. After five years, earnings per share would grow from $11.16 in 2015 to roughly $14.9 in 2020. Next, let's factor in the possibility of a P/E contraction. Let's say that after five years, the P/E has contracted almost 30%, from 21 to 15. In this scenario, shares of Sherwin-Williams would be trading at $224 in 2020. This is $20 below today's share price of $244. That said, over the next 5 years a shareholder of Sherwin-Williams will collect something in the order of $25 in dividends for every share owned. This would bring the total return up to $249. I consider this scenario to be pessimistic but certainly in the realm of possibilities.

In a more optimistic scenario, Sherwin-Williams might be able to maintain a 10% earnings per share growth rate at a P/E of 18. Here, we're looking at a 2020 share price of $323, good for an annual compound growth rate of almost 6%.
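
For readers who want to play with the assumptions, both scenarios boil down to a few lines of arithmetic. Here is a small Python sketch of the same back-of-the-envelope model, using only the figures quoted in this post (an illustration, not a forecast):

# Back-of-the-envelope version of the two scenarios above.
eps_2015 = 11.16
price_today = 244.0

def scenario(eps_growth, pe, years=5):
    return eps_2015 * (1 + eps_growth) ** years * pe

pessimistic = scenario(0.06, 15)   # 6% EPS growth, P/E contracts to 15
optimistic = scenario(0.10, 18)    # 10% EPS growth, P/E of 18
print(round(pessimistic))          # 224
print(round(optimistic))           # 324, roughly the $323 quoted above

# Implied annual price return of the optimistic case, before dividends.
cagr = (optimistic / price_today) ** 0.2 - 1
print(round(cagr * 100, 1))        # about 5.8, i.e. "almost 6%"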

Needless to say, this is a very rough model. The idea is not to create a perfect model, but to understand a range of possibilities.

Long-term passive income

Shw dividends
The dividend is low but has been growing fast. At the same time the dividend actually became safer as the dividend payout ratio went down sharply.

While I don't think I will lose money on my investment, I also don't expect that Sherwin-Williams will be a home run in the next 5 years. Why then did I decide to invest? The reason is that I prefer to think in terms of decades, rather than a 5 year horizon. What excites me about Sherwin-Williams is not its prospects over the next 5 years, but its potential for income generation 20 to 30 years from now.

Sherwin-Williams appears to be a well-run company. It has been in business for 150 years and continues to grow at a healthy pace. It has increased dividends each year for the last 38 years. During the past 10 years, the dividend growth rate was almost 13% annually, well above the S&P 500's 10 year average of 4%. Last week Sherwin-Williams announced a 25% increase in its quarterly dividend from $0.67 a share to $0.84. Even with that generous increase, the company's payout ratio -- the percentage of profits it spends on paying out its dividend -- remains low at around 30%.

Sherwin-Williams's starting dividend of 1.3% is not impressive, but its dividend growth rate is. Should Sherwin-Williams continue to increase its dividend by 13% annually, after 20 years, each share would produce $39 in annual income. After 30 years the annual dividend would amount to $132 per share. The yield on cost would be 16% and 54% respectively. Given the low payout ratio, strong share buyback program, and the 30 year track record of 15%+ returns, I believe that could be a possibility. Time will tell. If you plan to hold Sherwin-Williams for 20 to 30 years, I believe it could be among the best stocks for both wealth building and dividend income.

Disclaimer: I'm long SHW with a cost basis of $249 per share. Before making an investment, you should do your own proper due diligence. Any material in this article should be considered general information, and not a formal investment recommendation.

Comments

Herman (not verified):

Interesting. But I think most people in the Drupal community wish for the founder to be more focused on code and less on vc ...

November 02, 2016
Dries:

This is "personal finance", not "venture capital". Most of us have to work, save, invest and plan for retirement during our working lives. We should all find time for that. Writing an occasional blog post on investing doesn't take away from my passion and commitment to Drupal.

November 02, 2016
Raphael (not verified):

As Jean de la Fontaine wrote in the “The Crow & the Fox” fable: “Tout vient à point à qui sait attendre” (Eng: All in good time), which is valid for investments and product dev… I guess

November 02, 2016
Hugh (not verified):

I'm very happy to read about the interesting and very digestible analysis of some obscure paint company shares (and your holiday in Italy)! Glad you have a life outside of Drupal. You do a great job on the Drupal front, so if you have energy left for share analysis, great!

November 03, 2016
Dries:

Thanks Hugh!

November 03, 2016
Victoria (not verified):

Interesting take on the differing segments of life as a Drupal designer. I think you may find this article of interest, as you are familiar with the platform yourself: https://www.achieveinternet.com/enterprise-drupal-blogs/jurassic-world-…

November 03, 2016
Douglas Deleu (not verified):

A blog post about buying shares in a paint company? How weird ... I was about to drop you a line to tell you your blog got hacked. ;)

November 04, 2016
Dries:

Funny thing is, I got two such emails! Two people believed my blog was hacked and emailed me. I guess you make a third. :-)

November 08, 2016
