Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

June 27, 2016

Captain: What happen?
Mechanic: Somebody set up us the bomb!

So yeah, my blog was off the air for a couple of days. So what happened?

This is what /var/log/nginx/error.log told me:

2016/06/27 08:48:46 [error] 22758#0: *21197
connect() to unix:/var/run/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 194.187.170.206, server: blog.amedee.be, request: "GET /wuala-0 HTTP/1.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host:
"amedee.be"

So I asked Doctor Google “connect() to unix:/var/run/php5-fpm.sock failed (11: resource temporarily unavailable)” and got this answer from StackOverflow:

The issue is the socket itself; its problems under high load are well known. Please consider using a TCP/IP connection instead of the Unix socket. For that you need to make these changes:

  • in php-fpm pool configuration replace listen = /var/run/php5-fpm.sock with listen = 127.0.0.1:7777
  • in /etc/nginx/php_location replace fastcgi_pass unix:/var/run/php5-fpm.sock; with fastcgi_pass 127.0.0.1:7777;

followed by a careful application of

sudo /etc/init.d/php-fpm restart
sudo /etc/init.d/nginx restart
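
Concretely, the changed bits of configuration look something like this (a sketch only; file locations and the pool file name vary per distribution):

; /etc/php5/fpm/pool.d/www.conf -- php-fpm pool file (path assumed)
;listen = /var/run/php5-fpm.sock
listen = 127.0.0.1:7777

# /etc/nginx/... -- the PHP location block
#fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_pass 127.0.0.1:7777;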

Tl;dr version: don’t use a Unix socket, use an IP socket. For great justice!

I leave you with this classic:

The post The Website Was Down appeared first on amedee.be.

June 26, 2016

Little Britain is an amalgamation of the terms 'Little England' and 'Great Britain', and is also the name of a Victorian neighbourhood and a modern street in London, says Wikipedia. It's also what I think will remain of Great Britain in a few years, maybe de facto already in a few days or weeks. But okay.

This is not a big problem. The more serious problems are geopolitical. I do think Russia will gain from not having England (in the end not Great Britain, but just England) in the European Union: it'll make the UK's (or England's) voice in NATO sound less like part of one bloc. To remain significant, the EU bloc will have to find a new way.

I propose forming a European military. Initially, make NATO part of it. The idea would be that each country in Western Europe can join this military alliance based on negotiated contribution criteria. Let's learn from our mistakes and allow countries to leave and to be kicked out: especially if a country doesn't contribute enough to the alliance, it should be (temporarily) kicked out.

That allows for England or Little Britain to keep its geopolitical relevance, yet allows for the EU member states to exchange economy-currency into military-currency and vice versa. Let’s show some European teeth. But let’s also remain calm and intelligent.

Meanwhile we can slow down NATO's turning into a geopolitical plaything aimed at Russia. This Cold War 2.0 nonsense isn't benefiting world peace. Keeping the world of humans in as much peace as possible should nowadays be NATO's only goal. I hope there is still some time before any big war starts, so we can stop it from happening at all. We have so much technology, happiness and growth to give to the world of humans. Let us not waste it in a big, stupid worldwide conflict.

 

 

June 24, 2016


A perfectly safe cycle path, entirely on its own right of way, linking Ottignies to Brussels in only 16 km? All of it fully financed by taxpayer money?

A dream?

In fact, it is already a reality, one you have already financed to the tune of several billion euros.

There is just one small catch: the taxpayers who financed this marvel are forbidden to use it.

Because this wonderful cycle path is the route of the future RER. A construction project that has already swallowed billions of euros of public money, for a result that would be usable in 2024 at best. The most realistic forecasts, however, expect the RER to arrive around 2030. If it is ever finished at all, and not already obsolete before it even enters service.

From Ottignies to Brussels (Boitsfort station) there is thus a genuine asphalt road: smooth, flat, without a single climb and without any traffic. This road, in perfect condition, never comes closer than three metres to the railway tracks and is always separated from them by a kerb and at least a thin screen of vegetation. We have named it the VER, Vélo Express Régional.

Click to see the animation

The cycling association Gracq very recently announced that some of its members were using certain sections of the VER. The reaction of Infrabel, the track operator, was not long in coming: access to this road is strictly forbidden and supposedly even dangerous.

This road in perfect condition is therefore meant to sit unused and deteriorate pointlessly for at least a decade.

To get to the bottom of it, five cyclists decided to ride from Ottignies to Boitsfort on a day of general strike: Stéphane, Nils, Natacha, Yves and yours truly.


Proof that the idea is in the air: we were preparing our action while none of us was aware of Gracq's very similar one.

The verdict is clear: only the section between the stations of Genval and La Hulpe (2 km) has not yet been built. Getting through there is simply impossible without coming dangerously close to the tracks or crossing them (the finished stretch being on the other side of the tracks). You therefore have to leave the VER before Genval station and pick it up again at La Hulpe station, which means a 15-minute detour.

The rest of the route is entirely safe, on a wide, open road. Two stretches of about a hundred metres are sand and dirt but remain rideable on a mountain bike, the first at Profondsart and the second inside Boitsfort station itself.

Muddy stretch at Profondsart

In total? A VER of just over 16 km on absolutely flat terrain. A trained cyclist can cover it in half an hour. For those who prefer to take their time and enjoy the very pleasant surroundings, 45 to 50 minutes seems an absolute maximum. As long as the Genval to La Hulpe link is not completed, about an hour seems a reasonable time, even for a novice cyclist.

Part of the route is even covered

Another unforeseen obstacle: a patch of broken glass in Rixensart station tore the tyre of yours truly, forcing him to turn back while the other four continued towards Boitsfort.

But nothing will convince you better than a short video (only the last few kilometres are missing from it).

So, is it dangerous?

Yes, clearly. Having to make a detour between Genval and La Hulpe along streets open to car traffic and without cycle lanes is certainly the most dangerous part of the trip. A danger cyclists face every day, but one that could from now on be avoided thanks to the VER.

Outside the Genval/La Hulpe section, the trains always remain at a good distance and cannot possibly pose the slightest danger.

Is it legal?

No. Even though there was neither material damage nor any victim, the action we undertook is illegal.

Is this illegality justifiable?

Following Gracq's action, Infrabel's response was not long in coming: concrete planters were deliberately placed to block access for cyclists. Does that reaction strike you as responsible and useful?

Infrabel cannot stand the intolerable competition from the bicycle

Can the political powers that fight for mobility and for cutting pollution legitimately decide that cyclists have no right to be protected and must under no circumstances benefit from the VER?

Won't those politicians be morally responsible if a cyclist gets hit by a car because he chose to respect the ban on using the VER and rides on roads designed for the automobile?

Does a democratic state that financed the VER with taxpayer money have the right to forbid those same taxpayers from using it?

Shouldn't we, on the contrary, finish the Genval/La Hulpe link as quickly as possible and inaugurate a wonderful greenway along which a genuine local economy could spring up: refreshment stands for thirsty cyclists, repair workshops, meeting rooms and workspaces?

Creativity knows no limits. All that remains is to finish the work already begun.

Ladies and gentlemen politicians, you have today the opportunity to turn the greatest of Belgium's useless public works, a genuine squandering of public money (the RER), into a formidable ecological and economic investment, the VER.

Ladies and gentlemen politicians, one push is all it takes to finish the VER. The ball is in your court!

 

Cover photo: the start of the VER at the Jassans bridge in Ottignies.

Thank you for taking the time to read this pay-what-you-want post. Feel free to support me with a few millibitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

I’m happy to announce the immediate availability of Maps 3.7. This feature release brings some minor enhancements.

  • Added rotate control support for Google Maps (by Peter Grassberger)
  • Changed coordinate display on OpenLayers maps from long-lat to lat-long (by Peter Grassberger)
  • Upgraded Google marker cluster library to its latest version (2.1.2) (by Peter Grassberger)
  • Upgraded Leaflet library to its latest version (0.7.7) (by Peter Grassberger)
  • Added missing system messages (by Karsten Hoffmeyer)
  • Internal code enhancements (by Peter Grassberger)
  • Removed broken custom map layer functionality. You no longer need to run update.php for full installation.
  • Translation updates by TranslateWiki

Upgrading

Since this is a feature release, there are no breaking changes, and you can simply run composer update, or replace the old files with the new ones.

Beware that as of Maps 3.6, you need MediaWiki 1.23 or later, and PHP 5.5 or later. If you choose to remain with an older version of PHP or MediaWiki, use Maps 3.5. Maps works with the latest stable versions of both MediaWiki and PHP, which are the versions I recommend you use.

June 23, 2016


In every age, youth has rebelled against the old in order to move forward a society that conservatives, by their very nature, want to keep frozen.

Youth always wins in the end, even if it sometimes takes several generations of young people to get an idea accepted, possibly with setbacks along the way. Ultimately, all it takes is patience.

But today there is one problem for which, unfortunately, we no longer have the time to wait: saving our planet.

We no longer have the luxury of debating and letting conservatism painfully come around to the idea that, well, maybe the planet's resources are limited. We can no longer afford to spend fifteen years learning to put plastic waste in blue bags just to feel we are doing something for the environment.

We must act radically, here and now. We must fundamentally rethink everything in our society that destroys the planet or justifies its destruction.

And one of the main sources of destruction is clearly identified: employment! Nobody dares say it, or even think it, because it is a pillar of our society and of our identity.

For what is the real problem we are facing? We consume and we produce too much! It is as simple as that: our whole model of society is based on producing more in order to consume more, and consuming more in order to produce more.

And since we keep getting more productive, producing more with less work, we have no choice but to increase consumption.

Biodegradable packaging, emission reductions, building insulation and even well-meaning marches for the environment are just that: good intentions, wishful thinking.

All the speeches, all the political decisions and all the "green" technologies can do nothing more than slightly slow down the inevitable, as long as we do not realise that the one and only problem is our relationship with work.

Because a job is, in the end, nothing more than taking a share of the planet's resources and turning it into something else, producing waste along the way.

As long as we keep striving to "create jobs", we will consume, we will pollute, we will destroy the planet.

Yet far from questioning this fundamental cause, we have reached the supreme hypocrisy of "creating green jobs". The message of the green parties is that "being ecological creates jobs".

We try to make cars pollute a little less per kilometre driven, even rigging the tests to pretend they do, while the one real problem is that we drive far too many kilometres just to… get to work. Kilometres that require ever wider roads, which attract ever more drivers, who get slowed down ever more and therefore pollute even more.

We can no longer afford to "pollute less". We can no longer accept that the labels "ecological" or "green" get stuck onto anything that pollutes slightly less than the competition. We must radically change our way of life so that we stop polluting altogether, or even start regenerating the planet.

Questioning work triggers deep fears: nobody will do anything any more, people will be idle, civilisation will collapse.

But isn't even the worst-case scenario preferable to the outcome towards which we are inexorably heading?

Because if we look at what people do outside of work, whether volunteering, making art, helping each other, doing crafts or sport, a clear trend emerges: these activities do very little damage to the planet (with the exception of a few motor sports, or hunting).

Work, on the other hand, is an activity rarely done with pleasure, whose very essence is to destroy the planet or to encourage its destruction through consumption.

In the worst and most frightening of futures, a leisure society would lead to inequality, general impoverishment, perhaps even the collapse of civilisation. All of it potentially garnished with famines, epidemics and war. We can agree that this doomsday scenario is unlikely, but let us consider the worst.

We can see that, for humanity, this doomsday scenario is not fatal. A new civilisation will always end up rising again.

Whereas by continuing to work, to create jobs and to glorify work, we may be destroying our planet for good.

Out of fear of uncertainty, we prefer to offer our children a near certainty: that of being one of the last generations of human beings.

Humanity can recover from every catastrophe. Except one: the loss of its only planet.

It is urgent to get rid of employment as quickly as possible. To stop trying to negotiate with worried conservatives and to act without taking their opinion into account. We must join forces today, because we will not get a second chance.

So? How do we stop feeding the system?

 

Photo by Alan Cleaver.

Thank you for taking the time to read this pay-what-you-want post. Feel free to support me with a few millibitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.


I recorded a new episode of the SysCast podcast earlier this week, with Daniel Stenberg.

He's the author and maintainer of the curl project and we talk about curl & libcurl, HTTP/3, IETF and standards, OpenSSL vs LibreSSL and where the web is heading.

If you've got an interest in the web, HTTP and standards, this one's for you.

The post Podcast: curl, libcurl and the future of the web appeared first on ma.ttias.be.

June 22, 2016

Most VMware appliances (vCenter Appliance, VMware Support Appliance, vRealize Orchestrator) have the so-called VAMI: the VMware Appliance Management Interface, generally served via https on port 5480. VAMI offers a variety of functions, including "check updates" and "install updates". Some appliances offer to check/install updates from a connected CD ISO, but the default is always to check online. How does that work?

VMware uses a dedicated website to serve the updates: vapp-updates.vmware.com. Each appliance is configured with a repository URL: https://vapp-updates.vmware.com/vai-catalog/valm/vmw/PRODUCT-ID/VERSION-ID . The PRODUCT-ID is a hexadecimal code specific for the product. vRealize Orchestrator uses 00642c69-abe2-4b0c-a9e3-77a6e54bffd9, VMware Support Appliance uses 92f44311-2508-49c0-b41d-e5383282b153, vCenter Server Appliance uses 647ee3fc-e6c6-4b06-9dc2-f295d12d135c. The VERSION-ID contains the current appliance version and appends ".latest": 6.0.0.20000.latest, 6.0.4.0.latest, 6.0.0.0.latest.

The appliance checks for updates by retrieving /manifest/manifest-latest.xml relative to the repository URL. This XML contains the latest available version (the full version includes the build number), pre- and post-install scripts, the EULA, and a list of updated rpm packages. Each package entry has a relative path that can be appended to the repository URL and downloaded. The update procedure downloads the manifest and rpms, verifies checksums on the downloaded rpms, executes the preInstallScript, runs rpm -U on the downloaded rpm packages, executes the postInstallScript, displays the exit code and prompts for a reboot.

With this information, you can set up your own local repository (for cases where internet access is impossible from the virtual appliances), or you can even execute the procedure manually. Be aware that a manual update would be unsupported. Using a different repository is supported by a subset of VMware appliances (e.g. VCSA, VRO) but not all (VMware Support Appliance).
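
As a rough illustration, this is the kind of manual check you could do against the public repository (a sketch only; the product ID and version are the vCenter Server Appliance examples from above, and anything beyond fetching the manifest is left as a comment):

# Repository URL for the vCenter Server Appliance, built from the
# product ID and version ID mentioned above.
REPO=https://vapp-updates.vmware.com/vai-catalog/valm/vmw/647ee3fc-e6c6-4b06-9dc2-f295d12d135c/6.0.0.20000.latest

# Fetch the manifest the appliance itself would retrieve.
curl -s "$REPO/manifest/manifest-latest.xml" -o manifest-latest.xml

# The package paths listed in the manifest can be appended to $REPO and
# downloaded the same way; serving the result from a local web server
# gives you a private repository for appliances without internet access.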

June 21, 2016

I sent an internal note to all of Acquia's 700+ employees today and decided to cross-post it to my blog because it contains a valuable lesson for any startup. One of my personal challenges — both as an Open Source evangelist/leader and entrepreneur — has been to learn to be comfortable with not being understood. Lots of people didn't believe in Open Source in Drupal's early days (and some still don't). Many people didn't believe Acquia could succeed (and some still don't). Something is radically different in software today, and the world is finally understanding and validating that some big shifts are happening. In many cases, an idea takes years to gain general acceptance. Such is the story of Drupal and Acquia. Along the way it can be difficult to deal with the naysayers and rejections. If you ever have an idea that is not understood, I want you to think of my story.

Team,

This week, Acquia got a nice mention on Techcrunch in an article written by Jake Flomenberg, a partner at Accel Partners. For those of you who don't know Accel Partners, they are one of the most prominent venture capital investors and were early investors in companies like Facebook, Dropbox, Slack, Etsy, Atlassian, Lynda.com, Kayak and more.

The article, called "The next wave in software is open adoption software", talks about how the enterprise IT stack is being redrawn atop powerful Open Source projects like MongoDB, Hadoop, Drupal and more. Included in the article is a graph that shows Acquia's place in the latest wave of change to transform the technology landscape, a place showing that our opportunity is bigger than anything before, as the software industry migrated from mainframes to client-server, then to SaaS/PaaS, and now to what Flomenberg dubs the age of Open Adoption Software.

Waves of software adoption

It's a great article, but it isn't new to any of us per se – we have been promoting this vision since our start nine years ago and we have seen over and over again how Open Source is becoming the dominant model for how enterprises build and deliver IT. We have also shown that we are building a successful technology company using Open Source.

Why then do I feel compelled to share this article, you ask? The article marks a small but important milestone for Acquia.

We started Acquia to build a new kind of company with a new kind of business model, a new innovation model, all optimized for a new world. A world where businesses are moving most applications into the cloud, where a lot of software is becoming Open Source, where IT infrastructure is becoming a metered utility, and where data-driven services make or break business results.

We've been steadily executing on this vision; it is why we invest in Open Source (e.g. Drupal), cloud infrastructure (e.g. Acquia Cloud and Site Factory), and data-centric business tools (e.g. Acquia Lift).

In my 15+ years as an Open Source evangelist, I've argued with thousands of people who didn't believe in Open Source. In my 8+ years as an entrepreneur, I've talked to thousands of business people and dozens of investors who didn't understand or believe in Acquia's vision. Throughout the years, Tom and I have presented Acquia's vision to many investors – some have bought in and some, like Accel, have not (for various reasons). I see more and more major corporations and venture capital firms coming around to Open Source business models every day. This trend is promising for new Open Source companies; I'm proud that Acquia has been a part of clearing their path to being understood.

When former skeptics become believers, you know you are finally being understood. The Techcrunch article is a small but important milestone because it signifies that Acquia is finally starting to be understood more widely. As flattering as the Techcrunch article is, true validation doesn't come in the form of an article written by a prominent venture capitalist; it comes day-in and day-out by our continued focus and passion to grow Drupal and Acquia bit by bit, one successful customer at a time.

Building a new kind of company like we are doing with Acquia is the harder, less-traveled path, but we always believed it would be the best path for our customers, our communities, and ultimately, our world. Success starts with building a great team that not only understands what we do, but truly believes in what we do and remains undeterred in its execution. Together, we can build this new kind of company.

--
Dries Buytaert
Founder and Project Lead, Drupal
Co-founder and Chief Technology Officer, Acquia

These are currently the popular search terms on my blog:

  • blog amedee be
    Yeah, that’s this blog.
  • localhost
    Which used to be my IRC handle a looooong time ago.
  • upgrade squeeze to wheezy sed -i
    Sometimes I blog about Ubuntu, or Linux in general.
  • guild wars bornem
    Okay, I have played Guild Wars, but not very often, and I have been in Bornem, but the combination???
  • giftige amedeeamedee giftig
    Wait, I am toxic???
  • orgasme
    Ehhhh… dunno why people come looking for orgasms on my blog.
  • telenet service
    I used to blog about bad service I got a couple of times from Telenet.
  • taxipost 2007
    Dunno.
  • ik bond ixq
    Lolwut?

The post Popular Search Terms appeared first on amedee.be.

June 18, 2016

The standard WordPress RSS feeds don't include a post's featured image. The code below adds the medium-format thumbnail to each item in an RSS2 standards-compliant manner by inserting it as an enclosure.

// Add the featured image (medium size) as an RSS2 <enclosure> on each feed item.
add_action('rss2_item', 'add_enclosure_thumb');
function add_enclosure_thumb() {
  global $post;
  if (has_post_thumbnail($post->ID)) {
    $thumbUrl = get_the_post_thumbnail_url($post->ID, "medium");

    // Derive the MIME type from the file extension.
    if ((substr($thumbUrl, -4) === "jpeg") || (substr($thumbUrl, -3) === "jpg")) {
      $mimeType = "image/jpeg";
    } else if (substr($thumbUrl, -3) === "png") {
      $mimeType = "image/png";
    } else if (substr($thumbUrl, -3) === "gif") {
      $mimeType = "image/gif";
    } else {
      $mimeType = "image/unknown";
    }

    // Map the URL back to a path on disk to get the file size in bytes.
    $thumbSize = filesize(WP_CONTENT_DIR . str_replace(WP_CONTENT_URL, '', $thumbUrl));

    // RSS 2.0 enclosures take url, length (in bytes) and type attributes.
    echo "<enclosure url=\"" . $thumbUrl . "\" length=\"" . $thumbSize . "\" type=\"" . $mimeType . "\" />\n";
  }
}
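
For illustration, the element emitted into each feed item looks something like this (the URL and size are made up):

<enclosure url="https://example.org/wp-content/uploads/2016/06/photo-300x200.jpg" length="48213" type="image/jpeg" />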

A more advanced & flexible approach would be to add support for the media RSS namespace, but the above suffices for the purpose I have in mind.

June 16, 2016

As a general rule, I try not to include new features in angular-gettext: small is beautiful and for the most part I consider the project as finished. However, Ernest Nowacki just contributed one feature that was too good to leave out: translation parameters.

To understand what translation parameters are, consider the following piece of HTML:

<span translate>Last modified: {{post.modificationDate | date : 'yyyy-MM-dd HH:mm'}} by {{post.author}}.</span>

The resulting string that needs to be handled by your translators is both ugly and hard to use:

msgid "Last modified: {{post.modificationDate | date : 'yyyy-MM-dd HH:mm'}} by {{post.author}}."

With translation parameters you can add local aliases:

<span translate
      translate-params-date="post.modificationDate | date : 'yyyy-MM-dd HH:mm'"
      translate-params-author="post.author">
    Last modified: {{date}} by {{author}}.
</span>

With this, translators only see the following:

msgid "Last modified: {{date}} by {{author}}."

Simply beautiful.

You’ll need angular-gettext v2.3.0 or newer to use this feature.

More information in the documentation: https://angular-gettext.rocketeer.be/dev-guide/translate-params/.


Comments | More on rocketeer.be | @rubenv on Twitter

June 15, 2016

and if so, why haven't they done so yet?

Contrary to what many people think, containers are not new; they have been around for more than a decade. They have, however, only recently become popular with a larger part of our ecosystem. Some people think containers will eventually take over.

Imvho it is all about application workloads. When I wrote about a decade of open source virtualization 8 years ago, we looked at containers as the solution for running a large number of isolated instances of something on a machine. And by large we meant hundreds or more instances of Apache; this was one of the example use cases for an ISP that wanted to give a secure but isolated platform to its users. One container per user.

The majority of enterprise use cases, however, were full VMs. Partly because we were still consolidating existing services onto VMs and weren't planning on changing the deployment patterns yet, but mainly because most organisations didn't have the need to run 100 similar or identical instances of an application or a service. They were going from 4 bare-metal servers to 40-something VMs, but they had not yet reached the point of needing to run hundreds of them. The software architecture had just moved from fat-client applications that talked directly to bloated relational databases containing business logic, to web-enabled multi-tier applications. In those days, when you suggested running 1 Tomcat instance per VM because VMs were cheap and it would make management easier ("Oh oops, I shut down the wrong Tomcat instance"), people gave you very weird looks.

Software architectures are slowly changing. Today the new breed of applications is small, single-function and dedicated, and it interacts frequently with its peers; combined, they provide functionality similar to a big fat application of 10 years ago. But when you look at the market, that new breed is still a minority. So a modern application might consist of 30-50 really small ones, all with different deployment speeds. And unlike 10 years ago, when we had to fight hard to be able to build dev, acceptance and production platforms, people now consider that practice normal. So today we do get environments that quickly grow to 100+ instances, yet require similar CPU power as before, so the use case for containers as we proposed it in the early days is slowly becoming a more common one.

So yes, containers might take over ... but before that happens, a lot of software architectures will need to change, a lot of elephants will need to be sliced, and that is usually what blocks cloud, container, agile and devops adoption.

The first law of Serge van Ginderachter, which would be myself, is

One has more problems with anti-virus software than with the viruses themselves.

Originally stated in 2006, in Dutch.

One of the prominent features of the recent Activiti 5.21.0 release is ‘secure scripting’. The way to enable and use this feature is documented in detail in the Activiti user guide. In this post, I’ll show you how we came to its final implementation and what it’s doing under the hood. And of course, as it […]

June 14, 2016

In my latest SXSW talk, I showed a graphic of each of the major technology giants to demonstrate how much of our user data each company owned.

Microsoft linkedin data

I said they won't stop until they know everything about us. Microsoft just bought LinkedIn, so here is what happened:

Data ownership

By acquiring the world's largest professional social network, Microsoft gets immediate access to data from more than 433 million LinkedIn members. Microsoft fills out the "social graph" and "interests" circles. There is speculation over what Microsoft will do with LinkedIn over time, but here is what I think is most likely:

  • With LinkedIn, Microsoft could build out its Microsoft Dynamics CRM business to reinvent the sales and marketing process, helping the company compete more directly with SalesForce.
  • LinkedIn could allow Microsoft to implement a "Log in with LinkedIn" system similar to Facebook Connect. Microsoft could turn LinkedIn profiles into a cross-platform business identity to better compete with Google and Facebook.
  • LinkedIn could allow Microsoft to build out Cortana, a workplace-tailored digital assistant. One scenario Microsoft referenced was walking into a meeting and getting a snapshot of each attendee based on his or her LinkedIn profile. This capability will allow Microsoft to better compete against virtual assistants like Google Now, Apple Siri and Amazon Echo.
  • LinkedIn could be integrated in applications like Outlook, Skype, Office, and even Windows itself. Buying LinkedIn helps Microsoft limit how Facebook and Google are starting to get into business applications.

Data is eating the world

In the past I wrote that data, not software, is eating the world. The real value in technology comes less and less from software and more and more from data. As most businesses are moving applications into the cloud, a lot of software is becoming free, IT infrastructure is becoming a metered utility, and data is what really makes or breaks business results. Here is one excerpt from my post: "As value shifts from software to the ability to leverage data, companies will have to rethink their businesses. In the next decade, data-driven, personalized experiences will continue to accelerate, and development efforts will shift towards using contextual data." This statement is certainly true in Microsoft / LinkedIn's case.

Microsoft linkedin graphs

Source: Microsoft.

If this deal shows us anything, it's about the value of user data. Microsoft paid more than $60 per registered LinkedIn user. The $26.2 billion price tag values LinkedIn at about 91 times earnings, and about 7 percent of Microsoft's market cap. This is a very bold acquisition. You could argue that this is too hefty a price tag for LinkedIn, but this deal is symbolic of Microsoft rethinking its business strategy to be more data and context-centric. Microsoft sees that the future for them is about data and I don't disagree with that. While I believe acquiring LinkedIn is a right strategic move for Microsoft, I'm torn over whether or not Microsoft overpaid for LinkedIn. Maybe we'll look back on this acquisition five years from now and find that it wasn't so crazy, after all.

June 13, 2016

I'm working on getting even more moving parts automated; those who use Jenkins frequently probably also have a love-hate relationship with it.

The love comes from the flexibility, stability and power you get from it, the hate from its UI. If you've ever had to create a new Jenkins job or even a pipeline based on one that already existed, you've gone through the horror of click-and-paste errors, and you know where the hate breeds.

We've been trying to automate this with different levels of success: we've puppetized the XML jobs, we've used the Buildflow plugin (reusing the same job for different pipelines is a bad idea..), we played with JJB and ran into issues with some plugins (Promoted Build), and most recently we have put our hope in the Job DSL.

While toying with the DSL I ran into a couple of interesting behaviours. Imagine you have an entry like this, which is supposed to replace ${foldername} with the content of the variable and actually take the correct upstream:

  cloneWorkspace('${foldername}/dashing-dashboard-test', 'Successful')

You generate the job, look inside the Jenkins UI to verify what the build result was .. save the job and run it .. success ..
Then a couple of runs later that same job gives an error ... it can't find the upstream job to copy the workspace from. You once again open up the job in the UI, look at it .. save it, run it again and then it works .. a typical Heisenbug ..

When you look closer at the XML of the job, you notice ..

  <parentJobName>${foldername}/dashing-dashboard-test</parentJobName>

obviously wrong .. I should have used double quotes so Groovy would actually interpolate the variable ..
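
For the record, the corrected DSL line simply swaps the quoting so the GString gets interpolated:

  cloneWorkspace("${foldername}/dashing-dashboard-test", 'Successful')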

But why doesn't it look wrong in the UI? That's because the UI autoselects the first option from its autogenerated pull-down list .. which actually contains the right upstream workspace I wanted to trigger (that will teach me to use 00 as a prefix for the foldername for all my tests..)

So when working with the DSL .. review the generated XML .. not just whether the job works ..

I was playing around with Easy Digital Downloads (because this) and I chose EUR as the currency, but I wanted the price to also be displayed in USD. Obviously there's a premium add-on for that, but as I don't want to purchase stuff just yet, I concocted an alternative myself. Here's the resulting snippet of code that shows the price in USD for shops with EUR as their currency, and the price in EUR when the shop is in USD:

add_action("plugins_loaded","edd_curconv_init");
function edd_curconv_init() {
	$curpos = edd_get_option( 'currency_position', 'before' );
	$curcur = strtolower(edd_get_currency());
  	if (in_array($curcur, array("eur","usd"))) {
	  $filtername="edd_".$curcur."_currency_filter_".$curpos;
	  add_filter($filtername, "edd_eur_dollar_conv",10,3);
	}
}

function edd_eur_dollar_conv($formatted, $currency, $price) {
  $rate=1.13;
  if ($currency === "EUR") {
	$outprice = $price * $rate;
	$outrate = "USD";
  } else if ($currency === "USD") {
	$outprice = $price / $rate;
	$outrate = "EUR";
  }
  
  if (!empty($outprice)) {
	$out = " ( ~ ".edd_currency_filter(round($outprice,2),$outrate).")";
	$formatted.=$out;
  }
  
  return $formatted;
}

This obviously lacks the features and robustness of that Currency Converter add-on, so (don’t) use (unless) at your own risk.

We just released Activiti version 5.21.0! This release contains some quite important bugfixes; more specifically, it fixes some cases where the end time was not set for activities in a process definition under certain conditions. A concurrency bug was discovered when using delegateExpressions together with field injection. Make sure to read the updated documentation section […]

June 10, 2016

From the day we started Acquia, we had big dreams: we wanted to build a successful company, while giving back to the Open Source community. Michael Skok was our first investor in Acquia and instrumental in making Acquia one of the largest Open Source companies in the world, creating hundreds of careers for people passionate about Open Source. This week, Michael and his team officially announced a new venture firm called _Underscore.VC. I'm excited to share that I joined _Underscore.VC as a syndicate lead for the "Open Source _Core".

I'm very passionate about Open Source and startups, and want to see more Open Source startups succeed. In my role as the syndicate lead for the Open Source _Core, I can help other Open Source entrepreneurs raise money, get started and scale their companies and Open Source projects.

Does that mean I'll be leaving Drupal or Acquia? No. I'll continue as the lead of the Drupal project and the CTO of Acquia. Drupal and Acquia continue to be my full-time focus. I have been advising entrepreneurs and startups for the last 5+ years, and have been a moderately active angel investor the past two years. Not much, if anything, will change about my day-to-day. _Underscore.VC gives me a better platform to advise and invest, give back and help others succeed with Open Source startups. It's a chance to amplify the "do well and do good" mantra that drives me.

Mautic and the power of syndicates

While Michael, the _Underscore.VC team and I have been working on _Underscore.VC for quite some time, I'm excited to share that on top of formally launching this week, they've unveiled a $75 million fund, as well as our first seed investment. This first investment is in Mautic, an Open Source marketing automation company.

Mautic is run by David Hurley, who I've known since he was a community manager at Joomla!. I've had the opportunity to watch David grow for many months. His resourcefulness, founding and building the Mautic product and Open Source community impressed me.

The Mautic investment is a great example of _Underscore.VC's model in action. Unlike a traditional firm, _Underscore.VC co-invests with a group of experts, called a syndicate, or in the case of _Underscore.VC a "_Core". Each _Core has one or more leads that bring companies into the process and gather the rest of the investors to form a syndicate.

As the lead of the Open Source _Core, I helped pull together a group of investors with expertise in Open Source business models, marketing automation, and SaaS. The list of people includes Larry Augustin (CEO of SugarCRM), Gail Goodman (CEO of Constant Contact), Erica Brescia (Co-Founder and COO of Bitnami), Andrew Aitken (Open Source Lead at Wipro) and more. Together with _Underscore.VC, we made a $600,000 seed investment in Mautic. In addition to the funding, Mautic will get access to a set of world-class advisors invested in helping them succeed.

I personally believe the _Underscore.VC model has the power to transform venture capital. Having raised over $180 million for Acquia, I can tell you that fundraising is no walk in the park. Most investors still don't understand Open Source business models. To contrast, our Open Source _Core group understands Open Source deeply; we can invest time in helping Mautic acquire new customers, recruit great talent familiar with Open Source, partner with the right companies and navigate the complexities of running an Open Source business. With our group's combined expertise, I believe we can help jumpstart Mautic and reduce their learnings by one to two years.

It's also great for us as investors. By combining our operating experience, we hope to attract entrepreneurs and startups that most investors may not get the opportunity to back. Furthermore, the _Core puts in money at the same valuation and terms as _Underscore.VC, so we can take advantage of the due diligence horsepower that _Underscore.VC provides. The fact that _Underscore.VC can write much larger checks is also mutually beneficial to the _Core investor and the entrepreneur; it increases the chances of the entrepreneur succeeding.

If you're starting an Open Source business, or if you're an angel investor willing to co-invest in the Open Source _Core, feel free to reach out to me or to get in touch with _Underscore.VC.

June 09, 2016

This morning I was reminded that, 4 years ago, I was looking for a project to get some experience with Java, C or C++.
Looking back, I started working on Getback GPS, an Android app (learning some Java), and later on another project called Buildtime Trend, which gave me some Python and JavaScript experience.
So in 4 years, I started 2 Open Source projects, learned 3 new programming languages, and picked up some other technologies and frameworks along the way.

I can say I learned a lot over the last few years, on a technical level, but it also made me realise that it is possible to learn new things if you set your mind to it. You just have to start doing it: try things, fail, learn from it, try again, read a tutorial, look for questions and answers (e.g. on Stack Overflow), go to conferences, talk to experienced people, join a project that uses the technology you want to learn.

And this is not limited to technology. Want to learn a musical instrument? How to make a cake? How to become a great speaker? Learn to swim longer or faster?

This is all possible. You just have to start doing it and practise. Take small steps at the start. Allow yourself to fail, but learn from it and improve. You might need some guidance or coaching, or a course to give you a head start.

I'm not saying it won't be hard; sometimes you keep failing, stop making progress and get frustrated. That's the time to take a step back, monitor your progress and examine the goals you have set yourself. Are you doing it the right way? Can it be done differently? Do you have all the required skills to make progress? Maybe you need to practise something else first?

Anyway, keep the end goal in mind, take small steps and enjoy the journey. Enjoying what you are doing or achieving is an important motivator.
If you set your mind to it, you can learn anything you want.

Which reminds me of this video on how to learn anything in 20 hours:





June 06, 2016


If you've ever downloaded a torrent, chances are you've cursed at the slow download speeds. That could be your ISP throttling the connection (thanks, Telenet), but it could also be that the trackers or peers you're using are just slow or unresponsive.

Since torrents aren't only used for illegal downloads, I figured I'd share a list of known good public trackers. If your Linux ISO downloads are slow, add these to the mix and you should see a significant speedup.

udp://tracker.opentrackr.org:1337/announce

http://explodie.org:6969/announce

http://mgtracker.org:2710/announce

http://tracker.tfile.me/announce

udp://9.rarbg.com:2710/announce

udp://9.rarbg.me:2710/announce

udp://9.rarbg.to:2710/announce

udp://tracker.coppersurfer.tk:6969/announce

udp://tracker.glotorrents.com:6969/announce

udp://tracker.leechers-paradise.org:6969/announce

udp://open.demonii.com:1337

udp://tracker.openbittorrent.com:80

Some of these are provided by the OpenBitTorrent initiative, others are community-supported trackers.

To add any of these, edit the properties of your torrent and add the trackers listed above.

In the case of uTorrent, edit the properties of the torrent and just copy/paste the list above in the Trackers list.

If all goes well, your torrent client should show a list of trackers it's using.

torrent_tracker_list

The result should be a significantly faster download because it can find more peers.

If you're still suffering from slow downloads, look into using a VPN or seedbox that downloads the torrents for you, so you can download it via ssh or another protocol.

The post Open Torrent Tracker List (2016) appeared first on ma.ttias.be.

The Gotthard Base Tunnel, under construction for the last 17 years, was officially opened last week. This is the world's longest and deepest railroad tunnel, spanning 57 kilometers from Erstfeld to Bodio, Switzerland, underneath the Swiss Alps. To mark its opening, Switzerland also launched a multilingual multimedia website celebrating the project's completion. I was excited to see they chose to build their site on Drupal 8! The site is a fitting digital tribute to an incredible project and launch event. Congratulations to the Gotthard Base Tunnel team!

Gottardo

We believed that everything was property, that every atom belonged to the first person to claim it.

But we forgot that matter has always existed, that it was handed down to us and that we will hand it down in turn, no matter the transactions, the sales and the purchases. We are only its temporary custodians.

We believed that everything could be sold and everything could be bought. That in order to survive, we had to buy, and therefore sell in order to earn enough to buy.

But we forgot that sometimes we no longer even have enough to buy the bare necessities. So we punished those who found themselves in that situation, we blamed them, and we convinced ourselves that we would never be like them. We split humanity in two.

We believed that we had to earn more in order to live more and to own more. That we had no choice. That we had to sell our bodies, our minds or else objects. Or sell ideas to help others sell more. Or teach others the best way to sell.

But we forgot that a choice is something you make. That accepting a job further away but better paid, in order to consume more, is a choice. That accepting a job that pushes others to consume is a choice. We refused to see that each of us is responsible for our own work, for the impact it has on the world.

We believed that owning was our ultimate goal, that we had to amass, buy, consume.

But we forgot that objects have no master. That at most they can give us a hint of joy while we use them for a few minutes or a few hours. And that, the rest of the time, they clutter our lives, make us unhappy and convince us to buy even more.

We believed that property brought freedom. That an owner could enjoy his possessions as he pleased without worrying about the consequences.

But we forgot that borders and boundary lines are only virtual demarcations. That we possess one single planet, which suffers globally from every one of our actions.

We believed that ideas were property. That even seeds and the genome had to be patented. That sharing amounted to stealing.

But we forgot that an idea that is not shared freezes and is forgotten. That living things care nothing for our patents. That by trying to control ownership, we could only stop thinking.

We believed we enjoyed the fruits of ownership.

But we forgot that we are only borrowing every molecule, every day, from the future.

We believed we had no choice and had to buy our freedom.

But we forgot that freedom is, above all, making choices. Our own choices.

 

Photo by Stefano Corso.

Thank you for taking the time to read this pay-what-you-want post. Feel free to support me with a few millibitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

In an earlier blog post, I looked at the web services solutions available in Drupal 8 and compared their strengths and weaknesses. That blog post was intended to help developers choose between different solutions when building Drupal 8 sites. In this blog post, I want to talk about how to advance Drupal's web services beyond Drupal 8.1 for the benefit of Drupal core contributors, module creators and technical decision-makers.

I believe it is really important to continue advancing Drupal's web services support. There are powerful market trends that oblige us to keep focused on this: integration with diverse systems having their own APIs, the proliferation of new devices, the expanding Internet of Things (IoT), and the widening adoption of JavaScript frameworks. All of these depend to some degree on robust web services.

Moreover, newer headless content-as-a-service solutions (e.g. Contentful, Prismic.io, Backand and CloudCMS) have entered the market and represent a widening interest in content repositories enabling more flexible content delivery. They provide content modeling tools, easy-to-use tools to construct REST APIs, and SDKs for different programming languages and client-side frameworks.

In my view, we need to do the following, which I summarize in each of the following sections: (1) facilitate a single robust REST module in core; (2) add functionality to help web services modules more easily query and manipulate Drupal's entity graph; (3) incorporate GraphQL and JSON API out of the box; and (4) add SDKs enabling easy integration with Drupal. Though I shared some of this in my DrupalCon New Orleans keynote, I wanted to provide more details in this blog post. I'm hoping to discuss this and revise it based on feedback from you.

One great REST module in core

While core REST can be enabled with only a few configuration changes, the full extent of possibilities in Drupal is only unlocked either when leveraging modules which add to or work alongside core REST's functionality, such as Services or RELAXed, or when augmenting core REST's capabilities with additional resources to interact with (by providing corresponding plugins) or using other custom code.

Having such disparate REST modules complicates the experience. These REST modules have overlapping or conflicting feature sets, which are shown in the following table.


Feature | Core REST | RELAXed | Services | Ideal core REST
Content entity CRUD | Yes | Yes | Yes | Yes
Configuration entity CRUD | Create resource plugin (issue) | Create resource plugin | Yes | Yes
Custom resources | Create resource plugin | Create resource plugin | Create Services plugin | Possible without code
Custom routes | Create resource plugin or Views REST export (GET) | Create resource plugin | Configurable route prefixes | Possible without code
Translations | Not yet (issue) | Yes | Create Services plugin | Yes
Revisions | Create resource plugin | Yes | Create Services plugin | Yes
File attachments | Create resource plugin | Yes | Create Services plugin | Yes
Authenticated user resources (log in/out, password reset) | Not yet (issue) | No | User login and logout | Yes

I would like to see a convergence where all of these can be achieved in Drupal core with minimal configuration and minimal code.

Working with Drupal's entity graph

Recently, a discussion at DrupalCon New Orleans with key contributors to the core REST modules, maintainers of important contributed web services modules, and external observers led to a proposed path forward for all of Drupal's web services.

Web services entity graph

A visual example of an entity graph in Drupal.

Buried inside Drupal is an "entity graph" over which different API approaches like traditional REST, JSON API, and GraphQL can be layered. These varied approaches all traverse and manipulate Drupal's entity graph, with differences solely in the syntax and features made possible by that syntax. Unlike core's REST API which only returns a single level (single entity or lists of entities), GraphQL and JSON API can return multiple levels of nested entities as the result of a single query. To better understand what this means, have a look at the GraphQL demo video I shared in my DrupalCon Barcelona keynote.

What we concluded at DrupalCon New Orleans is that Drupal's GraphQL and JSON API implementations require a substantial amount of custom code to traverse and manipulate Drupal's entity graph, that there was a lot of duplication in that code, and that there is an opportunity to provide more flexibility and simplicity. Therefore, it was agreed that we should first focus on building an "entity graph iterator" that can be reused by JSON API, GraphQL, and other modules.

This entity graph iterator would also enable manipulation of the graph, e.g. for aliasing fields in the graph or simplifying the structure. For example, the difference between Drupal's "base fields" and "configured fields" is irrelevant to an application developer using Drupal's web services API, but Drupal's responses leak this internal distinction by prefixing configured fields with field_ (see the left column in the table below). By the same token, all fields, even if they carry single values, expose the verbosity of Drupal's typed data system by being presented as arrays (see the left column in the table below). While there are both advantages and disadvantages to exposing single-value fields as arrays, many developers prefer more control over the output or the ability to opt into simpler outputs.

A good Drupal entity graph iterator would simplify the development of Drupal web service APIs, provide more flexibility over naming and structure, and eliminate duplicate code.


Current core REST (shortened response):

{
  "nid": [
    {
      "value": "2"
    }
  ],
  "title": [
    {
      "value": "Lorem ipsum"
    }
  ],
  "field_product_number": [
    {
      "value": "35"
    }
  ],
  "field_image": [
    {
      "target_id": "2",
      "alt": "Image",
      "title": "Hover text",
      "width": "210",
      "height": "281",
      "url": "http://site.com/x.jpg"
    }
  ]
}

Ideal core REST (shortened response):

{
  "nid": "2",
  "title": "Lorem ipsum",
  "product_number": {
    "value": 35
  },
  "image": {
    "target_id": 2,
    "alt": "Image",
    "title": "Hover text",
    "width": 210,
    "height": 281,
    "url": "http://site.com/x.jpg"
  }
}

GraphQL and JSON API in core

We should acknowledge simultaneously that the wider JavaScript community is beginning to embrace different approaches, like JSON API and GraphQL, which both enable complex relational queries that require fewer requests between Drupal and the client (thanks to the ability to follow relationships, as mentioned in the section concerning the entity graph).

While both JSON API and GraphQL are preferred over traditional REST due to their ability to provide nested entity relationships, GraphQL goes a step further than JSON API by facilitating explicitly client-driven queries, in which the client dictates its data requirements.

I believe that GraphQL and JSON API in core would be a big win for those building decoupled applications with Drupal, and these modules can use existing foundations in Drupal 8 such as the Serialization module. Furthermore, Drupal's own built-in JavaScript-driven UIs could benefit tremendously from GraphQL and JSON API. I'd love to see them in core rather than as contributed modules, as we could leverage them when building decoupled applications backed by Drupal or exchanging data with other server-side implementations. We could also "eat our own dog food" by using them to power JavaScript-driven UIs for block placement, media management, and other administrative interfaces. I can even see a future where Views and GraphQL are closely integrated.

Web services rest json grapql

A comparison of different API approaches for Drupal 8, with amended and simplified payloads for illustrative purposes.

SDKs to consume web services

While a unified REST API and support for GraphQL and JSON API would dramatically improve Drupal as a web services back end, we need to be attentive to the needs of consumers of those web services as well by providing SDKs and helper libraries for developers new to Drupal.

An SDK could make it easy to retrieve an article node, modify a field, and send it back without having to learn the details of Drupal's particular REST API implementation or the structure of Drupal's underlying data storage. For example, this would allow front-end developers to not have to deal with the details of single- versus multi-value fields, optional vs required fields, validation errors, and so on. As an additional example, incorporating user account creation and password change requests into decoupled applications would empower front-end developers building these forms on a decoupled front end such that they would not need to know anything about how Drupal performs user authentication.

As starting points for JavaScript applications, native mobile applications, and even other back-end applications, these SDKs could handle authenticating against the API and juggling of the correct routes to resources without the front-end developer needing an understanding of those nuances.

In fact, at Acquia we're now in the early stages of building the first of several SDKs for consuming and manipulating data via Drupal 8's REST API. Waterwheel (previously Hydrant), a new generic helper library intended for JavaScript developers building applications backed by Drupal, is the work of Acquia's Matt Grill and Preston So, and it is already seeing community contributions. We're eager to share our work more widely and welcome new contributors.
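
To give a feel for the abstraction level such an SDK could offer, here is a hypothetical TypeScript sketch. It is not the actual Waterwheel API; the interface and method names are invented for illustration only.

// Hypothetical SDK surface: the consumer never sees Drupal's field structure,
// routes or authentication details.
interface ArticleClient {
  load(id: number): Promise<{ title: string; body: string }>;
  save(id: number, fields: Partial<{ title: string; body: string }>): Promise<void>;
}

async function renameArticle(client: ArticleClient, id: number, title: string): Promise<void> {
  const article = await client.load(id);   // the SDK resolves the route and authenticates
  console.log(`Renaming "${article.title}" to "${title}"`);
  await client.save(id, { title });        // the SDK handles serialization and validation errors
}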

Conclusion

I believe that it is important to have first-class web services in Drupal out of the box in order to enable top-notch APIs and continue our evolution to become API-first.

In parallel with our ongoing work on shoring up our REST module in core, we should provide the underpinnings for even richer web services solutions in the future. With reusable helper functionality that operates on Drupal's entity graph available in core, we open the door to GraphQL, JSON API, and even our current core REST implementation eventually relying on the same robust foundation. Both GraphQL and JSON API could also be promising modules in core. Last but not least, SDKs like Waterwheel that empower developers to work with Drupal without learning its complexities will further advance our web services.

Collectively, these tracks of work will make Drupal uniquely compelling for application developers within our own community and well beyond.

Special thanks to Preston So for contributions to this blog post and to Moshe Weitzman, Kyle Browning, Kris Vanderwater, Wim Leers, Sebastian Siemssen, Tim Millwood, Ted Bowman, and Mateu Aguiló Bosch for their feedback during its writing.

June 04, 2016

Dear Opa,

We just got the news that you passed away while we were in flight from Boston to Amsterdam. We landed an hour ago, and now I'm writing you this letter on the train from Amsterdam to Antwerp. We were on our way to come visit you. We still will.

I wish I could have had one last drink with you, chat about days gone by, and listen to your many amazing life stories. But most of all, I wanted to thank you in person. I wanted to thank you for making a lasting mark on me.

I visited you in the hospital two months ago, but I never had the courage to truly say goodbye or to really thank you. I was hoping I'd see you again. I'm in tears now because I feel you might never know how important you were to me.

I can't even begin to thank you for everything you've taught me. The way you invented things -- first in your job as an engineer and researcher, and later in automating and improving your home. The way you taught me how to sketch -- I think of you each time I draw something. The way you shared your knowledge and insight and how you always kept reading and learning -- even as recently as two months ago you asked me to bring you a book on quantum physics. The way you cooked and cared for Oma every single day and the way you were satisfied with a modest but happy family life. The way you unconditionally loved all your grandchildren, no matter what choices we made -- with you we never had to live up to expectations, yet you encouraged us to make the most of our talents.

There are no words. No words at all for how you impacted my life and how you helped me become the person I've become. Few adults have the opportunity to really get to know their grandparents. I have been lucky to have known you for 37 years. Thank you for our time together. Your impact on me is deep, and forever. You made your mark.

Love,

Dries

Wedding
We heart opa

Een consument kan zijn geld maar één keer uitgeven. Maar men geeft het wel uit onderhevig aan inflatie. Dit wil zeggen dat wanneer we een goed kochten in juni 2006, bv. een auto, dat het uitgegeven geld, volgens de EUCPI2005 index, zo’n 17,95% goedkoper is in 2016. Louter op basis van inflatie. Dit was zo tijdens een decennium met jaren van quasi deflatie (= ongezien). Toch halen we bijna 18% waardevermindering op tien jaar.

Wat gebeurt er economisch wanneer de consument betaalt met privacy? Men zal in de toekomst immers een deel van de auto betalen door privacy op te geven: de prijs van die wagen zal dalen precies omdat allerlei organisaties zich gaan bezighouden met locatiedata van de consument (en weet ik veel wat nog allemaal). De consument betaalt dat deel met wat ik privacy-currency zal noemen.

Mijn eigen inzicht is dat een privacy-currency weliswaar meervoudig uitgegeven wordt; oude gegevens worden steeds minder waard. Maar diensten die met privacy-currency werken hebben vaak een langdurige stroom bij haar consument bemachtigd. Daarmee bedoel ik dat het over een sensor gaat (een smartphone met pervasieve app, een thermostaat die jaren aan de muur hangt, een digicorder die dankzij monopolie jaren lang TV-kijkgewoontes vastlegt) die niet éénmalig maar wel steeds weer dezelfde consumentenprivacy “verkoopt” aan het bedrijf.

Zo’n sensor kan slechts eenmalig geïnstalleerd worden. Want de markt zorgt ervoor dat een privacygegeven zo goedkoop mogelijk geëxtraheerd wordt. Vijf keer vastleggen wat de TV-kijkgewoontes van een consument zijn, heeft in de markt geen nut: de markt zal de efficiëntste verkiezen. Die zal het aan de anderen verkopen.

Dit wil voor mij zeggen dat privacy-currency in inflatie zal gaan. De currency wordt steeds minder waard. De installatie van een sensor heeft nu een zekere prijs (je moet je dwaze product aan de man brengen), maar zal in de toekomst steeds minder opbrengen.

Voorts kost het voor de consument steeds meer om zijn of haar privacy op te geven: men verliest opties bij verzekeringen, men verliest werkgelegenheid, men verliest vriendschappen en zal gepest of aangesproken worden. Typisch zijn oudere mensen dan ook meer gesteld op hun privacy. Ze verkopen hun privacy steeds duurder. Hun ongeletterdheid in technologie ontwijkt dit nog even; maar iedereen weet dat dat van korte duur is.

Dit geeft dat er twee vectoren zijn die de privacy-currency in inflatie doen gaan: de markt maakt een sensor minder waard uit efficiëntieoverweging, en de consument maakt een sensor minder wenselijk door een kleine maar niet onbestaande vergroting van kennis in technologie (en haar kwalen).

Bedrijven die hun waarde in de vermeerdering van privacy-currency leggen, zullen op middellange termijn failliet gaan. Want zelfs geld is een betere focus. De financiële sector heeft het traditioneel dan ook goed.

June 03, 2016


In preparation for the launch of SysCast, the screencasting site where you can learn about Linux and open source, I started a podcast: the SysCast podcast!

I've been playing with this idea for a couple of months and having been a guest on a number of podcasts (on HTTP/2 and DevOps), I decided I wanted to start my own.

As a result, the SysCast podcast was born!


The first 3 episodes have been recorded and are available online:

  1. The Caddy webserver, with Matt Holt
  2. An introduction to Docker, with Nils de Moor
  3. Managing secrets with Vault, with Seth Vargo

In terms of content, I try to find a solid mix between Linux, open source, web development and system administration. Expect a mix of both Dev and Ops topics. You might even call it DevOps.

I'm an avid listener of podcasts in my daily commute or when going shopping, trying to fill every bit of spare time with an interesting podcast so I can learn new things (something about being obsessively efficient). My goal is to make SysCast fit into that category, too.

Want to subscribe to updates? There are a couple of ways:

There are plenty of podcasting apps out there; if you search for SysCast in any of them, it should pop up.

A very big thanks to my first 3 guests and to everyone who's been listening and sending in feedback! I'd love to hear what other topics you would like to hear about or which interesting guests I could bring on the show.

I've already got some interesting guests lined up for the next few weeks too!

So as of now, I can add Podcaster to my Twitter bio. Geek status++.

The post I started a podcast for sysadmins and developers: SysCast appeared first on ma.ttias.be.

I first heard Débruit a couple of weeks ago while dozing off listening to Lefto’s late-nite show on local radiostation Studio Brussels and the set was that good that I wanted to wake up to listen more carefully.

Débruit is a French producer (apparently currently living in Brussels), who seamlessly merges electronica with African and Middle-Eastern influences and collaborations. He just released “Débruit & Istanbul”, an album based on his Europalia-commissioned explorations of Istanbul in 2015.

The video below is from the great Boiler Room series and although it doesn’t feature his latest work, it is just as diverse and exciting as what I heard on the radio:

YouTube Video

June 02, 2016

The battle for the marketing cloud just got way more interesting. This week, Salesforce announced its acquisition of Demandware for $2.8B in cash. It will enable Salesforce to offer a "Commerce Cloud" alongside its sales and marketing solutions.

The large platform companies like Oracle and Adobe are trying to own the digital customer experience market from top to bottom by acquiring and integrating tools for marketing, commerce, customer support, analytics, mobile apps, and more. Oracle's acquisition of Eloqua, SAP's acquisition of hybris and Salesforce's acquisition of ExactTarget were earlier indicators of market players consolidating SaaS apps for customer experience onto their platforms.

In my view, the Demandware acquisition is an interesting strategic move for Salesforce that aligns them more closely as a competitor to marketing stack mega-vendors such as Adobe, Oracle and IBM. Adding a commerce solution to its suite makes it easier for Salesforce's customers to build an integrated experience and see what their customers are buying. There are advantages to integrated solutions that have a single system of record about the customer. The Demandware acquisition also makes sense from a technology point of view; there just aren't many Java-based commerce platforms that are purely SaaS-based, that can operate at scale, and that are for sale.

However, we've also seen this movie before. When big companies acquire smaller, innovative companies, over time the innovation goes away in favor of integration. Big companies can't innovate fast enough, and the suite lock-in only benefits the vendor.

There is a really strong case to be made for a best-of-breed approach where you choose and integrate the best software from different vendors. This is a market that literally changes too much and too fast for any organization to buy into a single mega-platform. From my experience talking to hundreds of customer organizations, most prefer an open platform that integrates different solutions and acts as an orchestration hub. An open platform ultimately presents more freedom for customers to build the exact experiences they want. Open Source solutions, like Drupal, that have thousands of integrations, allow organizations to build these experiences in less time, with a lower overall total cost of ownership, more flexibility and faster innovation.

Adobe clearly missed out on buying Demandware, after it missed out on buying Hybris years ago. Demandware would have fit in Adobe's strategy and technology stack. Now Adobe might be the only mega-platform that doesn't have an embedded commerce capability. More interestingly, there don't appear to be large independent commerce operators left to buy.

I continue to believe there is a great opportunity for new independent commerce platforms, especially now Salesforce and Demandware will spend the next year or two figuring out the inevitable challenges of integrating their complex software solutions. I'd love to see more commerce platforms emerge, especially those with a modern micro-services based architecture, and an Open Source license and innovation model.


Les raisons de l’échec de Google+ et des tentatives sociales chez Google.

Avec 90% du marché mondial des recherches web, 1 milliard de personnes utilisant des téléphones Android, 1 milliard de visiteurs mensuels sur Youtube et 900 millions d’utilisateurs de GMail, difficile pour un internaute de passer à côté de Google.

Aussi, quand Google a décidé de se lancer dans les réseaux sociaux en 2011, personne ne donnait cher de la peau de Twitter et Facebook.

Pourtant, Google Buzz, la tentative de concurrencer Twitter, fut un échec cuisant et Google+, l’équivalent Google de Facebook, reste vivement controversé et assez peu utilisé alors même qu’il est intégré avec la plupart des smartphones vendus dans le monde aujourd’hui !

Et s’il est impensable pour une marque ou une célébrité de ne pas avoir une page Facebook ou un compte Twitter, qu’en est-il d’une page Google+ ? La plupart ne sont-elles pas créées par acquit de conscience ?

Trouver une personne, la base d’un réseau social

Google+ serait-il techniquement tellement inférieur à ses concurrents que, même imposé, il soit si peu utilisé ? Au contraire, certains, parmi lesquels l’auteur de ces lignes, considèrent que Google+ est techniquement plus abouti et plus riche que Facebook : possibilité d’avoir des relations asymétriques entre personnes, facilité de regroupement des amis dans des “cercles”, meilleur contrôle des permissions, …

Mais alors, pourquoi même les aficionados les plus accros à Google ont-ils le réflexe d’aller sur Twitter et Facebook ?

La réponse la plus souvent pointée est que tout le monde est sur Facebook et que les utilisateurs vont où les autres sont. Facebook aurait l’avantage d’avoir été le premier à bénéficier de cet effet de réseau à large échelle.

Mais c’est sans compter que Google bénéficie déjà d’énormes réservoirs d’utilisateurs que sont Gmail, Android et Youtube. S’il ne s’agissait que d’atteindre une masse critique, Google+ aurait pu être un succès instantané.

Un réseau social, ce n’est jamais qu’un groupe de personnes avec des liens entre eux. Et ce réseau ne peut se construire qu’avec les personnes. La première fonctionnalité d’un réseau social est bien celle-là : trouver une personne, étape indispensable avant la création d’un lien. La motivation première pour ajouter un ami sur Facebook n’est pas de voir ses photos de vacances, c’est de rester en contact. Les photos de vacances ne sont qu’une conséquence !

Certains utilisateurs sur Facebook n’utilisent d’ailleurs pas le flux d’activité. D’autres n’ont jamais ouvert la messagerie. Mais tous ont un point commun : ils ont confiance de pouvoir trouver n’importe qui ou presque sur Facebook. Même le « Jean Dupont » que je cherche se démarquera au milieu de ses homonymes grâce à nos amis communs, ses centres d’intérêts, ses photos ou sa description.

Sur Twitter, aucun doute possible grâce à l’identifiant unique que Jean Dupont m’aura très aisément communiqué.

Google+, un réseau asocial ?

Google, par contre, a complètement perdu de vue la fonctionnalité de base : « trouver une personne ». Google+ s’est immédiatement concentré sur les conséquences (avoir un flux d’activité, partager des photos, chatter) en oubliant la raison première d’un tel produit : rester en contact. De l’aveu même des ingénieurs travaillant sur le projet, il fallait toujours « développer une nouvelle fonctionnalité ».

Que ce soit dans mon téléphone ou dans Gmail, le fait de taper « Jean Dupont » me donne des dizaines d’occurrences dont certaines sont des doublons et d’autres des homonymes. À qui appartient ce numéro de téléphone associé à un « Jean » qui a sans doute été importé depuis ma carte SIM à un moment donné ? Est-ce l’ancien numéro de Jean Dupont ? Au contraire un nouveau numéro ? Ou bien un homonyme ?

Impossible, depuis GMail, d’envoyer un mail à certaines personnes avec qui je suis pourtant en contact sur Google+ ! Et si les innovations de Google Inbox ont largement amélioré la situation, elle n’en reste pas moins loin d’être parfaite !

Google Inbox me propose deux fois la même personne. Laquelle choisir ?

Détail révélateur : la photo de profil d’une personne varie d’un produit Google à l’autre voire, au sein même de GMail et Google Inbox, d’un mail à l’autre ! Certaines anciennes photos de profils Google+, pourtant supprimées depuis longtemps, apparaissent parfois comme par enchantement au détour d’un mail. Mais le plus souvent, aucune image ne s’affiche. Il m’est donc impossible d’associer avec confiance une personne à une photo de profil unique, contrairement à Twitter ou Facebook.

Facebook l’a bien compris et, sur cette plateforme, le changement de photo de profil d’un de vos contacts est un événement majeur qui sera particulièrement mis en avant.

Hangouts et Contacts, des échecs lourds.

Sur Android, l’application Hangout est incroyablement lente quand il s’agit de lancer une conversation avec un nouveau contact. Parfois, elle ne trouve tout simplement pas ce contact ou n’associe pas le numéro de téléphone avec le profil de la personne, ne me laissant que le choix d’envoyer un message Hangout à la place d’un SMS. À d’autres moments, elle me met en avant des « suggestions » de profil Google+ que je ne connais pas et cache ceux que je connais.

Hangout me propose 5 fois Marie qui sont la même et unique personne sur G+ !

Avant ce mois de mars 2016, l’interface web de Google Contacts n’avait jamais connu de refonte complète depuis sa mise en service. Google n’a même jamais pris la peine de développer une application Android de gestion de contacts.

Si cette nouvelle version rassure sur le fait que cette partie de Google n’a pas été complètement laissée à l’abandon, elle est néanmoins très frustrante : il ne s’agit que d’un changement purement esthétique sans réelle nouvelle fonctionnalité ni meilleure intégration avec les autres produits Google.

L’interface de Google Contacts, restée inchangée pendant des années.

C’est comme si Google considérait qu’unifier et gérer une liste de contacts n’avait aucun intérêt. Google s’est contenté de développer les fonctionnalités d’un réseau social en oubliant ce qui est selon moi la fondation même de l’interaction sociale : entrer en contact avec une personne donnée.

Une fonctionnalité que Google a laissé, peut-être volontairement, aux fabricants de smartphones. Avec un résultat assez catastrophique.

3 vincents identiques, 2 vincents différents et de nouveau 3 vincents identiques. Merci Samsung !

Un désintérêt que Google paie très cher, y compris dans le domaine de la messagerie où, malgré une position dominante confortable, Gmail et Hangouts se sont vite fait dépasser par Whatsapp.

Le désespoir de l’incompréhension

Est-ce que Whatsapp offre une fonctionnalité incroyable, nouvelle ou particulièrement utile ?

Non, la principale caractéristique de Whatsapp est de trouver mes amis qui utilisent Whatsapp en se basant sur les numéros stockés sur mon téléphone. Que ce soit sur Facebook, Twitter ou Whatsapp, j’ai donc confiance de facilement trouver une personne donnée. Oui, Google est très fort pour me faire explorer, pour me suggérer des nouvelles personnes. C’est d’ailleurs ce qui fait la joie des aficionados de Google+. Mais la plupart du temps, je veux simplement contacter une personne donnée le plus vite possible.

Avec sa nouvelle version, Google+ semble d’ailleurs faire progressivement son deuil de l’aspect social pour se concentrer sur la découverte de nouveaux contenus, de thématiques et de centres d’intérêt.

Le lancement d’un enième produit social, Google Spaces, et d’une enième application de chat, Google Allo, sont la confirmation de la totale incompréhension de Google face au social. Plutôt que de réfléchir, d’essayer de trouver les racines du problème, le géant américain lance des dizaines d’applications en espérant trouver, par hasard, le succès. On lance tout contre un mur et on regarde ce qui reste collé…

Mais, ce faisant, il ne fait que créer des espaces supplémentaires où potentiellement chercher une personne. Il rend encore plus complexe la recherche d’une personne précise.

Peut-être car, dans la culture des ingénieurs de chez Google, on ne recherche que des solutions à des problèmes, des informations. Pas des personnes. Jamais des personnes.

Ceci expliquerait tout : Google ne peut développer un réseau social car il est, tout simplement, profondément asocial.

 

Photo par Thomas Hawk.

Merci d'avoir pris le temps de lire ce billet librement payant. Prenez la liberté de me soutenir avec quelques milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, une poignée d'euros, en me suivant sur Tipeee, Twitter, Google+ et Facebook !

Ce texte est publié par Lionel Dricot sous la licence CC-By BE.

June 01, 2016

Last week I attended the 2016 edition of the PHP Unconference Europe, taking place in Palma De Mallorca. This post contains my notes from various conference sessions. Be warned, some of them are quite rough.

Overall impression

Before getting to the notes, I’d like to explain the setup of the unconference and my general impression.

The unconference is two days long, not counting associated social events before and afterwards. The first day started with people discussing in small groups which sessions they would like to have, either ones they wanted to lead themselves or ones they just wanted to attend. These session ideas were written down and put up on papers on the wall. We then went through them one by one, with someone explaining the idea behind each session, and one or more presenters / hosts being chosen. The final step of the process was to vote on the sessions. For this, each person got two “sticky dots” (what are those things called anyway?), which they could either both put onto a single session, or split and vote on two sessions.

On each day we had 4 such sessions, with long breaks in between to promote interaction between the attendees.

Onto my notes for individual sessions:

How we analyze your code

Analysis and metrics can be used for tracking progress and for analyzing the current state. Talk focuses on current state.

  • Which code is important
  • Probably buggy code
  • Badly tested code
  • Untested code

Finding the core (kore?): code rank (like Google page rank): importance flows to classes that are dependent upon (fan-in). Qafoo Quality Analyzer. Reverse code rank: classes that depend on lots of other classes (fan-out)

Where do we expect bugs? Typically where code is hard to understand. We can look at method complexity: cyclomatic complexity, NPath complexity. Line Coverage exists, Path Coverage is being worked upon. Parameter Value Coverage. CRAP.
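
A quick illustration of how cyclomatic complexity is counted (a sketch of my own, not tied to any of the tools above): every decision point adds one to a base of one.

// Base complexity 1, plus one per decision point (if, ternary, loop, &&, case, ...).
function shippingCost(weightKg: number, express: boolean): number {
  if (weightKg <= 0) {                // +1
    throw new Error('invalid weight');
  }
  let cost = weightKg < 5 ? 5 : 10;   // +1
  if (express) {                      // +1
    cost += 7;
  }
  return cost;                        // cyclomatic complexity: 4
}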

Excessive coupling is bad. Incoming and outgoing dependencies. Different from code rank in that only direct dependencies are counted. Things that are depended on a lot should be stable and well tested (essentially the Stable Dependencies Principle).

Qafoo Quality Analyzer can be used to find dependencies across layers when they are in different directories. Very limited at present.

When finding highly complex code, don’t immediately assume it is bad. There are valid reasons for high complexity. Metrics can also be tricked.

The evolution of web application architecture

How systems interact with each other. Starting with simple architecture, looking at problems that arise as more visitors arrive, and then seeing how we can deal with those problems.

Users -> Single web app server -> DB

Next step: Multiple app servers + load balancers (round robin + session caching server)

Launch of shopping system resulted in app going down, as master db got too many writes, due to logging “cache was hit” in it.

Different ways of caching: entities, collections, full pages. Cache invalidation is hard, lots of dependencies even in simple domains.
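
A tiny sketch of tag-based invalidation shows where those dependencies come from (the cache store and tag names are invented for the example):

// Cached pages are tagged with the entities they render; one write can evict many entries.
const cache = new Map<string, { value: string; tags: Set<string> }>();

function cacheSet(key: string, value: string, tags: string[]): void {
  cache.set(key, { value, tags: new Set(tags) });
}

function invalidateTag(tag: string): void {
  for (const [key, entry] of cache) {
    if (entry.tags.has(tag)) cache.delete(key);
  }
}

cacheSet('page:/shop', '<html>...</html>', ['product:42', 'product:43', 'category:7']);
invalidateTag('product:42'); // the full-page entry for /shop is gone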

When too many writes: sharding (split data across multiple nodes), vertical (by columns) or horizontal (by rows). Loss of referential integrity checking.
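
A minimal sketch of horizontal sharding by key (the shard names and routing rule are assumptions for the example):

// All rows for one customer live on one shard; a query across customers now
// means hitting several shards and merging the results in application code.
const SHARDS = ['db-shard-0', 'db-shard-1', 'db-shard-2', 'db-shard-3'];

function shardFor(customerId: number): string {
  return SHARDS[customerId % SHARDS.length];
}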

Complexity with relational database systems -> NoSQL: sharding, multi master, cross-shard queries. Usually no SQL or referential integrity, though those features are already lost when using sharding.

Combination of multiple persistence systems: problems with synchronization. Transactions are slow. Embrace eventual consistency. Same updating strategies can be used for caches.

Business people often know SQL, yet not NoSQL query languages.

Queues can be used to pass data asynchronously to multiple consumers. Following data flow of an action can be tricky. Data consistency is still a thing.

Microservices: separation of concerns on service and team level. Can simplify via optimal tech stack per service. But they make things more complicated: need automated deployment, orchestration, eventual consistency, failure handling.

Boring technology often works best, especially at the beginning of a project. Start with the simplest solution that works. Take team skills into account.

How to fuck up projects

Before the project

  • Buzzword first design
  • Mismatching expectations: huge customer expectations, no budget
  • Fuzzy ambitious vocabulary, directly into the contract (including made up words)
  • Meetings, bad mood, no eye contact
  • No decisions (no decision making process -> no managers -> saves money)
  • Customer Driven Development: customer makes decisions
  • Decide on environment: tools, mouse/touchpad, 1 big monitor or 2 small ones, JIRA, etc
  • Estimates: should be done by management

During the project

  • Avoid ALL communication, especially with the customer
  • If communication cannot be avoided: mix channels
  • Responsibility: use group chats and use “you” instead of specific names (cc everyone in mails)
  • Avoid issue trackers, this is what email and Facebook are for
  • If you cannot avoid issue trackers: use multiple or have one ticket with 2000 notes
  • Use ALL the programming languages, including PHP-COBOL
  • Do YOUR job, but nothing more
  • Only pressure makes diamonds: coding on the weekend
  • No breaks so people don’t lose focus
  • Collect metrics: Hours in office, LOC, emails answered, tickets closed

Completing the project

  • 3/4 projects fail: we can’t do anything about it
  • New features? Outsource
  • Ignore the client when they ask about the completed project
  • Change the team often, fire people on a daily basis
  • Rotate the customer’s contact person

Bonus

  • No VCS. FTP works. Live editing on production is even better
  • http://whatthecommit.com/
  • Encoding: emojis in function names, umlauts in file names. Mix encodings, also in MySQL
  • Agile is just guidelines, change goals during sprints often
  • Help others fuck up: release it as open source
  • git blame-someone-else

The future of PHP

This session started with some words from the moderator, who mainly talked about performance, portability and future adoption of, or moving away from, PHP.

  • PHP now fast enough to use many PHP libraries
  • PHP now better for long running tasks (though still no 64 bit for windows)
  • PHP now has an Abstract Syntax Tree

The discussion that followed was primarily about the future of PHP in terms of adoption. The two languages most mentioned as competitors were Javascript and Java.

Java because it is very hard to get PHP into big enterprise, where people tend to cling to Java. A point made several times about this is that such choices have very little to do with technical sensibility, and are instead influenced by the education system, languages already used, newness/hipness and the HiPPO. Most people also don't have the relevant information to make an informed choice, and do not make the effort to look up this information as they already have a preference.

Javascript is a competitor because web based projects, be it with a backend in PHP or in another language, need more and more Javascript, with no real alternatives. It was mentioned several times that not having alternatives is bad. Having multiple JS interpreters is cool, JS being the only choice for browser programming is not.

Introduction to sensible load testing

In this talk the speaker explained why it is important to do realistic load testing, and how to avoid common pitfalls. He explained how jMeter can be used to simulate real user behavior during peak load times. Preliminary slides link.

Domain Objects: not just for Domain Driven Design

This session was hard to choose, as it coincided with “What to look for in a developer when hiring, and how to test it”, which I also wanted to attend.

The Domain Objects session introduced what Value Objects are, and why they are better than long parameter lists and passing around values that might be invalid. While sensible enough, it was all very basic, with unfortunately no new information for me whatsoever. I'm thinking it'd have been better to do this as a discussion, partly because the speaker was clearly very inexperienced, and gave most of the talk with his arms crossed in front of him. (Speaker, if you are reading this, please don't be discouraged, practice makes perfect.)
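
For readers who missed the session, a minimal Value Object sketch (the email example is mine, not the speaker's): validate once at construction, keep it immutable, compare by value.

class EmailAddress {
  private constructor(private readonly address: string) {}

  static fromString(address: string): EmailAddress {
    // An invalid value can never exist, so it never needs to be re-checked downstream.
    if (!/^[^@\s]+@[^@\s]+$/.test(address)) {
      throw new Error(`Invalid email address: ${address}`);
    }
    return new EmailAddress(address.toLowerCase());
  }

  equals(other: EmailAddress): boolean {
    return this.address === other.address;
  }

  toString(): string {
    return this.address;
  }
}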

Performance monitoring

I was only in the second half of this session, during which two performance monitoring tools were presented: Tideways by Qafoo, and Instana.


Back in 2006 I wrote a blog post about linux troubleshooting. Bert Van Vreckem pointed out that it might be time for an update ..

There's not that much that has changed .. however :)

Everything is a DNS Problem

Everything is a Fscking DNS Problem
No really, Everything is a Fscking DNS Problem
If it's not a fucking DNS Problem ..
It's a Full Filesystem Problem
If your filesystem isn't full
It is a SELinux problem
If you have SELinux disabled
It might be an ntp problem
If it's not an ntp problem
It's an arp problem
If it's not an arp problem...
It is a Java Garbage Collection problem
If you ain't running Java
It's a natting problem
If you are already on IPv6
It's a Spanning Tree problem
If it's not a spanning Tree problem...
It's a USB problem
If it's not a USB Problem
It's a sharing IRQ Problem
If it's not a sharing IRQ Problem
But most often .. its a Freaking Dns Problem !


May 31, 2016

Ce jeudi 16 juin 2016 à 19h se déroulera la 50ème séance montoise des Jeudis du Libre de Belgique.

Le sujet de cette séance : Tryton, un framework libre d’application métier

Thématique : Progiciel de Gestion Intégré

Public : Programmeurs|Responsables d’entreprise|étudiants

L’animateur conférencier : Cédric Krier (B2CK SPRL)

Lieu de cette séance : Université de Mons, Faculté Polytechnique, Site Houdain, Rue de Houdain, 9, auditoire 3 (cf. ce plan sur le site de l’UMONS, ou la carte OSM). Entrée par la porte principale, au fond de la cour d’honneur. Suivre le fléchage à partir de là.

La participation sera gratuite et ne nécessitera que votre inscription nominative, de préférence préalable, ou à l’entrée de la séance. Merci d’indiquer votre intention en vous inscrivant via la page http://jeudisdulibre.fikket.com/. La séance sera suivie d’un verre de l’amitié (le tout sera terminé au plus tard à 22h).

Les Jeudis du Libre à Mons bénéficient aussi du soutien de nos partenaires : CETIC, Normation, OpenSides, MeaWeb et Phonoid.

Si vous êtes intéressé(e) par ce cycle mensuel, n’hésitez pas à consulter l’agenda et à vous inscrire sur la liste de diffusion afin de recevoir systématiquement les annonces.

Pour rappel, les Jeudis du Libre se veulent des espaces d’échanges autour de thématiques des Logiciels Libres. Les rencontres montoises se déroulent chaque troisième jeudi du mois, et sont organisées dans des locaux et en collaboration avec des Hautes Écoles et Facultés Universitaires montoises impliquées dans les formations d’informaticiens (UMONS, HEH et Condorcet), et avec le concours de l’A.S.B.L. LoLiGrUB, active dans la promotion des logiciels libres.

Description :

Tryton est une plate-forme de développement d’application pour entreprise (progiciel de gestion intégré/PGI/ERP) sous licence GPL-3+. Grâce à son ensemble de modules qui grandit à chaque version, elle couvre de base bon nombre de besoins de l’entreprise. Et ceux qui seraient manquants peuvent être comblés grâce à son architecture modulaire. Ecrit en Python dans une architecture trois tiers, le système peut être utilisé avec PostgreSQL, SQLite, MySQL.

L’exposé ciblera les sujets suivants :

  • L’historique et gouvernance du projet
  • Architecture du logiciel
  • Découverte de quelques modules: achats, ventes, comptabilité et stock
  • Démonstration: création d’un module simple

May 30, 2016

Next time, shall I show you how to turn that ViewModel into a Visitor? And then make a view that shows the syntax and outline of a parsed file’s language?

How about I turn my blog into the programmers’ equivalent to Dobbit magazine? :-)

Who knows what I’ll come up with next time. I guess I don’t know myself.

Brujas - Brugge

D’un claquement légèrement éméché sur le comptoir, la femme repose le verre de bière vide tout en écartant une mèche rousse de son front rougeaud. D’un doigt tremblant, elle ouvre un bouton de son chemisier, laissant dévaler une goutte de sueur entre ses seins.
— Eh bien moi, je dis qu’il faut les laisser se démerder. Après tout, c’est eux qui ont élu Trump, ce n’est pas notre problème.
— Tu n’y connais rien Véro, répond un homme mal rasé, le nez dans son demi, le regard trouble.
— Parce que toi t’es subitement un expert en géopolitique internationale ?
— Ils ont dit à la télé que…
— Je rêve ! Tu regardes encore la télé ? Y’a autre chose que des pubs à la télé ?
— N’empêche qu’il y avait Nicolas Hulot et qu’il expliquait que Trump avait mis l’écologie sur la liste des idéologies terroristes, tout comme l’islam.
— Pour l’islam, il n’a pas tort. Et pour le reste, ils n’ont qu’à se détruire eux-mêmes…
— Sauf que les polluants déversés dans les mers affectent directement les espèces des océans et que c’est pour ça que la pêche est désormais interdite chez nous. C’est à cause de lui que nos pêcheurs n’ont plus de boulot !
— Ben de toute façon, personne n’a plus de boulot. On va pas se transformer en soldats juste parce qu’il n’y a plus de boulot. Entre chômeuse ou chair à canon, j’ai choisi. Hein Malou ?

Derrière son comptoir, une femme entre deux âges hoche la tête tout en essuyant un verre. Sa chevelure blond platine laisse doucement la place à des mèches grisonnantes qui se mélangent à ses montures de lunettes argentées.

— Moi, tant que vous serez en vie, je suis sûre d’avoir du boulot.
— Ça c’est sûr, renchérit l’homme en levant sa bière, tu nous dois une fière chandelle !

Il boit une gorgée mousseuse avant d’émettre un rot sonore. Mais la rousse ne se laisse pas décontenancer.

— Malou, tu trouves normal qu’on déclare la guerre aux États-Unis ? Qu’on devienne allié avec les pays islamiques ?
— Islamiques, il ne faut rien exagérer, tempère la patronne. On ne s’allie pas avec Daesh et le califat tout de même.
— Oui mais le Pakistan, la Tchétchénie, l’Iran et même la Russie et la Corée du Nord. On est passé du côté des terroristes ou quoi ?
— Faut que tu comprennes qu’on lutte pour la survie de la planète là ! Trump fait brûler des gisements de gaz exprès pour nous provoquer, réplique l’homme dont les narines peuplées de poils noirs huileux palpitent de colère. Les experts sont unanimes : le réchauffement climatique ne pourra plus être arrêté.
— Ben justement, s’il ne peut plus être arrêté, pourquoi aller s’entretuer ? Hein Malou ?
— Pour pas faire pire, tiens ! Le Tsunami de Knokke, tu crois que ça n’a pas suffi ?

L’homme se caresse nerveusement la calvitie naissante. Il s’agite, ses tempes se couvrent de veinules bleuâtres.

— Bien fait pour ces flamins, répond Véro avec un sourire goguenard.
— À ce rythme-là, dans 15 ans, on parlera du tsunami de Louveigné ! On devra tous se réfugier au signal de Botrange !
— Moi je maintiens qu’on va faire pire que mieux. Trump est capable de nous balancer des missiles nucléaires. Tant qu’à faire, je préfère passer mes dernières années à Botrange-plage. Et si je dois mourir noyée, autant que ce soit dans une eau non-radioactive !
— Dans de l’eau, tu ne risques pas ! Tu mourras d’une cirrhose bien avant.
— Ça me va, santé Malou ! À ma cirrhose et à la fin du monde !

Mais l’homme n’admet pas sa défaite :
— Si on s’y met tous les pays ensemble, en quelques jours les États-Unis sont rayés de la carte…
— Qu’ils disent ! Comme en 14 !
— C’est notre seul espoir ! Détruire les États-Unis ou la planète entière, c’est le choix qui s’offre à nous !
— Bref, y’a plus d’espoir, je te rejoins sur ce point !
— Moi j’ai toujours voté Écolo, intervient Malou.
— C’est gentil Malou. Grâce à toi on va aller faire la guerre à Trump en vélo partagé…
— À choisir entre les missiles de Trump et les toilettes sèches… Planquez-vous, la Wallonie sort les armes bactériologiques !

Le couple s’esclaffe. Réconciliés, les deux clients se tapent mutuellement sur la cuisse.

— Oh, moi je disais ça, répond Malou d’un ton vexé. Les écolos, ils ont quand même proposé des sanctions économiques et le boycott dès l’élection de Trump !
— Ça fait cinq années que la plupart du monde boycotte les États-Unis. L’effet est nul ! D’un côté personne ne veut se passer d’un Iphone ou d’un juteux contrat pour la défense américaine, de l’autre, ce qu’on ne vend pas à Trump, il vient le chercher.

Un silence s’installe dans le troquet, laissant un ventilateur essoufflé brasser l’air lourd et moite de la fin de journée alors que le crépuscule enflamme les dizaines de verres aux couleurs des différentes bières du pays qui s’alignent en rang d’oignon sur une étagère vieillissante.

— Faut reconnaître, dit l’homme, que les écolos ont au moins fait semblant de se préoccuper du problème. Les autres partis, eux, ils étaient encore en train de se battre pour des histoires communautaires auxquelles personne ne comprend rien.
— D’ailleurs, est-ce qu’ils sont encore en train de négocier un gouvernement ou est-ce qu’ils se sont tous tirés en Suisse comme les députés français ? répond sa comparse.
— Aucune idée. Mais je crois que ça n’a plus beaucoup d’importance…

Malou semble réfléchir un instant.

— Mais si on n’a pas de gouvernement, qui a voté le fait qu’on déclarait la guerre aux États-Unis ? Parce que c’est bien beau de discuter, la décision est prise, non ?

L’homme hoche la tête.

— On n’est pas dans la merde…

D’un air désabusé, la femme regarde le fond de son verre vide avant de le tendre par dessus le comptoir.

— Allez Malou, mets-moi son petit frère ! J’ai une cirrhose qui attend !

 

Photo par Ramón. Relecture par le gauchiste.

Merci d'avoir pris le temps de lire ce billet librement payant. Prenez la liberté de me soutenir avec quelques milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, une poignée d'euros, en me suivant sur Tipeee, Twitter, Google+ et Facebook !

Ce texte est publié par Lionel Dricot sous la licence CC-By BE.

May 29, 2016


There are ways you can prevent cronjobs from overlapping, but you can also limit how long a particular script can run.

This doesn't just apply to cronjobs but to all scripts, actually, but this guide focusses on cronjobs.

Setting timeouts

The hidden secret is the timeout command. From the manpages:

timeout -- run a command with a time limit

And it's very simple to use.

Limit the time a cronjob can run

To avoid endless cronjobs, change your crontab like this:

$ crontab -l
* * * * * /path/to/your/script.sh

To:

$ crontab -l
* * * * * /bin/timeout -s 2 10 /path/to/your/script.sh

Here's the breakdown:

  • /bin/timeout: the command.
  • -s 2: the signal to send when the time limit has been exceeded; it can be a number or a name. Equally valid would have been -s SIGINT (more on the kill signals below)
  • 10: the duration the script can run, before the kill signal described above is sent to it.

So the command above will send the SIGINT signal to the script whenever the timer of 10 seconds has been exceeded.

You can use interesting arguments like --preserve-status to have the timeout command return the same exit code as the script you executed.

The correct kill signal

For a list of valid kill signals, use kill -l.

$ kill -l
 1) SIGHUP	 2) SIGINT	 3) SIGQUIT	 4) SIGILL	 5) SIGTRAP
 6) SIGABRT	 7) SIGBUS	 8) SIGFPE	 9) SIGKILL	10) SIGUSR1
11) SIGSEGV	12) SIGUSR2	13) SIGPIPE	14) SIGALRM	15) SIGTERM
16) SIGSTKFLT	17) SIGCHLD	18) SIGCONT	19) SIGSTOP	20) SIGTSTP
21) SIGTTIN	22) SIGTTOU	23) SIGURG	24) SIGXCPU	25) SIGXFSZ
26) SIGVTALRM	27) SIGPROF	28) SIGWINCH	29) SIGIO	30) SIGPWR
31) SIGSYS	34) SIGRTMIN	35) SIGRTMIN+1	36) SIGRTMIN+2	37) SIGRTMIN+3
38) SIGRTMIN+4	39) SIGRTMIN+5	40) SIGRTMIN+6	41) SIGRTMIN+7	42) SIGRTMIN+8
43) SIGRTMIN+9	44) SIGRTMIN+10	45) SIGRTMIN+11	46) SIGRTMIN+12	47) SIGRTMIN+13
48) SIGRTMIN+14	49) SIGRTMIN+15	50) SIGRTMAX-14	51) SIGRTMAX-13	52) SIGRTMAX-12
53) SIGRTMAX-11	54) SIGRTMAX-10	55) SIGRTMAX-9	56) SIGRTMAX-8	57) SIGRTMAX-7
58) SIGRTMAX-6	59) SIGRTMAX-5	60) SIGRTMAX-4	61) SIGRTMAX-3	62) SIGRTMAX-2
63) SIGRTMAX-1	64) SIGRTMAX

Each of these can be used with its numeric value or the name of the signal.

The post Limit the runtime of a cronjob or script appeared first on ma.ttias.be.

May 28, 2016

We're increasingly using Docker to build packages: a fresh chroot in which we prepare a number of packages, typically builds for ruby (rvm), python (virtualenv) or node stuff where the language ecosystem fails on us ... and fpm the whole tree as a working artifact.

An example of such a build is my work on packaging Dashing. https://github.com/KrisBuytaert/build-dashing

Now part of that build is running the actual build script in Docker with a local volume mounted inside the container. This is your typical -v=/home/src/dashing-docker/package-scripts:/scripts param.

Earlier this week however I was stuck on a box where that combo did not want to work as expected. Docker clearly mounted the local volume, as it could execute the script in the directory, but for some reason it didn't want to write in the mounted volume.

docker run -v=/home/src/dashing-docker/package-scripts:/scripts dashing/rvm /scripts/packagervm
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
corefines: Your Ruby doesn't support refinements, so I'll fake them using plain monkey-patching (not scoped!).
/usr/local/share/gems/gems/corefines-1.9.0/lib/corefines/support/fake_refinements.rb:26: warning: Refinements are experimental, and the behavior may change in future versions of Ruby!
/usr/share/ruby/fileutils.rb:1381:in `initialize': Permission denied - rvm-1.27.0-1.x86_64.rpm (Errno::EACCES)

So what was I doing wrong? Did the Docker params change, did I invert the order of the params, did I mistype them? I added debugging to the script (ls, chmod, etc.) and I couldn't seem to read or modify the directory. So I asked a coworker to be my wobbling duck.

He did more .. he wondered if this wasn't selinux.

And he was right..

Apr 23 21:47:00 mine23.inuits.eu audit[9570]: AVC avc: denied { write } for pid=9570 comm="fpm" name="package-scripts" dev="dm-2" ino=368957 scontext=system_u:system_r:svirt_lxc_net_t:s0:c47,c929 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=dir permissive=0
Apr 23 21:47:02 mine23.inuits.eu python3[9597]: SELinux is preventing fpm from write access on the directory /home/src/dashing-docker/package-scripts.

So while I was looking for errors in Docker, it was just my SELinux, set to enforcing, acting up and me not noticing it.

The quick way to verify, obviously, was to setenforce 0 and trigger the build again. That however is not a long-term fix, so I changed the file context:

semanage fcontext -a -t cgroup_t '/home/src/dashing-docker/package-scripts'
restorecon -v '/home/src/dashing-docker/package-scripts'

That solved the problem.


…en cycliste.

La plupart de mes lecteurs sont sans doute familiers avec le principe de réalité virtuelle. Un univers entièrement fictif dans lequel on s’immerge totalement afin de se couper du monde extérieur.

L’auteur, plongé dans une réalité virtuelle…

Mais un autre concept très intéressant est en train d’émerger : celui de réalité augmentée.

Le principe de la réalité augmentée est de rajouter des interactions virtuelles au sein du monde réel.

Les exemples les plus spectaculaires sont certainement Microsoft Hololens et Magic Leap. Ces deux technologies, encore expérimentales, projettent sur des lunettes des objets virtuels qui viennent se juxtaposer à ce qui se trouve dans votre champ de vision. Vous pouvez, par exemple, voir un personnage fictif évoluer dans la pièce où vous vous trouvez.

Mais pas besoin d’aller aussi loin pour expérimenter la réalité augmentée. Le jeu Ingress, développé par Google, ne nécessite qu’un simple smartphone : vous devez vous rendre dans des endroits précis afin de conquérir du territoire. Sa popularité conduit les joueurs à s’organiser et se rencontrer régulièrement. Run Zombie vous pousse à faire de l’entraînement fractionné en course à pieds en vous faisant entendre des zombies auxquels vous devez échapper en sprintant.

De mon côté, le jeu en réalité augmentée qui certainement a bouleversé ma vie est Strava.

Je vois des sourcils se froncer.

Strava n’est-il pas une application qui enregistre les balades à vélo ?

Oui. Mais Strava dispose d’une fonctionnalité incroyable : les segments.

Un segment sur Strava est un chemin qui relie un point de départ à un point d’arrivée. N’importe quel utilisateur de Strava peut en créer.

Avec la subtilité qu’un classement de tous les utilisateurs Strava passés par chaque segment est affiché publiquement. Il devient donc possible de se comparer à des dizaines voire des centaines de cyclistes.

Mieux : les membres Premium peuvent désormais voir en temps réel leur position dans un segment par rapport à leur meilleur temps personnel et le meilleur temps de tous les autres utilisateurs. Le smartphone rivé sur le guidon, j’ai réellement le sentiment d’être en course acharnée avec moi-même et avec un autre utilisateur Strava. C’est à peine si mon imagination ne me fait pas ressentir l’aspiration quand l’écart passe sous la seconde !


Certains segments sont sans grand intérêt mais soyez certains que toutes les côtes de votre région ont leur segment Strava où des dizaines de cyclistes s’affrontent chaque semaine pour le tant convoité titre de KOM ou QOM (King/Queen Of the Mountain).

Lorsque vous êtes détrôné de votre KOM, une notification vous parvient immédiatement sur votre smartphone. La tentation est alors immense de tout plaquer, d’enfiler son casque et d’aller montrer à ce jeune freluquet de quoi vous êtes capable. Surtout si celui-ci a eu l’outrecuidance de laisser un commentaire de type : « Je reprends ce qui m’appartient » (exemple vécu).

Grâce à Strava, j’ai pu mener des compétitions acharnées contre des cyclistes que je n’ai jamais rencontrés, chacun reprenant le KOM à l’autre à chaque tentative. Ces compétitions virtuelles se soldent même parfois par de cordiaux échanges dans les commentaires, chacun félicitant l’autre pour sa performance mais lui annonçant avec humour de profiter du KOM tant qu’il peut le garder.

Pour ne pas se limiter aux segments, Strava propose également des challenges réguliers basés sur la distance parcourue, sur le dénivelé escaladé voire sur l’exploration de nouveaux parcours.

Chez moi, le résultat est incroyable : chaque fois que j’enfourche ma bécane, je vais « jouer à Strava ». J’ai l’appétence de découverte de nouveaux segments de qualité, l’envie de m’améliorer, de me dépasser. À l’incroyable plaisir de sentir les kilomètres défiler sous mes roues, je rajoute la petite jouissance intellectuelle que connaissent bien les amateurs de jeux vidéos.

Alors, oui, Strava a changé ma vie. De cycliste utilitaire, je me suis transformé en cycliste passionné. Strava m’a donné envie d’explorer, de partir à la découverte. À la fois dans ma propre région et partout où j’aurai l’occasion d’aller donner quelques coups de pédale.

Grâce à Strava (ou à cause ?), de plus en plus de mes kilomètres utilitaires se font en vélo, au détriment de la voiture…

La réalité augmentée, malgré qu’elle n’en soit qu’à ses balbutiements, est donc déjà en train de changer le monde, de nous changer.

Après tout, quoi de plus normal ? La frontière entre le réel et le virtuel n’est qu’arbitraire, historique. Les deux sont appelés à se fondre l’une dans l’autre et il est probable que nos enfants ne parleront pas de réalité virtuelle ni de réalité augmentée. Ils diront tout simplement… « la réalité ».

Ils ne joueront plus à des « jeux vidéos » mais à des jeux tout court. Ils nous mettront dans une situation inconfortable, ils nous donneront l’impression d’être déconnectés du réel alors qu’ils seront en train de l’étendre. Ils seront tristes pour nous, les vieux, limités à une toute petite frange du réel.

Nous traverserons forcément des phases d’inquiétude ou de rejet mais, dans ces moments là, rappelez-vous que cette réalité augmentée m’a transformé de conducteur en cycliste. Une transformation dont je suis fier et que je considère comme positive ! Une transformation que j’ai acceptée voire que je recherchais car elle me convenait. Tous les Strava du monde n’auront jamais aucun effet sur quelqu’un qui abhorre le vélo.

Notre tâche n’est donc pas de tenter de limiter cette incursion du virtuel dans le réel. Ce serait peine perdue. Non, notre responsabilité est de faire en sorte que les incroyables pouvoirs liés à ce progrès ne soient pas entre les mains de quelques uns mais entre les mains de chacun. Notre mission est d’encourager nos enfants à développer, démocratiser et utiliser tous les outils possibles et imaginables. De leur offrir les technologies et de leur faire confiance quant à l’usage qu’ils en feront.

Et pour répondre à la question qui vous brûle les lèvres, je ne dispose que de peu ou prou de KOM sur les segments prisés. Mais je tire une certaine fierté à être, avec le même vélo, dans les tops 10 de certains segments VTT à travers bois et de quelques segments pour purs routiers. Tiens, je proposerais bien à Strava un badge “passe partout”…

 

Photo par Jijian Fan.

Merci d'avoir pris le temps de lire ce billet librement payant. Prenez la liberté de me soutenir avec quelques milliBitcoins: 12uAH27PhWepyzZ8kzY9rMAZzwpfsWSxXt, une poignée d'euros, en me suivant sur Tipeee, Twitter, Google+ et Facebook !

Ce texte est publié par Lionel Dricot sous la licence CC-By BE.

The Drupal community is very special because of its culture of adapting to change, determination and passion, but also its fun and friendship. It is a combination that is hard to come by, even in the Open Source world. Our culture enabled us to work through really long, but ground-breaking release cycles, which also prompted us to celebrate the release of Drupal 8 with 240 parties around the world.

Throughout Drupal's 15-year history, that culture has served us really well. As the larger industry around us continues to change -- see my DrupalCon New Orleans keynote for recent examples -- we have been able to evolve Drupal accordingly. Drupal has not only survived massive changes in our industry; it has also helped drive them. Very few open source projects are 15 years old and continue to gain momentum.

Drupal 8 is creating new kinds of opportunities for Drupal. For example, who could have imagined that Lufthansa would be using Drupal 8 to build its next-generation in-flight entertainment system? Drupal 8 changes the kind of end-user experiences people can build, how we think about Drupal, and what kind of people we'll attract to our community. I firmly believe that these changes are positive for Drupal, increase Drupal's impact on the world, and grow the opportunity for our commercial ecosystem.

To seize the big opportunity ahead of us and to adjust to the changing environment, it was the Drupal Association's turn to adapt and carefully realign its strategic focus.

Over the last couple of years the Drupal Association invested heavily in Drupal.org to support the development and release of Drupal 8. Now that Drupal 8 is released, the Drupal Association's Board of Directors made the strategic decision to shift some focus from the "contribution journey" to the "evaluator's adoption journey" -- without compromising our ability to build and maintain the Drupal software. The Drupal Association will reduce its efforts on Drupal.org's collaboration tools and expand its efforts to grow Drupal's adoption and to build a larger ecosystem of technology partners.

We believe this is not only the right strategic focus at this point in Drupal 8's lifecycle, but also a necessary decision. While the Drupal Association's revenues continued to grow at a healthy pace, we invested heavily, and exhausted our available reserves supporting the Drupal 8 release. As a result, we have to right-size the organization, balance our income with our expenses, and focus on rebuilding our reserves.

In a blog post today, we provide more details on why we made these decisions and how we will continue to build a healthy long-term organization. The changes we made today help ensure that Drupal will gain momentum for decades to come. We could not make this community what it is without the participation of each and every one of you. Thanks for your support!

May 27, 2016

In the XAML world it’s very common to use the MVVM pattern. I will explain how to use the technique in a similar way with Qt and QML.

The idea is to not have too much code in the view component. Instead we have declarative bindings and move most if not all of our view code to a so called ViewModel. The ViewModel will sit in between the actual model and the view. The ViewModel typically has one to one properties for everything that the view displays. Manipulating the properties of the ViewModel alters the view through bindings. You typically don’t alter the view directly.

In our example we have two list models, two texts and one button: available-items, accepted-items, available-count, accepted-count and an accept button. Pressing the button moves items from available to accepted. It should be a simple example.

First the ViewModel.h file. The class will have a property for ~ everything the view displays:

#ifndef VIEWMODEL_H
#define VIEWMODEL_H

#include <QAbstractListModel>
#include <QObject>

class ViewModel : public QObject
{
	Q_OBJECT

	Q_PROPERTY(QAbstractListModel* availableItems READ availableItems NOTIFY availableItemsChanged )
	Q_PROPERTY(QAbstractListModel* acceptedItems READ acceptedItems NOTIFY acceptedItemsChanged )
	Q_PROPERTY(int available READ available NOTIFY availableChanged )
	Q_PROPERTY(int accepted READ accepted NOTIFY acceptedChanged )
public:

	ViewModel( QObject *parent = 0 );
	~ViewModel() { }

	QAbstractListModel* availableItems()
		{ return m_availableItems; }

	QAbstractListModel* acceptedItems()
		{ return m_acceptedItems; }

	int available ()
		{ return m_availableItems->rowCount(); }

	int accepted ()
		{ return m_acceptedItems->rowCount(); }

	Q_INVOKABLE void onButtonClicked( int availableRow );

signals:
	void availableItemsChanged();
	void acceptedItemsChanged();
	void availableChanged();
	void acceptedChanged();

private:
	QAbstractListModel* m_availableItems;
	QAbstractListModel* m_acceptedItems;
};

#endif

The ViewModel.cpp implementation of the ViewModel. This is of course a simple example. The idea is that ViewModels can be quite complicated while the view.qml remains simple:

#include <QStringListModel>

#include "ViewModel.h"

ViewModel::ViewModel( QObject *parent ) : QObject ( parent )
{
	QStringList available;
	QStringList accepted;

	available << "Two" << "Three" << "Four" << "Five";
	accepted << "One";

	m_availableItems = new QStringListModel( available, this );
	emit availableItemsChanged();

	m_acceptedItems = new QStringListModel( accepted, this );
	emit acceptedItemsChanged();
}

void ViewModel::onButtonClicked(int availableRow)
{
	QModelIndex availableIndex = m_availableItems->index( availableRow, 0, QModelIndex() );
	QVariant availableItem = m_availableItems->data( availableIndex, Qt::DisplayRole );

	int acceptedRow = m_acceptedItems->rowCount();

	m_acceptedItems->insertRows( acceptedRow, 1 );

	QModelIndex acceptedIndex = m_acceptedItems->index( acceptedRow, 0, QModelIndex() );
	m_acceptedItems->setData( acceptedIndex, availableItem );
	emit acceptedChanged();

	m_availableItems->removeRows ( availableRow, 1, QModelIndex() );
	emit availableChanged();
}

The view.qml. We’ll try to have as little JavaScript code as possible; the idea is that the coding itself is done in the ViewModel. The view should only contain view code (styling, UI, animations, etc.). The import URL and version are defined by the qmlRegisterType call in the main.cpp file, shown further below:

import QtQuick 2.0
import QtQuick.Controls 1.2

import be.codeminded.ViewModelExample 1.0

Rectangle {
    id: root
    width: 640; height: 320

	property var viewModel: ViewModel { }

	Rectangle {
		id: left
		anchors.left: parent.left
		anchors.top: parent.top
		anchors.bottom: button.top
		width: parent.width / 2
		ListView {
		    id: leftView
			anchors.left: parent.left
			anchors.right: parent.right
			anchors.top: parent.top
			anchors.bottom: leftText.top

			delegate: rowDelegate
		        model: viewModel.availableItems
		}
		Text {
			id: leftText
			anchors.left: parent.left
			anchors.right: parent.right
			anchors.bottom: parent.bottom
			height: 20
			text: viewModel.available
		}
	}

	Rectangle {
		id: right
		anchors.left: left.right
		anchors.right: parent.right
		anchors.top: parent.top
		anchors.bottom: button.top
		ListView {
		    id: rightView
			anchors.left: parent.left
			anchors.right: parent.right
			anchors.top: parent.top
			anchors.bottom: rightText.top

			delegate: rowDelegate
		        model: viewModel.acceptedItems
		}
		Text {
			id: rightText
			anchors.left: parent.left
			anchors.right: parent.right
			anchors.bottom: parent.bottom
			height: 20
			text: viewModel.accepted
		}
	}

	Component {
		id: rowDelegate
		Rectangle {
			width: parent.width
			height: 20
			color: ListView.view.currentIndex == index ? "red" : "white"
			Text { text: 'Name:' + display }
			MouseArea {
				anchors.fill: parent
				onClicked: parent.ListView.view.currentIndex = index
			}
		}
	}

	Button {
		id: button
		anchors.left: parent.left
		anchors.right: parent.right
		anchors.bottom: parent.bottom
		height: 20
	        text: "Accept item"
		onClicked: viewModel.onButtonClicked( leftView.currentIndex );
	}
}

A main.cpp example. The qmlRegisterType call defines the URL to import in the view.qml file:

#include <QGuiApplication>
#include <QQuickView>
#include <QtQml>
#include <QAbstractListModel>

#include "ViewModel.h"

int main(int argc, char *argv[])
{
	QGuiApplication app(argc, argv);
	QQuickView view;
	qRegisterMetaType<QAbstractListModel*>("QAbstractListModel*");
	qmlRegisterType<ViewModel>("be.codeminded.ViewModelExample", 1, 0, "ViewModel");
	view.setSource(QUrl("qrc:/view.qml"));
	view.show();
	return app.exec();
}

A project.pro file. Obviously you should use CMake nowadays, but oh well:

TEMPLATE += app
QT += quick
SOURCES += ViewModel.cpp main.cpp
HEADERS += ViewModel.h
RESOURCES += project.qrc

And a project.qrc file:

<!DOCTYPE RCC>
<RCC version="1.0">
<qresource prefix="/">
    <file>view.qml</file>
</qresource>
</RCC>

May 26, 2016

I’m happy to announce the immediate availability of Maps 3.6. This feature release brings marker clustering enhancements and a number of fixes.

These parameters were added to the display_map parser function, to allow for greater control over marker clustering; a usage sketch follows the list below. They are only supported together with Google Maps.

  • clustergridsize: The grid size of a cluster in pixels
  • clustermaxzoom: The maximum zoom level that a marker can be part of a cluster
  • clusterzoomonclick: If the default behavior of clicking on a cluster is to zoom in on it
  • clusteraveragecenter: If the cluster location should be the average of all its markers
  • clusterminsize: The minimum number of markers required to form a cluster
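To give an idea of how these might be combined, here is a minimal sketch of a display_map call. The locations and values are placeholders, and the markercluster switch used to turn clustering on is an assumption on my part, not taken from the release notes:

{{#display_map:Brussels; Antwerp; Ghent; Leuven
|markercluster=on
|clustergridsize=60
|clustermaxzoom=12
|clusterzoomonclick=yes
|clusterminsize=2
}}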

Bugfixes

  • Fixed missing marker cluster images for Google Maps
  • Fixed duplicate markers in OpenLayers maps
  • Fixed URL support in the icon parameter

Credits

Many thanks to Peter Grassberger, who made the listed fixes and added the new clustering parameters. Thanks also go to Karsten Hoffmeyer for miscellaneous support and to TranslateWiki for providing translations.

Upgrading

Since this is a feature release, there are no breaking changes, and you can simply run composer update, or replace the old files with the new ones.

There are, however, compatibility changes to keep in mind. As of this version, Maps requires PHP 5.5 or later and MediaWiki 1.23 or later. composer update will not give you a version of Maps incompatible with your version of PHP, though it is presently not checking your MediaWiki version. Fun fact: this is the first bump in minimum requirements since the release of Maps 2.0, way back in 2012.

May 25, 2016

Every now and then I get asked how to convince one's team members that Pair Programming is worthwhile. Often the person asking, or the people I did pair programming with, are obviously enthusiastic about the practice and willing to give it plenty of chance, yet themselves not really convinced that it actually is worth the time. In this short post I share how I look at it, in the hope it is useful to you personally, and in convincing others.

Extreme Programming

The cost of Pair Programming

Suppose you are new to the practice and doing it very badly. You have one person hogging the keyboard and not sharing their thoughts, with the other paying more attention to twitter than to the development work. In this case you basically spend twice the time for the same output. In other words, the development cost is multiplied by two.

Personally I find it tempting to think about Pair Programming as doubling the cost, even though I know better. How much more total developer time you need is unclear, and really depends on the task. The more complex the task, the less overhead Pair Programming will cause. What is clear, is that when your execution of the practice is not pathologically bad, and when the task is more complicated than something you could trivially automate, the cost multiplication is well below two. An article on c2 wiki suggests 10-15% more total developer time, with the time elapsed being about 55% compared to solo development.

If these are all the cost implications you think about with regards to Pair Programming, it’s easy to see how you will have a hard time justifying it. Let’s look at what makes the practice actually worthwhile.

The cost of not Pair Programming

If you do Pair Programming, you do not need a dedicated code review step. This is because Pair Programming is a continuous application of review. Not only do you not have to put time into a dedicated review step, the quality of the review goes up, as communication is much easier. The involved feedback loops are shortened. With dedicated review, the reviewer will often have a hard time understanding all the relevant context and intent. Questions get asked and issues get pointed out. Some time later the author of the change, who in the meanwhile has been working on something else, needs to get back to the reviewer, presumably forcing two mental context switches. When you are used to such a process, it becomes easy to become blind to this kind of waste when not paying deliberate attention to it. Pair Programming eliminates this waste.

The shorter feedback loops and enhanced documentation also help you with design questions. You have a fellow developer sitting next to you who you can bounce ideas off and they are even up to speed with what you are doing. How great is that? Pair Programming can be a lot of fun.

The above two points make Pair Programming more than pay for itself in my opinion, though it offers a number of additional benefits. You gain true collective ownership, and build shared commitment. There is knowledge transfer, and Pair Programming is an excellent way of onboarding new developers. You gain higher quality, both internal in the form of better design, and external, in the form of fewer defects. While those benefits are easy to state, they are by no means insignificant, and deserve thorough consideration.

Give Pair Programming a try

As with most practices there is a reasonable learning curve, which will slow you down at first. Such investments are needed to become a better programmer and contribute more to your team.

Many programmers are more introverted and find the notion of having to pair program rather daunting. My advice when starting is to begin with short sessions. Find a colleague you get along with reasonably well and sit down together for an hour. Don’t focus too much on how much you got done. Rather than setting some performance goal with an arbitrary deadline, focus on creating a habit such as doing one hour of Pair Programming every two days. You will automatically get better at it over time.

If you are looking for instructions on how to Pair Program, there is plenty of google-able material out there. You can start by reading the Wikipedia page. I recommend paying particular attention to the listed non-performance indicators. There are also many videos, be it conference talks or dedicated explanations of the basics.

Such disclaimer

I should note that while I have some experience with Pair Programming, I am very much a novice compared to those who have done it full time for multiple years, and can only guess at the sage incantations these mythical creatures would send your way.

Extreme Pair Programming

May 24, 2016

So although I am taking things rather slowly, I am in fact still working on Power-Ups for Autoptimize, focusing on the one most people were asking for: critical CSS. The Critical CSS Power-Up will allow one to add “above the fold” CSS for specific pages or types of pages.

The first screenshot shows the main screen (as a tab in Autoptimize), listing the pages for which Critical CSS is to be applied:

The second screenshot shows the “edit”-modal (which is almost the same when adding new rules) where you can choose what rule to create (based on URL or on WordPress Conditional Tag), the actual string from the URL or Conditional Tag and a textarea to copy/ paste the critical CSS:

ao_critcss_edit

The next step will be to contact people who already expressed interest in beta-testing Power-Ups, getting feedback from them to improve it, and hopefully making “Autoptimize Critical CSS” available somewhere in Q3 2016 (but no promises, of course).

Last week I attended the I T.A.K.E. unconference in Bucharest. This unconference is about software development, and has tracks such as code quality, DevOps, craftsmanship, microservices and leadership. In this post I share my overall impressions as well as the notes I took during the unconference.

Conference impression

This was my first attendance at I T.A.K.E., and I had not researched in much detail what the setup would look like, so I did not really know what to expect. What surprised me is that most of the unconference is actually pretty much a regular conference. For the majority of the two days, there were several tracks in parallel, with talks on various topics. The unconference part is limited to two hours each day, during which there is an open space.

Overall I enjoyed the conference and learned some interesting new things. Some talks were a bit underwhelming quality-wise, with speakers not properly using the microphone, code on slides in such quantity that no one could read it, and speakers looking at their slides the whole time instead of connecting with the audience. The parts I enjoyed most were the open space, conversations during coffee breaks, and a little pair programming. I liked I T.A.K.E. more than the recent CraftConf, though less than SoCraTes, which perhaps is a high standard to set.

Keynote: Scaling Agile

Day one started with a keynote by James Shore (who you might know from Let’s Code: Test-Driven JavaScript) on how to apply agile methods when growing beyond a single team.

The first half of the talk focused on how to divide work amongst developers, be it between multiple teams, or within a team using “lanes”. The main point that was made is that one wants to minimize dependencies between groups of developers (so people don’t get blocked by things outside of their control), and therefore the split should happen along feature boundaries, not within features themselves. This of course builds on the premise that the whole team picks up a story, and not some subset or even individuals.


A point that caught my interest is that while collective ownership of code within teams is desired, sharing responsibility between teams is more problematic. The reason for this being that supposedly people will not clean up after themselves enough, as it’s not their code, and rather resort to finger-pointing to the other team(s). As James eloquently put it:

My TL;DR for this talk is basically: low coupling, high cohesion 🙂

Mutation Testing to the rescue of your Tests

During this talk, one of the first things the speaker said is that the only goal of tests is to make sure there are no bugs in production. This very much goes against my point of view, as I think the primary value is that they allow refactoring with confidence, without which code quality suffers greatly. Additionally, tests provide plenty of other advantages, such as documenting what the system does, and forcing you to pay a minimal amount of attention to certain aspects of software design.

The speaker continued to ask about who uses test coverage, and had a quote from Uncle Bob on needing 100% test coverage. After another few minutes of build up to the inevitable denunciation of chasing test coverage as being a good idea, I left to go find a more interesting talk.

Afterwards during one of the coffee breaks I talked with some people that had joined the talk 10 minutes or so after it started and had actually found it interesting. Apparently the speaker got to the actual topic of the talk; mutation testing, and presented it as a superior metric. I did not know about mutation testing before and recommend you have a look at the Wikipedia page about it if you do not know what it is. It automates an approximation of what you do in trying to determine which tests are valuable to write. As with code coverage, one should not focus on the metric though, and merely use it as the tool that it is.

Interesting related posts:

Raising The Bar

A talk on Software Craftsmanship that made me add The Coding Dojo Handbook to my to-read list.

Metrics For Good Developers

  • Metrics are for developers, not for management.
  • Developers should be able to choose the metrics.
  • Metrics to get a real measure of quality, not just “it feels like we’re doing well”
  • Measuring the number of production defects.
  • Make metrics visible.
  • Sometimes it is good to have metrics for individuals and not the whole team.
  • They can be a feedback mechanism for self improvement.

Open Space

The Open Space is a two hour slot which puts the “un” in unconference. It starts by having a market place, where people propose sessions on topics of their interest. These sessions are typically highly interactive, in the form of self-organized discussions.

Open Space: Leadership

This session started by people writing down things they associate with good leadership, and then discussing those points.

Two books were mentioned, the first being The Five Dysfunctions of a Team.

The second book was Leadership and the One Minute Manager: Increasing Effectiveness Through Situational Leadership.

Open Space: Maintenance work: bad and good

This session was about finding reasons to dislike doing maintenance work, and then finding out how to look at it more positively. My input here was that a lot of the negative things, such as having to deal with crufty legacy code, can also be positive, in that they provide technical challenges absent in greenfield projects, and that you can refactor a mess into something nice.

I did not stay in this session until the very end, and unfortunately cannot find any pictures of the whiteboard.

Open Space: Coaching dojo

I had misheard what this was about and thought the topic was “Coding Dojo“. Instead we did a coaching exercise focused on asking open ended questions.

Are your Mocks Mocking at You?

This session was spread over two time slots, and I only attended the first part, as during the second one I had some pair programming scheduled. One of the first things covered in this talk was an explanation of the different types of Test Doubles, much like in my recent post 5 ways to write better mocks. The speakers also covered the differences between inside-out and outside-in TDD, and ended (the first time slot) with JavaScript peculiarities.

Never Develop Alone : always with a partner

In this talk, the speaker, who has been doing full-time pair programming for several years, outlined the primary benefits provided by, and challenges encountered during, pair programming.

Benefits: more focus / less distractions, more confidence, rapid feedback, knowledge sharing, fun, helps on-boarding, continuous improvement, less blaming.

Challenges: synchronization / communication, keyboard hogging

Do:

  • Ping-Pong TDD
  • Time boxing
  • Multiple keyboards
  • Pay attention and remind your pair if they don’t
  • Share your thoughts
  • Be open to new ideas and accept feedback
  • Mob programming

Live coding: Easier To Change Code

In this session the presenter walked us through some typical legacy code, and then demonstrated how one can start refactoring (relatively) safely. The code made me think of the Gilded Rose kata, though it was more elaborate/interesting. The presenter started by adding a safety net in the form of golden master tests and then proceeded with incremental refactoring.

Is management dead?

Uncle Abraham certainly is most of the time! (Though when he is not, he approves of the below list.)

  • Many books on Agile, few on Agile management
  • Most common reasons for failure of Agile projects are management related
  • The Agile Manifesto includes two management principles
  • Intrinsic motivation via Autonomy, Mastery, Purpose and Connection
  • Self-organization: fully engaged, making own choices, taking responsibility
  • Needed for self-organization: skills, T-shaped, team players, collocation, long-lived team
  • Amplify and dampen voices
  • Lean more towards delegation to foster self-organization (levels of delegation)


Visualizing codebases

This talk was about how to extract and visualize metrics from codebases. I was hoping it would include various code quality related metrics, but alas, the talk only included file level details and simple line counts.

May 22, 2016

We are good. We show it by combining our respect for privacy with security. Knowledge is indispensable for that. I argue for investing in technical people who master both.

Our government should not put everything into millions for fighting computer intrusion; it should also invest in better software.

Belgian companies sometimes make software. They should be encouraged, and steered, to do the right thing.

I would like to see our centre for cybersecurity encourage companies to build good and therefore secure software. We should invest in repression too. But we should invest just as much in high quality.

We sometimes think that, ah, we are too small. But that is not true. If we decide that here, in Belgium, software has to be good, that creates a market that will adapt itself to what we want. The key is to be steadfast.

When we say that a – b is welcome here, or not, we give shape to technology.

I expect no less from my country. Give shape.

May 21, 2016

Recently, I came across some code of a web application that, on brief inspection, was vulnerable to XSS and SQL injection attacks: the SQL queries and the HTML output were not properly escaped, and the input variables were not sanitized. After a bit more reviewing I made a list of measures and notified the developer, who quickly fixed the issues.

I was a bit surprised to come across code that was very insecure, yet took the author only a few hours to drastically improve with a few simple changes. I started wondering why the code wasn't of better quality in the first place. Did the developer not know about vulnerabilities like SQL injection and how to prevent them? Was it time pressure that kept him from writing safer code?

Anyway, there are a few guidelines to write better and safer code.

Educate yourself

As a developer you should familiarize yourself with possible vulnerabilities and how to avoid them. There are plenty of books and online tutorials covering this. A good starting point is the Top 25 Most Dangerous Software Errors list. Reading security related blogs and going to conferences (or watch talks online) is useful as well.

Use frameworks and libraries

Almost every language has a framework for web applications (Drupal, Symfony (PHP), Spring (Java), Django (Python), ...) with tools and libraries for creating forms, sanitizing input variables, properly escaping HTML output, handling cookies, checking authorization, doing user and privilege management, database-object abstraction (so you don't have to write your own SQL queries) and much more.
Those frameworks and libraries are used by a lot of applications and developers, so they are tested much more thoroughly than code you write yourself, and bugs are found more quickly.

It is also important to regularly update the libraries and frameworks you use, to have the latest bugs and vulnerabilities fixed.
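Even without a full framework, the most important protections are only a few lines of code. As a minimal illustration (not taken from the application I reviewed; the database credentials, table and column names are made up), a prepared statement plus output escaping in plain PHP looks roughly like this:

<?php
// Prepared statement: user input is bound as data, never concatenated into the SQL string.
$pdo = new PDO('mysql:host=localhost;dbname=example', 'user', 'password');
$statement = $pdo->prepare('SELECT title, body FROM articles WHERE id = :id');
$statement->execute([':id' => $_GET['id']]);
$article = $statement->fetch(PDO::FETCH_ASSOC);

// Escape everything you output, so user-controlled data cannot inject HTML or scripts.
echo '<h1>' . htmlspecialchars($article['title'], ENT_QUOTES, 'UTF-8') . '</h1>';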

Code review

Four eyes see more than two. Have your code reviewed by a coworker and use automated tools to check your code for vulnerabilities. Most IDEs have code checking tools, or you can run them in a Continuous Integration (CI) environment like Jenkins, Travis CI, Circle CI, ... to check your code during every build.
A lot of online code checking tools exist that can check your code every time you push to your version control system.
There is no silver bullet here, but a combination of manual code review and automated checks will help to spot vulnerabilities sooner.

Test your code

Code reviewing tools can't spot every bug, so testing your code is important as well. You will need automated unit tests, integration tests, ... so you can test your code during every build in your CI environment.
Writing good tests is an art and takes time, but more tests means fewer bugs remaining in your code.

Coding style

While not directly a measure against vulnerabilities, using a coding style that is common for the programming language you are using, makes your code more readable both for you, the reviewer and future maintainers of your code. Better readability makes it easier to spot bugs, maintain code and avoid new bugs.


I guess there are many more ways to improve code quality and reduce vulnerabilities. Feel free to leave a comment with your ideas.


May 20, 2016

May 19, 2016

My colleague Henk Van Der Laak made an interesting tool that checks your code against the QML coding conventions. It uses the abstract syntax tree of Qt 5.6's internal parser and a visitor design pattern.

It has a command line, but being developers ourselves we want an API too of course. Then we can integrate it in our development environments without having to use popen!

So this is how to use that API:

// a_filename is the path of the QML or JavaScript file to check;
// read its contents into 'code' before handing it to the lexer.
QFile file(a_filename);
file.open(QIODevice::ReadOnly | QIODevice::Text);
QString code = QString::fromUtf8(file.readAll());

// Parse the code
QQmlJS::Engine engine;
QQmlJS::Lexer lexer(&engine);
QQmlJS::Parser parser(&engine);

QFileInfo info(a_filename);
bool isJavaScript = info.suffix().toLower() == QLatin1String("js");
lexer.setCode(code,  1, !isJavaScript);
bool success = isJavaScript ? parser.parseProgram() : parser.parse();
if (success) {
    // Check the code
    QQmlJS::AST::UiProgram *program = parser.ast();
    CheckingVisitor checkingVisitor(a_filename);
    program->accept(&checkingVisitor);
    foreach (const QString &warning, checkingVisitor.getWarnings()) {
        qWarning() << qPrintable(warning);
    }
}

May 18, 2016

This is a time of transition for the Drupal Association. As you might have read on the Drupal Association blog, Holly Ross, our Executive Director, is moving on. Megan Sanicki, who has been with the Drupal Association for almost 6 years, and was working alongside Holly as the Drupal Association's COO, will take over Holly's role as the Executive Director.

Open source stewardship is not easy, but in the 3 years Holly was leading the Drupal Association, she led with passion, determination and transparency. She operationalized the Drupal Association and built a team that truly embraces its mission to serve the community, growing that team by over 50% over the three years of her tenure. She established a relationship with the community that wasn't there before, allowing the Drupal Association to help in new ways like supporting the Drupal 8 launch, providing test infrastructure, implementing the Drupal contribution credit system, and more. Holly also matured DrupalCon, expanding its reach to more users with conferences in Latin America and India. She also executed the Drupal 8 Accelerate Fund, which allowed direct funding of key contributors to help lead Drupal 8 to a successful release.

Holly did a lot for Drupal. She touched all of us in the Drupal community. She helped us become better and work closer together. It is sad to see her leave, but I'm confident she'll find success in future endeavors. Thanks, Holly!

Megan, the Drupal Association staff and the Board of Directors are committed to supporting the Drupal project. In this time of transition, we are focused on the work that Drupal Association must do and looking at how to do that in a sustainable way so we can support the project for many years to come.

Cache Enabler – WordPress Cache is a new page caching kid on the WordPress plugin block by the Switzerland-based KeyCDN. It’s based in part on Cachify (which has a strong user base in Germany) but seems less complex/flexible. What makes it unique, though, is that it allows one to serve pages with WEBP images instead of JPEGs to browsers that support WEBP (Safari, MS IE/Edge and Firefox don’t). To be able to do that, you’ll need to also install Optimus, an image optimization plugin that plugs into a freemium service by KeyCDN (you’ll need a premium account to convert to WEBP though).

I did some tests with Cache Enabler and it works great together with Autoptimize out of the box, especially after the latest release (1.1.0), which also hooks into AO’s autoptimize_action_cachepurged action to clear Cache Enabler’s cache if AO’s cache gets purged (to avoid having pages in cache that refer to deleted autoptimized CSS/JS files).
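For anyone maintaining their own page cache, hooking into that same action is plain WordPress plumbing. A rough sketch (only the action name comes from Autoptimize; the callback and the purge function it calls are hypothetical):

// Hypothetical example: purge your own page cache whenever Autoptimize purges its cache.
add_action( 'autoptimize_action_cachepurged', 'my_purge_page_cache' );

function my_purge_page_cache() {
    // Replace this with your caching plugin's own purge call.
    if ( function_exists( 'my_page_cache_flush' ) ) {
        my_page_cache_flush();
    }
}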

Just not sure I agree with this text on the plugin’s settings page:

Avoid […] concatenation of your assets to benefit from parallelism of HTTP/2.

because, based on previous tests by smarter people than me, concatenation of assets can still make (a lot of) sense, even on HTTP/2 :-)

Damned, QML is inconsistent! Things have a content, data or children. And apparently they can all mean the same thing. So how do we know if something is a child of something else?

After a failed stackoverflow search I gave up on copy-paste coding and invented the damn thing myself.

function isChild( a_child, a_parent ) {
	if ( a_parent === null ) {
		return false
	}

	var tmp = ( a_parent.hasOwnProperty("content") ? a_parent.content
		: ( a_parent.hasOwnProperty("children") ? a_parent.children : a_parent.data ) )

	if ( tmp === null || tmp === undefined ) {
		return false
	}

	for (var i = 0; i < tmp.length; ++i) {

		if ( tmp[i] === a_child ) {
			return true
		} else {
			if ( isChild ( a_child, tmp[i] ) ) {
				return true
			}
		}
	}
	return false
}
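A hypothetical usage example (the ids needle and haystack are made up, and isChild is the function above, defined in the same QML file):

Item {
	id: haystack

	Rectangle { id: needle }

	// Log whether 'needle' ended up somewhere inside 'haystack'.
	Component.onCompleted: console.log("needle inside haystack?", isChild(needle, haystack))
}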

May 17, 2016

The post The async Puppet pattern appeared first on ma.ttias.be.

I'm pretty sure this isn't tied to Puppet and is probably widely used by everyone else, but it only occurred to me recently what the structural benefits of this pattern are.

Async Puppet: stop fixing things in one Puppet run

This has always been a bit of a debated topic, both for me internally as well as in the Puppet community at large: should a Puppet run be 100% complete after the first run?

I'm starting to back away from that idea, having spent countless hours optimising my Puppet code to get to the "one-puppet-run-to-rule-them-all" scenario. It's much easier to gradually build your Puppet logic in steps, each step activating once the previous one has caused its final state to be set.

What I'm mostly seeing this scenario shine in is the ability to automatically add monitoring from within your Puppet code. There's support for Nagios out of the box and I contributed to the zabbixapi ruby gem to facilitate managing Zabbix host and templates from within Puppet.

Monitoring should only be added to a server when there's something to monitor. And there's only something to monitor once Puppet has done its thing and caused state on the server to be as expected.

Custom facts for async behaviour

So here's a pattern I particularly like. There are many alternatives to this one, but it's simple, straight forward and super easy to understand -- even for beginning Puppeteers.

  1. A first Puppet run starts and installs Apache with all its vhosts
  2. The second Puppet run starts and gets a fact called "apache_vhost_count", a simple integer that counts the number of vhosts configured
  3. When that fact is a positive integer (aka: there are vhosts configured), monitoring is added

This pattern takes 2 Puppet runs to be completely done: the first gets everything up-and-running, the second detects that there are things up-and-running and adds the monitoring.

Monitoring wrappers around existing Puppet modules

You've probably done this: you get a cool module from the Forge (Apache, MySQL, Redis, ...), you implement it and want to add your monitoring to it. But how? It's not cool to hack away in the modules themselves; those come in via r10k or puppet-librarian.

Here's my take on it:

  1. Create a new module, call it "monitoring"
  2. Add custom facts in there, called has_mysql, has_apache, ... for all the services you want
  3. If you want to go further, create facts like apache_vhost_count, mysql_databases_count, ... to count the specific instance of each service, to determine if it's being used or not.
  4. Use those facts to determine whether to add monitoring or not:
    if ($::has_apache > 0) and ($::apache_vhost_count > 0) {
      @@zabbix_template_link { "zbx_application_apache_${::fqdn}":
        ensure   => present,
        template => 'Application - PHP-FPM',
        host     => $::fqdn,
        require  => Zabbix_host [ $::fqdn ],
      }
    }
        

Is this perfect? Far from it. But it's pragmatic and it gets the job done.

The facts are easy to write and understand, too.

Facter.add(:apache_vhost_count) do
  confine :kernel => :linux
  setcode do
    if File.exists? "/etc/httpd/conf.d/"
      Facter::Util::Resolution.exec('ls -l /etc/httpd/conf.d | grep \'vhost-\' | wc -l')
    else
      nil
    end
  end
end

It's mostly bash (which most sysadmins understand) -- and very little Ruby (which few sysadmins understand).

The biggest benefit I see to it is that whoever implements the modules and creates the server manifests doesn't have to toggle a parameter called enable_monitoring (been there, done that) to decide whether or not that particular service should be monitored. Puppet can now figure that out on its own.

Detecting Puppet-managed services

Because some services are installed because of dependencies, the custom facts need to be clever enough to understand when they're being managed by Puppet. For instance, when you install the package "httpd-tools" because it contains the useful htpasswd tool, most package managers will automatically install the "httpd" (Apache) package, too.

Having that package present shouldn't trigger your custom facts to automatically enable monitoring; they should probably only do that when the package is being managed by Puppet.

A very simple workaround (up for debate whether it's a good one) is to have each Puppet module write a simple marker file to /etc/puppet-managed.
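A rough sketch of what that could look like inside a module (the resources below are an assumption for illustration, not taken from an existing module):

# Hypothetical sketch: each module drops a marker file so the custom facts can
# tell "installed by Puppet" apart from "pulled in as a package dependency".
file { '/etc/puppet-managed':
  ensure => directory,
}

file { '/etc/puppet-managed/apache':
  ensure  => file,
  require => File['/etc/puppet-managed'],
}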

$ ls /etc/puppet-managed
apache mysql php postfix ...

Now you can extend your custom facts with the presence of that file to determine if A) a service is Puppet managed and B) if monitoring should be added.

Facter.add(:has_apache) do
  confine :kernel => :linux
  setcode do
    if File.exists? "/sbin/httpd"
      if File.exists? "/etc/puppet-managed/apache"
        # Apache installed and Puppet managed
        true
      else
        # Apache is installed, but isn't Puppet managed
        nil
      end
    else
      # Apache isn't installed
      nil
    end
  end
end

(example explicitly split up in order to add comments)

You may also be tempted to use the defined() function (see the manual) to check whether Apache has been defined in your Puppet code and then add monitoring. However, that's dependent on the resource order in which it's evaluated.

Your code may look like this:

if (defined(Service['httpd']) {
   # Apache is managed by Puppet, add monitoring ? 
}

Puppet's manual explains the big caveat though:

Puppet depends on the configuration’s evaluation order when checking whether a resource is declared.

In other words: if your monitoring code is evaluated before your Apache code, that defined() will always return false.

Working with facter circumvents this.

Again, this pattern isn't perfect, but it allows for a clean separation of logic and -- if your team grows -- an easier way to split responsibilities, with the monitoring team and the implementation team each owning modules with their own responsibilities.

The post The async Puppet pattern appeared first on ma.ttias.be.

May 16, 2016

Last year around this time, I wrote that The Big Reverse of Web would force a major re-architecture of the web to bring the right information, to the right person, at the right time, in the right context. I believe that conversational interfaces like Amazon Echo are further proof that the big reverse is happening.

New user experience and distribution platforms only come along every 5-10 years, and when they do, they cause massive shifts in the web's underlying technology. The last big one was mobile, and the web industry adapted. Conversational interfaces could be the next user experience and distribution platform – just look at Amazon Echo (aka Alexa), Facebook's messenger or Microsoft's Conversation-as-a-Platform.

Today, hardly anyone questions whether to build a mobile-optimized website. A decade from now, we might be saying the same thing about optimizing digital experiences for voice or chat commands. The convenience of a customer experience will be a critical key differentiator. As a result, no one will think twice about optimizing their websites for multiple interaction patterns, including conversational interfaces like voice and chat. Anyone will be able to deliver a continuous user experience across multiple channels, devices and interaction patterns. In some of these cross-channel experiences, users will never even look at a website. Conversational interfaces let users disintermediate the website by asking anything and getting instant, often personalized, results.

To prototype this future, my team at Acquia built a fully functional demo based on Drupal 8 and recorded a video of it. In the demo video below, we show a sample supermarket chain called Gourmet Market. Gourmet Market wants their customers to not only shop online using their website, but also use Echo or push notifications to do business with them.

We built an Alexa integration module to connect Alexa to the Gourmet Market site and to answer questions about sale items. For example, you can speak the command: "Alexa, ask Gourmet Market what fruits are on sale today". From there, Alexa would make a call to the Gourmet Market website, finding what is on sale in the specified category and pull only the needed information related to your ask.

On the website's side, a store manager can tag certain items as "on sale", and Alexa's voice responses will automatically and instantly reflect those changes. The marketing manager needs no expertise in programming -- Alexa composes its response by talking to Drupal 8 using web service APIs.

The demo video also shows how a site could deliver smart notifications. If you ask for an item that is not on sale, the Gourmet Market site can automatically notify you via text once the store manager tags it as "On Sale".

From a technical point of view, we've had to teach Drupal how to respond to a voice command, otherwise known as a "Skill", coming into Alexa. Alexa Skills are fairly straightforward to create. First, you specify a list of "Intents", which are basically the commands you want users to run in a way very similar to Drupal's routes. From there, you specify a list of "Utterances", or sentences you want Echo to react to that map to the Intents. In the example of Gourmet Market above, the Intents would have a command called GetSaleItems. Once the command is executed, your Drupal site will receive a webhook callback on /alexa/callback with a payload of the command and any arguments. The Alexa module for Drupal 8 will validate that the request really came from Alexa, and fire a Drupal Event that allows any Drupal module to respond.
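To make that a bit more concrete, a 2016-era Alexa Skill is described by an intent schema plus a list of sample utterances. A rough sketch of what the schema for the example above might look like (the slot name and custom type are assumptions, not taken from the actual demo):

{
  "intents": [
    {
      "intent": "GetSaleItems",
      "slots": [
        { "name": "Category", "type": "LIST_OF_CATEGORIES" }
      ]
    }
  ]
}

Each sample utterance then maps a spoken sentence such as "what {Category} are on sale today" to the GetSaleItems intent.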

It's exciting to think about how new user experiences and distribution platforms will change the way we build the web in the future. As I referenced in my DrupalCon New Orleans keynote, the Drupal community needs to put some thought into how to design and build multichannel customer experiences. Voice assistance, chatbots or notifications are just one part of the greater equation. If you have any further thoughts on this topic, please share them in the comments.

Digital trends

The post Redis: OOM command not allowed when used memory > ‘maxmemory’ appeared first on ma.ttias.be.

If you're using Redis, you may find that your application logs start to show the following error message:

$ tail -f error.log
OOM command not allowed when used memory > 'maxmemory'

This can happen every time a WRITE operation is sent to Redis, to store new data.

What does it mean?

The OOM command not allowed when used memory > 'maxmemory' error means that Redis was configured with a memory limit and that particular limit was reached. In other words: its memory is full, it can't store any new data.

You can see the memory values by using the redis CLI tool.

$ redis-cli -p 6903

127.0.0.1:6903> info memory
# Memory
used_memory:3221293632
used_memory_human:3.00G
used_memory_rss:3244535808
used_memory_peak:3222595224

If you run a Redis instance with a password on it, change the redis-cli command to this:

$ redis-cli -p 6903 -a your_secret_pass

The info memory command remains the same.

The example above shows a Redis instance configured to run with a maximum of 3GB of memory and consuming all of it (=used_memory counter).

Fixing the OOM command problem

There are 3 potential fixes.

1. Increase Redis memory

Probably the easiest to do, but it has its limits. Find the Redis config (usually somewhere in /etc/redis/*) and increase the memory limit.

 $ vim /etc/redis/6903.conf
maxmemory 3gb

Somewhere in that config file, you'll find the maxmemory parameter. Modify it to your needs and restart the Redis instance afterwards.
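If a restart isn't convenient, you can also raise the limit at runtime with CONFIG SET (the 4gb value below is just an example); note that a runtime change is lost on the next restart unless you also update the config file:

$ redis-cli -p 6903 config set maxmemory 4gb
OK

$ redis-cli -p 6903 config get maxmemory
1) "maxmemory"
2) "4294967296"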

2. Change the cache invalidation settings

Redis is throwing the error because it can't store new items in memory. By default, the "cache invalidation" setting is set pretty conservatively, to volatile-lru. This means it'll remove a key with an expire set using an LRU algorithm.

This can cause items to be kept in the queue even when new items try to be stored. In other words, if your Redis instance is full, it won't just throw away the oldest items (like a Memcached would).

You can change this to a couple of alternatives:

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached? You can select among five behavior:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key accordingly to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys->random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with all the kind of policies, Redis will return an error on write
#       operations, when there are not suitable keys for eviction.
#
#       At the date of writing this commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort

In the very same Redis config you can find the directive (somewhere in /etc/redis/*), there's also an option called maxmemory-policy.

The default is:

$ grep maxmemory-policy /etc/redis/*
maxmemory-policy volatile-lru

If you don't really care about the data in memory, you can change it to something more aggressive, like allkeys-lru.

$ vim /etc/redis/6903.conf
maxmemory-policy allkeys-lru

Afterwards, restart your Redis again.

Keep in mind though that this can mean Redis removes items from its memory that haven't been persisted to disk just yet. This is configured with the save parameter, so make sure you look at these values too to determine a correct "max memory" policy. Here are the defaults:

#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#
#   Note: you can disable saving at all commenting all the save lines.

save 900 1
save 300 10
save 60 10000

With the above in mind, setting a different maxmemory-policy could mean data loss in your Redis instance!

3. Store less data in Redis

I know, stupid 'solution', right? But ask yourself this: is everything you're storing in Redis really needed? Or are you using Redis as a caching solution and just storing too much data in it?

If your SQL queries return 10 columns but realistically you only need 3 of those on a regular basis, just store those 3 values -- not all 10.

The post Redis: OOM command not allowed when used memory > ‘maxmemory’ appeared first on ma.ttias.be.

May 14, 2016

I love street photography. Walking and shooting. Walking, talking and shooting. Slightly pushing me out of my comfort zone looking for that one great photo.

Street life
Street life
Street life
Street life
Sunrise

Street photography is all fun and games until someone pulls out a handgun. The anarchy sign in the background makes these shots complete.

Gun
Gun

For more photos, check out the entire album.

The post The day Google Chrome disables HTTP/2 for nearly everyone: May 31st, 2016 appeared first on ma.ttias.be.

If you've been reading this blog for a while (or have been reading my rants on Twitter), you'll probably know this was coming already. If you haven't, here's the short version.

The Chromium project (whose end result is the Chrome browser) has switched the negotiation protocol by which it decides whether to use HTTP/1.1 or the newer HTTP/2 on May 31st, 2016 (originally planned for May 15th, 2016).

Update: Chrome 51 is released and should be updated everywhere in a couple of hours/days. If HTTP/2 stops working for you, it's probably because NPN has been disabled from now on.

That in and of itself isn't a really big deal, but the consequences unfortunately are. Previously (as in: before May 31st, 2016), a protocol named NPN was used -- Next Protocol Negotiation. This wasn't a very efficient protocol, but it got the job done.

There's a newer negotiation protocol in town called ALPN -- Application-Layer Protocol Negotiation. This is a more efficient version with more future-oriented features. It's a good decision to switch from NPN to ALPN, there are far more benefits than there are downsides.

However, on the server side -- the side which runs the webservers that in turn run HTTP/2 -- there's a rather practical issue: to support ALPN, you need at least OpenSSL 1.0.2.

So what? You're a sysadmin, upgrade your shit already!

I know. It sounds easy, right? Well, it isn't. Just for comparison, here's the current (May 2016) state of OpenSSL on Linux.

Operating System     OpenSSL version
CentOS 5             0.9.8e
CentOS 6             1.0.1e
CentOS 7             1.0.1e
Ubuntu 14.04 LTS     1.0.1f
Ubuntu 16.04 LTS     1.0.2g
Debian 7 (Wheezy)    1.0.1e
Debian 8 (Jessie)    1.0.1k

As you can tell from the list, there's a problem: out of the box, only the latest Ubuntu 16.04 LTS (out for less than a month) supports OpenSSL 1.0.2.

Upgrading OpenSSL packages isn't a trivial task, either. Since just about every other service links against the OpenSSL libraries, they too should be re-packaged (and tested!) to work against the latest OpenSSL release.

On the other hand, it's just a matter of time before distributions have to upgrade as support for OpenSSL 1.0.1 ends soon.

Support for version 1.0.1 will cease on 2016-12-31. No further releases of 1.0.1 will be made after that date. Security fixes only will be applied to 1.0.1 until then.

OpenSSL Release Strategy

To give you an idea of the scope of such an operation, on a typical LAMP server (the one powering the blogpost you're now reading), the following services all make use of the OpenSSL libraries.

$ lsof | grep libssl | awk '{print $1}' | sort | uniq
anvil
fail2ban
gdbus
gmain
httpd
postfix
mysqld
NetworkManager
nginx
php-fpm
puppet
sshd
sudo
tuned
zabbix_agent

A proper OpenSSL upgrade would cause all of those packages to be rebuilt too. That's a hassle, to say the least. And truth be told, it probably isn't just repackaging but potentially changing the code of each application to be compatible with the newer or changed APIs in OpenSSL 1.0.2.

Right now, the simplest way to run HTTP/2 on a modern server (that isn't Ubuntu 16.04 LTS) would be to run a Docker container based on Ubuntu 16.04 and run your webserver inside of it.

I don't blame Google for switching protocols and evolving the web, but I'm sad to see that as a result of it, a very large portion of Google Chrome users will have to live without HTTP/2, once again.

Before May 15th, 2016 -- a Google Chrome user would see this in its network inspector:

protocol_http2_enabled

After May 31st, it'll be old-skool HTTP/1.1.

protocol_http2_disabled

It used to be that enabling HTTP/2 in Nginx was a very simple operation, but in order to support Chrome it'll be a bit more complicated from now on.
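For reference, the Nginx side of it is still a tiny configuration change (the hostname and certificate paths below are placeholders); the hard part is the OpenSSL 1.0.2 / ALPN requirement underneath:

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;
}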

This change also didn't come out of the blue: Chrome had disabled NPN back in 2015, but quickly undid that change when the impact became clear. We knew, since the end of 2015, that this change was coming -- we were given 6 months to get support for ALPN going, but judging by the current state of OpenSSL packages, that was too little time.

If you want to keep track of the state of Red Hat (Fedora, RHEL & CentOS) upgrades, here's some further reading: RFE: Need OpenSSL 1.0.2.

As I'm mostly a CentOS user, I'm unaware of the state of Debian or Ubuntu OpenSSL packages at this time.

The post The day Google Chrome disables HTTP/2 for nearly everyone: May 31st, 2016 appeared first on ma.ttias.be.

May 13, 2016

As we all know, Qt has types like QPointer and QSharedPointer, and we know about its object trees. So when do we use what?

Let’s first go back to school, and remember the difference between composition and aggregation. Most of you probably remember drawings like this?

It taught us when to use composition, and when to use aggregation:

  • Use composition when the user can’t exist without the dependency. For example a Human can’t exist without a Head unless it ceases to be a human. You could also model Arm, Hand, Finger and Leg as aggregates but it might not make sense in your model (for a patient in a hospital perhaps it does?)
  • Use aggregate when the user can exist without the dependency: A car without a passenger is still a car in most models.

This model in the picture will for example tell us that a car’s passenger must have ten fingers.

But what does this have to do with QPointer, QSharedPointer and Qt’s object trees?

The first situation is a shared composition. Both Owner1 and Owner2 can't survive without Shared (composition, filled diamonds). For this situation you would typically use a QSharedPointer<Shared> at Owner1 and Owner2:

If there is no other owner, then it’s probably better to just use Qt’s object trees and setParent() instead. Note that for example QML’s GC is not very well aware of QSharedPointer, but does seem to understand Qt’s object trees.

The second situation is shared users. User1 and User2 can stay alive when Shared goes away (aggregation, empty diamonds). In this situation you typically use a QPointer<Shared> at User1 and at User2. You want to be aware when Shared goes away; QPointer<Shared>'s isNull() will become true after that happened.

The third situation is a mixed one. In this case you could use a QSharedPointer<Shared> or a parented raw QObject pointer (using setParent()) at Owner, but a QPointer<Shared> at User. When Owner goes away and its destructor (due to the parenting) deletes Shared, User can check for it using the previously mentioned isNull() check.

Finally if you have a typical object tree, then use QObject’s infrastructure for this.
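A minimal sketch of the first two situations in code (the class name and the flow are made up for illustration, using only the Qt smart pointer APIs mentioned above):

#include <QObject>
#include <QPointer>
#include <QSharedPointer>
#include <QDebug>

class Shared : public QObject { };

int main()
{
	// Shared composition: two owners keep Shared alive together.
	QSharedPointer<Shared> owner1(new Shared);
	QSharedPointer<Shared> owner2 = owner1;

	// Aggregation: a user only observes Shared and can outlive it.
	QPointer<Shared> user = owner1.data();

	owner1.clear();
	owner2.clear();              // last strong reference gone, Shared is deleted here

	qDebug() << user.isNull();   // true: the user notices that Shared went away
	return 0;
}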