Subscriptions

Planet Grep is open to all people who either have the Belgian nationality or live in Belgium, and who actively work with or contribute to Open Source/Free software.


Other planets can be found at the Planet Index.

A complete feed is available in a number of syndication formats: RSS 1.0, RSS 2.0, FOAF, and OPML.

The layout of this site was done by Gregory

January 25, 2015

Mattias Geniar

The PHP Paradox


I was at PHP Benelux, the annual PHP conference for Belgium, the Netherlands and Luxembourg, and I realized 2 things about PHP that I hadn't really thought of before. I call them the PHParadoxes (or PHP Paradoxes).

The Job Hunt

In most industries, it's the employee that convinces the employer he/she is worthy to work at the company.

In the PHP-world, it's the employer (aka the companies) that needs to convince the employee (aka the developer) that their company is worthy of their time and devotion.

In tech, it's the companies that persuade the developers to work for them. In any other business, it's the other way around.

I didn't actually pay much attention to this thought. But looking back, I spent the entire weekend, with everyone I talked to, mentioning that we are looking for PHP developers. What a cool place it is to work at. How we have remote workers. How we have nerf-gun wars and stress-ball battles. How we use Neo4j as our graph database. How we have really smart developers in our team.

But in many other places, it's vice versa: it's the developer who has to demonstrate their knowledge of frameworks, design patterns, multiple languages, ...

Not in PHP. In PHP land, companies put in far more effort to get PHP developers on board than the developers themselves do.

Why isn't every industry like this? It's not just the shortage of PHP developers; most of the tech industry works like this.

But the health sector has a shortage of nurses and general staff too; do you see them being recruited this actively? I don't. What makes the tech sector so different?

You Know Nothing, Jon Snow

The closing keynote was given by @SaraMG, best known for her hard work on HHVM and PHP core. She went over the new tools in the pipeline for PHP and what the PHP7 landscape could, theoretically, look like. It mostly covered features of Hack that could make their way into PHP core.

And then she mentioned static code analysis "on the fly".

php_hack_static_analysis

As soon as the file you're working on is saved, it's analysed and type errors (wrong casts, character conversions, ...) could be shown. I loved this. The room loved this.
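
To make that excitement concrete, here is the kind of mistake such an on-the-fly checker flags the moment you hit save. This is only an illustrative sketch using PHP 7 / Hack-style type declarations; the function and the call are made up, not taken from the talk.

<?php
// The declared types tell the checker that both arguments must be integers
// and that the function must return one.
function addPoints(int $a, int $b): int
{
    return $a + $b;
}

// A static analyser can flag this call as soon as the file is saved:
// a string and an array are passed where integers are expected.
addPoints("10", array());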

I loved it, right up until the point my colleague next to me, with no PHP background, said:

I don't get all this excitement ... this has been in Visual Basic for years, and it wasn't even in PHP yet?

And he was right.

I work mostly in PHP and it has blinded me. My small forays into Ruby and side languages like JavaScript don't really count: PHP has been my dominant language.

But if I look at other languages, mostly languages away from the web, we can see an entire ecosystem of IDEs, debugging tools, compilers, standard libraries, ... that support those languages. Think Visual Basic, C#, Java, .NET, ... They all have tools that PHP, even after all these years, doesn't have.

There are no complaints from PHP developers. I don't think anyone feels they're missing something. But maybe that's just because we don't know any better?

Either way, it made me think about other languages. About other development ecosystems that we can learn from, as the PHP community. I'd like to give a few other languages a try this year and see if some of those good bits can be ported back to PHP.

For many, PHP is the entry language into becoming a developer. Don't let it be the exit. And don't fixate on PHP alone.

The post The PHP Paradox appeared first on ma.ttias.be.

by Mattias Geniar at January 25, 2015 09:24 PM

Lionel Dricot

Find your dream job with Facebook!


Mark Zuckerberg has just grabbed the microphone. The applause has died down. As usual, the young Silicon Valley prodigy is both relaxed and ill at ease.

— In January 2015, exactly one year ago, researchers demonstrated that what we like on Facebook can be used to draw a psychological profile of our personality. That profile is more accurate than what our friends think of us, what our loved ones think of us, and even what we ourselves believe our personality to be. Facebook therefore knows us better than we know ourselves!

Silence in the room. The tone contrasts oddly with the usual press conferences punctuated by "Awesome! Awesome!".

— In a way, it is frightening. I admit that I myself had a moment of doubt when I learned this news.

In the dimly lit conference room, you could hear a drone fly. Even the never-ending clatter of keyboards has fallen silent.

— Then I remembered that even if this marvellous tool knows us better than we know ourselves, it is still only a tool. A tool is neither good nor evil. It merely carries out the will of its user. Why not take advantage of this windfall to noticeably improve everyone's life? Turn our irrational fear into a tool in the service of good!

He takes a few steps across the stage and approaches a member of the audience.

— Does your job take up a lot of your time?
— Er, yes, stammers the journalist into the microphone held out to her. The travel, the proofreading, the corrections, it all takes a lot of time.
— It takes up a lot of your time. But do you enjoy it?
— Er... yes. Yes, certainly, the reporter adds in an uncertain voice.
— Is it the best job you could be doing right now? The most fulfilling one?
— I honestly have no idea!
— You have no idea!

The world-famous CEO climbs back onto the stage.
— She has no idea. And neither do you! Even I have no idea. We devote most of our time and effort to an activity without knowing whether it is the one that suits us. In fact, according to our algorithms, 67% of our users are frustrated by their job! Couldn't we help them?

He pauses and winks at the audience.

— That is why we designed Facebook Dream Job. Facebook Dream Job is an almost invisible feature that analyses people's personalities, but also their interactions within a company, in order to suggest the company best suited to your personality. The proximity of your home or, if you like to travel, the possibility of relocating are taken into account. Companies that are recruiting can post job offers on their Facebook page. Since your level of engagement with your work is also measured through your Facebook activity, if a job appears that seems more interesting than your current one, it will automatically be suggested to you. Companies using Facebook for Business will automatically be suggested profiles likely to strengthen their teams.

The audience rises as one. There is a general hubbub. Hands shoot up.

— Mark! Mark! Don't you think you are forcing users' hands, invading their lives and their feelings?
— We make no decisions. When you look for a job, you go to specialised sites and trust your luck. We merely automate that process: we show you a listing. You are free to respond to it or not.
— Mark! Mark! Aren't you afraid of competing with LinkedIn?
— At a time when work and private life are closely intertwined, I think Facebook is best placed to improve the professional lives of its users. The success of Facebook for Business amply illustrates that.
— Mark! Mark! What is the business model?
— We don't need a business model for every feature. Our business model is to make people happier, more fulfilled.
— Mark! Mark!
— …

*

The Facebook Recruitment Care programme


— Our Facebook Recruitment Care programme is extremely confidential. By signing this contract, you commit to not revealing its existence.
— I know, I know. Let's get it over with!
— I want to spell out the exact terms: your key engineer, whose Facebook profile is identified in the contract, will no longer see ads for opportunities submitted by Facebook Dream Job. If he consults Dream Job manually, he will be told that his current job is ideal for his personality.
— Yes, that is what I asked for.
— However, if he actively ticks the option "I want to change jobs, suggest opportunities to me" in his settings panel, the normal behaviour will be restored.
— Is there no way to prevent that?
— No, absolutely not. Anything else would reveal the existence of this programme.
— Perhaps I could simply be informed of it?
— Come now! What about ethics?
— Yes, sorry. Well, I suppose I have no choice.
— Sign here! The contract is renewable annually. We await your payment.

 

Images by Marco Paköeningrat and Sean MacEntee.

Thank you for taking the time to read this pay-what-you-want post. Feel free to support me with a few milliBitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE licence.

flattr this!

by Lionel Dricot at January 25, 2015 08:41 PM

Damien Sandras

Ekiga 5 – Progress Report

Current status: Ekiga 5 has progressed a lot lately. OpenHub is reporting High Activity for the project. The main reason behind this is that I am again dedicating much of my spare time to the project. Unfortunately, we are again facing a lack of contributions. Most probably (among others) because the project has been […]

by Damien Sandras at January 25, 2015 05:01 PM

January 24, 2015

Mattias Geniar

PHP Benelux 2015 Unconference Schedule Saturday


In case you're looking for the Unconference schedule for today's PHP Benelux 2015, here it is. It's subject to change of course; this was written at 10:00 and probably won't be updated afterwards.

php_benelux_2015_uncon

Have fun!

The post PHP Benelux 2015 Unconference Schedule Saturday appeared first on ma.ttias.be.

by Mattias Geniar at January 24, 2015 08:56 AM

January 23, 2015

Mattias Geniar

PHP Benelux 2015: Day One


It has been a tradition for 4 or 5 years in a row now to attend PHP Benelux. For me, it's the conference focused on PHP in the Benelux (I'm not counting the Dutch PHP Conference, since I've never attended it yet).

Last year I even gave a presentation there.

This year was more about attending: a tutorial session and full-time conference + socials.

Docker Tutorial

The morning was a tutorial on Docker, given by Andreas Hucks. The preparation was superb: everyone had clear instructions to clone a git repo and download a Vagrant box, and everything was nicely packaged. The box itself worked perfectly. This is what preparation should look like.

I heard from colleagues that other tutorials first spent an hour or more setting up the environment. In a 3-hour tutorial, that's more than 30% of the time.

A clean github repo with Vagrant boxes and we were set. Perfect!

The tutorial itself was fast-paced, and everyone was following along. A lot of content in just 3 hours. And I learned a lot, so thank you Andreas!

Maximize Growth as a Software Developer

Well, this was new. A rabbi (yes, a coderabbi) giving a talk about the similarities between Jewish religion/culture and software development, with tips on growing further as a software developer.

My key takeaways to check:

There were countless other ideas/arguments that I just forgot to write down. Including the link about "low hanging fruit" and "how to get into open source", if anyone still has that?

Low-Level PHP: Getting things done with Go

The Go language has been on my radar for a while. I've read quite a bit about it, and its multi-threading model is really powerful. My problem with it? Finding a use case in everyday life.

I liked the presentation, it covered a practical scenario with code examples and comparisons to the PHP world. I'll remember the quote "Go is like programming in PHP4" as a reference to more functional programming (especially since I came from the PHP4 world).

Clear talk, good instructions. I appreciated seeing both the good and the lesser sides of Go. It's not all roses and sunshine, so it's important to know the limitations as well as the strengths of a new language.

Go is still on my todo-list.

Getting Started with Continuous Integration

I think I had different expectations of this talk. It mentioned all the tools (which I assumed everyone already knew about): PHPUnit, PHP_CodeSniffer, phpmd (PHP Mess Detector), phing, ...

I was expecting a talk about how to tie all of those together, form a strategy for actual CI and implement it. Instead, it presented the building blocks with the solution still to be built. Maybe I misread the introduction or interpreted the title differently. It wasn't for me, but that isn't to say it was a bad talk. For someone unfamiliar with those tools, it was a really good introduction.
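
For anyone in that boat, the smallest of those building blocks is a unit test. Here is a minimal PHPUnit sketch; the class and test names are invented for illustration, not taken from the talk.

<?php
// Requires PHPUnit (4.x-era base class); run with: phpunit StringHelperTest.php
class StringHelperTest extends PHPUnit_Framework_TestCase
{
    public function testStrtoupperConvertsAsciiLetters()
    {
        // assertSame checks both value and type, so '4' would not pass for 4.
        $this->assertSame('PHP BENELUX', strtoupper('php benelux'));
    }
}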

Conference Graphics

Not so much about the conference itself, but @sgrame (Peter Decuyper) made some really impressive illustrations during the presentations, which he posted on his Twitter account.

The socials

They never disappoint. In part because the organisation puts in a lot of effort, with sponsors and side entertainment, but mainly because the PHP community in the Benelux consists of really nice people. I had a few drinks, a lot of laughs and even more interesting conversations with people I knew and people I didn't.

But I failed in my plan:

Something to try next: better facial recognition!

Looking forward to Saturday!

The post PHP Benelux 2015: Day One appeared first on ma.ttias.be.

by Mattias Geniar at January 23, 2015 11:10 PM

Fabian Arrotin

More builders available for Koji/CBS

As you probably know, the CentOS Project now hosts the CBS (aka Community Build System) effort, which is used to build all packages for the CentOS SIGs.

There was already one physical node dedicated to Koji Web and Koji Hub, and another node dedicated to the build threads (koji-builder). As we now have more people building packages, we thought it was time to add more builders to the mix, and here we go: http://cbs.centos.org/koji/hosts now lists two additional machines dedicated to Koji/CBS.

Each of those added nodes has 2 Intel(R) Xeon(R) E5-2650 CPUs @ 2.00GHz with 8 cores per socket (plus Hyper-Threading enabled) and 32GB of RAM. Let's see how the SIG members keep those builders busy, churning out a bunch of interesting packages for the CentOS community :-) . Have a nice weekend!

by fabian.arrotin at January 23, 2015 04:54 PM

January 22, 2015

Les Jeudis du Libre

Mons, 19 February: Pharo

This Thursday, 19 February 2015 at 7 p.m., the 36th Mons session of Belgium's Jeudis du Libre will take place.

The topic of this session: Pharo (an object-oriented programming language)

Theme: live programming, programming languages, the Web

Audience: developers | students | academics

Speaker: Stéphane Ducasse (INRIA RMoD Team, Lille)

Venue: Université de Mons, Campus Plaine de Nimy, avenue Maistriau, Grands Amphithéâtres, Auditoire Curie (see the map on the UMONS website, or the OSM map).

Attendance is free and only requires registration by name, preferably in advance or at the door. Please indicate your intention by signing up via http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons are also supported by our partners: CETIC, Normation, OpenSides, MeaWeb, NextLab, Phonoid and Creative Monkeys.

If you are interested in this monthly series, feel free to check the agenda and subscribe to the mailing list to receive all announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organised on the premises of, and in collaboration with, Mons higher-education institutions involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which promotes free software.

Description: Pharo is a pure object-oriented, dynamically typed, reflective language. It is inspired by Smalltalk, but Pharo aims to reinvent Smalltalk. For example, Pharo 4.0 contains a new compiler that makes first-class instance variables possible. A new reflective protocol is in the works, as well as a module system. We bootstrap Pharo completely and can produce a full kernel in 11k. Pharo offers live programming.

However, Pharo's raison d'être is to create an ecosystem in which innovation and business can flourish. Pharo's goal is to let programmers do business in a fun and effective way! Pharo's web stack is sexy: Seaside, Seaside-Rest, Reef (client/server components), Magritte (a metamodel for form generation), the superb Zinc HTTP/S server, Voyage (a MongoDB layer), and more.

This presentation will give a quick tour of the goals, the community and the current achievements. A few existing frameworks will be surveyed. Then a quick overview of the syntax will be given, and we will code a small language together to share the feel of programming in Pharo. Finally, Pharo improves every day: our philosophy is one improvement per day. As Kent Beck told us, we often forget that doing one thing every day is the best way to move forward.

Pharo is "close to home and happening now!", as shown by the Pharo Days 2015, which will gather more than 70 developers from all over Europe in Lille on 29 and 30 January!

Bio: Stéphane Ducasse is an Inria research director (first class) and leads the RMoD team in Lille. He is an expert in object-oriented design, the design of object-oriented languages, reflective programming, and the maintenance and evolution of large applications (visualisation, metrics, meta-modelling). His work on traits has been adopted in AmbientTalk, Slate, Pharo, Perl 6, PHP 5.4 and Squeak; it has been ported to Ruby and JavaScript, and has influenced the Scala and Fortress languages. Stéphane Ducasse is one of the developers of Pharo, an open-source language inspired by Smalltalk. He is also one of the developers of Moose, an analysis platform, and one of the founders of Synectique, a company offering dedicated analysis tools. He is the author of numerous scientific publications (his h-index is 47 according to Google Scholar) and of several books on learning to program and on other topics such as web programming (see http://book.seaside.st).

by Didier Villers at January 22, 2015 08:40 PM

FOSDEM organizers

Join your fellow hackers for a drink next Friday!

Looking for something to do the evening before FOSDEM? Like every year, many FOSDEM attendees are planning to enjoy some beers at the Delirium Café, in a beautiful alley near the Grand Place in Brussels. We have reserved most of the bar (most of the alley in fact!) again this year. Come and join us next Friday, 30 January, from 17:00(ish). Read the beer event page for all the details! If the beer event is very busy or if you would like dinner before heading over (Delirium Café does not serve food), a number of other events are on.

January 22, 2015 03:00 PM

January 21, 2015

Mattias Geniar

Meet Microsoft’s Project Spartan, The New IE6


I don't often get excited about Microsoft, but between their open source strategy and this new browser, they may be heading in the right direction. Unless this turns out to be another IE6.

Microsoft will ship their Windows 10 release with a new browser called "Spartan".

project_spartan

It's supposed to be a trimmed down version of Internet Explorer, but compatible with Google's Chrome Extensions (these are still rumours at this point). These browser extensions are in fact just Javascript, HTML and CSS (like Firefox extensions).

By adding support for Chrome's extensions, it would make it a lot easier and faster for plugin developers to publish their extensions for Spartan. And it instantly buys Microsoft a great share of the extension market.

Sounds good so far, right?

Here's the downside: Microsoft is adding a rendering engine of its own to Project Spartan, for "speed" and "security". That means no trusted WebKit, Blink or Gecko rendering engine like we know from Safari, Chrome or Firefox.

Is it a fork of one of those projects? Is it really brand new? What's the compatibility with today's standards like? To quote Microsoft: "that new browser is about being fast and compatible with the modern web".

Are we looking at another CSS conditional stylesheet?

css_spartan_style_switcher

Spartan looked like a good move, but yet another HTML/CSS renderer with quirks of its own? That's a hard sell. Maybe I'm too cynical, but I had great expectations for this project, especially after the rumours of Chrome extension compatibility.

I'm not sure this story will have a happy ending.

The post Meet Microsoft’s Project Spartan, The New IE6 appeared first on ma.ttias.be.

by Mattias Geniar at January 21, 2015 08:48 PM

FOSDEM organizers

Main tracks schedule is complete

With just over a week to go until FOSDEM 2015, our main tracks schedule is now complete. We are proud to announce the following talks:

Closing keynote: "Living on Mars: A Beginner's Guide" by Ryan MacDonald
Miscellaneous track: "Stretching out for trustworthy reproducible builds" by Holger Levsen
Security track: "Keccak and SHA-3: code and standard updates" by Gilles Van Assche, Joan Daemen and Michaël Peeters
Time track: "Precise time: from CPU clocks to hacking the Universe" by Tom Van Baak

January 21, 2015 03:00 PM

Dries Buytaert

I am what I read

Almost every night before bed, I spend time reading. I love the feeling of falling asleep a little wiser than when I woke up. And often I read in the morning too. I read about photography, technology, investing, business, and more.

I love reading anything that provides a "mental workout"; articles or blog posts that stretch my mind or that point me in a different direction, that help me articulate my own emotions and my thoughts, or that make my imagination travel in time and space. Changing minds is changing lives. Reading makes me who I am.

While reading begets more reading, I love writing down my own thoughts as well, and that is what I decided to do tonight. Sleep well!

by Dries at January 21, 2015 06:05 AM

January 20, 2015

Mattias Geniar

Security Panel Lands In Firefox 37


Firefox Nightly (or if you prefer, Firefox's Developer Edition) just got a pretty interesting new feature, called the Security Panel.

Just 2 weeks ago, Jerod Santo blogged about browsers having a "security tab", with an overview of the most common security best practices and checks. Craig Francis made an interactive demo to show it off.

The idea of a "security panel" appears to have been proposed first by Joel Weinberger and led to some discussion with Chris Palmer, after which Craig Francis made a first version of the panel.

And now, Firefox version 37 ships with a security panel.

The Network Monitor is the home of our other new tool, the security panel. Selecting a request in the network panel now displays a security panel in the request inspector. The panel reveals a list of information about the request’s connection, host, as well as the certificate used.

The security panel can help debug issues related to SSL protocol versions [...] and can help ensure that sufficiently strong security measures are implemented.

Security Panel in the Network Inspector

Someone got what they wanted.

firefox_developer_security_tab

The Security Panel doesn't show a lot just yet, but I like where this is heading. So far, we've got:

Jerod's example went a lot further. It showed the Content Security Policy, Cross Site Request Forgeries, Cross Site Scripting, Frame Injection, ...

What's in Firefox right now is, I hope, just the start. Right now, the panel in itself isn't all that useful. It's information that you can gather from the browser already, just hidden in many different places.

Here's what I'm hoping: that the security panel isn't just a quick response to the request for more security features, but a real commitment. I'm curious how they plan on keeping it up to date. Even with the rapid Firefox releases, the security world is moving at a very fast pace. Today's safe SSL configs are tomorrow's POODLE.

Can browsers keep up? Will this give users a false sense of security if that panel were to show all OKs? Rumour has it Chrome is working on a similar feature. What will they do differently?

The post Security Panel Lands In Firefox 37 appeared first on ma.ttias.be.

by Mattias Geniar at January 20, 2015 08:12 PM

FOSDEM organizers

Fourth set of FOSDEM 2015 speaker interviews

We are pleased to announce a fourth round of interviews with some of our main track speakers:

Alex Bradbury: lowRISC: The path to an open-source SoC
Federico Vaga and Matthieu Cattin: A GPS watch made of free software and hardware
Harlan Stenn: NTF's General Timestamp API and Library: Current timestamps suck. We can do much better
Martin Burnicki: Technical Aspects of Leap Second Propagation and Evaluation
Pepijn Noltes: Modularizing C software with Apache Celix
Steve Klabnik: The story of Rust

Our interviews page is already filling up nicely with a diverse set of main track speakers. Next week we'll publish…

January 20, 2015 03:00 PM

Frank Goossens

Fixing Firefox’ LessChromeHD to reclaim lost screen real estate

I had been happily auto-hiding the Firefox navigation bar on my small-screen netbook for a couple of years, until that add-on (LessChromeHD from the Prospector series) stopped working after I upgraded to Firefox 35. So I started Firefox from the command line and spotted this error:

addons.xpi WARN Error loading bootstrap.js for lessChrome.HD@prospector.labs.mozilla: TypeError: redeclaration of variable event (resource://gre/modules/addons/XPIProvider.jsm -> jar:file:///home/frank/.mozilla/firefox/jy6bws91.default/extensions/lessChrome.HD@prospector.labs.mozilla.xpi!/bootstrap.js:226:8)

A quick look at the source code confirmed that "event" was declared twice, once on line 210 and a second time on line 226. The fix, obviously, is simple: on lines 226-228, replace all references to "event" with e.g. "shownEvent":

let shownEvent = document.createEvent("Event");
shownEvent.initEvent("LessChromeShown", true, false);
trigger.dispatchEvent(shownEvent);

You can do this yourself by unzipping "lessChrome.HD@prospector.labs.mozilla.xpi" in your extensions folder, editing bootstrap.js and updating the xpi. Or you could wait for the Mozillians to update LessChromeHD.

by frank at January 20, 2015 06:12 AM

January 19, 2015

Mattias Geniar

Nginx sets HTTP 200 OK on PHP-FPM parse errors


Here's an interesting bit of behaviour: when a PHP error occurs in a PHP-FPM pool, nginx can still reply with an HTTP 200 status code -- indicating everything is OK -- if the PHP script returns output.

This is tested running PHP 5.4.36 and nginx 1.6.2.
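
The post doesn't show the offending script itself; a hypothetical test.php producing the same parse error on line 3 could be as small as this (my reconstruction, not the author's file):

<?php
echo date('H:i:s');
<p>a stray tag on line 3 is enough to trigger "syntax error, unexpected '<'"</p>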

For instance:

php-fpm-error.log:
[error] 24802#0: *7 FastCGI sent in stderr: "PHP message: PHP Parse error:  syntax error, unexpected '<' in test.php on line 3" while reading response header from upstream, client: 127.0.0.1, server: , request: "GET /test.php?time=1421685293 HTTP/1.1"

nginx-access.log:
127.0.0.1 - - [...] "GET /test.php?time=1421685293 HTTP/1.1" 200 110 "-" "curl"

The actual response seen by the client shows the exact same HTTP 200 OK status code.

$  curl -i localhost/test.php?time=`date +%s`
HTTP/1.1 200 OK
Server: nginx
...

Parse error: syntax error, unexpected '<' in test.php on line 3

Enabling or disabling PHP's display_errors option is what makes the difference. This option determines whether errors should be printed to the screen (= On) or hidden (=Off).

If nginx detects output from the FastCGI upstream, it considers the response valid, even if the upstream (in this case, php-fpm) triggered an error.

Disabling display_errors in the PHP-FPM pool fixes this.

php_admin_value[display_errors] = Off

It prevents the PHP-script from showing error output to the screen, which in turn causes nginx to correctly throw an HTTP 500 Internal Server Error.

$  curl -i localhost:8080/test.php?time=`date +%s`
HTTP/1.1 500 Internal Server Error
Server: nginx
...

(no output is shown, empty response)

You can still log all errors to a file, with the error_log directive.

php_admin_value[error_log] = /var/log/php-fpm/error.log
php_admin_flag[log_errors] = on

I'll need to dig deeper into this, because the display_errors option shouldn't have this kind of effect on how nginx handles its HTTP response codes. An error from the backend is still an error, regardless of whether there's output or not.

If this isn't what you're experiencing, have a look at nginx's fastcgi_intercept_errors, a config that allows you to catch upstream errors and substitute them with your own error pages.
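
For completeness, this is roughly how that directive is used inside a server block; the socket path and error page location below are examples of mine, not taken from this post:

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php-fpm.sock;

    # With this on, nginx serves its own error_page for 4xx/5xx statuses
    # returned by PHP-FPM instead of passing the upstream body through.
    fastcgi_intercept_errors on;
    error_page 500 502 503 504 /50x.html;
}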

The post Nginx sets HTTP 200 OK on PHP-FPM parse errors appeared first on ma.ttias.be.

by Mattias Geniar at January 19, 2015 04:42 PM

Frank Goossens

Music from Duyster and Our Tube: Schneider TM decomposing the Light

Schneider TM‘s bips & bleeps version of “There is a Light That Never Goes Out”, as heard yesterday evening on “Duyster”;

YouTube Video
Watch this video on YouTube or on Easy Youtube.

Thanks for the help Matthias!

by frank at January 19, 2015 03:57 PM

January 18, 2015

Mattias Geniar

The Frontpage of Hacker News: Stats, Graphs & Some Analysis


A week ago, I found what I consider a bug in Audi's cruise control functions. So I did what I always do, I whipped up a blogpost. On Monday, I submitted that post to HN, and it worked.

Hacker News

Within minutes of submitting to Hacker News, the upvotes started. It reached the frontpage in about 20-30 minutes, and it stayed there for the entire day.

The first few hours after submitting, it even got stuck at the very top of the page.

hn_frontpage

That was fun.

The Numbers

I then did what every normal geek does when he finds his post made it to the top of HN, I looked at my Google Analytics stats.

hn_frontpage_google_analytics

Not bad.

But I got a bit worried. Could my pretty little server handle this? I run SSL on this site and a pretty "default" WordPress. Luckily, I had installed Wordfence, which generates a static HTML version of each post, so it didn't have to go through the PHP -> MySQL dance every single time.

HTTP req/s and bandwidth consumed

The CPU load was mostly running around 20-25%, and was taken up by Nginx. That means the load came from the SSL handling, not from PHP.

Around its peak, the server was happily pushing around 600 HTTP req/s.

hn_frontpage_nginx_requests

And because the page consisted of 2 (non-optimised) images, it consumed a considerable amount of bandwidth as well. In about 30 minutes, my server went from 1Mbps to just short of 60Mbps.

hn_frontpage_network_traffic

In the longer run, you can see the peak very clearly and it starts to slowly diminish after 2 hours.

hn_frontpage_latest_6_hours

Thankfully, the server bandwidth is on the house. ;-)

Pageview statistics

In terms of actual pageviews, I feel I can't complain. The post made it to 100 upvotes, which is about average for an HN frontpage post.

It doesn't come close to the 700+ upvoted posts obviously, those will get a multitude of this traffic.

hn_frontpage_google_analytics_stats

The stats above show Google Analytics, which measured a total of 25.090 pageviews.

hn_frontpage_jetpack_stats

WordPress' Jetpack plugin, which also does statistics, measured a total of 27.464 visits, which is shown in the image above.

If I look at the raw Nginx access logs, I count a total of 37.955 hits to that particular page. That includes bots, scrapers, link-prefetchers (like Reddit, Twitter, ...), ...

hn_frontpage_browser_stats

More than 60% of the browser share went to Google Chrome, with Firefox at 16%, Safari at 12% and IE taking less than 2% of the share. On HN, it's obvious Chrome has won the war.

Internet Comments: YOLO

And besides a few dickhead comments, I feel the post did alright.

hn_comments_1

We'll always have those.

hn_comments_2

I did relax, actually. Thanks for the advice!

Lessons learned

All in all, I'm very happy with the numbers.

I'm also happy I took the time to install a static HTML generator for WordPress, otherwise my server couldn't have handled the load.

I should have optimised the images in the page (each around 700Kb in size); taking them down to less than 100Kb each would have cut the bandwidth consumed to roughly a seventh. I merely uploaded them to the server and embedded them, which was foolish.

I also missed a few opportunities to lure that HN traffic onto other posts: I didn't link to anything else, and most of the exit traffic left from that same page. So my "marketing skills" need some more work. ;-)

The post The Frontpage of Hacker News: Stats, Graphs & Some Analysis appeared first on ma.ttias.be.

by Mattias Geniar at January 18, 2015 03:08 PM

PHP6: The Missing Version Number


For those active in the PHP community for a while, it's not exactly a secret. But to the outside world, it may seem odd that the PHP version numbers jump from the 5.x series straight to 7.x, without the magical number 6.

Why is there no PHP6?

The main reason for not having a PHP 6 version is, in fact, marketing.

There have been attempts at making a PHP 6 release as early as 2005, which would feature UTF-8/Unicode support (at last!). But those efforts never succeeded. As time went on, that gave the PHP 6 release a bad name: it's the version that was in development forever but was never released.

In that regard, it's similar to IPv5, the "missing" version between the now popular IPv4 and the "new" IPv6 Internet Protocol.

So, as with anything in PHP, there was a vote. Should the PHP 6 name be kept, even though it has a bad reputation/name? The answer finally came in the form of "no, we will rename the next version of PHP to PHP 7".

The main arguments were:

- First and foremost, PHP 6 already existed and it was something completely different.

-- While it's true that the other PHP 6 never reached General Availability, it was still a very widely published and well-known project conducted by php.net that will share absolutely nothing with the version that is under discussion now. Anybody who knew what PHP 6 is (and there are many) will have a strong misconception in his or her mind as to the contents and features of this new upcoming version (essentially, that it's all about Unicode).
PHP6's naming debacle

In fact, many believe -- myself included -- that the PHP 5.3 release should have been the PHP 6 release. It introduced a lot of new features and quite a few backwards-incompatible changes.

Since a lot of debate was already going on about PHP 6 (it had over 5 years of "speculation", since it started in 2005 and was abandoned in 2010), the PHP6 name was referenced in quite a lot of places already.

This previous attempt at a new major version was also developed under the name of PHP 6 and as such there are various resources referring to it, including a number of books. There is concern that there might be confusion between the abandoned previous attempt and the work that is currently happening.
PHP6's naming debacle

PHP6 failed and never came to be. PHP7, however, has a clear timeline and is looking really good.

The post PHP6: The Missing Version Number appeared first on ma.ttias.be.

by Mattias Geniar at January 18, 2015 11:17 AM

A good week for this blog!


This has been an interesting week for this blog, in terms of traffic numbers and "prominent" features.

On Monday, one of my posts made it to the Hacker News frontpage with a little over 100 points, and stayed there for the better part of the day. I'll do a more in-depth write-up on the details (traffic, lessons learned, ...) at a later point, since the stats are pretty impressive.

My article on HTTP/2 then got mentioned on the HighScalability.com blog.

Not much later, one of the Anonymous Twitter accounts (yes, that Anonymous), with 1.5 million followers, tweeted about a post on learning resources for systemd.

That same systemd post got featured on the Homepage of Linux.com (and still is at the time of writing) in the tutorial section. Some time after, someone (not me this time) posted the link to /r/LinuxActionShow on Reddit. All in all, the learning systemd post got quite a lot of traction on Twitter.

I didn't expect this week to be this intensive in terms of blog-post traffic. But hey, I'm not complaining. Most bloggers get their kicks out of great numbers of page views, and I'm no exception.

Here's to next week!

The post A good week for this blog! appeared first on ma.ttias.be.

by Mattias Geniar at January 18, 2015 09:51 AM

January 17, 2015

Lionel Dricot

Je suis un prisonnier


A man sacrificed his marriage and his family life, and neglected his children's upbringing, in order to provide for his disabled brother. And he always took that sacrifice for granted. Until the day when...

Go here to watch the short film and find out what happens next. Don't hesitate to support it if you enjoyed it.

« Je suis un prisonnier » ("I am a prisoner") is the first short film for which I wrote the screenplay without directing it myself. Directed by Thomas van der Straeten for the Festival Nikon, the writing of « Je suis un prisonnier » came with heavy constraints: 140 seconds maximum, the theme of "choice", a title starting with « Je suis… » ("I am...") and a minimal budget. That very short running time gave me the idea of using the title not as a descriptive element but as an element that explains the story. In the end, it may be a little obscure...

Writing a screenplay without directing it myself was a new and particularly instructive experience. There was no longer any question of filling the script's gaps during the shoot or even the edit (I have, before, had to hastily shoot an extra scene while editing). Looking at the result, I note several points:

The moral: practice makes perfect. While I have always dreamed of being an actor, a director and a screenwriter, I realise that the screenplay is the part that interests and excites me the most. So I have a real desire to continue down this path, and I am open to collaboration proposals, within the limits of my schedule. Calling all directors short of ideas!

And well done, Thomas, on your first film and our first collaboration. I hope there will be more.

Thank you for taking the time to read this pay-what-you-want post. Feel free to support me with a few milliBitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE licence.

flattr this!

by Lionel Dricot at January 17, 2015 10:17 AM

January 16, 2015

Frank Goossens

Liberté et/ou sécurité

liberté et ou securité

As found on Facebook (via Serge)

by frank at January 16, 2015 08:09 PM

Mattias Geniar

PHP7 To Remove Deprecated Functionality


I'm sure this must have been quite an internal mailing-list battle, but I'm glad they listened to the community and decided to do The Right Thing for PHP, the language.

This RFC proposes to remove functionality which has been deprecated during the 5.x cycle.

...

All removals have been accepted.

wiki.php.net, RFC to remove deprecated features

Most notably, I believe, are the following deprecated features which will be removed entirely from the PHP7 codebase:

More can be read on the PHP RFC page to remove deprecated functionality in PHP7. I'm all in favor of this vote, good job!
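
As a concrete illustration (my own example, not quoted from the RFC), one piece of functionality deprecated during 5.x that no longer works in PHP7 is the preg_replace() /e (eval) modifier; code relying on it has to move to preg_replace_callback():

<?php
// Old 5.x style, gone in PHP7:
//   preg_replace('/\w+/e', 'strtoupper("$0")', $text);
// The replacement:
$text = 'hello php7';
echo preg_replace_callback('/\w+/', function ($m) {
    return strtoupper($m[0]);
}, $text); // prints "HELLO PHP7"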

(PS: if you're wondering where version 6 of PHP went, have a look here: PHP6: The Missing Version Number)

The post PHP7 To Remove Deprecated Functionality appeared first on ma.ttias.be.

by Mattias Geniar at January 16, 2015 07:53 PM

Ruben Vermeersch

Surviving winter as a motorsports fan.

Winter is that time of the year when nothing happens in the motorsport world (one exception: Dakar). Here are a few recommendations to help you through the agonizing wait:

Formula One

Start out with It Is What It Is, the autobiography of David Coulthard. It only goes until the end of 2007, but nevertheless it’s a fascinating read: rarely do you hear a sportsman speak with such openness. A good and honest insight into the mind of a sportsman and definitely not the politically correct version you’ll see on the BBC.

It Is What It Is

Next up: The Mechanic’s Tale: Life in the Pit-Lanes of Formula One by Steve Matchett, a former Benetton F1 mechanic. This covers the other side of the team: the mechanics and the engineers.

The Mechanic's Tale: Life in the Pit-Lanes of Formula One

Still feel like reading? Dive into the books of Sid Watkins, who deserves huge amounts of credit for transforming a very deadly sport into something surprisingly safe (or as he likes to point out: riding a horse is much more dangerous).

He wrote two books:

Both describe the efforts to improve safety and are filled with anecdotes.

And finally, if you prefer movies, two more recommendations. Rush, an epic story about the rivalry between Niki Lauda and James Hunt. Even my girlfriend enjoyed it and she has zero interest in motorsports.

Rush

And finally Senna, the documentary about Ayrton Senna, probably the most mythical Formula One driver of all time.

Senna

Le Mans

On to that other legend: The 24 hours of Le Mans.

I cannot recommend the book Le Mans by Koen Vergeer enough. It’s beautiful, it captures the atmosphere brilliantly and seamlessly mixes it with the history of this event.

But you’ll have to go the extra mile for it: it’s in Dutch, it’s out of print and it’s getting exceedingly rare to find.

Le Mans

Nothing is lost if you can’t get hold of it. There’s also the 1971 movie with Steve McQueen: Le Mans.

It’s everything that modern racing movies are not: there’s no CG here, barely any dialog and the story is agonizingly slow if you compare it to the average Hollywood blockbuster.

But that’s the beauty of it: in this movie the talking is done by the engines. Probably the last great racing movie that featured only real cars and real driving.

Le Mans

Motorcycles

Motorcycles aren’t really my thing (not enough wheels), but I have always been in awe for the street racing that happens during the Isle of Man TT. Probably one of the most crazy races in the world.

Riding Man by Mark Gardiner documents the experiences of a reporter who decides to participate in the TT.

Riding Man

And to finish, the brilliant documentary TT3D: Closer to the Edge gives a good insight into the minds of these drivers.

It seems to be available online. If nothing else, I recommend you watch the first two minutes: the onboard shots of the bike accelerating on the first straight are downright terrifying.

TT3D: Closer to the Edge

Rounding up

By the time you’ve read/seen all of the above, it should finally be spring again. I hope you enjoyed this list. Any suggestions about things that would belong in this list are greatly appreciated, send them over!

by Ruben at January 16, 2015 05:07 PM

Mattias Geniar

Learning systemd


Systemd is coming to a linux distro near you.

In fact, if you're using RHEL 7+, CentOS 7+, Fedora 15+ or Arch, you're already using systemd. You can always stick to a distribution that stays clear of systemd, but chances are you'll eventually run into systemd -- so why not get to know it a little better?

Here's a set of resources I found useful.

I hope these resources can be valuable to you too!

And who knows, after reading up on systemd, you may actually like it? I know I'm looking forward to using it!

The post Learning systemd appeared first on ma.ttias.be.

by Mattias Geniar at January 16, 2015 04:41 PM

January 15, 2015

Frank Goossens

Your own, more mobile deredactie in 5 steps

Even though you can view my alternative mobile redactie here, you might, for reasons entirely your own, prefer to have your very own personal redactie?

Well, you can, in 5 simple steps, thanks to OpenShift, Red Hat's freemium PaaS platform. It goes roughly like this:

  1. Create a free account on OpenShift
  2. Click "Create your first application now"
  3. Type PHP into the search box and select the PHP 5.4 cartridge
  4. Enter a name for the public URL, copy/paste https://github.com/futtta/redactie into the source code field and click "Create application"
  5. Be patient for a moment while your very own redactie is created. On the last screen you can optionally configure git access ("Will you be changing the code of this application?") or click "Visit app in the browser" right away (in my case that points to http://mijnredactie-futtta.rhcloud.com/).

Spreading the news, tiens!

by frank at January 15, 2015 05:58 PM

Mattias Geniar

Does SPDY (or HTTP/2) Actually Help?


Here's an interesting PDF to read: How speedy is SPDY?.

I'll spoil the results for you, but still encourage you to read the entire PDF to see how the conclusions were built.

Conclusions
-- We experimented with SPDY page loads over a large parameter space
-- Most performance impact of SPDY over HTTP comes from its single TCP connection
-- Browser computation and dependencies in real pages reduce the impact of SPDY
-- To improve further, we need to restructure the page load process

The last point, about having to restructure the page load, fits in nicely with my article on architecting websites for the HTTP/2 era, because it will require restructuring.

The end-result can be anything from a 10% to 80% improvement, in terms of perceived latency and bandwidth consumed.

While this test was performed on SPDY, and not HTTP/2 directly, I believe the end-results can be similar. But to be certain, we'll have to benchmark HTTP/2 directly instead of SPDY.

The post Does SPDY (or HTTP/2) Actually Help? appeared first on ma.ttias.be.

by Mattias Geniar at January 15, 2015 09:02 AM

January 14, 2015

Dries Buytaert

Drupal retrospective 2014

It's that time again. Time to look back at 2014, and to look forward to 2015. For Drupal in 2014, it was all about Drupal 8. As Drupal 8's development enters its fourth (and hopefully, final) year of development, it's a good time to reflect on all the work achieved by the Drupal 8 team so far, and to talk about Drupal 8's momentum heading into the final stretch to the release.

Drupal 8 will have 200 new features. Among the larger features that I'm excited about are the responsive design, HTML5 support, the native web service support, the much improved multilingual support, the configuration management system, a built-in WYSIWYG editor, in-place editing, streamlined content editing, the improved entity system, and more. The list of improvements is long!

My favorite part of Drupal 8 is that it will make building all types of Drupal sites — both big and small — much easier than with Drupal 7.

Key accomplishments in 2014 include:

Drupal 8 beta 1 released

On October 1, 2014, amidst the fanfare at DrupalCon Amsterdam, we released Drupal 8 beta 1. This was an important milestone in the project, marking the finalization of major APIs, which enables contributed modules to begin porting in earnest.

Total number of Drupal 8 contributors surpasses 2,500

Our 2,500th core contributor was Tasya Rukmana (tadityar), a high-school student participating in Google Code-in 2014! Awesome.

Kick-starting contributed modules in Drupal 8

Drupal 8's new object-oriented API represents a significant paradigm shift for developers (there are many benefits to this). To help Drupal 7 pros make the jump to Drupal 8, Acquia funded the Drupal Module Upgrader project. Not only will this project scan a Drupal 7 module and generate a report pointing to the appropriate documentation on how to port it; there is even a mode that automatically rewrites much of your module's code for Drupal 8, eliminating a huge chunk of the work.

Sprints, sprints and more sprints!

We organized dozens of sprints all around the world, and together hundreds of people came together in "real life" to help get Drupal 8 released. Sprints are a key part of momentum-building in Drupal, by laser-focusing on a specific goal, or by pairing both new and experienced contributors together for mentorship. Not only do sprints make solving tough issues easier, they also provide opportunities for building relationships and "leveling up" your skills.

Drupal 8 accelerate fund

Though it was launched just a month ago, the Drupal Association's Drupal 8 Accelerate Fund is already helping to add velocity to Drupal 8, by paying key contributors to help fix particularly onerous critical issues.

What is in store for 2015?

Getting the Drupal 8 release done

Our current focus is resolving the Drupal 8 upgrade path issues, which will allow early adopters of Drupal 8 to upgrade their site data between beta releases, and should result in a further uptick to Drupal 8 development velocity.

Once we reach zero critical issues, we begin the release candidate phase. Among the areas left to polish up after the Drupal 8 upgrade path issues are bringing external libraries up to date, finalizing documentation, and performance.

Continuous improvements after Drupal 8

Unlike prior versions of Drupal, Drupal 8 has adopted a new release cycle that will provide backwards-compatible "feature" releases every 6 months. I'm extremely excited about this change, as it means we can innovate on the core platform for years to come after release, versus holding all of the new goodies until Drupal 9.

Getting more organizations to contribute

We're now one of the largest Open Source projects in terms of active contributors, if not the largest. That growth requires us to evolve how we work. Over the years, we've grown from a 100% volunteer-driven model to a model where there is a mix of volunteers, contributors who are partially funded by their customers or employers, and contributors who are paid full-time to work on Drupal.

While this shift has big benefits in making Drupal more sustainable, it also means there is increasingly more corporate participation and influence. One of our biggest challenges for 2015 is to figure out how we can get more commercial organizations to step up to take on more of the shared maintenance of Drupal, while at the same time respecting the needs and desires of our entire community.

Improving our governance model

There has also been a lot of talk about optimizing the way in which we work, to make it more explicit who is responsible for what, how decisions are made, and so on. This year I plan to work with others in the community to revamp Drupal core's governance model to bring more transparency and appoint additional leadership.

Conclusion

Overall, I'm thrilled with the progress that the Drupal core contributors have made in 2014, and want to extend an enormous thanks to each and every one of our 2,500 contributors who have brought us this far. I'm feeling very positive about our momentum going into 2015.

Drupal 8 will set a new standard for ease of use, power and flexibility, and will have something for everyone to love. Without a doubt, Drupal 8 will take our community to new heights. Let's do this!

by Dries at January 14, 2015 11:43 AM

Dieter Adriaenssens

What I learned from 365 days of contributing to Open Source projects

Today I reached a 365 day commit streak on GitHub, with over 2300 commits to Open Source projects in that period. In this post I'd like to share my experiences during this past year and the lessons I learned.

Github contribution overview

It started one year ago, on January 14th, 2014, the day I returned from a two-week trip to Malaysia. I didn't take a laptop or smartphone on that trip, in order to be 'disconnected' from my PC, the internet and e-mail for a while. I've made a habit of having an 'unplugged' holiday about once a year.
I had a streak of over 100 days running before I left on holiday, and had reached 2000+ commits during that year, so I was eager to start committing again, to keep the running total of commits close to 2000 and to start building a commit streak again.
I don't remember whether, at that time, I had a clear goal for the total number of consecutive days of committing to Open Source projects I aspired to reach. I guess at first I wanted to match the previous record and see how much further I could get.

At the end of June, I got inspired by Josh (@dzello) from Keen.io who had pledged to commit to Open Source projects for 365 days in a row. I had an extensive streak going on at that time, so I decided to try and reach a 365 day streak as well.

Until then it had been fairly easy to keep the streak going. Doing at least one commit every day is not that hard, and I usually did more than one. I was working on my Android app and I started working on what I called 'my side project' at first. In the last few months my focus has shifted to that 'side project', making it my main project basically, but that's a different story.
So I had plenty to do and I was well motivated to work on my projects regularly, so it was easy to keep committing daily.

Until summer I didn't do any long trips, so I was either at home at some point during the day or had my laptop with me (when going to Berlin for LinuxTag, for example) so finding a few minutes every day to do at least one commit wasn't a big challenge. Although sometimes, it required some planning.
If I had a social, cultural or sports activity planned in the evening after work, I planned for an 'easy' commit on those days, usually cleaning up some code, fixing coding style, writing a small test, improving documentation or doing a few translations. I kept the bigger work, implementing a new feature, figuring out an API or framework, or doing some bigger refactoring for days when I had more time available.

Then summer arrived and I planned to go on a climbing trip for a week. I doubted whether I would have the time and opportunity to keep committing during the trip, but in the end I decided to take my laptop and give it a try. That week I worked on a bash script to crop and scale Android screenshots, something I could easily develop and test without access to the internet. On some days I barely managed to contribute, finishing the commit only a few minutes before the end of the day, but in the end I managed to keep the streak going.

With this hurdle taken I imagined reaching the 365 day goal was achievable. I didn't know back then I was to go on a few long weekends to Fontainebleau during the Fall, but again I took my laptop with me on the trip and found time to contribute some code.

At the end of October I went to California for the Google Summer of Code 10-year reunion, where I had the opportunity to meet Josh, and Justin, also from Keen.io. By that time Josh had published a blog post explaining why he had ended his commit streak at Burning Man, and why he wasn't planning on starting the streak again.

I read his post, and it got me thinking. Is such a continuous streak a good thing? Sure, it is a motivation in its own right: you want to keep going because you don't want to break the streak, since you'd have to start all over and commit for a long time to reach the same number of consecutive days.
Some days you don't have time, or you've had a rough day and don't feel like turning on your PC to do the required commit for the day; you'd rather do something else and forget about that ongoing streak.

But I decided to go for it and finish the pledge to reach 365 days of committing to Open Source projects. I found out that my motivation wasn't only in keeping the streak going, but mostly in making progress with my projects.

Now that I've reached the 365 day goal I've come to some conclusions:

Many thanks to Josh for inspiring me to reach these conclusions, and to all who supported me during the past year. Good luck to everyone striving for a goal, be it a commit streak or something else.

Kudos to those who still have a very long commit streak going!

by Dieter Adriaenssens (noreply@blogger.com) at January 14, 2015 10:11 AM

Frank Goossens

Music from Our Tube; Milosh – You Make Me Feel

Slightly sad clickety-clackety electronica by Milosh, a classically trained Canadian cellist who is also one half of Rhye:

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at January 14, 2015 06:38 AM

January 13, 2015

FOSDEM organizers

Third set of FOSDEM 2015 speaker interviews

Get a sneak peek of FOSDEM 2015 in our third round of interviews with some main track speakers:
- Benjamin Berg: SDAPS: Surveying Made Easy
- Gilles Van Assche, Joan Daemen and Michaël Peeters: Keccak and SHA-3: code and standard updates
- Jonas Öberg: Automating Attribution: Giving credit where credit is due
- Till Tantau: Algorithmic Graph Drawing in TikZ
- Yorik van Havre: FreeCAD: a hackable design platform
You can find our complete set of interviews on our interviews page, showing the diversity of our main track speakers: about languages, performance, time, typesetting, hardware, security and much more. Stay tuned for more food for…

January 13, 2015 03:00 PM

January 12, 2015

Mattias Geniar

Enable HTTP/2 support in Chrome

The post Enable HTTP/2 support in Chrome appeared first on ma.ttias.be.

I actually thought HTTP/2 was enabled by default in Chrome, but as it turns out there's still a special flag (set via a GUI) you need to enable.

Go to the chrome://flags page in your Chrome browser (the browser won't allow direct links to it). Search for HTTP/2 and find the option called Enable SPDY/4.

chrome_enable_http2_spdy

Next, restart Chrome and SPDY/4, aka HTTP/2, will be enabled.

If you're running Chrome's nightly "canary" build, you can view the protocol used in the Network tab. Even though the SPDY/4 option is disabled by default, the before and after protocols are exactly the same: h2-14.

Before:

http2_chrome_disabled

After enabling SPDY, HTTP/2:

http2_chrome_enabled

This'll need a bit more testing to be sure.


by Mattias Geniar at January 12, 2015 09:28 PM

Joram Barrez

Activiti : Looking Back At 2014

Our general manager (aka "my boss"), Paul Holmes-Higgin, has noted down his thoughts about Activiti's past year. You can read it here: Alfresco Activiti Shakes BPM World. Adding my own personal thoughts: 2014 was awesome! We did *a lot* of work on Activiti. Countless hours spent brainstorming and hacking away … but […]

by Joram Barrez at January 12, 2015 06:09 PM

Xavier Mertens

IoT : The Rise of the Machines

[This blogpost has also been published as a guest diary on isc.sans.org]

The Rise of the Machines

Our houses and offices are more and more infested with electronic devices embedding a real computer with an operating system and storage. They are connected to network resources for remote management, statistics or data polling. This is called the "Internet of Things" or "IoT". My home network is hardened and any new (unknown) device connected to it receives an IP address from a specific range which has no connectivity to other hosts or the Internet, but its packets are logged. The goal is to detect suspicious activity like data leaks or unexpected firmware updates. The latest toy, bought yesterday, is a Smart Plug from Supra-Electronics. This device allows you to control a power plug via your mobile device and track its energy consumption with nice stats. I had a very good opportunity to buy one at a very low price (25€). Let's see what's inside…

The documentation mentions a setup procedure and management via a mobile device (with a free app for iOS or Android), but the first reflex is to scan the box. Interesting: a web server as well as a telnet server are waiting for packets. Let's try common credentials like admin/admin and…

$ telnet 192.168.254.225
Trying 192.168.254.225...
Connected to 192.168.254.225.
Escape character is '^]'.

(none) login: admin
Password:

BusyBox v1.12.1 (2014-07-31 06:32:52 CEST) built-in shell (ash)
Enter 'help' for a list of built-in commands.
#

Immediately after the boot sequence, the device started to try to communicate with remote hosts:

Network Traffic

(Click to enlarge)

Amongst DNS requests and NTP synchronization, a lot of traffic was generated to different IP addresses over UDP/10001, with the same packet being sent to each host. The payload was a block of 60 bytes:

UDP Payload

I was not able to decode the content of this payload; please comment if you recognize any patterns. The device also performs a regular connectivity check via a single ICMP ECHO packet sent to www.google.com (every 5 minutes). This network traffic is generated by a process called RDTServer:

# ps
 PID USER       VSZ STAT COMMAND
   1 admin     1400 S    init 
   2 admin        0 SWN  [ksoftirqd/0]
   3 admin        0 SW<  [events/0]
   4 admin        0 SW<  [khelper]
   5 admin        0 SW<  [kthread]
   6 admin        0 SW<  [kblockd/0]
   7 admin        0 SW<  [kswapd0]
   8 admin        0 SW   [pdflush]
   9 admin        0 SW   [pdflush]
  10 admin        0 SW<  [aio/0]
  11 admin        0 SW   [mtdblockd]
  18 admin     1084 S    nvram_daemon 
  19 admin     1612 S    goahead 
  20 admin      872 R    RDTServer 
  24 admin     1400 R    telnetd 
  26 admin      872 S    RDTServer 
  27 admin      872 S    RDTServer 
  33 admin      872 S    RDTServer 
  34 admin      872 S    RDTServer 
  35 admin      872 S    RDTServer 
  36 admin      872 S    RDTServer 
  53 admin     1400 S    /bin/sh 
 238 admin        0 SW   [RtmpCmdQTask]
 239 admin        0 SW   [RtmpWscTask]
 366 admin     1400 S    -sh 
 505 admin     1400 R    ps 
 678 admin     1400 S    udhcpd /etc/udhcpd.conf 
 1116 admin    1396 S    udhcpc -i apcli0 -s /sbin/udhcpc.sh -p /var/run/udhcp
 1192 admin     872 S    RDTServer 
 1207 admin     772 S    ntpclient -s -c 0 -h ntp.belnet.be -i 86400 
#

I grabbed a copy of the RDTServer binary (MIPS), and running the "strings" command against the file revealed interesting stuff. The IP addresses used were found in the binary:

IP FQDN NetName Country
50.19.254.134 m1.iotcplatform.com AMAZON-EC2-8 US
122.248.234.207 m2.iotcplatform.com AMAZON-EC2-SG Singapore
46.137.188.54 m3.iotcplatform.com AMAZON-EU-AWS Ireland
122.226.84.253 JINHUA-MEIDIYA-LTD China
61.188.37.216 CHINANET-SC China
220.181.111.147 CHINANET-IDC-BJ China
120.24.59.150 m4.iotcplatform.com ALISOFT China
114.215.137.159 m5.iotcplatform.com ALISOFT China
175.41.238.100 AMAZON-AP-RESOURCES-JP Japan
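
These indicators were extracted by hand, but as a rough illustration (not the exact commands used in this analysis), a strings/grep pass along these lines would surface both the dotted-quad addresses and the iotcplatform.com hostnames embedded in the binary:

# List printable strings and keep anything that looks like an IPv4 address
# or references iotcplatform.com
strings RDTServer | grep -E '([0-9]{1,3}\.){3}[0-9]{1,3}|iotcplatform\.com' | sort -u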

Seeing packets sent to China is often suspicious! The domain name iotcplatform.com belongs to ThroughTek, a company specializing in IoT and M2M ("Machine to Machine") connection platforms:

Domain Name: IOTCPLATFORM.COM
Registry Domain ID: 1665166563_DOMAIN_COM-VRSN
Registrar WHOIS Server: whois.godaddy.com
Registrar URL: http://www.godaddy.com
Update Date: 2014-07-09T11:44:15Z
Creation Date: 2011-07-04T08:50:36Z
Registrar Registration Expiration Date: 2016-07-04T08:50:36Z
Registrar: GoDaddy.com, LLC
Registrar IANA ID: 146
Registrar Abuse Contact Email: abuse@godaddy.com
Registrar Abuse Contact Phone: +1.480-624-2505
Registry Registrant ID: 
Registrant Name: Charles Kao
Registrant Organization: 
Registrant Street: 4F., No.221, Chongyang Rd.,
Registrant City: Taipei
Registrant State/Province: Nangang District
Registrant Postal Code: 11573
Registrant Country: Taiwan
Registrant Phone: +886.886226535111
Registrant Phone Ext:
Registrant Fax: 
Registrant Fax Ext:
Registrant Email: justin_yeh@tutk.com

In fact, the IOTC platform is a service developed by ThroughTek to establish P2P communications between devices. I read the documentation provided with the device as well as all the pages on the website, and there is no mention of this service. Manufacturers should include some technical documentation about the network requirements (e.g. to download firmware updates). In this case it's not a major security issue, but this story reinforces what we already know (and fear) about IoT: those devices have weak configurations and lack visibility/documentation about their behavior. Take care when connecting them to your network. A best practice is to inspect the traffic they generate once they are online (DNS requests, HTTP(S) requests or any other protocol).
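
As a starting sketch for that kind of inspection (the interface name and the plug's IP address below are taken from this example setup and will differ on your network):

# Record everything the smart plug sends and receives, for later analysis
tcpdump -i eth0 -nn -s0 -w /tmp/smartplug.pcap host 192.168.254.225

# Or watch the UDP/10001 payloads live, in hex and ASCII
tcpdump -i eth0 -nn -s0 -X host 192.168.254.225 and udp port 10001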

by Xavier at January 12, 2015 04:42 PM

Fabian Arrotin

Provisioning quickly nodes in a SeaMicro chassis with Ansible

Recently I had to quickly test and deploy CentOS on 128 physical nodes, just to validate the hardware and verify that all currently "supported" CentOS releases could be installed quickly when needed. The interesting bit is that it was a completely new infrastructure, without any traditional deployment setup in place, so obviously, as sysadmins, we immediately think of PXE/kickstart, which is trivial to set up. It was the first time I had to "play" with SeaMicro devices/chassis though (the SeaMicro 15K fabric chassis, to be precise), so I first had to understand how they work. One thing to note is that those SeaMicro chassis don't provide a remote VGA/KVM feature (but who cares, as we'll automate the whole thing, right?); instead they provide either CLI (ssh) or REST API access to the management interface, so that you can quickly reset/reconfigure a node, change a VLAN assignment, and so on.

It's no secret that I like to use Ansible for ad-hoc tasks, and I thought it would (again) be a good tool for this quick job. If you have used Ansible already, you know that you have to declare nodes and variables (the variables are optional, but really useful) in the inventory (if you don't gather the inventory from an external source). To configure my PXE setup (and so be able to reconfigure it when needed) I obviously needed to get the MAC addresses of all 64 nodes in each chassis, decide that hostnames would be n${slot-number}., etc. (and yes, in SeaMicro terms slot 1 = 0/0, slot 2 = 1/0, and so on ...)

The following quick-and-dirty bash script lets you do that in a couple of seconds (ssh into the chassis, gather the information, and fill in some variables in my Ansible host_vars/${hostname} files):

#!/bin/bash
ssh admin@hufty.ci.centos.org "enable ;  show server summary | include Intel ; quit" | while read line ;
  do
  seamicrosrvid=$(echo $line |awk '{print $1}')
  slot=$(echo $seamicrosrvid| cut -f 1 -d '/')
  id=$(( $slot + 1)); ip=$id ; mac=$(echo $line |awk '{print $3}')
  echo -e "name: n${id}.hufty.ci.centos.org \nseamicro_chassis: hufty \nseamicro_srvid: $seamicrosrvid \nmac_address: $mac \nip: 172.19.3.$ip \ngateway: 172.19.3.254 \nnetmask: 255.255.252.0 \nnameserver: 172.19.0.12 \ncentos_dist: 6" > inventory/n${id}.hufty.ci.centos.org
done

Nice, so we have all the ~/ansible/hosts/host_vars/${inventory_hostname} files in one go (I'll let you add ${inventory_hostname} to the ~/ansible/hosts/hosts.cfg file with the same script, modified to your needs).
For the next step, we assume that we already have dnsmasq installed on the "head" node, and that we also have httpd set up to serve the kickstart files to the nodes during installation.
So our basic Ansible playbook looks like this:

---
- hosts: ci-nodes
  sudo: True
  gather_facts: False

  vars:
    deploy_node: admin.ci.centos.org
    seamicro_user_login: admin
    seamicro_user_pass: obviously-hidden-and-changed
    seamicro_reset_body:
      action: reset
      using-pxe: "true"
      username: "{{ seamicro_user_login }}"
      password: "{{ seamicro_user_pass }}"

  tasks:
    - name: Generate kickstart file[s] for Seamicro node[s]
      template: src=../templates/kickstarts/ci-centos-{{ centos_dist }}-ks.j2 dest=/var/www/html/ks/{{ inventory_hostname }}-ks.cfg mode=0755
      delegate_to: "{{ deploy_node }}"

    - name: Adding the entry in DNS (dnsmasq)
      lineinfile: dest=/etc/hosts regexp="^{{ ip }} {{ inventory_hostname }}" line="{{ ip }} {{ inventory_hostname }}"
      delegate_to: "{{ deploy_node }}"
      notify: reload_dnsmasq

    - name: Adding the DHCP entry in dnsmasq
      template: src=../templates/dnsmasq-dhcp.j2 dest=/etc/dnsmasq.d/{{ inventory_hostname }}.conf
      delegate_to: "{{ deploy_node }}"
      register: dhcpdnsmasq

    - name: Reloading dnsmasq configuration
      service: name=dnsmasq state=restarted
      run_once: true
      when: dhcpdnsmasq|changed
      delegate_to: "{{ deploy_node }}"

    - name: Generating the tftp configuration boot file
      template: src=../templates/pxeboot-ci dest=/var/lib/tftpboot/pxelinux.cfg/01-{{ mac_address | lower | replace(":","-") }} mode=0755
      delegate_to: "{{ deploy_node }}"

    - name: Resetting the Seamicro node[s]
      uri: url=https://{{ seamicro_chassis }}.ci.centos.org/v2.0/server/{{ seamicro_srvid }}
           method=POST
           HEADER_Content-Type="application/json"
           body='{{ seamicro_reset_body | to_json }}'
           timeout=60
      delegate_to: "{{ deploy_node }}"

    - name: Waiting for Seamicro node[s] to be available through ssh ...
      action: wait_for port=22 host={{ inventory_hostname }} timeout=1200
      delegate_to: "{{ deploy_node }}"

  handlers:
    - name: reload_dnsmasq
      service: name=dnsmasq state=reloaded

The first thing to notice is that you can use Ansible to provision nodes that aren't running yet: people think that Ansible is only for interacting with already provisioned and running nodes, but by providing useful information in the inventory, and by delegating actions, we can already start "managing" those yet-to-come nodes.
All the templates used in that playbook are really basic ones, nothing "rocket science". For example, the only difference in the kickstart .j2 template is that we inject Ansible variables (for network and storage):

network  --bootproto=static --device=eth0 --gateway={{ gateway }} --ip={{ ip }} --nameserver={{ nameserver }} --netmask={{ netmask }} --ipv6=auto --activate
network  --hostname={{ inventory_hostname }}
<snip>
part /boot --fstype="ext4" --ondisk=sda --size=500
part pv.14 --fstype="lvmpv" --ondisk=sda --size=10000 --grow
volgroup vg_{{ inventory_hostname_short }} --pesize=4096 pv.14
logvol /home  --fstype="xfs" --size=2412 --name=home --vgname=vg_{{ inventory_hostname_short }} --grow --maxsize=100000
logvol /  --fstype="xfs" --size=8200 --name=root --vgname=vg_{{ inventory_hostname_short }} --grow --maxsize=1000000
logvol swap  --fstype="swap" --size=2136 --name=swap --vgname=vg_{{ inventory_hostname_short }}
<snip>

The DHCP step isn't mandatory, but at least in that subnet we only serve DHCP to "already known" MAC addresses, retrieved from the Ansible inventory (and previously fetched directly from the SeaMicro chassis):

# {{ name }} ip assignement
dhcp-host={{ mac_address }},{{ ip }}

Same thing for the pxelinux TFTP config file:

SERIAL 0 9600
DEFAULT text
PROMPT 0
TIMEOUT 50
TOTALTIMEOUT 6000
ONTIMEOUT {{ inventory_hostname }}-deploy

LABEL local
        MENU LABEL (local)
        MENU DEFAULT
        LOCALBOOT 0

LABEL {{ inventory_hostname}}-deploy
        kernel CentOS/{{ centos_dist }}/{{ centos_arch}}/vmlinuz
        MENU LABEL CentOS {{ centos_dist }} {{ centos_arch }}- CI Kickstart for {{ inventory_hostname }}
        {% if centos_dist == 7 -%}
	append initrd=CentOS/7/{{ centos_arch }}/initrd.img net.ifnames=0 biosdevname=0 ip=eth0:dhcp inst.ks=http://admin.ci.centos.org/ks/{{ inventory_hostname }}-ks.cfg console=ttyS0,9600n8
	{% else -%}
        append initrd=CentOS/{{ centos_dist }}/{{ centos_arch }}/initrd.img ksdevice=eth0 ip=dhcp ks=http://admin.ci.centos.org/ks/{{ inventory_hostname }}-ks.cfg console=ttyS0,9600n8
 	{% endif %}

The interesting part is the one I needed to spend more time on: as said, it was the first time I had to play with SeaMicro hardware, so I had to dive into the documentation (which I *always* do, RTFM FTW!) and understand how to use their REST API, but once that was done, it was a breeze. Ansible doesn't provide a native module for SeaMicro by default, but that's why REST exists, right? And thankfully, Ansible has a native uri module, which we use here. The only thing I had to spend more time on was understanding how to properly construct the body, but declaring it in the YAML file as a variable and then converting it on the fly to JSON (with the magical body='{{ seamicro_reset_body | to_json }}') was the way to go, and it reads as pretty much self-explanatory now.
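
For the record, that uri task boils down to a plain REST call; outside of Ansible, the same reset could be fired with curl, roughly like this (a sketch based on the playbook variables above: the chassis name "hufty" and server id 0/0 come from the earlier inventory example, and the password is of course not the real one):

# -k skips TLS certificate verification, assuming the chassis presents a self-signed certificate
curl -k -X POST \
     -H 'Content-Type: application/json' \
     -d '{"action": "reset", "using-pxe": "true", "username": "admin", "password": "obviously-hidden-and-changed"}' \
     https://hufty.ci.centos.org/v2.0/server/0/0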

And here we go: we call that Ansible playbook and suddenly 128 physical machines are being installed (and reinstalled with different CentOS versions - 5, 6, 7 - and arches - i386, x86_64).

Hope this helps if you have to interact with a SeaMicro chassis from within an Ansible playbook too.

by fabian.arrotin at January 12, 2015 02:19 PM

Lionel Dricot

The gathering of material goods

This is post 4 of 4 in the series "La consommation cueillette" (consumption as gathering)

When a reader sends me a voluntary payment of ten euros or so, I take great pride in it and draw real motivation from it. I also feel like I am accomplishing something important, useful, necessary.

After all, if people are willing to pay me tens of euros for my work, isn't that legitimate?

Absolutely every merchant reasons this way. My partner, who sells Bubble Teas at a perfectly traditional, non-voluntary price, has exactly the same thought when she has had a good day.

From this we can deduce that even the worst industrialists think this way. The tobacco company to which you give tens of euros not per year but per week? It feels encouraged by your money. The industrial farmer raising hormone-fed cattle? He feels useful thanks to your choice of a big, cheap, shrink-wrapped rib steak.

Can consumption-as-gathering improve the situation?

 

Step 1: the gathering

So I decided to maintain a list of my purchase desires. This list does not include recurring daily purchases or cultural goods, but every other desire: a new bicycle, an electronic gadget, a subscription to a web service, an accessory, some equipment, clothes. In short, just about everything.

Personally, I keep it in an Evernote note.

When a desire appears, I write it down. If needed, I spend some research time refining it: finding the exact model that would suit me best, possible options, accessories, and so on.

Next to each desire, I note the total price it will cost me as well as, and this is very important, the reason why I want it. Writing down the reason sometimes turns out to be harder than expected. I also relate the reason directly to the price: am I willing to pay that much to satisfy this particular need, regardless of the object? I also add to my wish list the free services or artists I would like to support.

An improvement I do not yet fully apply is to add, on top of that, a note stating who the money would go to.

 

Step 2: consumption

Having this list is a real asset for avoiding impulse purchases. When a desire comes to me, I open my list and compare all my other desires in the same price range.

I then realize that I am about to spend a certain amount on a frivolous purchase, while the same amount would let me buy something I have wanted for several months and need more and more.

So I add this new impulsive desire to my list and, sometimes, I spend the money anyway, but on an earlier, confirmed desire.

Often, some desires simply get removed after a few weeks, for no particular reason.

 

In the end

With a very simple tool, a wish list, I have managed to drastically reduce my impulse purchases. When someone asks me what I would like as a present, I also always have a useful and relevant idea at hand.

I have taken control of my consumption and, completely painlessly, I have discovered that I spend a lot less.

But I have also discovered a certain feeling of wealth! Indeed, the total of the prices on my wish list represents the amount needed to fulfil all my desires, all my needs. And, surprise, that amount is rather low.

As a result, I sometimes feel rich. I know that, if I want to, I can afford what I desire. I also find myself more often paying voluntary prices or supporting the services I use. For instance, I had added the purchase of a pro subscription to the Pocket service. I did not need it, as the pro features are of no use to me. But I asked myself: "If this service were offered to me for free, would I want to support it?". The answer suddenly seemed obvious…

Some criticize the method for lacking spontaneity. Yet it is quite the opposite: I allow myself absolutely any desire without hesitating. An idea, even a crazy one? I add it to the wish list, it costs nothing! Besides, we all operate, more or less consciously, with wish lists. If you do not take the time to structure yours, others will do it for you. What you think is spontaneous is often just a desire slyly instilled into your list by marketing or advertising.

By separating the gathering from the consumption, I make a strong political statement, I save money and, against all expectations, I feel satisfied and fulfilled. Surprising, isn't it?

 

Photo by Igal Kleiner.

Thank you for taking the time to read this pay-what-you-want post. Feel free to support me with a few milliBitcoins, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.


by Lionel Dricot at January 12, 2015 01:00 PM

January 11, 2015

Mattias Geniar

Audi’s Cruise Control “>=” Bug

The post Audi’s Cruise Control “>=” Bug appeared first on ma.ttias.be.

I noticed a "bug" in the way Audi handles its Cruise Control options. Cruise control is only meant to be used once you drive "30 kilometers per hour or more". However, the controls only allow you to activate cruise control at 31km/h. Once the cruise control is active, you can dial it down to 30km/h again.

It seems like there are two independent checks happening, each determining how to handle the user input.

The code snippets below are my own; I do not have access to the real Audi on-board controller source code. They are here for demonstration purposes only.

The speed limit seems to be a constant.

const CRUISE_CONTROL_MINIMUM = 30;

But the handling differs. For instance, when you first activate cruise control, this appears to be the check in the function/method/controller (whatever it may be) that validates the input.

if (CURRENT_SPEED > CRUISE_CONTROL_MINIMUM) {
  /* Current speed exceeds the CRUISE_CONTROL_MINIMUM (more than 30km/h), it's allowed */
  CURRENT_CRUISE_CONTROL = CURRENT_SPEED;
}

This check makes sure you can only activate cruise control at 31km/h or more.

cruise_control_audi_31km

If you want to change the speed configured on the car, the following check appears to happen.

if (CURRENT_CRUISE_CONTROL >= CRUISE_CONTROL_MINIMUM) {
  /* Cruise control is set to CRUISE_CONTROL_MINIMUM or higher (30km/h or more), it's allowed */
  ...
}

This allows you to set the cruise control speed back to 30km/h.

cruise_control_audi_30km

Notice the difference between ">" (greater than) and ">=" (greater than or equal) in the comparison. Such a check allows you to set the cruise control to 30km/h, but only after you've first enabled it at 31km/h or more.

This seems like something QA should have caught. Unless there are strict car regulations that would only allow cruise control at more than 30km/h? Sounds silly.

If you live in a country with way too many 30km/h zones, you notice this. And it bugs me. Pun intended.


by Mattias Geniar at January 11, 2015 06:39 PM

Lionel Dricot

The cemetery wall


Strolling along the old brick wall that separates the humans' cemetery from the robots', the walker will find a commemorative plaque engraved with a femur crossed with a spring. On it, in French and in binary, one can read: "To Alfred Janning, who could not choose".

 

This story is a "fifty", a story of exactly 50 words. It was inspired by Saint Epondyle's Fifty Cyberpunk contest. Photo by fauxto_digit.

Thank you for taking the time to read this pay-what-you-want post. Feel free to support me with a few milliBitcoins, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.


by Lionel Dricot at January 11, 2015 10:53 AM

January 10, 2015

Mattias Geniar

View the HTTP/SPDY/HTTP2 Protocol in Google Chrome

The post View the HTTP/SPDY/HTTP2 Protocol in Google Chrome appeared first on ma.ttias.be.

A cool little improvement just landed in Chrome Canary (the nightly build of Chrome), version 41, that allows you to see which HTTP protocol was used to retrieve each resource in the Network tab of the inspector.

The current "stable" version of Chrome is 39, so it'll take a few weeks before version 41, which contains this feature, becomes generally available.

To use it, you first need to enable it: open the DevTools by right-clicking any page and choosing "Inspect Element". Go to the Network tab, right-click the column headers and enable the "Protocol" column.

chrome_enable_protocol

Once enabled, refresh the page and it'll show you which protocol each resource is using.

chrome_view_protocols

A quick reminder;

This new column will definitely be useful when HTTP/2 becomes mainstream.
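
If you want to double-check which protocols a server advertises without relying on the browser, an NPN-capable OpenSSL build (1.0.1 or newer) can ask the server directly. The command below is only a sketch, and the protocol list in the example output is illustrative and depends on the server you query.

# Ask the server which protocols it advertises via NPN
openssl s_client -connect www.google.com:443 -nextprotoneg '' < /dev/null 2>/dev/null | grep 'Protocols advertised'
# Example output: Protocols advertised by server: h2-14, spdy/3.1, http/1.1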


by Mattias Geniar at January 10, 2015 07:02 PM

Lionel Dricot

Let's eradicate the source of terrorism!


Let's not kid ourselves, and let's set political correctness aside: it is by now obvious that most terrorists come from a clearly identified part of the population.

Of course, the majority of the individuals who make up that group do not become terrorists. But this population nevertheless remains the breeding ground, the cradle that allows the horror to grow and to exist.

Today, I think it is essential to open our eyes and take measures to eradicate this part of the population, to make sure it can no longer exist in our countries. We have nothing to expect from politicians or the state. We can only count on ourselves. And we have the means. Today, individually, we can take measures, we can fight to shrink the part of the population that gives birth to terrorism: the social class that is humanly poor and poorly educated.

 

The first reflex

Our first reflex after an attack is of course to hate, to wish for death. We lump people together without discernment. For instance, if the vagaries of history mean that there are proportionally more Arabs among the poor and poorly educated class than among the wealthy class, we will associate Arabs with terrorism, forgetting that poverty and intellectual misery are what is really at fault, and that correlation does not imply causation. And that, perhaps, Arabs are not the majority of terrorists but merely the ones the media talk about the most.

Then, still caught up in the emotion of the attack, we will want to defend ourselves, to take revenge, to protect ourselves. In the rush, we will take measures that will be, at best, useless against terrorism.

Because a single attempted attack, even a completely botched one, is enough for them to terrify. A single death is enough for them to succeed.

Preventing every terrorist attack by force is therefore illusory and dangerous. Defending ourselves with the terrorists' weapons means accepting war, doing them the honour of recognizing them as enemies, lowering ourselves to their level.

Carrying a weapon means building a world where owning a weapon is necessary. Supporting the death penalty means building a world where killing is acceptable. Encouraging surveillance means building a world of insecurity where surveillance is indispensable.

Paradoxically, by fighting terrorists head-on, we increase insecurity and violence. We cooperate with them in building the world they are trying to create. We prove them right.

 

Let's offer humanity

To be able to kill in cold blood, with premeditation and without discernment, you must have lost every notion of humanity. You must have learned to hate humans, to detest them. You must never have received any humanity yourself.

Growing up in hatred, never having been recognized, congratulated, admired or loved by other humans, it is so easy to lose all regard for them, to take refuge in the first superhuman superstition that comes along and then use it as a pretext to vent one's rage.

We are all guilty of forgetting to offer humanity to an entire layer of the population. We indoctrinate it into consumption, we hold up a false image of obscene luxury. At the first misstep, we bully it and blame it for all our ills. We who live comfortable, luxurious lives accuse those who struggle to survive of not making an effort and of being to blame for the fact that we have a little less luxury this month.

How many lives would have been saved if every terrorist had, at some point in his life, met a single person who told him: "You are a good person. You have talent. You are unique. You are not an adjective, a culture, a bank account or a superstition. You are a human being and you do not have to compare yourself to others."

 

Let's teach how to learn

Filled with hatred toward humanity, envious of a fantasized upper class, the uneducated individual also finds himself without meaning in his own life. He tries to forget himself in alcohol and drugs, until the day someone comes along and offers him a ready-made meaning. A goal. An objective that is compatible with his hatred.

So stop pestering us with your values. They are no better than anyone else's. If it is acceptable to choose a prefabricated meaning of life, then do not be surprised if some people choose one other than yours. By setting up your meaning of life, your values, as an absolute ideal, you justify others doing exactly the same with theirs.

We must instead teach people to build an individual meaning, to refuse ready-made solutions and group values. Someone who has read Proust, Hugo, or King and Rowling will see in the Bible and the Quran just one more book from which he may draw some lessons while rejecting certain parts. He will understand the inanity of a nationalist or separatist manifesto.

Someone who has never read, awestruck by the power of writing, intoxicated by the novelty of learning, will never want to read anything else for fear of losing that initial magic. He will radicalize and base his life on one single book or one single idea. Never having learned to be critical, he will abhor those who are.

How many lives would have been saved if, before meeting a manipulator, future terrorists had learned to read and to learn, to build their own ideas, to think critically?

 

Let's not put the fight off until tomorrow!

Unfortunately, for some it is already too late. We will see more attacks. Tomorrow's terrorists have already been recruited. But perhaps we can spare the generation that follows us? By refusing an armed, surveilled world. By giving humanity to everyone and by teaching how to learn.

We cannot push the task onto others. We cannot hope for support from politicians or the media. On the contrary, they will fight against us: a world that is doing well does not sell in their business model.

In the end, eradicating human misery and intellectual poverty, and making terrorism disappear, is up to each and every one of us.

 

Further reading:
– 10 concrete tips for changing the world.
– A historical analysis of the importance of writing, the media and religion.

 

[Edit 1]: Added a sentence to clarify that I am not claiming terrorists are mostly Arab (because I do not know, and knowing does not interest me).

 

Photo by Ryan McGuire.

Thank you for taking the time to read this pay-what-you-want post. Feel free to support me with a few milliBitcoins, a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.


by Lionel Dricot at January 10, 2015 02:29 PM

January 09, 2015

Dries Buytaert

Acquia retrospective 2014

As is now a tradition for me, here is my annual Acquia retrospective, where I look back at 2014 and share what is on my mind as we start the new year. I take the time to write these retrospectives not only for you dear reader, but also for myself, because I want to keep a record of the changes we've gone through as a company and how my personal thinking is evolving from year to year. But I also write them for you, because you might be able to learn from my experiences or from analyzing the information provided. If you would like to, you can read my previous retrospectives: 2009, 2010, 2011, 2012 and 2013.

For Acquia, 2014 was another incredible year, one where we beat our wildest expectations. We crossed the major milestone of $100 million USD in annual revenue, the majority of which is recurring subscription revenue. It is hard to believe that 2014 was only our sixth full year as a revenue-generating business.

We've seen the most growth from our enterprise customers, but our number of small and medium size customers has grown too. We helped launch and host some incredible sites last year: from weather.com (a top 20 site) to the Emmys. Our efforts in Europe and Asia-Pacific are paying off; our EMEA business grew substantially, and the Australian government decided to switch the entire government to Drupal and the Acquia Platform.

We hired 233 people in 2014 and ended the year with 575 employees. About 25% of our employees work from home. The other 75% work from offices around the world; Burlington MA (US), Portland OR (US), Washington DC (US), Paris (France), Reading (United Kingdom), Ghent (Belgium), Singapore, Delhi (India), Brisbane (Australia) and Sydney (Australia). About 75% of our employees are based in the United States. Despite our fast growth rate in staff, recruiting remains a key challenge; it's hard to hire as fast as we do and maintain the high bar we've set for ourselves in terms of talent and commitment.

We raised venture funding twice in 2014: a $50MM series F round led by New Enterprise Associates (NEA) followed by Amazon investing an undisclosed amount of money in our business. It's not like Tom Erickson and I enjoy raising money, but building and expanding a sales and marketing team is notoriously difficult and requires big investments. At the same time, we're building and supporting the development of multiple products in parallel. Most companies only build one product. We're going after a big dream to become the preferred platform for what has been called the "pivot point of many enterprise tech stacks" -- the technologies that permit organizations to deliver on the promises of exceptional digital customer experiences from an agile, open, resilient platform. We are also competing against behemoths. We can't show up to a gunfight with a knife, so to speak.

Building a digital platform for the enterprise

Digital has changed everything, and more and more organizations need or want to transform into digital-first businesses to stay in step with the preferences of their customers. Furthermore, technology innovations keep occurring at an ever faster and more disruptive pace. No organization is immune to the forces of digital disruption. At Acquia, we help our customers by providing a complete technology platform and the support necessary to support their digital initiatives. The Acquia Platform consists of tools and support for building and managing dynamic digital experiences. It includes Acquia Cloud, which helps developers deliver complex applications at scale, and Acquia Lift, our digital engagement services for bringing greater context to highly personalized experiences. Let me give you an update on each of the major components.

Drupal tools and support

Drupal gives organizations the ability to deliver a unified digital experience that includes mobile delivery, social and commerce. Great inefficiencies exist in most organizations that use a variety of different, disconnected systems to achieve those three essentials. They are tired of having to tie things together; content is important, social is important, commerce is important but connecting all these systems seamlessly and integrating them with preferred applications and legacy systems leads to massive inefficiencies. Companies want to do things well, and more often than not, Drupal allows them to do it better, more nimbly and in a far more integrated framework.

In 2010, we laid out our product vision and predicted more and more organizations would start to standardize on Drupal. Running 20 different content management systems on 20 different technology stacks is both an expensive and unnecessary burden. We've seen more and more large organizations re-platform most of their sites to Drupal and the Acquia Platform. They realize they don't need multiple content management systems for different sites. Great examples are Warner Music and Interscope Records, who have hundreds of sites on Drupal across the organization, resulting in significant cost savings and efficiency improvements. The success of our Acquia Cloud Site Factory solution has been gratifying to witness. According to a research study by Forrester Consulting, which we released late last year, ACSF is delivering a 944% return on investment to its adopters.

After many years of discussion and debate in the Drupal community, we launched the Acquia Certification Program in March 2014. So far, 546 Drupal developers from more than 45 countries have earned certification. The exams focus on real world experience, and the predominant comments we've heard this past year are that the exams are tough but fair. Acquia delivered six times the amount of training in 2014 compared to the previous year, and demand shows no sign of slowing.

Last, but definitely not least, is Drupal 8. We contributed significantly to Drupal 8 and helped it to achieve beta status; of the 513 critical Drupal 8 bugs fixed in 2014, Acquia's Office of the CTO helped fix 282 of them. We also funded work on the Drupal Module Upgrader to automate much of the work required to port modules from Drupal 7 to Drupal 8.

Acquia Cloud

Drupal alone isn't enough for organizations to succeed in this digital-first world. In addition to adopting Drupal, the cloud continues to enable organizations to save time and money on infrastructure management so they can focus on managing websites more efficiently and bringing them to market faster. Acquia customers such as GRAMMY.com have come to depend on the Acquia Cloud to provide them with the kind of rugged, secure scale that ensures when the world's attention is focused on their sites, they will thrive. On a monthly basis, we're now serving more than 33 billion hits, almost 5 billion pageviews, 9 petabytes of data transferred, and logging 13 billion Drupal watchdog log lines. We added many new features to Acquia Cloud in 2014, including log streaming, self-service diagnosis tools, support for teams and permissions, two-factor authentication, new dashboards, improved security with support for Virtual Private Networks (VPNs), an API for Acquia Cloud, and more.

Acquia Lift

As powerful as the Drupal/Acquia Cloud combination may be, our customers demand far more from their digital properties, focusing more and more on optimizing them to fully deliver the best possible experience to each individual user. Great digital experiences have always been personal; today they have to become contextual, intuitively knowing each user and dynamically responding to each user's personal preference from device to location to history with the organization. After two years of development and the acquisition of TruCentric, we launched Acquia Lift in 2014.

It's surprising how many organizations aren't implementing any form of personalization today. Even the most basic level of user segmentation and targeting allows organizations to better serve their visitors and can translate into significant growth and competitive differentiation. Advanced organizations have a single, well-integrated view of the customer to optimize both the experience and the lifetime value of that customer, in a consistent fashion across all of their digital touchpoints. Personalization not only leads to better business results, customers have come to expect it and if they don't find it, they'll go elsewhere to get it. Acquia Lift enables organizations to leverage data from multiple sources in order to serve people with relevant content and commerce based on intent, locations and interests. I believe that Acquia Lift has tremendous opportunity and that it will grow to be a significant business in and of itself.

While our key areas of investment in 2014 were Acquia Cloud and Acquia Lift, we did a lot more. Our Mollom service blocked more than 7.8 billion spam messages with an error rate of only 0.01%. We continue to invest in commerce; we helped launch the new Puma website leveraging our Demandware connector and continue to invest and focus on the integration of content and commerce. Overall, the design and user experience of our products has improved a lot, but it is still an area for us to work on. Expect us to focus more heavily on user experience in 2015.

The results of all our efforts around the launch of the Acquia Platform have not gone unnoticed. In October, Acquia was identified as a Leader in the 2014 Gartner Magic Quadrant for Web Content Management.

The wind is blowing in the right direction

I'm very optimistic about Acquia's future in 2015. I believe we've steered the company to be positioned at the right place at the right time. As more organizations are shifting to becoming digital-first businesses they want to build digital experiences that are more pervasive, more contextual, more targeted, more integrated, and last but not least, more secure.

The consolidation from many individual point solutions to one platform is gaining momentum, although re-platforming is usually a long process. Organizations want the unified or integrated experience that Drupal has to offer, as well as the flexibility of Open Source. It is still time consuming and challenging to create quality content, and I believe there is plenty of opportunity for us and our partners to help with that going forward.

Without a doubt, organizations want to better understand their customers and use data-driven decisions to drive growth. Data is becoming the new product. The opportunity this creates in commerce is massive.

Cloud computing and Software-as-a-Service (SaaS) continues to be on the rise. Cloud is top of mind and the transition away from on-premise solutions is accelerating even as the arguments around security and privacy issues in the cloud continue to be raised. While there is a certain amount of emotion, and sometimes politics, people are beginning to realize that the cloud is usually more secure and more robust against cyber-attacks than traditional on-premise systems.

The promise of Drupal 8, arguably the most significant advance in the evolution of the Drupal software, has me very excited. It is shaping up to be a great release, and I'm confident it will further secure Drupal's reputation among developers, designers, agencies and site managers as the most flexible, powerful content management solution available.

All of this is not to say 2015 will be easy. This is an incredibly exciting and fast-changing space in the world of technology. Acquia is growing in an incredibly fast-paced, dynamic sector and we realize our mission is to help our customers understand how to think ahead to ever more innovation and change. Simplifying our overall messaging and defining ourselves around the Acquia Platform is a significant first step.

Of course, none of this success would be possible without the support of our customers, partners, the Drupal community, the Acquia team, and our many friends. Thank you for your support in 2014, and I look forward to working with you to find out what 2015 will bring!

by Dries at January 09, 2015 08:01 PM

January 08, 2015

Xavier Mertens

Searching for Microsoft Office Files Containing Macro

A quick blog post that popped into my mind after a friend posted a question on Twitter this afternoon: "How to search for Office documents containing macros on a NAS?". It is a good idea to search for such documents, as VBA macros are known to be a popular infection vector and regularly come back in the news, as with the Rocket Kitten campaign.

My first idea was to use the oledump tool developed by Didier Stevens. Without any command line options, this nice tool lists the streams contained in a document, and macros are flagged with an "M", as in the example below. Here the 7th stream is a macro:

# ./oledump.py /tmp/Suspicious/Invoice.doc 
 1:      113 '\x01CompObj'
 2:     4096 '\x05DocumentSummaryInformation'
 3:     4096 '\x05SummaryInformation'
 4:     4096 '1Table'
 5:      444 'Macros/PROJECT'
 6:       41 'Macros/PROJECTwm'
 7: M  12604 'Macros/VBA/ThisDocument'
 8:     3413 'Macros/VBA/_VBA_PROJECT'
 9:      514 'Macros/VBA/dir'
10:     4142 'WordDocument'
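
Automating that check over a whole directory tree could be as simple as wrapping oledump in a find loop and matching on the "M" flag column shown above (a quick sketch, with hypothetical paths and file extensions):

# Flag every .doc/.xls under /mnt/share whose OLE streams include a macro ("M")
find /mnt/share -type f \( -iname '*.doc' -o -iname '*.xls' \) -print0 | \
  while IFS= read -r -d '' f; do
    ./oledump.py "$f" 2>/dev/null | grep -q ': M ' && echo "macro found: $f"
  done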

But this requires grepping for the "M" in the output and adds some complexity. Didier responded on Twitter with another tool he also developed: filescanner.exe. This tool does exactly the job we expect by searching for patterns in a file, but it runs only on Windows! Being a UNIX guy, why not use YARA with a custom signature to achieve this? As Didier said, an Office document containing a macro can be detected by searching for the following patterns: the OLE magic bytes (d0 cf 11 e0) at the very beginning of the file, and the "Attribut" marker (00 41 74 74 72 69 62 75 74 00) somewhere in it.

Let's write a simple YARA rule:

rule office_macro
{
    meta:
        description = "M$ Office document containing a macro"
        thread_level = 1
        in_the_wild = true
    strings:
        $a = {d0 cf 11 e0}
        $b = {00 41 74 74 72 69 62 75 74 00}
    condition:
        $a at 0 and $b
}

Finally, let's mount our NAS share (NFS, CIFS, AFS, …) and use the standard UNIX tool "find" to search for juicy files:

# mkdir /mnt/share
# smbmount //nas.lan/users /mnt/share -o username=user,password=pass,ro
# find /mnt/share -type f -size -1M -exec yara /tmp/office-macro.rule {} \;
office_macro /mnt/share/xavier/tmp/Invoice.doc
office_macro /mnt/share/tmp/TaskManager.xls
...

And you can use the power of the find command to restrict your search to only specific files. If you don’t know YARA, have a look at this powerful tool. Happy scanning!

by Xavier at January 08, 2015 09:25 PM

Mattias Geniar

Recent OpenSSL Security Advisories Are a Good Thing

The post Recent OpenSSL Security Advisories Are a Good Thing appeared first on ma.ttias.be.

The announcement of upcoming security advisories was just finalized, with several new CVEs being announced by OpenSSL. I like this.

Obviously not the CVEs themselves. But the announcement means OpenSSL is far from dead. It means there are security researchers finding bugs and developers fixing them. It means responsible disclosure. This isn't another jab at LibreSSL, but a positive look at OpenSSL.

In May 2014 a donation of €133,000 was made to the OpenSSL project, and in December the same company donated the same amount again. Big cheers!

I know 2 donations a year, from the same company, don't fix the problems with OpenSSL. But I am glad to still see OpenSSL alive and kicking and being actively supported!


by Mattias Geniar at January 08, 2015 06:17 PM

Wouter Verhelst

ExtreMon example

About a month ago, I blogged about extremon. As a reminder, ExtreMon is a monitoring tool that allows you to view things as they are happening, rather than with the ~5 minute delay that munin gives you, while also avoiding the quad-state limitation of Nagios' "good", "bad", "ugly", and "unknown" states. No, they're not really called that. Yes, I know you knew that.

Anyway. In my blog post, I explained how you can set up ExtreMon, and I also set up a fairly limited demo version on my own server. But I have since realized that while it is functional, it doesn't actually show why ExtreMon is so great. In an effort to remedy that, I present you an example of what ExtreMon can do.

Let's start with a screenshot of the ExtreMon console at the customer for which I spent time trying to figure out how to get it up and running:

Click for full sized version. You'll note that even in that full-sized version, many things are unreadable. This is because the ExtreMon console allows one to move around (right mouse button drag for zoom; left mouse button drag for moving around; control+RMB for rotate; center mouse button to reset to default); so what matters is that everything fits on the screen, not whether it is all readable (if you need to read, you zoom).

The image shows 18 rectangles. Each rectangle represents a single machine in this particular customer's HPC cluster. The top three rectangles are the cluster's file servers; the rest are its high performance nodes.

You'll note that the left fileserver has 8 processor cores (top row), 8 network cards (bottom row, left part), and it also shows information on its memory usage (bottom row, small rectangle in the middle) as well as its NFS client and server procedure calls (bottom row, slightly larger rectangles to the right). This file server is the one on which I installed ZFS a while back; hence the large number of disks visible in the middle row. The leftmost disk is the root filesystem (which is an ext4 off a hardware RAID1); the two rightmost "disks" are the PCIe-attached SSDs which are used for the ZFS L2ARC and write log. The other disks in this file server nicely show how ZFS does write load balancing over all its disks.

The second file server has a hardware RAID1 on which it stores all its data; as such, there is only one disk graph there. It is also somewhat more limited in network, as it has only two NICs. It does, however, also have 8 cores.

The last file server has no more than four processor cores; in addition, it also does not have a hardware RAID controller, so it must use software RAID over its four hard disks. This server is used for archival purposes, mostly, since it is insufficient for most anything else.

As said, the other nodes are the "compute nodes", where the hard work is done. Most of these compute nodes have 16 cores each; two have 12 instead. When this particular screenshot was taken, four of the nodes (the ones showing red in their processor graphs) were hard at work; the others seem to have been mostly idling. In addition to the familiar memory, NFS (client only), network, and processor graphs, these nodes also show a "swap space" graph (just below the memory one), which seems fine for most nodes, except for the bottom left one (which shows a few bars that are coloured yellow rather than green).

The green/yellow/red stuff is supposed to represent the "ok", "warning", "bad" states that would be familiar from Nagios. In this particular case, however, where "processor is busy all the time" is actually a wanted state, a low amount of idleness on the part of the processor isn't actually a problem, on the contrary. I did consider, therefore, modifying the ExtreMon configuration so that the processor graphs would not show red when the system was under high load; however, I found that differences in colour like this actually make it easier to see, at a glance, which machines are busy -- and that's one of the main reasons why we wanted to set this up.

If you look carefully, you can find a particular processor core in the graph which shows 100% usage for "idle", "system", and "softirq", at the same time. Obviously that can't be the case, so there's a bug somewhere. Frank seems to believe it is a bug in CollectD; I haven't verified that. At any rate, though, this isn't usually a problem, due to the high update frequency of ExtreMon.

The amount of data that's flowing through ExtreMon is amazing: all told, 2887 data points are shown in this particular screenshot, and then I'm not even counting all the intermediate values, some of which also pass through ExtreMon. Nor am I counting the extra bits which have since been added (this screenshot is a few days old, now, and I'm still finetuning things). Yet even so, ExtreMon manages to update those values once every few seconds, in the worst case. As a result, the display isn't static for a moment, constantly moving and updating data so that what you see is never out of date for more than a second or two.

Awesome.

January 08, 2015 12:47 PM

Dries Buytaert

The future of software is data-driven

Marc Andreessen famously said that software is eating the world. While I certainly agree with Marc that software companies are redefining our economies, I believe that much of that technological shift is being driven by data. So is the value of a business in the data or is it in the software? I believe data is eating the world because the value is increasingly more in the data and not the software. Let's investigate why.

Data-driven experiences

Netflix provides a great example of a data-driven customer-centric company. By introducing streaming video, their software "ate" the traditional DVD business. But Netflix soon realized their future wasn't in the medium of delivery -- it was in the wealth of data generated simply by people using the service. The day-to-day data generated by Netflix viewers provides a crucial ingredient to competing in the marketplace and defining the company's mission: improving the quality of the service.

To that end, Netflix uses passive data -- the information gathered quietly in the background without disrupting users' natural behaviors -- to provide TV and movie recommendations, as well as to optimize the quality of services, such as streaming speed, playback quality, subtitles, or closed captioning. Of course, Netflix subscribers can contribute active feedback to the company, such as movie reviews or feedback on the accuracy of a translation, but the true value of Netflix's user data is in the quiet, zero-effort observation that allows the company to optimize experiences with no friction or disruption to regular user behavior. In fact, the company even hosted several competitions to invent better algorithms for user ratings, with a winning prize of $1M USD.

Within very saturated marketplaces, data is also becoming a key differentiator for some companies. For example, when Google first started, its value was almost entirely centered around the quality of its Pagerank algorithm, or its "software". But Google did not rest on the laurels of having good software, and prioritized data-driven insights as the future of the company. Consider Google Waze, the world's largest community-based traffic and navigation app. Google Waze relies heavily on both active consumer input and passive location-based data, combined with a sophisticated routing algorithm. The routing algorithm alone would not be enough to differentiate Waze from the other navigation systems of the world. Consumers are demanding more accurate maps and real-time traffic information, which could not happen without the use of data.

The future of software

There is another element in the rising importance of data: not only is the sheer amount of consumer data growing, but software is simultaneously becoming much easier to build. Developers can leverage new software programming tools, open source, and internet-based services to build more complex software in less time. As a result, the underlying intrinsic value of software companies is diminishing.

Netflix and Google are still disruptive companies, but no longer primarily because of their software -- it's their ability to use the data their customers produce to extend their engagement with customers. Their actual software is increasingly being commoditized; recommendation engines and navigation software both exist in open source and are no longer trade secrets.

Tomorrow's applications will consume multiple sources of data to create a fine-grained context; they will leverage calendar data, location data, historic clickstream data, social contacts, information from wearables, and much more. All that rich data will be used as the input for predictive analytics and personalization services. Eventually, data-driven experiences will be the norm.

And this basic idea doesn't even begin to cover the advances in machine learning, artificial intelligence, deep learning and beyond -- collectively called "machine intelligence". Looking forward even more, computers will learn to do things themselves from data rather than being programmed by hand. They will be able to learn faster than we could ever program them. In a world where software builds itself, computers will only be limited by the data they can or cannot access, not by their algorithms. In such a future, is the value in the software or in the data?

Rethinking business

As value shifts from software to the ability to leverage data, companies will have to rethink their businesses, just as Netflix and Google did. In the next decade, data-driven, personalized experiences will continue to accelerate, and development efforts will shift towards using contextual data collected through passive user behaviors.

Companies of the future have a lot on their plates. More than ever, they'll need to adapt to all types and formats of data (closed, open, structured and unstructured); leverage that data to make their product or service better for users; navigate the gray area around privacy concerns; and even reconsider the value of their intellectual property derived from software. They'll have to do all this while providing more contextualized, personalized, and automated experiences. "Data-driven" will spell a win-win situation for both users and businesses alike.

by Dries at January 08, 2015 05:37 AM

January 07, 2015

FOSDEM organizers

Call for volunteers

With FOSDEM just around the corner, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers helps us make FOSDEM a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch. Food will be provided. If you have some spare time during the weekend and would like to be a part of the team that makes FOSDEM tick: We need…

January 07, 2015 03:00 PM

Frank Goossens

Charlie Hebdo; L’amour plus fort que la haine

Love is stronger than hate!

by frank at January 07, 2015 01:05 PM

January 06, 2015

Mattias Geniar

Architecting Websites For The HTTP/2 Era

The post Architecting Websites For The HTTP/2 Era appeared first on ma.ttias.be.


The arrival of HTTP/2 will require a bit of re-thinking how we handle websites (and webservers). This makes it a good time to reflect on what those changes can bring.

This post is based entirely on theory (the HTTP/2 spec), as HTTP/2 is hard to test today. Major browsers support HTTP/2, but very few servers do. And it's often unclear which draft of the HTTP/2 spec they support.

The entire HTTP/2 Spec is available for reading on Github and is highly recommended. It covers a lot more edge-cases than this article does.

And if you're up for a bit more reading, the HTTP/1.1 spec is also worth it, if only for comparison's sake.

Recent benchmarks have shown that changes are in fact needed to take full advantage of HTTP/2: not optimising the way data is transferred could end up hurting performance on the HTTP/2 protocol.

Table of Contents

  1. Some notes
  2. An introduction to HTTP/2
  3. Less domain sharding
  4. Less concatenation
  5. Is HTTPS/TLS required?
  6. Compression
  7. Server-side push
  8. Request priorities
  9. HTTP methods and status codes
  10. HTTP/2 and Varnish
  11. The rise of alternative webservers
  12. When will we see HTTP/2?
  13. References
  14. Comments


Some notes

This post took a while to finish and gather all the information. It's, so far, based entirely on theory. It's my plan to keep this post updated as A) the spec progresses and B) some of these theories can be benchmarked and put to the test.

For that to work, please let me know (in the comments at the bottom or online) what is wrong, what should be expanded upon and how you think the HTTP/2 protocol is going to evolve the web.

(Note: even though HTTP/2 is based on SPDY, I don't feel benchmarking SPDY would accurately reflect the way HTTP/2 would perform, I therefore consider HTTP/2 "untestable" for the moment.)


An introduction to HTTP/2

Before I go deeper into what HTTP/2 can change for the web, it's important to know what HTTP/2 is. First and foremost, it builds upon the SPDY protocol that Google designed, and incorporates the lessons learned from that protocol.

Where HTTP/1.0 and HTTP/1.1 were plain-text protocols, HTTP/2 isn't. It's entirely binary and based on a concept of streams, messages and frames -- adding considerable complexity to the protocol.

- The stream is a virtual channel within a connection, which carries bidirectional messages. Each stream has a unique integer identifier (1, 2, ..., N).

- The message is a logical HTTP message, such as a request, or response, which consists of one or more frames.

- The frame is the smallest unit of communication, which carries a specific type of data—e.g., HTTP headers, payload, and so on.

HTTP/2 streams, messages and frames

Bottom line here is you can't telnet into a HTTP/2 webserver and expect to write plain-text headers to make a request. You'll need tools to translate the HTTP protocol into the HTTP/2 binary form (think curl, wget, your browser, ...).

Chances are, you'll use those tools the same way you use them today. Behind the scenes they'll translate your HTTP requests into the binary message frame format that HTTP/2 expects.

So a curl request like the one below will work for HTTP/1.0, HTTP/1.1 and HTTP/2 servers. It will be curl that handles the connection and encodes your request transparently to meet HTTP/2's requirements.

$ curl -I -H "Accept-Encoding: gzip" -H "User-Agent: YourCustomUA" http://192.168.1.5/mypage.html

HTTP/2 is the first major change to the HTTP protocol since 1999. That means it can learn from 15 years of experience and from watching the web evolve (and my-oh-my, has the web evolved in the last 15 years).

So HTTP/2, what can you bring to the table?


Less domain sharding

In HTTP/1.1 there is a problem known as "concurrent connections per domain". A browser will open 4 to 8 TCP connections to a given host, and request individual resources (stylesheets, images, javascript, ...) one by one. To circumvent this, websites nowadays use multiple domains to load their resources (like static1.domain.tld, static2.domain.tld, ...).

The reason for this kind of domain sharding is to have more concurrent downloads of resources. Each connection would otherwise block until one of the previous requests is done.

HTTP/2 introduces multiplexing, which allows one TCP/IP connection to request and receive multiple resources, intertwined. Requests won't be blocking anymore, so there is no need for multiple TCP connections on multiple domain names.

In fact, opening multiple connections would hurt performance in HTTP/2. Each connection would have to go through the SYN -> SYN-ACK -> ACK three-way handshake, wasting round-trips. The HTTP/2 spec describes it like this.

Clients SHOULD NOT open more than one HTTP/2 connection to a given host and port pair, where host is derived from a URI, a selected alternative service [ALT-SVC], or a configured proxy.
9.1 -- connection management

This would mean that HTTP resources, such as CSS, JavaScript, Images, ... don't need to come from other (sub)domains anymore, but can all come from the same domain as the parent resource. This would also make it easier to implement protocol-relative URLs.
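As a purely hypothetical illustration (example.com and the file names below are made up), asset references from the sharded HTTP/1.1 era, like these:

<img src="//static1.example.com/img/logo.png">
<script src="//static2.example.com/js/app.js"></script>

could simply become same-origin references on an HTTP/2 site:

<img src="/img/logo.png">
<script src="/js/app.js"></script>

One connection, one origin, many multiplexed streams.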


Less concatenation

With HTTP/1.1 there was always a difficult trade-off between domain sharding, as explained above, and resource concatenation.

Since HTTP requests are fairly expensive, they were reduced to a minimum. Separate JavaScript and CSS files were concatenated into a single file, and CSS sprites were used to reduce the number of individual image resources.

Stylesheets would be (partly) inlined, to avoid additional requests to the server for more CSS files (even if there were always arguments against inlining). The inlining of content is largely made unnecessary by server-side push in HTTP/2, more on that later.

For HTTP/2, a part of that workflow can be undone. Looking at CSS sprites for instance, they would commonly include images that are needed on the site, but perhaps not on the page currently being browsed. Yet they were sent to the client in the "large" sprite. Since HTTP requests are becoming less expensive, it can become acceptable to separate those images again and not bundle them in one large file.

The same would apply to CSS and JavaScript as well. Instead of having a single monolithic file with all content, it can be split into chunks that are then only loaded on the pages that need them.

There will, as always, be a tradeoff between making an additional HTTP call and bundling all resources into single files -- that's what the benchmarks will have to decide for us.
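As a sketch of what that split could look like (the file names are made up), instead of one site-wide bundle:

<script src="/js/all-in-one.min.js"></script>

each page could load only the chunks it actually needs:

<script src="/js/core.js"></script>
<script src="/js/checkout.js"></script> <!-- only included on the checkout pages -->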


Is HTTPS/TLS required?

HTTP/2 is based on SPDY, and SPDY required a TLS (https) connection in order to be used.

However, HTTP/2 doesn't require a secure connection, unlike SPDY. It's possible to use HTTP/2 on a plain, non-secure HTTP connection. Having said that, it looks like major browsers (Firefox & Chrome) may be limiting their HTTP/2 support to TLS connections only, in order to push for a more secure web (SSL/TLS everywhere).

Firefox will only be implementing HTTP/2 over TLS -- and so far that means for https:// schemed URLs. It does enforce the protocol's >= TLS 1.2 requirement -- if a server negotiates HTTP/2 with a lower TLS version it is treated as a protocol error.

Networking/http2 on Mozilla.org

So even though the spec says HTTP/2 is possible on plain HTTP, chances are we'll only be using it on HTTPS websites.

I believe it's safe to say the web built on HTTP/2 will be a web built on TLS (1.2 and higher). With free certificate authorities managed by eff.org and cheaper certificates all around, I don't think there are many compelling reasons left not to go SSL/HTTPS in the future (but beware of bad SSL/HTTPS implementations).
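In practice, that means getting your webserver ready for TLS 1.2 today. A minimal sketch for Nginx, which at the time of writing speaks SPDY rather than HTTP/2 (the certificate paths are made up, and the spdy keyword assumes Nginx was built with its SPDY module):

server {
    listen 443 ssl spdy;
    server_name www.example.com;

    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;

    # HTTP/2 over TLS requires TLS 1.2 or higher
    ssl_protocols       TLSv1.2;
}

Swapping "spdy" for an HTTP/2 equivalent, once servers ship it, should be the only change needed in such a setup.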


Compression

HTTP/2 actively discourages the use of compression for secure websites. HTTP compression (gzip, deflate, ...) has been known to compromise SSL/TLS security in the BREACH and CRIME attacks.

These attacks exist on HTTP/1.0 and HTTP/1.1 infrastructure and will also be possible on HTTP/2.

HTTP/2 enables greater use of compression for both header fields (Section 4.3) and entity bodies. Compression can allow an attacker to recover secret data when it is compressed in the same context as data under attacker control.

10.6 Use of Compression

For any secure site, where the SSL/TLS connection is used to protect user data, compression should be disabled. For sites that use SSL/TLS only to ensure the validity of the data that is being sent, I believe compression will still be an option -- as long as no secret or sensitive information is shown. This is the same today in HTTP/1.1 as well.
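For Nginx, for example, disabling response-body compression on a TLS vhost that serves secret data is a one-line change (a sketch only; adapt it to your own configuration):

server {
    listen 443 ssl;

    # don't compress response bodies over TLS; mitigates BREACH-style attacks
    gzip off;
}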

HTTP/2 will support the compression of HTTP headers, which is not possible in HTTP/1.1 (where HTTP headers are always sent uncompressed). This is especially useful for sites shipping with a lot of cookies (sites ship with 1MB worth of cookies, really?). This content can now be reliably compressed.

The HTTP header compression doesn't use the known gzip/deflate algorithms and is as such not vulnerable to BREACH attacks. It uses a custom compression method, known as HPACK, to compress the HTTP headers.

In all likelihood, HTTP/2 will not change the way we handle the compression of data compared to HTTP/1.1. It does offer a great improvement for the compression of HTTP headers.


Server-side push

In HTTP/1.1, the only way for a browser (or "client") to retrieve data from a server, is to first request it from the server.

In HTTP/2, the server can send along extra resources together with the response to the first HTTP request, thus avoiding additional network round-trips for follow-up HTTP requests.

This is especially useful for those first requests where the browser would ask for the HTTP resource of the page (say, /somepage.html), only to parse the DOM and figure out it needs to request additional CSS/JavaScript/images/... resources as a result.

How will this feature work with today's code, written in PHP, Ruby or .NET? Hard to say. In the end, it's the HTTP/2 webserver (Nginx, Apache, ...) that needs to send along additional HTTP responses to the client.

Will the HTTP/2 webserver determine on its own which extra resources to send? Will there be a way to instruct the HTTP/2 webserver from within your programming code? Hopefully, although the syntax or the methods for doing so are still unclear and would/could be highly dependent on the chosen webserver.

At the moment, I would treat this feature as an "obscure black box" that we will have little or no control over. Here are a few suggestions on how to handle these server-side pushes from within your application code.

1. The application can explicitly initiate server push within its application code. (example in NodeJS)

2. The application can signal to the server the associated resources it wants pushed via an additional HTTP header. (ref.: X-Associated-Content header)

3. The server can automatically learn the associated resources without relying on the application.

Implementing HTTP 2.0 server push

Ilya Grigorik (@igrigorik) has some examples based on NodeJS code that demonstrate this powerful feature (examples where you do have full control over server-side pushes).
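To make option 2 above a bit more concrete: the X-Associated-Content approach boils down to the application emitting an extra response header listing the resources it would like the webserver to push. This is only a sketch -- the header is not standardised, the exact value syntax may differ per implementation, and example.com and the paths are made up:

HTTP/1.1 200 OK
Content-Type: text/html
X-Associated-Content: "https://www.example.com/css/main.css", "https://www.example.com/js/app.js"

The webserver would then initiate the pushes itself; the application never has to speak the binary HTTP/2 framing.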


Request priorities

An equally "obscure" feature in HTTP/2 is the prioritisation of HTTP requests. Each request can be given a priority (0 being the highest priority, like MX DNS records) and will be processed accordingly.

It'll be up to the browser to specify the priority of each HTTP resource. The HTTP/2 protocol allows the priority to be given, so blocking resources can be given a higher processing priority than non-blocking resources. It's up to the HTTP/2 webserver to process those priority requests accordingly.

As it looks now, this will be a feature of HTTP/2 that we, developers/users, won't have a say in. We will most likely not be able to assign priorities to HTTP resources ourselves. This may be a good thing, as browsers will be far more intelligent in figuring out which resources should get which priority.


HTTP methods and status codes

All HTTP status codes that are defined for HTTP/1.1 remain for HTTP/2. We'll still have the HTTP 200 OK responses, the 301 permanent redirects and the 404 Not Found responses.

The same goes for all methods defined in HTTP/1.1: GET, POST, PATCH, PUT, DELETE, ... all these methods are still here.

Since HTTP/2 builds further upon HTTP/1.1, all status codes and methods remain the same.


HTTP/2 and Varnish

It's no secret that I love Varnish, the HTTP accelerator/cacher/load balancer/router. Varnish has historically only supported HTTP/1.1, and HTTP only. It never implemented SSL/TLS.

For sites to use Varnish with HTTPS, they would use Pound / HAProxy / Nginx as an "SSL offloader" in front of their Varnish configuration. That service would handle all the SSL/TLS encryption and pass the requests to Varnish in plain HTTP for caching.

However, it looks like support for HTTP/2 may be coming to Varnish after all. It's no secret that Poul-Henning Kamp, author of Varnish, doesn't like HTTP/2, or at least -- the first drafts -- but at the same time he says "if that's what the people want, I'll do it".

At the end of the day, a HTTP request or a HTTP response is just some metadata and an optional chunk of bytes as body, and if it already takes 700 pages to standardise that, and HTTP/2.0 will add another 100 pages to it, we're clearly doing something wrong.
Poul-Henning Kamp

And in a more recent (May 2014) mailing list post, Poul-Henning Kamp confirms his opinion again.

Isn't publishing HTTP/2.0 as a "place-holder" is just a waste of everybodys time, and a needless code churn, leading to increased risk of security exposures and failure for no significant gains ?

[...]

Please admit defeat, and Do The Right Thing.

Poul-Henning Kamp

And even more recently (January 2015), the HTTP/2 rant got an update.

HTTP/2.0 is not a technical masterpiece. It has layering violations, inconsistencies, needless complexity, bad compromises, misses a lot of ripe opportunities, etc.
Poul-Henning Kamp

Maybe we'll see HTTP/2 support in Varnish in the 4.x releases, maybe we'll have to wait for the 5.x release. As far as I can tell, there is no "official" statement from the Varnish community yet.

For me personally, I believe (at least in the short-term) our server setups will look like this.

port :80   --> Varnish HTTP accelerator
port :443  --> Nginx SSL HTTP/2 + SPDY  offloading, proxy all to Varnish on :80
port :8080 --> The actual webserver (Nginx/Apache/...) parsing the requests

If HTTP/2 catches on for plain HTTP connections and not only for TLS sessions, and Varnish turns out not to support HTTP/2 at all, the setup would look slightly different.

port :80   --> Nginx running HTTP/1.1 and HTTP/2, proxy all to Varnish on :8080
port :443  --> Nginx SSL offloading, proxy all to Varnish on :8080

port :8080 --> Varnish serving the cached requests, proxy all not in the cache to :8081
port :8081 --> The actual webserver (Nginx/Apache/...) parsing the requests

Time will tell. Whether the backend serving the actual PHP/Ruby/Node/... requests will be Nginx or Apache will depend on the sysadmin and their familiarity with each webserver.


The rise of alternative webservers

While HTTP/2 may not be 100% new (after all, it builds on the HTTP/1.1 spec), it does change a few important paradigms in how we think about webservers and sites. That means the webservers we're using today may not be the best fit for the HTTP/2 world.

Websites are designed and architected with the best user-experience in mind. That means they're optimized for the browsers rendering them, not the servers serving them. We can easily swap out the webserver (they're in our control), but we can't change the browsers clients are using.

So in the HTTP/2 era, we may see H2O as a new rising star, next to the proven webservers like Nginx and Apache. And I don't think H2O will be alone. It already shows impressive improvements over Nginx, and the HTTP/2 race has only just begun.


When will we see HTTP/2?

This is very hard to say. The timeline for HTTP/2 has a major milestone set for February 2015: the RFC, the point at which the IETF working group has finished the proposal and it has been reviewed.

So at the earliest, HTTP/2 will be "finalised" in February 2015. We can expect the final implementations in major webservers soon thereafter (especially since Nginx already fully supports SPDY and HTTP/2 is based partly on SPDY). Most modern browsers already support SPDY, making the change to HTTP/2 less of a hurdle (but by no means an easy task).

Both Firefox and Chrome already support HTTP/2, albeit in a limited form in Mozilla's Firefox, and it needs to be enabled explicitly in Chrome.

The HTTP/2 spec won't change much compared to the version currently published. Anyone could already implement the current spec and update their implementation should any changes still be approved.

2015 will be the year we see HTTP/2 reach general availability.


References

In no particular order, but all worthy of your time and attention.

If you have any more feedback, please let me know in the comments below. I'd love to hear what you think --- even if you disagree with me entirely!

The post Architecting Websites For The HTTP/2 Era appeared first on ma.ttias.be.

by Mattias Geniar at January 06, 2015 09:07 PM

Flush all the content from Memcached via the CLI

The post Flush all the content from Memcached via the CLI appeared first on ma.ttias.be.

Memcached is an easy to use key/value store that runs in the memory of a server. Its content is volatile: every restart of the Memcached service removes all the data in memory and starts anew.

But you can also flush the content (all keys and their values) from the command line, without restarting Memcached and without additional sudo commands to grant non-privileged users permission to flush the cache.

To flush the content, you can telnet to your Memcached instance and run the flush_all command.

$ telnet localhost 11211
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

flush_all
OK

quit

Or as one-liners that you can use in scripts; they rely on nc (netcat; yum install nc):

$ echo "flush_all" | nc localhost 11211
$ nc localhost 11211 <<< "flush_all"

As soon as the flush_all command is typed, all keys are set to expire. They won't be actively dropped from memory (as dropping every key would be quite a "heavy" operation), but they're marked expired, so the next retrieval returns nothing. This also means flush_all doesn't free memory on the server: the memory stays allocated to the memcached service and is simply reused for new items.
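As a side note, flush_all also accepts an optional delay in seconds, in case you want the expiry to kick in later rather than immediately:

$ echo "flush_all 300" | nc localhost 11211
OK

All items will then be considered expired 300 seconds from now.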

As a bonus, here's an nmap command that scans the entire IPv4 IP space for services that listen to port 11211. If you want to have fun, write a script that loops each IP and sends flush_all's to each service, every minute.

$ nmap 0.0.0.0/0 -p 11211 --open 2> /dev/null
...
Nmap scan report for something.tld (192.168.5.152)
PORT      STATE SERVICE
11211/tcp open  unknown
...

That's the price you pay for not correctly limiting your Memcached services. ;-)

The post Flush all the content from Memcached via the CLI appeared first on ma.ttias.be.

by Mattias Geniar at January 06, 2015 06:23 PM

Frank Goossens

As heard on Our Tube: Robert, Laurel & Hardy Woke up laughing

Robert Palmer was so much more than just the slick guy from his eighties videos (think “Addicted to Love”). A couple of days ago I heard “Woke up Laughing” on the radio, from the 1980 album “Clues” (which also features “Johnny & Mary”), and I found this video on YouTube featuring the iconic dancing Laurel & Hardy:

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at January 06, 2015 05:39 PM

FOSDEM organizers

More main track presentations confirmed

We are pleased to announce a second round of confirmed presentations. There is more to come as we are still reviewing a number of proposals.

Keynotes:
- Identity Crisis: Are we who we say we are? (Karen Sandler)
- What is wrong with Operating Systems (Antti Kantee)

Languages track:
- Get ready to party! (Larry Wall)
- Modularizing C software with Apache Celix (Pepijn Noltes)
- The Story of Rust (Steve Klabnik)

Time track:
- NTF's General Timestamp API and Library (Harlan Stenn)
- Ntimed an NTPD replacement (Poul-Henning Kamp)
- Technical Aspects of Leap Second Propagation and Evaluation (Martin Burnicki)
…

January 06, 2015 03:00 PM

Joram Barrez

Well worth reading

I hardly ever post links to other blogs here, but I felt that this deserved more attention than a regular tweet: Future of Programming – Rise of the Scientific Programmer (and fall of the craftsman). Many of the ideas written there resonate very well with me and are similar to what I’ve been pondering about and […]

by Joram Barrez at January 06, 2015 12:39 PM

Lionel Dricot

Gathering books, films and other cultural goods

This is post 3 of 4 in the series La consommation cueillette

Have you ever sighed while closing a book because you did not know what to read next? Have you ever wanted to watch dozens of classic films, only to find yourself with friends one evening with no film to suggest other than the blockbuster whose advertising is playing on a loop in every media outlet?

I am a particularly voracious consumer of books, films and comics. Historically, my choice was limited to my own library, enriched now and then through purchases, gifts and random finds in second-hand bookshops.

In the age of the web, the library of a pirate like me is virtually unlimited. And yet I only very rarely watched "great films". I read "easy" books in series. The immensity of my unlimited library terrified me and made me retreat into my own small, familiar universe.

So I decided to apply gathering-based consumption to my cultural consumption.

Step 1: gathering

After many attempts with various solutions, I ended up setting up my gathering basket on the SensCritique website, in the form of a wish list.

I mark as a "wish" any film, book, comic or TV series recommended to me by an acquaintance or an article.

But where SensCritique stands out is its ability to compile polls. For instance, every member of the site who wishes to can select their 10 favourite SF films. The results are aggregated and a list of the best SF films is published. It is also possible to "follow" users, who then become your "scouts". You can, for example, add me as a scout.

All these features ultimately serve one and only one purpose: adding items to my wish list. Widening my gathering grounds! But any other gathering basket will of course do the job.

Step 2: consumption

When I feel like watching a film, I consult my wish list. When I finish a book, I consult my wish list. When a friend asks me which comic I would like as a gift, I consult my wish list.

Sometimes I find items on my list for reasons I cannot explain. I do not feel like consuming them. The reviews generally look negative, especially among my scouts. So I do not insist: I delete the item without remorse.

And if, during a movie night, someone suggests a film that was not among my wishes, I consciously ask myself: why wasn't it there? Would I add it to my wishes? If not, can't I suggest an alternative?

Step 3: action

After consuming a work, I rate it on SensCritique and sometimes I venture to write a review. The goal is not so much to be read as to mark my passage and to be able, a few years from now, to recall a particular work.

At this stage, I would also like to be able to thank the author or authors, if they are still alive. Unfortunately, artists who accept pay-what-you-want prices are still rare exceptions. A pity!

In the end

All this effort, but for what results? Results beyond anything I had hoped for!

Since I started applying this method, my film literacy has taken a huge leap. I have finally taken the time to watch old classics that I had been "owing it to myself to see" for years. Better still: I took enormous pleasure in discovering them and I acquired a taste for quality films.

Rating a film negatively because I feel I wasted my time acts as a kind of reminder, an unconscious marker. I want to watch films that I will rate positively! Once detoxed, I realise how boring and sickening brain-dead blockbusters are.

The effect is the same for books: I read far more classics than before. I feel a need for quality, for depth. If I do not close a book feeling I have grown from it, I am disappointed.

A little blockbuster or a little thriller to empty my mind? No thanks! I do not want to empty my mind; on the contrary, I want to fill it, build it, work it, make it progress! I do not want to train my brain to stop thinking or get it used to stupidity! The brain is a muscle that atrophies when you do not use it.

After the media, gathering-based consumption has thus proven particularly well suited to culture. Only live shows, concerts and exhibitions are still missing! But could this method be applied to material goods? The answer in the next episode!

Photo by Matt McGee.

Thank you for taking the time to read this freely priced post. Feel free to support me with a few milliBitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

by Lionel Dricot at January 06, 2015 07:43 AM

January 05, 2015

Claudio Ramirez

Ignorance and arrogance don’t mix well (aka the CCC The Perl Jam talk)


In my experience (in an academic setting), positive public talks can often be categorized in two scenarios:

- You are knowledgeable about the subject, but by no means an expert. You have something to share that you think will be interesting for the audience.

- You specialised in a subject, and you hope the expertise you built and your experience will interest or help others. You know there are experts on related subjects out there and you're curious about what they have to say. Oh, and maybe you're not an expert after all. Maybe you are. You don't care.

With a positive attitude like that, these kinds of talks tend to be fun for you and for the audience. With or without jokes, depending on the setting and your style. It boils down to knowledge (can the audience learn something from you, or at least did you make someone there think about the subject) and openness (are you willing to learn?). If you're a superstar in your field, you may get pretty far with knowledge alone. The audience may forgive you for being an arrogant prick. For some time. Maybe. (If you declare yourself an expert, you're probably no superstar.)

Most of us are no superstars: enter the “Perl Jam Talk”. Wouter (@debian) presented a nice technical rebuttal. I am sure there are others out there. Anyone who learnt the basics of Perl with O’Reilly’s Learning Perl (I did) or stayed up to date with 2014’s Perl by reading chromatic’s Modern Perl (a wonderful gift to the Perl community) knows that the ranting has no technical merit.

So, in this case it wasn’t ignorance that hit me. We are all newbies one way or another; life would be boring otherwise. I think I could even stand arrogance. But the combination? No number of “suck” and “fuck” exclamations during a talk can save you there. The tantrum route for not understanding something? Here is what most sensible people do: reread the paragraph or ask someone to explain it. I am pretty sure that even superstars do that once in a while.


Filed under: Uncategorized Tagged: An old man with a stick cursing and ranting about something he doesn't understand, Perl, PerlJamRant

by claudio at January 05, 2015 11:44 PM

LOADays Organizers

Loadays CFP 2015

After a successful fifth edition of LOAD, the crew decided to organize a sixth edition.

You are invited to submit a proposal to participate in the 2015 Linux Open Administration Days in Wilrijk, Belgium on April 11 and 12 2015.

More info about how to submit your proposal can be found here

The Linux Open Administration Days is a conference focusing on Linux and Open Administration/engineering. We are trying to fill a gap for System Engineers and Administrators using Open Source technologies.

Now go and submit that proposal!

by Loadays Crew at January 05, 2015 11:00 PM

Mattias Geniar

Making Standard Resources Overwritable In Your Own Puppet Modules

The post Making Standard Resources Overwritable In Your Own Puppet Modules appeared first on ma.ttias.be.

Let's say you have a module where some "standard" resources are being managed (file/user/package/...). In those resources, you want to make some attributes overwritable, but only as optional arguments. So if the arguments aren't supplied in your own module, the OS/Puppet default values should be used instead.

It's hard to explain in words, but an example would make it more clear. Here's what you actually want to do (even though it's invalid puppet code).

class mymodule (
  $gid = undef,
  $uid = undef,
) {
  user { 'myuser':
    ensure => present,
    uid    => $uid,
  }

  group { 'mygroup':
    ensure => present,
    gid    => $gid,
  }
}

If the $gid or $uid parameters are not supplied in your mymodule module, Puppet should just use the "default" values. So you want to make these arguments optional. There are some default values you could be tempted to use in the parameter list of your module, like nil, undef, ..., but they won't work.

Your Puppet run will most likely end up like this and your catalog compilation will fail.

$ puppet agent -vt 
...
Error: Failed to apply catalog: Parameter gid failed on Group[mygroup]: Invalid GID  at /etc/puppet/modules/mymodule/manifests/init.pp:8

The solution is to reference the already-defined resource and append the attributes you want to manage to it. That means the example above can be extended to look like this (which includes the stdlib module from puppetlabs for input validation).

class mymodule (
  $gid = undef,
  $uid = undef,
) {
  include stdlib

  user { 'myuser':
    ensure => present,
  }

  if (is_integer($uid)) {
    # If the supplied UID is an integer, extend the previous resource with the uid
    # Otherwise, don't try to manage the attribute and leave it at the defaults / untouched
    User [ 'myuser' ] {
      uid => $uid,
    }
  }

  group { 'mygroup':
    ensure => present,
  }

  if (is_integer($gid)) {
    # If the supplied GID is an integer, extend the previous resource with the GID
    # Otherwise, don't try to manage the attribute and leave it at the defaults / untouched
    Group [ 'mygroup' ] {
      gid => $gid,
    }
  }
}

This does mean your module code gets large pretty quickly, as each "optional" attribute needs to be checked / validated (but you do that for every parameter already, right?) before it'll be managed.

I'm actually hoping that I'm missing a more obvious solution to this problem, because with a few "default" resources in your modules, the parameter list grows very quickly, and the number of if statements grows with it -- each managing a new resource attribute.

You can also inspect each puppet resource and its parameters at the CLI to verify your supplied values.
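For example, puppet resource prints the current state of a resource as Puppet DSL, which makes it easy to compare the uid/gid that ended up on the system with the values you passed in. A quick sketch, using the myuser example from above (the numeric values are of course just an example, and the real output lists more attributes than shown here):

$ puppet resource user myuser
user { 'myuser':
  ensure => 'present',
  uid    => '1001',
  gid    => '1001',
}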

The post Making Standard Resources Overwritable In Your Own Puppet Modules appeared first on ma.ttias.be.

by Mattias Geniar at January 05, 2015 07:36 PM

Lionel Dricot

Gathering news and information

This is post 2 of 4 in the series La consommation cueillette

The basic principle of "gathering-based consumption", which I introduced in the previous post, is to completely dissociate the act of "harvesting" from the consumption itself.

So I maintain a list of what I want to consume. When I find something interesting to consume, I add it to that list. And when I feel like consuming, I pick an item from that list. I avoid direct consumption as much as possible.

Attentive readers will note that I have criticised lists before. Indeed, lists are dangerous when the goal is to empty them. In this particular case, the goal is not to empty the list; on the contrary, it is to keep it filled in order to satisfy the urge to consume. I also keep in mind that the list is not an obligation: I regularly delete items I have not consumed but simply no longer feel like consuming.

News and articles of general interest

The simplest example is the consumption of "news". Through numerous techniques, websites make us addicted to consuming information. That consumption is compulsive and immediate.

The effects are harmful from every point of view: we waste time consuming pointless videos and emotionally charged articles, which can breed frustration. We also gradually lose our ability to concentrate by seeking immediate gratification. The emotion triggered inhibits broader reflection. We are exposed to the advertising that comes with this type of content. Did you know that, statistically, the world is becoming less and less violent? That 2014 was a year with an extremely low rate of plane crashes? Surprising? That is simply because your view of the world is shaped by the media in order to turn you into an emotionally compulsive consumer.

Worse: by clicking on those links, we strengthen the industries that value this type of content. We validate a business model whose purpose is to dumb us down. We also increase the popularity of the link, which makes it more likely to be suggested to our contacts on social networks.

Reading the news on news sites is like smoking: it destroys you, it pollutes the people around you and it props up a morbid industry.

In short, if there was one area where I could greatly improve my consumption, it was content on the web.

Step 1: gathering

The first step consists of harvesting content likely to interest me and saving it to a list. For this list I use the Pocket service. For free-software enthusiasts, I recommend Framabag.

During the gathering phase, I forbid myself from reading articles or watching videos directly. This discipline is essential to applying "gathering-based consumption". As a general rule, I forbid myself from scrolling the page. If the content does not fit entirely on my screen, then I add it to Pocket.

I gather mostly on social networks. I check those networks randomly, without any real logic, unsubscribing from people who post too much content I find uninteresting. When I am on a social network, I am in gathering mode: I simply put the appetising fruit in my bag and forget about them immediately.

I go even further: I noticed that I had acquired the reflex of visiting certain news sites even when their content was not relevant. On the contrary, every visit to those sites made me sigh at the inanity of the content. Yet as soon as my attention drifted for a moment, my fingers would type those addresses without thinking. There was no way to control myself; it was an acquired reflex! Desperate times call for desperate measures: I installed the LeechBlock (Firefox) and WasteNoTime (Chrome) extensions. I completely blocked all generic news sites, and I sometimes stare in horror at the blocking page informing me that my fingers have, once again, typed that address! A painful but necessary detox.

For more structure, I subscribe to the most relevant sites via Feedly. For free-software enthusiasts, I recommend Framanews. I deliberately keep very few subscriptions and I skim the news without ever going further than the title and the first sentence. If they seem interesting, I add the article to Pocket.

When I notice Feedly filling up a bit too much for my taste, I unsubscribe from the feed with the most unread items.

When I really like a content producer, I subscribe in several ways: RSS, Facebook, Twitter, etc. I consciously accept seeing duplicates and being told several times that a piece of content exists. Dissociating gathering from consumption makes duplicates a minor annoyance.

And I will shamelessly take this opportunity to invite you to follow me right now on Twitter, Facebook, Google+, Diaspora and Feedly. Not only will you be informed when I publish a new post, you will also help it spread!

Step 2: consumption

When I feel idle, when I feel like reading or, quite simply, when I go where the king goes alone, I open the Pocket application. Depending on my mood, I pick an article from the list. I read it. Sometimes very quickly, sometimes in depth.

Note that Pocket displays only the content of the article: I am not bombarded with ads, and I do not get all those "If you liked this article, you will love..." links. In short, I consume intelligently.

I allow myself, without any guilt, to mark an article as read without finishing it. Because it was not that interesting. Because it is out of date. Or quite simply because reading it bores me.

Another important point to keep in mind: my goal is not to "empty" my Pocket list. It is not a to-do list in disguise, only a library, a list of suggestions.

To prevent some articles from going stale, I nevertheless impose a minimal discipline on myself: from time to time, I make myself start reading the oldest article, whatever it is.

Step 3: action

In the vast majority of cases, reading does not lead to any direct action. I simply read and let the information settle in my brain. The advantage of centralising everything in Pocket is that, if I remember an interesting article, I can probably find it again easily in my archive.

Some articles nevertheless push me to act: I want to share the article or contact the author, send them a flattr or a ChangeTip, or use the content in one of my own articles. In those cases, I simply mark the article as a favourite in Pocket.

Thanks to an IFTTT rule, every Pocket article marked as a favourite shows up in my Evernote notes.

The lessons of gathering-based consumption

In the end, this "gathering-based consumption" lets me absorb an impressive amount of relevant and intelligent information while avoiding the chaff and the emotional frenzy.

As an example, after the crash of flight MH17 I had saved about ten articles on the subject in Pocket, collected over the following days. When I finally wanted to read up on the subject, I realised that one of the articles was a complete and detailed summary of the whole affair. I read it and was able to delete all the other articles without even glancing at them. The gain in time, but also in perspective, is therefore particularly interesting.

But couldn't I use this technique for something other than news and articles on the web? That is precisely what I propose to explore in the next post...

Photo by Cindy Cornett Seigle.

Thank you for taking the time to read this freely priced post. Feel free to support me with a few milliBitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

by Lionel Dricot at January 05, 2015 07:44 AM

January 04, 2015

Les Jeudis du Libre

Mons, January 15 – DocBook: write like a pro!

This Thursday, January 15 2015 at 7 PM, the 35th Mons session of the Belgian Jeudis du Libre will take place.

The topic of this session: DocBook: write like a pro!

Theme: Documentation

Audience: open to all

Speaker: Philippe Wambeke (LoLiGrUB)

Venue: Université de Mons, Campus Plaine de Nimy, avenue Maistriau, Grands Amphithéâtres, Auditoire Curie (see the map on the UMONS website, or the OSM map).

Attendance is free and only requires registration by name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also benefit from the support of our partners: CETIC, Normation, OpenSides, MeaWeb, NextLab, Phonoid and Creative Monkeys.

If you are interested in this monthly series, feel free to check the agenda and subscribe to the mailing list to receive the announcements automatically.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month, and are organised on the premises of, and in collaboration with, the Mons universities and colleges involved in training IT professionals (UMONS, HEH and Condorcet), with the help of the non-profit LoLiGrUB, which is active in promoting free software.

Description: Everyone has had to write a document at least once in their life: a final-year project, a thesis, a recipe collection, a biography, technical or functional documentation, ... Writing documentation is often synonymous with a repetitive, thankless and time-consuming task, where you quickly get lost in details that have nothing to do with the content of the document, such as formatting. So you start dreaming of a free solution that would let you write easily without worrying about layout, that would let you switch output formats in one click, all while guaranteeing the longevity of the document. Well, thanks to DocBook, that dream has become reality!

The talk will cover the following points:

  • Show, through a few examples, the absurdity of "wysiwyg" tools
  • Briefly discuss the LaTeX and Markdown alternatives
  • Present DocBook: its origin and governance, a simple example, the tooling involved, the output formats, the usual elements (chapter, section, paragraph, table, example, (un)ordered list, ...), customisation and configuration
  • Demonstrate it with a document used in a professional setting

In summary, DocBook is an XML markup language for describing the structure of a document on a semantic basis. It contains no presentation elements; the document can be generated in different output formats (HTML, PDF, ...), always with an impeccable layout adapted to the medium.
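To give a rough idea of what such a semantic document looks like, here is a minimal DocBook 5 sketch (the contents are invented for the example):

<?xml version="1.0" encoding="UTF-8"?>
<article xmlns="http://docbook.org/ns/docbook" version="5.0">
  <title>My report</title>
  <section>
    <title>Introduction</title>
    <para>Only the structure is described; the layout is decided at generation time.</para>
  </section>
</article>

The same source can then be fed to the DocBook toolchain to produce HTML, PDF and other formats.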

by Didier Villers at January 04, 2015 04:01 PM

January 03, 2015

Thomas Vander Stichele

2 months in

Today is my two month monthiversary at my new job. Haven’t had time so far to sit back and reflect and let people know, but now during packing boxes for our upcoming move downtown, I welcome the distraction.

I dove into the black hole. I joined the borg collective. I’m now working for the little search engine that could.

I sure had my reservations while contemplating this choice. This is the first job I’ve had that I had to interview for – and quite a bit, I might add (though I have to admit that curiosity about the interviewing process is what made me go for the interviews in the first place – I wasn’t even considering a different job at that time). My first job, a four month high school math teaching stint right after I graduated, was suggested to me by an ex-girlfriend, and I was immediately accepted after talking to the headmaster (that job is still a fond memory for many reasons). For my first real job, I informally chatted over dinner with one of the four founders, and then I started working for them without knowing if they were going to pay me. They ended up doing so by the end of the month, and that was that. The next job was offered to me over IRC, and from that Fluendo and Flumotion were born. None of these were through a standard job interview, and when I interviewed at Google I had much more experience on the other side of the interviewing table.

From a bunch of small startups to a company the scale of Google is a big step up, so that was my main reservation. Am I going to be able to adapt to a big company’s way of working? On the other hand, I reasoned, I don’t really know what it’s like to work for a big company, and clearly Google is one of the best of those to work for. I’d rather try out working for a big company while I’m still considered relatively young job-market-wise, so I rack up some experience with both sides of this coin during my professionally mobile years.

But I’m not going to lie either – seeing that giant curious machine from the inside, learning how they do things, being allowed to pierce the veil and peek behind the curtain – there is a curiosity here that was waiting to be satisfied. Does a company like this have all big problems solved already? How do they handle things I’ve had to learn on the fly without anyone else to learn from? I was hiring and leading a small group of engineers – how does a company that big handle that on an industrial scale? How does a search query really work? How many machines are involved?

And Google is delivering in spades on that front. From the very first day, there’s an openness and a sharing of information that I did not expect. (This also explains why I’ve always felt that people who joined Google basically disappeared into a black hole – in return for this openness, you are encouraged to swear yourself to secrecy towards the outside world. I’m surprised that that can work as an approach, but it seems to). By day two we did our first commit (obviously nothing that goes to production, but still.) In my first week I found the way to the elusive (to me at least) roof top terrace by searching through internal documentation. The view was totally worth it.

So far, in my first two months, I’ve only had good surprises. I think that’s normal – even the noogler training itself tells you about the happiness curve, and how positive and excited you feel the first few months. It was easy to make fun of some of the perks from an outside perspective, but what you couldn’t tell from that outside perspective is how these perks are just manifestations of common engineering sense on a company level. You get excellent free lunches so that you go eat with your team mates or run into colleagues and discuss things, without losing brain power on deciding where to go eat (I remember the spreadsheet we had in Barcelona for a while for bike lunch once a week) or losing too much time doing so (in Barcelona, all of the options in the office building were totally shit. If you cared about food it was not uncommon to be out of the office area for ninety minutes or more). You get snacks and drinks so that you know that’s taken care of for you and you don’t have to worry about getting any and leave your workplace for them. There are hammocks and nap pods so you can take a nap and be refreshed in the afternoon. You get massage points for massages because a healthy body makes for a healthy mind. You get a health plan where the good options get subsidized because Google takes that same data-driven approach to their HR approach and figured out how much they save by not having sick employees. None of these perks are altruistic as such, but there is also no pretense of them being so. They are just good business sense – keep your employees healthy, productive, focused on their work, and provide the best possible environment to do their best work in. I don’t think I will ever make fun of free food perks again given that the food is this good, and possibly the favorite part of my day is the smoothie I pick up from the cafe on the way in every morning. It’s silly, it’s small, and they probably only do it so that I get enough vitamins to not get the flu in winter and miss work, but it works wonders on me and my morning mood.

I think the bottom line here is that you get treated as a responsible adult by default in this company. I remember silly discussions we had at Flumotion about developer productivity. Of course, that was just a breakdown of a conversation that inevitably stooped to the level of measuring hours worked as a measurement of developer productivity, simply because that’s the end point of any conversation on that spirals out of control. Counting hours worked was the only thing that both sides of that conversation understood as a concept, and paying for hours worked was the only thing that both sides agreed on as a basic rule. But I still considered it a major personal fault to have let the conversation back then get to that point; it was simply too late by then to steer it back in the right direction. At Google? There is no discussion about hours worked, work schedule, expected productivity in terms of hours, or any of that. People get treated like responsible adults, are involved in their short-, mid- and long-term planning, feel responsible for their objectives, and allocate their time accordingly. I’ve come in really early and I’ve come in late (by some personal definition of “on time” that, ever since my second job 15 years ago, I was lucky enough to define as ’10 AM’). I’ve left early on some days and stayed late on more days. I’ve seen people go home early, and I’ve seen people stay late on a Friday night so they could launch a benchmark that was going to run all weekend so there’d be useful data on Monday. I asked my manager one time if I should let him know if I get in later because of a doctor’s visit, and he told me he didn’t need to know, but it helps if I put it on the calendar in case people wanted to have a meeting with me at that hour.

And you know what? It works. Getting this amount of respect by default, and seeing a standard to live up to set all around you – it just makes me want to work even harder to be worthy of that respect. I never had any trouble motivating myself to do work, but now I feel an additional external motivation, one this company has managed to create and maintain over the fifteen+ years they’ve been in business. I think that’s an amazing achievement.

So far, so good, fingers crossed, touch wood and all that. It’s quite a change from what came before, and it’s going to be quite the ride. But I’m ready for it.

(On a side note – the only time my habit of wearing two different shoes was ever considered a no-no for a job was for my previous job – the dysfunctional one where they still owe me money, among other stunts they pulled. I think I can now empirically elevate my shoe habit to a litmus test for a decent job, and I should have listened to my gut on the last one. Live and learn!)


by Thomas at January 03, 2015 10:54 PM

Lionel Dricot

Repensons notre consommation et partons à la cueillette

4660849545_004f0b6e01_z
This is post 1 of 4 in the series La consommation cueillette.

Consumer society has taught us to consume mechanically, without thinking. We spend (and spend ourselves) without perspective in order to satisfy our needs, real or invented.

Yet consumption is the most fundamental act we can perform. It is the act through which we build ourselves, both as individuals and as a society.

Building the individual

Have you ever considered that every atom making up your body today was, one way or another, ingested at some point in the past? You are literally what you have eaten. That simple observation amply justifies paying attention to the origin and quality of what we eat.

But food is not the only thing we consume. Our brain, our personality, our very being is built from what we consume every day, from the choices we make. That is why I try as much as possible to avoid advertising and television, which are forced, unconscious ingestions of substances whose closest dietary analogue would be "highly carcinogenic and debilitating".

The supreme political act

But consumption is a two-way act. In exchange for what we consume, we provide something: labour, money, brain time. There is always a counterpart! Even when you illegally download a film on a P2P network, the act of downloading makes that film more readily available to others, boosts its popularity in P2P search engines and, in the end, helps it spread. Your consumption has a direct impact on the world!

When I made this blog a paying one, I admit I did not really know what I was doing. Today I am only beginning to realise how important the concept of "prix libre", pay-what-you-want pricing, really is. Pay-what-you-want is in fact the ideal of consumption, of commercial exchange: everyone consciously trades what they want for what they are willing to give.

So I started analysing all of my consumption through the lens of pay-what-you-want. For each product, I imagine I received it for free, and I ask myself: "How much would I freely want to give to the people who made this?" Simply thinking this way suddenly makes the €5 organic artisan cheese, produced on a farm a few kilometres away, look cheaper than the €2 industrial cheese.

Filling up the car now feels unbearable to me. Over the past year I have sent €10 to an artist whose work I love. Yet with a single tank of fuel, I send €80 a week to an industry and to people I despise, people responsible for ecological disasters and wars. Deep down, I am the only one responsible: I am the one supporting them, with my money. I support them tens, hundreds of times more than the artists who light up my life.

Consumption is therefore the ultimate political act, the true form of activism. Driving to a demonstration for a better society with a cigarette hanging from your lips is an absurd vulgarity, an insult to common sense. It is easing your conscience with futile shouting, all while making sure nothing changes.

Thanks to the pay-what-you-want philosophy, we are able to stop being shaped by our consumption and, instead, to shape our consumption in our own image. The musician Antoine Guenet explains, for instance, how pay-what-you-want pricing led him to become a vegetarian.

A method for changing the way we consume

We cannot demand that others, as a group, change if we are not capable, as individuals, of improving ourselves. We must, as Gandhi said, be the change we want to see in the world. We must change ourselves by choosing the building blocks that will make us and by choosing the human endeavours we want to support.

Both come down to one single act: consumption.

There is no point in deriding the "consumer society" as absolute evil: consumption is essential to life! Consumption is therefore a force. It is neither positive nor negative. It is only what we make of it.

In 2014 I set myself the goal of improving the quality of my intellectual diet. Not only did I meet that goal, I also generalised it into a methodology I call "consommation cueillette", foraging-style consumption. Over the months I have broadened the use of this foraging approach, quite happily. In the next three posts I would like to share that experience with you, covering the consumption of news and information, cultural consumption, and the consumption of material goods.

Photo by Jessica Lucia.

Thank you for taking the time to read this pay-what-you-want post. Feel free to support me with a few milliBitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE licence.


by Lionel Dricot at January 03, 2015 09:19 PM

January 02, 2015

Mattias Geniar

PHP’s CVE vulnerabilities are irrelevant

The post PHP’s CVE vulnerabilities are irrelevant appeared first on ma.ttias.be.

ircmaxell wrote a good blog post about the usage of PHP versions in the wild, which shows the vast number of different versions in use. The summary is meant to give an overview of "secure" vs. "insecure" PHP installations, based on the known CVEs in previous PHP versions. The conclusion is that fewer than 25% of PHP installations are secure. However, I think those numbers are all irrelevant.

I'm not sure the list is complete, but here's an overview of the known CVEs for PHP. There are a lot of them.

I'm active in the hosting sector. I manage and secure servers that clients use to host their PHP code. Do you know how many hacked sites and servers I've seen that were caused by PHP CVEs? Absolutely zero. How many were caused by outdated content management systems or plugins? All of them.

We still see PHP 5.1 and 5.2 in the wild. They've been unsupported for years. They have known security vulnerabilities. But those are not the flaws being abused. What is an insecure PHP installation? The one running an outdated WordPress or outdated plugins. Or a Drupal installation that isn't maintained. Or yet another WYSIWYG editor with upload functionality that gets installed once and never updated.
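
To make that last point concrete, here is a minimal, hypothetical sketch of the kind of upload handler those forgotten WYSIWYG plugins tend to ship, next to a slightly hardened variant. The file and directory names are made up; this isn't taken from any specific plugin.

<?php
// Hypothetical, simplified example: the handler trusts the client-supplied filename,
// so anyone can upload shell.php and then execute it from the web-served uploads/ directory.
move_uploaded_file($_FILES['file']['tmp_name'], __DIR__ . '/uploads/' . $_FILES['file']['name']);

// Slightly hardened: whitelist image extensions and choose the filename server-side.
$allowed = array('jpg', 'jpeg', 'png', 'gif');
$ext = strtolower(pathinfo($_FILES['file']['name'], PATHINFO_EXTENSION));
if (in_array($ext, $allowed, true)) {
    $target = __DIR__ . '/uploads/' . uniqid('img_') . '.' . $ext;
    move_uploaded_file($_FILES['file']['tmp_name'], $target);
}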

It's still the most common security flaws that get websites and servers hacked. The CVEs in PHP's own code aren't easy to exploit. But a SQL injection or remote code execution vulnerability in a popular CMS? Those are the real danger.
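
As a hedged illustration (the table, column and connection details below are invented, not taken from any real CMS), the vulnerable pattern and its prepared-statement fix look roughly like this with PDO:

<?php
// Hypothetical example: user input concatenated straight into the SQL string.
// A request like ?id=1 OR 1=1 returns every row; UNION tricks can leak other tables.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'dbuser', 'dbpass');
$unsafe = $pdo->query("SELECT title FROM posts WHERE id = " . $_GET['id']);

// The fix: a prepared statement keeps the input as data, never as SQL.
$stmt = $pdo->prepare('SELECT title FROM posts WHERE id = :id');
$stmt->execute(array(':id' => $_GET['id']));
$post = $stmt->fetch(PDO::FETCH_ASSOC);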

I gave a presentation at PHP Benelux 2014 titled "Code Obfuscation, PHP shells & more: what hackers do once they get past your code". In that talk, I never mentioned PHP CVEs. During the many discussions afterwards, CVEs were never the topic either.

I love PHP as a language. I still do all my coding in it, which is why it pains me to say this: PHP's security problem isn't the CVEs in the core language, it's the users. It's the nonchalance with which users install their CMS and never look at it again. That's what gives PHP a bad name, among the many other things that do.

WordPress 3.7+ does a fantastic job of automatically pushing security updates to all installations (when auto-updates are enabled). That still leaves Drupal, Joomla, Magento, ...


by Mattias Geniar at January 02, 2015 08:23 PM