Planet Grep is open to all people who either have Belgian nationality or live in Belgium, and who actively work with or contribute to Open Source/Free software.


Other planets can be found at the Planet Index.

A complete feed is available in a number of syndication formats: RSS 1.0, RSS 2.0, FOAF, and OPML.

The layout of this site was done by Gregory

September 20, 2014

Dieter Plaetinck

Graphite & Influxdb intermezzo: migrating old data and a more powerful carbon relay

Migrating data from whisper into InfluxDB

"How do i migrate whisper data to influxdb" is a question that comes up regularly, and I've always replied it should be easy to write a tool to do this. I personally had no need for this, until a recent small influxdb outage where I wanted to sync data from our backup server (running graphite + whisper) to influxdb, so I wrote a script:
wsp_dir=/path/to/whisper  # whisper dir without trailing slash (placeholder path).
db=graphite               # target InfluxDB database.
start=$(date -d 'sep 17 6am' +%s)
end=$(date -d 'sep 17 12pm' +%s)
pipe_path=$(mktemp -u)
mkfifo $pipe_path
function influx_updater() {
    influx-cli -db $db -async < $pipe_path
}
influx_updater &
while read wsp; do
  series=$(basename ${wsp//\//.} .wsp)
  echo "updating $series ..."
  whisper-fetch.py --from=$start --until=$end $wsp_dir/$wsp.wsp | grep -v 'None$' | awk '{print "insert into \"'$series'\" values ("$1"000,1,"$2")"}' > $pipe_path
done < <(find $wsp_dir -name '*.wsp' | sed -e "s#$wsp_dir/##" -e "s/.wsp$//")

It relies on the recently introduced asynchronous inserts feature of influx-cli (which commits inserts in batches to improve speed) and on the whisper-fetch tool.
You could probably also write a Go program using the unofficial whisper-go bindings and the InfluxDB Go client library, but I wanted to keep it simple. Especially when I found out that whisper-fetch is not the bottleneck: starting whisper-fetch and reading out (in my case) 360 datapoints of a file always takes about 50 ms, whereas InfluxDB at first only needed a few ms to flush hundreds of records, but that soon increased to seconds.
Maybe it's a bug in my code; I didn't test this much, because I didn't need to. But people keep asking for a tool, so here you go. Try it out, and maybe you can fix a bug somewhere. Something about the write performance here must be wrong.
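To see what the script actually feeds into influx-cli, you can trace one record through the same substitutions, with no whisper or InfluxDB needed (the path and datapoint below are made up for illustration):

```shell
# Hypothetical walkthrough of the script's transformations.
wsp="servers/web1/cpu/user"           # relative path, .wsp suffix already stripped by sed
series=$(basename ${wsp//\//.} .wsp)  # slashes become dots
echo "$series"                        # servers.web1.cpu.user

# whisper-fetch outputs "timestamp value" lines; awk turns each into an
# insert statement (seconds -> milliseconds by appending "000"):
echo "1410955200 0.42" \
  | awk '{print "insert into \"'$series'\" values ("$1"000,1,"$2")"}'
# insert into "servers.web1.cpu.user" values (1410955200000,1,0.42)
```

Piping a single file's output through `head` like this is an easy sanity check before letting the loop run over the whole whisper tree.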

A more powerful carbon-relay-ng

carbon-relay-ng received a bunch of love and has been a great help in my graphite+influxdb experiments.

Here's what changed:
  • First I made it so that you can adjust routes at runtime while data is flowing through, via a telnet interface.
  • Then Paul O'Connor built an embedded web interface to manage your routes in an easier and prettier way.
  • The relay now also emits performance metrics via statsd (I want to make this better by using go-metrics which will hopefully get expvar support at some point - any takers?).
  • Last but not least, I borrowed the diskqueue code from NSQ so now we can also spool to disk to bridge downtime of endpoints and re-fill them when they come back up
Besides our metrics storage, I also plan to put our anomaly detection (currently playing with heka and kale) and carbon-tagger behind the relay, centralizing all routing logic, making things more robust, and simplifying our system design. The spooling should also help when deploying to our metrics gateways in other datacenters, to bridge outages of the datacenter interconnects.

I used to think of carbon-relay-ng as the Python carbon-relay on steroids; now it reminds me more of something like nsqd, but with the ability to make packet routing decisions by introspecting the carbon protocol.
Or perhaps Kafka, but much simpler, single-node (no HA), and optimized for the domain of carbon streams.
I'd like the HA stuff though, which is why I spend some of my spare time figuring out the intricacies of the increasingly popular Raft consensus algorithm. It seems opportune to have a simpler Kafka-like thing, in Go, using Raft, for carbon streams. (Note: InfluxDB might introduce such a component, so I'm also waiting a bit to see what they come up with.)
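For context, the carbon plaintext protocol the relay introspects is just one "metric.path value unix_timestamp" line per datapoint, normally sent over TCP to port 2003, and a relay route can match on the metric name. A minimal shell sketch of the idea (the metric names and routing rule are hypothetical, and this is not carbon-relay-ng's actual config syntax):

```shell
# One carbon plaintext datapoint: "metric.path value unix_timestamp".
line="servers.web1.cpu.user 0.42 1410955200"

# A relay can route on the metric name (the first field):
metric=${line%% *}
echo "$metric"   # servers.web1.cpu.user
case "$metric" in
  servers.*) echo "route to the graphite endpoint" ;;
  *)         echo "route to the influxdb endpoint" ;;
esac
```

The `case` statement only illustrates prefix-based routing; carbon-relay-ng expresses such rules through its own configuration and telnet/web interfaces.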

Reminder: notably missing from carbon-relay-ng are round-robin and sharding. I believe sharding/round-robin/etc. should be part of a broader HA design of the storage system, as I explained in On Graphite, Whisper and InfluxDB. That said, both should be fairly easy to implement in carbon-relay-ng, and I'm willing to assist those who want to contribute them.

September 20, 2014 07:18 PM

Dries Buytaert

Reflections on Drupal in Japan


I spent the last week in Japan. The goal was two-fold: meet with the Drupal community to understand how we can grow Drupal in Japan, and evaluate the business opportunity to incorporate an Acquia subsidiary in Japan (we already offer Acquia Cloud in Japan using Amazon's Tokyo data center).

I presented at two Drupal meetups in Japan and spent the week meeting with members of the Drupal community, Drupal agencies, large system integrators (IBM, Accenture, Hitachi, Fujitsu, Ci&T and SIOS) and the Japanese government. In between meetings, I enjoyed the amazing food that Japan has to offer.

The community in Japan is healthy; there are some noteworthy Japanese Drupal sites and there are passionate leaders that organize meetups and conferences. The Japanese Drupal community is bigger than the Chinese Drupal community but compared to North America and Europe, the Japanese Drupal community is relatively small; the largest Drupal agency I met with employs 20 developers.

The large system integrators, with the exception of Ci&T, have not done any Drupal projects in Japan. We're way behind competitors like Sitecore, Adobe Experience Manager and SDL in this regard; all of them have enabled the large system integrators to sell and use their products. It was great to meet with all the system integrators to make them aware of Drupal and the potential it could have for their business. It's clear the large system integrators could benefit from an Open Source platform that allows them to move faster and integrate with more systems.

The biggest challenge is the lack of Japanese documentation, both marketing materials and developer documentation. Most Japanese developers do not have much confidence in their English and struggle to use Drupal or to participate on drupal.org. My recommendation for the Japanese Drupal community is to organize regular translation sprints. Translating one or more of the best-selling English Drupal books to Japanese could also be a game-changer for the community.

Another problem has been the historic challenges around the Japanese Drupal domain name. The anonymous owner of the domain claims that it is the official Drupal site in Japan (it's not officially approved) and runs it without much regard for, or consultation with, the broader Japanese Drupal community. I promised the Japanese community to help fix this.

I returned from my trip feeling that the Japanese market offers a great opportunity for Drupal. Japan is the world's third-largest economy, after the United States and China. With continued leadership, Drupal could be huge in Japan. I’d love that, as I would like to go back and visit Japan again.


by Dries at September 20, 2014 06:46 PM

Kris Buytaert

On Systemd and devops

If it's not broken , don't fix it.
Those who don't understand Unix are doomed to reinvent it, poorly
Complexity is the enemy of reliability.

These are some of the more frequently heard arguments in the systemd discussion. Indeed, I see and hear a lot of senior Linux people react openly, and probably way too late, against the introduction of systemd in a lot of our favorite Linux distributions.

To me this is a typical example of the devops gap: the gap between developers writing code and operations needing to manage that code on production platforms at scale.
Often developers write code that they think is useful and relevant while not listening to their target audience, in this case not the end users of the systems but the people maintaining the platforms, the people who work with these tools on a daily basis.

I have had numerous conversations with people in favor of and against systemd, and to this day I have not found a single general-purpose use case that could convince me of the relevance of this large change in our platforms. I've found edge cases where it might be relevant, but not mainstream ones. I've also seen many more people against it than in favor. I've invited speakers to conferences to come and teach me. I've probably spoken to the wrong people.

But this is not supposed to be yet another systemd rant; I want to tackle a bigger problem: this change, and some others, have been forced upon us by distributions that should be open and listen to their users, yet apparently both Debian and Fedora/RHEL largely failed to listen to their respective communities. Yes, we know that e.g. Fedora is the development platform and acts as a preview of what might come up in RHEL, and thus CentOS, later, but not everything eventually ends up in RHEL. So it's not like we didn't have an 'acceptance' platform where we could play with the new technology. The main problem here is that we had no simple way to stop the pipeline; it really feels like that long-ago Friday-evening rush deploy, not like a good conversation between developers and actual ops about the benefits and problems of implementing these changes. It feels like the developers of the distributions deciding what goes in from their own little silo and voting in a 'private' committee.

It also feels like the ops people were too busy to react: "Someone else will respond to this change; it's obvious this change is wrong, someone else will block it for sure."

And indeed, Operating System developers, like our Fedora and Debian friends, kinda live in their own silo. (Specifically not listing CentOS here.)

So my bigger question is: how do we prevent this from happening again? How do we make sure that distributions actually listen to their core users, and not just to the distribution developers?

Rest assured, systemd is not the only case with this problem. There are plenty of cases where features that were used by people, sometimes even what people considered the core feature of a project, got changed or even ripped out by the developers, because they didn't realize they were being used, sometimes almost killing that Open Source project by accident.
And I don't want that to happen to some of my favourite open source projects.

by Kris Buytaert at September 20, 2014 05:52 PM

September 19, 2014

Dieter Adriaenssens

How I got more relaxed by no longer commuting by car

Yesterday was car-free day, at least in Belgium, and by a happy coincidence I came across an article that pointed out a correlation between mental well-being and the means of transportation used when commuting to work. It turns out that not using a car, e.g. going by bike, on foot or by public transport, improves your mental health. The author wonders about the reason for this.

I'm not a psychologist, nor have I done scientific research to investigate this, but from my personal experience, I can think of a few reasons why not commuting by car is better for you.

A few years ago, I was commuting daily by car. Construction works were going on for a few months, so every morning I spent 20-30 minutes in a traffic jam (on top of the 30-minute drive it took me to get to work).
Those 20-30 minutes of waiting, driving slowly, accelerating and braking again, more waiting... well, it annoyed me, and I guess a lot of other people don't like traffic jams either.

A few months later, I was told the contract of my company lease car was about to end and I would get a new one.
Then I started wondering if I really liked spending that much time in traffic jams every day: 50-60 minutes of doing nothing but staring at the car in front of me. So I started looking for alternatives. It turned out there was a train station within walking distance of my office, and it would take me 50-60 minutes to get from home to work. No gain in travel time (and it would take me less time by car if there were no traffic jam), but I would spend about 45 minutes on a train, not having to pay attention to the cars in front of me, without the stress and boredom of waiting in a traffic jam. I could listen to some music, read a bit, take a nap, stare out the window enjoying the scenery passing by, or have a chat with a fellow commuter.
So instead of spending about an hour getting annoyed and stressed, I could relax while the train driver got me to work, and I could get some things done in the meantime.

So I declined the offer of a new lease car and decided to commute by train. I couldn't have made a better decision. From that moment on I arrived more relaxed at work and at home. Of course, commuting by train can be stressful as well: delayed or cancelled trains, crowded with noisy people. But I was lucky to have a quiet commuter train in the morning, and I could usually avoid rush hour in the evening, so I usually had a comfortable commute, arriving at work or at home much more relaxed.

Commuting by train can be annoying as well, if you have to cope with long commutes, multiple stop-overs, delays and crowded trains on a daily basis, as I experienced a few years later in another job (but at least I could still doze off or read a bit).
But I was relieved when I found a job closer to home that would take me 20 minutes by bike. No reading this time while commuting, but I got daily physical exercise, and cruising past rows of waiting cars was enjoyable. (I'm not gloating; actually, I took a route through the car-free city center, so I didn't see that many cars on my way to work.) But I knew that if I went to work by car, I would end up in a traffic jam and it would take me much longer to get to work.

I don't use the car much anymore, only for longer drives, places that are hard to reach by public transport, or for transporting heavy or bulky loads. And I like it. I can't imagine losing multiple hours waiting in traffic jams every week.
Overall I'm more relaxed, because I don't get annoyed waiting, and I can do some enjoyable things while commuting or get some physical exercise (which is also known to reduce stress levels).

As a consequence you have to make some compromises, and it will take some extra planning, but it's worth it.

by Dieter Adriaenssens at September 19, 2014 01:44 PM

Frederic Hornain

[MBaaS] Red Hat acquires FeedHenry


Red Hat today announced that it has signed a definitive agreement to acquire FeedHenry, a leading enterprise mobile application platform provider.
FeedHenry expands Red Hat's broad portfolio of application development, integration, and Platform-as-a-Service (PaaS) solutions (Openshift [1]), enabling Red Hat to support mobile application development in public and private environments.


by Frederic Hornain at September 19, 2014 09:10 AM

Frank Goossens

Septemberness from Japan ("Septemberigheid uit Japan")

A beautiful summer isn't strictly necessary, as long as September is gorgeous. And in 2014 it is once again, that most beautiful month in the world!

This year's Septemberness soundtrack comes from Japan. No idea what they are singing about, those boys and girls of Clammbon, but it's about that September feeling. Or about love, which is always a possibility too, of course.

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at September 19, 2014 05:24 AM

September 18, 2014

FOSDEM organizers

Call for participation: lightning talks and stands

We now invite proposals for lightning talks and stands. There is still time to submit main track presentations. The deadline for proposing developer rooms has now passed. Please see our previous call for participation for more information on submitting main track proposals. Lightning talks: lightning talks are short — 15 minutes — talks on a wide variety of topics. Anyone who has something interesting to say about an open source topic can apply. We particularly encourage topics that do not fit in any of the developer rooms. Proposals for lightning talks should be submitted using Pentabarf. Please be…

September 18, 2014 03:00 PM

September 17, 2014

Joram Barrez

Upcoming Webinar: Process Driven Spring Applications with Activiti – Sept 23rd

Next week, I’ll be doing a webinar together with my friend Josh Long (he’s a Spring Developer Advocate, committer to many open source projects and of course Activiti). I will show some of the new Activiti tooling we’ve been working on recently, while Josh will demonstrate with live coding how easy it is to use Activiti in Spring […]

by Joram Barrez at September 17, 2014 08:42 AM

Frank Goossens

The electric future of single-seat auto racing

I always felt somewhat torn between my love for F1 and my political/ethical green-leftist point of view, so I was very glad to learn about "Formula E" earlier this year. This new competition focuses entirely on identical, electrically-powered single-seat racing cars (a Dallara chassis and a McLaren engine; next year other constructors will be allowed), competing on street circuits in major cities around the world.

So how will that work out? Won't the lack of engine noise (you might remember the heated debate about the quieter engines in F1 earlier this year) hamper the "ambiance"? Aren't those cars way slower than F1, and won't racing be less spectacular that way?

Well, last weekend the first e-Prix took place in Beijing, China, and it was a great race with lots of close racing, exciting overtaking, and one very heavy crash between P1 Nicolas Prost and P2 Nick Heidfeld seconds before the finish, with Di Grassi taking a victory he no longer really expected from P3. You can watch the full race here or, if you have less time, enjoy this fan-made summary:

YouTube Video
Watch this video on YouTube or on Easy Youtube.

We’ll have to wait over a month for the next race, but I’ll be on the lookout for more electric racing excitement!

by frank at September 17, 2014 04:57 AM

September 16, 2014

Xavier Mertens

Security Appliances, Pandora’s Boxes?

No breaking news, nothing fancy in this quick blog post, but it is worth remembering that security appliances can be a potential threat when deployed on your network. For years, security appliances have been the "in" thing. On paper, they are sexy: you just plug in a power cable and a network cable, add 4 screws if you install them in a 19″ rack (under a table isn't the best place), and you are ready to play with them. At first boot, some wizards guide you through the basic configuration. They require zero administration tasks, vendors have juicy RMA contracts, … What else?

Of course, there are different levels of maturity on the security appliance market. Some appliances are very performant and hardened, but others are just a repository of old CVEs! Why? Back to the roots: when a vendor decides to develop a new appliance, what does it need? A set of basic off-the-shelf components.

On top of these, we find some kind of interconnection layer where all the components are connected and talk to each other. All the components are packed in a white-label box with enough CPU/memory/storage. A first problem arises: developers may use specific features of an application, which prevents upgrading to a newer release "because this feature has been deprecated and the code must be rewritten". As an example, the dl() function in PHP has been removed since version 5.3. Bad news: PHP releases before that version suffer from many vulnerabilities! Appliances have sexy web interfaces, but behind them we can find badly written shell/perl scripts which access core components with full privileges.

Engineers working for vendors usually have a virtualized version of their appliance that can be executed on their laptop for demos or labs, but those VMs remain private; it's very frustrating! Appliances are often delivered as black boxes. Being a hacker, I hate this, and I like to understand "what's under the hood"! A few days ago, I spent some time creating a virtualized version of an old appliance used for network management/monitoring, and the migration turned up some interesting findings.

Dear vendors, keep in mind that once people have physical access to your appliances, they must be considered compromised! System admins, take care when you deploy new appliances on your network.

My message is not to get rid of appliances (some of them are strongly hardened), and we need them for multiple tasks. Just keep in mind that they could introduce new threats on your network. Just be careful!

by Xavier at September 16, 2014 09:16 PM

September 15, 2014

Les Jeudis du Libre

Mons, October 16 – Learning to program at school: why and how?

This Thursday, October 16, 2014 at 7 p.m., as part of the Quinzaine Numérique 2014 in Mons, the Mundaneum and the Jeudis du Libre join forces to offer a general-public talk on the goals and means of teaching programming.

The title of the talk: Learning to program at school: why and how? ("Apprendre à programmer à l'école : pourquoi et comment ?")

Theme: Programming

Audience: general public

Speaker: Martin Quinson (Université de Lorraine, France)

Venue: Université de Mons, Campus Plaine de Nimy, avenue Maistriau, Grands Amphithéâtres, Auditoire Curie (see this map on the UMONS website, or the OSM map).

Participation is free and only requires registration by name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page. As usual, this 32nd Mons session will end with a friendly drink. The organization is supported by the Fédération Wallonie-Bruxelles as part of the Quinzaine Numérique.

The Jeudis du Libre in Mons also enjoy the support of our partners: Normation, OpenSides, MeaWeb, NextLab, Phonoid and Creative Monkeys.

If you are interested in this monthly series, don't hesitate to consult the agenda and to subscribe to the mailing list so you always receive the announcements.

As a reminder, the Jeudis du Libre aim to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized on the premises of, and in collaboration with, the universities and colleges of the Pôle Hainuyer involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software. The Mundaneum is an archive center of the Fédération Wallonie-Bruxelles and a space for temporary exhibitions. The Mundaneum regularly organizes conferences on the architecture of knowledge and the indexing of knowledge, from a historical perspective and in the light of modern technologies such as the Internet. This talk is the fourth collaboration with the Jeudis du Libre.

Description (provisional text): Teaching the basics of programming to everyone is a major challenge for our knowledge- and network-based society: on the one hand, to prevent citizens from feeling helpless or victimized in the face of the flood of information and communication technologies; on the other hand, to spark interest, and even vocations, for a sector with particularly promising job prospects.

A poster can be downloaded here!

by Didier Villers at September 15, 2014 05:03 PM

Dries Buytaert

Reflections on Drupal in China


I just spent the past week in China, and I thought I'd share a few reflections on the state of Drupal in China.

First, let me set the stage. There are 1.35 billion people living in China; that is almost 20 percent of the world's population. Based on current trends, China's economy will overtake the US within the next few years. At that point, the US economy will no longer be the largest economy in the world. China's rapid urbanization is what has led to the country's impressive economic growth over the past couple of decades and it doesn't look like it is going to stop anytime soon. To put that in perspective: China currently produces and uses 60 percent of the world's cement.

In terms of Drupal, the first thing I learned is that "Drupal" sounds like "the pig is running" ("Zhu Pao") in Chinese. Contrary to the pig's rather negative reputation in the West, many Chinese developers find that cute; the pig is a rather honorable sign in Chinese astrology and culture. Phew!

In terms of adoption, it feels like the Drupal community in China is about 8 to 10 years behind compared to North America or Europe. That isn't a surprise, as Open Source software is a more recent phenomenon in China than it is in North America or Europe.

Specifically, there are about 5 Drupal companies in Shanghai (population of 21 million people), 3 Drupal companies in Beijing (population of 23 million people) and 5 Drupal companies in Hong Kong (population of 7 million people). The largest Drupal companies in China have about 5 Drupal developers on staff. Four of the 5 Shanghai companies are subsidiaries of European Drupal companies. The exception is Ci&T, which has 40 Drupal developers in China. Ci&T is a global systems integrator with several thousand employees worldwide, so unlike the other companies I met, they are not a pure Drupal play. Another point of reference: the largest Drupal event in China attracted 200 to 300 attendees.

Drupal meetup tianjin

Given that China has 4 times the population of the US, or 2 times the population of Europe, what are we missing? In talking to different people, it appears the biggest barrier to adoption is language. Chinese Drupal documentation is limited; translation efforts exist but are slow. The little documentation that is translated is often outdated and spread out over different websites. Less than 20 percent of the Chinese Drupal developers have an account on drupal.org, simply because they are not fluent enough in English. Most Drupal developers hang out on QQ, an instant messaging tool comparable to Skype or IRC. I saw QQ channels dedicated to Drupal with a couple of thousand Drupal developers.

There is no prominent Chinese content management system; most people appear to be building their websites from scratch. This gap could provide a big opportunity for Drupal. China's urbanization equals growth -- and lots of it. Like the rest of the economy, Drupal and Open Source could be catching up fast, and it might not take long before some of the world's biggest Drupal projects are delivered from China.

Supporting Drupal's global growth is important, so I'd love to improve Drupal's translation efforts and make Drupal more inclusive and more diverse. Drupal 8's improved multilingual capabilities should help a lot, but we also have to improve the tools and processes on drupal.org to help the community maintain multilingual documentation. Discussing this with both the Drupal Association and different members of our community, it's clear that we have a lot of good ideas on what we could do, but lack both the funding and resources to make it happen faster.

Drupal meetup tianjin

Special thanks to Fan Liu (Delivery Manager @ Ci&T), Jingsheng Wang (CEO @ INsReady Inc.) and Keith Yau. All the Drupal people I met were welcoming, fun and are working hard.

by Dries at September 15, 2014 02:14 PM

September 12, 2014

Frank Goossens

Music from Our Tube; Flying Lotus + Kendrick Lamar

I love me some Flying Lotus and it so happens he just released the first track of his forthcoming album “You’re Dead”. The song is called “Never Catch me” and it features Kendrick Lamar in the first part and a friggin’ great bass-synth in the second part (around 2:30).

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at September 12, 2014 05:12 AM

September 11, 2014

Tom Baeyens

5 Types Of Cloud Workflow

Last Wednesday, Box Workflow was announced. It was an expected move for them to go higher up the stack, as the cost of storage "races very quickly toward zero". It made me realize there are actually 5 different types of workflow solutions available in the cloud.

Box, Salesforce, NetSuite and many others have bolted workflow on top of their products. In this case workflow is offered as a feature of a product with a different focus. The advantage is that it is well integrated with the product and available when you already have the product. The downside can be that its scope is mostly limited to the product.

Another type is BPM as a service (aka cloud-enabled BPM). BPM as a service is an online service for which you can register an account and use the product online without setting up or maintaining any IT infrastructure for it. The cloud poses a different set of challenges and opportunities for BPM. We at Effektif provide a product that is independent, focused on BPM, and born and raised in the cloud. In our case, we could say that our on-premise version is actually the afterthought; usually it's the other way around. Most cloud-enabled BPM products were created for on-premise use first and have since been tweaked to run on the cloud. My opinion 'might' be a bit biased, but I believe that today's hybrid enterprise environments are very different from the on-premise-only days. Ensuring that a BPM solution integrates seamlessly with other cloud services is non-trivial, especially when it needs to integrate just as well with existing on-premise products.

BPM platform as a service (bpmPaaS) is an extension of virtualization.  These are prepackaged images of BPM solutions that can be deployed on a hosting provider.  So you rent a virtual machine with a hosting provider and you then have a ready-to-go image that you can deploy on that machine to run your BPM engine.  As an example, you can have a look at Red Hat’s bpmPaaS cartridge.

Amazon Simple Workflow Service is in many ways unique, and a category of its own in my opinion. It is a developer service that in essence stores the process instance data and takes care of the distributed locking of activity instances. All the rest is up to the user to code. The business logic in the activities has to be coded, but what makes Amazon's workflow really unique is that you can (well… have to) code the logic between the activities yourself as well. There's no diagram involved. So when an activity is completed, your code has to calculate which activities have to be done next. I think it provides a lot of freedom, but it's also courageous of them to fight the uphill battle against users' expectations of a visual workflow diagram builder.

Then there are IFTTT and Zapier. These are in my opinion iconic online services, because they define a new product category. At the core, they provide an integration service. Integration has traditionally been one of the most low-level technical aspects of software automation, yet they managed to provide it as an online service, enabling everyone to accomplish significant integrations without IT or developer involvement. I refer to those services a lot because they have transformed something that was complex into something simple. That, I believe, is a significant accomplishment. We at Effektif are on a similar mission: BPM has been quite technical and complex, and our mission is also to remove the need for technical expertise so that you can build your own processes.

by Tom Baeyens at September 11, 2014 09:57 AM

September 10, 2014

Philip Van Hoof

Please remove me from planet-gnome

Please remove me from planet-gnome: I’m nothing but trouble.

I want to create a new era for GNOME. The last few years at GNOME have created only desperation.

You guys invested heavily in women outreach programs and they didn't create any meaningful innovation.

You invested in …, oh right, you only invested in women outreach programs.

I’m sorry, I wanted to make a list. But there is none. You could have invested in gnome-shell?

But also gnome-shell is by now being rightfully replaced by MATE. After several years, gnome-shell just doesn’t work.

I always understood that I’m the underdog of GNOME: the guy who the group wants to hate. That’s ok for me. Having worked on the core component of Tracker, tracker-store, you all rely on my code anyway. Please do me a great favor and replace all of my code.

I need to speak up. When I tried using GEdit I had to use Pluma Text Editor to get any stuff done.

The so-called GNOME desktop is in an unusable state.

You guys suck.

by admin at September 10, 2014 10:52 PM

FOSDEM organizers

Hotel offer: FunKey

Like last year, FunKey Hotel is offering special rates for FOSDEM 2015 attendees: a bed for 28 EUR per night. The offer is valid for Friday, Saturday and Sunday night. Please check our accommodation page for more information.

September 10, 2014 03:00 PM

Frank Goossens

WP SEO vs Autoptimize; who broke your WordPress?

Autoptimize 1.9 was released yesterday, but unfortunately some reports came in about JS optimization being broken.  At first I suspected the problem was related to small changes that added semicolons to individual blocks of script (before aggregation), but tests with some impacted users showed this was not the case.

The breakthrough came in this thread in Autoptimize's support forum, where user "grief-of-these-days" confirmed the problem started with the update of WP SEO, and specifically with the "sitelinks search box" functionality that was added in WP SEO 1.6. Sitelinks Search Box comes as an inline script of type "application/ld+json" that contains a nameless JSON object with "linked data". Autoptimize detected, aggregated and minimized this nameless object, which not only defeats the Sitelinks Search Box mechanism, but potentially also breaks the optimized JS itself. So I updated & enabled WP SEO, confirmed the problem, identified "potentialAction" as a unique string to base the exclusion on, and pushed out 1.9.1, which will no longer Autoptimize the Sitelinks Search Box code.

So who broke your WordPress today, WP SEO or Autoptimize? Well, WP SEO's update may have made the bug appear, but given that JSON-LD is standardized and will probably appear in other guises as well, Autoptimize should really exclude any script of the "application/ld+json" type from being aggregated & minimized (and not just that of the Sitelinks Search Box). Adding it to the to-do list now!
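The exclusion described above could look something like this simplified sketch (in Python rather than the plugin's actual PHP; the function name, regex and detection logic are invented for illustration, not Autoptimize's real code):

```python
import re

# Match inline <script> blocks so we can decide which ones to aggregate.
SCRIPT_RE = re.compile(
    r'<script(?P<attrs>[^>]*)>(?P<body>.*?)</script>',
    re.IGNORECASE | re.DOTALL,
)

def scripts_to_aggregate(html):
    """Return bodies of inline scripts that are safe to aggregate,
    skipping any whose type attribute is application/ld+json so that
    JSON-LD linked data (e.g. a Sitelinks Search Box) survives untouched."""
    keep = []
    for m in SCRIPT_RE.finditer(html):
        if 'application/ld+json' in m.group('attrs').lower():
            continue  # leave JSON-LD alone
        keep.append(m.group('body').strip())
    return keep

html = (
    '<script type="application/ld+json">{"potentialAction":{}}</script>'
    '<script>console.log("app");</script>'
)
print(scripts_to_aggregate(html))
```

Keying the exclusion on the script type rather than on a string like "potentialAction" is exactly the generalization suggested above.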

by frank at September 10, 2014 12:48 PM

September 08, 2014

Xavier Mertens

ownCloud & Elasticsearch Integration

A while ago I left Dropbox and other cloud storage solutions and decided to host my own file exchange service based on ownCloud. I'm using it to exchange files with my partners and customers and keep full control of the service from A to Z. A major advantage of ownCloud is its modular architecture, which allows third-party applications to be installed to extend its features. When I started to work with ownCloud, I wrote a first small application which adds a way to check uploaded files against VirusTotal.

In my humble opinion, there is one point where ownCloud is lacking good features: the way it manages events. By default, it is possible to send events to a remote Syslog server or to a flat file, but the format of the generated events is really ugly. External applications were developed to log events into a MySQL database, but here again it was not convenient enough for me. Next to ownCloud, I'm also using ELK to manage my log files. It was clear that both solutions had to be integrated, so I wrote a small application which writes events directly into Elasticsearch. The idea and framework are based on SuperLog, written by Bastien Ho.

ownCloud implements “hooks” that can be defined as:

A function whose name can be used by developers of plug-ins to ensure that additional code is executed at a precise place during the execution of other parts of ownCloud code. For example, when an ownCloud user is deleted, the ownCloud core hook post_deleteUser is executed.

An application can place a hook on post_deleteUser and automatically perform actions when a user is deleted. esLog supports the following hooks. For each of them, an event is sent to Elasticsearch with relevant information (source IP address, login, file, folder, etc.) every time the action is performed by a user or a desktop client.
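The hook mechanism described above can be illustrated with a minimal sketch (in Python rather than ownCloud's PHP; the registry and function names are invented for the example): plug-in code registers callbacks on a named event, and the core fires them at a precise point in its own execution.

```python
from collections import defaultdict

# Hypothetical hook registry: signal name -> list of callbacks.
hooks = defaultdict(list)

def connect(signal, callback):
    """A plug-in registers interest in a named event."""
    hooks[signal].append(callback)

def emit(signal, **params):
    """The core fires all callbacks registered for the event."""
    for callback in hooks[signal]:
        callback(**params)

log = []
# A logging app hooks the user-deletion event, as esLog does in ownCloud.
connect("post_deleteUser", lambda uid: log.append(f"cleanup for {uid}"))

# Somewhere in the core, right after a user is actually deleted:
emit("post_deleteUser", uid="alice")
print(log)
```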

Before installing esLog, the Elasticsearch PHP API must be deployed. Once done, you can set up the application like any other one: extract the archive content into the /apps directory. To complete the installation, three manual steps must be performed:

1. Copy the “/vendor” directory created during the PHP API installation into a directory readable by Apache

2. Edit the file apps/eslog/lib/log.php and add the following line at the top:

  require "/var/www/vendor/autoload.php"; # Change to your own location

3. To be able to log webdav operations, you must edit the remote.php file (in the root of ownCloud) and add the following line at the top:

  require_once 'apps/eslog/spy.php';

That’s it! Now enable the application via the admin panel and configure it. The following parameters can be defined:

Here is a dashboard example with data received from ownCloud:

ownCloud Dashboard

(Click to zoom)

The esLog application is available on my Github account or on the official ownCloud apps repository. Comments, suggestions are welcome and happy logging!

by Xavier at September 08, 2014 09:23 PM

Dries Buytaert

Hiking the Great Wall


Last weekend I had the opportunity to visit the Great Wall of China, the world's longest wall at 8,850 kilometers (5,500 miles). The Great Wall runs mostly through the mountains to take advantage of natural obstacles. As a result, it was quite the hike to get there but once on the Great Wall, the scenery was beyond sensational. It runs like a relentless serpentine over the horizon, evoking the image of a giant dragon.

Great wall

I learned that the Great Wall is actually a discontinuous network of walls built by various dynasties. The Chinese started building the Great Wall as early as the 7th century BC and kept building for over 2,000 years!

Great wall

It is said that over one million people died building the Great Wall. Every step you take, you can't stop imagining how they built it and the many people who suffered. I was pretty much alone on the wall, so it was not hard to imagine the lonesome life of a Ming soldier up here, waiting for something to happen. The history, the scenery, the peacefulness -- it just made me speechless.

Great wall

by Dries at September 08, 2014 04:44 PM

Dieter Plaetinck

Influx-cli: a commandline interface to Influxdb.

Time for another side project: influx-cli, a commandline interface to influxdb.
Nothing groundbreaking, and it behaves pretty much as you would expect if you've ever used the mysql, pgsql, vsql, etc tools before.
But I did want to highlight a few interesting features.

read more

September 08, 2014 12:36 PM

September 07, 2014

Dries Buytaert

The end of ownership: the zero-marginal-cost economy

Society is undergoing tremendous change right now -- those of us who enjoy services like Uber and Kickstarter are experiencing it firsthand. The sharing and collaboration practices of the internet are extending to transportation (Uber), hotels (Airbnb), financing (Kickstarter, LendingClub), music services (Spotify) and even software development (Linux, Drupal). While the consumer "sharing economy" gives us a taste of what it's like to live in a world where we own less, perhaps there is an equally powerful message for the business community. Using collaboration, companies are dramatically reducing the production cost of their goods or services.

Welcome to the zero-marginal-cost economy, a way of doing business where ownership of a core process is surrendered to community collaboration. In economic terms, the cost of a product or a "good" can be divided into two parts. The first part is a "setup cost", which is the cost of assembling the team and tools needed to make the first unit. The second part is called the "marginal cost", or the cost of producing a single, additional unit.
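As a rough illustration of that split (all numbers here are made up): the total cost of producing n units is the setup cost plus n times the marginal cost, so the average cost per unit tends toward the marginal cost as volume grows -- and toward zero when the marginal cost is near zero.

```python
def average_cost(setup, marginal, units):
    """Average cost per unit: (setup + marginal * units) / units."""
    return (setup + marginal * units) / units

# Classic manufacturer: high setup cost plus a real per-unit cost.
print(average_cost(1_000_000, 50.0, 100_000))    # dominated by the 50/unit

# Near-zero-marginal-cost producer (e.g. distributing software):
print(average_cost(1_000_000, 0.0, 100_000))
print(average_cost(1_000_000, 0.0, 100_000_000)) # setup cost amortized away
```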

For decades, competitive markets have focused on driving productivity up and marginal costs down, enabling businesses to reduce the price of their goods and services to compete against each other and win customers. A good example of this approach is Toyota, which completely reinvented how cars were made through lean manufacturing, changing the entire automotive industry. Japanese cars were produced much more quickly than their American counterparts, created via traditional assembly lines in Detroit, ultimately driving down the final cost for consumers and shrinking margins for companies like Ford. Software development methodologies like the lean startup methodology and Kanban are modeled after the Toyota production line and have made software development more efficient.

Today, the focus is changing. Within service industries like hospitality and transportation, new entrants are succeeding not by optimizing production, but by eliminating production cost altogether. Consider Uber versus traditional taxi companies. For a traditional taxi company to add another taxi to its fleet, a car and license need to be acquired at significant cost. Instead of shouldering that setup cost, Uber can add another taxi to its inventory at almost no cost by enabling people to share their existing cars, all coordinated via the internet. Airbnb does the same for renting properties vs. acquiring more physical space. The fact that both these companies have near zero-marginal-cost production is threatening longstanding business and regulatory models alike.

In the software industry, the low marginal cost of producing Open Source Software threatens our own longstanding business model: proprietary software companies. Free Open Source Software essentially undermines the way proprietary software companies make money -- by selling software licenses. By sharing the cost of developing software, organizations can increase their productivity, accelerate innovation and bring down their setup costs.

The open source ideology extends even further beyond software. Last month, Elon Musk open sourced the patents for Tesla. His main reason? Pushing the automotive industry to create more electric cars. If Elon Musk is an indicator for industries across the board, it's further proof that capitalism is starting to become more collaborative rather than centered around individual ownership.

Great businesses can be built by adding value on top of a low-marginal-cost community that is owned by many. For example, my company, Acquia, creates value on top of the open-source Drupal software by providing support and software-as-a-service tools. Similarly, Uber adds value by providing consumer-friendly, on-demand services beyond just increasing the supply of available cars on the road. In both cases, the companies' products grow stronger as their communities grow, even as the acceleration of those same communities brings down marginal costs. The power of the community vastly improves previously inefficient base processes (such as waterfall software development or taxi regulations) and creates a forcing function for businesses to generate profit based on products and services that appeal directly to users.

Within the next decade, businesses will need to become much more open and collaborative to survive in an increasingly zero-marginal-cost economy. Those who develop proprietary software are finding it harder and harder to sustain "business as usual". The sharing economy and collaborative development will further streamline capitalism, and organizations that figure out how to master this dynamic will succeed. A community model can work in any number of industries -- we just have to challenge ourselves as entrepreneurs to discover how.

(I originally wrote this blog post as a guest article for The Next Web. I'm cross-posting it to my blog.)

by Dries at September 07, 2014 11:48 PM

Claudio Ramirez

Review: Java Cookbook, 3rd ed., by Ian F. Darwin (O’Reilly)

Java Cookbook

I am not really a fan of cookbook-style books. However, looking at the table of contents of the Java Cookbook, it's clear that its chapters run parallel to chapters in more classical technical books, e.g. the excellent Core Java books by Horstmann and Cornell. While the more classical books try hard to provide a logical structure within each chapter, with the needed context (a story) to master the content, cookbooks give you independent, ready-to-use recipes (yeah for metaphors!). So it's really about getting a proven solution fast rather than (deep) understanding. But you should gather that from the title, so no surprises there.

My specific use case is simple: I want to do some Java coding soon, and needed a quick refresher on the language. While not so long ago I used Java 7 features for a private project, most of my Java coding in the past was based on Java 6. Java 8 seems to have nice improvements, so a fresh-from-the-press Java Cookbook (3rd edition, July 2014) seemed a fast way to get up to date. With 900+ pages, "fast" is of course relative. By this standard, the book delivered.

The content is *very* varied. Some chapters needed careful reading (or even re-reading), while others could be skimmed or even skipped. Recipes include (citing the product page):

As you can see, it's a mixture of very basic and advanced stuff, and that kind of technical breadth has its risks. Some recipes are too obvious and no more useful than the javadoc showing the classic usage of a standard method of a standard class (e.g. substring from String). As such, I don't think it can replace a good book from a "Learning/Starting" series if you're new to Java. A similar phenomenon happens on the other side of the spectrum: for more complicated subjects (like the new functional aspects in Java (lambdas), GUI development or threading), a cooking recipe just doesn't provide enough context to really grasp the concepts. I hope the following screenshot from the book illustrates my point (and also shows what I don't like about Java):


Try to put this in a recipe!


Despite this warning, I can say that most of the recipes are useful. For example, even the first chapter ("Getting Started: Compiling, Running, and Debugging") was surprisingly useful because it included things like ant, maven and gradle. There is lots of content to skim (that's OK, it's a cookbook!), lots of interesting stuff to read, lots to use with a mental note "need to dig deeper here", and even some annoyances from time to time (I wish the author would stop pushing his own classes for trivial stuff, like removing tabs (!)).

In conclusion: if you like cookbooks, this could be a good one. If you don't, it depends on your expectations. Mine were met, no more, no less. 3 out of 5.


Title: Java Cookbook, 3rd Edition
By: Ian F. Darwin
Publisher: O'Reilly Media
Print: July 2014
Ebook: June 2014
Print ISBN: 978-1-4493-3704-9 | ISBN 10: 1-4493-3704-X
Ebook ISBN: 978-1-4493-3703-2 | ISBN 10: 1-4493-3703-1

Filed under: Uncategorized Tagged: bookreview, books, goodread, Java, o'reilly, Programming

by claudio at September 07, 2014 09:27 PM

September 06, 2014

Paul Cobbaut

Vagrantfile (this is just a bookmark)

This is my (thank you Abel) current Vagrantfile to quickly create a number of servers with two extra disks and three extra network cards:


VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.provider :virtualbox do |vb|
    vb.customize ["storagectl", :id, "--add", "sata", "--name", "SATA", "--portcount", 2, "--hostiocache", "on"]
  end
  (1..3).each do |i|
    config.vm.define "server#{i}" do |node|
      node.vm.hostname = "server#{i}" = "hfm4/centos7"
      node.vm.box_check_update = true :public_network, ip: "10.1.1.#{i}", netmask: '' :public_network, ip: "10.1.2.#{i}", netmask: '' :public_network, ip: "10.1.3.#{i}", netmask: ''
      node.vm.provider "virtualbox" do |v| = "server#{i}"
        v.memory = 512
        v.cpus = 1
        v.customize ['createhd', '--filename', "server_#{i}a.vdi", '--size', 8192]
        v.customize ['createhd', '--filename', "server_#{i}b.vdi", '--size', 8192]
        v.customize ['storageattach', :id, '--storagectl', 'SATA', '--port', 1, '--device', 0,
                     '--type', 'hdd', '--medium', "./server_#{i}a.vdi"]
        v.customize ['storageattach', :id, '--storagectl', 'SATA', '--port', 2, '--device', 0,
                     '--type', 'hdd', '--medium', "./server_#{i}b.vdi"]
      end
    end
  end
end

by Paul Cobbaut ( at September 06, 2014 12:29 PM

September 05, 2014

Wouter Verhelst

ASCII art Wouter

          <*@@@*`  ,--------------------------------"@@@@&*,          
          d#@@V>   '--------------------------------'"@@@?"*,         
         <d@@$"`   .--------'------------------------<$@@&b*b,        
         <@@@"`     ---------  -----------------------"@@@[**>        
         d@@@"      '-----`     -      '--------------'#@@@***        
        <@@@"`       '-`-               --- -----------*@@@***>       
        <]@""                             -,-----------<$@@[**>       
         ][F[        ` '`                 '-------------]@@[**>       
        <Q@[`                               -`----------*@@&**>       
        <#"">                                   --------*@@@**>       
        <@"*                                    ----`---*@@@&*>       
        ]Q#[                                      ------]#@@&*        
        ]@#[                                          <-<#@@@*>       
        ]@#b                                          .-<]@@@*        
        "@@[                                         ---<#@@@*        
        <@@"                                         ----]@@@*        
        ]#@[                                           -.]@@&"        
        ]#@b                                           -<]@@@[        
        <@@[                                       <   -<]@@"[        
        <@@[                                           -<]@@V>        
        <@@>     ,                                 ..  -<]@@b,        
        d@#>   -._----___ _                 _ _-_   -- -<#@@#-        
        ]@*> _-d*****obo_----       __--------___-, ----]@@@*,        
        d@[> -**?"**@@@@@o*>--,  -,---.o*ooW******_,,.--]@@#"`        
        #@[> -???'''--''?"**----<----<*@@@@@V******[----]@@@b>        
        *@[  -------------------<----*"'----''----'--->-<@@@*>        
        *@[  ------ooWWo,------- ------------_ '--------<@@@*-        
        "]>  <---bd?"@@@"`.----,  -------dQ@@&b-.-----'''$@[*`        
        *">   '--*",'$@#> -,'---  ---`--''$@@@?&_---->  -]@[*-        
        "*`    --'--.'"` `--.---  --- --, ]@@"-?*----   -"@*`>        
        <*>      <------`   ----  ---  ''--'",------    -]"*->        
        <*,       -----,   -----'----,  ---------`      -]#*-         
        '">        ---`    -----  <---                  '][[          
         <`                -----------,                 <d"[          
         <>               .------------                 -**-          
          >               <-----  -----                <-`--          
          -               .-----  -----                .--'           
          -               ------,.-----                -'-            
          ->              -------------                -.-            
          --              ---  -  -----                --`            
          --              ---      -----              --->            
          --             .->       -----              ---             
          -->            -----    .-----              ---             
          <-> '         ----`_-----__---            ' ---             
           --_.        ------'-----'----           ...---             
           ---`       ---------''------.           <----`             
           ---,     --------`-------`--,-          -----              
           <-----  --------_--------,-----        .-----              
            --------------*?--------`--------,    -----`              
            -----'------_?`--`'------'b--------- .-----               
            ----- ---"""`---.   '  ---'bb-------.------               
            ----------d,-------___-----'*o"----`` -----               
             -----------'------'--------'"*----> -----                
             -----------, '---   ---------------------                
               '*,-------------- --------------.--`                   
           .o, --**>.o----------------------,-.**`-                   
         .@@@[ .-"*****_-_----------------o******--                   
         Q@@F`  -'******d*--------------,-******`--                   
        .@@F`   '-<*******-.-.,------.o*********---,_o@W,             
        ]@[--    --"***d***od*--****dbd********"----@@@@[             
      .d@@#,-    --'***"@o*************oWb****"-----'$@@&,            
    _oQ&@@@>    ----'"**]@@o********doQ@?****"--------"@@b            
 dQ@@@@@@@@b      '---**"$@@@@@@@@@@@@F"*****---------<]@@d,,         
@@@@@@@@@@@[,      ---'***V?""??VVV"********>---------<Q@@@@@&b,      
@@@@@@@@@@@@><      ---'*******************?----------d@@@@@@@@@F_    
@@@@@@@@@@@@[.      <----"****************"----------]Q@@@@@@@@@@@@bo_
@@@@@@@@@@@@@>-      -----'**************'----------.#@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@[>        ------**********`------------d@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@[        '-------"****?`-------------.Q@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@&,         ------'''''---------------d@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@o,         ------------------------.Q@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@[_         '----------------------d@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@[,          --------------------]@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@b-_          '----------------.Q@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@b-            '--------------Q@@@@@@@@@@@@@@@@@@@@@

You know you're doing a fun gig when you get to do things like the above on billable hours.

Full story: writing a test suite for reading data from eID cards. It makes sense to decode the JPEG data which you read from the card, so that you know there's no error in the lower-layer subroutines (which would result in corruption). And since we've decoded it anyway, why not show it in the test suite log? Right.

September 05, 2014 10:20 AM

Les Jeudis du Libre

Mons, September 18: MOOC – An OPEN way to learn FREELY?

This Thursday, September 18, 2014 at 7pm, the 31st Mons session of the Belgian Jeudis du Libre will take place.

The topic of this session: MOOC – An OPEN way to learn FREELY? With examples from ITyPa (content generated by the participants) and the "Khan Academy" (classic material presented by a teacher).

Theme: e-Learning

Audience: everyone

The speakers: Philippe Verstichel (Académie du Management) and Bruno De Lièvre (UMONS, Faculty of Psychology and Educational Sciences)

Venue for this session: the technical campus (ISIMs) of the Haute Ecole en Hainaut, Avenue V. Maistriau 8a, Salle Académique, 2nd building (see this map on the ISIMs website, and here on the OpenStreetMap map).

Attendance is free and only requires registering by name, preferably in advance, or at the door. Please indicate your intention by registering via the page. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also enjoy the support of our partners: Normation, OpenSides, MeaWeb, NextLab, Phonoid and Creative Monkeys.

If you are interested in this monthly series, feel free to check the agenda and subscribe to the mailing list in order to receive all the announcements.

As a reminder, the Jeudis du Libre are meant to be meetings around Free Software topics. The Mons sessions take place every third Thursday of the month and are organized on the premises of, and in collaboration with, the universities and colleges of the Pôle Hainuyer higher-education network involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the LoLiGrUB non-profit, which is active in promoting free software.

Description: a journey through the land of MOOCs (Massive Open Online Courses), online courses that enable new ways of learning but also, in the spirit of "Free", new ways of developing not computer programs but shared educational content.

In the first part, the concept is explained in practical terms through well-chosen examples that highlight the differences between the platforms: from the traditional Coursera to the connectivist ITyPa, via the concept of the "Khan Academy".

In the second part, beyond a new approach to distance learning, the talk will reveal the extraordinary potential of MOOCs as a tool for co-creating knowledge and educational material. Beyond the technological challenges involved, the collaborative/participatory nature of MOOCs indeed calls for new approaches to both pedagogy and content production (the latter being comparable to free software development).

In the last part, we will look at the practical side through two tools: Moodle (used by the University of Mons and the Haute École Condorcet) and OpenMOOC.

The presentation will end with an interactive debate on the potential of MOOCs in Wallonia and their possible contributions in universities, businesses and continuing education.

by Didier Villers at September 05, 2014 06:06 AM

September 03, 2014

Dries Buytaert

Visiting China and Japan


After spending the summer in Boston, I'm ready to fly across the world ... literally, as I'm leaving on a two week trip to China and Japan later this week. I'm very excited about it as I've never had the opportunity to see either of these countries.

I will arrive in Beijing on Saturday, September 6th, for the Young Global Leaders Annual Summit. A private path from where I'll be staying leads to a non-restored section of the epic Great Wall of China. Exploring this truly untouched piece of Chinese history still in its original landscape should be a special experience! Stay tuned for photos.

Following my time in Beijing, I'll transfer to Tianjin to attend Summer Davos. In addition to that, we're organizing a meetup with the local Drupal community - If you are in the area on September 10th, please stop by for a drink. I'd love to meet you and learn about the state of Drupal in China.

I'll end my two-week trip in Tokyo, Japan. My time will be split between meeting the local Drupal community, understanding the adoption rate of Drupal in the Japanese market, and attending private meetings with digital agencies, Acquia partners and others to learn about the state of the web and digital in Japan.

Xièxiè and dōmo arigatō to those that have helped plan these events and gather the Drupal community for some fun evenings!

If you aren't able to make either Drupal meetup, feel free to leave your thoughts in the comments.

by Dries at September 03, 2014 05:55 PM

Xavier Mertens

Book Review: Penetration Testing with the Bash Shell

A few weeks ago, I reviewed Georgia's book about penetration testing. On the same topic (pentesting), I was asked to review another one, which focuses on shell scripting using the Bash shell: Keith Makan is the author of "Penetration Testing with the Bash Shell". Bash is the default shell on many UNIX distributions and is also the primary interface between the operating system and the user when no graphical interface is available. Why talk about a shell in the scope of a penetration test? Simply because good pentesters write code! It's almost impossible to complete a penetration test without writing some lines of code: we need to save time, we need more visibility, and we need to parse thousands of lines or files. Usually, the UNIX shell is the first tool we have at hand to achieve such tasks. That's the goal of the book: throughout the chapters, Keith demonstrates how to take advantage of the many Bash features to make your life easier.

In my opinion, the first chapter is completely optional: as a pentester, you already know Bash and how to find your way around the UNIX command line. Except for the section about using grep and regular expressions, which is always a good reminder, you can jump immediately to the next chapter, which will help you customize your environment. Like a desktop, a shell can be customized to meet your requirements in terms of look and feel, but also in terms of information displayed (for example, in the prompt). Formatting the terminal output can also be useful to highlight important information in your scripts. Most of the escape sequences are reviewed (like printing in red with '\033[31m'). A special mention for the section covering the risk of information leaks via .bash_history and how to prevent them.
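As a quick, runnable sketch of those escape sequences (a generic Python example, not taken from the book): \033[31m switches the terminal foreground to red and \033[0m resets all attributes.

```python
# ANSI SGR escape sequences: 31 = red foreground, 0 = reset.
RED, RESET = "\033[31m", "\033[0m"

def highlight(text):
    """Wrap important output in red so it stands out in a script's log."""
    return f"{RED}{text}{RESET}"

print(highlight("open port found: 443"))
```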

The remaining chapters cover the different aspects of a penetration test project, always performed with the help of our Bash shell. Chapter three focuses on network reconnaissance with tools like whois, dig and dnsmap, and on target enumeration with arping and nmap. Each tool is briefly described with only the basic options (nmap has a whole book dedicated to it -- a must-have!). The next chapter covers exploitation and reverse engineering. The tool used is Metasploit, and more precisely msfcli. Nice examples of Bash/msfcli integration are provided, like this one:

for range in `whois -i mnt-by [maintainer] | awk '/inetnum/ { print $2"-"$4 }'`; do
  msfcli auxiliary/scanner/portscan/syn RHOSTS=$range E
done

The other tools used are msfpayload, objdump and gdb (the GNU debugger). Exploitation and network monitoring are then covered in the fifth chapter: how to abuse MAC addresses and the ARP protocol, DNS spoofing, SNMP probing, SMTP, password brute-forcing (using Medusa) and tcpdump. SSL auditing using SSLyze and another interesting Bash script are also part of this chapter. And finally, some tools for website assessment are reviewed.

Across the 130 pages of the book, many (too many?) tools are presented, and all the exercises remain in line with the title: everything is performed from a Bash shell. But major programs like nmap or Metasploit cannot be covered in a few pages. I think the book is missing more Bash script examples, like writing a port scanner or extracting juicy information from downloaded HTML pages (those are just examples); such scripts can be useful when you don't have your toolbox with you. The book is interesting if you're a newcomer to the information security (not only pentesting) field and you're looking for nice command line exercises. If you want to increase your "script-fu", have a look at the blog Command Line Kung Fu. My conclusion is that this book is entry-level and definitely not for experienced pentesters.

The book is published by PacktPub and is available here.

by Xavier at September 03, 2014 02:15 AM

September 02, 2014

Frank Goossens

Music from India, Canada and Our Tube; Kiran Ahluwalia

Kiran Ahluwalia is an Indo-Canadian singer whom I have already heard a couple of times on KCRW. Given my new professional context, "Saffar (Journey)" does seem somewhat appropriate, no?

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at September 02, 2014 04:29 PM

September 01, 2014

Philip Van Hoof

PADI Rescue diver

For this one I worked really hard. Buddy breathing, calming panicked people at 20 meters deep, keeping your cool. And all that in Belgian waters (no visibility and freezing cold). We simulated it all. It was harder than most other things I have done in my life.

by admin at September 01, 2014 04:25 PM

August 29, 2014

Lionel Dricot

Printeurs 25


While exploring the police station in the company of Junior Freeman, the officer who saved his life during the kamikaze drone attack, Nellio insists on meeting the mysterious John, a man whose testimony is so dangerous to the industrial world that Georges Farreck has placed him in hiding under close protection.

I look up. The man is a complete stranger to me. Of average height, thin, with sparse hair, Mr. John seems to have endured privation and suffering. His face is covered in red blotches; his skull reveals patches of chaotic, random baldness. I am struck by his dark, penetrating gaze. Naively, I had hoped that the sight of this famous John would tear through the veil of my amnesia. But the man is a complete stranger to me. The feeling does not seem to be mutual, for John has frozen in a grimace of pure dread. Terror can be read on his face, his mouth left hanging open, his last sentence suspended in mid-air. I decide to break the icy silence that has settled between us.
— Hello John, I am Nellio, a friend of Georges. I am suffering from a bout of amnesia and I was hoping you could help me. Do we know each other?
His face seems to relax gradually, forming a honeyed, falsely obsequious smile.
— Delighted, Nellio. No, unfortunately I do not know you. I do not think I can help you.
His voice is hoarse and gravelly. He speaks our language with difficulty, tinging his speech with a degenerate accent. For a fleeting instant, I am convinced that he is lying. Without being able to explain it, I feel it, I see it through every pore of his skin.
— Who are you? How do you know Georges Farreck?
He looks at me, astonished.
— But you should know. I assume that if you have access to me, you are in the know. Ask Georges Farreck to tell you. I am only a modest worker seeking the good of humanity.
— Why are you under protection?
— Mr. Farreck claims that my work is likely to bring me trouble. That some would like to silence the revelations I am in a position to make.
— What revelations?
— Well, those concerning my work. But I think it would be best to discuss that with Mr. Farreck.
A hand lands on my shoulder.
— Say, honestly, I think the avatars are a lot more fun than this guy. And since the rules forbid outsiders from entering the control center, I would like to show you around before the others come back.
I look Mr. John in the eyes. I do not think I will get anything out of him. He dodges, sidesteps and slips away like an eel. He now wears such an innocent air that my initial impression of craftiness has completely dissipated. I let out a sigh.
— All right, old chap, I will follow you. Let us go see these famous avatars!
Turning on my heels, I cast one last glance at this famous John whose revelations seem so earth-shattering. He stands modestly in the middle of the room and gives me an embarrassed smile.
— Sorry I could not be of much help. Come back and see me whenever you like!


— Wait, I will turn on the light!
Junior fumbles for a moment before flipping an antique wall switch. The flicker of neon tubes echoes through the hangar.
— And here are the avatars! Junior tells me with pride.
Like a child at a school fair, he enthusiastically grabs my hand and leads me in front of a row of policemen frozen in silent attention.
— And that one is me!
I do recognize the policeman who pulled me out of the car. The badge “J. Freeman” stands out against the artificial chitin shell. Incredulous, I reach out my hand and feel the shrapnel and bullet marks.
— Yes, true, it still has to go in for maintenance.
I turn back to Junior:
— These policemen… So they are robots?
— Avatars! Not robots, avatars!
— What is the difference?
I tap the policeman, which gives back a metallic sound.
— The difference is fundamental! Let me show you.
We leave the hangar at once and Junior leads me into a room lined with screens and computer equipment. The room is dotted with circular zones, each surrounded by a rail from which cables and accessories hang.
— The control center! The holy of holies!
— So this is where you pilot the robots from?
He shoots me a dark look over his glasses.
— Where we embody the avatars!
Without a word of explanation, he steps into one of the circles and invites me to take my place in it.
— Put these on! he says, handing me a pair of thin gloves attached to the rail.
While I comply, he picks up a full-face helmet.
— Lower your head! Close your eyes and wait for my signal!
Dark! I am in the dark. A total, suffocating darkness. The silence grips my throat. Plunged into an abyss of blackness, I hear Junior’s voice reaching me from a hypothetical surface, inhumanly far away, forever out of reach.
— Are you ready? Go!
Light. The pale neon of the hangar dazzles me for a fraction of a second. The hangar! I turn around to ask Junior for an explanation. A metallic creak runs through my body. I look down. My body. I almost scream. I pat and touch a body of metal and armored composite materials before realizing that my gloved hand is entirely robotic. I raise it to my face. No doubt about it, I have become a robot! I…
— So, do you understand the difference now?
I am in the control center facing Junior, my hand still at eye level. He is holding the helmet he has just removed from my head.
— Wow! I breathe.
His grin stretches from ear to ear, opening a gaping slit in a face ravaged by acne and lack of sunlight.
— Awesome, right? Avatars cannot be explained, they have to be lived. That is why I wanted you to try.
While taking off my gloves, I blink to gather my wits.
— The realism is mind-blowing, I say. I felt like I was really there.
— But you were there! Think about it for a moment: your brain constantly receives information from your body in the form of electrical impulses. It responds through the same channel. The helmet sends the brain the impulses perceived by the avatar and lets the brain control it. It is as if your brain had been placed inside the avatar. In short, you really were the avatar.
— Not really, I quibble. I myself was still here!
— And yet, when you videoconference with people on the other side of the world, you say “I attended the meeting”. Your self is not defined by your physical body but by where your attention is directed.
I try to steer the conversation elsewhere.
— So it was that drone on legs that saved my life?
Under a surge of anger, a pimple bursts on Junior’s forehead.
— No! I am the one who saved your skin! Me, Junior Freeman, ultra-trained elite commando. Your skin, you owe it to me!
As he says these words, I realize that my hand is still raised at eye level.
— My skin? My skin!
A memory has just flashed through my mind.
— My skin! Quick, a knife! Give me a knife! Or any sharp object!
Junior looks at me dubiously.
— I am not sure I want to hand you a sharp object.
From his look, I can tell he considers me stark raving mad. But on his scale of values, mad is distinctly more interesting than normal or ordinary. I hurriedly pull off the glove and lay my hand on the rail with a determined gesture.
— Then I am trusting you. Cut it open! I say in a firm voice, pointing at the back of my hand.


Photo by NASA.


Thank you for taking the time to read this pay-what-you-want post. In order to write, I need your support. Also follow me on Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

flattr this!

by Lionel Dricot at August 29, 2014 06:21 PM

August 28, 2014

Lionel Dricot

Not today!


Today we live in a world of lists, lists we have lost our grip on and that now control us. But I propose that you take back control with a simple principle: “Not today!”

In order to organize themselves, humans have developed a very particular propensity for creating and filling lists. Today we are at the peak of the civilization of the list: emails, todos, reading lists, lists of movies to watch, lists of good resolutions, shopping lists.

When we create a list, it is with the secret ideal of emptying it. One day, my todo list, my mail list and my shopping list will be empty. My list of lists will be empty too. On that day, I will finally be able to breathe! Or tackle the list of things to do on the day my lists are empty…

Yet it must be said that this situation is entirely illusory. Lists have the annoying tendency to fill up faster than they empty. The catastrophic result is that we no longer have any incentive to empty them. What is the point of working a whole day to bring a todo list from 232 items down to 208? Or up to 231! None. So we might as well do nothing.

How have we worked around this problem? By creating lists within lists, without making them look like lists: we assign priorities, we use the read/unread status to create two lists, we create a hierarchy of folders to sort and classify our lists. In short, we add lists to lists and put it all into lists of lists. Not only is this inefficient, it is also absurd. What is the point of setting a priority within a list? Either the item must be done, or it must be removed from the list. Low-priority items will always be overtaken by new high-priority items and thus serve only to fill the list.

Basically, we struggle to consciously delete items from our lists, and we prefer to give up control of our lists rather than have to make a firm decision. Our whole structure of lists, folders and priorities ultimately serves only one purpose: letting ourselves off the hook.

If I am laying out the problem, you can guess it is because I have a solution to propose. And, conceptually, this solution is simple. You “just” have to empty your lists. Inbox 0!

I have already explained in detail how to reach Inbox 0 and why you are not at Inbox 0. Well, you can apply the same method to all your lists.

First of all, you must avoid at all costs adding lists to lists. Ban priorities, folders and miscellaneous classifications. For mail, a message is either in the inbox or archived. Any other classification is an obstacle to Inbox 0. Todos are either done or to be done. The work of classifying is a dangerous form of procrastination that gives the illusion of productivity.

But this does not solve the fundamental problem, which is that your list is a mountain and you have no motivation to attack it. This is where a fundamental concept of Inbox 0 comes in: not today!

For every item on your lists, you must be able to consciously decide: “No, I will not do this today.” As the day goes on and the unexpected intervenes, you will refine: “Actually, that one, not today either.”

This approach has even been pushed to its extreme with the todo manager Do It Tomorrow.

But the feature can be refined by postponing a list item to a given date: in a week, in a month, on November 3rd. That is why, right from its initial design, GTG included the principle of a “start date”. Reviewing your task list every morning and deciding which items are for today and which are for another day is also suggested. For mail, Mailbox has just implemented exactly this principle, with some success. By contrast, I cannot stress enough that any manager of projects, tasks, todos or lists in general that lacks this feature will sooner or later become a black hole: a list that gets filled up but whose surface nobody dares to dig beneath anymore.

Compared with classic managers, the result of the “Not today!” method is clear-cut: you are in command of your lists. You are forced to review each item and consciously decide not to do something today. The result: it is sometimes psychologically easier to just do the thing than to postpone it. Whatever you decide, you are in command of your lists and of your life. You are making decisions.

And every evening, looking at your empty lists, you will enjoy the delicious satisfaction of work accomplished!


Photo by Palo.

Thank you for taking the time to read this pay-what-you-want post. In order to write, I need your support. Also follow me on Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

flattr this!

by Lionel Dricot at August 28, 2014 05:16 PM

Frank Goossens

You cannot unsee this

YouTube Video
Watch this video on YouTube or on Easy Youtube.

From the guy that also brought us this, but also this.

by frank at August 28, 2014 04:42 PM

Xavier Mertens

Check Point Firewall Logs and Logstash (ELK) Integration

It has been a while since I last wrote an article on log management. Here is a quick how-to about the integration of Check Point firewall logs into ELK. For a while now, this log management framework has been gaining more and more popularity. ELK is based on three core components: Elasticsearch, Logstash and Kibana. Google is your best friend to find information about ELK. But why Check Point? Usually, I don’t blog about commercial products, but I investigated a request from a customer who was looking for a clean solution to integrate this product’s logs into ELK, and I didn’t find my heart’s desire on the Internet.

Check Point firewalls are good products among others, but what I really like is the way they handle logs. By default, logs generated by the firewall modules are sent to the management system (the “SmartCenter“) where they can be reviewed using a powerful fat client which, however, runs only on top of Microsoft Windows systems. To export the logs to an external log management solution, Check Point has developed the OPSEC framework, which allows third-party applications to interact with firewalls. One of the features is getting a copy of the logs using the LEA protocol. LEA means “Log Export API” and provides the ability to pull logs from a Check Point device via port TCP/18184. What about Syslog, you may ask? It is simply not possible out of the box! To forward logs to a remote Syslog server, you can use the “fw log” command:

# fw log -f -t -n -l 2>/dev/null | awk 'NF' | sed '/^$/d' | logger -p -t cpwd &

An alternative is to create a “User Defined Alert” which calls a script for every(!) line of log. In such a situation, how can we be sure that the firewall will be able to handle a large volume of logs?

Honestly, I don’t like this way of working: it creates new processes on the firewall that can’t be properly controlled, and Syslog, even if still widely used, remains a poor protocol in terms of reliability and security. (Note: the Check Point OS, SecurePlatform or Gaia, can be configured to forward Syslog to a remote server.) The benefits of using OPSEC/LEA are multiple.

OPSEC is a proprietary framework developed by Check Point, but SDKs are available and developers can write tools which talk to Check Point devices. Commercial log management/SIEM solutions support OPSEC, and they must (Check Point is one of the market leaders), but Logstash does not natively support OPSEC to pull logs. That’s why we will use an AitM (“Agent in the Middle” ;-)) to achieve this. Here are the details of the lab and the components I used to integrate a firewall with ELK.

The first challenge is to compile fw1-loggrabber on your system. This tool is quite old (2005!) but a fork is available. You’ll also need the OPSEC SDK 6.0 linux30. The compilation is quite straightforward if you properly adapt the original Makefile (to specify the right location of the SDK). fw1-loggrabber requires two configuration files to work: “fw1-loggrabber.conf” is the primary configuration file, and “lea.conf” contains the details of the firewall you’d like to connect to. Here are mine:

# cd /usr/local/fw1-loggrabber
# cat ../etc/fw1-loggrabber.conf

# cat ../etc/lea.conf
lea_server auth_type sslca
lea_server ip
lea_server auth_port 18184
opsec_sic_name "CN=loggrabber-opsec,O=........."
opsec_sslca_file /usr/local/fw1-loggrabber/etc/opsec.p12
lea_server opsec_entity_sic_name "cn=cp_mgmt,o=........."

Before pulling logs out of the firewall, a secure link must be established between the firewall and the OPSEC client. This link is based on SIC (“Secure Internal Communications“). The Check Point documentation describes step by step how to establish a SIC communication channel. The most critical part is exporting the certificate (the .p12 file referenced in lea.conf). Fortunately, Check Point provides a specific tool for this, delivered with the OPSEC SDK: opsec_pull_cert. Once the firewall is properly configured (and the right communications allowed in the security policy!), the certificate can be extracted via the following command line:

# opsec_pull_cert -h firewall -n opsec-object-name -p passwd -o p12_cert_file

Copy the created .p12 file into the right directory. Now start the tool and, if it works, your log file will start to be populated with interesting lines:

# cd /usr/local/fw1-loggrabber/bin
#./fw1-loggrabber -c ../etc/fw1-loggrabber.conf -l ../etc/lea.conf &
# tail -f /var/log/checkpoint/fw1.log
time=28Aug2014 0:19:08|action=accept|orig=|i/f_dir=inbound|\
product=VPN-1 & FireWall-1|rule=1|rule_uid={xxxxxxxx}|service_id=nbname|\
__policy_id_tag=product=VPN-1 & FireWall-1 [db_tag={xxxxxxxx};mgmt=cpfw-lab;\

The file is easy to parse: fields are delimited by “|” and prefixed with their names. It’s a piece of cake to integrate this into ELK. Personally, I deployed a logstash-forwarder which sends the events to my central server. Here is my logstash-forwarder configuration:

   "network": {
     "servers": [ "" ],
     "timeout": 15,
     "ssl ca": "/etc/ssl/certs/logstash-forwarder.crt"
   "files": [
         "paths": [
         "fields": { "type": "checkpoint" }

On my ELK, grok is used to parse the events:

filter {
   # Check Point OPSEC/LEA event (lab)
   if [type] =~ /^checkpoint/ {
      grok {
         match => { "message" => "time=%{DATA:timestamp}\|action=%{WORD:action}\|orig=%{IPORHOST:origin}\|i\/f\_dir=%{WORD:direction}\|i\/f\_name=%{WORD:interface}\|has\_accounting=%{INT:accounting}\|uuid=%{DATA:uuid}\|product=%{DATA:product}\|rule=%{INT:rule}\|rule_uid=%{DATA:rule_uid}\|*src=%{IP:src_ip}\|s_port=%{INT:src_port}\|dst=%{IP:dst_ip}\|service=%{INT:dst_port}\|proto=%{WORD:protocol}" }
         add_tag => "checkpoint"
      }
      mutate {
         # collapse double spaces in the timestamp before date parsing
         gsub => ["timestamp","  "," "]
      }
      date {
         tags => "checkpoint"
         match => [ "timestamp", "ddMMMYYYY HH:mm:ss" ]
      }
   }
}
Here are the results in Kibana:

Check Point Firewall Event Flow


Check Point Firewall Event Details

Note that the parsed events are just the basic ones (a communication between two hosts). The following fields are extracted: timestamp, action, origin, direction, interface, accounting, uuid, product, rule, rule_uid, src_ip, src_port, dst_ip, dst_port and protocol.

Check Point has plenty of other interesting fields and events (related to other blades like IPS, URL filtering, …). More grok patterns must be created for each of them. A good reference is the document “LEA Fields Update“. Happy logging!

by Xavier at August 28, 2014 03:21 PM

August 27, 2014

Dries Buytaert

A better runtime for component-based web applications

I have an idea but currently don't have the time or resources to work on it. So I'm sharing the idea here, hoping we can at least discuss it, and maybe someone will even feel inspired to take it on.

The idea is based on two predictions. First, I'm convinced that the future of web sites or web applications is component-based platforms (e.g. Drupal modules, WordPress plugins, etc). Second, I believe that the best way to deploy and use web sites or web applications is through a SaaS hosting environment (e.g., DrupalGardens, SalesForce's platform, DemandWare's SaaS platform, etc). Specifically, I believe that in the big picture on-premise software is a "transitional state". It may take another 15 years, but on-premise software will become the exception rather than the standard. Combined, these two predictions present a future where we have component-based platforms running in SaaS environments.

To get the idea, imagine a WordPress.com, SquareSpace, Wix or DrupalGardens where you can install every module/plugin available, including your own custom modules/plugins, instead of being limited to those modules/plugins manually approved by their vendors. This is a big deal because one of the biggest challenges with running web sites or web applications is that almost every user wants to extend or customize the application beyond what is provided out of the box.

Web applications have to be (1) manageable, (2) extensible, (3) customizable and (4) robust. The problem is that we don't have a programming language or an execution runtime that is able to meet all four of these requirements in the context of building and running dynamic component-based applications.

None of PHP, JavaScript, Ruby, Go or Java allows us to build truly robust applications, as the runtimes don't provide proper resource isolation. Often all the components (i.e. Drupal modules, WordPress plugins) run in the same memory space. In the Java world you have Enterprise Java Beans or OSGi which add some level of isolation and management, but it still doesn't provide full component-level isolation or component-level fault containment. As a result, each component pretty much has to trust the other components installed on the system. This means that usually one malfunctioning component can corrupt another component's data or functional logic, or that one component can harm the performance of the entire platform. In other words, you have to review, certify and test components before installing them on your platform. As a result, most SaaS vendors won't let you install untrusted or custom components.

What we really need here is an execution runtime that allows you to install untrusted components and guarantee application robustness at the same time. Such technology would be a total game-changer as we could build unlimited customizable SaaS platforms that leverage the power of community innovation. You'd be able to install any Drupal module on DrupalGardens, any plugin on WordPress.com, or custom code on Squarespace or Wix. It would fundamentally disrupt the entire industry and would help us achieve the assembled web dream.

I've been giving this some thought, and what I think we need is the ability to handle each HTTP request in a micro-kernel-like environment where each software component (i.e. Drupal module, WordPress plugin) runs in its own isolated process or environment and communicates with the other components through a form of inter-process communication (i.e. think remote procedure calls or web service calls). It is a lot harder to implement than it sounds as the inter-process communication could add huge overhead (e.g. we might need fast or clever ways to safely share data between isolated components without having to copy or transfer a lot of data around). Alternatively, virtualization technology like Docker might help us move in this direction as well. Their goal of a lightweight container is a step towards micro-services but it is likely to have more communication overhead. In both scenarios, Drupal would look a lot like a collection of micro web services (Drupal 10 anyone?).
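As a toy illustration of the idea (my own sketch, not anything Drupal or Docker provides), each component can live in its own OS process with its own resource limits, talking to the platform over a pipe rather than sharing memory:

```shell
#!/bin/bash
# Toy "platform" that runs a component in its own process and talks to it
# over stdin/stdout (a stand-in for RPC), instead of loading it in-process.

# A component: reads one request per line, writes one response per line.
component() {
  while read -r request; do
    echo "handled:$request"
  done
}

# The platform launches the component in a subshell with its own CPU-time
# cap, so a runaway component cannot take the whole platform down with it.
call_component() {
  echo "$1" | ( ulimit -t 5 2>/dev/null; component )
}

call_component "render_block"
# handled:render_block
```

The interesting (and hard) part, as noted above, is making this request/response hop cheap enough to run on every page view.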

Once we have such a runtime, we can implement and enforce governance and security policies for each component (e.g. limit its memory usage, limit its I/O, security permission, but also control access to the underlying platform like the database). We'd have real component-based isolation along with platform-level governance: (1) manageable, (2) extensible, (3) customizable and (4) robust.

Food for thought and discussion?

by Dries at August 27, 2014 07:53 PM

Philip Van Hoof


Now for the rest of the country


by admin at August 27, 2014 06:11 PM

August 26, 2014

Philip Van Hoof

RE: Scud missiles

Whenever Isabel Albers writes something, I pay attention. Maybe I am in love? Maybe she is a wise woman? Nevertheless, I make an effort to rid myself of my own grand truths. One remains standing: we must invest in infrastructure.

This creates prosperity, distributes money efficiently and invests in our children’s future: something that is needed, and that we stand for.

We can limit the austerity effort where infrastructure investment is concerned; we must push it through all the harder for other government spending.

Maybe we should launch a few Scud missiles? A Scud missile aimed at the size of government would do no harm.

A week-long media storm about how hard we are cutting in certain government sectors: let us do it hard and thoroughly.

Let us invest in Belgian infrastructure at the same time. Let us invest a great deal.

by admin at August 26, 2014 11:48 PM

Lionel Dricot

Let positive jealousy be your guide!


A few years ago, I discovered that I was, without knowing it, a very jealous person, particularly envious of other people’s success. And rather than fight this tendency, I decided to take advantage of it. Using my jealousy as a strength rather than a weakness allowed me to durably change my way of being.

When an acquaintance told me about a project, I quite naturally tended to encourage them and sincerely wish them every success. So-and-so was working hard to become a violinist? My heart was with him. I did not hesitate to promote and support him. I admit that the idea of being friends with a famous violinist filled me with a certain pride.

On the other hand, if the project fell within a field of expertise close to my own, I tended to see every flaw, every possible problem. A software project? A writing project? Something on the web? “It will never work,” I would say. In fact, deep down, I did not want it to work.

I did not want the person to succeed; I even wished them failure. Because I was just as competent as they were in that field. I had had no success in that kind of venture, or I had not even dared to try. If that person succeeded where I had failed, or where I had not even started, it would be… No, the project must not succeed!

It took me years to understand that this feeling was jealousy, pure and simple. The fear of being overtaken.

But rather than cure myself, rather than try to make this feeling disappear, I decided to use it. If I am jealous of someone, it means I have something to learn from them. If I wish a project to fail, it means I absolutely must observe it, or even contribute to it.

This simple paradigm shift turned my life upside down. Within just a few months, I saw my circle of friends and acquaintances widen and grow richer with particularly interesting people.

As I dove into the projects I would have liked to create myself, I discovered subtleties and problems I probably could not have tackled on my own. By spending time with the people I envied, I learned the sacrifices they had had to make, and I understood the differences that separated us. And I came to not envy them at all anymore, even, in some cases, to be glad I was not in their place.

Contributing to those projects whose failure I had initially wished for, I ended up cheering them on at all costs and associating myself with their success. Each time I looked past mere appearances, the kind that breed envy, I discovered a complex world and sometimes unsuspected consequences.

Using my jealousy as an indicator, as a lighthouse, allowed me to learn, to discover others, to savor the successes of those around me and, above all, to take joy in my own accomplishments. Deep down, jealousy may be my most important innate quality. It just took me many years to understand how to take advantage of it.


Photo by Indy Kethdy.

Thank you for taking the time to read this pay-what-you-want post. In order to write, I need your support. Also follow me on Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

flattr this!

by Lionel Dricot at August 26, 2014 08:11 PM

Dries Buytaert

Open Source and social capital


The notion that people contributing to Open Source don't get paid is false. Contributors to Open Source are compensated for their labor; not always with financial capital (i.e. a paycheck) but certainly with social capital. Social capital is a rather vague and intangible concept so let me give some examples. If you know someone at a company where you are applying for a job and this connection helps you get that job, you have used social capital. Or if you got a lead or a business opportunity through your network, you have used social capital. Or when you fall on hard times and you rely on friends for emotional support, you're also using social capital.

The term "social" refers to the fact that the value is in the network of relationships; they can't be owned like personal assets. Too many people believe that success in life is based on the individual, and that if you do not have success in life, there is no one to blame but yourself. The truth is that individuals who build and use social capital get better jobs, better pay, faster promotions and are more effective compared to peers who are not tapping the power of social capital. As shown in the examples, social capital also translates into happiness and well-being.

Most Open Source contributors benefit from social capital but may not have stopped to think about it, or may not value it appropriately. Most of us in the Open Source world have made friendships for life, have landed jobs because of our contributions, others have started businesses together, and for others it has provided an important sense of purpose. Once you become attuned to spotting social capital being leveraged, you see it everywhere, every day. I could literally write a book filled with hundreds of stories about how contributing to Open Source changed people's lives -- I love hearing these stories.

Social capital is a big deal; it is worth understanding, worth talking about, and worth investing in. It is key to achieving personal success, business success and even happiness.

by Dries at August 26, 2014 08:27 AM

Frederic Hornain

Red Hat | Corporate



by Frederic Hornain at August 26, 2014 07:48 AM

August 24, 2014

Lionel Dricot

Printeurs 24

This is post 24 of 24 in the Printeurs series.

I painfully regain consciousness. A dull pain throbs between my temples, cutting into my brain.

— Wa… water!
My mouth is pasty, my throat raw. Every breath makes me feel like a corroded metal robot buried under a ton of sand. A hand lifts the back of my neck and I feel a metal container against my lips. The few sips of water I swallow stream down like a torrent over a riverbed dry for too long. I swallow painfully before opening my eyes.
— Well? Feeling better?
I blink rapidly. A pair of glasses is leaning over me.
— Don't worry, you're not injured! I shielded you at the moment of the explosion.
I realize that, behind the oversized glasses, an affable face is talking to me. A young man with extremely pale skin. He bears traces of poorly treated acne and his unkempt hair seems to have been left untended for years. His body is small, bony, puny. Bringing me a glass of water must have been a real physical effort for such a frail organism.
— Who… who are you? I ask, propping myself up on my elbows.
— Call me Junior! But don't get up too fast. You're at the police station, safe.
I try to gather my wits.
— What happened?
— One of our clients, Mister Farreck, made an emergency protection request. As provided for in Mister Farreck's contract, we intervened immediately and promptly secured all the occupants of the vehicle. That's the extensibility clause of Mister Farreck's contract: we must also protect those close to him.
— How is Georges? Is he hurt?
— No, rest assured! He wasn't even knocked out. You, on the other hand, took the blast of an explosion head-on. You'll feel slight internal burns for a few days.
— I want to talk to Georges.
— He has already left. He very strongly suspects a certain Warren of being behind the attack. And, between us, this Warren didn't pull his punches. Wow!
He shakes his hand with a whistle and flashes me an enormous smile that reveals a crooked tooth. His enthusiasm seems to grow as he details the attack I fell victim to.
— I thought kamikaze drones were only seen in Islamic territory! That was intense. Without the airbag foam, they'd have scraped you up with a teaspoon. And since you were knocked out by the blast right at the start, you missed the best part. We set up a protective screen and laid down covering fire to carve out an escape corridor. It was really great, better than in competition!
A slight doubt seizes me. This scrawny, sickly young man recounts the events as if he had been there.
— Excuse me, but… are you part of the team?
— Of course, I'm the one who pulled you out of the car.
I nearly choke.
— Pardon?
— My name is Junior Freeman. Pleased to meet you!
With a sweeping gesture, he holds out a clammy hand.


As I follow Junior Freeman through the sanitized corridors of the police station, I ask the question that has been burning my lips for several minutes.
— Say, Junior, I don't mean to be rude, but the Freeman who pulled me out of the car…
— That's me, he replies with a big smile.
— But then, how come you were two meters tall and almost as wide? No offense, but you're not exactly what you'd call built like a tank. Are you?
Against all expectations, he bursts into frank laughter.
— Of course! I'm an ultra-trained elite soldier! I cost too much to be sent directly into the theater of operations. That's the rule: if you're facing a real flesh-and-blood police officer, it means he's no good and can be sacrificed. Obvious, isn't it?
— Obvious indeed, I declare, without the slightest idea what he was getting at.
— Since avatars cost an enormous amount of money, only elite units use them. And besides, I'm not sure it's entirely legal. There's a convention, a charter or some such thing that makes their use subject to government authorization. But hey, you know, me and regulations… So our avatars are anthropomorphic and bear our names. Legally, when my avatar is out there, it's me.
— Ah… I say without conviction. And… what's an avatar?
Junior stops and, from the look on his face, I get the impression that green antennae and tentacles have suddenly sprouted from my face. After a few seconds of hesitation, he recovers.
— It's best if I show them to you in the garage. Follow me!
As I fall into step behind him, we pass a door where two armored police officers stand attentive guard. Without needing to be asked, Junior launches into an explanation.
— These are the quarters of your friend John, whom we must protect at all costs.
— But I don't know this John!
— Really? he says with a surprised look. Yet Mister Farreck named you as the only trusted person authorized to approach him. Apart from himself, of course!
— Of course…
Barely greeting the two guards, he keeps going down the corridor. With a gesture, he motions for me to follow.
— Are you coming?
— I want to see this famous John.
Planted on both legs facing the door, my voice firm, I try to adopt a posture of authority.
— But… I wanted to show you the avatars.
— They can wait.
— But… I don't know if the regulations allow…
I turn to the two guards, who don't even seem to be paying attention to our existence.
— Take me to John!
One of the officers deigns to lower a haughty gaze toward me.
— Only Mister Farreck is allowed to see Mister John. Along with the designated trusted persons.
— I'm one of them! Open up!
He lets out a deep sigh and shrugs, looking at his colleague. Without warmth, he grabs my hand and presses it against a reader. A soft sound is heard and the word "authorized" appears on the screen. Immediately, the guard steps back and salutes me.
— My apologies, sir, I didn't know! But for security reasons, I must stay with you.
Junior has turned around and catches up with me.
— Wouldn't you rather see the avatars? Because I'm not sure the regulations allow…
I give him a look tinged with irony.
— I'm not sure the regulations allow the use of avatars without government approval. So, you know, me and regulations…
Before he has time to answer, one of the two guards has opened the door and leads me into an entrance airlock. He knocks on a second door and calls out.
— Mister John? A visitor for you.
Behind me, I hear the first guard talking with Junior. A note of respect, of deference, comes through in his voice. Junior, who stands two heads shorter and could fit three times over into the officer's trousers, is visibly a respected and experienced soldier. But I have no time to dwell on the comedy of the situation. Mister John has just arrived.
— Are you Mister Farr…
His voice catches in his throat.


Photo by Tanakawho.


Thank you for taking the time to read this pay-what-you-want post. To keep writing, I need your support. Also follow me on Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

flattr this!

by Lionel Dricot at August 24, 2014 05:12 PM

August 22, 2014

Joram Barrez

Seriously reducing memory consumption when running multiple Activiti Engines on the same JVM

The Activiti Forum is a place where I try to spend a bit of time every day. Sure, it is hard work to keep up with the influx of posts (but then again it means that Activiti is popular, so I'm not complaining!), but sometimes you find a true gem among the posts. On […]

by Joram Barrez at August 22, 2014 07:08 AM

August 21, 2014

Wouter Verhelst

Multiarchified eID libraries, now public

Yesterday, I spent most of the day finishing up the multiarch work I'd been doing on introducing multiarch to the eID middleware, and did another release of the Linux builds. As such, it's now possible to install 32-bit versions of the eID middleware on a 64-bit Linux distribution. For more details, please see the announcement.

Learning how to do multiarch (or biarch, as the case may be) for three different distribution families has been a, well, learning experience. Being a Debian Developer, figuring out the technical details for doing this on Debian and its derivatives wasn't all that hard. You just make sure the libraries are installed to the multiarch-safe directories (i.e., /usr/lib/<gnu arch triplet>), you add some Multi-Arch: foreign or Multi-Arch: same headers where appropriate, and you're done. Of course the devil is in the details (define "where appropriate"), but all in all it's not that difficult and fairly deterministic.
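The Debian recipe above boils down to a few declarations in the packaging. As a rough sketch (the package names `libfoo0` and `foo-tools` are made up for illustration, not taken from the eID middleware): the shared-library package installs under /usr/lib/&lt;triplet&gt; and is marked `Multi-Arch: same` so several architecture builds of it can be co-installed, while a tool whose interface is architecture-independent is marked `Multi-Arch: foreign` so it can satisfy dependencies of packages of any architecture.

```
Package: libfoo0
Architecture: any
Multi-Arch: same
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: hypothetical shared library, co-installable per architecture

Package: foo-tools
Architecture: any
Multi-Arch: foreign
Depends: libfoo0 (= ${binary:Version}), ${misc:Depends}
Description: hypothetical command-line tool usable from any architecture
```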

The Fedora (and derivatives, like RHEL) approach to biarch is that 64-bit distributions install into /usr/lib64 and 32-bit distributions install into /usr/lib. This goes for any architecture family, not just the x86 family; the same method works on ppc and ppc64. However, since Fedora doesn't do powerpc anymore, that part is a detail of little relevance.

Once that's done, yum has some heuristics whereby it will prefer native-architecture versions of binaries when asked, and may install both the native-architecture and foreign-architecture version of a particular library package at the same time. Since RPM already has support for installing multiple versions of the same package on the same system (a feature that was originally created, AIUI, to support the installation of multiple kernel versions), that's really all there is to it. It feels a bit fiddly and somewhat fragile, since there isn't really a spec and some parts seem fairly undefined, but all in all it seems to work well enough in practice.

The openSUSE approach is vastly different from the other two. Rather than installing the foreign-architecture packages natively, as in the Debian and Fedora approaches, openSUSE wants you to take the native foo.ix86.rpm package and convert that to a foo-32bit.x86_64.rpm package. The conversion process filters out non-unique files (only allows files to remain in the package if they are in library directories, IIUC), and copes with the lack of license files in /usr/share/doc by adding a dependency header on the native package. While the approach works, it feels like unnecessary extra work and bandwidth to me, and obviously also wouldn't scale beyond biarch.

It also isn't documented very well; when I went to openSUSE IRC channels and started asking questions, the reply was something along the lines of "hand this configuration file to your OBS instance". When I told them I wasn't actually using OBS and had no plans of migrating to it (because my current setup is complex enough as it is, and replacing it would be far too much work for too little gain), it suddenly got eerily quiet.

Eventually I found out that the part of OBS which does the actual build is a separate codebase, and integrating just that part into my existing build system was not that hard to do, even though it doesn't come with a specfile or RPM package and wants to install files into /usr/bin and /usr/lib. With all that and some more weirdness I've found in the past few months that I've been building packages for openSUSE I now have... Ideas(TM) about how openSUSE does things. That's for another time, though.

(disclaimer: there's a reason why I'm posting this on my personal blog and not on an official website... don't take this as an official statement of any sort!)

August 21, 2014 08:30 AM

August 20, 2014

Frederic Hornain

[Aug 2014] Red Hat JBoss Enterprise Java EE Containers Presentation


Dear *,

A few days ago, I gave a presentation on Red Hat JBoss Enterprise Java EE Containers.
If you or your company are based in the Benelux and are interested in this presentation, just let me know and I will try to arrange a meeting for you.

N.B. You can also find it in Mojo.


by Frederic Hornain at August 20, 2014 05:17 PM

Open source: Debunking the myths


by Frederic Hornain at August 20, 2014 05:11 PM

Frank Goossens

Music from Our Tube; Tom Vek in the Jungle

Tom Vek is an English multi-instrumentalist, but you could just as well read the Wikipedia article if you want to know that and more. If, instead, you'd prefer to hear what he sounds like, you can listen to (and watch) the video below of "Sherman (Animals in the Jungle)":

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at August 20, 2014 05:08 AM

August 19, 2014

Dries Buytaert

To bean or not to bean?


Last weekend our Nespresso machine died in front of my eyes. Water started leaking from its base during use and caused an electrical short. It was a painful death. I'm tempted to take it apart and try to repair it, but it also brings up the question: what to buy next?

Part of me enjoys the convenience of the Nespresso machine, the other part of me is eager to buy my first "serious" espresso machine.

See, I'm a coffee lover. That is to say, like most of the people living in the US I have a coffee addiction, and have been brainwashed into spending more and more on my daily coffee intake. To make matters worse, we live in a society where we call the people who make great coffee "artists". I'd love to practice some coffee artistry myself and make that perfect barista-grade cup of coffee.

I did a little bit of research, and picking an espresso machine is not easy. It turns out this is a complex space. The choices range from super-automatic machines (they do everything from grinding, dosing and tamping to brewing) to semi-automatic machines (you grind your own beans and tamp them manually) to manual machines (you control how long the brewing water sits over the bed of coffee, resting as it were at neutral or boiler pressure). There are even "coffee schools" that offer classes and certifications to become a professional barista.

While I love the smell of freshly ground coffee and a near-perfect espresso, I also don't want to take 15 minutes to make a cup of coffee. I usually need my first cup of coffee to help me wake up and I'm often crunched for time, so I don't want it to be super complicated.

Espresso or Nespresso? To bean or not to bean? Help!

by Dries at August 19, 2014 10:15 PM

Luc Stroobant

i3 WM Gnome-style screenshots

I found lots of examples for taking screenshots in i3, but most of them require additional scripts or always overwrite the same temp file. So I've been experimenting a bit to make it easier. Put this in your .i3/config to take Gnome-style screenshots as a PNG in ~/Pictures. Mod+x selects a rectangle on the screen; mod+y captures the entire screen contents.

bindsym --release $mod+x exec --no-startup-id import ~/Pictures/screenshot-`/bin/date +%Y%m%d-%H:%M:%S`.png
bindsym --release $mod+y exec --no-startup-id import -window root ~/Pictures/screenshot-`/bin/date +%Y%m%d-%H:%M:%S`.png

by luc at August 19, 2014 05:36 PM

August 18, 2014

Frank Goossens

(When) should you Try/Catch Javascript?

Autoptimize comes with a “Add try-catch wrapping?”-option, which wraps every aggregated script in a try-catch-block, to avoid an error in one script to block the others.

I considered enabling this option by default, as it would prevent JS optimization from occasionally breaking sites badly. I discussed this with a number of smart people and searched the web, eventually stumbling on this blogpost, which offers an alternative to try/catch because:

Some JavaScript engines, such as V8 (Chrome) do not optimize functions that make use of a try/catch block as the optimizing compiler will skip it when encountered. No matter what context you use a try/catch block in, there will always be an inherent performance hit, quite possibly a substantial one. [Testcases] confirm [that] not only is there up to a 90% loss in performance when no error even occurs, but the declination is significantly greater when an error is raised and control enters the catch block.

So given this damning evidence of severe performance degradation, "try/catch wrapping" will not be enabled by default, and although Ryan's alternative approach has its merits, I'm wary of the caveats, so I won't include that (for now anyway). If your site breaks when enabling JS optimization in Autoptimize, you can enable try/catch wrapping as a quick workaround, but finding the offending script and excluding it from being optimized is clearly the better solution.
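To make the trade-off concrete, here is a minimal sketch (not Autoptimize's actual code) of what the wrapping option does: each aggregated script is executed inside its own try/catch, so an error thrown by one script does not prevent the scripts after it from running.

```javascript
// Three "aggregated" script bodies; the second one throws on purpose.
const aggregatedScripts = [
  "globalThis.first = 'ran';",
  "thisFunctionDoesNotExist();", // throws a ReferenceError
  "globalThis.third = 'ran';",   // must still execute afterwards
];

const results = aggregatedScripts.map((src) => {
  try {
    // Stand-in for the browser evaluating each concatenated script block.
    new Function(src)();
    return "ok";
  } catch (e) {
    // The error is swallowed so execution continues with the next block.
    return "caught " + e.name;
  }
});

console.log(results); // [ 'ok', 'caught ReferenceError', 'ok' ]
```

Without the wrapping, the ReferenceError in the middle block would abort the whole concatenated file and `third` would never be set; that is exactly the breakage the option papers over, at the V8 optimization cost described above.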

by frank at August 18, 2014 03:01 PM

August 17, 2014

Lionel Dricot

Printeurs 23

This is post 23 of 24 in the Printeurs series.

— You really don't remember anything?
Georges's private car speeds us toward the airport. I've decided to be cooperative. After all, if he had wanted to get rid of me, Georges would only have had to snap his fingers. Maybe he is sincere? I have to play it subtle, feign total acceptance while staying on my guard. As the vehicle carries us out of the city at full speed, I look Georges in the eye and shake my head.
— Nothing. A blank wall.
— Strange… strange…
— By the way, Georges, what have I been doing with you all this time? What is this great project I don't remember?
— You must remember that I'm fighting to improve the working conditions of the workers in the industrial zone.
— Er… possibly. What's the connection?
— Workers today are forced to perform dangerous tasks and handle toxic products for the simple reason that the robotization of industry is expensive and doesn't allow the extreme customization that is currently in vogue among consumers. If we manage to industrialize the printeurs, workers will be able to work in better conditions.
— Or not at all.
— Indeed…
— So you hope to turn the majority of workers into tele-passives? Can you imagine the social impact? That's criminal, Georges!
— Criminal? And forcing individuals to work 8 hours a day, 4 days a week in dangerous conditions, that's not criminal, perhaps? All for a situation that offers no real advantage over that of the tele-passives!
— Yes, but nobody wants to become a tele-passive. It's a matter of honor, of identity. You're going to tear away from thousands of people the only thing that gives them a sense of existing. They'll hate you, they'll despise you!
The words came spontaneously out of my mouth, but they have a bitter, artificial taste. I feel like I'm rehashing pre-chewed ideas, propaganda that isn't mine. Since I'm no longer subjected to advertising, I catch myself disagreeing with myself, discovering paradoxes in the values I thought most firmly established.
— Precisely, Nellio, only I can see this through. You are the creator, the engineer. I will be the public face. People love me, Nellio. People recognize me. If I'm the one speaking, they'll understand. And even if they must hate me, that's a small price to pay for the freedom we're bringing to humanity. Perhaps, freed from constraints, from compulsory attendance, from nervous exhaustion, they'll become creative. How many Mozarts, Tolstoys, Rowlings, Mercurys, Peegous never discovered their own talent because we arbitrarily decided that tele-passives are a moral abomination, because we elevated pointless busywork into the ultimate meaning of existence?
I remain silent for a moment, gazing out the window. The streets seem very quiet to me. Georges bought me a new pair of lenses with the total ad-free subscription. The sky now seems clear; no advertising disturbs my thoughts or my field of vision anymore.
— Georges, don't you think you're exaggerating the workers' conditions a bit?
— I forgot that you no longer remember John's testimony.
— No, I don't remember it. But why not be more gradual? You have to give it time…
My sentence ends in a bestial scream of terror as a deafening explosion rings out. The car seems to leap several meters. For a fraction of a second, I feel my body float weightless before a sharp pain hits the pit of my stomach. Georges's knee. His hands grip my shoulders as we tumble through an opaque, downy world. My body sinks into a pasty foam that creeps into my mouth, my nostrils. I'm blind. I'm suffocating.

I violently gulp in a breath of air. The foam evaporates. The car is upside down. Flames dance around us; I hear Georges's voice.
— Immediate maximal intervention.
— Georges! I shout.
— Nellio, don't move!
— The flames!
— Don't move! We're in a secured cabin. The flames will jam the drone's sensors for a few seconds and delay the next missile. The airbag foam worked perfectly and acts as a screen too.
— But… then what? What do we do?
— My bodyguards are on their way. I had assigned them to protect John.
— Who would be crazy enough to send an attack drone into the middle of the city? I scream. It's madness!
Turning my head slightly, I see Georges's face. A thin trickle of blood runs down from his forehead, crosses his eyebrows and reaches his lip. A violent noise makes me jump. A black-gloved hand suddenly grabs me by the collar and pulls me out of the car. Before I can struggle, I find myself face to face with a police officer armored from head to toe. A dozen of his colleagues are busying themselves around the car, pointing their weapons at the sky, while Georges gets up, dusting himself off. Gunshots ring out.
— Don't worry, Nellio, these are my men.
— Ah… Well, thank you! I say to the black giant standing beside me, of whom I can't make out the slightest bit of flesh. On his jacket, I make out an identification badge. J. Freeman.
— Thank you, J. Freeman!
— You're welcome, sir, answers a cavernous voice from inside the mask. Mister Farreck pays us for this.
— The elite of the elite, Georges tells me with a wink. Nothing like all the ones you've seen before, who are, deep down, just tele-passives someone found an occupation for.
— There are still the ones who killed Eva, I say, gritting my teeth.
— The government men? Dangerous amateurs!
In the street, the few passersby have completely disappeared. Contrary to the curiosity intrinsic to any city dweller, the suburbanites seem to value their tranquility and physical safety more than the spectacle of burning cars. Calmly, with measured, deliberate gestures, the officers quickly close in around us to form a human wall.
— Squadron of kamikaze drones approaching. We need to evacuate the area.
Georges's face suddenly turns deathly pale.
— Warren, he murmurs. I never thought he would go to such extremes.
— What? The guy from the industrial zone conglomerate?
I look up at the sky. Above us, hundreds of buzzing dots seem to be growing larger.
— What the…
Hell suddenly breaks loose.


Photo by Norm Lanier.

Thank you for taking the time to read this pay-what-you-want post. To keep writing, I need your support. Also follow me on Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.

flattr this!

by Lionel Dricot at August 17, 2014 06:34 PM

August 14, 2014

Frank Goossens

Music from Our Tube; Bonobo, Fink & Andreya Triana

From what appears to have been a show in France some time (months? years?) ago, the audio of a live concert by Bonobo with Fink & the lovely Andreya Triana:

YouTube Video
Watch this video on YouTube or on Easy Youtube.

Especially love the acoustic version of Flying Lotus’ “Tea Leaf Dancers” at the 27th minute!

by frank at August 14, 2014 07:09 PM

Dries Buytaert

Drupal.com refresh launched


Back in the early days of Drupal, drupal.com looked like this:

Drupal.com as launched in 2005.

On August 14, 2009, I relaunched drupal.com to replace the oh-so-embarrassing placeholder page. The 2009 relaunch turned drupal.com into a better spotlight for Drupal. It wasn't hard to beat the white page with a Druplicon logo.

Drupal.com as launched in 2009.

What was a good spotlight five years ago, though, is no longer a good spotlight today. Five years later, drupal.com didn't do Drupal justice. It didn't really explain what Drupal is, what you can use Drupal for, and more. Along with sub-optimal content, the site wasn't optimized for mobile use either.

Today, exactly five years later to the day, I'm excited to announce that I relaunched drupal.com again:

Drupal com devices

Redesigning drupal.com to make it more useful and current has been one of my New Year's resolutions for a number of years now. And as of today, I can finally strike that off my list.

The new drupal.com has become richer in its content; you'll find a bit more information about Drupal to help people understand what Drupal is all about and how to get started with Drupal. On a desktop, on a tablet, on a phone, the site has become much easier to navigate and read.

I believe the new drupal.com is a much better, more relevant showcase for Drupal. The goal is to update the site more regularly and to keep adding to it. My next step is to add more use cases and to include short demo videos of both the Drupal backend as well as the showcases. Drupal.com will become an increasingly helpful resource and starting point for people who are evaluating Drupal.

Drupal com mobile

The changes are not limited to content and look; drupal.com also has a new engine, as the site was upgraded from Drupal 6 to Drupal 8 alpha (don't try this at home). We're using Drupal 8 to push the boundaries of site building and responsive design and to uncover bugs and usability issues with Drupal 8. Because we're using an alpha version of Drupal 8, things might not function perfectly yet. We'd still love to hear feedback from designers and front end developers on how it's working.

by Dries at August 14, 2014 05:52 PM

Claudio Ramirez

Review: “Software Architecture Fundamentals Part 2. Taking a Deeper Dive” by Neal Ford and Mark Richards

Well, if the first video got a 5/5 rating, you can be sure the second part deserves one as well. Although more advanced than the first part, it's pretty obvious both videos should be considered one workshop. Looking at the clothing of the instructors and the audience in the studio (2 people), it's pretty clear they were recorded on the same day :).

While the first video was more about common sense and obvious rights and wrongs (patterns and anti-patterns), the second part seems to boil down to "it depends". TIMTOWTDI (there is more than one way to do it), as Perl people say. For example, it was clear the instructors had very different opinions about SOA and Continuous Delivery.

As in the first part, the chapter titles are well chosen and give a correct overview of the subjects that make up the course. The video starts with Architecture Tradeoffs (see above), followed by Continuous Delivery, Abstraction, and Choosing and Comparing Architectures. More applied are the chapters about Web Services and Messaging, SOA, Integration Hubs and the continuation of Continuous Delivery. More abstract were Approaches to Enterprise Architecture, Strategies, Evolutionary Architecture and Emergent Design.

Again, this video delivers what it promises. Neal Ford and Mark Richards are still enthusiastic about their teaching and seem even more involved in the second part.

Again, great series. Kudos to Neal and Mark.

Software Architecture Fundamentals Part 2
Taking a Deeper Dive
By Neal Ford, Mark Richards
Publisher: O’Reilly Media
Final Release Date: April 2014
Run time: 5 hours 57 minutes


Filed under: Uncategorized Tagged: Architecture, Java, o'reilly, video

by claudio at August 14, 2014 01:20 PM

August 13, 2014

Lionel Dricot

Opportunities always come in pairs…


When opportunities present themselves and someone close to me remarks that I'm lucky, I reply that you make your own luck. And that opportunities are cultivated before they hatch. The hatching is usually completely unexpected and generally happens in pairs. I have just experienced this first-hand.

Just as I had decided to take some time to think about my many projects, I was invited by Olivier Verbeke to spend part of my time in the idyllic setting of Nest'Up. Thanks to his enthusiasm, I discovered a passion for sharing experience and mentoring startups. After several months of doubt, exploration and uncertainty, the opportunity has finally arrived to turn that diffuse passion into a concrete, solid project.

That is, of course, the moment my friend Antoine Perdaens chose to offer me a completely different and unexpected opportunity.

As you probably know, I'm a heavy user of social networks. I find them very relevant for sharing information. I collect that information in Pocket and, to feed my blog, I sort the Pocket articles with tags by theme before grouping them into drafts in txt format.

As it happens, Antoine is the CEO of Knowledge Plaza, a platform that does… exactly what I do for my blog, but at the company level, for teams ranging from a dozen to several tens of thousands of people. Knowledge Plaza is the enterprise's Facebook combined with a Pocket on steroids, all wrapped in an ultra-powerful search engine.

Knowledge Plaza's main challenge is that it is complex. The tool is powerful and hard to grasp. While its customers are all delighted and sing KP's praises, new users have more trouble finding their footing. Moreover, like Facebook or Twitter in their early days, KP's added value and usefulness are not immediately obvious. Including to me, even though I'm quite attuned to the subject.

Antoine, well aware of these challenges, asked me to take them on. "KP needs to broaden its long-term vision, to project itself into the future. I thought you could help us."

So I found myself facing two completely different, fascinating opportunities. I like to believe that I brought these opportunities about myself.

To choose is to renounce.

As you know, I find it hard to resist the words "vision" and "future". I could not refuse. So I have officially joined the Knowledge Plaza team as Product Manager. If Knowledge Plaza is a product that might interest your company, don't hesitate to contact me or to request an invitation to join the Sphere, our test space.

But if I wanted to share this new course with you, it is because it illustrates a principle dear to me: don't try to grow one single particular opportunity. Be active, do what you love and give your time without expecting anything in return. Prepare fertile ground. With a little patience, opportunities will present themselves. Not always the way you expect them, but always in pairs.

Perhaps all the pleasure lies in the surprise of the unexpected and the difficulty of the choice, don't you think?

Thank you for taking the time to read this pay-what-you-want post. To keep writing, I need your support. Also follow me on Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE license.


by Lionel Dricot at August 13, 2014 04:04 PM

Dries Buytaert

Amazon invests in Acquia


I'm happy to share news that Amazon has joined the Acquia family as our newest investor. This investment builds on the recent $50 million financing round that Acquia completed in May, which was led by New Enterprise Associates (NEA).

Acquia is the largest provider of Drupal infrastructure in the world. We run on more than 8,000 AWS instances and serve more than 27 billion hits a month or 333 TB of bandwidth a month. Working with AWS has been an invaluable part of our success story, and today's investment will further solidify our collaboration.

We did not disclose the amount of the investment in today's news announcement.

by Dries at August 13, 2014 03:46 PM

August 12, 2014

Kris Buytaert

Upcoming Conferences

After not being able to give my planned Ignite talk at #devopsdays Amsterdam because I was down with the flu, here are some fresh opportunities to listen to my rants :)

In September I'll be talking at PuppetConf 2014 in San Francisco, USA, about some of the horror stories we went through over the past couple of years when deploying infrastructure in an automated fashion.

Just one week later I'll be opening the #devops track at DrupalCon Amsterdam together with @cyberswat (Kevin Bridges), where we'll talk about the current state of #drupal and #devops. We'll be reopening the #drupal and #devops survey shortly; more info about that later here.

Just a couple of weeks later I will be ranting about Packaging software on Linux at LinuxCon Europe in Düsseldorf, Germany.

And in November I'm headed to Nuremberg, Germany, where I will be opening the Open Source Monitoring Conference, musing about the current state of Open Source Monitoring: do we love it, or does it still suck? :)

That's all ..
for now ..

by Kris Buytaert at August 12, 2014 07:23 PM

Fedora 20 Annoyances

yum install docker
docker run --rm -i -t -e "FLAPJACK_BUILD_REF=6ba5794" \
> -e "FLAPJACK_PACKAGE_VERSION=1.0.0~rc3~20140729T232100-6ba5794-1" \
> flapjack/omnibus-ubuntu bash -c \
> "cd omnibus-flapjack ; \
> git pull ; \
> bundle install --binstubs ; \
> bin/omnibus build --log-level=info flapjack ; \
> bash"
docker - version 1.5
Copyright 2003, Ben Jansens <>

Usage: docker [OPTIONS]

Options:
  -help            Show this help.
  -display DISPLAY The X display to connect to.
  -border          The width of the border to put around the
                   system tray icons. Defaults to 1.
  -vertical        Line up the icons vertically. Defaults to
                   horizontally.
  -wmaker          WindowMaker mode. This makes docker a
                   fixed size (64x64) to appear nicely in
                   WindowMaker.
                   Note: In this mode, you have a fixed
                   number of icons that docker can hold.
  -iconsize SIZE   The size (width and height) to display
                   icons as in the system tray. Defaults to
                   24.

[root@mine ~]# rpm -qf /bin/docker
docker-1.5-10.fc20.x86_64
[root@mine ~]# rpm -qi docker
Name        : docker
Version     : 1.5
Release     : 10.fc20
Architecture: x86_64
Install Date: Sun 03 Aug 2014 12:41:57 PM CEST
Group       : User Interface/X
Size        : 40691
License     : GPL+
Signature   : RSA/SHA256, Wed 07 Aug 2013 10:02:56 AM CEST, Key ID 2eb161fa246110c1
Source RPM  : docker-1.5-10.fc20.src.rpm
Build Date  : Sat 03 Aug 2013 10:30:23 AM CEST
Build Host  :
Relocations : (not relocatable)
Packager    : Fedora Project
Vendor      : Fedora Project
URL         :
Summary     : KDE and GNOME2 system tray replacement docking application
Description :
Docker is a docking application (WindowMaker dock app) which acts as a system
tray for KDE and GNOME2. It can be used to replace the panel in either
environment, allowing you to have a system tray without running the KDE/GNOME
panel or environment.
[root@mine ~]# yum remove docker
[root@mine ~]# yum install docker-io

Installed:
  docker-io.x86_64 0:1.0.0-9.fc20

Complete!
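The gotcha in the log above can be captured in a tiny helper: on Fedora 20 the package named "docker" is the WindowMaker system-tray dock app, while the container engine ships as "docker-io". The sketch below is illustrative only; the function name fedora20_pkg is mine, not part of any Fedora tooling.

```shell
# Map the tool you actually want to the Fedora 20 package that provides it.
# On Fedora 20, "docker" is the dock app; the container engine is "docker-io".
fedora20_pkg() {
  case "$1" in
    docker) echo "docker-io" ;;  # container engine, not the tray dock app
    *) echo "$1" ;;              # everything else keeps its own name
  esac
}

# Usage: yum install "$(fedora20_pkg docker)"
fedora20_pkg docker
```

Running `rpm -qf "$(command -v docker)"` before complaining, as in the log, is still the quickest way to spot which of the two you actually got.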

by Kris Buytaert at August 12, 2014 07:18 PM

Ubuntu 14.04 on a Dell XPS 15 9530

So I had the chance to unbox and install a fresh Dell XPS 15 9530.
Dell ships the XPS 13 with Ubuntu, but for some crazy reason it does not do the same with the XPS 15, which imho is much more appropriate for a developer as it actually has a usable screen size.

That means they also force you to boot into some proprietary OS the very first time before you can even get into the boot options menu, which is where you need to be.

The goal was to get a fresh Ubuntu on the box, nothing more.

There are a number of tips scattered around the internet, such as:

- Use the USB 2 connector (that's the one furthest to the back on the right side of the device)
- Disable secure booting
- Put the box in Legacy mode
- Do not enable external repositories, and do not install updates during the installation.

The combination of all of them worked: I booted from USB and 20 minutes later everything worked. But it really took trying them all.
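Since the tips above hinge on Legacy vs. UEFI booting, a quick sanity check after the install is to ask the kernel which mode it actually came up in. This is a standard Linux sysfs check, nothing Dell-specific:

```shell
# If /sys/firmware/efi exists, the kernel was booted via UEFI firmware;
# otherwise it came up in legacy (BIOS/CSM) mode.
boot_mode() {
  if [ -d /sys/firmware/efi ]; then
    echo "UEFI"
  else
    echo "Legacy (BIOS)"
  fi
}

boot_mode
```

If this prints "UEFI" even though you set Legacy mode in the firmware setup, the setting did not stick and the installer will lay down a different bootloader than you expect.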


Wireless works, the touchscreen works, the touchpad works, HDMI works.

The screen resolution is an awesome 3000x1940.

PS. For those who wonder: I'm still on Fedora 20 :)

by Kris Buytaert at August 12, 2014 07:17 PM

Dieter Plaetinck

Darktable: a magnificent photo manager and editor

A post about the magnificent darktable photo manager/editor and why I'm abandoning pixie
read more

August 12, 2014 12:36 PM

Frank Goossens

Next Autoptimize eliminates render-blocking CSS in above-the-fold content

Although current versions of Autoptimize can already tackle Google PageSpeed Insights’ “Eliminate render-blocking CSS in above-the-fold content” tip, the next release will allow you to do so in an even better way. As from version 1.9 you’ll be able to combine the best of both “inline CSS” and “defer CSS” worlds. “Defer” effectively becomes “Inline and defer“, allowing you to specify the “above the fold CSS” which is then inlined in the head of your HTML, while your normal autoptimized CSS is deferred until the page has finished loading.

Other improvements in the upcoming Autoptimize 1.9.0 include:

On the todo-list: testing, some translation updates (I'll contact you translators in the coming week) and updating the readme.txt.

The first test version of what will become 1.9.0 (still tagged 1.8.5 for now, though) has been committed to’s plugin SVN and can be downloaded here. Anyone wanting to help test this new release, go and grab your copy and provide me with feedback.

by frank at August 12, 2014 04:04 AM

August 11, 2014

Dries Buytaert

Help me write my DrupalCon Amsterdam keynote

For my DrupalCon Amsterdam keynote, I want to try something slightly different. Instead of coming up with the talk track myself, I want to "crowdsource" it. In other words, I want the wider Drupal community to have direct input on the content of the keynote. I feel this will provide a great opportunity to surface questions and ideas from the people who make Drupal what it is.

In the past, I've done traditional surveys to get input for my keynote and I've also done keynotes that were Q&A from beginning to end. This time, I'd like to try something in between.

I'd love your help to identify the topics of interests (e.g. scaling our community, future of the web, information about Drupal's competitors, "headless" Drupal, the Drupal Association, the business of Open Source, sustaining core development, etc). You can make your suggestions in the comments of this blog post or on Twitter (tag them with @Dries and #driesnote). I'll handpick some topics from all the suggestions, largely based on popularity but also based on how important and meaty I think the topic is.

Then, in the lead-up to the event, I'll create discussion opportunities on some or all of the topics so we can dive deeper on them together, and surface various opinions and ideas. The result of those deeper conversations will form the basis of my DrupalCon Amsterdam keynote.

So what would you like me to talk about? Suggest your topics in the comments of this blog post or on Twitter by tagging your suggestions with #driesnote and/or @Dries. Thank you!

by Dries at August 11, 2014 12:56 PM