Planet Grep is open to all people who either have the Belgian nationality or live in Belgium, and who actively work with or contribute to Open Source/Free software.

About Planet Grep...

Other planets can be found at the Planet Index.

A complete feed is available in a number of syndication formats: RSS 1.0, RSS 2.0, FOAF, and OPML.

The layout of this site was done by Gregory

December 21, 2014

Mattias Geniar

Explicitly Approving (Whitelisting) Cookies in Varnish With Libvmod-Cookie

In all my previous Varnish 3.x configs, I've always used blacklisting as the way of handling cookies: you explicitly tell which cookies you want to remove in vcl_recv, and all others remain. But just as with security measures, whitelisting is always better than blacklisting.

Even if you fully manage your site and all code, you may not have full control over 3rd party (client-side) advertisers that use tracking cookies. And those cookies may, even if you don't approve of the method, be placed under your domain. So the next request to your site suddenly includes (random) tracking cookies, unique for each visitor, and it destroys the caching in vcl_hash.

Please note this guide is focused on Varnish 3.x. Varnish 4.x will have the Cookie VMOD available by default, no custom compiles required!

Blacklisting / Removing cookies in Varnish

This is the common method of removing cookies in vcl_recv.

set req.http.Cookie = regsuball(req.http.Cookie, "has_js=[^;]+(; )?", "");

And you would repeat that line 1, 10 or 100 times, depending on what cookies you want to remove.
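For instance, a hypothetical blacklist might strip the has_js cookie from above plus Google Analytics tracking cookies, and drop the header entirely when nothing remains (the cookie names beyond has_js are illustrative):

```vcl
sub vcl_recv {
  # Strip cookies we never want to vary the cache on
  set req.http.Cookie = regsuball(req.http.Cookie, "has_js=[^;]+(; )?", "");
  set req.http.Cookie = regsuball(req.http.Cookie, "__utm[a-z]+=[^;]+(; )?", "");
  set req.http.Cookie = regsuball(req.http.Cookie, "_ga=[^;]+(; )?", "");

  # If nothing is left, remove the header so the request becomes cacheable
  if (req.http.Cookie == "") {
    unset req.http.Cookie;
  }
}
```

Every new tracking cookie an advertiser introduces means another line here, which is exactly the maintenance burden whitelisting avoids.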

Implementing a whitelist of allowed cookies

In order to use a whitelisting approach, you can use the libvmod-cookie VMOD for Varnish 3.x. It allows more fine-grained control over what cookies are preserved and which ones get removed.

In order to use the VMOD, you need to compile it from source. And to compile the VMOD from source, you also need the Varnish source files somewhere on your system. You can still keep the RPM packages from Varnish installed, but the source is needed to compile the VMOD against it.

Preparing the Varnish source

In this guide, I'll use Varnish 3.0.6 as the base version to compile against. Download the source and run make to build the binary files, but do not run make install, as you want to keep the packages from the upstream Varnish repo intact.

$ cd /usr/local/src
$ wget ""
$ tar xzvf varnish-3.0.6.tar.gz
$ cd varnish-3.0.6
$ ./configure
$ make

Now you have the Varnish source and a built binary available in /usr/local/src/varnish-3.0.6. We'll compile the VMOD against this.

Download and install the libvmod-cookie varnish module

Next, download and compile the libvmod-cookie module.

$ cd /usr/local/src
$ wget ""
$ tar xzvf 3.0
$ cd lkarsten-libvmod-cookie-fe38614
$ ./configure VARNISHSRC=/usr/local/src/varnish-3.0.6 VMODDIR=/usr/lib64/varnish/vmods/
$ make && make install

The result is a vmod module installed in /usr/lib64/varnish/vmods/.

$ ls -alh /usr/lib64/varnish/vmods/
-rwxr-xr-x 1 root root  955 Dec 21 21:28
-rwxr-xr-x 1 root root  42K Dec 21 21:28
-rwxr-xr-x 1 root root  16K Oct 16 16:30

The libvmod_std is the standard library included with Varnish. The libvmod_cookie is the new binary module, and you can now include the VMOD in your VCL code.

import cookie;

sub vcl_recv {
  # ...
}

Whitelisting cookies using filter_except() in libvmod-cookie

Now that the VMOD is installed and ready for use, you can use the powerful filter_except() function: pass it a comma-separated list of cookies to allow, and all others will be removed.

sub vcl_recv {
  # Let the module parse the "Cookie:" header from the client
  cookie.parse(req.http.cookie);

  # Filter all except these cookies from it
  cookie.filter_except("cookie1,cookie2");

  # Set the "Cookie:" header to the parsed/filtered value, removing all unnecessary cookies
  set req.http.cookie = cookie.get_string();
}

Any other cookie besides cookie1 and cookie2 will be removed from the Cookie: header now.

To debug this and to test what cookies are removed and which ones remain, look at my post about seeing which cookies get stripped in the VCL.

Next up: figuring out how to pass regex's along. ;-)


A few things to keep in mind:

  1. VMODs are compiled, so it's better to make packages out of them
  2. Since VMODs are compiled against a specific version, they need to match the Varnish version (e.g. Varnish 3.0.5 from RPM/yum repos combined with VMODs compiled against the 3.0.6 source can mean trouble)
  3. For automating this at scale, you need the VMODs in your own repository
  4. The filter_except() call accepts strings, not regexes -- to match against regexes, you would need to loop over all values

To be continued!

The post Explicitly Approving (Whitelisting) Cookies in Varnish With Libvmod-Cookie appeared first on

by Mattias Geniar at December 21, 2014 08:48 PM

List The Files In A Yum/RPM Package

It's not possible with a default yum install, but you can install the yum-utils package, which provides tools to list the contents of a given package.

$ yum -y install yum-utils

Now the repoquery tool is available, which lets you look inside both installed and not-yet-installed packages.

$ repoquery --list varnish-libs-devel

Very useful to combine with yum whatprovides */something searches to find exactly the package you need!
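As a hypothetical end-to-end example of that combination (the exact package names and versions shown are illustrative and will differ per system):

```shell
$ yum whatprovides '*/repoquery'
yum-utils-1.1.30-30.el6.noarch : Utilities based around the yum package manager
$ repoquery --list yum-utils | grep bin/
/usr/bin/repoquery
...
```

First whatprovides tells you which package ships the file, then repoquery --list confirms the full contents without installing anything.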

The post List The Files In A Yum/RPM Package appeared first on

by Mattias Geniar at December 21, 2014 07:40 PM

Setting HTTPS $_SERVER variables in PHP-FPM with Nginx

A typical Nginx setup uses fastcgi_pass directives to hand the request to the PHP-FPM daemon. If you were running an Apache setup, Apache would automatically set the HTTPS server variable, which PHP code can check via $_SERVER['HTTPS'] to determine whether the request is HTTP or HTTPs.

In fact, that's how most CMSs (WordPress, Drupal, ...) determine the server environment. They'll also use it for redirects from HTTP to HTTPs or vice versa, depending on the config. So the existence of the $_SERVER['HTTPS'] variable is pretty crucial.

Nginx doesn't pass the variable by default to the PHP-FPM daemon when you use fastcgi_pass, but it is easily added.

A basic example in Nginx looks like this.

include fastcgi_params;
fastcgi_split_path_info ^(.+\.php)(.*)$;
fastcgi_index  index.php;
fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;

# Check if the PHP source file exists (prevents the cgi.fix_pathinfo=1 exploits)
if (-f $request_filename) {
    fastcgi_pass   backend_php; # This backend is defined elsewhere in your Nginx configs
}

The example above is a classic one that simply passes everything to PHP. To make PHP-FPM aware of your HTTPs setup, you need to add a fastcgi_param environment variable to the config.

include fastcgi_params;
fastcgi_split_path_info ^(.+\.php)(.*)$;
fastcgi_index  index.php;
fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;

# Make PHP-FPM aware that this vhost is HTTPs enabled
fastcgi_param  HTTPS 'on';

# Check if the PHP source file exists (prevents the cgi.fix_pathinfo=1 exploits)
if (-f $request_filename) {
    fastcgi_pass   backend_php; # This backend is defined elsewhere in your Nginx configs
}

The solution is in the fastcgi_param HTTPS 'on'; line, which passes the HTTPS variable to the PHP-FPM daemon.
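On the PHP side, code can then branch on that variable. A minimal sketch (the $isHttps variable name is mine; checking for 'off' as well mirrors what most CMSes do, since IIS sets HTTPS to "off"):

```php
<?php
// With "fastcgi_param HTTPS 'on';" set in the Nginx vhost,
// PHP-FPM exposes the value in $_SERVER['HTTPS'].
$isHttps = !empty($_SERVER['HTTPS']) && strtolower($_SERVER['HTTPS']) !== 'off';
```

Without the fastcgi_param line, $_SERVER['HTTPS'] is simply absent and this evaluates to false even on an SSL vhost.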

The post Setting HTTPS $_SERVER variables in PHP-FPM with Nginx appeared first on

by Mattias Geniar at December 21, 2014 06:40 PM

Mark Van den Borre

The Spirit of the 1914 Christmas Truce

The Spirit of the 1914 Christmas Truce

by Mark Van den Borre ( at December 21, 2014 08:04 AM

December 19, 2014

Wouter Verhelst

joytest UI improvements

After yesterday's late night accomplishments, today I fixed up the UI of joytest a bit. It's still not quite what I think it should look like, but at least it's actually usable with a 27-axis, 19-button "joystick" (read: a PS3 controller). Things may disappear off the edge of the window, but you can scroll towards it. Also, I removed the names of the buttons and axes from the window, and installed them as tooltips instead. Few people will be interested in the factoid that "button 1" is a "BaseBtn4", anyway.

The result now looks like this:

If you plug in a new joystick, or remove one from the system, then as soon as udev finishes up creating the necessary device node, joytest will show the joystick (by name) in the treeview to the left. Clicking on a joystick will show that joystick's data to the right. When one pushes a button, the relevant checkbox will be selected; and when one moves an axis, the numbers will start changing.

I really should have some widget to actually show the axis position, rather than some boring numbers. Not sure how to do that.

December 19, 2014 10:21 PM

Xavier Mertens

The Marketing of Vulnerabilities

There is a black market for vulnerabilities, nothing new there! A brand-new 0-day can be sold for huge amounts of money. The goal of this blog post is not to cover this market of vulnerabilities but the way some of them are disclosed today. It's just a reflection I had when reading some news about the RomPager vulnerability:


2014 is almost behind us, and we faced some critical vulnerabilities in its last months! While some of them affected very critical and widely deployed software components, some were also publicly released in the wild with all the classic trappings of a commercial marketing campaign.

Previously, vulnerabilities were disclosed on specific communication channels like mailing lists (full-disclosure being one of the best known). Then came social networks like Twitter (which remains a key player for broadcasting information) but, over the last months, we saw more and more vulnerabilities disclosed with:

Vulnerabilities are referenced via assigned IDs. The most used reference system is called "CVE", or "Common Vulnerabilities and Exposures". Security professionals are always speaking in terms of such identifiers. To give you an example, the RomPager vulnerability is referenced as CVE-2014-9222. But some vulnerabilities receive a name and all the marketing material associated with it. Here are some examples: Heartbleed, Poodle, Sandworm or, the latest, the Misfortune Cookie.

Heartbleed, Misfortune Cookie, Sandworm, Poodle

Such vulnerabilities are critical and affect millions of devices and, thanks to their marketing presence, they were also relayed by the regular mass media to the general public, sometimes with good and sometimes with very bad coverage. Often, behind this marketing material, there are big players in the infosec landscape fighting to be the first to release the vulnerability. Examples:

While speaking about major vulnerabilities to a broader audience is of course a good initiative, it must be done in the right way. I'm afraid that more and more vulnerabilities will become known to the general public, but keep in mind that they are only the tip of the iceberg. New vulnerabilities are found every day and some of them are also very nasty! The graphic below gives you an idea of the number of CVEs assigned per month in 2014 (7739 as of today!). As you can see, it's far more than the four vulnerabilities mentioned above.

CVE 2014


To summarize: not only the "general public" vulnerabilities must be addressed. All of them are important and could lead to a complete compromise of your infrastructure (remember: the weakest link). I hate marketing, also in information security! ;-)

by Xavier at December 19, 2014 04:01 PM

Mattias Geniar

Sony vs. North Korea: 0 – 0?

The fact that the code was written on a PC with Korean locale & language actually makes it less likely to be North Korea. Not least because they don’t speak traditional “Korean” in North Korea, they speak their own dialect and traditional Korean is forbidden.
Marc Rogers

And there are many more arguments why North Korea would not be behind these recent Sony hacks.

The post Sony vs. North Korea: 0 – 0? appeared first on

by Mattias Geniar at December 19, 2014 12:22 PM

December 18, 2014

Wouter Verhelst

Introducing libjoy

I've owned a Logitech Wingman Gamepad Extreme since pretty much forever, and although it's been battered over the years, it's still mostly functional. As a gamepad, it has 10 buttons. What's special about it, though, is that the device also has a mode in which a gravity sensor kicks in and produces two extra axes, allowing me to pretend I'm really talking to a joystick. It looks a bit weird though, since you end up playing your games by wobbling the gamepad around a bit.

About 10 years ago, I first learned how to write GObjects by writing a GObject-based joystick API. Unfortunately, I lost the code at some point due to an overzealous rm -rf call. I had planned to rewrite it, but that never really happened.

About a year back, I needed to write a user interface for a customer where a joystick would be a major part of the interaction. The code there was written in Qt, so I wrote an event-based joystick API in Qt. As it happened, I also noticed that jstest would output names for the actual buttons and axes; I had never noticed this because, with my 10 buttons and 4 axes producing a lot of output by default, the jstest program would scroll the names off my screen whenever I plugged it in. But the names are there, and reading them is not too difficult.

Refreshing my memory on the joystick API made me remember how much fun it is, and I wrote the beginnings of what I (at the time) called "libgjs", for "Gobject JoyStick". I didn't really finish it though, until today. I did notice in the mean time that someone else released GObject bindings for javascript and also called that gjs, so in the interest of avoiding confusion I decided to rename my library to libjoy. Not only will this allow me all kinds of interesting puns like "today I am releasing more joy", it also makes for a more compact API (compare joy_stick_open() against gjs_joystick_open()).

The library also comes with a libjoy-gtk that creates a GtkListStore* which is automatically updated as joysticks are added to and removed from the system; and a joytest program, a graphical joystick test program which also serves as an example of how to use the API.

still TODO:

What's there is functional, though.

Update: if you're going to talk about code, it's usually a good idea to link to said code. Thanks, Emanuele, for pointing that out ;-)

December 18, 2014 11:29 PM

Mattias Geniar

Interviewing Systems Administrators

... it probably makes more sense for you to ask open ended questions about things you care about. "So, we have a lot of web servers here. What’s your experience with managing them?"


We all have our favorite pet technologies, but most of us are able to put personal preferences aside in favor of the prevailing consensus. A subset of technologists are unable to do this. "You use Redis? Why?! It’s a steaming pile of crap!"

There's a lot of truth in SysAdvent's blogpost about hiring systems administrators.

The post Interviewing Systems Administrators appeared first on

by Mattias Geniar at December 18, 2014 07:08 PM

FOSDEM organizers

Guided sightseeing tours

If you intend to bring your non-geek partner and/or kids to FOSDEM, they may be interested in exploring Brussels while you attend the conference. Like previous years, FOSDEM is organising sightseeing tours.

December 18, 2014 03:00 PM

Joram Barrez

Activiti + Spring Boot docs and example

With the Activiti 5.17.0 release going out any minute now, one of the things we did was writing down documentation on how to use this release together with Spring Boot. If you missed it, me and my Spring friend Josh Long did a webinar a while ago about this. You can find the new docs already […]

by Joram Barrez at December 18, 2014 09:18 AM

Frederic Hornain

Red Hat 2015 Customer Priorities Survey



Customers reporting interest in cloud, containers, Linux, and OpenStack for 2015

More information at

Kind Regards


by Frederic Hornain at December 18, 2014 08:57 AM

December 17, 2014

Mattias Geniar

Azure Cloud Outage Root Cause Analysis

I don't particularly enjoy outages, but I do like reading their root cause analyses afterwards. They're a valuable way to learn about the mistakes that were made, and they often share a lot of insight into (the technology behind) an organization that you normally wouldn't get to see.

And last November's Azure outage is no different. A very detailed write-up with enough internals to keep things interesting. The outage occurred as a result of a planned maintenance, to deploy an improvement to the storage infrastructure that would result in faster Storage Tables.

During this deployment, there were two operational errors:

1. The standard flighting deployment policy of incrementally deploying changes across small slices was not followed.

2. Although validation in test and pre-production had been done against Azure Table storage Front-Ends, the configuration switch was incorrectly enabled for Azure Blob storage Front-Ends.

As with most problems, they're human-induced. Technology doesn't often fail by itself; it fails when engineers make mistakes or implement the technology badly. In this case, a combination of several human errors was the cause.

In summary, Microsoft Azure had clear operating guidelines but there was a gap in the deployment tooling that relied on human decisions and protocol. With the tooling updates the policy is now enforced by the deployment platform itself.

Not everything can be solved with procedures. Even with every step clearly outlined, it still relies on engineers following every step to the letter, and not making mistakes. But we make mistakes. We all do.

It's just hoping those mistakes don't occur during critical times.

The post Azure Cloud Outage Root Cause Analysis appeared first on

by Mattias Geniar at December 17, 2014 09:28 PM


Nginx & SPDY error: net::ERR_SPDY_PROTOCOL_ERROR
I recently enabled SPDY on this blog, and once in a while I got the following error in my browser, causing chunks of the page (javascript, CSS, ...) to stop loading.



The solution turned out to be really simple, at least in this case, by just looking at the error logs produced by this vhost.

==> /var/www/ <==
2014/12/18 00:15:01 [crit] 1041#0: *3 open() "/usr/local/nginx/fastcgi_temp/4/00/0000000004" failed (13: Permission denied) while reading upstream, client:, server:, request: "GET /page.php?..."

To enable SPDY I had upgraded my Nginx to the latest available 1.7.8, but during that installation some file permissions were apparently modified incorrectly. For instance, the FastCGI cache - the one that holds the responses from the PHP-FPM upstream - was owned by user nobody, with write permissions for the owner only.

ls -alh /usr/local/nginx/fastcgi_temp
drwx------ 12 nobody root 4096 Feb 11  2012 fastcgi_temp

And as an obvious result, Nginx couldn't write to that cache directory. I modified the permissions to grant the nginx user write access.

chown nginx:nginx /usr/local/nginx/fastcgi_temp/ -R

And all SPDY protocol errors were gone. Conclusion? Check the error logs when you implement a new protocol in the middle of the night.

The post Nginx & SPDY error: net::ERR_SPDY_PROTOCOL_ERROR appeared first on

by Mattias Geniar at December 17, 2014 09:24 PM

PHP’s OPCache and Symlink-based Deploys

In PHP <= 5.4, there was APC. And in APC, there was the apc.stat option to check the timestamps on files to determine whether they've changed and a new version should be cached. PHP 5.5+, however, introduces OPcache as an alternative to APC, and that works a bit differently.

Now, with PHP 5.5 and the new OPcache things are a bit different. OPcache is not inode-based so we can't use the same trick [symlink-changes for deploys].
Rasmus Lerdorf

In short: if your PHP code lives in the following directory structure, and the current symlink is changed to point to a new release, OPcache won't read the new file, since the path of the file remains the same.

$ ls
current -> releases/20141217084854

If the current symlink changes from releases/20141217084854 to releases/20141217085023, you need to manually clear the OPCache to have it load the new PHP files.
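To see why the symlink switch is invisible to OPcache, a minimal sketch (using /tmp/deploy-demo as a hypothetical stand-in for a real docroot):

```shell
#!/bin/sh
# The path PHP-FPM resolves scripts by never changes across a symlink-based
# deploy, which is exactly why OPcache keeps serving the old bytecode.
set -e
mkdir -p /tmp/deploy-demo/releases/20141217084854 /tmp/deploy-demo/releases/20141217085023
echo "v1" > /tmp/deploy-demo/releases/20141217084854/index.php
echo "v2" > /tmp/deploy-demo/releases/20141217085023/index.php
cd /tmp/deploy-demo
ln -sfn releases/20141217084854 current
cat current/index.php   # prints "v1"
ln -sfn releases/20141217085023 current
cat current/index.php   # prints "v2" on disk, but OPcache still keys on "current/index.php"
```

On disk the content behind current/index.php changed, yet the key OPcache caches by (the path) did not, so the stale bytecode survives until the cache is cleared.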

There is no mechanism (yet?) in OPcache that allows you to stat the source files and check for changes. There are options like opcache.validate_timestamps and opcache.revalidate_freq that allow you to periodically check for changes, but that's always time-based.

So to clear the OPCache after each deploy, you now have two options:

  1. Restart the PHP-FPM daemon (you'll need sudo-rights for this)
  2. Use a tool like cachetool to flush the OPCache via the CLI
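As a sketch of what the second option boils down to: tools like cachetool ultimately trigger PHP's opcache_reset(). A hypothetical deploy hook served through PHP-FPM (not the CLI, which runs a separate OPcache instance) could do the same:

```php
<?php
// Hypothetical clear-cache endpoint: must be requested through PHP-FPM,
// since resetting OPcache from the CLI affects only the CLI's own cache.
if (function_exists('opcache_reset')) {
    opcache_reset(); // invalidates all cached scripts in this FPM pool
}
```

Protect such an endpoint (IP allowlist, secret token), since anyone able to call it can flush your bytecode cache.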

This is something to keep in mind.

The post PHP’s OPCache and Symlink-based Deploys appeared first on

by Mattias Geniar at December 17, 2014 10:48 AM

Frank Goossens

Music form Our Tube: (Can’t) Stand still with Flight Facilities

This one (Flight Facilities featuring Nicky Green) is already a year old, but I just “discovered” it on KCRW.

YouTube Video
Watch this video on YouTube or on Easy Youtube.

Love the eighties drumcomputer sound (as also heard on Kelis & André 3000’s Millionaire).

by frank at December 17, 2014 05:36 AM

December 16, 2014

Mattias Geniar

Force Redirect From HTTP to HTTPs On A Custom Port in Nginx

If you're running an Nginx vhost on a custom port, with SSL enabled, browsing to the site without HTTPs will trigger the following Nginx error.

400 Bad Request
The plain HTTP request was sent to HTTPS port

For instance, a vhost like this, running on a non-standard port :1234 with SSL enabled, will cause that error.

server {
  listen      1234 ssl;
  server_name your_site;
  ssl         on;
}

Redirecting HTTP to HTTPs the old-fashioned way

The classic way to force an HTTP to HTTPs redirection, is to have one vhost listen on port :80 and one on :443, and have the port :80 vhost redirect all traffic to the HTTPs version.

server {
  listen      80;
  # 301 = permanent redirect, 302 = temporary redirect
  return 301 https://$host$request_uri;
}

server {
  listen      443 ssl;
  ssl         on;
}

However, you can not use that trick if you're running HTTPs/SSL on a custom port, as there is no "unsafe" port :80 that you can catch requests on to redirect them.

Forcing HTTPs redirects on non-standard ports

Nginx has a custom HTTP status code, 497, that lets you force a redirect for anyone browsing a vhost via HTTP to the HTTPs version. This is only needed if you're running SSL on a custom port, since otherwise you can use the config shown above for redirecting.

To force the browser to redirect from HTTP to the HTTPs port, do the following.

server {
  listen      1234 ssl;
  ssl         on;
  error_page  497 https://$host:1234$request_uri;
}

Now, anyone reaching the site via the HTTP protocol will be redirected to the HTTPs version.

The post Force Redirect From HTTP to HTTPs On A Custom Port in Nginx appeared first on

by Mattias Geniar at December 16, 2014 10:50 PM

Enable SPDY in Nginx on CentOS 6

SPDY is the protocol designed by Google that later evolved into HTTP/2. Nginx supports this protocol on top of SSL connections, and recent versions are built with the --with-http_spdy_module option enabled!

And seeing as Google is investigating whether it can mark plain HTTP sites as "insecure", this may be the perfect time to consider an SSL certificate for your site, with SPDY enabled.

Install Nginx from the official repositories

For this to work, the easiest setup is to install Nginx from the official repositories. In the case of CentOS 6, that would be the following simple steps.

$ rpm -ivh ""
$ yum install nginx

Your installed version should be at least in the 1.6 release. If you already have Nginx installed from other sources, such as EPEL, you can install the Nginx repository as shown above and update to the latest version via yum clean all && yum update nginx. The version from the Nginx repository is likely to be the latest one available.

Enable SPDY on SSL vhosts

Since SPDY runs on top of SSL/TLS, you need a working SSL-enabled website already. For that, you'll have a config similar to this in your Nginx.

server {
    listen       443 ssl;
    ssl on;
    ssl_certificate ...;
    ssl_certificate_key ...;
}

For a correct SSL configuration, I recommend you have a look at Mozilla's recommended Nginx server configuration, which contains a lot of templates and best practices.

Now, to enable SPDY, first verify that your Nginx version supports the SPDY protocol.

$ nginx -V 2>&1 | grep 'spdy'
configure arguments: --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --with-http_stub_status_module --conf-path=/etc/nginx/ --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --with-http_ssl_module --with-http_spdy_module

The line "--with-http_spdy_module" needs to be present in the argument list.

To enable SPDY, it's as easy as changing this line:

listen       443 ssl;

to this one:

listen       443 ssl spdy;

... and reloading your Nginx config with a service nginx reload.
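Assuming the reload succeeded, you can also check from the command line which protocols the server advertises via NPN (your_site is a placeholder, and the protocol list shown is illustrative):

```shell
$ openssl s_client -connect your_site:443 -nextprotoneg '' < /dev/null
...
Protocols advertised by server: spdy/3.1, http/1.1
...
```

If spdy/3.x shows up in the advertised list, your vhost is serving SPDY.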

Testing for SPDY

There's a simple website that allows you to test your SPDY configuration:


If the test indicates success, you're set!

The post Enable SPDY in Nginx on CentOS 6 appeared first on

by Mattias Geniar at December 16, 2014 09:10 PM

Presentation: Managing Zabbix Hosts with Puppet’s Config Management

At the previous Puppet User Group in Belgium I gave a talk about how to manage Zabbix hosts with Puppet's Exported Resources. After some tweaking to get the presentation OK-ish to publish online, I present you: the slides!

The rationale behind the talk

This is not about how to set up the Zabbix servers that do the actual monitoring. It's about how to get data into your Zabbix configuration: adding hosts with their correct IPs and applying the correct monitoring templates to them.

Since Puppet is used as our config management of choice, it would be a perfect fit to have our monitoring be kept up-to-date automatically as soon as changes are made in our Puppet codebase.

The presentation, when given with verbal comments (aka: a talk), runs for about 40-45 minutes.

Managing Zabbix with Puppet

There is no native Puppet type for managing a Zabbix host in Puppet. That means the code that talks to the Zabbix API has to introduce custom Puppet types and providers to make this work. My version of the zabbixapi Puppet type is available online, but if you're going to look into this, I would recommend checking out express42/zabbixapi. That codebase was the original basis for my work, but it has evolved much further in the meantime and probably contains more features/bugfixes.

Disclaimer: I've not yet tried the express42 version.

Lessons learned

While the Puppet type in itself is no magic, and neither is the use of Puppet's Exported Resources, I did learn a few things while implementing this.

  1. PuppetDB, in the early days, was flaky: I had several times where I had to wipe all my data from PuppetDB. This has since greatly improved.
  2. Naming the items zabbix_* was a bad idea: Puppet is used to abstract away the configs, naming the types zabbix_* implies I'll always use Zabbix. It would have been better to name it monitor_* as a type, with Zabbix as one of the possible providers.
  3. The code needs optimising: adding ~400 hosts into Zabbix made a little over 60.000 Zabbix API calls. There's probably a more efficient way for that.
  4. This scares the crap out of me: I love automation, but having had a few PuppetDB failures I'm hesitant to base all my monitoring configuration on it (after all: no monitoring = no service guarantees). Perhaps my fear will go away in 2015.

If anyone has any remarks or comments, spots any obvious rookie mistakes or has any other feedback of any kind, please let me know!

The post Presentation: Managing Zabbix Hosts with Puppet’s Config Management appeared first on

by Mattias Geniar at December 16, 2014 07:40 PM

Frederic Hornain

When application requests start piling up…












When application requests start piling up, OpenShift Enterprise by Red Hat® can help you keep up with demand. This Platform-as-a-Service (PaaS) lets developers deploy on their time, optimizes your compute resources, and frees you up to focus on the future.

Enable your developers with velocity and stability.

Learn more at :

Kind Regards


by Frederic Hornain at December 16, 2014 05:43 PM

December 15, 2014

Frederic Hornain

[Wildfly-Camel] XML to DB Data Conversion Small Tutorial


In one of my previous posts[1], I explained how you can install Apache Camel[2] on top of Wildfly[3].

In this small tutorial, I am not going to reinvent the wheel. I am just going to explain how you can transform/convert data stored in XML format and upload it to a database of your choice (in this example, H2[4] was chosen as the default database).

Note: H2 is not recommended for production use. I would rather recommend you look at PostgreSQL[5], MongoDB[6], MariaDB[7], etc. instead.

This example, named "camel-jpa", comes from the examples provided with the wildfly-camel project[8].

First of all, here is the tree of the project:

[fhornain@localhost camel-jpa]$ tree
├── pom.xml
└── src
├── main
│   ├── java
│   │   └── org
│   │       └── wildfly
│   │           └── camel
│   │               └── examples
│   │                   └── jpa
│   │                       ├──
│   │                       ├──
│   │                       ├── model
│   │                       │   ├──
│   │                       │   └──
│   │                       ├──
│   │                       └──
│   ├── resources
│   │   ├── META-INF
│   │   │   └── persistence.xml
│   │   └── org
│   │       └── wildfly
│   │           └── camel
│   │               └── examples
│   │                   └── jpa
│   │                       └── model
│   │                           └── jaxb.index
│   └── webapp
│       └── WEB-INF
│           ├── beans.xml
│           ├── customers.jsp
│           ├── customer.xml
│           └── jboss-web.xml
└── test
└── java
└── org
└── wildfly
└── camel
└── examples
└── jpa

26 directories, 14 files

And here is the pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<!--
  Wildfly Camel :: Example :: Camel JPA
  Copyright (C) 2013 - 2014 RedHat
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  See the License for the specific language governing permissions and
  limitations under the License.
-->

<project xmlns="" xmlns:xsi="" xsi:schemaLocation="">


 <name>Wildfly Camel :: Example :: Camel JPA</name>



 <!-- Provided -->

 <!-- Test Scope -->

 <!-- Build -->

 <!-- Profiles -->

As you can see, the project is based on an MVC pattern and is composed of four directories, but for the time being we will only cover three of them, that is to say:

- camel-jpa/src/main/java
- camel-jpa/src/main/resources
- camel-jpa/src/main/webapp

Indeed, the project can then be cut in two parts.

The first part takes care of the content routing and the transformation of the following XML data sample.

<?xml version="1.0" encoding="UTF-8"?>
<cus:customer xmlns:cus="http://org/wildfly/camel/examples/jpa/model/Customer"

Note: you can retrieve this data in the camel-jpa/src/main/webapp/WEB-INF/customer.xml file.
The second part transcribes the content of the database into a web page (see below). We will call it the servlet[12] part.

Note: This outcome can be found at http://localhost:8080/example-camel-jpa/customers





First part of the Project (Content routing and transformation of information)

Here the idea is to unmarshal[11] (deserialize) the content of the XML file through a Camel route (implemented here in JpaRouteBuilder.java) to an entity bean (here Customer.java), which will then be persisted to the H2 database.

In camel-jpa/src/main/java you will find the class in which the Camel route is implemented.

/*
 * #%L
 * Wildfly Camel :: Example :: Camel JPA
 * %%
 * Copyright (C) 2013 - 2014 RedHat
 * %%
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * See the License for the specific language governing permissions and
 * limitations under the License.
 * #L%
 */
package org.wildfly.camel.examples.jpa;

import javax.ejb.Startup;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.transaction.UserTransaction;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.cdi.ContextName;
import org.apache.camel.component.jpa.JpaEndpoint;
import org.apache.camel.model.dataformat.JaxbDataFormat;
import org.springframework.transaction.jta.JtaTransactionManager;
import org.wildfly.camel.examples.jpa.model.Customer;

@Startup
@ApplicationScoped
public class JpaRouteBuilder extends RouteBuilder {

    @Inject
    private EntityManager em;

    @Inject
    UserTransaction userTransaction;

    @Override
    public void configure() throws Exception {

        // Configure our JaxbDataFormat to point at our 'model' package
        JaxbDataFormat jaxbDataFormat = new JaxbDataFormat();
        jaxbDataFormat.setContextPath(Customer.class.getPackage().getName());

        EntityManagerFactory entityManagerFactory = em.getEntityManagerFactory();

        // Configure a JtaTransactionManager by looking up the JBoss transaction manager from JNDI
        JtaTransactionManager transactionManager = new JtaTransactionManager();
        transactionManager.setUserTransaction(userTransaction);
        transactionManager.afterPropertiesSet();

        // Configure the JPA endpoint to use the correct EntityManagerFactory and JtaTransactionManager
        JpaEndpoint jpaEndpoint = new JpaEndpoint();
        jpaEndpoint.setCamelContext(getContext());
        jpaEndpoint.setEntityType(Customer.class);
        jpaEndpoint.setEntityManagerFactory(entityManagerFactory);
        jpaEndpoint.setTransactionManager(transactionManager);

        /*
         * Simple route to consume customer record files from directory input/customers,
         * unmarshall XML file content to a Customer entity and then use the JPA endpoint
         * to persist it to the 'ExampleDS' datasource (see standalone.camel.xml for datasource config).
         * (Route body reconstructed from the comment above; see the example project for the exact URI.)
         */
        from("file:input/customers")
            .unmarshal(jaxbDataFormat)
            .to(jpaEndpoint);
    }
}

In addition to that, we will use JaxbDataFormat to point at our 'model' package in camel-jpa/src/main/java/org/wildfly/camel/examples/jpa/model/:

@javax.xml.bind.annotation.XmlSchema(namespace = "http://org/wildfly/camel/examples/jpa/model/Customer", elementFormDefault = javax.xml.bind.annotation.XmlNsForm.QUALIFIED)
package org.wildfly.camel.examples.jpa.model;


Second part of the Project

In this part of the project, the idea is to transcribe the content of the database into a web page through a servlet[12].

In camel-jpa/src/main/java you also have a repository class, which is only there to send customer information back from the database to the servlet:

/*
 * #%L
 * Wildfly Camel :: Example :: Camel JPA
 * %%
 * Copyright (C) 2013 - 2014 RedHat
 * %%
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * See the License for the specific language governing permissions and
 * limitations under the License.
 * #L%
 */
package org.wildfly.camel.examples.jpa;

import java.util.List;

import javax.inject.Inject;
import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;

import org.wildfly.camel.examples.jpa.model.Customer;

public class CustomerRepository {

    @Inject
    private EntityManager em;

    /**
     * Find all customer records
     * @return A list of customers
     */
    public List<Customer> findAllCustomers() {
        CriteriaBuilder criteriaBuilder = em.getCriteriaBuilder();
        CriteriaQuery<Customer> query = criteriaBuilder.createQuery(Customer.class);
        query.select(query.from(Customer.class));
        return em.createQuery(query).getResultList();
    }
}
The entity manager uses the details defined in your camel-jpa/src/main/resources/META-INF/persistence.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<!--
  Wildfly Camel :: Example :: Camel JPA
  Copyright (C) 2013 - 2014 RedHat

  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<persistence version="2.0"
  xmlns="" xmlns:xsi="">

  <persistence-unit name="camel">
    <properties>
      <property name="" value="create-drop"/>
      <property name="hibernate.show_sql" value="true"/>
    </properties>
  </persistence-unit>
</persistence>
Note: ExampleDS should be the default datasource set in your WildFly instance – see for instance the /wildfly-8.1.0.Final/standalone/configuration/standalone-camel.xml configuration file in your WildFly instance with the additional Camel patches[1].

Finally, here is the servlet[12] implementation:

/*
 * #%L
 * Wildfly Camel :: Example :: Camel JPA
 * %%
 * Copyright (C) 2013 - 2014 RedHat
 * %%
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * See the License for the specific language governing permissions and
 * limitations under the License.
 * #L%
 */
package org.wildfly.camel.examples.jpa;

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

import javax.inject.Inject;
import javax.servlet.ServletConfig;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.wildfly.camel.examples.jpa.model.Customer;

@WebServlet(name = "HttpServiceServlet", urlPatterns = { "/customers/*" }, loadOnStartup = 1)
public class SimpleServlet extends HttpServlet {

    static Path CUSTOMERS_PATH = new File(System.getProperty("")).toPath().resolve("customers");

    @Inject
    private CustomerRepository customerRepository;

    @Override
    public void init(ServletConfig config) throws ServletException {

        // Copy WEB-INF/customer.xml to the data dir
        ServletContext servletContext = config.getServletContext();
        try {
            InputStream input = servletContext.getResourceAsStream("/WEB-INF/customer.xml");
            Path xmlPath = CUSTOMERS_PATH.resolve("customer.xml");
            Files.copy(input, xmlPath);
        } catch (IOException ex) {
            throw new ServletException(ex);
        }
    }

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        /*
         * Simple servlet to retrieve all customers from the in memory database for
         * output and display on customers.jsp
         */
        List<Customer> customers = customerRepository.findAllCustomers();

        request.setAttribute("customers", customers);
        request.getRequestDispatcher("/WEB-INF/customers.jsp").forward(request, response);
    }
}
This object is called by the camel-jpa/src/main/webapp/WEB-INF/customers.jsp JSP file when you access the following URL: http://localhost:8080/example-camel-jpa/customers

<%--
 Wildfly Camel :: Example :: Camel JPA
 Copyright (C) 2013 - 2014 RedHat

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 See the License for the specific language governing permissions and
 limitations under the License.
--%>
<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<%@ taglib uri="" prefix="c" %>
<c:forEach var="customer" items="${customers}">
 ${customer.firstName} ${customer.lastName}

Now you can compile your code using the following command at the root of your project, /camel-jpa:

[fhornain@localhost camel-jpa]$ mvn clean package
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building Wildfly Camel :: Example :: Camel JPA 2.0.0.CR1
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 5.491s
[INFO] Finished at: Mon Dec 15 15:10:01 CET 2014
[INFO] Final Memory: 36M/333M
[INFO] ------------------------------------------------------------------------

and then deploy your code to your WildFly instance[1] with its additional Camel patches.

[fhornain@localhost target]$ pwd
[fhornain@localhost target]$ cp -pR example-camel-jpa-2.0.0.CR1.war /wildfly-8.1.0.Final/standalone/deployments/

Then, you can play with the example.

You should have a customers directory in /wildfly-8.1.0.Final/standalone/data.

Copy the camel-jpa/src/main/webapp/WEB-INF/customer.xml file into that /wildfly-8.1.0.Final/standalone/data/customers directory.

[fhornain@localhost WEB-INF]$ pwd


[fhornain@localhost WEB-INF]$ cp -pR customer.xml /wildfly-8.1.0.Final/standalone/data/customers

You should now see an additional customer in the following web page : http://localhost:8080/example-camel-jpa/customers

New Web screenshot

N.B. If you are looking for certified and supported enterprise solutions, please consider Red Hat JBoss EAP[8] or Red Hat JBoss Fuse[9].

Kind Regards



by Frederic Hornain at December 15, 2014 08:27 PM

Kristof Willen

Perl advent calendar 2014


Holy chestnuts! I almost forgot to mention the Perl advent calendar 2014, your source of lesser-known Perl modules. What CPAN modules would Santa use this year to deliver all our presents?

by kristof at December 15, 2014 07:07 PM

Mattias Geniar

Varnish FetchError http first read error: -1 11 (Resource temporarily unavailable)

Just like the "straight insufficient bytes" error, this is another error you can see in your varnishlog output.

It looks like this.

11 VCL_return   c hash
11 VCL_call     c pass pass
11 FetchError   c http first read error: -1 11 (Resource temporarily unavailable)
11 VCL_call     c error deliver
11 VCL_call     c deliver deliver
11 TxProtocol   c HTTP/1.1
11 TxStatus     c 503
11 TxResponse   c Service Unavailable
11 TxHeader     c Content-Type: text/html; charset=utf-8
11 TxHeader     c Content-Length: 686
11 TxHeader     c Accept-Ranges: bytes
11 TxHeader     c Age: 15
11 TxHeader     c Connection: close
11 TxHeader     c x-Cache: uncached
11 TxHeader     c x-Hits: 0

What's happening

The error "HTTP first read error" can indicate that your Varnish config is not waiting long enough for the backend to respond. A default backend definition in Varnish will wait for 5 seconds for a "first byte" from the upstream. If it doesn't respond in that time, it'll close the connection and throw an HTTP 503 error.

The quick fix (but not the best)

Modify your backend definition to look like this, to allow for longer waiting responses. It's especially the .first_byte_timeout option that will make the difference.

backend my_backend {
  .host                   = "";
  .port                   = "80";

  # How long to wait before we receive a first byte from our backend?
  .first_byte_timeout     = 300s;

  # How long to wait for a backend connection?
  .connect_timeout        = 3s;

  # How long to wait between bytes received from our backend?
  .between_bytes_timeout  = 10s;
}
The example above will wait for 300 seconds (= 5 minutes) for the backend to respond with the first byte.

The error mostly pops up when a POST is made to the server but the request is too slow to process, causing the server to take longer to send a reply. Increasing the upstream timeouts can help, but you may not want to do this for all requests.

Setting a longer timeout for certain pages

In case you don't want longer timeouts for all pages, it's a good idea to configure 2 backends; one that you can use for "normal" situations and one for requests that you know take longer. For instance, a config like this might help you. You define a second backend, with longer timeouts and in your Varnish VCL, you set logic for switching between these backends.

backend my_backend_normal {
  .host                   = "";
  .port                   = "80";
}

backend my_backend_longwait {
  .host                   = "";
  .port                   = "80";

  # How long to wait before we receive a first byte from our backend?
  .first_byte_timeout     = 300s;

  # How long to wait for a backend connection?
  .connect_timeout        = 3s;

  # How long to wait between bytes received from our backend?
  .between_bytes_timeout  = 10s;
}

sub vcl_recv {
  # Set the default backend server
  set req.backend = my_backend_normal;

  # Set the backend for POST requests
  if (req.request == "POST") {
    set req.backend = my_backend_longwait;
  }

  # Or set a longer-waiting backend for specific URL schemes
  if (req.url ~ "^/some/very/long/page$") {
    set req.backend = my_backend_longwait;
  }
}

This gives you the benefit of having smaller timeouts for "normal" pages, and selective longer timeouts for pages that you know will take longer to respond.

The post Varnish FetchError http first read error: -1 11 (Resource temporarily unavailable) appeared first on

by Mattias Geniar at December 15, 2014 06:05 PM

Kristof Willen

Oneplus One

Mobile phones

My trusty Galaxy Nexus has been by far one of my most beloved phones: I loved the design, the openness and the available development scene (custom ROMs). It ran ParanoidAndroid most of the time, but when KitKat wasn't available anymore for the GNex, and because PA4 just sucked, I switched to Shiny ROM during its last months.

However, the internal storage was way too small (16GB), way too slow, and the battery was mediocre to say the least. So at the end of July, I started looking around for a replacement. Initially, I planned on waiting until the Nexus 6 would be available, but rumours about its 6 inch screen made me fear that its price tag would be just as impressive.

My eye fell on the OnePlus One, and despite the questions about OnePlus' ability to deliver support, I decided at the end of July to bite the bullet and purchase one. An invite was easily found on G+, and at the beginning of August I got the phone in the mail. Impressive in size, this phone sits deep in phablet territory. However, you quickly get used to its size. I wasn't really impressed with the sandstone back, and even feared it would be too fragile and damage-prone, but after 5 months it has proven to be quite durable. I'm not so wild about the design either; I would have liked some more rounded corners.

But apart from that, my experience with the phone has been all-round positive: the phone is fast thanks to the Snapdragon 801 and 3GB of memory, and the 64GB of storage is bliss. The battery mostly lasts 2 days, even with moderate to heavy use. Add CyanogenMod on the software side, combined with the Xposed Framework, and count me a happy man.

by kristof at December 15, 2014 04:25 PM

Xavier Mertens

Automatic MIME Parts Scanning with VirusTotal

Here is a Python script that I developed for my personal use; I decided to release it because I think it could be helpful for many of you. In 2012, I started a project called CuckooMX. The goal was to automatically scan attachments in emails with Cuckoo to look for potentially malicious files. Unfortunately, the project never reached a milestone where it could be used smoothly. Maintaining a set of Cuckoo sandboxes is really a pain and consumes precious computing resources, so why not use the cloud? Yeah, the evil cloud can also be useful!

I wrote a new Python script which extracts MIME parts from emails and checks them against VirusTotal. I'm using it to scan my spam folder. The domain has been registered since January 2001, which means that I have email addresses in almost all spam lists over the world! Besides scanning some private addresses, I have a catch-all address which sometimes receives very interesting emails! The last update was to integrate the script with Elasticsearch for better reporting.

The implemented features are:

The primary purpose of this tool is to automate the scan of attachments for juicy files. It does NOT protect (no files are blocked). Here is an example of logged result:

Nov 18 13:48:25 marge[5225]: File: 7ce782ba4e23d6cf7b4896f9cd7481cc.obj \
     (7ce782ba4e23d6cf7b4896f9cd7481cc) Score: 0/55 Scanned: 2014-11-17 08:29:14 (1 day, 5:19:11)
Dec 12 18:41:20 marge[1104]: Processing zip archive:
Dec 12 18:41:21 marge[1104]: File: VOICE748-348736.scr \
     (acb05e95d713b1772fb96a5e607d539f) Score: 38/53 Scanned: 2014-11-13 15:45:04 (29 days, 2:56:17)

If the file has already been scanned by VirusTotal, its score is returned as well as the scan time (+ the time difference). If the file is unknown, it is uploaded for analysis. Optionally, the VirusTotal JSON reply can be indexed by Elasticsearch to generate live dashboards:

ELK VirusTotal Dashboard

(Click to enlarge)

The script can be used from the command line to parse data from STDIN or (as I do) it can be used from a Procmail config file (or any other mail handling tool):

* ^X-Spam-Flag: YES
    | /usr/local/bin/ -d /tmp/mime -c /etc/mime2vt.conf

The script is available here. If you’ve ideas to improve it, please share!
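The core of such a script — walking the MIME parts of a message and hashing each attachment before querying VirusTotal — can be sketched in a few lines of Python. This is an illustrative simplification, not the actual mime2vt code; the function name is made up, and the VirusTotal lookup itself is left as a comment to keep the sketch offline:

```python
import email
import hashlib
from email.message import EmailMessage

def extract_attachments(raw_mail: bytes):
    """Return (filename, md5, payload) for every non-multipart MIME part."""
    msg = email.message_from_bytes(raw_mail)
    results = []
    for part in msg.walk():
        if part.get_content_maintype() == "multipart":
            continue  # container part, no payload of its own
        payload = part.get_payload(decode=True)
        if not payload:
            continue
        name = part.get_filename() or "unnamed." + part.get_content_subtype()
        results.append((name, hashlib.md5(payload).hexdigest(), payload))
    return results

# A real mime2vt-style script would now query VirusTotal's file/report
# endpoint with each MD5 (and upload files VirusTotal has never seen).

# Demo: build a message with one attachment and extract its parts.
demo = EmailMessage()
demo.set_content("see attachment")
demo.add_attachment(b"MZ\x90\x00\x03", maintype="application",
                    subtype="octet-stream", filename="invoice.scr")
for name, md5sum, _ in extract_attachments(demo.as_bytes()):
    print(name, md5sum)
```

The MD5 is what makes the lookup cheap: if VirusTotal already knows the hash, no upload is needed at all.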

by Xavier at December 15, 2014 04:03 PM

Frank Goossens

A copywriter doesn't strike …

… but writes about his displeasure with Michel I:

I have never been on strike, and I'm not planning to in the near future. Today I'm simply going to work. But that doesn't mean I approve of your government's course, as Zuhal Demir claims in her opinion piece (DM 10/12). And maybe I'm not alone.

No, the man is indeed not alone, certainly not judging by how often this article was linked and liked on Facebook.

Update: the question remains whether a non-striker, however outraged about the unfairly distributed austerity measures, makes enough of an impression on the government. I keep doubting whether I shouldn't have actually gone on strike.

by frank at December 15, 2014 02:42 PM

Mattias Geniar

Roundup: Belgacom Hack, Defcon, Varnish, HTTPs, Chrome and PHP

Here's what happened last week on this blog, just in case you missed it.

I'll do these round-ups on a weekly basis -- assuming I have enough content every week to actually summarise.

The post Roundup: Belgacom Hack, Defcon, Varnish, HTTPs, Chrome and PHP appeared first on

by Mattias Geniar at December 15, 2014 09:10 AM

December 14, 2014

Mattias Geniar

Operation Socialist: Inside The Belgacom Hack By GCHQ

This is a really interesting article about how the GCHQ managed to hack Belgacom, one of the largest telcos in Belgium.

Top-secret GCHQ documents name three male Belgacom engineers who were identified as targets to attack.


GCHQ monitored the browsing habits of the engineers, and geared up to enter the most important and sensitive phase of the secret operation. The agency planned to perform a so-called “Quantum Insert” attack, which involves redirecting people targeted for surveillance to a malicious website that infects their computers with malware at a lightning pace.

The Intercept

A carefully crafted phishing attack against 3 identified Belgacom engineers was sufficient to bring down the company's defences. In all likelihood, this wasn't those engineers' fault. Everyone eventually falls for a phishing scam; it just depends on the complexity of the scam. And if that doesn't work, they'll get tricked via social engineering.

So how is a sophisticated attack discovered?

... employees of Belgacom’s BICS subsidiary complained about problems receiving emails. The email server had malfunctioned, but Belgacom’s technical team couldn’t work out why. The glitch was left unresolved until June 2013, when there was a sudden flare-up. After a Windows software update was sent to Belgacom’s email exchange server, the problems returned, worse than before.

The Intercept

In all likelihood, a software bug on the part of the malware authors. Something went wrong in their malware that caused external services to go down. And that gets noticed. As long as all systems continue to work as they always do, nobody notices a hack inside their network. This is also the reason why many malware authors keep their compromised systems "secure" and working: to prevent others from penetrating the same servers as well.

In an organisation as large as Belgacom's, it's only a matter of time before someone slips up. The only real defence is segregation of duties: making sure everyone only has access to their own systems and their own data. But if you go high enough in the hierarchy, you'll eventually find someone with access to everything, and those people will always be the targets ...

The entire story is a nice in-depth look into what happened, and I can highly recommend it.

The post Operation Socialist: Inside The Belgacom Hack By GCHQ appeared first on

by Mattias Geniar at December 14, 2014 07:58 PM

Reload Varnish VCL without losing cache data

You can reload the Varnish VCL configuration without actually restarting Varnish. A restart would stop the varnishd process and start it anew, clearing all the cache it has built up in the meantime. But you can also reload the varnish configurations, to load your new VCL without losing the cache.

Beware though: there are times when you do want to clear your cache on VCL changes. For instance, healthchecks on backend definitions can get pretty funky when a reload of the VCL modifies their IPs, and when you change the vcl_hash routine a restart is advised, since the data in memory would never be used again (because of the hash change). Having said that, there are plenty of reasons to reload a Varnish VCL without losing the data in memory.

Via init.d scripts

Not all init.d scripts have a reload option, and it can be disabled via the sysconfig settings if the RELOAD_VCL option is turned off, but if it's enabled, this is by far the easiest way.

$ /etc/init.d/varnish reload

This will reload the VCL, compile it and make Varnish use the new version.

Via the Varnish Reload VCL script

Varnish ships with a command called varnish_reload_vcl (if you use the official RPM/Deb repos). You can use this via the CLI to make Varnish load the default.vcl file again into memory (assuming that is the VARNISH_VCL_CONF defined in the /etc/sysconfig/varnish file).

$ varnish_reload_vcl
Loading vcl from /etc/varnish/default.vcl
Current running config name is boot
Using new config name reload_2014-12-14T20:19:16
VCL compiled.

available      11 boot
active          0 reload_2014-12-14T20:19:16


The new Varnish VCL is loaded, without losing in-memory data.

Via a custom script

The varnish_reload_vcl command is essentially a Bash script; you can view its contents by looking into /usr/bin/varnish_reload_vcl. If you want to write something similar in your own scripts, here's a stripped/easier version of that script. It uses varnishadm to load the file and eventually switch the config to use it.


#!/bin/bash

# Generate a unique timestamp ID for this version of the VCL
TIME=$(date +%s)

# Load the file into memory
varnishadm -S /etc/varnish/secret -T vcl.load varnish_$TIME /etc/varnish/default.vcl

# Activate this Varnish config
varnishadm -S /etc/varnish/secret -T vcl.use varnish_$TIME

At any time, you can view the available in-memory Varnish VCL configurations using the vcl.list command. You can activate one with the vcl.use command and compile a new one with vcl.load.

To view the available ones, run the following command.

$ varnishadm -S /etc/varnish/secret -T vcl.list
available       8 boot
available       0 varnish_1418585083

Each of those names can be activated with vcl.use.

 $ varnishadm -S /etc/varnish/secret -T vcl.use varnish_1418585083

The post Reload Varnish VCL without losing cache data appeared first on

by Mattias Geniar at December 14, 2014 07:29 PM

Defcon 22 Videos And Slides Released

Anyone into computer security knows the Defcon conferences. There was one in August 2014, and the videos and slides of those presentations have now just been released; they are free to download and enjoy.

To save on bandwidth, there's also a Torrent download available.

As an added bonus, you should check out the HTTP headers that the site sends out. There sure are a few in there that I've never heard of and have never seen in the wild before!

curl -I

HTTP/1.1 200 OK
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Content-Security-Policy: default-src 'self'
X-Content-Security-Policy: default-src 'self'
X-WebKit-CSP: default-src 'self'
Strict-Transport-Security: max-age=16070400; includeSubDomains
X-Permitted-Cross-Domain-Policies: master-only
X-Download-Options: noopen
Cache-Control: public, max-age=600
Content-Language: en
Connection: keep-alive
Date: Sun, 14 Dec 2014 03:04:28 GMT
Last-Modified: Fri, 12 Dec 2014 09:08:14 GMT
Content-Type: text/html; charset=utf-8
Vary: Accept-Encoding
Transfer-Encoding: chunked
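If you want to check another site for the same hardening headers, you can feed it the raw output of a `curl -I` call. Here's a small illustrative Python helper (the header list and function name are my own, not from any library):

```python
# Illustrative helper: given the raw text of a `curl -I` dump, report which
# well-known security headers are present. Header names are compared
# case-insensitively, as HTTP requires.
SECURITY_HEADERS = [
    "X-Frame-Options",
    "X-Content-Type-Options",
    "X-XSS-Protection",
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Permitted-Cross-Domain-Policies",
    "X-Download-Options",
]

def present_security_headers(raw_response: str):
    """Return the subset of SECURITY_HEADERS found in a raw header dump."""
    seen = set()
    for line in raw_response.splitlines():
        if ":" in line:
            seen.add(line.split(":", 1)[0].strip().lower())
    return [h for h in SECURITY_HEADERS if h.lower() in seen]

sample = """HTTP/1.1 200 OK
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=16070400; includeSubDomains
Content-Type: text/html; charset=utf-8"""

print(present_security_headers(sample))
# → ['X-Frame-Options', 'X-Content-Type-Options', 'Strict-Transport-Security']
```

Pipe any site's `curl -I` output into something like this and you'll quickly see how rare the Defcon site's header set really is.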

Happy viewing!

The post Defcon 22 Videos And Slides Released appeared first on

by Mattias Geniar at December 14, 2014 07:07 PM

Combine Apache’s HTTP authentication with X-Forwarded-For IP whitelisting in Varnish

Such a long title for a post. If you want to protect a page or an entire website with HTTP authentication, but also want to whitelist a few fixed IPs (for instance: office or VPN IPs), you can combine both authentication mechanisms in Apache via .htaccess files.

The full example goes like this.

AuthName "User + Pass required"
AuthUserFile /path/to/your/htpasswd
AuthType Basic
Require valid-user
Order Deny,Allow
Deny from all

# Normal whitelist would just add Allow directives

# But if your site is behind Varnish, all connections will appear
# to come from the Varnish IP, most likely
# or the IP from the host itself.
# So we set the X-Forwarded-For header in Varnish, and filter 
# on that header in Apache's htaccess / Directory access control.
# Allow from an IP in the X-Forwarded-For header (Varnish?)
SetEnvIf X-Forwarded-For ^12\.34\.12\.34 env_allow_1
Allow from env=env_allow_1

# Allow from another IP in the X-Forwarded-For header
SetEnvIf X-Forwarded-For ^12\.34\.12\.35 env_allow_2
Allow from env=env_allow_2

# Either the HTTP authentication needs to be correct, or the custom
# environment that allowed the X-Forwarded-For header
Satisfy Any

And when we break this down even further, it shows 2 clear methods.

HTTP authentication in .htaccess

The first part is the HTTP authentication, usernames and passwords, for Apache. You can set this in your Apache vhost config, or in an .htaccess file in the directory you want to secure.

AuthName "User + Pass required"
AuthUserFile /path/to/your/htpasswd
AuthType Basic
Require valid-user
Order Deny,Allow
Deny from all

This sets the AuthUserFile to the path where the usernames/passwords are stored. If there's no such file yet, you can create one with the htpasswd tool at the CLI.

$ htpasswd -c /path/to/your/htpasswd $username

Replace $username with what you want as a username. You'll be prompted for a password for that user. If you want to append a username/password to an existing file, or want to modify the password of a user in an existing file, use the same CLI command without the -c parameter.

$ htpasswd /path/to/your/htpasswd $username

And you're set: your HTTP authentication password file exists.

IP Whitelisting with X-Forwarded-For

Normal IP whitelisting for this scenario would be done like this.

Allow from
Allow from

But this uses the IP of the client connecting to Apache, which in the case of a reverse proxy config (like Nginx or Varnish) would nearly always be the IP of the connecting proxy. Not the client's IP (unless you're running a transparent proxy). We can retrieve the client's IP via the X-Forwarded-For header, if it's set in the Varnish configs.

To check if your Varnish config has this enabled, search for the line that says something like this.

set req.http.X-Forwarded-For = client.ip;

It can exist in many variants, but the set req.http.X-Forwarded-For is always the same. If it's missing, add it at the top of the vcl_recv routine. If that doesn't work, have a look at my Varnish configs for some ideas.

Now, the Apache bit. You can perform a check on HTTP headers and allow or deny authentication based on that.

SetEnvIf X-Forwarded-For ^12\.34\.12\.35 env_allow_1
Allow from env=env_allow_1
Satisfy Any

The above example will set an environment variable called env_allow_1 if the X-Forwarded-For HTTP header matches the regex ^12\.34\.12\.35, which in human words means as much as "the X-Forwarded-For header must start with 12.34.12.35".

The Allow from directive then lets you check for that environment variable, to allow or deny access.

The post Combine Apache’s HTTP authentication with X-Forwarded-For IP whitelisting in Varnish appeared first on

by Mattias Geniar at December 14, 2014 06:57 PM

Frederic Hornain

[wildfly-extras] Wildfly and Apache Camel

Wildfly Camel




This quick tutorial takes you through the first steps of getting Camel into WildFly[1][2] and provides the initial pointers to get up and running.

This explanation is based on the wildfly-camel Project[5].

The first step is to download WildFly 8.1.0.Final at the following link:

The second step is to download the WildFly Camel patch at the following link:

The third step is to install the Camel subsystem by applying the patch into the wildfly-8.1.0.Final directory:

[fhornain@localhost Project]$ ls
wildfly-8.1.0.Final  wildfly-camel-patch-2.0.0.CR1.tar
[fhornain@localhost Project]$ cd wildfly-8.1.0.Final
[fhornain@localhost wildfly-8.1.0.Final]$ tar -xvf ../wildfly-camel-patch-2.0.0.CR1.tar

Then you have to create a few users in order to have access to the WildFly administration console and the hawtio administration console.

[fhornain@localhost wildfly-8.1.0.Final]$ bin/ 

What type of user do you wish to add?
 a) Management User (
 b) Application User (
(a): a              

Enter the details of the new user to add.
Using realm 'ManagementRealm' as discovered from the existing property files.
Username : fhornainWildfly
Password recommendations are listed below. To modify these restrictions edit the configuration file.
 - The password should not be one of the following restricted values {root, admin, administrator}
 - The password should contain at least 8 characters, 1 alphabetic character(s), 1 digit(s), 1 non-alphanumeric symbol(s)
 - The password should be different from the username
Password :
Re-enter Password :
What groups do you want this user to belong to? (Please enter a comma separated list, or leave blank for none)[  ]:
About to add user 'fhornainWildfly' for realm 'ManagementRealm'
Is this correct yes/no? yes
Added user 'fhornainWildfly' to file '/home/fhornain/OttoProject/YttyProject2/wildfly-8.1.0.Final/standalone/configuration/'
Added user 'fhornainWildfly' to file '/home/fhornain/OttoProject/YttyProject2/wildfly-8.1.0.Final/domain/configuration/'
Added user 'fhornainWildfly' with groups  to file '/home/fhornain/OttoProject/YttyProject2/wildfly-8.1.0.Final/standalone/configuration/'
Added user 'fhornainWildfly' with groups  to file '/home/fhornain/OttoProject/YttyProject2/wildfly-8.1.0.Final/domain/configuration/'
Is this new user going to be used for one AS process to connect to another AS process?
e.g. for a slave host controller connecting to the master or for a Remoting connection for server to server EJB calls.
yes/no? yes
To represent the user add the following to the server-identities definition <secret value="XXXXXXXXXXXXXXXXXXX" />
[fhornain@localhost wildfly-8.1.0.Final]$
[fhornain@localhost wildfly-8.1.0.Final]$
[fhornain@localhost wildfly-8.1.0.Final]$ bin/ 

What type of user do you wish to add?
 a) Management User (
 b) Application User (
(a): b

Enter the details of the new user to add.
Using realm 'ApplicationRealm' as discovered from the existing property files.
Username : fhornainHawtio
Password recommendations are listed below. To modify these restrictions edit the configuration file.
 - The password should not be one of the following restricted values {root, admin, administrator}
 - The password should contain at least 8 characters, 1 alphabetic character(s), 1 digit(s), 1 non-alphanumeric symbol(s)
 - The password should be different from the username
Password :
Re-enter Password :
What groups do you want this user to belong to? (Please enter a comma separated list, or leave blank for none)[  ]: admin
About to add user 'fhornainHawtio' for realm 'ApplicationRealm'
Is this correct yes/no? yes
Added user 'fhornainHawtio' to file '/home/fhornain/OttoProject/YttyProject2/wildfly-8.1.0.Final/standalone/configuration/'
Added user 'fhornainHawtio' to file '/home/fhornain/OttoProject/YttyProject2/wildfly-8.1.0.Final/domain/configuration/'
Added user 'fhornainHawtio' with groups admin to file '/home/fhornain/OttoProject/YttyProject2/wildfly-8.1.0.Final/standalone/configuration/'
Added user 'fhornainHawtio' with groups admin to file '/home/fhornain/OttoProject/YttyProject2/wildfly-8.1.0.Final/domain/configuration/'
Is this new user going to be used for one AS process to connect to another AS process?
e.g. for a slave host controller connecting to the master or for a Remoting connection for server to server EJB calls.
yes/no? yes
To represent the user add the following to the server-identities definition <secret value="XXXXXXXXXXXXXXXXXXX" />

Finally, you can start your WildFly 8.1.0.Final Java application server with the following command:

[fhornain@localhost wildfly-8.1.0.Final]$ bin/ -c standalone-camel.xml

Then you can log in to the WildFly admin console via the following URL: http://localhost:9990/console/App.html

Wildfly admin console

Or you can log in to the hawtio[8] console via the following URL:

Hawtio Console

Then you can see the sample Camel route created by default for you through the hawtio console:

Camel route diagram

Camel Route Source

FYI, this Camel XML route sample is located in the “standalone-camel.xml” configuration file you used to start your WildFly application server a few minutes ago.

 <subsystem xmlns="urn:jboss:domain:camel:1.0">
   <camelContext id="system-context-1">
     <route>
       <from uri="direct:start"/>
       <transform>
         <simple>Hello #{body}</simple>
       </transform>
     </route>
   </camelContext>
 </subsystem>

In conclusion, you are now ready to create Camel routes on top of WildFly[2].

N.B. If you are looking for certified and supported enterprise solutions, please consider Red Hat JBoss EAP[6] or Red Hat JBoss Fuse[7].

Kind Regards


[1] Based on the definition written on Wikipedia :

WildFly, formerly known as JBoss AS, or simply JBoss, is an application server authored by JBoss, now developed by Red Hat. WildFly is written in Java, and implements the Java Platform, Enterprise Edition (Java EE) specification. It runs on multiple platforms.


[3] Based on the definition written on Wikipedia :

Apache Camel is a rule-based routing and mediation engine that provides a Java object-based implementation of the Enterprise Integration Patterns using an API (or declarative Java Domain Specific Language) to configure routing and mediation rules. The domain-specific language means that Apache Camel can support type-safe smart completion of routing rules in an integrated development environment using regular Java code without large amounts of XML configuration files, though XML configuration inside Spring is also supported.

by Frederic Hornain at December 14, 2014 02:10 PM

[Red Hat Satellite 6] Managing Red Hat Enterprise Linux across heterogeneous environments

Red Hat Satellite Overview

In 2014, Red Hat launched Red Hat® Satellite 6, a new version of its classic Red Hat Enterprise Linux® life-cycle management solution. It includes some of the best in open system-management technology and a flexible architecture to manage scale from bare-metal to virtualized environments, and in public and private clouds.


Red Hat® Satellite is a complete cloud system management product that manages the full life cycle of your Red Hat deployments across physical, virtual, and private clouds. Watch this demo to see how Red Hat Satellite delivers system provisioning, configuration management, software management, and subscription management—all while maintaining high scalability and security.


Kind Regards


by Frederic Hornain at December 14, 2014 02:03 PM

December 13, 2014

Mattias Geniar

Chrome To Explicitly Mark HTTP Connections As Non-Secure

So 2015 will be the year of HTTPS/SSL/TLS. Chromium, the project behind Chrome, is making plans to mark HTTP connections as "non-secure".

We propose that user agents (UAs) gradually change their UX to display non-secure origins as affirmatively non-secure. We intend to devise and begin deploying a transition plan for Chrome in 2015. The goal of this proposal is to more clearly display to users that HTTP provides no data security.

Chrome Security Team

If the efforts to offer free SSL certificates pay off, the step from HTTP to HTTPS may become just a little smaller for site owners. But running an HTTPS site is not without dangers, as bad implementations can knock your site offline.

This move seems to fit Chrome's plan to push SPDY/HTTP2 forward, as it's built on TLS connections. But at what cost?

The post Chrome To Explicitly Mark HTTP Connections As Non-Secure appeared first on

by Mattias Geniar at December 13, 2014 03:35 PM

Varnish tip: see which cookies are being stripped in your VCL

Most Varnish configs contain a lot of logic to strip cookies from the client, to avoid them being sent to the server. This is needed because cookies are often part of the hash calculation of a request (if they are included in the vcl_hash routine), and a random cookie would make for a unique hash each time, which can destroy your caching strategy. Here's a tip on how you can see what cookies your client is sending, and what cookies your Varnish VCL is removing and sending on to the backend.

View the client's request

First off, see what cookies your client is sending with a particular request. You can easily filter on the source IP of the connection or on the URL that's being called.

$ varnishlog -c -m RxURL:/my-unique-page
$ varnishlog -c -m ReqStart:

The first varnishlog example will show you all the headers for the request to the URL "/my-unique-page", regardless of the source IP. The second example will show you all requests coming from the IP you specify.

In the output that follows, pay special attention to the Cookie: header in the request.

29 ReqStart     c 65175 295468531
29 RxRequest    c GET
29 RxURL        c /my-unique-page
29 RxProtocol   c HTTP/1.1
29 RxHeader     c Host: xxx
29 RxHeader     c Connection: keep-alive
29 RxHeader     c Cookie: my-cookie=2; has_js=1; __utma=123456789; __utmb=123456789;
29 VCL_return   c hash
29 VCL_call     c miss fetch

The example above shows that 4 cookies are sent along, semicolon-separated: my-cookie, has_js, __utma and __utmb.
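These cookie names can also be pulled out of varnishlog output programmatically. The following Python sketch is illustrative only -- the cookie_names helper and the sample log line are invented for this post, not part of Varnish:

```python
import re

def cookie_names(varnishlog_output):
    """Collect cookie names from Cookie: headers in varnishlog output."""
    names = []
    for line in varnishlog_output.splitlines():
        # Match both client (RxHeader ... c) and backend (TxHeader ... b) lines
        m = re.search(r"(?:Rx|Tx)Header\s+[cb]\s+Cookie:\s*(.*)", line)
        if not m:
            continue
        for pair in m.group(1).split(";"):
            pair = pair.strip()
            if pair:
                names.append(pair.split("=", 1)[0])
    return names

log = "29 RxHeader     c Cookie: my-cookie=2; has_js=1; __utma=123456789; __utmb=123456789;"
print(cookie_names(log))  # ['my-cookie', 'has_js', '__utma', '__utmb']
```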

You can strip cookies that are sent from the client in vcl_recv, the routine that processes/prepares the client's request. To remove cookies, you can use snippets like these.

set req.http.Cookie = regsuball(req.http.Cookie, "has_js=[^;]+(; )?", "");
set req.http.Cookie = regsuball(req.http.Cookie, "__utm.=[^;]+(; )?", "");

The VCL code above strips out the has_js and the __utm. cookies, where the dot "." indicates a wildcard -- this catches the __utma, __utmb, ... cookie names in a regular expression.
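You can sanity-check these substitutions outside Varnish, too. Here's a small Python sketch that mimics the two regsuball() calls with re.sub -- an approximation, since VCL and Python regex syntax are merely close enough for patterns this simple:

```python
import re

def strip_cookies(cookie_header):
    # Same patterns as the regsuball() calls in the VCL above
    cookie_header = re.sub(r"has_js=[^;]+(; )?", "", cookie_header)
    cookie_header = re.sub(r"__utm.=[^;]+(; )?", "", cookie_header)
    return cookie_header

print(strip_cookies("my-cookie=2; has_js=1; __utma=123456789; __utmb=123456789;"))
# my-cookie=2; ;
```

Note the leftover lone semicolon in the output: the optional "(; )?" group only eats a semicolon followed by a space, so the final "__utmb=...;" loses its value but not its separator. That's why many VCL configs add an extra cleanup regex for trailing separators.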

Notice that the VCL example above does not strip the "my-cookie" cookie. But how can you test this? Well, the varnishlog example above shows the cookies received from the client; we can use a similar varnishlog command to show what cookies Varnish is sending to the backend.

View what Varnish is sending to the backend

The previous example contained the -c parameter of varnishlog, which includes the log entries that result from communication with a client. Similarly, there is a -b parameter for including log entries that result from communication with a backend server. So if you take the same examples from above, you can see the backend communication made by Varnish.

$ varnishlog -b -m TxURL:/my-unique-page

Notice the two subtle differences in the command: -c is swapped for -b, to show backend communication, and RxURL is swapped for TxURL. The "Rx" and "Tx" naming comes from the telecom/networking world, where "Rx" stands for "receiving" and "Tx" for "transmitting". In other words, an RxURL is a URL received from the client, and a TxURL is a URL sent (transmitted) by Varnish to the backend.

The output is similar to the Varnishlog example from the client, but with a clear difference in the Cookie: headers.

314 BackendXID   b 295505930
314 TxRequest    b GET
314 TxURL        b /my-unique-page
314 TxProtocol   b HTTP/1.1
314 TxHeader     b Host: xxx
314 TxHeader     b Cookie: my-cookie=2;

The varnishlog with TxURL filter shows which cookies remain, in this case only the my-cookie one, that we did not remove in the VCL.


You can use varnishlog to filter on the requests that come from the client, as well as on the requests that are sent to the backend server after the VCL manipulation by Varnish. Pay close attention to the Cookie: header in the RxHeader and TxHeader output: it shows you which cookies are, and which aren't, being stripped before the request goes to the backend.

Varnishlog is a powerful tool for debugging Varnish requests, but you need the correct filters to show only the relevant output. If you want to, you can read up on more useful varnishtop and varnishlog commands.

The post Varnish tip: see which cookies are being stripped in your VCL appeared first on

by Mattias Geniar at December 13, 2014 12:50 PM

December 12, 2014

Frederic Hornain

[Open Source Cloud Day] Openshift Enterprise v2 Workshop

Openshift Workshop

Following my previous post named : [Open Source Cloud Day] Openshift | PaaS Sessions

The presentation can be downloaded at

Kind Regards


by Frederic Hornain at December 12, 2014 05:01 PM

[Open Source Cloud Day] Introduction to Openshift for Application Developers

Open Shift Presentation

Following my previous post named : [Open Source Cloud Day] Openshift | PaaS Sessions

The presentation can be downloaded at

Kind Regards



by Frederic Hornain at December 12, 2014 04:41 PM

[Open Source Cloud Day] Openshift | PaaS Sessions

Open Source Cloud Day

On Wednesday, December 10 2014, I gave an OpenShift[1] presentation and workshop at the Open Source Days event[2] organized by our Belgian partner Kangaroot[3] in Mechelen.

I would like to thank Bart Janssens from Kangaroot[3] for his excellent job before and during the workshop session.

Open cloud days kangaroot

Below you will find the presentations I used during the OpenShift Enterprise presentation session and workshop.

Open Shift Presentation

Openshift Workshop

N.B. Presentations should be available on Slideshare[4] in the next few hours. I will include links asap.

Kind Regards


by Frederic Hornain at December 12, 2014 10:15 AM

Secure your applications with Red Hat JBoss Middleware

Red Hat JBoss PKI

Red Hat® JBoss® Middleware is fast becoming recognized as government’s trusted choice for applications that need robust encrypted data transport and secure identity and access control.
Red Hat works continuously with government agencies and the open source community to implement Red Hat JBoss Middleware that supports the most stringent federal protocols, standards, and security features.

For more information visit:


Kind Regards


by Frederic Hornain at December 12, 2014 09:44 AM

Red Hat secure, open source solutions: SELinux and sVirt

Secure Virtualization

In the escalating battle against malicious attackers who want to compromise enterprise level systems, you need cost-effective solutions to stop potential threats — both inside and outside the walls of government offices. Red Hat delivers rigorous security controls to prevent unauthorized access to systems and data with Red Hat® Security-Enhanced Linux® (SELinux) and Red Hat sVirt for virtual and cloud environments.

For more information visit:


by Frederic Hornain at December 12, 2014 09:32 AM

[Fedora 21] Now Available, Delivers the Flexibility of

Fedora 21

While each variant aims to meet specific user demands, all are built from a common base set of packages that includes the same Linux kernel, RPM, yum, systemd, and Anaconda. This small, stable set of components allows for a solid foundation upon which to base the Fedora 21 variants.

Download available at the following URL :

Designed to handle the myriad of computing requirements across different cloud deployments, Fedora 21 Cloud provides images for use in private cloud environments, like OpenStack, and Amazon Machine Images (AMIs) for use on Amazon Web Services (AWS), as well as a base image to enable creation of Fedora containers. Key features of Fedora 21 Cloud include:

Modular Kernel Packaging for Cloud Computing – To save space and reduce “bloat” in cloud computing deployments, the Fedora 21 Cloud kernel contains the minimum modules needed for running in a virtualized environment; coupled with other size reduction work, the Fedora 21 Cloud image is roughly 25 percent smaller than that of previous Fedora releases, enabling faster deployment and increasing available space for critical applications.

Atomic logo

Fedora Atomic Host[1] – Using tools and patterns made available through Project Atomic, Fedora 21 offers the first “Atomic” host for Fedora, which includes a minimal package set and an image composed with only the run-times and packages needed to serve as an optimized host for Linux containers. Fedora Atomic Host allows for ”atomic” updates as a single unit, simplifying update management and providing the ability to roll-back updates if necessary. Fedora Atomic Host also includes Kubernetes for container orchestration and Cockpit for container management.

The Fedora 21 Server variant offers a common base platform for running featured application stacks (produced, tested, and distributed by the Fedora Server Working Group), providing a flexible foundation for Web servers, file servers, database servers, and even Platform-as-a-Service (PaaS) deployments. Fedora 21 Server delivers:

New Management Features – Fedora 21 Server introduces three new technologies to handle the management and installation of discrete infrastructure services.


Rolekit provides a Role deployment and management toolkit that helps administrators to install and configure a specific server role.

A Rolekit graphical interface should be provided with the cockpit project – see below –  in Fedora 22.

Cockpit[2] is a Web-based user interface for configuring, managing, and monitoring servers, accessible remotely via a Web browser.

cockpit screenshot storage

OpenLMI[3] delivers a remote management system built on top of Distributed Management Taskforce – Common Information Model (DMTF-CIM), offering scripting management functions across machines, capabilities querying and system event monitoring.

Domain Controller – One of the roles offered through Rolekit, Fedora 21 Server packages freeIPA’s integrated identity and authentication solution for Linux/UNIX networked environments; machines running Fedora 21 Server can now offer centralized authentication, authorization, and account information by storing user, group, host, and other object data necessary to manage network security.

freeipa screenshot

Fedora 21

Revitalizing the Linux desktop, Fedora Workstation provides a polished, targeted system designed to offer a smooth experience for general desktop use as well as software development, from independent Web developers to corporate coders. New features in Fedora 21 Workstation include:

Streamlined Software Installation – The Software installer, a cornerstone component to Fedora 21 Workstation, allows users to quickly and easily locate their applications. It provides a responsive and fast user experience, going hand-in-hand with a greatly improved number of featured Fedora applications included with Fedora 21 Workstation.

Wayland Support (Experimental) – Wayland, a powerful next-generation display server technology, is included in Fedora 21 Workstation as an experimental build, allowing developers to test and integrate their applications with Wayland’s new capabilities.

DevAssistant – A developer “helper,” DevAssistant automates the setup process for a large number of language runtimes and integrated development environments (IDEs); DevAssistant also integrates with Fedora Software Collections, offering access to multiple versions of different languages without worrying about system software conflicts.

Fedora 21 redefines the very nature of the Fedora distribution, so users seeking to try all of the new capabilities for themselves or looking for additional information on all of the new features, enhancements and tweaks, please visit

Kind Regards


by Frederic Hornain at December 12, 2014 09:18 AM

December 11, 2014

Frederic Hornain

[SCAP+STIG] Red Hat secure, open source solutions[1]

Protecting against today’s relentless and adaptive cyber threats requires continuous monitoring of your networks and systems, but providing the investment and support needed for continuous monitoring can strain security budgets already stretched thin.

Red Hat helps address this challenge through centralized security management, configuration scanning, and advanced remediation. With Red Hat’s® continuous monitoring capabilities, you can automatically scan Red Hat technology[1] for security gaps, vulnerabilities, and unauthorized changes in security configurations — and then remediate problems to restore security controls to your established security configuration.

For more information visit:
BTW, STIG stands for “Security Technical Implementation Guide”.

Ref :

Kind Regards

by Frederic Hornain at December 11, 2014 06:43 PM

Red Hat secure, open source solutions overview

Red Hat secures the world around you

For agencies and programs across government, open source solutions from Red Hat are delivering security as good as or better than proprietary solutions.



Red Hat is recognized as a secure and trusted choice for open source solutions in government. We understand that security is paramount — as attacks on federal systems and data grow more sophisticated, each intrusion could potentially damage government missions, citizens’ trust, and national security. Our reputation is built on protecting yours.

For more information visit: RED.HT/OpenSourceSecurity
Ref :
Kind Regards

by Frederic Hornain at December 11, 2014 06:41 PM

Frank Goossens

Questions about political strikes and the FGTB

So Bart De Wever calls next Monday's strike a political strike, and the socialist trade union FGTB "the armed wing of the PS"? I don't know about you, but to my muddled left-wing rat's head that quickly sounds like "this strike is an illegitimate political action by the PS, which can't handle losing".

But the FGTB (and its Flemish counterpart, the ABVV) stood on the barricades just as readily against the Purple coalition's Generation Pact in 2005 and against the austerity measures of the Di Rupo government in 2012? Then it must be that the PS doesn't really have its armed wing under control? And how come the ACV and ACLVB, both tied to a greater (ACV) or lesser (ACLVB) degree to coalition partners CD&V and Open VLD, are striking along once again? Or do they belong to the armed wing of the PS just as well? And the nearly 8 out of 10 Flemings who think the cuts are not fairly distributed, are they all PS supporters too?

by frank at December 11, 2014 05:18 PM

December 10, 2014

Mattias Geniar

Generate PHP core dumps on segfaults in PHP-FPM

The PHP documentation is pretty clear on how to get a backtrace in PHP, but some of the explanations are confusing and seem focused on mod_php instead of PHP-FPM. So here are the steps you can take to enable core dumps in PHP-FPM pools.

Enable core dumps on Linux

Chances are, your current Linux config doesn't support core dumps yet. You can enable them and set the location where the kernel will dump the core files.

$ echo '/tmp/coredump-%e.%p' > /proc/sys/kernel/core_pattern

You can use many different kinds of core dump variables for the filename, such as:

%%  a single % character
%c  core file size soft resource limit of crashing process (since
    Linux 2.6.24)
%d  dump mode—same as value returned by prctl(2) PR_GET_DUMPABLE
    (since Linux 3.7)
%e  executable filename (without path prefix)
%E  pathname of executable, with slashes ('/') replaced by
    exclamation marks ('!') (since Linux 3.0).
%g  (numeric) real GID of dumped process
%h  hostname (same as nodename returned by uname(2))
%p  PID of dumped process, as seen in the PID namespace in which
    the process resides
%P  PID of dumped process, as seen in the initial PID namespace
    (since Linux 3.12)
%s  number of signal causing dump
%t  time of dump, expressed as seconds since the Epoch,
    1970-01-01 00:00:00 +0000 (UTC)
%u  (numeric) real UID of dumped process

The example above will use the executable name (%e) and the PID (%p) in the filename. It'll dump in /tmp, as that is writable by any kind of user.
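As a mental model of how these specifiers turn into filenames, here is a toy Python expansion. This is illustrative only: the real substitution is done by the kernel, and this naive version only covers a handful of specifiers and ignores edge cases like escaped %%:

```python
import time

def expand_core_pattern(pattern, exe, pid, signal=11):
    """Naive user-space rendition of the kernel's core_pattern expansion."""
    mapping = {
        "%e": exe,           # executable filename, without path prefix
        "%p": str(pid),      # PID of the dumped process
        "%s": str(signal),   # number of the signal causing the dump
        "%t": str(int(time.time())),  # time of dump, seconds since the Epoch
    }
    for key, value in mapping.items():
        pattern = pattern.replace(key, value)
    return pattern

print(expand_core_pattern("/tmp/coredump-%e.%p", "php-fpm", 2393))
# /tmp/coredump-php-fpm.2393
```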

Now that your kernel knows where to save the core dumps, it's time to change PHP-FPM.

Enable PHP-FPM core dumps per pool

To enable a core dump on a SIGSEGV, you can enable the rlimit_core option per PHP-FPM pool. Open your pool configuration and add the following.

rlimit_core = unlimited

Restart your PHP-FPM daemon (service php-fpm restart) to activate the config. Next time a SIGSEGV happens, your PHP-FPM logs will show you some more information.

WARNING: [pool poolname] child 20076 exited on signal 11 (SIGSEGV - core dumped) after 8.775895 seconds from start

You can find the core-dump in /tmp/coredump*.

$ ls /tmp/coredump*
-rw------- 1 user group 220M /tmp/coredump-php-fpm.2393

The filename shows the program (php-fpm) and the PID (2393).

Reading the core dumps

This is the one part that the PHP docs are pretty clear about, so here's just a copy-paste with modified/updated paths.

First, you need gdb installed (yum install gdb) to get the backtraces. You then start the gdb binary like gdb $program-path $coredump-path. Since our program is php-fpm, which resides in /usr/sbin/php-fpm, we call the gdb binary like this.

$ gdb /usr/sbin/php-fpm /tmp/coredump-php-fpm.2393
(gdb loading all symbols ... )
Reading symbols from /usr/lib64/php/modules/ ...(no debugging symbols found)...done.
Loaded symbols for /usr/lib64/php/modules/

(gdb) bt
#0  0x00007f8a8b6d7c37 in mmc_value_handler_multi () from /usr/lib64/php/modules/
#1  0x00007f8a8b6db9ad in mmc_unpack_value () from /usr/lib64/php/modules/
#2  0x00007f8a8b6e0637 in ?? () from /usr/lib64/php/modules/
#3  0x00007f8a8b6dd55b in mmc_pool_select () from /usr/lib64/php/modules/
#4  0x00007f8a8b6ddcc8 in mmc_pool_run () from /usr/lib64/php/modules/
#5  0x00007f8a8b6d7e92 in ?? () from /usr/lib64/php/modules/
#6  0x00007f8a8ac335cf in nr_php_curl_setopt () at /home/hudson/slave-workspace/workspace/PHP_Release_Agent/label/centos5-64-nrcamp/agent/php_curl.c:202
#7  0x0000000002b14fe0 in ?? ()
#8  0x0000000000000000 in ?? ()

The bt command will show you the PHP backtrace on the moment of the core dump. To exit gdb, just type quit.

The post Generate PHP core dumps on segfaults in PHP-FPM appeared first on

by Mattias Geniar at December 10, 2014 06:45 PM

The Pirate Bay Calls It Quits

This day was bound to come, everyone knew this.

But from the immense void that will now fill up the fiber cables all over the world, I'm pretty sure the next thing will pan out. And hopefully it has no ads for porn or viagra. There are already other services for that.

Peter Sunde, TPB

They survived for a long time. Longer than I had anticipated. They survived countless firewall-blocks. The short-term result will be a massive drop in network traffic, until the next big thing arrives.

The Pirate Bay Goes Down

Even though the tracker may have been distributed, TPB was still the single point of entry as a search engine for many. So long, TPB, and thanks for all the fish.

The post The Pirate Bay Calls It Quits appeared first on

by Mattias Geniar at December 10, 2014 08:43 AM

Frank Goossens

WP YouTube Lyte now parses normal YouTube links as well

I was just being a jealous guy, seeing how normal YouTube embeds (oEmbeds) got previewed nicely in WordPress 4.0 TinyMCE editor. This had been on my wishlist for a long time already and I looked into enabling that for httpv-links and lyte-shortcodes as well, but that turned out not to be that simple.

So I took the alternative approach, enabling WP YouTube Lyte to act on normal YouTube links (a much-requested feature anyhow) and thereby piggy-backing on the TinyMCE improvement in 4.0. So there you have it; lyte videos can be inserted using normal YouTube links, and that will result in a (non-lyte) preview of the video in the visual editor content box.

1.5.0 also has a number of other improvements and bugfixes, but you can read all about those in the changelog.

Have fun with this small Rick James like party-track to celebrate 1.5.0 (and my birthday, while we’re at it);

YouTube Video
Watch this video on YouTube or on Easy Youtube.

by frank at December 10, 2014 05:47 AM

December 09, 2014

Wouter Verhelst

Playing with ExtreMon

Munin is a great tool. If you can script it, you can monitor it with munin. Unfortunately, however, munin is slow; that is, it will take snapshots once every five minutes, and not look at systems in between. If you have a short load spike that takes just a few seconds, chances are pretty high munin missed it. It also comes with a great webinterfacefrontendthing that allows you to dig deep in the history of what you've been monitoring.

By the time munin tells you that your Kerberos KDCs are all down, you've probably had each of your users call you several times to tell you that they can't log in. You could use nagios or one of its brethren, but it takes about a minute before such tools will notice these things, too.

Maybe use CollectD then? Rather than check once every several minutes, CollectD will collect information every few seconds. Unfortunately, however, due to the performance requirements to accomplish that (without causing undue server load), writing scripts for CollectD is not as easy as it is for Munin. In addition, webinterfacefrontendthings aren't really part of the CollectD code (there are several, but most that I've looked at are lacking in some respect), so usually if you're using CollectD, you're missing out on some.

And collectd doesn't do the nagios thing of actually telling you when things go down.

So what if you could see it when things go bad?

At one customer, I came in contact with Frank, who wrote ExtreMon, an amazing tool that allows you to visualize the CollectD output as things are happening, in a full-screen fully customizable visualization of the data. The problem is that ExtreMon is rather... complex to set up. When I tried to talk Frank into helping me getting things set up for myself so I could play with it, I got a reply along the lines of...

well, extremon requires a lot of work right now... I really want to fix foo and bar and quux before I start documenting things. Oh, and there's also that part which is a dead end, really. Ask me in a few months?

which is fair enough (I can't argue with some things being suboptimal), but the code exists, and (as I can see every day at $CUSTOMER) actually works. So I decided to just figure it out by myself. After all, it's free software, so if it doesn't work I can just read the censored code.

As the manual explains, ExtreMon is a plugin-based system; plugins can add information to the "coven", read information from it, or both. A typical setup will run several of them; e.g., you'd have the from_collectd plugin (which parses the binary network protocol used by collectd) to get raw data into the coven; you'd run several aggregator plugins (which take that raw data and interpret it, allowing you to express things along the lines of "if the system's load gets above X, set load.status to warning"); and you'd run at least one output plugin so that you can actually see the damn data somewhere.
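To make the plugin/coven idea concrete, here is a toy Python sketch of that data flow. This is not the ExtreMon API -- the class and the key names are invented purely to illustrate inputs writing values, aggregators deriving statuses, and outputs reading them:

```python
class Coven:
    """Toy shared store: input plugins put values in, aggregators derive new ones."""
    def __init__(self):
        self.data = {}
        self.aggregators = []

    def put(self, key, value):
        self.data[key] = value
        for aggregator in self.aggregators:
            aggregator(self)

def load_status(coven):
    # "if the system's load gets above X, set load.status to warning"
    load = coven.data.get("host.load")
    if load is not None:
        coven.data["host.load.status"] = "warning" if load > 5.0 else "ok"

coven = Coven()
coven.aggregators.append(load_status)
coven.put("host.load", 7.2)  # as a from_collectd-style input plugin would
print(coven.data["host.load.status"])  # warning
```

An output plugin would then simply read coven.data and render it, which is roughly what the display frontend described below does for the real coven.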

While setting up ExtreMon as is isn't as easy as one would like, I did manage to get it to work. Here's what I had to do.

You will need:

First, we clone the ExtreMon git repository:

git clone extremon
cd extremon

There's a README there which explains the bare necessities on getting the coven to work. Read it. Do what it says. It's not wrong. It's not entirely complete, though; it fails to mention that you need to

Make sure the script outputs something from collectd. You'll know when it shows something not containing "plugin" or "plugins" in the name. If it doesn't, fiddle with the #x3. lines at the top of the from_collectd file until it does. Note that ExtreMon uses inotify to detect whether a plugin has been added to or modified in its plugins directory; so you don't need to do anything special when updating things.

Next, we build the java libraries (which we'll need for the display thing later on):

cd java/extremon
mvn install
cd ../client/
mvn install

This will download half the Internet, build some java sources, and drop the precompiled .jar files in your $HOME/.m2/repository.

We'll now build the display frontend. This is maintained in a separate repository:

cd ../..
git clone display
cd display
mvn install

This will download the other half of the Internet, and then fail, because Frank forgot to add a few repositories. Patch (and pull request) on github.

With that patch, it will build, but things will still fail when trying to sign a .jar file. I know of four ways to fix that particular problem:

  1. Add your passphrase for your java keystore, in cleartext, to the pom.xml file. This is a terrible idea.
  2. Pass your passphrase to maven, in cleartext, by using some command line flags. This is not much better.
  3. Ensure you use the maven-jarsigner-plugin 1.3.something or above, and figure out how the maven encrypted passphrase store thing works. I failed at that.
  4. Give up on trying to have maven sign your jar file, and do it manually. It's not that hard, after all.

If you're going with 1 through 3, you're on your own. For the last option, however, here's what you do. First, you need a key:

keytool -genkeypair -alias extremontest

after you enter all the information that keytool will ask for, it will generate a self-signed code signing certificate, valid for six months, called extremontest. Producing a code signing certificate with longer validity and/or one which is signed by an actual CA is left as an exercise to the reader.

Now, we will sign the .jar file:

jarsigner target/extremon-console-1.0-SNAPSHOT.jar extremontest

There. Who needs help from the internet to sign a .jar file? Well, apart from this blog post, of course.

You will now want to copy your freshly-signed .jar file to a location served by HTTPS. Yes, HTTPS, not HTTP; ExtreMon-Display will fail on plain HTTP sites.

Download this SVG file, and open it in an editor. Find all references to be.grep as well as those to barbershop and replace them with your own prefix and hostname. Store it along with the .jar file in a useful directory.

Download this JNLP file, and store it on the same location (or you might want to actually open it with "javaws" to see the very basic animated idleness of my system). Open it in an editor, and replace any references to by the location where you've stored your signed .jar file.

Add the chalice_in_http plugin from the plugins directory. Make sure to configure it correctly (by way of its first few comment lines) so that its input and output filters are set up right.

Add the configuration snippet in section 2.1.3 of the manual (or something functionally equivalent) to your webserver's configuration. Make sure to have authentication—chalice_in_http is an input mechanism.

Add the chalice_out_http plugin from the plugins directory. Make sure to configure it correctly (by way of its first few comment lines) so that its input and output filters are set up right.

Add the configuration snippet in section 2.2.1 of the manual (or something functionally equivalent) to your webserver's configuration. Authentication isn't strictly required for the output plugin, but you might wish for it anyway if you care whether the whole internet can see your monitoring.

Now run javaws https://url/x3console.jnlp to start Extremon-Display.

At this point, I got stuck for several hours. Whenever I tried to run x3mon, this java webstart thing would tell me simply that things failed. When clicking on the "Details" button, I would find an error message along the lines of "Could not connect (name must not be null)". It would appear that the Java people believe this to be a proper error message for a fairly large number of constraints, all of which are slightly related to TLS connectivity. No, it's not the keystore. No, it's not an API issue, either. Or any of the loads of other rabbit holes that I dug myself in.

Instead, you should simply make sure you have Server Name Indication enabled. If you don't, the defaults in Java will cause it to refuse to even try to talk to your webserver.

The ExtreMon github repository comes with a bunch of extra plugins; some are special-case for the place where I first learned about it (and should therefore probably be considered "examples"), others are general-purpose plugins which implement things like "is the system load within reasonable limits". Be sure to check them out.

Note also that while you'll probably be getting most of your data from CollectD, you don't actually need to do that; you can write your own plugins, completely bypassing collectd. Indeed, the from_collectd thing we talked about earlier is, simply, also a plugin. At $CUSTOMER, for instance, we have one plugin which simply downloads a file every so often and checks it against a checksum, to verify that a particular piece of nonlinear software hasn't gone astray yet again. That doesn't need collectd.

The example above will get you a small white bar, the width of which is defined by the cpu "idle" statistic, as reported by CollectD. You probably want more. The manual (chapter 4, specifically) explains how to do that.

Unfortunately, in order for things to work right, you need to pretty much manually create an SVG file with a fairly strict structure. This is the one thing which Frank tells me is a dead end and needs to be pretty much rewritten. If you don't feel like spending several days manually drawing a schematic representation of your network, you probably want to wait until Frank's finished. If you don't mind, or if you're like me and you're impatient, you'll be happy to know that you can use inkscape to make the SVG file. You'll just have to use the dialog behind Ctrl+Shift+X (Inkscape's XML editor). A lot.

Once you've done that though, you can see when your server is down. Like, now. Before your customers call you.

December 09, 2014 06:43 PM

FOSDEM organizers

Certification exams

The Linux Professional Institute and the BSD Certification Group will offer exam sessions at FOSDEM 2015. Interested candidates can now register for exams with the respective organisations. Further details are available on the certification page.

December 09, 2014 03:00 PM

December 08, 2014

Mattias Geniar

The Real Cost of the “S” in HTTPS

A new paper has been released on the cost of "s" in HTTPs, and it's been getting a lot of attention lately. And rightfully so, it's a good paper. But I feel it's missing the most important aspect of the "cost" of the "s" in HTTPs.

If the paper is a bit heavy on you, the slides are also published, which make it easier to digest.


The cost of HTTPs

Besides the certificate (obviously, but that may change very soon) there are indeed technical costs involved. The delay in a TLS handshake, the extra round trips, the loss of caching proxies for ISPs, ... they're all valid points. But the real cost of an HTTPs site?

It's too easy to fuck up

Since Google now considers HTTPs an additional SEO ranking factor, a lot of sites are implementing HTTPs. But it isn't as simple as slapping a certificate on and calling it done. What happens to hard-coded links to HTTP content? Do you have proper redirects/rewrites, to make the HTTPs version your only version? Will you get duplicate content penalties for exposing both versions of your site?
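For the redirect part, a minimal Apache sketch (the hostname is a placeholder and your vhost layout will differ; nginx has an equivalent `return 301` idiom):

```apache
<VirtualHost *:80>
    # Send every plain-HTTP request to its HTTPS equivalent with a
    # permanent (301) redirect, so only one version of the site exists.
    Redirect permanent /
</VirtualHost>
```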

Did you remember to order or renew 2048 bit certificates, instead of 1024 bit ones?

Expired or invalid certificates

Purchase, configure and test. And you're done, right? Until the expiration date comes and you've missed or ignored your renewal messages. Or did you order your certificate for a domain that you decided to move away from?


Tough luck, your visitors are greeted with an error message when visiting your site. And if your browser is caching certificates, you may not notice it yourself right away.

Mixing content

Browsers will block the inclusion of HTTP-content on an HTTPs-site. After all, loading HTTP content gives away information about the user that an HTTPs-site may be trying to hide. It can also invalidate the entire SSL/TLS encryption of the HTTPs site.


So what happens when you include HTTP content on an HTTPs site? It's just blocked. Not even downloaded. Not even attempted. If you have important CSS or JavaScript on your site, it won't load.

Mixed Content: The page at '' was loaded over HTTPS, but requested an insecure resource ''. This request has been blocked; the content must be served over HTTPS.

Now imagine you have an HTTP site. You want to make it HTTPs-enabled. You now have to think about every hard-coded include in your site and change it (yes, you should have used protocol-agnostic or protocol-relative URLs, but who knew, right?). If you're using WordPress, there are some great guides to help you.
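The kind of rewrite involved can be sketched in one line of sed (the file name and URL below are made up for illustration): turning absolute `http://` asset references into protocol-relative ones, so the same markup works under both schemes.

```shell
# Create a toy page with a hard-coded HTTP include (illustrative only).
printf '<script src=""></script>\n' > page.html

# Rewrite src="http://..." into protocol-relative src="//..."
sed 's|src="http://|src="//|g' page.html > page-fixed.html

cat page-fixed.html   # → <script src="//"></script>
```

On a real site you'd run something like this across your templates, and then grep for any remaining `http://` references before flipping the switch.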

But be sure to triple-check your configurations and your content, because you may be denying your visitors content that they came looking for, just because it wasn't loaded from an HTTPs domain.

The real cost is in the user errors

Certificates are cheap. Finding a guide on the internet to configure your webserver, is cheap. Making a user error when configuring your website, that's the real cost. Browsers don't care if your config is 95% perfect. They'll destroy the visitor's experience if you don't nail it for the full 100%.

The post The Real Cost of the “S” in HTTPS appeared first on

by Mattias Geniar at December 08, 2014 09:06 PM

Varnish FetchError: straight insufficient bytes

In your varnishlog, you may see the following error appearing.

   11 ObjProtocol  c HTTP/1.1
   11 ObjResponse  c OK
   11 ObjHeader    c Content-Length: 482350
   11 FetchError   c straight insufficient bytes
   11 Gzip         c u F - 437646 482350 80 3501088 3501098
   11 VCL_call     c error deliver
   11 VCL_call     c deliver deliver
   11 TxProtocol   c HTTP/1.1
   11 TxStatus     c 503
   11 TxResponse   c Service Unavailable
   11 TxHeader     c Accept-Ranges: bytes

When you check the access logs on your backend server, you may notice something odd about the Content-Length header. For instance, here's an access log line from my backend webserver.

 ... "GET /some/file.pdf HTTP/1.1" 200 437646 "referer" "user-agent"

The access log shows the size of the response is 437646 bytes, whereas Varnish expected 482350 bytes (the Content-Length header). So "straight insufficient bytes" literally means: the response I got from the backend did not contain enough bytes, I was expecting more. I'll panic now, kthxbye.

The backend indicated the response was 482KB in size, but it only sent along 437KB according to the logs. No wonder Varnish freaks out.

There's no real fix, though, except to make sure your backend sends along as many bytes to Varnish as it announces in the Content-Length header. There are odd cases where Gzip/Deflate compression can screw up these numbers; if you're debugging this, disable the gzip compression or mod_deflate in Apache and try again. See if it helps. If so, you're on the right track and you may be experiencing mod_fastcgi + mod_deflate combination errors.
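You can check for the mismatch yourself by comparing the backend's Content-Length header against the bytes it actually delivers. A self-contained sketch using the sizes from the log excerpts above (against a real backend you'd save the response with something like `curl -sD headers.txt -o body.bin <url>` first):

```shell
# Fabricate a response whose body is shorter than its Content-Length,
# mimicking what Varnish complains about.
printf 'HTTP/1.1 200 OK\r\nContent-Length: 482350\r\n\r\n' > headers.txt
head -c 437646 /dev/zero > body.bin

expected=$(awk 'tolower($1)=="content-length:" {print $2+0}' headers.txt)
actual=$(wc -c < body.bin)

if [ "$actual" -lt "$expected" ]; then
  # This is exactly the condition behind "straight insufficient bytes".
  echo "body is short by $((expected - actual)) bytes"
fi
```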

The post Varnish FetchError: straight insufficient bytes appeared first on

by Mattias Geniar at December 08, 2014 07:00 PM

PHP 5.5 Opcode Cache Settings

Unless you're running all your PHP via CLI scripts, you'll be using an opcode cache in your PHP configuration. This used to be APC for anything below PHP 5.5, but since PHP 5.5 there's a built-in extension called OPcache that replaces APC. (Note: APCu is still available for user caching, like a key/value store.)

To activate the PHP OPcache extension, you can drop the following lines into your /etc/php.d/ directory, in a new file named 10-opcache.ini. The number at the front of the filename ensures this gets loaded first, before other modules.

; Enable the Opcache

; In MegaBytes, how much memory can it consume?

; The number of keys/scripts in the Opcache hash table (how many files can it cache?)

; How often to check script timestamps for updates, in seconds.
; 0 will result in OPcache checking for updates on every request.

; If enabled, OPcache will check for updated scripts every
; opcache.revalidate_freq seconds. When this directive is disabled,
; you must reset OPcache manually

; If enabled, a fast shutdown sequence is used that doesn't free
; each allocated block, but relies on the Zend Engine memory manager
; to deallocate the entire set of request variables en masse.

; Chances are, you won't need this on the CLI
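Put together, such a 10-opcache.ini could look like the sketch below, pairing each directive the comments describe with a commonly used example value. The values are illustrative only; tune them for your own workload, and check the extension path for your distribution:

```ini
; Example values only; adjust for your own workload.

opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=4000
opcache.revalidate_freq=60
opcache.validate_timestamps=1
opcache.fast_shutdown=1
opcache.enable_cli=0
```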

Reload your PHP-FPM daemon, and you should be set. If you want to see how your OPcache is performing, check out opcache.php (similar to apc.php for APC). If you want to know more about PHP opcode performance, make sure to read the excellent post at EngineYard on opcode caching.

The post PHP 5.5 Opcode Cache Settings appeared first on

by Mattias Geniar at December 08, 2014 06:00 PM

Dries Buytaert

Announcing the Drupal 8 Accelerate Fund

Today the Drupal Association announced a new program: the Drupal 8 Accelerate Fund. Drupal 8 Accelerate Fund is a $125,000 USD fund to help solve critical issues and accelerate the release of Drupal 8.

The Drupal Association is guaranteeing the funds and will try to raise more from individual members and organizations within the Drupal community. It is the Drupal 8 branch maintainers — Nathaniel Catchpole, Alex Pott, Angie Byron, and myself — who will decide on how the money is spent. The fund provides for both "top-down" (directed by the Drupal 8 branch maintainers) and "bottom-up" (requested by other community members) style grants. The money will be used on things that positively impact the Drupal 8 release date, such as hiring contributors to fix critical bugs, sponsoring code sprints to fix specific issues, and other community proposals.

Since the restructuring of the Drupal Association, I have encouraged the Drupal Association staff and Board of Directors to grow into our ambitious mission: to unite a global open source community to build and promote Drupal. I've also written and talked about the fact that scaling Open Source communities is really hard. The Drupal 8 Accelerate Fund is an experiment with crowdsourcing as a means to help scale our community, unique compared to other efforts because it is backed by the official non-profit organization that fosters and supports Drupal.

I feel that the establishment of this fund is an important step towards more sustainable core development. My hope is that, if this round of funding is successful, it can grow over time to levels that make an even more meaningful impact on core, particularly if we complement it with other approaches and steps, such as organization credit on

This is also an opportunity for Drupal companies to give back to Drupal 8 development. The Drupal Association board is challenging itself to raise $62,500 USD (half of the total amount) to support this program. If you are an organization who can help support this challenge, please let us know. If you're a community member with a great idea on how we might be able to spend this money to help accelerate Drupal 8, you can apply for a grant today.

by Dries at December 08, 2014 05:01 PM

Mattias Geniar

Apache’s mod_fastcgi and mod_deflate troubles

There's a very old outstanding bug in Apache's mod_fastcgi combined with mod_deflate, where the Content-Length header is not passed along properly between the FastCGI process (like PHP) and the compression performed by mod_deflate.

The problem occurs when your Apache webserver is running gzip compression via mod_deflate on the content, which is then being passed via mod_fastcgi to an upstream (like PHP) that tries to handle compression as well. In the case of a Drupal system, this can happen when you let Drupal handle gzip-compression via its Performance settings as well as Apache via its mod_deflate module.

And if you search the interwebs, you'll find lots of reports on this;

These all go way back, to beyond 2010. So it's safe to say, this problem has been around for some time. The suggestions to upgrade to the latest version of FastCGI don't seem to help either, at least not on my test-system.

My current workaround was just to disable mod_deflate entirely in Apache, and handle the gzip compression in another service (like Varnish as an HTTP cache or Nginx as my SSL terminator). This is far from pretty, but it got the job done. For now.

If anyone has any clues why mod_deflate together with mod_fastcgi can cause these headaches, I'd love to hear your thoughts and ideas.

The post Apache’s mod_fastcgi and mod_deflate troubles appeared first on

by Mattias Geniar at December 08, 2014 04:48 PM

December 06, 2014

Dieter Plaetinck

IT-Telemetry Google group. Trying to foster more collaboration around operational insights.

The discipline of collecting infrastructure & application performance metrics, aggregation, storage, visualizations and alerting has many terms associated with it... Telemetry. Insights engineering. Operational visibility. I've seen a bunch of people present their work in advancing the state of the art in this domain:
from Anton Lebedevich's statistics for monitoring series, Toufic Boubez' talks on anomaly detection and Twitter's work on detecting mean shifts to projects such as flapjack (which aims to offload the alerting responsibility from your monitoring apps), the metrics 2.0 standardization effort or Etsy's Kale stack which tries to bring interesting changes in timeseries to your attention with minimal configuration.

Much of this work is being shared via conference talks and blog posts, especially around anomaly and fault detection, and I couldn't find a location for collaboration, quicker feedback and discussions on more abstract (algorithmic/mathematical) topics or those that cross project boundaries. So I created the IT-telemetry Google group. If I missed something existing, let me know. I can shut this down and point to whatever already exists. Either way I hope this kind of avenue proves useful to people working on these kinds of problems.

December 06, 2014 09:01 PM

December 05, 2014

Xavier Mertens

Botconf 2014 Wrap-Up Day #3

I’m just back from Nancy and it’s time to publish the wrap-up for the last day! The last night was very short for most of the attendees: 30 minutes before the first talk, the coffee room was almost empty! This third day started with “A new look at Fast Flux proxy networks” by Dhia Mahjoub from OpenDNS. Hendrik Adrian was also involved in this research but couldn’t be present for personal reasons. OpenDNS provides DNS services and, as we all know, DNS is critical in botnet infrastructures. They have access to a very big source of information! It has been said multiple times already: the crimeware scene is an eco-system. Modern malware communicates with its C&C through proxies. That was the topic of Dhia’s presentation: Fast-Flux proxy networks.

Dhia on site

The concept of Fast-Flux proxy networks is simple but efficient. It’s a botnet used by another botnet to interconnect victims and their C&C’s. Their specifications are:

Then, Dhia explained how to detect ZeuS using DNS techniques:
  1. Initial list of ZeuS Fast Flux domains
  2. Get IP, TTL via direct lookup into a DNS DB
  3. Extract IP with TTL = 150
  4. Get domains from IP’s via reverse lookups
  5. Add domains to the initial list
  6. Extract IP with TTL=150
  7. Add the new IP’s to the list of proxies

Based on this exercise, they got some statistics about the Zbot proxy network’s geographic distribution: 18K IP addresses detected from 691 ASN’s in 71 countries, of which 7600+ are live! They also reviewed some stats from the Kelihos botnet: >2600 IP addresses in 221 ASN’s from 44 countries. For information, to generate the nice graphs they used a tool developed by OpenDNS: OpenGraphiti. Dhia’s conclusions: such botnets are very versatile and provide multi-purpose services based on the client’s needs. They mainly use the .ru and .su TLD’s (Russia seems to be the main source) but victims are mostly located in Western countries. If you are interested in DNS & botnets, have a look at OpenDNS labs.

The next talk was presented by Evgeny Sidorov and Andrew Kovalev from Yandex: “Botnets of *NIX web servers”. Usually, system administrators think they are safe because they are using Linux (or any other UNIX flavour). This is clearly a false sense of security. Today, *NIX servers are also interesting targets! Why? They have characteristics that are very interesting for criminals: they are not patched, they face the Internet directly and don’t sit behind NAT, and there’s no need for P2P protocols. It also generates a new business: renting shells, spam bots, black-hat SEO. Mainly Linux systems are targeted, but the speakers already found some samples which work on FreeBSD! The attacks are based on weak CMSes and brute-forced passwords. As already mentioned yesterday, some of these sites have a nice Alexa rank!

The Yandex guys on stage

But the question is: once infected, is there a life beyond webshells? PHP suxx, for criminals too! Some webshells have bugs, and PHP’s 30-second script execution limit is a real pain. Attackers look for ways to evade this. The speakers gave a nice review of Mayhem, the best-known UNIX botnet. It is very portable and uses ShellShock, Heartbleed and much more to infect other computers. It has a proper plugin-based architecture; new plugins can be developed and added later.

Then, Evgeny and Andrew reviewed other discovered trojans like Darkleech and Trololo_mod, which infect the Apache webserver via malicious modules. Effusion is another one, targeting nginx. Others reviewed were Ebury and Cdorked. Operation Windigo is still ongoing to try to kill them (25K servers infected, 500K+ web redirections per day and 35M spam messages sent per day!). A specific mention for Ebury, which most of the time used … but the latest version uses … What to conclude? Infections of *NIX servers are real! There is a new monetisation, and criminals use all the advantages of the server (ex: a very good uptime and direct access to the Internet).

After the coffee break came “DNS analytics, a case study” by Osama Kamal. The first (and recurring) message is: “Check your DNS logs!“. The approach is simple and has zero footprint in the customer’s infrastructure. Osama and his colleagues from Q-CERT created a toolbox to analyse DNS logs. Amongst the 20 organisations they checked, all were infected! (100%)

Osama on stage


They analysed 600M DNS events and found 250 infections, with a 25% false-positive rate. They used a classic approach to analyse the logs:

  1. Collect
  2. Parse
  3. Index
  4. Store
  5. Enrich
  6. Analyse

The toolbox was based on cloud instances and JSON files. Osama gave the results of a sample case: they started with 72M DNS events (14 days) and extracted 460K unique domains. The list was reduced to 270K after removing local domains, and to 14K after whitelisting. They executed 35 checks to extract 500 domains and, after a manual review, finally found 70 domains; 44 hosts were infected. Very interesting talk! But the toolbox is not yet ready and must be improved: they need to minimise the manual operations and scale it up for enterprise use.

Just before the lunch, Jean-Yves Marion from LORIA, this year’s host of the conference, presented his keynote: “Malware and botnet research at LORIA”. This laboratory has a lot of experience in the security field and works on topics like malware, network security, SCADA systems and even drones! But Jean-Yves focused his keynote on x86 malware.

Jean-Yves on stage

When a sample must be analysed, we face three challenges: identification, classification and detection. Jean-Yves started with a theoretical speech and asked the following questions (and gave answers):

A classic approach is to dump the memory, disassemble it and generate a control flow graph. But almost all malware today is self-modifying (93%), requiring multiple decryption waves in the packer to get the right code. Jean-Yves made a demo with tElock99, which requires 17 waves! The next part of the presentation was harder (read: with a lot of assembler in the slides). I noted an interesting project developed by LORIA: CoDisasm, concatic disassembly of self-modifying binaries with overlapping instructions.

The lunch break (very delicious, as usual) was followed by David Sancho’s presentation on Operation Emmental: “Holes in banking 2FA”. (The originally scheduled talk was cancelled due to a travel issue for its speaker.) David started with two questions for the audience: “Who never helped a friend clean his infected laptop?” and “Aren’t we always thinking: what did he click on?”. And who does not remember the DnsChanger trojan from 2009?

David on stage

David explained in detail how attackers abuse their victims using a clever attack, based on the following components:

In this attack, an AV is usually not effective because there is no persistence. If the malicious code is not detected at that very second, it never will be! (the trojan deletes itself). The infrastructure deployed by the attackers contains DNS servers, hosting servers, SMS receivers and C&C servers. Usually they run the campaign for a short time (10 days), then delete everything. Very nice analysis of an attack. Detailed information is available in David’s report here.

And the conference ended with the talk “ZeuS meets VM – Story so far” by Maciej Kotowicz. Again, ZeuS was the topic of this talk. This trojan has been in the news for a while; most security companies have communicated about it, but some give false information. That’s why Maciej decided to present this talk. It’s a happy family of different versions of ZeuS: he reviewed many variants, including ICEX, Citadel, PowerZeus, KiNS, VMZeuS and ZeuSVM. He also introduced libzpy, a Python library to play with Zbot, and made some demos based on Cuckoo!

Maciej on stage

That’s over for this second edition which was, according to many attendees, a big success! Some numbers:

The next edition has already been announced: it will be held in Paris, 2-4 December 2015.

The Botconf Crew

Botconf 2014 archives are also online:

by Xavier at December 05, 2014 09:54 PM

December 04, 2014

Wim Coekaerts

EBS VMs explained

A great blog entry from the EBS team explaining the various Oracle VM appliances for EBS :

by wcoekaer at December 04, 2014 10:59 PM

Xavier Mertens

Botconf 2014 Wrap-Up Day #2

Here is my wrap-up for the second day. Yesterday we had a nice evening with some typical local food and wine, then went for a walk across the city of Nancy. Let’s go!

Paul Rascagnères kicked off the second day with a workshop about the WinDbg debugger and some useful tips. It started with Paul’s questions like “Who thinks it’s a good idea to speak about ASM/debuggers/WinDbg at 9AM?” or “Who thinks I need a costume?”. What started as a joke means Paul is now expected to wear a nice costume during his talks. Challenge accepted again!

Paul Santa Claus

Starting the day with assembler code is not easy. For the fun, Paul proposed to use a new TLP code (“Traffic Light Protocol”) for this presentation: TLP:Rainbow. The Wikipedia page was changed but it was of course quickly rolled back to the previous version. Here is a screenshot of the updated version:


Then, he mentioned the singer Régine (who was born in Anderlecht, close to Brussels) and the recent Belgacom pwnage by the malware with the same name. But enough jokes, and back to the topic: WinDbg. This is a Windows debugger that has powerful features but is not easy to use. According to Paul, there are two major issues: the layout and the built-in scripting language. The first problem can be solved by using plugins to reorganise the interface, and the second one can be fixed by using pykd, which adds Python support to the debugger. But don’t forget that it is prohibited to use WinDbg on planes (true story). There are two ways to debug. The first is live (dynamic) debugging; a virtual machine can also be debugged live, and Paul gave an example with a VirtualBox configuration to achieve this. The second way is to use a Windows crash dump; in this case, we perform static analysis, mostly used during incident response. And Paul started to give some tips:

Finally, Paul gave an example of pykd usage to analyse the Uroburos malware. It was a very interesting introduction to WinDbg. A suggestion: create a poster with all those nice tips.

The next presentation was “A timeline of mobile botnets” by Ruchna Nigam. Ruchna reviewed many malware samples which targeted mobile devices. It started in 2004 with the first mobile malware, but those were not real bots (no communication with a C&C). The real precursor was Symbian Yxes in 2009 (Internet access and SMS propagation); Internet access was not so common in 2009! Ikee.B targeted iOS devices (jailbroken only) and performed SSH scanning using the classic well-known default password ‘alpine’.

Ruchna on stage

Which are the most affected mobile devices?

And we continued to review a timeline of well-known infections:

What type of data is most often stolen? IMEI numbers, IMSI numbers, phone numbers, build info and location. The communications with the C&C are performed over HTTP(S), SMS or a combination of both. This last type is nasty: if the current C&C becomes unavailable (ex: taken down), a new configuration is pushed through SMS messages! What are the criminals’ motivations? Financial motivations have become more and more important, due to premium SMS services and banking trojans. She also reviewed interesting cases like Android/SMSHowU which, when an SMS “How are you?” is received, sends the phone’s position via a Google Maps URL. Twikabot is a malware which communicates via Twitter. NotCompatible converts the infected phone into a proxy whose traffic is directed by the C&C; guess what happens when the phone is connected to a private network via a WiFi hotspot? There are multiple attack vectors: Google Play can be used, but malicious apps are often quickly removed there. Third-party app stores are a real pain! FakePlay.C is also a nasty one: it infects PCs the phone is connected to over USB, using the autorun feature. Chuli was a specific one, sent via mail attachment to all attendees of a conference. Some others target groups of hacktivists, like Xsser, which targeted the “Occupy Central” participants in Hong Kong (sent via a WhatsApp message). Ruchna’s conclusions: mobile malware is a fact, not fiction. There are more C&C channels than on a regular victim, and they can be upgraded remotely (increasing resilience). It was a nice review of the situation in the mobile world, which is no better than that of classic computers; maybe even worse, because people don’t consider phones to be real computers. She closed the talk with a nice demo of a live infection seen from the C&C side… What a sensation of power!

The coffee break was followed by “Ad fraud botnet 101” by Aleksander Tsvyashchenko and Sebastian Milieus from Google, which is a key player in ad networks! The presentation started with some examples like Win32.Bot.ZeroAccess2 and Win32.Bot.ZeusZbot_P2P. The ZeroAccess traffic volume is estimated at millions of ad requests! It’s huge, it fuels the underground economy, and the attack surface keeps increasing. The presentation focused on bots which generate fake clicks on ads (fraud), not on ads which deliver malicious code.

The Google team on stage

A classic ads ecosystem is simple: a direct deal between an advertiser and a publisher. In this ecosystem, the main issue is scalability. Then came another model with more and more new players between the two original parties (ex: Google). The model can become very complex, and each party takes a percentage of the price of a click. Billing models are based on CPC (“Cost Per Click“), CPM (“Cost Per Mille“) or CPE (“Cost Per Engagement“). With more and more parties trying to grab some cents at each click, it is normal to see more and more fraud attempts! That was the second part of the presentation: Aleksander and Sebastian explained in detail how such botnets work. The primary goal is to mimic a user’s browser (environment, user activity, mouse movements, etc). Each browser can be “infected” and have components added to automate such tasks. Once infected, the computer surfs the web, detects clickable zones and clicks! But such artificial clicks have patterns that can be used to detect them (they click only on borders), as seen in the picture below:

Automated click patterns

(Picture credits: @gnppn)

This behaviour is used by Google to detect fraud! A question raised while talking with friends after the presentation: what percentage of fraud is “allowed” by Google? Of course, as seen at the beginning of the talk, Google is part of the business and each click earns a few cents. Also interesting was the monitoring of the botnet and the attackers’ very quick reaction time in case of a take-down operation (the botnet being down means no revenue!). Very interesting talk; I learned a lot of interesting stuff in this field.

Then, Pedro Camelo and Joao Moura presented a tool called CONDENSER. It is a graph-based approach to detecting botnets. After a short introduction about botnets (was it necessary for the Botconf attendees?), they focused on the available detection methods: in a passive way, we can inspect packets, network flows and domain name syntax (ex: % of vowels, length, English words, …); in an active way, we can check DNS resolutions. Once collected, you can correlate the information and play with infected machines, IP address reputation, malware analysis, … A demo was performed: tagging domain names with bot names. Interesting, but the demo was very light.
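A toy illustration of such passive lexical features (the domains are made up, and real detectors combine many more signals): algorithmically generated names tend to be long with few vowels compared to human-chosen ones.

```shell
# Compute length and vowel count for a couple of domain labels.
for d in botconf xqw1f9zk3pb; do
  len=${#d}
  vowels=$(( $(printf '%s' "$d" | tr -cd 'aeiou' | wc -c) ))
  echo "$d length=$len vowels=$vowels"
done
# → botconf length=7 vowels=2
# → xqw1f9zk3pb length=11 vowels=0
```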


After the lunch, we restarted with Ivan Fontaresky and Ronan Mouchoux, who presented “APT investigations backstage”. The first indicators come from OSINT, third parties, customers or bad-guy watching. What is a suspect? A piece of data! IP addresses, domains, logins, binaries. They can be identified from a knowledge base or from different aspects: behavioural, social or financial. The keyword is “Follow your enemy“! Build your knowledge base! But what is suspect behaviour?

Ivan & Ronan on stage

The next part of the talk was how to build a performant team. Choose your area of expertise. Try to estimate how many people you need. Keep in mind that more people means more competencies but also more complexity in sharing info. People must have an area of competence but also be a backup for a colleague. To summarise, they explained how to work as a team. When investigations are over, what do we communicate? Do we anonymise? What about privacy (even the attacker's)? Technically we are all good, but we have to improve at working in teams, and in teams of teams.

The following talk was very interesting: “Middle income malware actors in Poland: VBKlip & beyond” by Lukasz Siewierski. This talk focused on a malware that is active in Poland only; Lukasz never saw it outside his country. In Poland, people do a lot of e-banking and wire transfers. Bank account numbers are based on the following format:

Polish Bank Account Numbers

How do people work? They receive a payment request from a third party via a PDF file and copy/paste the bank account number. Guess what? If they are infected by the VBKlip malware, the clipboard content is changed on the fly when the data corresponds to a bank account number: it is replaced by another number (the attacker's). Nasty! This malware can be detected using the following IOCs: the processes atidrv32.exe, winlog.exe and ms32sound.exe are running on an infected host. SMTP is used for communications. The infection vector is mainly email. Banatrix is another alternative. It has a funny way to replace data using a keylogger: it waits for 26 characters corresponding to a bank account number, sends 26 backspaces and then retypes the new number! Very nice presentation about an unusual way to process data.
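As a rough illustration, the clipboard-swap pattern VBKlip exploits can be expressed as a small detection check. This is a minimal sketch, not code from the talk: the function names and the whitespace/dash stripping are my own; only the 26-digit account format comes from the presentation.

```python
import re

# Polish bank account numbers are 26 digits long
# (2 check digits, 8 bank/branch digits, 16 account digits).
NRB_RE = re.compile(r"^\d{26}$")


def looks_like_polish_account(text: str) -> bool:
    """Return True if the text matches the 26-digit account format."""
    digits = re.sub(r"[\s-]", "", text)  # strip common formatting
    return bool(NRB_RE.match(digits))


def clipboard_swap_suspected(before: str, after: str) -> bool:
    """Flag the VBKlip pattern: the clipboard held an account number
    and now holds a *different* account number."""
    return (looks_like_polish_account(before)
            and looks_like_polish_account(after)
            and before != after)
```

A monitoring tool could call `clipboard_swap_suspected` on every clipboard change and warn the user before they paste into their e-banking site.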

Paul Jung presented “Bypassing sandboxes for fun”. I missed this one in October and I was very happy to follow this talk, which looked interesting. In his introduction, Paul explained that, today, every malware sample is unique. To help with detection, companies are adding sandboxes to their networks and playing in the sand:

Sandbox as a device

There are plenty of solutions on the market, free or commercial, and Paul tested them. The goal for the malware is to detect whether it is running in a virtualised environment. If that is the case, the malware just stops its execution. So, Paul explained how to look for virtualisation! VMware leaves a huge footprint in the registry, files and processes. MAC address artefacts can be detected. Virtualised hardware often has the same serial numbers. Other techniques are to use 64-bit software, or to check the browser history, user activity, or a potential link with an Active Directory.

The next step is to be stealthier and to check without calling APIs. A good practice is to detect the number of CPUs: most VMs have only one, but today even the smallest computer has at least a dual core. It is also possible to detect the hypervisor brand via CPUID. Another tip: VirtualBox does not report an L1 cache for the CPU, while real CPUs have had L1 caches for years! Of all the products tested, only Joe Sandbox resisted Paul's tests, but it's not really a nice appliance like other products and not usable as-is in a network. I had an interesting chat with Paul in the evening: very interesting feedback about other solutions and how they reacted to his research. To conclude the talk: harden your VMs to reduce the detection ratio, and never install the provided tools. Here are some interesting links about this topic:
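A couple of the cheap heuristics Paul mentioned (CPU count, hypervisor presence) can be sketched in a few lines. This is an illustrative, Linux-only approximation, not Paul's code: it reads the kernel-exposed "hypervisor" CPUID flag from /proc/cpuinfo rather than issuing CPUID directly.

```python
import os


def vm_indicators() -> list:
    """Collect cheap virtualisation hints; names are illustrative."""
    hints = []

    # A single-CPU guest is a classic sandbox tell: real desktops
    # have been multi-core for years.
    if (os.cpu_count() or 1) < 2:
        hints.append("single CPU")

    # On Linux the kernel exposes the 'hypervisor' CPUID flag
    # in /proc/cpuinfo when running under a VMM.
    try:
        with open("/proc/cpuinfo") as f:
            if "hypervisor" in f.read():
                hints.append("hypervisor CPUID flag")
    except OSError:
        pass  # not Linux, or /proc unavailable

    return hints
```

A hardened sandbox would present at least two CPU cores and mask such flags to lower its detection ratio.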

Another very interesting one: “Learning attribution techniques by researching a Bitcoin stealing cyber criminal” by Mark Arena from INTEL 471. What is attribution? It's taking an incident and figuring out who and why: who's behind the keyboard? It depends on the motivation of the threat actor: cyber espionage vs. hacktivism vs. cyber crime. If you can't disrupt the criminal, make his life difficult (think about the “WANTED” posters published by the FBI).

Mark on stage

Bitcoin is an interesting case. When searching for an actor, it's a human habit to reuse stuff (usernames, ICQ numbers, IPs, etc). It's very difficult over a long period to completely separate two online entities. Attribution is not just a name and an address: we need to know a lot about the actor (goals, methodology, etc). How to start? From an incident (action – reaction), or from an actor (actor-centric)? Mark's talk could be subtitled “Doxing for Dummies“: he explained step by step how he investigated the case of an attacker (probably located in France) who stole Bitcoins. He found plenty of information like:

It was a great step-by-step introduction. The guy is probably located in France and targets users of crypto currencies. His new malware can easily be detected, but Mark is not sure who's behind the keyboard!

The next two presentations focused on the same topic: DDoS. The first one was presented by Dennis Schwarz from Arbor Networks, called “The Russian DDoS one: Booters of Botnets” or “RD1“. You want to rent a botnet? It will cost you $5 for 1h. DDoS became a real business with plenty of potential customers. They also have a reputation service, vouchers, dispute resolution procedures, exactly like a normal business!

DDoS botnet price

While the first part was interesting, Dennis continued with a presentation of all known botnets, one by one, with the same slide layout: what they propose, the architecture in place and known targets. This was not very relevant for me! Then, again about DDoS, Peter Kalnai and Jaromir Horeisi presented “Chinese chicken: multi-platform DDoS botnets“. Over many slides, they explained a botnet based on ELF binaries… Very nice research, but the slides contained too much information (MD5s, SHA1s, IP addresses, filenames, etc), impossible to take notes! If you are interested, I suggest you have a look at the great review made by MalwareMustDie.

Finally, Tom Ueltschi closed the second set of talks with “Ponmocup Hunter 2.0 – The sequel”. Tom presented his project during the first edition of Botconf (link to my wrap-up) and came back with new details. He did a very good job tracking this botnet, which is huge (21M infected servers) but remains below the radar! He also gave details about the Zuponcic exploit kit.

Tom on stage

What about the Ponmocup finder for the masses? The idea is to find more infected servers. A new version of the script was written in Python and tested against the Alexa 1M list as a PoC. One finding: being in the top ranks does not mean that a website is more secure! How to prevent and detect infections? Block known IP ranges and watch for DNS lookups to the botnet's domains. The botnet also has an interesting anti-sinkhole technique (data is encrypted into a cookie). Once again, good job Tom!

A second set of lightning talks was organised to close the day. The topics were:

A social event was organised at the Musée des Beaux-Arts of Nancy. Very nice place, very nice food and people, what else to finish the day?

Place Stanislas, Nancy

by Xavier at December 04, 2014 10:15 PM

Frank Goossens

Muziek van Onze Buis: Luc De Vos was aanwezig

I was never really a big Gorki fan (Mia aside), but Luc De Vos will now never be present again, and this beautiful lament with Tom Barman is therefore more than fitting:

YouTube Video

by frank at December 04, 2014 04:31 PM

December 03, 2014

Xavier Mertens

Botconf 2014 Wrap-Up Day #1

Botconf is back for a second edition! The first one was held last year in Nantes; this year, botnet fighters from many countries are back, in Nancy, to discuss… botnets! As the name says, Botconf is a security conference that focuses only on botnets. This is a very interesting topic because everybody was, is or will be infected and part of a botnet. Let the one who has never found an infected device on his network throw the first hard drive! About the attendees: 200 people joined in Nancy from many countries (South Africa, Israel, South America, Korea, Japan, and most European countries). There are 25 talks on the schedule, prepared by more than 30 top speakers.

The first day started slowly around 9AM with a cool breakfast and some coffee. After a short introduction by Eric Freyssinet from the board of organisers, the first keynote was presented by the United Kingdom’s National Crime Agency (NCA) about botnet takedowns: Benedict Addis & Stewart Garrick, “Our GameOver Zeus experience”. Their very first message to the audience was: “We need each other (security researchers & law enforcement)“.

Eric and keynotes speakers

Stewart explained the average knowledge of botnets within law enforcement agencies: it is completely confusing! Stewart’s job is to make things clear for his colleagues. He has an interesting view: cybercrime works in two dimensions, compared to a classic murder which works in three dimensions. The Zeus takedown was a long process: it started in October 2011, when the botnet was clearly identified, and was completed in June 2014 with a public announcement. During this timeline, several operations were organised, like “Tovar”, “Cleanslate” or “Gonogo”. The name of the last one was chosen because the operation against the botnet was postponed several times. Communication was key and, for the very first time, all UK policemen received a small note about the botnet which briefly explained what it was. There was also a huge presence in the newspapers and other media, including TV channels. Another initiative was the creation of a dedicated website. Even English tabloids broadcast messages like “Two weeks to save your computer”. And it was successful: people were receptive, and there was a two-thirds reduction in UK IPs that were part of the botnet and a massive uptake of AV tools.

A few words about Cryptolocker: this is a visible threat, people know they are infected. Then the speakers explained the DGA (“Domain Generation Algorithm”) used by Cryptolocker. The key fact was that the generated domains were predictable: it was possible to register “future” domains to build a sinkhole. What was the lesson learned? We have to understand how the business model behind botnets works, map the infrastructure, share and coordinate. We also have to learn from mistakes and use media to up-skill users. A nice idea was presented during the keynote: the creation of a “last resort registrar” to put domains in and keep control of them.
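To illustrate why a predictable DGA enables sinkholing, here is a toy, date-seeded generator. It is emphatically not Cryptolocker's real algorithm (the hash, domain length and TLD list are invented); it only shows that the bot and a researcher who has reversed the code both derive the same rendezvous domains in advance.

```python
import hashlib
from datetime import date, timedelta

TLDS = ["com", "net", "org"]  # illustrative, not the real TLD list


def toy_dga(day: date, count: int = 5) -> list:
    """Deterministic, date-seeded domain generator: anyone who knows
    the algorithm computes the same domains for a given day."""
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(seed).hexdigest()
        # Map the first 12 hex chars onto lowercase letters.
        name = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:12])
        domains.append(f"{name}.{TLDS[i % len(TLDS)]}")
    return domains


# A defender can enumerate tomorrow's domains today and register them
# first, turning the bots' C&C rendezvous into a sinkhole.
upcoming = toy_dga(date.today() + timedelta(days=1))
```

Real takedown operations coordinate this pre-registration with registrars across many TLDs, which is exactly why the "last resort registrar" idea from the keynote is appealing.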

The first regular talk was “Semantic Binary Exploration” by Laura Guevara and Daniel Plohmann. The goal was to explain how to speed up malware analysis. Their motivating question: what is this malicious code doing? Different samples share the same features and evolve within the same “family”. Malware developers are also developers, and they too copy/paste code found on the Internet. That’s why we can find common pieces of code in some malware.

Laura & Daniel on stage

Static analysis decouples analysis from the malware’s execution and, using automated tools, explores the control flow graph. The approach presented by Laura was to examine sequences of API calls and try to infer the user-level function attached to them. Checking API calls is a common way to analyse behaviour. Semantics means assigning meaning to sets of common malware operations (e.g. copying/deleting files for hidden persistence, communications with a C&C). All those tasks are performed by calling specific APIs. An example was given with process injection:

  -> Process32First 
    -> Process32Next 
      -> OpenProcess 
        -> WriteProcessMemory 
          -> CreateRemoteThread

The methodology is the following: collect malware behaviour, define semantics and explore! The next step was to explain how to analyse the arguments passed to the functions, and then how to reduce the amount of data by keeping only the useful API calls using N-gram queries. The presentation ended with a demo performed by Daniel: semanticExplorer @ IDAscope (an IDA extension that helps with malware analysis).
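The N-gram reduction over API call sequences can be sketched as follows. The function names and the choice of a 3-gram signature are my own; the injection chain is the one shown above.

```python
def ngrams(calls, n=3):
    """Sliding n-grams over an ordered list of API call names."""
    return [tuple(calls[i:i + n]) for i in range(len(calls) - n + 1)]


# Hypothetical semantic label for the injection chain from the talk.
INJECTION = ("OpenProcess", "WriteProcessMemory", "CreateRemoteThread")


def matches_injection(calls):
    """True if the trace contains the injection chain as a contiguous run."""
    return INJECTION in ngrams(calls, len(INJECTION))


trace = ["Process32First", "Process32Next", "OpenProcess",
         "WriteProcessMemory", "CreateRemoteThread"]
```

In practice a tool like IDAscope would extract such call sequences from the disassembly and match them against a library of labelled n-grams ("process injection", "persistence", "C&C traffic", …).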

IDAscope Demo

The tool is based on “helpers” which allow performing regular (boring) tasks like analysing the communication with C&C servers, understanding the crypto routines or automating the search for YARA signatures. Daniel’s IDAscope repository is available online.

After the (excellent) lunch, the afternoon started with a presentation about the Havex RAT by Giovanni Rattaro, Paul Rascagnères and Renaud Leroy. They started with an introduction about this remote access tool. The first IOCs were published on Pastebin by Giovanni in March 2014. The complete analysis was a long process: from January 2014 until today. When SCADA systems were targeted, it caused a new storm.

Presenting the Havex RAT

The next part was presented by Paul, who explained more technical insights into the malware. The malware is present via a DLL called TMPprovider0xx.dll (xx = the version number). The features are classic (file upload/download, command execution), but other modules are very interesting, like the OPC scanner, info gathering, network scanner or password stealer. Note that the XOR key is always the same and is stored in base64. The next part was based on an analysis of the C&C logs and code. The log file is called testlog.php. All requests are logged in base64 and contain many fields (including “in” and “out” byte counters). Interesting feature: once a logfile has been downloaded from the C&C, it is immediately deleted. The last part was an analysis of the log files (the first one was generated in February 2011!). Some key numbers:

At the end of the presentation, an idea was proposed: the creation of a CERT “2.0” with new ways of working to improve botnet-fighting capabilities.
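The log handling described above (base64-encoded entries, a constant XOR key) can be sketched as a small decoder. The key value and the exact layering of XOR and base64 are assumptions for illustration only; the real Havex key is not reproduced here.

```python
import base64

# Illustrative key; the real one stays constant across samples
# and is itself stored in base64 inside the malware.
KEY = b"secretkey"


def xor(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR, its own inverse."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


def decode_log_line(line: str) -> bytes:
    """Reverse the assumed (XOR then base64) encoding of a testlog.php entry."""
    return xor(base64.b64decode(line), KEY)


def encode_log_line(plain: bytes) -> str:
    return base64.b64encode(xor(plain, KEY)).decode()
```

Because the key never changes, recovering it from one sample is enough to decode every C&C log ever captured, which is what made the multi-year log analysis possible.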

Then, “The many faces of Mevade” was presented by Martijn Grooten and Joao Gouveia. The first message from Martijn was: fighting a botnet does not always start with reverse engineering a sample. If we compare Mevade with Regin, from a technical point of view Regin is much bigger than Mevade, but from an infection point of view it’s the opposite!

Martijn & Joao on stage

This malware appeared in January 2012 and was called Win32/Sefnit by Microsoft. In September 2013, Tor reported a sharp increase in connections from all countries. Joao presented the tool they used to gather intelligence: Cyberfeed. It takes data from multiple security feeds (URLs, trojans, spam traps, etc), analyses them and produces data for subscribers (via dashboards or an API). The presentation went deeper into Mevade with information about the domain names used and communications with the C&C. Martijn & Joao’s conclusion is that chasing botnets does not rely only on reverse engineering: Google can also be a good research tool, and some botnets can be very big yet not well known.

And we continued with “Splicing and dicing 2014: Examining this year’s botnet attack trends” by Nick Sullivan from Cloudflare. This presentation did not look at botnets from an analysis point of view but from a network perspective. Cloudflare being a big player in cloud services (DNS & reverse proxy), they handle a huge amount of data which contains interesting stuff: they have a great position from which to capture traffic.

Nick on stage

Nick reviewed some techniques used in DoS attacks. Today, most of them are based on reflection/amplification attacks. Interesting fact: 25% of networks allow IP spoofing! Common protocols are DNS, NTP & SNMP. But attacks can also be performed at layer 7, for example with HTTP. Some examples were also reviewed, like HTTP brute-force. Nick finished with some new trends:

But there are also potential trends:

Based on the data collected by Cloudflare, attacks start immediately after a vulnerability is disclosed (e.g. WordPress or Drupal). Conclusion: always patch as soon as possible!
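The reflection/amplification idea Nick reviewed reduces to a simple ratio: bytes reflected at the victim per spoofed byte sent. A minimal sketch, with illustrative packet sizes rather than measured values:

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification: bytes the reflector sends to the
    (spoofed) victim per byte the attacker sends."""
    return response_bytes / request_bytes


# Illustrative sizes: a small spoofed query eliciting a large answer.
dns_any = amplification_factor(60, 3000)     # a ~60-byte ANY query, large response
ntp_monlist = amplification_factor(8, 4000)  # tiny monlist request, huge reply
```

This is why open DNS resolvers and NTP servers answering `monlist` are so attractive to booter services, and why the 25% of networks that still allow IP spoofing matter so much.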

After a coffee break, another tool was presented by Peter Kleissner: VirusTracker, a bot monitoring tool. Peter explained the multiple challenges of running a large-scale sinkhole operation. In September 2012, they started the largest bot monitoring system. The goals are to generate statistics such as size and long-term geographic distribution, to detect changes/movements and to alert infected organisations. To give an idea of the system today:

Peter on stage

So, what are the challenges of operating such a platform?

To reduce costs, the solution is automation! The following processes are fully automated: the registration of new domains, the classification of data and their distribution. Key elements are the creation of a distribution network with CERTs to warn of infections, but also the detection of false positives (generated by web crawlers, domain tools, applications like Websense, etc). Peter explained the challenges of monitoring P2P communications between infected systems and their C&C. Some malware even implements anti-sinkholing techniques. Finally, mobile botnets are coming and must also be monitored.

The last talk was presented by Karine e Silva, a lawyer researching the legal aspects of botnet fighting: “How to dismantle a botnet: the legal behind the scene”. This was a non-technical talk but a very interesting one. We face laws all the time, and they don’t always go in the direction of the security researchers.

Karine on stage

Her presentation was directly followed by a round-table debate about the same topic. Several interesting questions came from the audience. Basically, there is a clear lack of international laws. A very interesting question was about the potential creation of “botnet paradises”, like the tax havens we have today: what if a country does not apply international laws?

Finally, the day ended with a first set of lightning talks. The principle is easy: you come on stage and receive 3 minutes (no more, no less) to present your tool, research, idea, … The following topics were presented:

After the talks, the social networking was back, some food, some drinks with old and new friends and a walk in the city of Nancy. The first day was very good. Excellent organisation with very nice ideas like providing free tickets for public transport and a welcome booth at the train station! Stay tuned for more stuff tomorrow.

by Xavier at December 03, 2014 09:13 PM