Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

May 24, 2017

In this blog post I outline my thinking on sharing code that deals with different types of Entities in your domain. We’ll cover what Entities are, code reuse strategies, pitfalls such as Shotgun Surgery and Anemic Domain Models, and finally Bounded Contexts.

Why I wrote this post

I work at Wikimedia Deutschland, where amongst other things, we are working on software called Wikibase, which is what powers the Wikidata project. We have a dedicated team for this software, called the Wikidata team, which I am not part of. As an outsider who is somewhat familiar with the Wikibase codebase, I came across a write-up of a perceived problem in this codebase and a pair of possible solutions. I happen to disagree with the diagnosis of the actual problem and, as a consequence, also with the solutions. Since explaining why I think that takes a lot of general (non-Wikibase specific) explanation, I decided to write a blog post.

DDD Entities

Let’s start with defining what an Entity is. Entities are a tactical Domain Driven Design pattern. They are things that can change over time and are compared by identity rather than by value, unlike Value Objects, which do not have an identity.

Wikibase has objects which are conceptually such Entities, though they are implemented … oddly from a DDD perspective. In the excerpt below, the word entity is, confusingly, not referring to the DDD concept. Instead, the Wikibase domain has a concept called Entity, implemented by an abstract class with the same name and extended by specific types of Entities, i.e. Item and Property. Those are the objects that are conceptually DDD Entities, yet they diverge from what a DDD Entity looks like.

Entities normally contain domain logic (the lack of this is called an Anemic Domain Model), and don’t have setters. The lack of setters does not mean they are immutable, it’s just that actions are performed through methods in the domain language (see Ubiquitous Language). For instance “confirmBooked()” and “cancel()” instead of “setStatus()”.
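To make that concrete, here is a minimal sketch (in Python, with an invented Booking entity, not anything from Wikibase) of an Entity that exposes domain-language methods rather than setters, and is compared by identity rather than by value:

```python
from enum import Enum


class BookingStatus(Enum):
    PENDING = "pending"
    CONFIRMED = "confirmed"
    CANCELLED = "cancelled"


class Booking:
    """Hypothetical DDD Entity: mutated via domain language, compared by identity."""

    def __init__(self, booking_id: str):
        self.id = booking_id
        self.status = BookingStatus.PENDING

    def confirm_booked(self) -> None:
        # A domain-language method instead of a generic setStatus() setter,
        # so the Entity can enforce its own invariants.
        if self.status is BookingStatus.CANCELLED:
            raise ValueError("A cancelled booking cannot be confirmed")
        self.status = BookingStatus.CONFIRMED

    def cancel(self) -> None:
        self.status = BookingStatus.CANCELLED

    def __eq__(self, other) -> bool:
        # Entities are compared by identity, not by the values of their fields.
        return isinstance(other, Booking) and other.id == self.id
```

Note how two Booking instances with the same id are equal even when their statuses differ, which is exactly the identity-based comparison that distinguishes Entities from Value Objects.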

The perceived problem

What follows is an excerpt from a document aimed at figuring out how to best construct entities in Wikibase:

Some entity types have required fields:

  • Properties require a data type
  • Lexemes require a language and a lexical category (both ItemIds)
  • Forms require a grammatical feature (an ItemId)

The ID field is required by all entities. This is less problematic, however, since the ID can be constructed and treated the same way for all kinds of entities. Furthermore, the ID can never change, while other required fields can be modified by an edit (even a property’s data type can be changed using a maintenance script).

The fact that Properties require the data type ID to be provided to the constructor is problematic in the current code, as evidenced in EditEntity::clearEntity:

// FIXME how to avoid special case handling here?
if ( $entity instanceof Property ) {
  /** @var Property $newEntity */
  $newEntity->setDataTypeId( $entity->getDataTypeId() );
}

…as well as in EditEntity::modifyEntity():

// if we create a new property, make sure we set the datatype
if ( !$exists && $entity instanceof Property ) {
  if ( !isset( $data['datatype'] ) ) {
     $this->errorReporter->dieError( 'No datatype given', 'param-illegal' );
  } elseif ( !in_array( $data['datatype'], $this->propertyDataTypes ) ) {
     $this->errorReporter->dieError( 'Invalid datatype given', 'param-illegal' );
  } else {
     $entity->setDataTypeId( $data['datatype'] );
  }
}

Such special case handling will not be possible for entity types defined in extensions.

It is very natural for (DDD) Entities to have required fields. That is not a problem in itself. For examples you can look at our Fundraising software.

So what is the problem really?

Generic vs specific entity handling code

Normally when you have a (DDD) Entity, say a Donation, you also have dedicated code that deals with those Donation objects. If you have another entity, say MembershipApplication, you will have other code that deals with it.

If the code handling Donation and the code handling MembershipApplication is very similar, there might be an opportunity to share things via composition. One should be very careful not to do this for things that happen to be the same but are conceptually different, and might thus change differently in the future. It’s very easy to add a lot of complexity and coupling by extracting small bits of what would otherwise be two sets of simple and easy to maintain code. This is a topic worthy of its own blog post, and indeed, I might publish one titled The Fallacy of DRY in the near future.

This sharing via composition is not really visible “from outside” of the involved services, except for the code that constructs them. If you have a DonationRepository and a MembershipRepository interface, they will look the same whether their implementations share something or not. Repositories might share cross cutting concerns such as logging. Logging is not something you want to do in your repository implementations themselves, but you can easily create simple logging decorators. A LoggingDonationRepository and LoggingMembershipRepository could both depend on the same Logger class (or, more likely, interface), and thus be sharing code via composition. In the end, the DonationRepository still just deals with Donation objects, the MembershipRepository still just deals with Membership objects, and both remain completely decoupled from each other.
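As an illustration of that kind of sharing via composition, here is a minimal Python sketch (the class and method names are invented for this example, they are not the real Fundraising code):

```python
import logging


class Donation:
    """Hypothetical entity, standing in for the real domain object."""

    def __init__(self, donation_id: str):
        self.id = donation_id


class InMemoryDonationRepository:
    """A plain repository implementation that knows nothing about logging."""

    def __init__(self):
        self._donations = {}

    def save(self, donation: Donation) -> None:
        self._donations[donation.id] = donation


class LoggingDonationRepository:
    """Decorator: adds logging around save() without touching the wrapped class."""

    def __init__(self, inner, logger: logging.Logger):
        self._inner = inner
        self._logger = logger

    def save(self, donation: Donation) -> None:
        self._logger.info("Saving donation %s", donation.id)
        self._inner.save(donation)
```

A hypothetical LoggingMembershipRepository could be injected with the very same Logger; the two repositories then share the logging code via composition while remaining completely decoupled from each other.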

In the Wikibase codebase there is an attempt at code reuse by having services that can deal with all types of Entities. Phrased like this, it sounds nice. From the perspective of the user of the service, things are great at first glance. The thing is, those services are then forced to actually deal with all types of Entities, which almost guarantees greater complexity than having dedicated services that focus on a single entity type.

If your Donation and MembershipApplication entities both implement Foobarable and you have a FoobarExecution service that operates on Foobarable instances, that is entirely fine. Things get dodgy when your Entities don’t always share the things your service needs, and the service ends up getting instances of object, or perhaps some minimal EntityInterface type.

In those cases the service can add a bunch of “if has method doFoobar, call it with these arguments” logic. Or perhaps you’re checking against an interface instead of a method, though this is by and large the same. This approach leads to Shotgun Surgery. It is particularly bad if you have a general service. If your service is really only about the doFoobar method, then at least you won’t need to poke at it when a new Entity that has nothing to do with the Foobar concept is added to the system. If the service on the other hand needs to fully save something or send an email with a summary of the data, each new Entity type will force you to change your service.
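A minimal Python sketch of this anti-pattern (all names are invented for illustration):

```python
class SummaryMailer:
    """A 'generic' service forced to special-case every entity type it meets."""

    def summarize(self, entity: object) -> str:
        # Every new entity type in the system forces another branch here:
        # this is the Shotgun Surgery the text describes.
        if hasattr(entity, "do_foobar"):
            return entity.do_foobar()
        if hasattr(entity, "amount"):
            return f"donation of {entity.amount}"
        return "unknown entity"
```

The branches accumulate as entity types are added, and nothing in the type system tells you which branches exist or which ones a new entity will fall into.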

The “if doFoobar exists” approach does not work if you want plugins to your system to be able to use your generic services with their own types of Entities. To enable that, and avoid the Shotgun Surgery, your general service can delegate to specific ones. For instance, you can have an EntityRepository service with a save method that takes an EntityInterface. In its constructor it would take an array of specific repositories, i.e. a DonationRepository and a MembershipRepository. In its save method it would loop through these specific repositories and somehow determine which one to use. Perhaps they would have a canHandle method that takes an EntityInterface, or perhaps EntityInterface has a getType method that returns a string that is also used as key in the array of specific repositories. Once the right one is found, the EntityInterface instance is handed over to its save method.

interface Repository {
    public function save( EntityInterface $entity );
    public function canHandle( EntityInterface $entity ): bool;
}

class DonationRepository implements Repository { /* ... */ }
class MembershipRepository implements Repository { /* ... */ }

class GenericEntityRepository {

    /**
     * @var Repository[]
     */
    private $repositories;

    public function __construct( array $repositories ) {
        $this->repositories = $repositories;
    }

    public function save( EntityInterface $entity ) {
        foreach ( $this->repositories as $repository ) {
            if ( $repository->canHandle( $entity ) ) {
                $repository->save( $entity );
                return;
            }
        }
    }
}
This delegation approach is sane enough from an OO perspective. It does however involve specific repositories, which raises the question of why you are creating a general one in the first place. If there is no compelling reason to create the general one, just stick to specific ones and save yourself all this unneeded complexity and vagueness.

In Wikibase there is a generic web API endpoint for creating new entities. The user provides a pile of information via JSON or a bunch of parameters, including the type of Entity they are trying to create. If you have this type of functionality, you are forced to deal with this in some way, and probably want to go with the delegation approach. To me, having such an API endpoint is very questionable, with dedicated endpoints being the simpler solution for everyone involved.

To wrap this up: dedicated entity handling code is much simpler than generic code, making it easier to write, use, understand and modify. Code reuse, where warranted, is possible via composition inside of implementations without changing the interfaces of services. Generic entity handling code is almost always a bad choice.

On top of what I already outlined, there is another big issue you can run into when creating generic entity handling code like is done in Wikibase.

Bounded Contexts

Bounded Contexts are a key strategic concept from Domain Driven Design. They are key in the sense that if you don’t apply them in your project, you cannot effectively apply tactical patterns such as Entities and Value Objects, and are not really doing DDD at all.

“Strategy without tactics is the slowest route to victory. Tactics without strategy are the noise before defeat.” — Sun Tzu

Bounded Contexts allow you to segregate your domain models, ideally having a Bounded Context per subdomain. A detailed explanation and motivation of this pattern is out of scope for this post, but suffice it to say that Bounded Contexts allow for simplification and thus make it easier to write and maintain code. For more information I can recommend Domain-Driven Design Distilled.

In the case of Wikibase there are likely a dozen or so relevant subdomains. While I did not do the analysis to create a comprehensive picture of which subdomains there are, which types they have, and which Bounded Contexts would make sense, a few easily stand out.

There is the so-called core Wikibase software, which was created for, and deals with, structured data for Wikipedia. It has two types of Entities (both in the Wikibase and in the DDD sense): Item and Property. Then there is (planned) functionality for Wiktionary, which will be structured dictionary data, and for Wikimedia Commons, which will be structured media data. These are two separate subdomains, and thus each deserve their own Bounded Context. This means having no code and no conceptual dependencies on each other or the existing Big Ball of Mud type “Bounded Context” in the Wikibase core software.


When standard approaches are followed, Entities can easily have required fields and optional fields. Creating generic code that deals with different types of entities is very suspect and can easily lead to great complexity and brittle code, as seen in Wikibase. It is also a road to not separating concepts properly, which is particularly bad when crossing subdomain boundaries.

May 23, 2017

In 2007, Jay Batson and I wanted to build a software company based on open source and Drupal. I was 29 years old then, and eager to learn how to build a business that could change the world of software, strengthen the Drupal project and help drive the future of the web.

Tom Erickson joined Acquia's board of directors with an outstanding record of scaling and leading technology companies. About a year later, after a lot of convincing, Tom agreed to become our CEO. At the time, Acquia was 30 people strong and we were working out of a small office in Andover, Massachusetts. Nine years later, we count 16 of the Fortune 100 among our customers, have grown from 30 to more than 750 employees, have more than $150MM in annual revenue, and have 14 offices across 7 countries. And, importantly, Acquia has also made an undeniable impact on Drupal, as we said we would.

I've been lucky to have had Tom as my business partner and I'm incredibly proud of what we have built together. He has been my friend, my business partner, and my professor. I learned first hand the complexities of growing an enterprise software company; from building a culture, to scaling a global team of employees, to making our customers successful.

Today is an important day in the evolution of Acquia:

  • Tom has decided it's time for him to step down as CEO, allowing him more flexibility with his personal time and to act more as an advisor to companies, the role that brought him to Acquia in the first place.
  • We're going to search for a new CEO for Acquia. When we find that business partner, Tom will be stepping down as CEO. After the search is completed, Tom will remain on Acquia's Board of Directors, where he can continue to help advise and guide the company.
  • We are formalizing the working relationship I've had with Tom during the past 8 years by creating an Office of the CEO. I will focus on product strategy, product development, including product architecture and Acquia's roadmap; technology partnerships and acquisitions; and company-wide hiring and staffing allocations. Tom will focus on sales and marketing, customer success and G&A functions.

The time for these changes felt right to both of us. We spent the first decade of Acquia laying down the foundation of a solid business model for going out to the market and delivering customer success with Drupal – Tom's core strengths from his long career as a technology executive. Acquia's next phase will be focused on building confidently on this foundation with more product innovation, new technology acquisitions and more strategic partnerships – my core strengths as a technologist.

Tom is leaving Acquia in a great position. This past year, the top industry analysts published very positive reviews based on their dealings with our customers. I'm proud that Acquia made the most significant positive move of all vendors in last year's Gartner Magic Quadrant for Web Content Management and that Forrester recognized Acquia as the leader for strategy and vision. We increasingly find ourselves at the center of our customers' technology and digital strategies. At a time when digital experiences mean more than just web content management, and data and content intelligence play an increasing role in defining success for our customers, we are well positioned for the next phase of our growth.

I continue to love the work I do at Acquia each day. We have a passionate team of builders and dreamers, doers and makers. To the Acquia team around the world: 2017 will be a year of changes, but you have my commitment, in every way, to lead Acquia with clarity and focus.

To read Tom's thoughts on the transition, please check out his blog post. Michael Skok, Acquia's lead investor, also covered it on his blog.

Tom and Dries

May 19, 2017

The post CentOS 7.4 to ship with TLS 1.2 + ALPN appeared first on

Oh happy days!

I've long been tracking the "Bug 1276310 -- (rhel7-openssl1.0.2) RFE: Need OpenSSL 1.0.2" issue, where Red Hat users are asking for an updated version of the OpenSSL package, mainly to get TLS 1.2 and ALPN.

_openssl_ rebased to version 1.0.2k

The _openssl_ package has been updated to upstream version 1.0.2k, which provides a number of enhancements, new features, and bug fixes, including:

* Added support for the datagram TLS (DTLS) protocol version 1.2.

* Added support for the TLS automatic elliptic curve selection.

* Added support for the Application-Layer Protocol Negotiation (ALPN).

* Added Cryptographic Message Syntax (CMS) support for the following schemes: RSA-PSS, RSA-OAEP, ECDH, and X9.42 DH.

Note that this version is compatible with the API and ABI in the *OpenSSL* library version in previous releases of Red Hat Enterprise Linux 7.
RFE: Need OpenSSL 1.0.2

The ALPN support is needed because in the Chrome browser, server-side ALPN support is a prerequisite for HTTP/2. Without it, Chrome users don't get to use HTTP/2 on your servers.

The newly updated packages for OpenSSL are targeting the RHEL 7.4 release, which -- as far as I'm aware -- has no scheduled release date yet. But I'll be waiting for it!

As soon as RHEL 7.4 is released, we should expect a CentOS 7.4 release soon after.


There was a lot of buzz about the leak of two huge databases of passwords a few days ago. This has been reported by Troy Hunt on his blog. The two databases are called “Anti-Trust-Combo-List” and “Exploit.In“. While the sources of the leaks are not officially known, there are some ways to discover some of them (see my previous article about the “+” feature offered by Google).

A few days after the first leak, a second version of “Exploit.In” was released with even more passwords:

Exploit.In (2)
With the huge amount of passwords released in the wild, you can assume that your password is also included. But what do those passwords look like? I used Robin Wood‘s tool pipal to analyze those passwords.

I decided to analyze the Anti-Trust-Combo-List, but pipal requires a lot of memory to generate the statistics, and the analysis always failed due to a lack of resources despite several restarts. I therefore used a sample of the passwords and successfully analyzed 91M of them. The results generated by pipal are available below.

What can we deduce? Weak passwords remain classic. Most passwords have only 8 characters and are based on lowercase characters. Interesting fact: users like to “increase” the complexity of their password by adding trailing numbers:

  • Just one number (often because they have to change the password regularly and just increment it at every expiration)
  • By adding their birth year
  • By adding the current year
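These trailing-digit patterns are easy to check for yourself; the following Python sketch (my own, not part of pipal) classifies the ending of a password:

```python
import re
from datetime import date


def classify_trailing_digits(password: str):
    """Label the trailing-digit pattern of a password, or return None."""
    match = re.search(r"(\d+)$", password)
    if match is None:
        return None  # no trailing digits at all
    digits = match.group(1)
    if len(digits) == 1:
        return "single digit"
    if len(digits) == 4:
        year = int(digits)
        if year == date.today().year:
            return "current year"
        # Rough, arbitrary range for plausible birth years.
        if 1940 <= year <= 2010:
            return "possible birth year"
    return "other digits"
```

Running it over the top-20 list above would flag “password1” as a single trailing digit and “passer2009” as a possible year, matching the patterns observed in the statistics.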
Basic Results

Total entries = 91178452
Total unique entries = 40958257

Top 20 passwords
123456 = 559283 (0.61%)
123456789 = 203554 (0.22%)
passer2009 = 186798 (0.2%)
abc123 = 100158 (0.11%)
password = 96731 (0.11%)
password1 = 84124 (0.09%)
12345678 = 80534 (0.09%)
12345 = 76051 (0.08%)
homelesspa = 74418 (0.08%)
1234567 = 68161 (0.07%)
111111 = 66460 (0.07%)
qwerty = 63957 (0.07%)
1234567890 = 58651 (0.06%)
123123 = 52272 (0.06%)
iloveyou = 51664 (0.06%)
000000 = 49783 (0.05%)
1234 = 35583 (0.04%)
123456a = 34675 (0.04%)
monkey = 32926 (0.04%)
dragon = 29902 (0.03%)

Top 20 base words
password = 273853 (0.3%)
passer = 208434 (0.23%)
qwerty = 163356 (0.18%)
love = 161514 (0.18%)
july = 148833 (0.16%)
march = 144519 (0.16%)
phone = 122229 (0.13%)
shark = 121618 (0.13%)
lunch = 119449 (0.13%)
pole = 119240 (0.13%)
table = 119215 (0.13%)
glass = 119164 (0.13%)
frame = 118830 (0.13%)
iloveyou = 118447 (0.13%)
angel = 101049 (0.11%)
alex = 98135 (0.11%)
monkey = 97850 (0.11%)
myspace = 90841 (0.1%)
michael = 88258 (0.1%)
mike = 82412 (0.09%)

Password length (length ordered)
1 = 54418 (0.06%)
2 = 49550 (0.05%)
3 = 247263 (0.27%)
4 = 1046032 (1.15%)
5 = 1842546 (2.02%)
6 = 15660408 (17.18%)
7 = 14326554 (15.71%)
8 = 25586920 (28.06%)
9 = 12250247 (13.44%)
10 = 11895989 (13.05%)
11 = 2604066 (2.86%)
12 = 1788770 (1.96%)
13 = 1014515 (1.11%)
14 = 709778 (0.78%)
15 = 846485 (0.93%)
16 = 475022 (0.52%)
17 = 157311 (0.17%)
18 = 136428 (0.15%)
19 = 83420 (0.09%)
20 = 93576 (0.1%)
21 = 46885 (0.05%)
22 = 42648 (0.05%)
23 = 31118 (0.03%)
24 = 29999 (0.03%)
25 = 25956 (0.03%)
26 = 14798 (0.02%)
27 = 10285 (0.01%)
28 = 10245 (0.01%)
29 = 7895 (0.01%)
30 = 12573 (0.01%)
31 = 4168 (0.0%)
32 = 66017 (0.07%)
33 = 1887 (0.0%)
34 = 1422 (0.0%)
35 = 1017 (0.0%)
36 = 469 (0.0%)
37 = 250 (0.0%)
38 = 231 (0.0%)
39 = 116 (0.0%)
40 = 435 (0.0%)
41 = 45 (0.0%)
42 = 57 (0.0%)
43 = 14 (0.0%)
44 = 47 (0.0%)
45 = 5 (0.0%)
46 = 13 (0.0%)
47 = 1 (0.0%)
48 = 16 (0.0%)
49 = 14 (0.0%)
50 = 21 (0.0%)
51 = 2 (0.0%)
52 = 1 (0.0%)
53 = 2 (0.0%)
54 = 22 (0.0%)
55 = 1 (0.0%)
56 = 3 (0.0%)
57 = 1 (0.0%)
58 = 2 (0.0%)
60 = 10 (0.0%)
61 = 3 (0.0%)
63 = 3 (0.0%)
64 = 1 (0.0%)
65 = 2 (0.0%)
66 = 9 (0.0%)
67 = 2 (0.0%)
68 = 2 (0.0%)
69 = 1 (0.0%)
70 = 1 (0.0%)
71 = 3 (0.0%)
72 = 1 (0.0%)
73 = 1 (0.0%)
74 = 1 (0.0%)
76 = 2 (0.0%)
77 = 1 (0.0%)
78 = 1 (0.0%)
79 = 3 (0.0%)
81 = 3 (0.0%)
83 = 1 (0.0%)
85 = 1 (0.0%)
86 = 1 (0.0%)
88 = 1 (0.0%)
89 = 1 (0.0%)
90 = 6 (0.0%)
92 = 3 (0.0%)
93 = 1 (0.0%)
95 = 1 (0.0%)
96 = 16 (0.0%)
97 = 1 (0.0%)
98 = 3 (0.0%)
99 = 2 (0.0%)
100 = 1 (0.0%)
104 = 1 (0.0%)
107 = 1 (0.0%)
108 = 1 (0.0%)
109 = 1 (0.0%)
111 = 2 (0.0%)
114 = 1 (0.0%)
119 = 1 (0.0%)
128 = 377 (0.0%)

Password length (count ordered)
8 = 25586920 (28.06%)
6 = 15660408 (17.18%)
7 = 14326554 (15.71%)
9 = 12250247 (13.44%)
10 = 11895989 (13.05%)
11 = 2604066 (2.86%)
5 = 1842546 (2.02%)
12 = 1788770 (1.96%)
4 = 1046032 (1.15%)
13 = 1014515 (1.11%)
15 = 846485 (0.93%)
14 = 709778 (0.78%)
16 = 475022 (0.52%)
3 = 247263 (0.27%)
17 = 157311 (0.17%)
18 = 136428 (0.15%)
20 = 93576 (0.1%)
19 = 83420 (0.09%)
32 = 66017 (0.07%)
1 = 54418 (0.06%)
2 = 49550 (0.05%)
21 = 46885 (0.05%)
22 = 42648 (0.05%)
23 = 31118 (0.03%)
24 = 29999 (0.03%)
25 = 25956 (0.03%)
26 = 14798 (0.02%)
30 = 12573 (0.01%)
27 = 10285 (0.01%)
28 = 10245 (0.01%)
29 = 7895 (0.01%)
31 = 4168 (0.0%)
33 = 1887 (0.0%)
34 = 1422 (0.0%)
35 = 1017 (0.0%)
36 = 469 (0.0%)
40 = 435 (0.0%)
128 = 377 (0.0%)
37 = 250 (0.0%)
38 = 231 (0.0%)
39 = 116 (0.0%)
42 = 57 (0.0%)
44 = 47 (0.0%)
41 = 45 (0.0%)
54 = 22 (0.0%)
50 = 21 (0.0%)
48 = 16 (0.0%)
96 = 16 (0.0%)
49 = 14 (0.0%)
43 = 14 (0.0%)
46 = 13 (0.0%)
60 = 10 (0.0%)
66 = 9 (0.0%)
90 = 6 (0.0%)
45 = 5 (0.0%)
71 = 3 (0.0%)
56 = 3 (0.0%)
92 = 3 (0.0%)
79 = 3 (0.0%)
98 = 3 (0.0%)
63 = 3 (0.0%)
61 = 3 (0.0%)
81 = 3 (0.0%)
51 = 2 (0.0%)
58 = 2 (0.0%)
65 = 2 (0.0%)
53 = 2 (0.0%)
67 = 2 (0.0%)
68 = 2 (0.0%)
76 = 2 (0.0%)
111 = 2 (0.0%)
99 = 2 (0.0%)
73 = 1 (0.0%)
72 = 1 (0.0%)
74 = 1 (0.0%)
70 = 1 (0.0%)
69 = 1 (0.0%)
77 = 1 (0.0%)
78 = 1 (0.0%)
64 = 1 (0.0%)
109 = 1 (0.0%)
114 = 1 (0.0%)
119 = 1 (0.0%)
83 = 1 (0.0%)
107 = 1 (0.0%)
85 = 1 (0.0%)
86 = 1 (0.0%)
104 = 1 (0.0%)
88 = 1 (0.0%)
89 = 1 (0.0%)
57 = 1 (0.0%)
100 = 1 (0.0%)
55 = 1 (0.0%)
93 = 1 (0.0%)
52 = 1 (0.0%)
95 = 1 (0.0%)
47 = 1 (0.0%)
97 = 1 (0.0%)
108 = 1 (0.0%)


One to six characters = 18900217 (20.73%)
One to eight characters = 58813691 (64.5%)
More than eight characters = 32364762 (35.5%)

Only lowercase alpha = 25300978 (27.75%)
Only uppercase alpha = 468686 (0.51%)
Only alpha = 25769664 (28.26%)
Only numeric = 9526597 (10.45%)

First capital last symbol = 72550 (0.08%)
First capital last number = 2427417 (2.66%)

Single digit on the end = 13167140 (14.44%)
Two digits on the end = 14225600 (15.6%)
Three digits on the end = 6155272 (6.75%)

Last number
0 = 4370023 (4.79%)
1 = 12711477 (13.94%)
2 = 5661520 (6.21%)
3 = 6642438 (7.29%)
4 = 3951994 (4.33%)
5 = 4028739 (4.42%)
6 = 4295485 (4.71%)
7 = 4055751 (4.45%)
8 = 3596305 (3.94%)
9 = 4240044 (4.65%)


Last digit
1 = 12711477 (13.94%)
3 = 6642438 (7.29%)
2 = 5661520 (6.21%)
0 = 4370023 (4.79%)
6 = 4295485 (4.71%)
9 = 4240044 (4.65%)
7 = 4055751 (4.45%)
5 = 4028739 (4.42%)
4 = 3951994 (4.33%)
8 = 3596305 (3.94%)

Last 2 digits (Top 20)
23 = 2831841 (3.11%)
12 = 1570044 (1.72%)
11 = 1325293 (1.45%)
01 = 1036629 (1.14%)
56 = 1013453 (1.11%)
10 = 909480 (1.0%)
00 = 897526 (0.98%)
13 = 854165 (0.94%)
09 = 814370 (0.89%)
21 = 812093 (0.89%)
22 = 709996 (0.78%)
89 = 706074 (0.77%)
07 = 675624 (0.74%)
34 = 627901 (0.69%)
08 = 626722 (0.69%)
69 = 572897 (0.63%)
88 = 557667 (0.61%)
77 = 557429 (0.61%)
14 = 539236 (0.59%)
45 = 530671 (0.58%)

Last 3 digits (Top 20)
123 = 2221895 (2.44%)
456 = 807267 (0.89%)
234 = 434714 (0.48%)
009 = 326602 (0.36%)
789 = 318622 (0.35%)
000 = 316149 (0.35%)
345 = 295463 (0.32%)
111 = 263894 (0.29%)
101 = 225151 (0.25%)
007 = 222062 (0.24%)
321 = 221598 (0.24%)
666 = 201995 (0.22%)
010 = 192798 (0.21%)
777 = 164454 (0.18%)
011 = 141015 (0.15%)
001 = 138363 (0.15%)
008 = 137610 (0.15%)
999 = 129483 (0.14%)
987 = 126046 (0.14%)
678 = 123301 (0.14%)

Last 4 digits (Top 20)
3456 = 727407 (0.8%)
1234 = 398622 (0.44%)
2009 = 298108 (0.33%)
2345 = 269935 (0.3%)
6789 = 258059 (0.28%)
1111 = 148964 (0.16%)
2010 = 140684 (0.15%)
2008 = 111014 (0.12%)
2000 = 110456 (0.12%)
0000 = 108767 (0.12%)
2011 = 103328 (0.11%)
5678 = 102873 (0.11%)
4567 = 94964 (0.1%)
2007 = 94172 (0.1%)
4321 = 92849 (0.1%)
3123 = 92104 (0.1%)
1990 = 87828 (0.1%)
1987 = 87142 (0.1%)
2006 = 86640 (0.1%)
1991 = 86574 (0.09%)

Last 5 digits (Top 20)
23456 = 721648 (0.79%)
12345 = 261734 (0.29%)
56789 = 252914 (0.28%)
11111 = 116179 (0.13%)
45678 = 96011 (0.11%)
34567 = 90262 (0.1%)
23123 = 84654 (0.09%)
00000 = 81056 (0.09%)
54321 = 73623 (0.08%)
67890 = 66301 (0.07%)
21212 = 28777 (0.03%)
23321 = 28767 (0.03%)
77777 = 28572 (0.03%)
22222 = 27754 (0.03%)
55555 = 26081 (0.03%)
66666 = 25872 (0.03%)
56123 = 21354 (0.02%)
88888 = 19025 (0.02%)
99999 = 18288 (0.02%)
12233 = 16677 (0.02%)

Character sets
loweralphanum: 47681569 (52.29%)
loweralpha: 25300978 (27.75%)
numeric: 9526597 (10.45%)
mixedalphanum: 3075964 (3.37%)
loweralphaspecial: 1721507 (1.89%)
loweralphaspecialnum: 1167596 (1.28%)
mixedalpha: 981987 (1.08%)
upperalphanum: 652292 (0.72%)
upperalpha: 468686 (0.51%)
mixedalphaspecialnum: 187283 (0.21%)
specialnum: 81096 (0.09%)
mixedalphaspecial: 53882 (0.06%)
upperalphaspecialnum: 39668 (0.04%)
upperalphaspecial: 18674 (0.02%)
special: 14657 (0.02%)

Character set ordering
stringdigit: 41059315 (45.03%)
allstring: 26751651 (29.34%)
alldigit: 9526597 (10.45%)
othermask: 4189226 (4.59%)
digitstring: 4075593 (4.47%)
stringdigitstring: 2802490 (3.07%)
stringspecial: 792852 (0.87%)
digitstringdigit: 716311 (0.79%)
stringspecialstring: 701378 (0.77%)
stringspecialdigit: 474579 (0.52%)
specialstring: 45323 (0.05%)
specialstringspecial: 28480 (0.03%)
allspecial: 14657 (0.02%)

[The post Your Password is Already In the Wild, You Did not Know? has been first published on /dev/random]

May 18, 2017

There is one significant trend that I have noticed over and over again: the internet's continuous drive to mitigate friction in user experiences and business models.

Since the internet's commercial debut in the early 90s, it has captured success and upset the established order by eliminating unnecessary middlemen. Book stores, photo shops, travel agents, stock brokers, bank tellers and music stores are just a few examples of the kinds of middlemen who have been eliminated by their online counterparts. The act of buying books, printing photos or booking flights online alleviates the friction felt by consumers who must stand in line or wait on hold to speak to a customer service representative.

Rather than negatively describing this evolution as disintermediation or taking something away, I believe there is value in recognizing that the internet is constantly improving customer experiences by reducing friction from systems — a process I like to call "friduction".

Open Source and cloud

Over the past 15 years, I have observed Open Source and cloud-computing solutions remove friction from legacy approaches to technology. Open Source takes the friction out of the technology evaluation and adoption process; you are not forced to get a demo or go through a sales and procurement process, or deal with the limitations of a proprietary license. Cloud computing also took off because it offers friduction; with cloud, companies pay for what they use, avoid large up-front capital expenditures, and gain speed-to-market.

Cross-channel experiences

There is a reason why Drupal's API-first initiative is one of the topics I've talked and written the most about in 2016; it enables Drupal to "move beyond the page" and integrate with different user engagement systems that can eliminate inefficiencies and improve the user experience of traditional websites.

We're quickly headed to a world where websites are evolving into cross-channel experiences, which include push notifications, conversational UIs, and more. Conversational UIs, such as chatbots and voice assistants, will prevail because they improve and redefine the customer experience.

Personalization and contextualization

In the 90s, personalization meant that websites could address authenticated users by name. I remember the first time I saw my name appear on a website; I was excited! Obviously personalization strategies have come a long way since the 90s. Today, websites present recommendations based on a user's most recent activity, and consumers expect to be provided with highly tailored experiences. The drive for greater personalization and contextualization will never stop; there is too much value in removing friction from the user experience. When a commerce website can predict what you like based on past behavior, it eliminates friction from the shopping process. When a customer support website can predict what question you are going to ask next, it is able to provide a better customer experience. This is not only useful for the user, but also for the business. A more efficient user experience will translate into higher sales, improved customer retention and better brand exposure.

To keep pace with evolving user expectations, tomorrow's digital experiences will need to deliver more tailored, and even predictive customer experiences. This will require organizations to consume multiple sources of data, such as location data, historic clickstream data, or information from wearables to create a fine-grained user context. Data will be the foundation for predictive analytics and personalization services. Advancing user privacy in conjunction with data-driven strategies will be an important component of enhancing personalized experiences. Eventually, I believe that data-driven experiences will be the norm.

At Acquia, we started investing in contextualization and personalization in 2014, through the release of a product called Acquia Lift. Adoption of Acquia Lift has grown year over year, and we expect it to increase for years to come. Contextualization and personalization will become more pervasive, especially as different systems of engagements, big data, the internet of things (IoT) and machine learning mature, combine, and begin to have profound impacts on what the definition of a great user experience should be. It might take a few more years before trends like personalization and contextualization are fully adopted by the early majority, but we are patient investors and product builders. Systems like Acquia Lift will be of critical importance and premiums will be placed on orchestrating the optimal customer journey.


The history of the web dictates that lower-friction solutions will surpass what came before them because they eliminate inefficiencies from the customer experience. Friduction is a long-term trend. Websites, the internet of things, augmented and virtual reality, conversational UIs — all of these technologies will continue to grow because they will enable us to build lower-friction digital experiences.

Today I was attempting to update a local repository when SSH complained about a changed fingerprint, something like the following:

Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
Please contact your system administrator.
Add correct host key in /home/user/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/user/.ssh/known_hosts:9
ECDSA host key for has changed and you have requested strict checking.
Host key verification failed.

I checked if the host was changed recently, or the alias through which I connected switched host, or the SSH key changed. But that wasn't the case. Or at least, it wasn't the case recently, and I distinctly remember connecting to the same host two weeks ago.

Now, what happened I don't know yet, but I do know I didn't want to connect until I had reviewed the received SSH key fingerprint. I obtained the fingerprint from the administrators (who graciously documented it on the wiki)...

... only to realize that the documented fingerprints are MD5 hashes (rendered in hexadecimal), whereas the ssh command shows the fingerprint in base64-encoded SHA256 by default.

Luckily, a quick search revealed this superuser post which told me to connect to the host using the FingerprintHash md5 option:

~$ ssh -o FingerprintHash=md5

The result is SSH displaying the MD5 hashed fingerprint which I can now validate against the documented one. Once I validated that the key is the correct one, I accepted the change and continued with my endeavour.
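As an aside, you don't even need to start a connection to compare the two formats: ssh-keygen can hash a key file directly. A minimal sketch (assuming OpenSSH 6.8 or later, which added the -E option; the key path here is a throwaway demo key, not a real host key):

```shell
# Generate a throwaway key purely to illustrate the two fingerprint formats
ssh-keygen -t rsa -b 2048 -f /tmp/demo_key -N '' -q
# Default output: SHA256 fingerprint, base64-encoded
ssh-keygen -l -f /tmp/demo_key.pub
# MD5 output: colon-separated hex pairs, matching older documentation
ssh-keygen -l -E md5 -f /tmp/demo_key.pub
```

You can point the same command at a line extracted from ~/.ssh/known_hosts to hash the key you already have on record.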

I later discovered (or, more precisely, have strong assumptions) that I had an old elliptic curve key registered in my known_hosts file, which was not used for the communication for quite some time. I recently re-enabled elliptic curve support in OpenSSH (with Gentoo's USE="-bindist") which triggered the validation of the old key.

I published the following diary on “My Little CVE Bot“.

The massive spread of the WannaCry ransomware last Friday was another good proof that many organisations still fail to patch their systems. Everybody admits that patching is a boring task. There are many constraints that make this process very difficult to implement and… apply! That’s why any help is welcome to know what to patch and when… [Read more]

[The post [SANS ISC] My Little CVE Bot was first published on /dev/random]

May 17, 2017

  • First rule. You must understand the rules of scuba diving. If you don’t know or understand the rules of scuba diving, go to the second rule.
  • The second rule is that you never dive alone.
  • The third rule is that you always keep close enough to each other to perform a rescue of any kind.
  • The fourth rule is that you signal each other and therefore know each other’s signals. Underwater, communication is key.
  • The fifth rule is that you tell the others, for example, when you don’t feel well. The others want to know when you emotionally don’t feel well. Whenever you are insecure, you tell them. This is hard.
  • The sixth rule is that you don’t violate earlier agreed upon rules.
  • The seventh rule is that the rules above will be eclipsed the moment any form of panic occurs; you restore them using rationalism first, pragmatism next, but emotional feelings last. No matter what.
  • The eighth rule is that the seventh rule is key to survival.

These rules make scuba diving an excellent learning school for software development project managers.

May 16, 2017

I have to admit it, I’m not the biggest fan of Java. But when they asked me to prepare a talk for first-year students who are currently learning to code in Java, I decided it was time to challenge some of my prejudices. As I selected continuous integration as the topic of choice, I started out by looking at all available tools to quickly set up a reliable Java project. Having played with dotnet core the past months, I was looking for a tool that could do a bit of the same: a straightforward CLI interface that can create a project out of the box to mess around with. Maven proved to be of little help, but gradle turned out to be exactly what I was looking for. Great, I gained some faith.

It’s only while creating my slides and looking for tooling that can be used specifically for Java, that I had an epiphany. What if it is possible to create an entire developer environment using docker? So no need for local dependencies like linting tools or gradle. No need to mess with an IDE to get everything set up. And, no more “it works on my machine”. The power and advantages of a CI tool, straight onto your own computer.

A quick search on Google points us to gradle’s own Alpine linux container. It comes with JDK8 out of the box, exactly what we’re looking for. You can create a new Java application with a single command:

docker run -v=$(pwd):/app --workdir=/app gradle:alpine gradle init --type java-application

This starts a container, creates a volume linked to your current working directory and initializes a brand new Java application using gradle init --type java-application. As I don’t feel like typing those commands all the time, I created a makefile to help me build and debug the app. Yes, you can debug the app while it’s running in the container. Java supports remote debugging out of the box. Any modern IDE that supports Java, has support for remote debugging. Simply run the make debug command and attach to the remote debugging session on port 1044.

ROOT_DIR:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))

build:
    docker run --rm -v=${ROOT_DIR}:/app --workdir=/app gradle:alpine gradle clean build

debug: build
    docker run --rm -v=${ROOT_DIR}:/app -p 1044:1044 --workdir=/app gradle:alpine java -classpath /app/build/classes/main -verbose -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=1044 App

Now that we have a codebase that uses the same tools to build, run and debug, we need to bring our coding standard to a higher level. First off we need a linting tool. Traditionally, people look at checkstyle when it comes to Java. And while that could be fine for you, I found that tool rather annoying to set up. XML is not something I like to mess with other than to create UI, so seeing this verbose config set me back. There simply wasn’t time to look at that. Even with the 2 different style guides, it would still require a bit of tweaking to get everything right and make the build pass.

As it turns out, there are other tools out there which feel a bit more 21st century. One of those is coala. Now, coala can be used as a linting tool on a multitude of languages, not just Java, so definitely take a look at it, even if you’re not into Java yourself. It’s a Python-based tool which has a lot of neat little bears that each perform a check. The config is a breeze, as it’s a simple INI-style file, and they provide a container so you can run the checks in an isolated environment. All in all, exactly what we’re looking for.

Let’s extend our makefile to run coala:

validate:
    docker run --rm -v=${ROOT_DIR}:/app --workdir=/app coala/base coala --ci -V

I made sure to enable verbose logging, simply to be able to illustrate the tool to students. Feel free to disable that. You can easily control what coala needs to verify by creating a .coafile in the root of the repository. One of the major advantages to use coala over anything else, is that it can do both simple linting checks as well as full on static code analysis.

Let’s have a look at the settings I used to illustrate its power.

[Default]
files = src/**/*.java
language = java

[SpaceConsistency]
bears = SpaceConsistencyBear
use_spaces = True

[Keywords]
bears = KeywordBear

[PMD]
bears = JavaPMDBear
check_optimizations = true
check_naming = false
You can start out by defining a default. In my case, I’m telling coala to look for .java files which are written using Java. There are three bears being used. SpaceConsistencyBear, who will check for spaces and not tabs. KeywordBear, who dislikes //TODO comments in code, and JavaPMDBear, who invokes PMD to do some static code analysis. In the example, I had to set check_naming = false, otherwise I would have lost a lot of time fixing those errors (mostly due to my own lack of Java knowledge).

Now, whenever I want to validate my code and enforce certain rules for me and my team, I can use coala to achieve this. Simply run make validate and it will start the container and invoke coala. At this point, we can setup the CI logic in our makefile by simply combining the two commands.

ci: validate build

The command make ci will invoke coala and, if all goes well, use gradle to build and test the project. As a cherry on top, I also included test coverage. Using Jacoco, you can easily set up rules to fail the build when the coverage drops below a certain threshold. The tool is integrated directly into gradle and provides everything you need out of the box; simply add the following lines to your build.gradle file. This way, the build will fail if the coverage drops below 50%.

apply plugin: 'jacoco'

jacocoTestReport {
    reports {
        xml.enabled true
        html.enabled true
    }
}

jacocoTestCoverageVerification {
    violationRules {
        rule {
            limit {
                minimum = 0.5
            }
        }
    }
}

check.dependsOn jacocoTestCoverageVerification

Make sure to edit the build step in the makefile to also include Jacoco.

build:
    docker run --rm -v=${ROOT_DIR}:/app --workdir=/app gradle:alpine gradle clean build jacocoTestReport

The only thing we still need to do is select a CI service of choice. I made sure to add examples for both circleci and travis, each of which only require docker and an override to use our makefile instead of auto-detecting gradle and running that. The way we set up this project allows us to easily switch CI when we need to, which is not all that strange given the lifecycle of a software project. The tools we choose when we start out, might be selected to fit the needs at the time of creation, but nothing assures us that will stay true forever. Designing for change is not something we need to do in code alone, it has a direct impact on everything, so expect things to change and your assumptions to be challenged.
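As a sketch of what such an override looks like (the actual build files live in the linked repo; this example is written from memory against Travis CI's 2017-era container docs, so treat the keys as illustrative), a Travis configuration that delegates everything to the makefile could be as small as:

```yaml
# .travis.yml (sketch): run inside docker and delegate the pipeline to make
sudo: required
services:
  - docker
script:
  - make ci
```

Because the whole pipeline lives in the makefile, the CircleCI equivalent is just as short: the CI service only needs docker and a single `make ci` invocation.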

Have a look at the source code for all the info and the build files for the two services. Enjoy!

Source code

May 15, 2017

It's been a while since I thought about this design, but I finally had time to implement it the proper way, and "just in time", as I recently needed to migrate our Foreman instance to another host (from CentOS 6 to CentOS 7).

Within the CentOS Infra, we use Foreman as an ENC for our (multiple) Puppet environments. For full automation between configuration management and monitoring, you need some "glue". The idea is that whatever you describe at the configuration management level should be authoritative and should automatically configure the monitoring solution you have in place in your infra.

In our case, that means that we have Foreman/puppet on one side, and Zabbix on the other side. Let's see how we can "link" the two sides.

What I've seen so far is that people use exported resources on each node, store them in PuppetDB, and then reapply all those resources on the monitoring node. The problem with such a solution is that it's "expensive" and, when one thinks about it, a little bit strange to export the "knowledge" from Foreman into another DB, and then let puppet compile a huge catalog on the monitoring side, even if nothing changed.

Another issue is that in our Zabbix setup we also have some nodes that aren't really managed by Foreman/puppet (but by other automation, around Ansible), so I had to use an intermediate step that other tools can also use for the same purpose.

The other reason is that I admit I'm a fan of "event driven" configuration changes, so my idea was:

  • update a host in Foreman (or groups of hosts, etc)
  • publish that change on a secure network through a message queue (asynchronously, so that it doesn't slow down the foreman update operation itself)
  • let Zabbix server know that change and apply it (like linking a template to a host)

So the good news is that it can be done really easily with several components:

Here is a small overview of the process:

[diagram: Foreman -> MQTT -> Zabbix]

Foreman hooks

Setting up foreman hooks is really easy: just install the package itself (tfm-rubygem-foreman_hooks.noarch), read the documentation, and then create your scripts. There are some examples for Bash and Python in the examples directory, but basically you just need to place some scripts in specific places. In my case I wanted to "trigger" an event on a node update (like adding a puppet class, or a variable/parameter change), so I just had to place it under /usr/share/foreman/config/hooks/host/managed/update/.

One little remark though: if you add a new file, don't forget to restart foreman itself, so that it picks up the hook file; otherwise it will be ignored and never run.
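To make the shape of such a hook concrete, here is a minimal sketch. The topic name, broker host, and payload shape are all assumptions (the real script isn't shown in the post), and the mosquitto_pub call is echoed as a dry run rather than executed:

```shell
# Hypothetical hook sketch; a real script dropped under
# /usr/share/foreman/config/hooks/host/managed/update/ would call mosquitto_pub itself.
publish_update() {
  event="$1"; payload="$2"
  # Dry run: print the publish command (in a real hook, remove the leading echo)
  echo mosquitto_pub -h mqtt.example.org -t "foreman/host/${event}" -m "$payload"
}

publish_update update '{"host":"web01.example.org"}'
```

Foreman hooks receive the event and object name as arguments and the object itself as JSON on stdin, so a real version would read the payload from stdin instead of taking it as a parameter.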


Mosquitto

Mosquitto itself is available in your favorite rpm repo, so installing it is a breeze. The reason I selected mosquitto is that it's very lightweight (package size is under 200Kb), and it supports TLS and ACLs out of the box.

For an introduction to MQTT/Mosquitto, I'd suggest you read Jan-Piet Mens' dedicated blog post about it. I even admit that I discovered it by attending one of his talks on the topic, back in the day :-)


zabbix-cli

While one can always talk to the Zabbix API directly, I found it useful to use a tool I was already using for various tasks around Zabbix: zabbix-cli. For people interested in using it on CentOS 6 or 7, I built the packages and they are available on CBS.

So I plumbed it into a systemd unit file that subscribes to a specific MQTT topic, parses the needed information (like the hostname and the zabbix templates to link, unlink, etc) and then updates that in Zabbix itself (from the log output):

[+] 20170516-11:43 :  Adding zabbix template "Template CentOS - https SSL Cert Check External" to host "" 
[Done]: Templates Template CentOS - https SSL Cert Check External ({"templateid":"10105"}) linked to these hosts: ({"hostid":"10174"})

Cool, so now I don't have to worry about forgetting to tie a zabbix template to a host, as it's now done automatically. Needless to say, the deployment of those tools was of course automated and coming from Puppet/Foreman :-)
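The systemd unit itself isn't shown in the post. As a rough sketch (the unit name, script path, and description here are hypothetical), it could look like:

```ini
# Hypothetical: /etc/systemd/system/mqtt2zabbix.service
[Unit]
Description=Bridge Foreman MQTT events to Zabbix
After=network-online.target

[Service]
# Hypothetical bridge script: subscribes via mosquitto_sub and calls zabbix-cli
ExecStart=/usr/local/bin/mqtt2zabbix.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With Restart=on-failure, systemd keeps the subscriber alive across broker hiccups, which matters for a long-lived MQTT subscription.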

For most of the history of the web, the website has been the primary means of consuming content. These days, however, with the introduction of new channels each day, the website is increasingly the bare minimum. Digital experiences can mean anything from connected Internet of Things (IoT) devices, smartphones, chatbots, and augmented and virtual reality headsets to so-called zero user interfaces, which lack the traditional interaction patterns we're used to. More and more, brands are trying to reach customers through browserless experiences and push-based, not pull-based, content, often without accessing the website at all.

Last year, we launched a new initiative called Acquia Labs, our research and innovation lab, part of the Office of the CTO. Acquia Labs aims to link together the new realities in our market, our customers' needs in coming years, and the goals of Acquia's products and open-source efforts in the long term. In this blog post, I'll update you on what we're working on at the moment, what motivates our lab, and how to work with us.

Alexa, ask GeorgiaGov

One of the Acquia Labs' most exciting projects is our ongoing collaboration with GeorgiaGov Interactive. Through an Amazon Echo integration with the Drupal website, citizens can ask their government questions. Georgia residents will be able to find out how to apply for a fishing license, transfer an out-of-state driver's license, and register to vote just by consulting Alexa, which will also respond with sample follow-up questions to help the user move forward. It's a good example of how conversational interfaces can change civic engagement. Our belief is that conversational content and commerce will come to define many of the interactions we have with brands.

The state of Georgia has always been on the forefront of web accessibility. For example, from 2002 until 2006, Georgia piloted a time-limited text-to-speech telephony service which would allow website information and popular services like driver's license renewal to be offered to citizens. Today, it publishes accessibility standards and works hard to make all of its websites accessible for users of assistive devices. This Alexa integration for Georgia will continue that legacy by making important information about working with state government easy for anyone to access.

And as a testament to the benefits of innovation in open source and our commitment to open-source software, Acquia Labs backported the Drupal 8 module for Amazon Echo to Drupal 7.

Here's a demo video showing an initial prototype of the Alexa integration:

Shopping with chatbots

In addition to physical devices like the Amazon Echo, Acquia Labs has also been thinking about what is ahead for chatbots, another important component of the conversational web. Unlike in-home devices, chatbots are versatile because they can be used across multiple channels, whether on a native mobile application or a desktop website.

The Acquia Labs team built a chatbot demonstrating an integration with the inventory system and recipe collection available on the Drupal website of an imaginary grocery store. In this example, a shopper can interact with a branded chatbot named "Freshbot" to accomplish two common tasks when planning an upcoming barbecue.

First, the user can use the chatbot to choose the best recipes from a list of recommendations with consideration for number of attendees, dietary restrictions, and other criteria. Second, the chatbot can present a shopping list with correct quantities of the ingredients she'll need for the barbecue. The ability to interact with a chatbot assistant rather than having to research and plan everything on your own can make hosting a barbecue a much easier and more efficient experience.

Check out our demo video, "Shopping with chatbots", below:

Collaborating with our customers

Many innovation labs are able to work without outside influence or revenue targets by relying on funding from within the organization. But this can potentially create too much distance between the innovation lab and the needs of the organization's customers. Instead, Acquia Labs explores new ideas by working on jointly funded projects for our clients.

I think this model for innovation is a good approach for the next generation of labs. This vision allows us to help our customers stake ground in new territory while also moving our own internal progress forward. For more about our approach, check out this video from a panel discussion with our Acquia Labs lead Preston So, who introduced some of these ideas at SXSW 2017.

If you're looking at possibilities beyond what our current offerings are capable of today, if you're seeking guidance and help to execute on your own innovation priorities, or if you have a potential project that interests you but is too forward-looking right now, Acquia Labs can help.

Special thanks to Preston So for contributions to this blog post and to Nikhil Deshpande (GeorgiaGov Interactive) and ASH Heath for feedback during the writing process.


If you're in tech, you will have heard about the WannaCry/WannaCrypt ransomware doing the rounds. The infection started on Friday May 12th 2017 by exploiting MS17-010, a Windows SMB file-sharing vulnerability. The worm exploited this known vulnerability, installed a cryptolocker and extorted the owner of the Windows machine to pay ransom to get the files decrypted.

As far as worms go, this one went viral at an unprecedented scale.

But there are some design decisions in this cryptolocker that prevent it from being much worse. This post is a thought exercise, the next vulnerability will probably implement one of these methods. Make sure you're prepared.

Time based encryption

This WannaCry ransomware found the security vulnerability, installed the cryptolocker and immediately started encrypting the files.

Imagine the following scenario:

  • Day 1: worm goes round and infects vulnerable SMB, installs backdoor, keeps quiet, infects other machines
  • Day 14: worm activates itself, starts encrypting files

With WannaCrypt, it took a few hours to reach world-scale infections, alerting everyone and their grandmother that something big was going on. Mainstream media picked up on it. Train stations showed cryptolocker screens. Everyone started patching. What if the worm gets a few days head start?

By keeping quiet, the attacker risks getting caught, but in many cases this can be avoided by excluding known IPv4 networks for banks or government organizations. How many small businesses or large organizations do you think would notice a sudden extra running .exe in the background? Not enough to trigger world-wide coverage, I bet.

Self-destructing files

A variation on the scenario above:

  • Day 1: worm goes round, exploits SMB vulnerability, encrypts each file, but still allows files to remain opened (1)
  • Day 30: worm activates itself, removes decryption key for file access and prompts for payment

How are your back-ups at that point? All files on the machine have some kind of hidden time bomb in them. Every version of that file you have in back-up is affected. The longer they can keep that hidden, the bigger the damage.

More variations of this exist, with Excel or VBA macros etc., and they all boil down to: modify the file, render it unusable unless proper identification is shown.

(1) This should be possible with shortcuts to the files, first opening some kind of wrapper script to decrypt the files before they launch. The decryption key is stored in memory and re-requested from the Command & Control servers whenever the machine reboots.

Extortion with your friends

The current scheme is: your files get encrypted, you can pay to get your files back.

What if it's not your own files you're responsible for? What if they are the files of your colleagues, family or friends? What if you had to pay $300 to recover the files of someone you know?

Peer pressure works, especially if the blame angle is played. It's your fault someone you know got infected. Do you feel responsible at that point? Would that make you pay?

From a technical POV, it's tricky but not impossible to identify known associates of a victim. This could only happen at a smaller scale, but it might yield bigger rewards.

Cryptolocker + Windows Update DDoS?

Roughly 200,000 affected Windows PCs have been counted online. There are probably a lot more that haven't made it into the online reports yet. That's quite a few PCs to have control over, as an attacker.

The media is now jumping on the news, urging everyone to update. What if the 200k infected machines were to launch an effective DDoS against the Windows Update servers? With everyone trying to update, the pool of possible targets shrinks every hour.

If you could effectively take down the means with which users can protect themselves, you can create bigger chaos and a bigger market to infect.

The next cryptolocker isn't going to be "just" a cryptolocker; in all likelihood it'll combine its encryption capabilities with even more damaging means.

Stay safe

How to prevent any of these?

  1. Enable auto-updates on all your systems (!!)
  2. Have frequent back-ups, store them long enough

Want more details? Check out my earlier post: Staying Safe Online – A short guide for non-technical people.

The post Ways in which the WannaCry ransomware could have been much worse appeared first on

May 14, 2017

What is the meaning of life? Why are there living beings in the universe rather than just inert matter? For those of you who have already asked yourselves these questions, I have some good news and some bad news.

The good news is that science may have found an answer.

The bad news is that you are not going to like it.

The imperfections of the big bang

If the big bang had been a perfect event, the universe would be uniform and smooth today. Instead, imperfections arose.

Under the fundamental forces, these imperfections clumped together until they formed stars and planets made of matter.

If today we are not a perfectly smooth soup of atoms but solid beings on a planet surrounded by vacuum, it is thanks to these imperfections!

The law of entropy

Thanks to thermodynamics, we have understood that nothing is lost and nothing is created. The energy of a system is constant. To cool its interior, a fridge necessarily has to heat the outside. The energy of the universe is and will therefore remain constant.

The same is not true of entropy!

Put simply, entropy can be seen as a "quality of energy": the higher the entropy, the less usable the energy.

For example, if you place a boiling cup of tea in a very cold room, the entropy of the system is low. Over time, the cup of tea cools, the room warms up, and the entropy increases, becoming maximal when the cup and the room reach the same temperature. This very intuitive phenomenon may be due to quantum entanglement and may be the basis of our perception of the flow of time.

For an outside observer, the quantity of energy in the room has not changed. The average temperature of the whole is still the same. Yet something has been lost: the energy is no longer usable.

It would have been possible, for example, to use the fact that the cup of tea warms the surrounding air to drive a turbine and generate electricity. That is no longer possible once the cup and the room are at the same temperature.
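The tea example can be made quantitative. A minimal sketch, assuming a small amount of heat $Q$ flows from the cup at temperature $T_{\text{hot}}$ to the room at $T_{\text{cold}}$ (with $T_{\text{cold}} < T_{\text{hot}}$, temperatures in kelvin):

```latex
\Delta S_{\text{total}}
  = \underbrace{-\frac{Q}{T_{\text{hot}}}}_{\text{cup}}
  + \underbrace{\frac{Q}{T_{\text{cold}}}}_{\text{room}}
  = Q\left(\frac{1}{T_{\text{cold}}} - \frac{1}{T_{\text{hot}}}\right) > 0
```

The total entropy rises throughout the transfer, and the flow stops exactly when the two temperatures are equal, which is when entropy is maximal.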

Without an external supply of energy, every system will see its entropy increase. The same therefore goes for the universe: unless the universe contracts under its own weight, the stars will inevitably cool down and go out like the cup of tea. The universe will inexorably become a perfect continuum of constant temperature. This is known as the "Heat Death" of the universe.

The appearance of life

Life seems to be an exception. After all, aren't we complex and highly ordered organisms, which implies very low entropy? How can we explain the appearance of life, and thus of elements with lower entropy than their environment, in a universe whose entropy keeps increasing?

Jeremy England, a physicist at MIT, offers a new and particularly original answer: life may simply be the most efficient way to dissipate heat, and therefore to increase entropy.

On a planet like Earth, atoms and molecules are constantly bombarded with strong, usable energy: sunlight. This creates a situation of very low entropy.

Naturally, the atoms then organize themselves to dissipate that energy. Physically, the most efficient way to dissipate the received energy is to reproduce. By reproducing, matter creates entropy.

The first molecule capable of such a feat, RNA, was the first step towards life. The mechanisms of natural selection favouring reproduction then did the rest.

According to Jeremy England, life would be mechanically inevitable provided there is enough energy.

Humanity in the service of entropy

If England's theory is confirmed, it would be very bad news for humanity.

Because if the purpose of life is to maximize entropy, then what we are doing with the Earth (rampant consumption, wars, nuclear bombs) is perfectly logical. Destroying the universe as fast as possible to turn it into a soup of atoms is the very meaning of life!

The only dilemma we would then face is: should we destroy the Earth immediately, or manage to develop far enough to bring destruction to the rest of the universe?

In any case, the ultimate goal of life would be to make the universe perfect, bland, uniform. To destroy itself.

What is particularly distressing is that, seen from this angle, humanity seems to be succeeding incredibly well at it. Far too well.


Photo by Bardia Photography.

Enjoyed this read? Support the author on Tipeee, Patreon, Paypal or Liberapay. Even a symbolic donation makes all the difference! Let's meet afterwards on Facebook, Medium, Twitter or Mastodon.

This text is published under the CC-By BE licence.

May 12, 2017

Updated: 20170419: gnome-shell extension browser integration.
Updated: 20170420: natural scrolling on X instead of Wayland.
Updated: 20170512: better support for multi monitor setups.


Mark Shuttleworth, founder of Ubuntu and Canonical, dropped a bombshell: Ubuntu drops Unity 8 and, by extension, also the Mir graphical server on the desktop. Starting from the 18.04 release, Ubuntu will use Gnome 3 as the default desktop environment.

Sadly, the desktop environment used by millions of Ubuntu users, Unity 7, has no path forward now. Unity 7 runs on the X.org graphical stack, while the Linux world (including Ubuntu now) is slowly but surely moving to Wayland (it will be the default on Ubuntu 18.04 LTS). It’s clear that Unity has its detractors, and it’s true that the first releases (6 years ago!) were limited and buggy. However, today, Unity 7 is a beautiful and functional desktop environment. I happily use it at home and at work.

Soon-to-be-dead code is dead code, so even as a happy user I don’t see the interest in staying with Unity. I prefer to make the jump now instead of sticking a year with a desktop on life support. Among other environments, I have been a full time user of CDE, Window Maker, Gnome 1.*, KDE 2.*, Java Desktop System, OpenSolaris Desktop, LXDE and XFCE. I’ll survive :).

The idea of these lines is to collect changes I felt I needed to make to a vanilla Ubuntu Gnome 3 setup to make it work for me. I made the jump 1 week before the release of 17.04, so I’ll stick with 17.04 and skip the 16.10 instructions (in short: you’ll need to install gnome-shell-extension-dashtodock from an external source instead of the Ubuntu repos).

The easiest way to use Gnome on Ubuntu is, of course, installing the Ubuntu Gnome distribution. If you’re upgrading, you can do it manually. In case you want to remove Unity and install Gnome at the same time:
$ sudo apt-get remove --purge ubuntu-desktop lightdm && sudo apt-get install ubuntu-gnome-desktop && sudo apt-get remove --purge $(dpkg -l |grep -i unity |awk '{print $2}') && sudo apt-get autoremove -y


Add Extensions:

  1. Install Gnome 3 extensions to customize the desktop experience:
    $ sudo apt-get install -y gnome-tweak-tool gnome-shell-extension-top-icons-plus gnome-shell-extension-dashtodock gnome-shell-extension-better-volume gnome-shell-extension-refreshwifi gnome-shell-extension-disconnect-wifi
  2. Install the gnome-shell integration (the one on the main Ubuntu repos does not work):
    $ sudo add-apt-repository ppa:ne0sight/chrome-gnome-shell && sudo apt-get update && sudo apt-get install chrome-gnome-shell
  3. Install the “Multi-monitor add-on” (we use an upstream version because the one on the Ubuntu repos is buggy [settings do not open]) and the “Refresh wifi” extensions. You’ll need to install a browser plugin. Refresh the page after installing the plugin.
  4. Log off in order to activate the extensions.
  5. Start gnome-tweak-tool and enable “Better volume indicator” (scroll wheel to change volume), “Dash to dock” (a more Unity-like Dock, configurable. I set the “Icon size limit” to 24 and “Behavior-Click Action” to “minimize”), “Disconnect wifi” (allow disconnection of network without setting Wifi to off), “Refresh Wifi connections” (auto refresh wifi list), “Multi monitors add-on” (add a top bar to other monitors) and “Topicons plus” (put non-Gnome icons like Dropbox and pidgin on the top menu).

Change window size and buttons:

  1. On the Windows tab of gnome-tweak-tool, I enabled the Maximise and Minimise titlebar buttons.
  2. Make the window top bars smaller if you wish. Just create ~/.config/gtk-3.0/gtk.css with these lines:
    /* From: */
    window.ssd headerbar.titlebar {
        padding-top: 4px;
        padding-bottom: 4px;
        min-height: 0;
    }

    window.ssd headerbar.titlebar button.titlebutton {
        padding: 0px;
        min-height: 0;
        min-width: 0;
    }

Disable “natural scrolling” for mouse wheel:

While I like “natural scrolling” with the touchpad (enable it in the mouse preferences), I don’t like it on the mouse wheel. To disable it only on the mouse:
$ gsettings set org.gnome.desktop.peripherals.mouse natural-scroll false

If you run Gnome on good old X instead of Wayland (e.g. for driver support or more stability while Wayland matures), you need to use libinput instead of the synaptics driver to make “natural scrolling” possible:

$ sudo mkdir -p /etc/X11/xorg.conf.d && sudo cp -rp /usr/share/X11/xorg.conf.d/40-libinput.conf /etc/X11/xorg.conf.d/

Log out.

Enable Thunderbird notifications:

For Thunderbird new mail notifications I installed the gnotifier Thunderbird add-on.

Extensions that I tried and liked but ended up not using:

  • gnome-shell-extension-pixelsaver: it feels unnatural on a 3-screen setup like I use at work, e.g. min-max-close window buttons on the main screen for windows on other screens.
  • gnome-shell-extension-hide-activities: the top menu is already mostly empty, so it’s not saving much.
  • gnome-shell-extension-move-clock: although I prefer the clock on the right, the default middle position makes sense as it integrates with notifications.

That’s it (so far 🙂 ).

Thx to @sil, @adsamalik and Jonathan Carter.

Filed under: Uncategorized Tagged: gnome, Gnome3, Linux, Linux Desktop, Thanks for all the fish, Ubuntu, unity

I published the following diary on “When Bad Guys are Pwning Bad Guys…“.

A few months ago, I wrote a diary about webshells[1] and the numerous interesting features they offer. There are plenty of web shells available, and they are easy to find and install. They are usually delivered as one big obfuscated (read: Base64, ROT13 encoded and gzip’d) PHP file that can simply be dropped on a compromised computer. Some of them look nice and professional, like the RC-Shell… [Read more]

[The post [SANS ISC] When Bad Guys are Pwning Bad Guys… has been first published on /dev/random]

May 11, 2017

Imagine we want an editor that has undo and redo capability, but where the operations on the editor are all asynchronous. This implies that undo and redo are also asynchronous operations.

We want all this to be available in QML, we want to use QFuture for the asynchronous stuff and we want to use QUndoCommand for the undo and redo capability.

But how do we do that?

First of all we will make a status object, to put the status of the asynchronous operations in (asyncundoable.h).

class AbstractAsyncStatus: public QObject
{
    Q_OBJECT

    Q_PROPERTY(bool success READ success CONSTANT)
    Q_PROPERTY(int extra READ extra CONSTANT)
public:
    AbstractAsyncStatus(QObject *parent): QObject (parent) {}
    virtual bool success() = 0;
    virtual int extra() = 0;
};

We will be passing it around as a QSharedPointer, so that lifetime management becomes easy. But typing that out is going to give us long APIs. So let’s make a typedef for that (asyncundoable.h).

typedef QSharedPointer<AbstractAsyncStatus> AsyncStatusPointer;

Now let’s make ourselves an undo command that allows us to wait for asynchronous undo and asynchronous redo. We’re combining QUndoCommand and QFutureInterface here (asyncundoable.h).

class AbstractAsyncUndoable: public QUndoCommand
{
public:
    AbstractAsyncUndoable( QUndoCommand *parent = nullptr )
        : QUndoCommand ( parent )
        , m_undoFuture ( new QFutureInterface<AsyncStatusPointer>() )
        , m_redoFuture ( new QFutureInterface<AsyncStatusPointer>() ) {}
    QFuture<AsyncStatusPointer> undoFuture()
        { return m_undoFuture->future(); }
    QFuture<AsyncStatusPointer> redoFuture()
        { return m_redoFuture->future(); }
protected:
    QScopedPointer<QFutureInterface<AsyncStatusPointer> > m_undoFuture;
    QScopedPointer<QFutureInterface<AsyncStatusPointer> > m_redoFuture;
};


Okay, let’s implement these with an example operation. First the concrete status object (asyncexample1command.h).

class AsyncExample1Status: public AbstractAsyncStatus
{
    Q_OBJECT

    Q_PROPERTY(bool example1 READ example1 CONSTANT)
public:
    AsyncExample1Status ( bool success, int extra, bool example1,
                          QObject *parent = nullptr )
        : AbstractAsyncStatus(parent)
        , m_example1 ( example1 )
        , m_success ( success )
        , m_extra ( extra ) {}
    bool example1() { return m_example1; }
    bool success() Q_DECL_OVERRIDE { return m_success; }
    int extra() Q_DECL_OVERRIDE { return m_extra; }
private:
    bool m_example1 = false;
    bool m_success = false;
    int m_extra = -1;
};

Let’s make a QUndoCommand that uses a timer to simulate asynchronous behavior. We could also use QtConcurrent’s run function to use a QThreadPool and QRunnable instances that also implement QFutureInterface, of course. Seasoned Qt developers know what I mean. For the sake of example, I wanted to illustrate that QFuture can also be used for asynchronous things that aren’t threads. We’ll use the lambda because QUndoCommand isn’t a QObject, so no easy slots. That’s the only reason (asyncexample1command.h).

class AsyncExample1Command: public AbstractAsyncUndoable
{
public:
    AsyncExample1Command(bool example1, QUndoCommand *parent = nullptr)
        : AbstractAsyncUndoable ( parent ), m_example1(example1) {}
    void undo() Q_DECL_OVERRIDE {
        m_undoFuture->reportStarted();
        QTimer *timer = new QTimer();
        timer->setSingleShot( true );
        QObject::connect(timer, &QTimer::timeout, [=]() {
            QSharedPointer<AbstractAsyncStatus> result;
            result.reset(new AsyncExample1Status ( true, 1, m_example1 ));
            m_undoFuture->reportFinished( &result );
            timer->deleteLater();
        } );
        timer->start( 1000 );
    }
    void redo() Q_DECL_OVERRIDE {
        m_redoFuture->reportStarted();
        QTimer *timer = new QTimer();
        timer->setSingleShot( true );
        QObject::connect(timer, &QTimer::timeout, [=]() {
            QSharedPointer<AbstractAsyncStatus> result;
            result.reset(new AsyncExample1Status ( true, 2, m_example1 ));
            m_redoFuture->reportFinished( &result );
            timer->deleteLater();
        } );
        timer->start( 1000 );
    }
private:
    bool m_example1;
};

Let’s now define something we get from the strategy design pattern: an editor behavior. Implementations provide an editor with all its editing behaviors (abstracteditorbehavior.h).

class AbstractEditorBehavior : public QObject
{
    Q_OBJECT
public:
    AbstractEditorBehavior( QObject *parent) : QObject (parent) {}

    virtual QFuture<AsyncStatusPointer> performExample1( bool example1 ) = 0;
    virtual QFuture<AsyncStatusPointer> performUndo() = 0;
    virtual QFuture<AsyncStatusPointer> performRedo() = 0;
    virtual bool canRedo() = 0;
    virtual bool canUndo() = 0;
};

So far so good, so let’s make an implementation that has a QUndoStack and that is therefore undoable (undoableeditorbehavior.h).

class UndoableEditorBehavior: public AbstractEditorBehavior
{
public:
    UndoableEditorBehavior(QObject *parent = nullptr)
        : AbstractEditorBehavior (parent)
        , m_undoStack ( new QUndoStack ) {}

    QFuture<AsyncStatusPointer> performExample1( bool example1 ) Q_DECL_OVERRIDE {
        AsyncExample1Command *command = new AsyncExample1Command ( example1 );
        m_undoStack->push( command ); // push() calls redo() on the command
        return command->redoFuture();
    }
    QFuture<AsyncStatusPointer> performUndo() Q_DECL_OVERRIDE {
        const AbstractAsyncUndoable *undoable =
            dynamic_cast<const AbstractAsyncUndoable *>(
                    m_undoStack->command( m_undoStack->index() - 1));
        m_undoStack->undo();
        return const_cast<AbstractAsyncUndoable*>(undoable)->undoFuture();
    }
    QFuture<AsyncStatusPointer> performRedo() Q_DECL_OVERRIDE {
        const AbstractAsyncUndoable *undoable =
            dynamic_cast<const AbstractAsyncUndoable *>(
                    m_undoStack->command( m_undoStack->index() ));
        m_undoStack->redo();
        return const_cast<AbstractAsyncUndoable*>(undoable)->redoFuture();
    }
    bool canRedo() Q_DECL_OVERRIDE { return m_undoStack->canRedo(); }
    bool canUndo() Q_DECL_OVERRIDE { return m_undoStack->canUndo(); }
private:
    QScopedPointer<QUndoStack> m_undoStack;
};

Now we only need an editor, right (editor.h)?

class Editor: public QObject
{
    Q_OBJECT
    Q_PROPERTY(AbstractEditorBehavior* editorBehavior READ editorBehavior CONSTANT)
public:
    Editor(QObject *parent = nullptr) : QObject(parent)
        , m_editorBehavior ( new UndoableEditorBehavior ) { }
    AbstractEditorBehavior* editorBehavior() { return m_editorBehavior.data(); }
    Q_INVOKABLE void example1Async(bool example1) {
        QFutureWatcher<AsyncStatusPointer> *watcher = new QFutureWatcher<AsyncStatusPointer>(this);
        connect(watcher, &QFutureWatcher<AsyncStatusPointer>::finished,
                this, &Editor::onExample1Finished);
        watcher->setFuture ( m_editorBehavior->performExample1(example1) );
    }
    Q_INVOKABLE void undoAsync() {
        if (m_editorBehavior->canUndo()) {
            QFutureWatcher<AsyncStatusPointer> *watcher = new QFutureWatcher<AsyncStatusPointer>(this);
            connect(watcher, &QFutureWatcher<AsyncStatusPointer>::finished,
                    this, &Editor::onUndoFinished);
            watcher->setFuture ( m_editorBehavior->performUndo() );
        }
    }
    Q_INVOKABLE void redoAsync() {
        if (m_editorBehavior->canRedo()) {
            QFutureWatcher<AsyncStatusPointer> *watcher = new QFutureWatcher<AsyncStatusPointer>(this);
            connect(watcher, &QFutureWatcher<AsyncStatusPointer>::finished,
                    this, &Editor::onRedoFinished);
            watcher->setFuture ( m_editorBehavior->performRedo() );
        }
    }
signals:
    void example1Finished( AsyncExample1Status *status );
    void undoFinished( AbstractAsyncStatus *status );
    void redoFinished( AbstractAsyncStatus *status );
private slots:
    void onExample1Finished() {
        QFutureWatcher<AsyncStatusPointer> *watcher =
                dynamic_cast<QFutureWatcher<AsyncStatusPointer>*> (sender());
        emit example1Finished( watcher->result().objectCast<AsyncExample1Status>().data() );
        watcher->deleteLater();
    }
    void onUndoFinished() {
        QFutureWatcher<AsyncStatusPointer> *watcher =
                dynamic_cast<QFutureWatcher<AsyncStatusPointer>*> (sender());
        emit undoFinished( watcher->result().objectCast<AbstractAsyncStatus>().data() );
        watcher->deleteLater();
    }
    void onRedoFinished() {
        QFutureWatcher<AsyncStatusPointer> *watcher =
                dynamic_cast<QFutureWatcher<AsyncStatusPointer>*> (sender());
        emit redoFinished( watcher->result().objectCast<AbstractAsyncStatus>().data() );
        watcher->deleteLater();
    }
private:
    QScopedPointer<AbstractEditorBehavior> m_editorBehavior;
};

Okay, let’s register this so it’s known in QML, and make ourselves a main function (main.cpp).

#include <QtQml>
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include "editor.h"

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    qmlRegisterType<Editor>("be.codeminded.asyncundo", 1, 0, "Editor");
    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml"))); // load the QML UI shown below
    return app.exec();
}

Now, let’s make ourselves a simple QML UI to use this with (main.qml).

import QtQuick 2.3
import QtQuick.Window 2.2
import QtQuick.Controls 1.2
import be.codeminded.asyncundo 1.0

Window {
    visible: true
    width: 360
    height: 360
    Editor {
        id: editor
        onUndoFinished: text.text = "undo"
        onRedoFinished: text.text = "redo"
        onExample1Finished: text.text = "whoohoo " + status.example1
    }
    Text {
        id: text
        text: qsTr("Hello World")
        anchors.centerIn: parent
    }
    Action {
        shortcut: "Ctrl+z"
        onTriggered: editor.undoAsync()
    }
    Action {
        shortcut: "Ctrl+y"
        onTriggered: editor.redoAsync()
    }
    Button {
        text: qsTr("Example 1")
        onClicked: editor.example1Async(true)
    }
}

You can find the sources of this complete example at github. Enjoy!

May 10, 2017

For years, Google has been offering two nice features with its platform to get more out of your email address. You can play with the “+” (plus) sign or “.” (dot) to create more email addresses linked to your primary one. Let’s take an example with John. John can share the email address “” with his friends playing soccer, or “” to register on forums talking about information security. It’s the same with dots: Google just ignores them, so “” is the same as “”. Many people use the “+” format to organize the flood of email they receive every day and automatically process it / store it in separate folders. That’s nice, but it can also be very useful to discover where an email address is being used.
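The addressing tricks described above can be sketched in a few lines. This is not Gmail’s actual code, just the documented behaviour (dots ignored, everything after “+” treated as a tag); the function name is my own.

```python
def canonical_gmail(address: str) -> str:
    """Reduce a Gmail address to the mailbox it actually delivers to."""
    local, domain = address.lower().split("@", 1)
    local = local.split("+", 1)[0]   # drop the "+tag" suffix
    local = local.replace(".", "")   # Gmail ignores dots in the local part
    return f"{local}@{domain}"

print(canonical_gmail("john.doe+forums@gmail.com"))  # johndoe@gmail.com
```

Both “+” and dot variants collapse to the same canonical mailbox, which is exactly what makes the tags traceable in leaked dumps.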

A few days ago, Troy Hunt, the owner of the service (if you don’t know it yet, just have a look and register!), announced that new massive dumps were in the wild, for a total of ~1B passwords! The new dumps are called “Exploit.In” (593M entries) and “Anti Public Combo List” (427M entries). The sources of the leaks are not clear. I grabbed a copy of the data and searched for Google “+” email addresses.

Not surprisingly, I found +28K unique accounts! I extracted the strings after the “+” sign and indexed everything in Splunk:

Gmail Tags
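The extraction itself is straightforward. Here is a rough Python equivalent of what was indexed (the helper name and the 3-character threshold mirror the post; the exact Splunk pipeline is not shown):

```python
import re
from collections import Counter

# Pull the "+tag" out of each Gmail address in a dump and count occurrences.
TAG_RE = re.compile(r"\+([^@]+)@gmail\.com", re.IGNORECASE)

def count_tags(lines):
    tags = Counter()
    for line in lines:
        for match in TAG_RE.finditer(line):
            tag = match.group(1).lower()
            if len(tag) > 3:  # keep the list useful, as in the post
                tags[tag] += 1
    return tags

sample = ["john+linkedin@gmail.com", "jane+linkedin@gmail.com", "joe+xtube@gmail.com"]
print(count_tags(sample).most_common(1))  # [('linkedin', 2)]
```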

As you can see, we recognise some known online services:

  • xtube (adult content)
  • friendster (social network)
  • filesavr (file exchange service in the cloud)
  • linkedin (social network)
  • bioware (gaming platform)

This does not mean that those platforms were breached (ok, LinkedIn was) but it can give some indicators…

Here is a dump of the top identified tags (with more than 3 characters to keep the list useful). You can download the complete CSV here.

Tag Count

[The post Identifying Sources of Leaks with the Gmail “+” Feature has been first published on /dev/random]

May 09, 2017

I’m happy to announce the immediate availability of FileFetcher 4.0.0.

FileFetcher is a small PHP library that provides an OO way to retrieve the contents of files.

What’s OO about such an interface? You can inject an implementation of it into a class, thus avoiding that the class knows about the details of the implementation, and being able to choose which implementation you provide. Calling file_get_contents does not allow changing implementation as it is a procedural/static call making use of global state.

Library number 8234803417 that does this exact thing? Probably not. The philosophy behind this library is to provide a very basic interface (FileFetcher) that while insufficient for plenty of use cases, is ideal for a great many, in particular replacing procedural file_get_contents calls. The provided implementations are to facilitate testing and common generic tasks around the actual file fetching. You are encouraged to create your own core file fetching implementation in your codebase, presumably an adapter to a library that focuses on this task such as Guzzle.

So what is in it then? The library provides two trivial implementations of the FileFetcher interface at its heart:

  • SimpleFileFetcher: Adapter around file_get_contents
  • InMemoryFileFetcher: Adapter around an array provided to its constructor (construct with [] for a “throwing fetcher”)
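The idea translates to any language. Here is a minimal, language-agnostic sketch of the same pattern in Python (the names are illustrative, not the library’s PHP API): a collaborator depends on a tiny fetcher interface, so tests can inject an in-memory implementation instead of touching the filesystem or network.

```python
from abc import ABC, abstractmethod

class FileFetcher(ABC):
    @abstractmethod
    def fetch(self, url: str) -> str: ...

class InMemoryFileFetcher(FileFetcher):
    """Adapter around a dict; construct with {} for a 'throwing fetcher'."""
    def __init__(self, files: dict):
        self._files = files
    def fetch(self, url: str) -> str:
        if url not in self._files:
            raise RuntimeError(f"Could not fetch {url}")
        return self._files[url]

class ConfigLoader:
    # Only knows the interface, not the implementation details.
    def __init__(self, fetcher: FileFetcher):
        self._fetcher = fetcher
    def load(self, url: str) -> str:
        return self._fetcher.fetch(url)

loader = ConfigLoader(InMemoryFileFetcher({"config.json": "{}"}))
print(loader.load("config.json"))  # {}
```

A procedural file_get_contents-style call offers no such seam: the dependency is hard-wired, which is exactly the problem the library addresses.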

It also provides a number of generic decorators:

Version 4.0.0 brings PHP7 features (scalar type hints \o/) and adds a few extra handy implementations. You can add the library to your composer.json (jeroen/file-fetcher) or look at the documentation on GitHub. You can also read about its inception in 2013.

This Thursday, June 15th 2017 at 7 pm, the 60th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: continuous integration with Jenkins

Theme: continuous integration | automated deployment | processes

Audience: everyone

Speaker: Dimitri Durieux (CETIC)

Location of this session: Campus technique (ISIMs) of the Haute Ecole en Hainaut, Avenue V. Maistriau, 8a, Salle Académique, 2nd building (see this map on the ISIMs website, and here on the Openstreetmap map).

Participation is free and only requires registering by name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also enjoy the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly cycle, don’t hesitate to consult the agenda and subscribe to the mailing list to systematically receive the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized on the premises of, and in collaboration with, the Mons universities and colleges involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, active in promoting free software.

Description: In order to improve resource management, virtualization and the Cloud have revolutionized the way we design and deploy applications. The complexity of IT systems has evolved considerably. It is no longer a matter of deploying and maintaining one monolithic application, but a set of services interacting with each other. Developers are therefore confronted with great complexity in testing, integrating and deploying applications.

On top of this new technical context come agile, fast and iterative development methodologies. They favor regular deployment of new versions of components incorporating new features. One must therefore test, integrate and deploy at a much more sustained pace than before.

Developers thus regularly have to perform complex testing, integration and deployment tasks. They need a tool, and several exist. In this presentation I propose to introduce Jenkins, the open-source tool offering the largest number of customization options to date. The talk will be an opportunity to present the basics of Jenkins and of so-called continuous integration practices, illustrated with concrete examples of the solution in use.

Jenkins is a tool installed as a website that allows the configuration of automated tasks. The possible tasks are numerous; notable examples include compiling or publishing a module, integrating a system, testing a feature, or deploying to the production environment. Tasks can be triggered automatically as needed. For instance, an integration test can be run on every change to the code base of one of the modules. The individual features remain fairly simple and answer very specific needs, but they can be combined to automate most peripheral tasks, leaving more time available for the development of your software.

Short Bio: Dimitri Durieux is a member of CETIC specialized in software quality. He has worked on several research projects in contexts of varying complexity and constraints, and helps Walloon industry produce better-quality code. Notably, he has assessed code quality and development practices in more than 50 Walloon IT companies to help them set up effective quality management. During his time at CETIC, he has had the opportunity to deploy and maintain several Jenkins instances while demonstrating the feasibility of complex use cases of this technology for several Walloon companies.

7-Eleven is the largest convenience store chain in the world with 60,000 locations around the globe. That is more than any other retailer or food service provider. In conjunction with the release of its updated 7-Rewards program, 7-Eleven also relaunched its website and mobile application using Drupal 8! Check it out at, and grab a Slurpee while you're at it!

During our development we are using the Gitflow process. (Some may prefer others, but this works well in our current situation.) In order to do this we are using the JGitflow Maven plugin (developed by Atlassian).

If our software is ready to go out of the door, we make a release branch. This release branch is then filled with documentation with regards to the release (Jira tickets, git logs, build numbers, … pretty much everything to make it trackable inside the artifact).
From time to time the back merge of this release branch towards Develop would fail and cause us all kinds of trouble. By default the JGitflow Maven plugin tries to do a back merge and delete the branch. This was a behaviour we wanted to change.
So during my spare time (read nights) I decided it was time to do some nightly hacking. The result is a new maven option “backMergeToDevelop” that defaults to true but can be overridden.

I created the necessary tests to validate it works and created a pull request, however no follow up so far from Atlassian … Anybody from Atlassian reading this? Reach out to me so I can get this in the next release (if any is coming …?)

It seems there were no commits since 2015-09 …


May 07, 2017

It's not a secret that I was playing/experimenting with OpenStack over the last few days. When I mention OpenStack, I should even say RDO, as it's RPM packaged, built and tested on CentOS infra.

Now that it's time to deploy it in production, you should have a deeper look at how to proceed and which tool to use. Sure, Packstack can help you set up a quick PoC, but after some discussions with people hanging around in the #rdo irc channel on freenode, it seems that almost everybody agreed that it's not the kind of tool you want to use for a proper deployment.

So let's have a look at the available options. While I really like/prefer Ansible, we (CentOS Project) still use puppet as our configuration management tool, itself using Foreman as the ENC. So let's see both options.

  • Ansible: lots of native modules exist to manage an existing/already-deployed OpenStack cloud, but nothing that really helps set one up from scratch. OTOH it's true that OpenStack Ansible exists, but that sets up OpenStack components in LXC containers, and I wasn't really comfortable with the whole idea (YMMV)
  • Puppet: lots of puppet modules, so you can automatically reuse/import those into your existing puppet setup; this seems to be the preferred method when discussing with people in #rdo (when not using TripleO, though)

So, after some analysis, and despite the fact that I really prefer Ansible over Puppet, I decided (so that it could still make sense in our infra) to go the "puppet modules way". That was the beginning of a journey, where I saw a lot of Yaks to shave too.

It started with me trying to "just" reuse and adapt some existing modules I found. Wrong. And it's even fun because it's one of my mantras : "Don't try to automate what you can't understand from scratch" (And I fully agree with Matthias' thought on this ).

So one can just read all the openstack puppet modules and then try to understand how to assemble them together to build a cloud. But I remembered that Packstack itself is puppet driven. So I decided to have a look at what it was generating and start from that to write my own module from scratch. How to proceed? Easy: on a VM, just install packstack, generate the answer file, "salt" it to your needs, and generate the manifests:

 yum install -y centos-release-openstack-ocata && yum install openstack-packstack -y
 packstack --gen-answer-file=answers.txt
 vim answers.txt
 packstack --answer-file=answers.txt --dry-run
 * The installation log file is available at: /var/tmp/packstack/20170508-101433-49cCcj/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20170508-101433-49cCcj/manifests

So now we can have a look at all the generated manifests and start our own from scratch, reimporting all the needed openstack puppet modules. That's what I did... but I started to encounter some issues. The first one was that the puppet version we were using was 3.6.2 (everywhere, on every release/arch we support, so CentOS 6 and 7, and x86_64, i386, aarch64, ppc64, ppc64le).

One of the OpenStack components is RabbitMQ, but the openstack modules rely on the puppetlabs module to deploy/manage it. You'll see a lot of those external modules being called/needed by openstack puppet. The first thing I had to do was investigate our own modules, as some have the same names but don't come from puppetlabs/forge. Instead of analyzing all of those, I moved everything RDO-related to a different environment so that it wouldn't conflict with some of our existing modules. Back now to the RabbitMQ one: puppet errored when just trying to use it. First yak to shave: updating the whole CentOS infra puppet to a higher version because of a puppet bug. So let's rebuild puppet for CentOS 6/7 with a higher version on CBS.

That means of course testing our own modules first, on our test Foreman/puppetmasterd instance, and as the upgrade worked, I applied it everywhere. Good, so let's jump to the next yak.

After the rabbitmq issue was solved, I encountered other ones, this time coming from the openstack puppet modules: the .rb ruby code used for types/providers was expecting ruby2 and not 1.8.3, which was the one available on our puppetmasterd (yeah, our Foreman was on a CentOS 6 node). So another yak to shave: migrating our Foreman instance from CentOS 6 to a new CentOS 7 node. Basically installing a CentOS 7 node with the same Foreman version running on the CentOS 6 node and then following the migration procedure. But then, again, time lost testing the update/upgrade and also all the other modules, etc. (One can see why I prefer agentless cfgmgmt.)

Finally I found that some of the openstack puppet modules don't touch the whole config. Let me explain why. In OpenStack Ocata, some things are mandatory, like the Placement API, but despite all the classes being applied, I had issues getting it to run correctly when deploying an instance. It's true that I initially had a bug in my puppet code for the user/password used to configure the rabbitmq settings, but that was solved and also applied correctly in /etc/nova/nova.conf (setting "transport_url="). Yet the openstack nova services (all nova-*.log files, btw) kept saying that the given credentials were refused by rabbitmq, while they worked when tested manually.

After having verified in the rabbitmq logs, I saw that despite what was configured in nova.conf, services were still trying to use the wrong user/pass to connect to rabbitmq. Strange as ::nova::cell_v2::simple_setup was included and was supposed also to use the transport_url declared at the nova.conf level (and so configured by ::nova) . That's how I discovered that something "ugly" happened : in fact even if you modify nova.conf, it stores some settings in the mysql DB, and you can see those (so the "wrong" ones in my case) with :

nova-manage cell_v2 list_cells --debug

Something to keep in mind for an initial deployment: if your rabbitmq user/pass needs to be changed, puppet will not complain, but it will only update the conf file, not the settings first imported by puppet into the DB (table nova_api.cell_mapping, if you're interested). After that, everything was running, and I reinstalled/reprovisioned my test nodes multiple times, applying the puppet module/manifests from puppetmasterd to confirm.

That was quite a journey, and it's probably only the beginning, but it's a good start. Now to investigate other options for cinder/glance, as it seems Gluster was deprecated and I'd like to know why.

Hope this helps if you need to bootstrap openstack with puppet !

May 06, 2017

In my computer lifetime, I’ve used Windows 3, NT, 95, … 10, Linux (mostly Gnome from 2002 onwards) and, since late 200X, Mac. Currently I’m mostly using Mac because it fits nicely between Windows and Linux.

At a certain time I was doing a demo on my Linux laptop and I got a blue screen in front of a client. Pretty much a No No for everybody … So I decided to switch to Mac and ever since no problems doing demo’s and presentations.

My Mac is already pretty old (read: 3+ years). As I’ve installed a lot on it, I decided to re-install Sierra. Try the good old Windows way: re-install and get a fast machine again. But what happened pretty much blew my mind …

I did a clean install and afterwards checked my laptop’s memory consumption … the bare system was still eating 7 gigs of memory … Okay, I may have 16 in total, but why would my core OS without any additional installations need 7 gigs … This is utter madness and even a complete rip-off. This would mean that in order to run a decent Mac Sierra you always need 16 gigs to run at a decent speed … I’m following Microsoft pretty closely as they are more than ever focusing on good products and helping the opensource community (Did you know Microsoft is the #1 contributor to opensource software ???!!!). Even their Windows 10 is looking like a decent product again …

My current Mac is still doing fine, so I hope to postpone the purchase of a new machine to a somewhat far future … However, if I needed to purchase something at the moment, I wouldn’t know what to buy.
The new Macs really scare me. The touch bar is something that nobody seems to find useful; even worse, pretty much everybody I talked to hates it and finds it a waste of money … For myself, I find Microsoft did a good job with Windows 10; however, the command line interface is still not up to par with the one from Mac or Linux … So if I had to buy something right now, it would probably be a laptop with an SSD, an i7 and 16 gigs or more of memory that is completely supported on Linux. I would then install Linux Mint, as that’s still the distro that seems the most user friendly to me …. if you have other suggestions, just let me know !!!

Let’s just hope Apple can turn it around and make decent desktops again that don’t eat all your memory for no reason.

I published the following diary on “The story of the CFO and CEO…“.

I read an interesting article in a Belgian IT magazine[1]. Every year, they organise a big survey to collect feelings from people working in the IT field (not only security). It is very broad and covers their salary, work environments, expectations, etc. For infosec people, one of the key points was that people wanted to attend more trainings and conferences… [Read more]

[The post [SANS ISC] The story of the CFO and CEO… has been first published on /dev/random]

May 05, 2017

I published the following diary on “HTTP Headers… the Achilles’ heel of many applications“.

When browsing a target web application, a pentester is looking for all “entry” or “injection” points present in the pages. Everybody knows that a static website with pure HTML code is less juicy compared to a website with many forms and gadgets where visitors may interact with it. Classic vulnerabilities (XSS, SQLi) are based on the user input that is abused to send unexpected data to the server… [Read more]

[The post [SANS ISC] HTTP Headers… the Achilles’ heel of many applications has been first published on /dev/random]

May 02, 2017

Today, while hunting, I found a malicious HTML page in my spam trap. The page was a fake JP Morgan Chase bank. Nothing fancy. When I found such material, I usually search for “POST” HTTP requests to collect URLs and visit the websites that receive the victim’s data. As usual, the website was not properly protected and all files were readable. This one looked interesting:

Data File

The first question was: is this data relevant? Probably not… Why?

Today, many attackers protect their malicious websites via an .htaccess file to restrict access to their victims only. In this case, the Chase bank being based in the US, we could expect most of the visitors’ IP addresses to be geolocated there, but that was not the case this time. I downloaded the data file, which contained 503 records. Indeed, most of them contained empty or irrelevant information. So I decided to have a look at the IP addresses. Who’s visiting the phishing site? Let’s generate some statistics!

$ grep ^ip: data.txt |cut -d ' ' -f 2 | sort -u >victims.csv
$ wc -l victims.csv
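
Against a hypothetical sample in the same "ip: <address>" record format (the file name and field layout are my assumptions, not the actual kit's), the extraction can be sketched end-to-end, with a per-IP hit count added to spot the repeat visitors:

```shell
# Hypothetical sample of the phishing kit's data file (format assumed)
cat > data.txt <<'EOF'
ip: 203.0.113.10
email: foo@example.com
ip: 203.0.113.10
ip: 198.51.100.7
EOF

# Unique visitor IPs, as in the post
grep '^ip:' data.txt | cut -d ' ' -f 2 | sort -u > victims.csv
wc -l < victims.csv

# Per-IP hit counts reveal the repeat visitors
grep '^ip:' data.txt | cut -d ' ' -f 2 | sort | uniq -c | sort -rn
```

The `sort | uniq -c | sort -rn` pipeline is the classic way to get a descending hit count per address before feeding the list to a geolocation step.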

With Splunk, we can easily display them on a fancy map:

| inputlookup victims.csv | iplocation IP \
| stats count by IP, lat, lon, City, Country, Region

IP Map

Here is the top-5 of countries which visited the phishing page or, more precisely, which submitted a POST request:

United States 64
United Kingdom 13
France 11
Germany 7
Australia 5

Some IP addresses visited the website multiple times; the top repeat visitors hit it 187, 77, 21, 9 and 6 times.

A reverse lookup on the IP addresses revealed some interesting information:

  • The Google App Engine was the top visitor
  • Many VPS instances, probably owned by researchers, visited the page (OVH, Amazon EC2)
  • Services protecting against phishing sites visited the page (ex:,,
  • Many Tor exit-nodes
  • Some online URL scanners (
  • Some CERTs (CIRCL)

Two nice names were found:

  • Trendmicro
  • Trustwave

No real victim left his/her data on the fake website. Some records contained data, but fake ones (probably entered manually). All the traffic was generated by crawlers, bots and security tools…

[The post Who’s Visiting the Phishing Site? has been first published on /dev/random]

The post How to enable TLS 1.3 on Nginx appeared first on

Since Nginx 1.13, support has been added for TLSv1.3, the latest version of the TLS protocol. Depending on when you read this post, chances are you're running an older version of Nginx at the moment, which doesn't yet support TLS 1.3. In that case, consider running Nginx in a container for the latest version, or compiling Nginx from source.

Enable TLSv1.3 in Nginx

I'm going to assume you already have a working TLS configuration. It'll include configs like these;

ssl_protocols               TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers                 ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256;
ssl_prefer_server_ciphers   on;
ssl_ecdh_curve              secp384r1;

And quite a few more parameters.

To enable TLS 1.3, add TLSv1.3 to the ssl_protocols list.

ssl_protocols               TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;

And reload your Nginx configuration.

Test if your Nginx version supports TLS 1.3

Add the config as shown above, and try to run Nginx in debug mode.

$ nginx -t
nginx: [emerg] invalid value "TLSv1.3" in /etc/nginx/conf.d/
nginx: configuration file /etc/nginx/nginx.conf test failed

If you see the message above, your Nginx version doesn't support TLS 1.3. A working config will tell you this;

$ nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

If you don't see any errors, your Nginx version supports TLS 1.3.

Further requirements

Now that you've told Nginx to use TLS 1.3, it will use TLS 1.3 where available. However ... there aren't many libraries out there that offer TLS 1.3 yet.

For instance, OpenSSL is still debating the TLS 1.3 implementation, which seems reasonable because, to the best of my knowledge, the TLS 1.3 spec isn't final yet. There is TLS 1.3 support included in the very latest OpenSSL version, though there doesn't appear to be a sane person online that actually uses it.

TLSv1.3 draft-19 support is in master if you "config" with "enable-tls1_3". Note that draft-19 is not compatible with draft-18, which is still being used by some other libraries. Our draft-18 version is in the tls1.3-draft-18 branch.
TLS 1.3 support in OpenSSL

In short; yes, you can enable TLS 1.3 in Nginx, but I haven't found an OS & library that will allow me to actually use TLS 1.3.
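
One quick way to tell whether a given OpenSSL build knows about TLS 1.3 at all is to probe its command-line client for the relevant option — a sketch (the `-tls1_3` flag assumes a TLS 1.3-capable `s_client`; no network access is needed):

```shell
# Probe the local OpenSSL CLI for a TLS 1.3 option.
# A -tls1_3 flag in s_client's help output indicates a TLS 1.3-capable build.
if openssl s_client -help 2>&1 | grep -q -- '-tls1_3'; then
    echo "local OpenSSL advertises TLS 1.3"
else
    echo "local OpenSSL has no TLS 1.3 support"
fi
```

If the flag is missing, Nginx linked against that library will accept the `TLSv1.3` config token (on a new enough Nginx) but never actually negotiate it.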


May 01, 2017

Last week, 3,271 people gathered at DrupalCon Baltimore to share ideas, to connect with friends and colleagues, and to collaborate on both code and community. It was a great event. One of my biggest takeaways from DrupalCon Baltimore is that Drupal 8's momentum is picking up more and more steam. There are now about 15,000 Drupal 8 sites launching every month.

I want to continue the tradition of sharing my State of Drupal presentations. You can watch a recording of my keynote (starting at 24:00) or download a copy of my slides here (108 MB).

The first half of my presentation provided an overview of Drupal 8 updates. I discussed why Drupal is for ambitious digital experiences, how we will make Drupal upgrades easier and why we added four new Drupal 8 committers recently.

The second half of my keynote highlighted the newest improvements to Drupal 8.3, which was released less than a month ago. I showcased how an organization like The Louvre could use Drupal 8 to take advantage of new or improved site builder (layouts video, workflow video), content author (authoring video) and end user (BigPipe video, chatbot video) features.

I also shared that the power of Drupal lies in its ability to support the spectrum of both traditional websites and decoupled applications. Drupal continues to move beyond the page, and is equipped to support new user experiences and distribution platforms, such as conversational user interfaces. The ability to support any user experience is driving the community's emphasis on making Drupal API-first, not API-only.

Finally, it was really rewarding to spotlight several Drupalists that have made an incredible impact on Drupal. If you are interested in viewing each spotlight, they are now available on my YouTube channel.

Thanks to all who made DrupalCon Baltimore a truly amazing event. Every year, DrupalCon allows the Drupal community to come together to re-energize, collaborate and celebrate. Discussions on evolving Drupal's Code of Conduct and community governance were held and will continue to take place virtually after DrupalCon. If you have not yet had the chance, I encourage you to participate.

Luc and Annick are making their space available for the whole month of May to entice passers-by to come and visit LovArte.

Display window of the Cocquyt funeral home

April 30, 2017

Sometimes, amid the disdain of the human throng, he managed to catch a fleeting glance, to draw an attention glued to a phone, to break for a few seconds through the stress-laden, anxious contempt of the hurried commuters. But the rare responses to his gesture were invariable:
— No!
— Thanks, no. (accompanied by pursed lips and a shake of the head)
— No time!
— No change…

Yet he was not asking for money! He asked for nothing in exchange for his red roses. Except perhaps a smile.

Seized by an instinctive impulse, he had gone down into the metro that morning, determined to offer a little kindness, a little happiness, in the form of a bouquet of flowers for the first stranger who would accept it.

As the human swarm gradually thinned out and dispersed into the big grey buildings of the business district, he looked sadly at his bouquet.
— At least I tried, he murmured, before entrusting the flowers to the rubbish bin, that cynical metal vase.

A tear beaded at the corner of his eye. He wiped it away with the back of his hand before sitting down on the concrete steps. He closed his eyes, forcing his heart to stop.
— Sir! Sir!

A hand was shaking his shoulder. Before his tired eyes stood a young police officer, uniform gleaming, haircut neat and dashing.
— Sir, I watched you with your bouquet of flowers…
— Yes? he said, filled with hope and gratitude.
— May I see your permit for peddling in the metro? If you do not have one, I will have to fine you.

A short story inspired by this tweet. Photo by Tiberiu Ana.


Enjoyed your read? Support the author on Tipeee, Patreon, Paypal or Liberapay. Even a symbolic donation makes all the difference! Then let's meet again on Facebook, Medium, Twitter or Mastodon.

This text is published under the CC-By BE licence.

April 27, 2017

Like many developers, I get a broken build on a regular basis. When setting up a new project this happens a lot, because of missing infrastructure dependencies or not having access to the company's private Docker hub …

Anyway, on the old Jenkins (1), I used to go into the workspace folder to have a look at the build itself. In Jenkins 2 this seemed to have disappeared (when using the pipeline) … or did it … ?

It’s there, but you got to look very careful. To get there follow these simple steps:

  • Go to the failing build (the #…)
  • Click on “Pipeline Steps” in the sidebar
  • Click on “Allocate node : …”
  • On the right side the “Workspace” folder icon that we all love appears
  • Click it

Et voilà, we are all set.

April 26, 2017

Here is my quick wrap-up of the FIRST Technical Colloquium hosted by Cisco in Amsterdam. This was my first participation in a FIRST event. FIRST is an organization that helps with incident response, as stated on their website:

FIRST is a premier organization and recognized global leader in incident response. Membership in FIRST enables incident response teams to more effectively respond to security incidents by providing access to best practices, tools, and trusted communication with member teams.

The event was organized at the Cisco office. Monday was dedicated to a training about incident response and the next two days were dedicated to presentations, all of them focusing on the defence side (“blue team”). Here are a few notes about interesting stuff that I learned.

The first day started with two guys from Facebook: Eric Water & Matt Moren. They presented the solution developed internally at Facebook to solve the problem of capturing network traffic: “PCAP don’t scale”. In fact, with their solution, it scales! To investigate incidents, PCAPs are often the gold mine. They contain many IOC’s but they also introduce challenges: the disk space, the retention policy, the growing network throughput. When vendors’ solutions don’t fit, it’s time to build your own. Ok, only big organizations like Facebook have the resources to do this, but it’s quite fun. The solution they developed can be seen as a service: “PCAP as a Service”. They started by building the right hardware for the sensors and added a cool software layer on top of it. Once collected, interesting PCAPs are analyzed using the Cloudshark service. They explained how they reached top performance by mixing NFS and their GlusterFS solution. Really a cool solution if you have multi-gigabit networks to tap!

The next presentation focused on “internal network monitoring and anomaly detection through host clustering” by Thomas Atterna from TNO. The idea behind this talk was to explain how to also monitor internal traffic. Indeed, in many cases, organizations still focus on the perimeter, but internal traffic is also important. We can detect proxies, rogue servers, C2, people trying to pivot, etc. The talk explained how to build clusters of hosts. A cluster of hosts is a group of devices that share the same behaviour, like mail servers, database servers, … “Normal” behaviour is then determined per cluster, and deviations of individual hosts are observed. Clusters are based on behaviour (the amount of traffic, the number of flows, protocols, …). The model is useful when your network is quite closed and stable, but much more difficult to implement in an “open” environment (like university networks).
Then Davide Carnali gave a nice review of the Nigerian cybercrime landscape. He explained in detail how they prepare their attacks, how they steal credentials, how they deploy the attacking platform (RDP, RAT, VPN, etc). The second part was a step-by-step explanation of how they abuse companies to steal money (sometimes a lot of it!). An interesting fact reported by Davide: the time required between the compromise of a new host (to drop a malicious payload) and the generation of new maldocs pointing to this host is only… 3 hours!
The next presentation was performed by Gal Bitensky (Minerva): “Vaccination: An Anti-Honeypot Approach”. Gal (re-)explained what the purpose of a honeypot is and how they can be defeated. Then, he presented a nice review of ways used by attackers to detect sandboxes. Basically, when a malware detects something “suspicious” (read: something which makes it think that it is running in a sandbox), it will just silently exit. Gal had the idea to create a script which creates plenty of artefacts on a Windows system to defeat malware. His tool has been released here.
Paul Alderson (FireEye) presented “Injection without needles: A detailed look at the data being injected into our web browsers”. Basically, it was a huge review of 18 months of web-inject and other configuration data gathered from several botnets. Nothing really exciting.
The next talk was more interesting… Back to the roots: SWITCH presented their DNS Firewall solution. This is a service they provide to their members. It is based on DNS RPZ. The idea was to provide the following features:
  • Prevention
  • Detection
  • Awareness

Indeed, when a DNS request is blocked, the user is redirected to a landing page which gives more details about the problem. Note that this can have a collateral issue like blocking a complete domain (and not only specific URLs). This is a great security control to deploy. Note that RPZ support is implemented in many solutions, especially Bind 9.
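
For readers who have not used RPZ before, a minimal Bind 9 setup could look like the sketch below — all zone and host names here are illustrative assumptions, not SWITCH's actual configuration. Queries for a listed domain are rewritten (here via CNAME) to a landing page:

```shell
# Hypothetical minimal Bind 9 RPZ configuration, written out for inspection.
cat > named-rpz.conf <<'EOF'
options {
    response-policy { zone "rpz.local"; };
};
zone "rpz.local" {
    type master;
    file "db.rpz.local";
};
EOF

cat > db.rpz.local <<'EOF'
$TTL 300
@               IN SOA  localhost. admin.rpz.local. (1 3600 600 86400 300)
                IN NS   localhost.
; redirect a known-bad domain to the landing page
bad.example.com IN CNAME landing.example.net.
EOF

grep -c 'CNAME' db.rpz.local
```

The owner names in the RPZ zone are the queried domains, and the record data decides the rewrite — a CNAME to a landing page gives exactly the "blocked, here's why" user experience described above.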

Finally, the first day ended with a presentation by Tatsuya Ihica from Recruit CSIRT: “Let your CSIRT do malware analysis”. It was a complete review of the platform that they deployed to perform more efficient automatic malware analysis. The project is based on Cuckoo that was heavily modified to match their new requirements.

The second day started with an introduction to the FIRST organization made by Aaron Kaplan, one of the board members. I liked the quote given by Aaron:

If country A does not talk to country B because of ‘cyber’, then a criminal can hide in two countries

Then, the first talk was really interesting: Chris Hall presented “Intelligence Collection Techniques“. After explaining the different sources where intelligence can be collected (open sources, sinkholes, …), he reviewed a series of tools that he developed to help automate these tasks. His tools address:
  • Using the Google API, VT API
  • Paste websites (like
  • YARA rules
  • DNS typosquatting
  • Whois queries

All the tools are available here. A very nice talk with tips & tricks that you can use immediately in your organization.

The next talk was presented by a Cisco guy, Sunil Amin: “Security Analytics with Network Flows”. Netflow isn’t a new technology. Initially developed by Cisco, there are today a lot of versions and forks of it. Based on the definition of a “flow” (“a layer 3 IP communication between two endpoints during some time period”), we got a review of Netflow. Netflow is valuable to increase the visibility of what’s happening on your networks, but it also has some specific points that must be addressed before performing analysis, e.g. de-duplicating flows. There are many use cases where flows are useful:
  • Discover RFC1918 address space
  • Discover internal services
  • Look for blacklisted services
  • Reveal reconnaissance
  • Bad behaviours
  • Compromised hosts, pivot
    • HTTP connection to external host
    • SSH reverse shell
    • Port scanning port 445 / 139
I would have expected a real case where Netflow was used to discover something juicy. The talk ended with a review of the tools available to process Netflow data: SiLK, nfdump, ntop, but log management solutions can also be used, like the ELK stack or Apache Spot. Nothing really new but a good reminder.
Then, Joel Snape from BT presented “Discovering (and fixing!) vulnerable systems at scale“. BT, as a major player on the Internet, is facing many issues with compromised hosts (from customers to its own resources). Joel explained the workflow and tools they deployed to help in this huge task. It is based on the following cycle: introduction, data collection, exploration and remediation (the hardest part!).
I like the description of their “CERT dropbox” which can be deployed at any place on the network to perform the following tasks:
  • Telemetry collection
  • Data exfiltration
  • Network exploration
  • Vulnerability/discovery scanning
An interesting remark from the audience: ISP don’t have only to protect their customers from the wild Internet but also the Internet from their (bad) customers!
Feike Hacqueboard, from TrendMicro, explained “How politically motivated threat actors attack“. He reviewed some famous stories of compromised organizations (like the French TV channel TV5), then reviewed the activity of some interesting groups like C-Major or Pawn Storm. A nice review of the Yahoo! OAuth abuse was given, as well as of the tab-nabbing attack against OWA services.
Jose Enrique Hernandez (Zenedge) presented “Lessons learned in fighting Targeted Bot Attacks“. After a quick review of what bots are (they are not always malicious – think about the Google crawler bot), he reviewed different techniques to protect web resources from bots and why they often fail, like the JavaScript challenge or the Cloudflare bypass. These are “silent challenges”. Loud challenges are, for example, CAPTCHAs. Then Jose explained how to build a good solution to protect your resources:
  • You need a reverse proxy (to be able to change requests on the fly)
  • LUA hooks
  • State db for concurrency
  • Load balancer for scalability
  • fingerprintjs2 / JS Challenge

Finally, two other Cisco guys, Steve McKinney & Eddie Allan, presented “Leveraging Event Streaming and Large Scale Analysis to Protect Cisco“. Cisco is collecting a huge amount of data on a daily basis (they speak in terabytes!). As a Splunk user, they are facing an issue with the indexing licence: to index all these data, they would need extra licenses (and pay a lot of money). They explained how to “pre-process” the data before sending it to Splunk, to reduce the noise and the amount of data to index.
The idea is to put a “black box” between the collectors and Splunk. They explained what’s in this black box with some use cases:
  • WSA logs (350M+ events / day)
  • Passive DNS (7.5TB / day)
  • Users identification
  • osquery data

Some useful tips they gave, which are valid for any log management platform:

  • Don’t assume your data is well-formed and complete
  • Don’t assume your data is always flowing
  • Don’t collect all the things at once
  • Share!

Two intense days full of useful information and tips to better defend your networks and/or collect intelligence. The slides should be published soon.

[The post FIRST TC Amsterdam 2017 Wrap-Up has been first published on /dev/random]

This Thursday, May 18, 2017 at 7 p.m., the 59th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: Developing with the MEAN stack

Theme: Development

Audience: developers | students | startup founders

Speakers: Fabian Vilers (Dev One) and Sylvain Guérin (Guess Engineering)

Venue: Université de Mons, Campus Plaine de Nimy, avenue Maistriau, Grands Amphithéâtres, Auditoire Curie (see this map on the UMONS website, or the OSM map).

Participation is free and only requires your registration, preferably in advance via the page, or at the entrance. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also benefit from the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly cycle, feel free to consult the agenda and subscribe to the mailing list in order to systematically receive the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized in collaboration with, and on the premises of, the Mons universities and colleges involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.

Description: MEAN is a complete JavaScript stack, based on open-source software, for rapidly developing web-oriented applications. It is composed of 4 fundamental pillars:

  • MongoDB (document-oriented NoSQL database)
  • Express (back-end application development framework)
  • Angular (front-end application development framework)
  • Node.js (cross-platform runtime environment based on Google's V8 JavaScript engine).

During this session, we will present in turn each element of a MEAN application, while building step by step, live, and together with you, a simple and functional application.

Short bios:

  • A developer for nearly 17 years, Fabian describes himself as a software craftsman. He pays great attention to precision, quality and care when designing and fine-tuning software. Since 2010 he has been a consultant with Dev One, for which he strengthens, guides and trains development teams while promoting good software development practices in Belgium and France. You can follow Fabian on Twitter and LinkedIn.
  • Sylvain has been helping companies carry out their IT projects for more than 20 years. An active witness of the digital revolution and a preacher of the agile gospel, he will try to make you see things from a different angle. You can follow Sylvain on Twitter and LinkedIn.

April 25, 2017

More and more developers are choosing content-as-a-service solutions known as headless CMSes — content repositories which offer no-frills editorial interfaces and expose content APIs for consumption by an expanding array of applications. Headless CMSes share a few common traits: they lack end-user front ends, provide few to no editorial tools for display and layout, and as such leave presentational concerns almost entirely up to the front-end developer. Headless CMSes have gained popularity because:

  • Developers want to separate the concerns of structure and presentation so that front-end teams and back-end teams can work independently from each other.
  • Editors and marketers are looking for solutions that can serve content to a growing list of channels, including websites, back-end systems, single-page applications, native applications, and even emerging devices such as wearables, conversational interfaces, and IoT devices.

Due to this trend among developers, many are rightfully asking whether headless CMSes are challenging the market for traditional CMSes. I'm not convinced that headless CMSes as they stand today are where the CMS world in general is headed. In fact, I believe a nuanced view is needed.

In this blog post, I'll explain why Drupal has one crucial advantage that propels it beyond the emerging headless competitors: it can be an exceptional CMS for editors who need control over the presentation of their content and a rich headless CMS for developers building out large content ecosystems in a single package.

As Drupal continues to power the websites that have long been its bread and butter, it is also used more and more to serve content to other back-end systems, single-page applications, native applications, and even conversational interfaces — all at the same time.

Headless CMSes are leaving editors behind

This diagram illustrates the differences between a traditional Drupal website and a headless CMS with various front ends receiving content.

Some claim that headless CMSes will replace traditional CMSes like Drupal and WordPress when it comes to content editors and marketers. I'm not so sure.

Where headless CMSes fall flat is in the areas of in-context administration and in-place editing of content. Our outside-in efforts, in contrast, aim to allow an editor to administer content and page structure in an interface alongside a live preview rather than in an interface that is completely separate from the end user experience. Some examples of this paradigm include dragging blocks directly into regions or reordering menu items and then seeing both of these changes apply live.

By their nature, headless CMSes lack full-fledged editorial experience integrated into the front ends to which they serve content. Unless they expose a content editing interface tied to each front end, in-context administration and in-place editing are impossible. In other words, to provide an editorial experience on the front end, that front end must be aware of that content editing interface — hence the necessity of coupling.

Display and layout manipulation is another area that is key to making marketers successful. One of Drupal's key features is the ability to control where content appears in a layout structure. Headless CMSes are unopinionated about display and layout settings. But just like in-place editing and in-context administration, editorial tools that enable this need to be integrated into the front end that faces the end user in order to be useful.

In addition, editors and marketers are particularly concerned about how content will look once it's published. Access to an easy end-to-end preview system, especially for unpublished content, is essential to many editors' workflows. In the headless CMS paradigm, developers have to jump through fairly significant hoops to enable seamless preview, including setting up a new API endpoint or staging environment and deploying a separate version of their application that issues requests against new paths. As a result, I believe seamless preview — without having to tap on a developer's shoulder — is still necessary.

Features like in-place editing, in-context administration, layout manipulation, and seamless but faithful preview are essential building blocks for an optimal editorial experience for content creators and marketers. For some use cases, these drawbacks are totally manageable, especially where an application needs little editorial interaction and is more developer-focused. But for content editors, headless CMSes simply don't offer the toolkits they have come to expect; they fall short where Drupal shines.

Drupal empowers both editors and application developers

This diagram illustrates the differences between a coupled — but headless-enabled — Drupal website and a headless CMS with various front ends receiving content.

All of this isn't to say that headless isn't important. Headless is important, but supporting both headless and traditional approaches is one of the biggest advantages of Drupal. After all, content management systems need to serve content beyond editor-focused websites to single-page applications, native applications, and even emerging devices such as wearables, conversational interfaces, and IoT devices.

Fortunately, the ongoing API-first initiative is actively working to advance existing and new web services efforts that make using Drupal as a content service much easier and more optimal for developers. We're working on making developers of these applications more productive, whether through web services that provide a great developer experience like JSON API and GraphQL or through tooling that accelerates headless application development like the Waterwheel ecosystem.

For me, the key takeaway of this discussion is: Drupal is great for both editors and developers. But there are some caveats. For web experiences that need significant focus on the editor or assembler experience, you should use a coupled Drupal front end which gives you the ability to edit and manipulate the front end without involving a developer. For web experiences where you don't need editors to be involved, Drupal is still ideal. In an API-first approach, Drupal provides for other digital experiences that it can't explicitly support (those that aren't web-based). This keeps both options open to you.

Drupal for your site, headless Drupal for your apps

This diagram illustrates the ideal architecture for Drupal, which should be leveraged as both a front end in and of itself as well as a content service for other front ends.

In this day and age, having all channels served by a single source of truth for content is important. But what architecture is optimal for this approach? While reading this you might have also experienced some déjà-vu from a blog post I wrote last year about how you should decouple Drupal, which is still solid advice nearly a year after I first posted it.

Ultimately, I recommend an architecture where Drupal is simultaneously coupled and decoupled; in short, Drupal shines when it's positioned both for editors and for application developers, because Drupal is great at both roles. In other words, your content repository should also be your public-facing website — a contiguous site with full editorial capabilities. At the same time, it should be the centerpiece for your collection of applications, which don't necessitate editorial tools but do offer your developers the experience they want. Keeping Drupal as a coupled website, while concurrently adding decoupled applications, isn't a limitation; it's an enhancement.


Today's goal isn't to make Drupal API-only, but rather API-first. It doesn't limit you to a coupled approach like CMSes without APIs, and it doesn't limit you to an API-only approach like Contentful and other headless CMSes. To me, that is the most important conclusion to draw from this: Drupal supports an entire spectrum of possibilities. This allows you to make the proper trade-off between optimizing for your editors and marketers, or for your developers, and to shift elsewhere on that spectrum as your needs change.

It's a spectrum that encompasses both extremes of the scenarios that a coupled approach and headless approach represent. You can use Drupal to power a single website as we have for many years. At the same time, you can use Drupal to power a long list of applications beyond a traditional website. In doing so, Drupal can be adjusted up and down along this spectrum according to the requirements of your developers and editors.

In other words, Drupal is API-first, not API-only, and rather than leave editors and marketers behind in favor of developers, it gives everyone what they need in one single package.

Special thanks to Preston So for contributions to this blog post and to Wim Leers, Ted Bowman, Chris Hamper and Matt Grill for their feedback during the writing process.

I’m the kind of dev that dreads configuring webservers and that would rather not have to put up with random ops stuff before being able to get work done. Docker is one of those things I’ve never looked into, cause clearly it’s evil annoying boring evil confusing evil ops stuff. Two of my colleagues just introduced me to a one-line docker command that kind of blew my mind.

Want to run tests for a project but don’t have PHP7 installed? Want to execute a custom Composer script that runs both these tests and the linters without having Composer installed? Don’t want to execute code you are not that familiar with on your machine that contains your private keys, etc? Assuming you have Docker installed, this command is all you need:

docker run --rm --interactive --tty --volume $PWD:/app -w /app\
 --volume ~/.composer:/composer --user $(id -u):$(id -g) composer composer ci

This command uses the Composer Docker image, as indicated by the first composer at the end of the command. After that you can specify whatever you want to execute, in this case composer ci, where ci is a custom composer Script. (If you want to know what the Docker image is doing behind the scenes, check its entry point file.)
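
For context, `composer ci` only works because the project's composer.json defines a `ci` entry under "scripts". A sketch of what such a definition could look like — the script names and tool choices below are my assumptions for illustration, not the project's actual setup:

```shell
# Write a hypothetical composer.json with a "ci" script that chains
# the tests and the linters (names are assumed, adapt to your project).
cat > composer.json <<'EOF'
{
    "scripts": {
        "ci": [
            "@test",
            "@cs"
        ],
        "test": "phpunit",
        "cs": "phpcs -p -s"
    }
}
EOF

# Sanity-check that the file is valid JSON
python3 -m json.tool composer.json > /dev/null && echo "valid JSON"
```

With a definition like this, `composer ci` runs both referenced scripts in order, whether Composer is installed locally or invoked through the Docker image shown above.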

This works without having PHP or Composer installed, and is very fast after the initial dependencies have been pulled. And each time you execute the command, the environment is destroyed, avoiding state leakage. You can create a composer alias in your .bash_aliases as follows, and then execute composer on your host just as you would do if it was actually installed (and running) there.

alias composer='docker run --rm --interactive --tty --volume $PWD:/app -w /app\
 --volume ~/.composer:/composer --user $(id -u):$(id -g) composer composer'

Of course you are not limited to running Composer commands; you can also invoke PHPUnit

...(id -g) composer vendor/bin/phpunit

or indeed any PHP code.

...(id -g) composer php -r 'echo "hi";'

This one-liner is not sufficient if you require additional dependencies, such as PHP extensions, databases or webservers. In those cases you probably want to create your own Dockerfile. Though to run the tests of most PHP libraries you should be good. I’ve now uninstalled my local Composer and PHP.
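If you do need extra PHP extensions, a Dockerfile extending the Composer image is the usual starting point. A minimal sketch, assuming the official composer image (which is built on the official PHP image); pdo_mysql is just an example extension:

```dockerfile
FROM composer:latest
# The official composer image derives from the official PHP image,
# so the stock docker-php-ext-install helper is available.
RUN docker-php-ext-install pdo_mysql
```

Build it once with `docker build -t my-composer .` and substitute `my-composer` for `composer` in the one-liner above.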

Git is cool, for reasons I won't go into here.

It doesn't deal very well with very large files, but that's fine; with things like git-annex or git-lfs, it's possible to handle very large files anyway.

But what if you've added a file to git-lfs which didn't need to be there? Let's say you installed git-lfs and told it to track all *.zip files, but then it turned out that some of those files were really small, and that the extra overhead of tracking them in git-lfs is causing a lot of grief with your users. What do you do now?

With git-annex, the solution is simple: you just run git annex unannex <filename>, and you're done. You may also need to tell the assistant to no longer automatically add that file to the annex, but beyond that, all is well.

With git-lfs, this works slightly differently. It's not much more complicated, but it's not documented in the man page. The naive way to do that would be to just run git lfs untrack; but when I tried that it didn't work. Instead, I found that the following does work:

  • First, edit your .gitattributes files, so that the file which was (erroneously) added to git-lfs is no longer matched against a git-lfs rule in your .gitattributes.
  • Next, run git rm --cached <file>. This will tell git to mark the file as removed from git, but crucially, will retain the file in the working directory. Dropping the --cached will cause you to lose those files, so don't do that.
  • Finally, run git add <file>. This will cause git to add the file to the index again; but as it's now no longer being smudged up by git-lfs, it will be committed as the normal file that it's supposed to be.
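The three steps above can be sketched in a throwaway repository. No git-lfs needs to be installed to demonstrate the index dance itself (without the lfs filter configured, git treats the attribute as a no-op); with git-lfs installed, the same commands apply to a real repo:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
# A repo where *.zip was (erroneously) matched by a git-lfs rule:
echo '*.zip filter=lfs diff=lfs merge=lfs -text' > .gitattributes
echo 'payload' > small.zip
git add .gitattributes small.zip
# Step 1: stop matching the file against the git-lfs rule.
> .gitattributes
# Step 2: remove the file from the index, keeping the working copy.
git rm --cached -q small.zip
# Step 3: re-add it; with no lfs rule it is staged as a plain blob.
git add small.zip
git ls-files --stage
```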

Mastonaut’s log, tootdate 10. We started out by travelling on board of Being the largest one, we met with people from all over the fediverse. Some we could understand, others we couldn’t. Those were interesting days, I encountered a lot of people fleeing from other places to feel free and be themselves, while others were simply enjoying the ride. It wasn’t until we encountered the Pawoo, who turned out to have peculiar tastes when it comes to imagery, that the order in the fediverse got disturbed. But, as we can’t expect to get freedom while restricting others’, I fetched the plans to build my own instance. Ready to explore the fediverse and its inhabitants on my own, I set out on an exciting journey.

As I do not own a server myself, and still had $55 credit on Digital Ocean, I decided to set up a simple $5 Ubuntu 16.04 droplet to get started. This setup assumes you’ve got a domain name, and I will even show you how to run Mastodon on a subdomain while identifying on the root domain. I suggest following the initial server setup guide to make sure you get started the right way. Once you’re all set, grab a refreshment and connect to your server through SSH.

Let’s start by ensuring we have everything we need to proceed. There are a few dependencies to run Mastodon. We need docker to run the different applications and tools in containers (easiest approach) and nginx to expose the apps to the outside world. Luckily, Digital Ocean has an insane amount of up-to-date documentation we can use. Follow these two guides and report back.

At this point, we’re ready to grab the source code. Do this in your location of choice.

git clone

Change to that location and checkout the latest release (1.2.2 at the time of writing).

cd mastodon
git checkout 1.2.2

Now that we’ve got all this set up, we can build our containers. There’s a useful guide made by the Mastodon community that I suggest you follow. Before we make this available to the outside world, we want to tweak our .env.production file to configure the instance. There are a few keys in there we need to adjust, and some we could adjust. In my case, Mastodon runs as a single user instance, meaning only one user is allowed in. Nobody can register and the home page redirects to that user’s profile instead of the login page. Below are the settings I adjusted; remember, I run Mastodon on a subdomain, but my user identifies on the root domain. The config changes below illustrate that behavior. If you have no use for that, just leave the WEB_DOMAIN key commented out. If you do need it, however, you’ll still have to add a redirect rule for your root domain that points https://rootdomain/.well-known/host-meta to https://subdomain.rootdomain/.well-known/host-meta. I added a rule on Cloudflare to achieve this, but any approach will do.

# Federation

# Use this only if you need to run mastodon on a different domain than the one used for federation.
# Do not use this unless you know exactly what you are doing.

# Registrations
# Single user mode will disable registrations and redirect frontpage to the first profile

As we can’t run a site without configuring SSL, we’ll use Let’s Encrypt to secure nginx. Follow the brilliant guide over at Digital Ocean and report back for the last part. Once setup, we need to configure nginx (and the DNS settings for your domain) to make Mastodon available for the world to enjoy. You can find my settings here. Just make sure to adjust the key file’s name and DNS settings. As I redirect all http traffic to https using Cloudflare, I did not bother to add port 80 to the config, be sure to add it if needed.

Alright, we’re ready to start exploring the fediverse! Make sure to restart nginx to apply the latest settings using sudo service nginx restart and update the containers to reflect your settings via docker-compose up -d. If all went according to plan, you should see your brand new shiny instance on your domain name. Create your first user and get ready to toot! In case you did not bother to add an smtp server, manually confirm your user:

docker-compose run --rm web rails mastodon:confirm_email

And make sure to give yourself ultimate admin powers to be able to configure your instance:

docker-compose run --rm web rails mastodon:make_admin USERNAME=alice

Updating is a straightforward process too. Fetch the latest changes from the remote, checkout the tag you want and update your containers:

docker-compose stop
docker-compose build
docker-compose run --rm web rails db:migrate
docker-compose run --rm web rails assets:precompile
docker-compose up -d

Happy tooting!

April 24, 2017

Wim made a stir in the land of the web. Good for Wim that he rid himself of the shackles of social media.

But how will we bring a generation of people, who are now more or less addicted to social media, to a new platform? And what should that platform look like?

I’m not an anthropologist, but I believe it is human nature, when organizing around new concepts and techniques, that we humans start central and monolithic. Then we fine-tune it. We figure out that the central organization and its monolithic implementation become a limiting factor. Then we decentralize it.

The next step for all those existing and potential so-called ‘online services’ is to become fully decentralized.

Every family or home should have its own IMAP and SMTP server. Should that be JMAP instead? Probably. But that ain’t the point. The fact that every family or home will have its own, is. For chat, XMPP’s s2s is like SMTP. Postfix is an implementation of SMTP like ejabberd is for XMPP’s s2s. We have Cyrus, Dovecot and others for IMAP, which is the c2s of course. And soon we’ll probably have JMAP, too. Addressability? IPv6.

Why not something like this for social media? For the next online appliance, too? Augmented reality worlds can be negotiated in a distributed fashion. Why must Second Life necessarily be centralized? Surely we can run Linden Lab’s server software locally.

Simple, because money is not interested in anything non-centralized. Not yet.

In other news, the Internet stopped working truly well ever since money became its driving factor.

ps. The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think different. Quote by Friedrich Nietzsche.

April 21, 2017

I deleted my Facebook account because in the past three years, I barely used it. It’s ironic, considering I worked there. 1

More irony: I never used it as much as I did when I worked there.

Yet more irony: a huge portion of my Facebook news feed was activity by a handful of Facebook employees. 2

No longer useful

I used to like Facebook because it delivered on its original mission:

Facebook helps you connect and share with the people in your life.

They’re clearly no longer true to that mission. 3

When I joined in November 2007, the news feed chronologically listed status updates from your friends. Great!

Since then, they’ve done every imaginable thing to increase time spent, also known as the euphemistic “engagement”. They’ve done this by surfacing friends’ likes, suggested likes, friends’ replies, suggested friends, suggested pages to like based on prior likes, and of course: ads. Those things are not only distractions, they’re actively annoying.

Instead of minimizing time spent so users can get back to their lives, Facebook has sacrificed users at the altar of advertising: more time spent = more ads shown.

No longer trustworthy

An entire spectrum of concerns to choose from, along two axes: privacy and walled garden. And of course, the interesting intersection of minimized privacy and maximized walled gardenness: the filter bubble.

If you want to know more, see Vicki Boykis’ well-researched article.

No thanks

Long story short: Facebook is not for me anymore.

It’s okay to not know everything that’s been going on. It makes for more interesting conversations when you do get to see each other again.

The older I get, the more I prefer the one communication medium that is not a walled garden, that I can control, back up and search: e-mail.

  1. To be clear: I’m still very grateful for that opportunity. It was a great working environment and it helped my career! ↩︎

  2. Before deleting my Facebook account, I scrolled through my entire news feed — apparently this time it was seemingly endless, and I stopped after the first 202 items in it, by then I’d had enough. Of those 202 items, 58 (28%) were by former or current Facebook employees. 40% (81) of it was reporting on a like or reply by somebody — which I could not care less about 99% of the time. And the remainder, well… the vast majority of it is mildly interesting at best. Knowing all the trivia in everybody’s lives is fatiguing and wasteful, not fascinating and useful. ↩︎

  3. They’ve changed it since then, to: give people the power to share and make the world more open and connected. ↩︎

I published the following diary on “Analysis of a Maldoc with Multiple Layers of Obfuscation“.

Thanks to our readers, we often get interesting samples to analyze. This time, Frederick sent us a malicious Microsoft Word document called “Invoice_6083.doc” (which was delivered in a zip archive). I had a quick look at it and it was interesting enough for a quick diary… [Read more]

[The post [SANS ISC] Analysis of a Maldoc with Multiple Layers of Obfuscation has been first published on /dev/random]

The past weeks have been difficult. I'm well aware that the community is struggling, and it really pains me. I respect the various opinions expressed, including opinions different from my own. I want you to know that I'm listening and that I'm carefully considering the different aspects of this situation. I'm doing my best to progress through the issues and support the work that needs to happen to evolve our governance model. For those that are attending DrupalCon Baltimore and want to help, we just added a community discussions track.

There is a lot to figure out, and I know that it's difficult when there are unresolved questions. Leading up to DrupalCon Baltimore next week, it may be helpful for people to know that Larry Garfield and I are talking. As members of the Community Working Group reported this week, Larry remains a member of the community. While we figure out Larry's future roles, Larry is attending DrupalCon as a regular community member with the opportunity to participate in sessions, code sprints and issue queues.

As we are about to kick off DrupalCon Baltimore, please know that my wish for this conference is for it to be everything you've made it over the years; a time for bringing out the best in each other, for learning and sharing our knowledge, and for great minds to work together to move the project forward. We owe it to the 3,000 people who will be in attendance to make DrupalCon about Drupal. To that end, I ask for your patience towards me, so I can do my part in helping to achieve these goals. It can only happen with your help, support, patience and understanding. Please join me in making DrupalCon Baltimore an amazing time to connect, collaborate and learn, like the many DrupalCons before it.

(I have received a lot of comments and at this time I just want to respond with an update. I decided to close the comments on this post.)

April 20, 2017

The Internet Archive is a well-known website, more precisely for its “WaybackMachine” service, which allows you to search for and display old versions of websites. The current Alexa ranking is 262, which makes it a “popular and trusted” website. Indeed, as I explained in a recent SANS ISC diary, whitelists of websites are very important for attackers! The phishing attempt that I detected was also using the URL shortener (position 9380 in the Alexa list).

The phishing is based on a DHL notification email. The mail has a PDF attached to it:

DHL Notification

This PDF has no malicious content and is therefore not blocked by antispam/antivirus. The link “Click here” points to a short URL:


Note that HTTPS is used, which already means the traffic is not inspected by many security solutions.

Tip: if you append a “+” at the end of the URL, the shortener will not directly redirect you to the hidden URL but will display an information page where you can read it!

The URL behind the short URL is:

hxxps://

The shortener also maintains statistics about the visitors:

Statistics

It’s impressive to see how many people visited the malicious link. The phishing campaign has also been active since the end of March. Thank you for this useful information!

This URL returns the following HTML code:

<META http-equiv="refresh" content="0;URL=data:text/html;base64, ... (base64 data) ... "
<body bgcolor="#fffff">
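Nothing magic is happening in that tag: the data: URL body is plain base64, so the next-stage HTML can be recovered offline. A minimal sketch with a harmless stand-in payload (the real sample’s data is of course not reproduced here):

```shell
# A stand-in meta-refresh tag with a benign payload; extract the
# base64 part and decode it, as you would with the real sample.
tag='<META http-equiv="refresh" content="0;URL=data:text/html;base64,PGgxPmhpPC9oMT4=">'
b64=$(printf '%s' "$tag" | sed 's/.*base64,//; s/".*//')
decoded=$(printf '%s' "$b64" | base64 -d)
echo "$decoded"   # the hidden HTML
```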

The refresh META tag displays the decoded HTML code:

<script language="Javascript">

The deobfuscated script displays the following page:

DHL Phishing Page

The pictures are stored on a remote website but it has already been cleaned:


Stolen data are sent to another website: (This one is still alive)


The question is: how was this phishing page stored there? If you visit the upper level of the malicious URL, you find this: Files

Go again to the upper directory (‘../’) and you will find the owner of this page: alextray. This guy has many phishing pages available:

alextray's Projects

Indeed, the Internet Archive website allows registered users to upload content, as stated in the FAQ. If you search for ‘’ on Google, you will find a lot of references to multiple pieces of content (most of them harmless) but on VT, there are references to malicious content hosted on

Here is the list of phishing sites hosted by “alextray”. You can use them as IOC’s:

hxxps:// (Yahoo!)
hxxps:// (Microsoft)
hxxps:// (DHL)
hxxps:// (Adobe)
hxxps://[pk[.html (Microsoft)
hxxps:// (TNT)
hxxps:// (TNT)
hxxps:// (Adobe)
hxxps:// (DHL)
hxxps:// (Microsoft)
hxxps:// (Microsoft)
hxxps:// (Yahoo!)
hxxps:// (Microsoft Excel)
hxxps:// (Adobe)
hxxps:// (DHL)
hxxps:// (Google Drive)
hxxps:// (Microsoft)
hxxps:// (Microsoft)
hxxps:// (DHL)
hxxps:// (DHL)
hxxps:// (DHL)
hxxps:// (Yahoo!)
hxxps:// (Microsoft)
hxxps:// (Yahoo!)
hxxps:// (DHL)
hxxps:// (Microsoft)
hxxps:// (DHL)
hxxps:// (DHL)
hxxps:// (Microsoft)
hxxps:// (Microsoft)
hxxps:// (Yahoo!)
hxxps:// (Microsoft)
hxxps:// (Microsoft)
hxxps:// (DHL)
hxxps:// (Adobe)
hxxps:// (Google)
hxxps:// (Microsoft)
hxxps:// (Microsoft)
hxxps:// (Microsoft)
hxxps:// (Yahoo!)
hxxps:// (Yahoo!)
hxxps://;kfd;k.html (Yahoo!)
hxxps:// (TNT)
hxxps:// (Microsoft)
hxxps:// (Google)
hxxps:// (Microsoft)
hxxps:// (Microsoft)
hxxps:// (Adobe)
hxxps:// (Microsoft)
hxxps:// (Microsoft)
hxxps:// (Microsoft)
hxxps:// (Microsoft)
hxxps:// (Yahoo!)
hxxps:// (Microsoft)
hxxps:// (Microsoft)
hxxps:// (DHL)
hxxps:// (Google)
hxxps://;.html (Microsoft)
hxxps:// (Microsoft)
hxxps:// (Microsoft)
hxxps:// (DHL)
hxxps:// (Microsoft)
hxxps:// (Yahoo!)
hxxps:// (Microsoft)
hxxps:// (Microsoft)
hxxps:// (Microsoft)
hxxps:// (Microsoft)
hxxps:// (Microsoft)
hxxps:// (DHL)
hxxps://;.html (Microsoft)
hxxps:// (Microsoft)
hxxps:// (Microsoft)

[The post Abused to Deliver Phishing Pages has been first published on /dev/random]

I published the following diary on “DNS Query Length… Because Size Does Matter“.

In many cases, DNS remains a goldmine to detect potentially malicious activity. DNS can be used in multiple ways to bypass security controls. DNS tunnelling is a common way to establish connections with remote systems. It is often based on “TXT” records used to deliver the encoded payload. “TXT” records are also used for good reasons, like delivering SPF records but, too many TXT DNS request could mean that something weird is happening on your network… [Read more]

[The post [SANS ISC] DNS Query Length… Because Size Does Matter has been first published on /dev/random]

Too many MS Office 365 apps

Update 20160421:
– update for MS Office 2016.
– fix configuration.xml view on WordPress.

If you install Microsoft Office through click-to-run you’ll end up with the full suite installed. You can no longer select which applications you want to install. That’s kind of OK because you pay for the complete suite. Or at least the organisation (school, work, etc.) offering the subscription does. But maybe you are like me and you dislike installing applications you don’t use. Or even more like me: you’re a Linux user with a Windows VM you boot once in a while out of necessity. And unused applications in a VM residing on your disk are *really* annoying.

The Microsoft documentation to remove the unused applications (Access as a DB? Yeah, right…) wasn’t very straightforward, so I post what worked for me after the needed trial-and-error routines. This is a small howto:

    • Install the Office Deployment Toolkit (downloads for MS Office 2013/2016). The installer asks for an installation location. I put it in C:\Users\nxadm\OfficeDeployTool (change the username accordingly). If you’re short on space (or in a VM), you can put it in a mounted share.
    • Create a configuration.xml with the applications you want to add. The file should reside in the directory you chose for the Office Deployment Toolkit (e.g. C:\Users\nxadm\OfficeDeployTool\configuration.xml) or you should refer to the file with its full path name. You can find the full list of AppIDs here (more info about other settings). Add or remove ExcludeApps as desired. My configuration file is as follows (WordPress removes the XML code below, hence the image):
    • If you run the 64-bit Office version change OfficeClientEdition="32" to OfficeClientEdition="64".
    • Download the office components. Type in a cmd box:
      C:\Users\\OfficeDeployTool>setup.exe /download configuration.xml
    • Remove the unwanted applications:
      C:\Users\\OfficeDeployTool>setup.exe /configure configuration.xml
    • Delete (if you want) the Office Deployment Toolkit directory. Certainly the cached installation files in the “Office” directory take a lot of space.
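Since WordPress ate the XML, here is a minimal configuration.xml sketch of the kind the steps above expect. The product ID, language and excluded apps are illustrative; take the real AppIDs from the list linked above:

```xml
<Configuration>
  <Add SourcePath="C:\Users\nxadm\OfficeDeployTool" OfficeClientEdition="32">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us" />
      <!-- One ExcludeApp element per application you do not want. -->
      <ExcludeApp ID="Access" />
      <ExcludeApp ID="Publisher" />
      <ExcludeApp ID="OneNote" />
    </Product>
  </Add>
</Configuration>
```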

Enjoy the space and faster updates. If you are using a VM don’t forget to defragment and compact the Virtual Hard Disk to reclaim the space.

Filed under: Uncategorized Tagged: Click-to-Run, MS Office 365, VirtualBox, vm, VMWare, Windows

April 19, 2017

I published the following diary on “Hunting for Malicious Excel Sheets“.

Recently, I found a malicious Excel sheet which contained a VBA macro. One particularity of this file was that useful information was stored in cells. The VBA macro read and used them to download the malicious PE file. The Excel file looked classic, asking the user to enable macros… [Read more]

[The post [SANS ISC] Hunting for Malicious Excel Sheets has been first published on /dev/random]


I started creating a DNS monitoring & validation solution called DNS Spy, and I'm happy to report: it has launched!

It's been in private beta since 2016 and in public beta since March 2017. After almost 6 months of feedback, features and bugfixes, I think it's ready for you to kick the tires.

What's DNS Spy?

In case you haven't been following me the last few months, here's a quick rundown of DNS Spy.

  • Monitors your domains for any DNS changes
  • Alerts you whenever a record has changed
  • Keeps a detailed history of each DNS record change
  • Notifies you of invalid or RFC-violating DNS configs
  • Rates your DNS configurations with a scoring system
  • Is free for 1 monitored domain
  • Provides a point-in-time back-up of all your DNS records
  • Verifies that all your nameservers are in sync
  • Supports DNS zone transfer (AXFR)

There are many more features, like CNAME resolving, public domain scanning, offline & change notifications, ... that all make DNS Spy what it is: a reliable & stable DNS monitoring solution.

A new look & logo

The beta design of DNS Spy was built using a Font Awesome icon and some copy/paste bootstrap templates, just to validate the idea. I've gotten enough feedback to feel confident that DNS Spy adds actual value, so it was time to make the look & feel match that sentiment.

This was the first design:

Here's the new & improved look.

It's got a brand new look, a custom logo and a way to publicly scan & rate your domain configuration.

Public scoring system

You've probably heard of tools like SSL Labs' test & Security Headers: free webservices that allow you to rate and check your server configurations, each focused on its own domain.

From now on, DNS Spy also has such a feature.

Above is the DNS Spy scan report for, which has a rock-solid DNS setup.

We rate things like the connectivity (IPv4 & IPv6, records synced, ...), performance, resilience & security (how many providers, domains, DNSSEC & CAA support, ...) & DNS records (how is SPF/DMARC set up, are your TTLs long enough, do your NS records match your nameservers, ...).
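One of those signals can be eyeballed by hand: comparing the SOA serial that each advertised nameserver returns is a quick "are my records synced" check. A sketch using dig, with example.com as a stand-in domain:

```shell
# For each NS of the domain, print the SOA serial it serves; if the
# serials differ, the nameservers are out of sync.
domain=example.com
for ns in $(dig +short NS "$domain"); do
  serial=$(dig +short SOA "$domain" @"$ns" | awk '{print $3}')
  printf '%-25s %s\n' "$ns" "$serial"
done
```

This only covers the sync signal; the scoring above aggregates many more checks than a one-liner can.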

The aim is to have DNS Spy become the SSL Labs of DNS configurations. To make that a continuous improvement, I encourage any feedback from you!

If you're curious how your domain scores, scan it via

Help me promote it?

Next up, of course, is promotion. There are a lot of ways to promote a service, and advertising is surely going to be one of them.

But if you've used DNS Spy and like it, or if you've scanned your domain and are proud of your results, feel free to spread the word about DNS Spy to your friends, coworkers, online followers, ... You'd have my eternal gratitude! :-)

DNS Spy is available on or via @dnsspy on Twitter.

The post DNS Spy has launched! appeared first on

April 18, 2017

You may have heard of Mastodon, the new social network competing with Twitter. Its advantages? A per-post limit raised from 140 to 500 characters, and an approach oriented toward community and mutual respect, where Twitter has too often been the scene of cyber-harassment.

But one of Mastodon’s major particularities is decentralization: it is not one single service belonging to a company, but a network, like email.

While in theory anyone can create their own Mastodon instance, most of us will join existing ones. I personally joined, the instance run by La Quadrature du Net, because I trust the association’s durability and technical competence and, above all, I am aligned with its values of neutrality and freedom of expression. I also recommend, which is administered by Framasoft.

But you will find a plethora of instances: from those of the French and Belgian Pirate parties to themed instances. There are even paid instances and, why not, there could one day be instances with ads.

The beauty of all this lies, of course, in the choice. The La Quadrature du Net and Framasoft instances are open and free, so I advise making a small recurring donation to the association of €2, €5 or €10 per month, depending on your means.

Mastodon is decentralized? Actually, we should rather say “distributed”. Five years ago, I denounced the problems of decentralized/distributed solutions, the main one being that you are at the mercy of the goodwill, or the blunders, of your instance’s administrator.

It has to be said that Mastodon has technically solved none of these problems. But it seems to be creating a fine community dynamic that is a pleasure to see. Unlike its ancestor, instances have multiplied quickly. Conversations have taken off and usages have appeared spontaneously: welcoming newcomers, following people with few followers to motivate them, transparently discussing which good practices to adopt, using a CW (Content Warning) to hide potentially inappropriate messages, debating moderation rules.

All this energy gives the impression of a space apart, of a freedom of discussion far from the omnipresent and omniscient advertising surveillance inseparable from the tools of Facebook, Twitter or Google.

Incidentally, one user suggested that on Mastodon we should not speak of “users” but of “people”.

In a previous article, I pointed out that social networks are the beginnings of a global consciousness of humanity. But as Neil Jomunsi points out, the medium is an inseparable part of the message being developed. Do we really want humanity to be represented by an advertising platform that seeks to exploit its users’ brain time?

Mastodon is therefore, in my view, the expression of a real need, of a lack. A part of our humanity is stifled by advertising, consumption and conformism, and is looking for a space in which to express itself.

Could Mastodon then be the first popular distributed social network? Will it manage to convince less technical users and stand out, so as not to be “yet another free clone” (as Diaspora unfortunately is for Facebook)?

Will Mastodon last? As long as there are volunteers to run instances, Mastodon will continue to exist without worrying about stock prices, governments, the laws of any particular country or the wishes of investors. The same cannot be said of Facebook or Twitter.

But above all, a wind of fresh utopia blows over Mastodon, an air of naive freedom, a feeling of collaborative humanity where the quality of the exchanges supersedes the race for audience. It feels good.

Don’t hesitate to join us, to read Funambuline’s user guide and to post your first “toot” introducing your interests. If you say you come from me ( ), I will “boost” you (the equivalent of a retweet) and the community will suggest people for you to follow.

In the end, it matters little whether Mastodon succeeds or disappears in a few months. We must keep trying, testing and experimenting until it works. If it’s not Diaspora or Mastodon, it will be the next one. Our global consciousness, our expression and our exchanges deserve better than being mere inserts between two ads on a platform subject to laws over which we have no hold.

Mastodon is a social network. Twitter and Facebook are advertising networks. Let’s not be fooled any longer.


Photo by Daniel Mennerich.

Enjoyed this read? Support the author on Tipeee, Patreon, Paypal or Liberapay. Even a symbolic donation makes all the difference! Then let’s meet again on Facebook, Medium, Twitter or Mastodon.

This text is published under the CC-By BE licence.

April 17, 2017

April 14, 2017

After a nice evening with some beers and an excellent dinner with infosec peers, here is my wrap-up for the second day. Coffee? Check! Wireless? Check! Twitter? Check!

As usual, the day started with a keynote. Window Snyder presented “All Fall Down: Interdependencies in the Cloud”. Window is the CSO of Fastly and, like many companies today, Fastly relies on many services running in the cloud. This reminds me of the Amazon S3 outage and their dashboard that was not working because it was relying on… S3! Today, everything is interconnected and the overall security depends on the complete chain. To summarize: you use a cloud service to store your data, you authenticate to it using another cloud service, you analyse your data using a third one, etc. If one is failing, we can face a domino effect. Many companies have statements like “We take security very seriously” but they don’t invest. Window reviewed some nightmare stories where security completely failed, like the RSA token compromise in 2011, DigiNotar in 2012 or Target in 2013. But sometimes dependencies are very simple, like DNS… What if your DNS is out of service? All your infrastructure is down. DNS remains an Achilles’ heel for many organizations. The keynote was interesting but very short! Anyway, it meant more time for coffee…

The first regular talk was maybe the most expected: “Chasing Cars: Keyless Entry System Attacks”. The talk was promoted via social networks before the conference. I was really curious and not disappointed by the result of the research! Yingtao Zeng, Qing Yang & Jun Li presented their work on keyless car entry attacks. Oddly, the guy responsible for most of the research did not speak English; he spoke in Chinese to his colleague, who translated into English. Because users are looking for more convenience (and because it’s “cool”), modern cars are not using RKE (remote keyless entry) but PKE (passive keyless entry). They started with a technical description of the technology that many of us use daily:

Passive key entry system

How to steal the car? How could we use the key in the car owner’s pocket? The idea was to perform a relay attack. The signal of the key is relayed from the owner’s pocket to the attacker sitting next to the car. Keep in mind that cars require you to press the button on the door or to use a contact sensor to enable communications with the key. A wake-up is sent to the key, which unlocks the doors. The relay attack scenario looks like this:

Relay attack scenario

During this process, there are time constraints. They showed a nice demo of a guy leaving his car, followed by attacker #1 who captures the signal and relays it to attacker #2, who unlocks the car.

Relay devices

The current range to access the car owner’s key is ~2m. Between the two relays, up to 300m! What about the cost to build the devices? Approximately €20 (the cost of the main components)! And in a real case? Once the car is stolen and the engine running, it will only warn that the key is not present, but it won’t stop! The only limit is running out of gas 🙂 Countermeasures are: use a Faraday cage or bag, remove the battery, or enforce stricter timing constraints.

They are still improving the research and are now investigating how to relay the signal over TCP/IP (read: the wild Internet). [Slides are available here]

My next choice was to follow “Extracting All Your Secrets: Vulnerabilities in Android Password Managers” presented by Stephan Huber, Steven Arzt and Siegfried Rasthofer. Passwords remain a pain point for most people. For years, we have asked users to use strong passwords and to change them regularly. The goal here was not to debate how passwords must be managed but, as we recommend users to use password managers to handle the huge amount of passwords, to ask: are they really safe? An interesting study demonstrated that, on average, users have to deal with 90 passwords. The research focused on Android applications. First of all, most of them claim to provide “banking level” or “military grade” encryption. True or false? Well, encryption is not the only protection for passwords: is it possible to steal them using alternative attack scenarios? Guess what? They chose the top password managers by number of downloads on the Google Play store. They all provide standard features like autofill, a custom browser, comfort features, secure sync and, of course, confidential password storage. (Important note: all the attacks have been performed on non-rooted devices.)

The first attack scenario was the manual filling attack: manual filling uses the clipboard. First problem: any application can read from the clipboard without any specific permission, so a clipboard sniffer app could be used to steal any password. The second scenario was the automatic filling attack. How does it work? Applications cannot communicate directly due to the sandboxing system; they have to use the “Accessibility service” (normally intended for disabled people). An issue may arise if the application doesn’t check the complete app name. Example: make an app whose name also starts with “com.twitter”, like “com.twitter.twitterleak”. The next attack is based on the backup function: back up, convert the backup to .tar, untar it and get the master password in plain text in KeyStorage.xml. Browsers don’t provide APIs to perform autofill, so developers create a custom browser. But it runs in the same sandbox. Cool! Can we abuse this? These browsers are based on the WebView API, which supports access to files… file:///data/package/…./passwords_pref.xml. Where is the key? In the source code, split in two 🙂 More fails reported by the speakers:

  • Custom crypto (“because AES isn’t good enough?”)
  • AES used in ECB mode for db encryption
  • Bundled browsers do not consider subdomains in form fields
  • Data leakage in browsers
  • Custom transport security
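The backup attack described above is easy to try yourself. A rough sketch (the package name is a placeholder; it only works for apps that do not set android:allowBackup="false" and for unencrypted backups):

```shell
# Pull an ADB backup of the target app (package name is hypothetical)
adb backup -f backup.ab com.example.passwordmanager

# An unencrypted .ab file is a short text header followed by a zlib-compressed tar,
# so skip the header and inflate the rest:
dd if=backup.ab bs=24 skip=1 | python3 -c \
  "import sys,zlib;sys.stdout.buffer.write(zlib.decompress(sys.stdin.buffer.read()))" \
  > backup.tar
tar xf backup.tar
# ...then look through the extracted shared_prefs (e.g. KeyStorage.xml)
```

This requires a connected Android device with USB debugging enabled, and the 24-byte header offset applies to the unencrypted backup format.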

How to improve the security of password managers:

  • Android provides a keystore, use it!
  • Use a key derivation function
  • Avoid hardcoded keys
  • Do not abuse the account manager

The complete research is available here. [Slides are available here]

After lunch, Antonios Atlasis presented “An Attack-in-Depth Analysis of Multicast DNS and DNS Service Discovery”. The objective was to perform a threat analysis and to release a tool to perform tests on a local network. The starting point was the RFCs and identifying the potential risks. mDNS & DNS-SD are used for zero-conf networking, for example by the AppleTV, the Google Chromecast, home speakers, etc. mDNS (RFC 6762) provides DNS-like operations but on the local network (it uses UDP port 5353). DNS-SD (RFC 6763) allows clients to discover instances of a specific service (using standard DNS queries). mDNS uses the “.local” TLD via the multicast addresses 224.0.0.251 & FF02::FB. Antonios made a great review of the problems associated with these protocols. The possible attacks are:

  • Reconnaissance (when you search for a printer, all the services are returned; this is useful to gather information about your victim, and it's easy to get info without scanning). I liked this.
  • Spoofing
  • DoS
  • Remote unicast interaction

A vulnerable mDNS implementation can be used to perform a DoS attack from remote locations. While most modern OS are protected, some embedded systems still use vulnerable Linux implementations. Interesting: close to 1M devices are listening on port 5353 on the Internet (Shodan). Not all of them are vulnerable, but chances are good. During the demos, Antonios used the tool he developed. [Slides are available here]
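If you want a quick feel for the reconnaissance step without any dedicated tool, a one-shot DNS-SD enumeration can be done with plain `dig` (assuming it is installed and you are on a LAN with mDNS-capable devices; what comes back depends entirely on what is present):

```shell
# Query the mDNS multicast group (224.0.0.251, UDP port 5353) for the
# special meta-service that lists all advertised DNS-SD service types
dig @224.0.0.251 -p 5353 -t PTR _services._dns-sd._udp.local +short

# Then drill down into one of the returned types, e.g. IPP printers
dig @224.0.0.251 -p 5353 -t PTR _ipp._tcp.local +short
```

This is exactly the "get info without scanning" point from the talk: no port scan, just two multicast queries.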

Then, Patrick Wardle presented “OverSight: Exposing Spies on macOS”. Patrick gave a quick talk yesterday in the CommSec track; it was very nice, so I expected some nice content today too. This time the topic was malware on macOS that abuses the microphone and webcam. Why do bad guys use webcams? To blackmail victims. Why do governments use the microphone? To spy. From a developer point of view, how do you access the webcam? Via the AVFoundation framework. Sandboxed applications must have a specific entitlement to access the camera, but non-sandboxed applications do not require this entitlement. videoSnap is a nice example of AVFoundation use; the pending companion tool is audioSnap for the microphone. The best way to protect your webcam is to put a sticker on it. Note that it is also possible to restrict access to it via file permissions.

What about malware that uses the mic/cam? (Note: the LED will always be on.) Patrick reviewed some of them, like yesterday:

  • The Hackingteam’s implant
  • Eleanor
  • Mokes
  • FruitFly

To protect against abusive access to the webcam & microphone, Patrick developed a nice tool called OverSight. Version 1.1 was just released with new features (better support for the mic, whitelisting of apps which can access resources). The talk ended with a nice case study: Shazam was reported as listening to the mic all the time (even when disabled). This was reported to Patrick by an OverSight user, and he decided to have a deeper look. He discovered that it’s not a bug but a feature and contacted Shazam. For performance reasons they use continuous recording on iOS, and a shared SDK is used with macOS. Malicious or not? “OFF” in fact means “stop processing the recording”, but it doesn’t stop the recording.

Other tools developed by Patrick:

  • KnockKnock
  • BlockBlock
  • RansomWhere (detect encryption of files and high number of created files)

It was a very cool talk with lots of interesting information and tips to protect your macOS computers! [Slides are available here]

The last talk on my list was “Is There a Doctor in The House? Hacking Medical Devices and Healthcare Infrastructure”, presented by Anirudh Duggal. Usually, such talks present vulnerabilities in the devices that we can find everywhere in hospitals, but this talk focused on something completely different: the HL7 2.x protocol. Hospitals have devices (monitors, X-ray, MRI, …), networks, protocols (DICOM, HL7, FHIR, HTTP, FTP) and records (patients). HL7 is a messaging standard used by medical devices to achieve interoperability. Messages may contain patient info (PII), doctor info, patient visit details, allergies & diagnostics. Anirudh reviewed the different types of messages that can be exchanged, like “RDE” or “Pharmacy Order Message”. The common attacks are:

  • MITM (everything is in clear text)
  • Message source not validated
  • DoS
  • Fuzzing
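To see why MITM is so easy here, this is roughly what an HL7 v2.x admission message looks like on the wire (a hypothetical example with made-up values): pipe-delimited plain text, no authentication, no encryption.

```
MSH|^~\&|HIS|GENHOSP|LAB|GENHOSP|201705240830||ADT^A01|MSG00001|P|2.3
PID|1||123456^^^GENHOSP||DOE^JOHN||19700101|M|||1 MAIN ST^^AMSTERDAM
AL1|1||^PENICILLIN||RASH
```

Anyone on the path can read the patient identity (PID segment) and allergies (AL1 segment) directly, or inject a forged message, since the source is not validated.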

It is scary to see that such important information is exchanged with so little protection. How to improve? According to Anirudh, here are some ideas:

  • Validate message size
  • Enforce TLS
  • Input sanitization
  • Fault tolerance
  • Anonymization
  • Add consistency checks (checksum)

The future? HL7 will be replaced by FHIR, a lightweight HTTP-based API. I learned interesting stuff about this protocol… [Slides are available here]

The closing keynote was given by Natalie Silvanovich, who works on Google Project Zero. It was about the Chakra JavaScript engine. Natalie reviewed the code and discovered 13 bugs, now fixed. She started the talk with a deep review of how arrays work in the JavaScript engine. Arrays are very important in JS. They are simple but can quickly become complicated with arrays of arrays of arrays. Example:

var b = [ 1, "bob", {}, new RegExp() ];

The second part of the talk was dedicated to a review of the bugs she found during her research. I was a bit lost (the end of the day and not my preferred topic), but the work performed looked very nice.

The 2017 edition is now over. Besides the talks, the main room was full of sponsor booths with nice challenges, hackerspaces, etc. A great edition! See you next year, I hope!




[The post HITB Amsterdam 2017 Day #2 Wrap-Up has been first published on /dev/random]

Updating the VCSA is easy when it has internet access or when you can mount the update ISO. On a private network, VMware assumes you have a webserver that can serve the updaterepo files. In this article, we'll look at how to proceed when the VCSA is on a private network where internet access is blocked and no webserver is available. The VCSA and PSC contain their own webserver that can be used for an HTTP-based update. This procedure was tested on PSC/VCSA 6.0.

Follow these steps:

  • First, download the update repo zip (e.g. for 6.0 U3A, the filename is ) 
  • Transfer the updaterepo zip to a PSC or VCSA that will be used as the server. You can use Putty's pscp.exe on Windows or scp on Mac/Linux, but you'd have to run "chsh -s /bin/bash root" in the CLI shell before using pscp.exe/scp if your PSC/VCSA is set up with the appliancesh. 
    • chsh -s /bin/bash root
    • "c:\program files (x86)\putty\pscp.exe" VMware* root@psc-name-or-address:/tmp 
  • Change your PSC/VCSA root access back to the appliancesh if you changed it earlier: 
    • chsh -s /bin/appliancesh root
  • Make a directory for the repository files and unpack the updaterepo files there:
    • mkdir /srv/www/htdocs/6u3
    • chmod go+rx /srv/www/htdocs/6u3
    • cd /srv/www/htdocs/6u3
    • unzip /tmp/VMware-vCenter*
    • rm /tmp/VMware-vCenter*
  • Create a redirect using the HTTP rhttpproxy listener and restart it
    • echo "/6u3 local 7000 allow allow" > /etc/vmware-rhttpproxy/endpoints.conf.d/temp-update.conf 
    • /etc/init.d/vmware-rhttpproxy restart 
  • Create a /tmp/nginx.conf (I didn't save mine, but "listen 7000" is the key change from the default)
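Since I didn't save my /tmp/nginx.conf, the following is only a minimal reconstruction of what it could look like ("listen 7000" is the important part, as it matches the rhttpproxy endpoint created above; paths may differ per appliance version):

```nginx
worker_processes  1;
events { worker_connections  64; }
http {
    server {
        listen 7000;             # rhttpproxy forwards /6u3 requests here
        root   /srv/www/htdocs;  # serves /6u3/... from the unpacked repo
        autoindex on;
    }
}
```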
  • Start nginx
    • nginx -c /tmp/nginx.conf
  • Start the update via the VAMI: change the repository URL in the settings, use http://psc-name-or-address/6u3/ as the repository URL, then use "Check URL". 
  • Afterwards, clean up: 
    • killall nginx
    • cd /srv/www/htdocs; rm -rf 6u3

P.S. I personally tested this using a PSC as webserver to update both that PSC, and also a VCSA appliance.
P.P.S. VMware released an update for VCSA 6.0 and 6.5 on the day I wrote this. For 6.0, the latest version is U3B at the time of writing, while I updated to U3A.

April 13, 2017

I recently needed to start "playing" with OpenStack (working in an existing RDO setup), so I thought it would be a good idea to have a personal playground where I could deploy from scratch and then break/fix that setup.

At first sight, OpenStack looks impressive and "over-engineered", as it's complex and has zillions of modules to make it work. But when you dive into it, you understand that the choice is yours to make it complex or not. Yeah, that sentence can look strange, but I'll explain why.

First, you should write down your requirements, and only then look at the needed OpenStack components. For my personal playground, I just wanted a basic thing that would let me deploy VMs on demand in the existing network, directly using a bridge, as I want the VMs to be directly integrated into the existing network/subnet.

So just by looking at the mentioned diagram, we just need :

  • keystone (needed for the identity service)
  • nova (hypervisor part)
  • neutron (handling the network part)
  • glance (to store the OS images that will be used to create the VMs)

Now that I have my requirements and the list of needed components, let's see how to set up my PoC... The RDO project has good docs for this, including the Quickstart guide. You can follow that guide and, as everything is packaged/built/tested and delivered through the CentOS mirror network, have a working RDO/OpenStack all-in-one setup in minutes...

The only issue is that it doesn't fit my need, as it will set up unneeded components, and the network layout isn't the one I wanted either, as it would be based on Open vSwitch and other rules (so multiple layers I wanted to get rid of). The good news is that Packstack is in fact a wrapper tool around puppet modules, and it supports a lot of options to configure your PoC.

Let's assume that I want a PoC based on openstack-newton, and that my machine has two NICs: eth0 for the mgmt network and eth1 for the VMs network. You don't need to configure the bridge on the eth1 interface, as that will be done automatically by neutron. So let's follow the quickstart guide, but adapt the packstack command line:

yum install centos-release-openstack-newton -y
systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network
yum install -y openstack-packstack

Let's fix eth1 to ensure that it's started but without any IP on it :

sed -i 's/BOOTPROTO="dhcp"/BOOTPROTO="none"/' /etc/sysconfig/network-scripts/ifcfg-eth1
sed -i 's/ONBOOT="no"/ONBOOT="yes"/' /etc/sysconfig/network-scripts/ifcfg-eth1
ifup eth1

And now let's call packstack with the required options so that it uses a basic Linux bridge (and so no Open vSwitch), and instruct it to use eth1 for that mapping:

packstack --allinone --provision-demo=n --os-neutron-ml2-type-drivers=flat --os-neutron-ml2-mechanism-drivers=linuxbridge --os-neutron-ml2-flat-networks=physnet0 --os-neutron-l2-agent=linuxbridge --os-neutron-lb-interface-mappings=physnet0:eth1 --os-neutron-ml2-tenant-network-types=' ' --nagios-install=n 

At this stage we have the OpenStack components installed, and a /root/keystonerc_admin file that we can source for OpenStack CLI operations. We have instructed neutron to use linuxbridge, but we haven't (yet) created a network and a subnet tied to it, so let's do that now:

source /root/keystonerc_admin
neutron net-create --shared --provider:network_type=flat --provider:physical_network=physnet0 othernet
neutron subnet-create --name other_subnet --enable_dhcp --allocation-pool=start=x.x.x.x,end=x.x.x.x --gateway=x.x.x.x --dns-nameserver=x.x.x.x othernet

Before importing image(s) and creating instances, there is one thing left to do: instruct the dhcp_agent that metadata for cloud-init inside the VM will not be served from the traditional "router" inside OpenStack. And don't forget to let traffic (in/out) pass through the security group (see doc).

Just be sure to have enable_isolated_metadata = True in /etc/neutron/dhcp_agent.ini and then run systemctl restart neutron-dhcp-agent: from that point, cloud metadata will be served from DHCP too.

From that point you can just follow the quickstart guide to create projects/users, import images, and create instances, and/or do all this from the CLI too.
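For example, a minimal CLI session to upload a test image and boot a VM on the bridged network could look like this (the image, flavor and instance names are just examples, not from the guide):

```shell
source /root/keystonerc_admin

# Import a small test image (CirrOS) into glance
curl -O http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
openstack image create --disk-format qcow2 --container-format bare \
  --file cirros-0.3.5-x86_64-disk.img --public cirros

# Boot an instance directly on the flat "othernet" network created earlier
openstack server create --image cirros --flavor m1.tiny \
  --nic net-id="$(openstack network show othernet -f value -c id)" testvm
```

The instance then gets its IP from the neutron dhcp-agent on the allocation pool defined in the subnet.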

One last remark about linuxbridge in an existing network: as neutron will have a dhcp-agent listening on the bridge, the provisioned VMs will get an IP from the pool declared in the "neutron subnet-create" command. However (and I saw this when I added other compute nodes to the same setup), you'll have a potential conflict with an existing dhcpd instance on the same segment/network, so your VMs can potentially get their IP from your existing dhcpd instance and not from neutron. As a workaround, you can have your existing DHCP server ignore the MAC address range used by OpenStack, so that your VMs will always get their IP from the neutron dhcp. To do this, there are different options, depending on your local dhcpd instance:

  • for dnsmasq: dhcp-host=fa:16:3e:*:*:*,ignore (see doc)
  • for ISC dhcpd : "ignore booting" (see doc)

The default MAC address prefix for OpenStack VMs is indeed fa:16:3e:00:00:00 (see base_mac in /etc/neutron/neutron.conf, so that can be changed too)
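Concretely, the two exclusions could look like this (untested sketches; adapt the file paths and class name to your own setup):

```
# dnsmasq (e.g. /etc/dnsmasq.conf): never answer hosts in the neutron MAC range
dhcp-host=fa:16:3e:*:*:*,ignore

# ISC dhcpd (e.g. /etc/dhcp/dhcpd.conf): same idea with a class match on the
# first three bytes of the hardware address (byte 0 is the hardware type)
class "openstack-vms" {
    match if substring(hardware, 1, 3) = fa:16:3e;
    ignore booting;
}
```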

Those were some of my findings for my OpenStack PoC/playground. Now that I understand all this a little bit better, I'm currently working on some puppet integration, as there are official OpenStack puppet modules available that one can import to deploy/configure OpenStack (better than using packstack). But there are a lot of "yaks to shave" to get to that point, so surely a topic for a future blog post.

I’m back in Amsterdam for the 8th edition of the security conference Hack in the Box. Last year I was not able to attend, but I have been attending for a while (you can re-read all my wrap-ups here). What to say? It’s a very strong organisation: everything runs fine, with a good team dedicated to the attendees. This year, the conference was based on four(!) tracks: two regular ones, one dedicated to more “practical” presentations (HITB labs) and the last one dedicated to small talks (30-60 mins).

Elly van den Heuvel opened the conference with a small 15-minute introduction talk: “How prepared are we for the future?”. Elly works for the Dutch government at the “Cyber Security Council”. She gave some facts about the current security landscape, from the place of women in infosec (things are changing slowly) to the message that cyber-security matters for our daily life. For Elly, we are facing a revolution as big as the industrial revolution, maybe even bigger. Our goal as information security professionals is to build a cyber-secure future for the next generations. There are already nice worldwide initiatives like the CERTs or NIST and their guidelines. In companies, board members must take their responsibility for cyber-security projects (budgets & time must be assigned to them). Elly then declared the conference officially open 🙂

The first-day keynote was given by Saumil Shah and was titled “Redefining Defenses”. He started with a warning: this talk is disruptive, and… it was! Saumil began with a step back to the past and how security/vulnerabilities evolved: it started with servers, and today people are targeted. For years, we have implemented several layers of defence, all with the same result: every one of them can be bypassed. Keep in mind that there will always be new vulnerabilities, because products and applications get more and more features and become more complex. I really liked the comparison with the Die Hard movie: it’s the Nakatomi building, and we can walk through all the targets exactly like Bruce Willis travels through the building in the movie. Vendors invent new technologies to mitigate the exploits; there was a nice reference to the “Mitigator”. The next part of the keynote focused on the CISO’s daily job and the fight against auditors. A fact: “compliance is not security”. In 2001, the CIO position was split into CIO & CISO, but budgets remained assigned to the CIO as a “business enabler”. Today, we should have another split: the CISO position must be divided into CISO and COO (Compliance Officer), whose job is to defend against auditors. It was a great keynote, but the audience should have been more C-level people instead of “technical people”, who already agree with all the facts reviewed by Saumil. [Saumil’s slides are available here]

After the first coffee break, I had to choose between two tracks, and my first choice was already difficult: hacking femtocell devices or IBM mainframes running z/OS. Even if the second focused on a less known environment, mainframes are used in many critical operations, so I decided to attend that talk. Ayoub Elaassal is a pentester who focuses on this type of target. People still have an old idea of mainframes. The good old IBM 370 was a big success, but today the reality is different: modern mainframes are badass computers like the IBM zEC 13: 10TB of memory, 141 processors, cryptographic chips, etc. Who uses such computers? Almost every big company, from airlines to healthcare, insurance and finance (have a look at this nice gallery of mainframe consoles). Why? Because it’s powerful and stable. Many people (me first) don’t know much about mainframes: it’s not a web app, it uses a 3270 emulator over port 23, but we don’t know how it works. On top of the mainframe OS, IBM has an application layer called CICS (“Customer Information Control System”). For Ayoub, it looks like “a combination of Tomcat & Drupal before it was cool”. CICS is a very nice target because it is used a lot; Ayoub gave a nice comparison: worldwide, 1.2M requests/sec are performed using the CICS product, while Google reaches 200K requests/sec. Impressive! Before exploiting CICS, the first step was to explain how it works. The mainframe world is full of acronyms, not easy to understand immediately. But then Ayoub explained how he abused a mainframe. The first attack was to jailbreak CICS to get console access (just like finding the admin web page). Mainframes contain a lot of juicy information, so the next attack was to read sensitive files. Completed too! The next step was to pwn the device. CICS has a feature called “spool” functions: a spool is a dataset (or file) containing the output of a job. The idea: generate a dataset and send it to the job scheduler. 
Ayoub showed a demo of a reverse shell in REXX. Like DC trusts, you can have the same trust between mainframes and push code to another one: replace NODE(LOCAL) with NODE(WASHDC). If the spool feature is not enabled, there are alternative techniques, which were also reviewed. Finally, on to privilege escalation. There are three main levels: Special, Operations and Audit. Special can be considered the “root” level. Those levels are defined by a simple bit in memory: if you can flip it, you get more privileges. That was the last example. From a novice point of view, this was difficult to follow, but basically, mainframes can be compromised like any other computer. The more dangerous aspect is that people using mainframes think they’re not targeted; based on the data stored on them, they are really nice targets. All of Ayoub’s scripts are here. [Ayoub’s slides are available here]

The next talk was “Can’t Touch This: Cloning Any Android HCE Contactless Card” by Slawomir Jasek. Cloning things has always been a dream for people, and they succeeded in 1996 with Dolly the sheep. Later, in 2001, scientists made “Copycat”. Today we even have services to clone pets (if you have a lot of money to spend). Even if cloning humans is unethical, it remains a dream. So, why not also clone objects? Especially if it can help to get some money. Mobile contactless payment cards are a good target. It’s illegal, but bad guys don’t care. Such devices implement a lot of countermeasures, but are we sure they can’t be bypassed? Slawomir briefly explained the HCE technology. So, what are the different ways to abuse a payment application? The first one is of course to steal the phone. We can steal the card data via NFC (but there are already restrictions: the phone screen must be turned on). We can’t pay, but for motivated people it should be possible to rebuild the mag stripe. Mobile apps use tokenization: random card numbers are generated for payments and used only for such operations. The transaction is protected by encrypted data. So, the next step is to steal the key. Online? Using man-in-the-middle attacks? Not easy. The key is stored on the phone, and it is also encrypted. How to access it? By reversing the app, but that has a huge cost. What if we copy data across devices? They must be identical (model, OS, IMEI). We can copy the app + data, but it’s not easy for a mass-scale attack. The Xposed framework helps to clone the device, but it requires root access, and root detection is implemented in many apps. Slawomir performed a live demo: he copied data between two mobile phones using shell scripts and was able to make a payment with the cloned device. Note that the payments were performed on the same network and with small amounts of money; Google and banks have strong fraud detection systems. What about the Google push messages used by the application? 
Cloned devices received both messages, but not always (not reliable). Then Slawomir talked about CDCVM, a verification method that asks the user to enter a PIN code, but where… on their own device! Some apps do not support it, but there is an API, and it is possible to patch the application and enable the support (setting it to “True”) via an API call. What about other applications? As usual, some are good while others are bad (e.g. some don’t even implement root detection). To conclude, can we prevent cloning? Not completely, but we can make the process more difficult. According to Slawomir, the key is also to improve the backend with strong fraud detection controls (e.g. based on the behaviour of the user). [Slawomir’s slides are available here]

After lunch, my choice was to attend Long Liu’s and Linan Hao’s talk (the latter was not present). The abstract looked nice: exploitation of the ChakraCore engine. This is a JavaScript engine developed by Microsoft for its Edge browser; today the framework is open source. Why is it a nice target, according to the speaker? The source code is freely available and Edge is a nice attack surface. Long explained the different bugs they found in the code, which helped them win a lot of hacking contests. The problem was the monotonous voice of the speaker, which just invited a small nap. The presentation ended with a nice demo of a web page visited by Edge popping up a notepad running with system privileges. [Long’s slides are available here]

After the break, I switched to track four to attend two small talks. But the quality was there! The first one, by Patrick Wardle: “Meet and Greet with the MacOS Malware Class of 2016“. The presentation was a cool overview of the malware that targeted the macOS operating system in 2016. Yes, macOS is also targeted by malware today! For each of them, he reviewed:

  • The infection mechanism
  • The persistence mechanism
  • The features
  • The disinfection process

The examples covered by Patrick were:

  • Keranger
  • Keydnap
  • FakeFileOpener
  • Mokes
  • Komplex

He also presented some nice tools which can increase the security of your macOS environment. [Patrick’s slides are available here]

The next talk was presented by George Chatzisofroniou and covered a new wireless attack technique called Lure10. Wireless automatic association is not new (think of the well-known KARMA attack). This technique has existed for years, but modern operating systems have implemented controls against it. MitM attacks remain interesting, though, because most applications do not implement countermeasures. In Windows 10, open networks are not added to the PNL (“Preferred Networks List”). Microsoft developed the Wi-Fi Sense feature. The Lure10 attack tries to abuse it by making the Windows Location Service think the device is somewhere else and then mimicking a Wi-Fi Sense-approved local network. In this case, we get an automatic association. A really cool attack that will be implemented in the next release of the wifiphisher framework. [George’s slides are available here]

My next choice was a talk about sandboxing: “Shadow-Box: The Practical and Omnipotent Sandbox” by Seunghun Han. In short, Shadow-box is a lightweight hypervisor-based kernel protector. A fact: Linux kernels are everywhere today (computers, IoT, cars, etc). The kernel suffers from vulnerabilities, and the risk of rootkits is always present. The classic ring (ring 0) is not enough to protect against those threats. Basically, a rootkit changes the system call table and diverts calls to itself to perform malicious activities. The idea behind Shadow-box is to use the VT technology to help mitigate those threats. This is called “ring -1”. Previous research was already performed but suffered from many issues (mainly performance); the new research insists on lightweight and practical usage. Seunghun explained in detail how it works and ended with a nice demo: he tried to start a rootkit on a Linux kernel with the Shadow-box module loaded. Detection was immediate and the rootkit was not installed. Interesting, but is it usable on a day-to-day basis? According to Seunghun, it is: the performance impact on the system is acceptable. [Seunghun’s slides are available here]

The last talk of the day focused on Trend Micro products: “I Got 99 Trends and a # Is All of Them! How We Found Over 100 RCE Vulnerabilities in Trend Micro Software” by Roberto Suggi Liverani and Steven Seeley. Their research started after the disclosure of vulnerabilities, and they decided to find more. Why Trend Micro? Nothing against the company, but it’s a renowned vendor, they have a bug bounty program and they want to secure their software. The approach was to compromise the products without user interaction. They started with low-hanging fruit, focusing on components like libraries and scripts. They also used the same approach as in malware analysis: check the behaviour and communications with external services and other components. They reviewed the following products:

  • Smart Protection Server
  • Data Loss Prevention
  • Control Manager
  • Interscan Web Security
  • Threat Discovery Appliance
  • Mobile Security for Enterprise
  • Safesync for Enterprise

The total number of vulnerabilities they found was impressive; most of them led to remote code execution and, for most of them, exploitation was quite trivial. [Roberto’s & Steven’s slides are available here]

This is the end of day #1. Stay tuned for more tomorrow.


[The post HITB Amsterdam 2017 Day #1 Wrap-Up has been first published on /dev/random]

Combining QFuture with QUndoCommand made a lot of sense for us. The undo and redo methods of the QUndoCommand can also be asynchronous, of course. We wanted to use QFuture without involving threads, because our asynchronicity comes from a process and IPC, not a thread. That’s the design mistake of QtConcurrent‘s run method, in my opinion. It meant using QFutureInterface instead (which is undocumented, but luckily public, so it’ll remain with us until at least Qt’s 6.y.z releases).

So how do we make a QUndoCommand that has an undo method, and a redo method that returns an asynchronous QFuture&lt;ResultType&gt;?

We did just that, today. I’m very satisfied with the resulting API and design. It might have helped if QUndoStack had been a QUndoStack&lt;T&gt; and QUndoCommand a QUndoCommand&lt;T&gt;, with undo and redo’s return type being T. Just an idea for the Qt 6.y.z developers.

April 12, 2017

As a sysadmin, you probably deploy your bare-metal nodes through kickstart in combination with PXE/DHCP. That's the most convenient way to deploy nodes in an existing environment. But what about having to remotely init a new DC/environment, without anything at all? Suppose you have a standalone node to deploy, but there is no PXE/DHCP environment configured (yet).

The simple solution, as long as you have at least some kind of management/out-of-band network, would be to ask the local DC people to burn the CentOS minimal ISO image onto a USB stick or other media. But I needed to deploy a machine without any remote hands available locally to help me. The only things I had were:

  • access to the ipmi interface of that server
  • the fixed IP/netmask/gateway/dns settings for the NIC connected to that segment/vlan

One simple solution would have been to just "attach" the CentOS 7 ISO as virtual media, boot the machine, and install from the "locally emulated" CD-ROM drive. But that's not something I wanted to do, as it would slow down the install: everything would come from my local ISO image and so use my "slow" bandwidth. Instead, I wanted to use the Gbit link of that server directly to kick off the install. Here is how you can do it with ipxe.iso. iPXE is really helpful for this sort of thing. The only "issue" was that I had to configure the NIC first with a fixed IP (remember? no dhcpd yet).

So, download the ipxe.iso image, add it as "virtual media" (the transfer will be fast, as it's under 1MB), and boot the server. Once it boots from the ISO image, don't let iPXE run; instead, hit CTRL-B when you see iPXE starting. The reason is that we don't want to let it start the DHCP discover/offer/request/ack process, as we know it will not work.

You're then presented with the iPXE shell, so here we go (all parameters obviously need to be adapted, including the net adapter number):

set net0/ip x.x.x.x
set net0/netmask x.x.x.x
set net0/gateway x.x.x.x
set dns x.x.x.x

ifopen net0

From that point on you should have network connectivity, so we can "just" chainload the CentOS PXE images and start the install:

chain net.ifnames=0 biosdevname=0 ksdevice=eth2 inst.repo= inst.lang=en_GB inst.keymap=be-latin1 inst.vnc inst.vncpassword=CHANGEME ip=x.x.x.x netmask=x.x.x.x gateway=x.x.x.x dns=x.x.x.x

Then you can just enjoy your CentOS install running entirely from the network, and so at "full steam"! You can also combine it directly with inst.ks= to get a fully automated setup. It's worth knowing that you can also regenerate/build an updated/customized ipxe.iso with those scripts directly too. That's more or less what we used to build a 1MB universal installer for CentOS 6 and 7, see , but that one defaults to DHCP.
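Putting the shell steps above together, an embedded iPXE script for such a fixed-IP install could look roughly like this (a sketch: all x.x.x.x addresses and the mirror.example.org URLs are placeholders to adapt to your network and mirror):

```
#!ipxe
# static network config, since there is no DHCP yet
set net0/ip x.x.x.x
set net0/netmask x.x.x.x
set net0/gateway x.x.x.x
set dns x.x.x.x
ifopen net0
# fetch kernel and initrd from your CentOS mirror
kernel http://mirror.example.org/centos/7/os/x86_64/images/pxeboot/vmlinuz net.ifnames=0 biosdevname=0 inst.repo=http://mirror.example.org/centos/7/os/x86_64/ inst.vnc inst.vncpassword=CHANGEME ip=x.x.x.x netmask=x.x.x.x gateway=x.x.x.x dns=x.x.x.x
initrd http://mirror.example.org/centos/7/os/x86_64/images/pxeboot/initrd.img
boot
```

Embedding such a script when rebuilding the ISO is then a matter of running make bin/ipxe.iso EMBED=install.ipxe from the iPXE source tree.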

Hope it helps

I am not a huge fan of the Linux Nvidia drivers*, but once in a while I try them to check the performance of the machine. More often than not, I end up at a console with no X/Wayland.

I have seen some Ubuntu users reinstalling their machines after this f* up, so here are my notes to fix it (I always forget the initramfs step and end up wasting a lot of time):

$ sudo apt-get remove --purge nvidia-*
$ sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf_pre-nvidia
$ sudo update-initramfs -u
$ reboot

*: I am not a fan of the Windows drivers either now that Nvidia decided to harvest emails and track you if you want updates.

Filed under: Uncategorized Tagged: driver, fix, nvidia, post-it, Ubuntu

April 11, 2017

The post Nginx might have 33% market share, Apache isn’t falling below 50% appeared first on

This is a response to a post published by W3 Techs titled "Nginx reaches 33.3% web server market share while Apache falls below 50%". It's gotten massive upvotes on Hacker News, but I believe it's a fundamentally flawed post.

Here's why.

How server adoption is measured

Let's take a quick moment to look at how W3 Techs can decide if a site is running Apache vs. Nginx. The secret lies in the HTTP headers the server sends on each response.

$ curl -I 2>/dev/null | grep 'Server:'
Server: nginx

That Server header is collected by W3 Techs and they draw pretty graphs from it.


Except, you can't rely on the Server header alone for these statistics and claims.

You (often) can't hide the Nginx Server header

Nginx is most often used as a reverse proxy, for TLS, load balancing and HTTP/2. That's a part the article got right.

Nginx is the leading web server supporting some of the more modern protocols, which is probably one of the reasons why people start using it. 76.8% of all sites supporting HTTP/2 use Nginx, while only 2.3% of those sites rely on Apache.

Yes, Nginx offers functionality that's either unstable or hard to get on Apache (i.e. not available in the versions in current repositories).

As a result, Nginx is often deployed like this:

:443 Nginx
 |-> proxy to Apache

:80 Nginx
 |-> redirect traffic from HTTP -> HTTPS

To the outside world, Nginx is the only HTTP(s) server available. Since measurements of this stat are collected via the Server header, you get this effect.

:443 Nginx
 |- HTTP/1.1 200 OK
 |- Server: nginx
 |- Cache-Control: max-age=600
 |- ...
  \ Apache
   |- HTTP/1.1 200 OK
   |- Server: Apache
   |- Cache-Control: max-age=600

Both Apache and Nginx generate a Server header, but Nginx replaces Apache's with its own as it sends the response to the client. You never see the Apache header, even though Apache is involved.
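A minimal sketch of such an edge configuration (the hostname and backend port are placeholders): nginx terminates TLS and proxies to a local Apache, and since nginx does not pass the upstream Server header through by default, the client only ever sees "Server: nginx".

```
server {
    listen 443 ssl http2;
    server_name example.com;              # placeholder

    location / {
        # Apache does the heavy lifting on a local port;
        # its Server header is replaced by nginx's own.
        proxy_pass http://127.0.0.1:8080;
    }
}

server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri; # HTTP -> HTTPS redirect
}
```

(If you actually wanted the backend's header to survive, you'd need an explicit proxy_pass_header Server; the default behavior is exactly why these measurements only see the edge.)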

For instance, here's my website response;

$ curl -I 2>/dev/null | grep 'Server:'
Server: nginx

Spoiler: I use Nginx as an HTTP/2 proxy (in Docker) for Apache, which does all the heavy lifting. That header only tells you my edge is Nginx, it doesn't tell you what's behind it.

And since Nginx is most often deployed at the very edge, it's the surviving Server header.

Nginx supplements Apache

Sure, in some stacks, Nginx completely replaced Apache. There are clear benefits to doing so. But a few years ago, many sysadmins & devs changed their stack from Apache to Nginx, only to come back to Apache after all.

This created a series of Apache configurations that learned the good parts from Nginx, while keeping the flexibility of Apache (aka: .htaccess). It turns out Nginx forced a wider use of PHP-FPM (and other runtimes), which were later used with Apache as well.

A better title for the original article would be: Nginx runs on 33% of top websites, supplementing Apache deployments.

This is one of those rare occasions where 1 + 1 != 2. Nginx can have 33% market share and Apache can have 85% market share, because they're often combined on the same stack. Things don't have to add up to 100%.


So work on Autoptimize 2.2 is almost finished, and I need your help testing this version before releasing (targeting May, but that depends on you!). The more people I have testing, the faster I might be able to push this thing out, and there's a lot to look forward to:

  • New option: enable/ disable AO for logged in users for all you pagebuilders out there
  • New option: enable/ disable AO for cart/ checkout pages of WooCommerce, Easy Digital Downloads & WP eCommerce
  • New minification/ caching system, significantly speeding up your site for non-cached pages (previously part of a power-up)
  • Switched to rel=preload + Filamentgroup’s loadCSS for CSS deferring
  • Additional support for HTTP/2 setups (no GUI, you might need to have a look at the API to see/ use all possibilities)
  • Important improvements to the logic of which JS/ CSS can be optimized (getPath function) increasing reliability of the aggregation process
  • Updated to a newer version of the CSS minification component (albeit not the 3.x one, which seems a tad too fresh and would require me to drop support for PHP 5.2; that will come, just not yet)
  • API: Lots of extra filters, making AO (even) more flexible.
  • Lots of bugfixes and smaller improvements (see GitHub commit log)

So if you want to help:

  1. Download the zip-file from Github
  2. Overwrite the contents of wp-content/plugins/autoptimize with the contents of autoptimize-master from the zip
  3. Test, and if you find any bug (regression), create an issue in GitHub (if it doesn’t exist already).

Very much looking forward to your feedback!