Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

November 23, 2017

I published the following diary on “Proactive Malicious Domain Search”:

In a previous diary, I presented a dashboard that I’m using to keep track of the DNS traffic on my networks. Tracking malicious domains is useful but what if you could, in a certain way, “predict” the upcoming domains that will be used to host phishing pages? Being a step ahead of the attackers is always good, right? Thanks to the CertStream service (provided by Cali Dog Security), you have access to a real-time certificate transparency log update stream… [Read more]


[The post [SANS ISC] Proactive Malicious Domain Search has been first published on /dev/random]

November 22, 2017

Over the past weeks I have shared an update on the Media Initiative and an update on the Layout Initiative. Today I wanted to give an update on the Workflow Initiative.

Creating great software doesn't happen overnight; it requires a desire for excellence and a disciplined approach. Like the Media and Layout Initiatives, the Workflow Initiative has taken such an approach. The disciplined and steady progress these initiatives are making is something to be excited about.

8.4: The march towards stability

As you might recall from my last Workflow Initiative update, we added the Content Moderation module to Drupal 8.2 as an experimental module, and we added the Workflows module in Drupal 8.3 as well. The Workflows module allows for the creation of different publishing workflows with various states (e.g. draft, needs legal review, needs copy-editing, etc) and the Content Moderation module exposes these workflows to content authors.

As of Drupal 8.4, the Workflows module has been marked stable. Additionally, the Content Moderation module is marked beta in Drupal 8.4, and is down to two final blockers before marking stable. If you want to help with that, check out the Content Moderation module roadmap.

8.4: Making more entity types revisionable

To advance Drupal's workflow capabilities, more of Drupal's entity types needed to be made "revisionable". When content is revisionable, it becomes easier to move it through different workflow states or to stage content. Making more entity types revisionable is a necessary foundation for better content moderation, workflow and staging capabilities. But it was also hard work and took various people over a year of iterations — we worked on this throughout the Drupal 8.3 and Drupal 8.4 development cycle.

When working through this, we discovered various adjacent bugs (e.g. bugs related to content revisions and translations) that had to be worked through as well. As a plus, this has led to a more stable and reliable Drupal, even for those who don't use any of the workflow modules. This is a testament to our desire for excellence and disciplined approach.

8.5+: Looking forward to workspaces

While these foundational improvements in Drupal 8.3 and Drupal 8.4 are absolutely necessary to enable better content moderation and content staging functionality, they don't have much to show in terms of user experience changes. Now that a lot of this work is behind us, the Workflow Initiative has shifted its focus to stabilizing the Content Moderation module, and is also aiming to bring the Workspace module into Drupal core as an experimental module.

The Workspace module allows the creation of multiple environments, such as "Staging" or "Production", and allows moving collections of content between them. For example, the "Production" workspace is what visitors see when they visit your site. Then you might have a protected "Staging" workspace where content editors prepare new content before it's pushed to the Production workspace.

While workflows for individual content items are powerful, many sites want to publish multiple content items at once as a group. This includes new pages, updated pages, but also changes to blocks and menu items — hence our focus on making things like block content and menu items revisionable. 'Workspaces' group all these individual elements (pages, blocks and menus) into a logical package, so they can be prepared, previewed and published as a group. This is one of the most requested features and will be a valuable differentiator for Drupal. It looks pretty slick too:

Drupal workspaces prototype

I'm impressed with the work the Workflow team has accomplished during the Drupal 8.4 cycle: the Workflows module became stable, the Content Moderation module improved by leaps and bounds, and the under-the-hood work has prepared us for content staging via Workspaces. In the process, we've also fixed some long-standing technical debt in the revisions and translations systems, laying the foundation for future improvements.

Special thanks to Angie Byron for contributions to this blog post and to Dick Olsson, Tim Millwood and Jozef Toth for their feedback during the writing process.

November 21, 2017

November 20, 2017

One of the features present in the August release of the SELinux user space is its support for ioctl xperm rules in modular policies. In the past, this was only possible in monolithic policies (and CIL). Through this, allow rules can be extended to cover not only source (domain) and target (resource) identifiers, but also a specific number to which they apply. And ioctls are the first (and currently only) permission for which this is implemented.

Note that ioctl-level permission controls aren't a new feature by themselves, but the fact that they can be used in modular policies is.

What is ioctl?

Many interactions on a Linux system are done through system calls. From a security perspective, most system calls can be properly categorized based on who is executing the call and what the target of the call is. For instance, the unlink() system call has the following prototype:

int unlink(const char *pathname);

Considering that a process (source) is executing unlink (system call) against a target (path) is sufficient for most security implementations. Either the source has the permission to unlink that file or directory, or it doesn't. SELinux maps this to the unlink permission within the file or directory classes:

allow <domain> <resource> : { file dir }  unlink;

Now, ioctl() is somewhat different. It is a system call that allows device-specific operations which cannot be expressed by regular system calls. Devices can have multiple functions/capabilities, and with ioctl() these capabilities can be interrogated or updated. It has the following interface:

int ioctl(int fd, unsigned long request, ...);

The file descriptor is the target device on which an operation is launched. The second argument is the request: an integer whose value identifies what kind of operation the ioctl() call is trying to execute. So unlike regular system calls, where the operation itself is the system call, ioctl() has a parameter that identifies the operation.

A list of possible request values for sockets, for instance, is available in the Linux kernel source code, under include/uapi/linux/sockios.h.
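To make the fd/request split concrete, here is a minimal Python sketch (Linux-only) that issues one of those socket ioctls, SIOCGIFHWADDR (0x8927, "get hardware address"), against the loopback interface. The interface name "lo" and the hand-decoded struct offsets are illustrative assumptions, not part of the SELinux feature being discussed:

```python
import fcntl
import socket
import struct

SIOCGIFHWADDR = 0x8927  # request value from include/uapi/linux/sockios.h

# Any socket works as the file descriptor; the request argument selects
# the actual operation, which is what SELinux xperm rules filter on.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    # struct ifreq: 16-byte interface name, followed by a sockaddr result
    ifreq = struct.pack('256s', b'lo')
    result = fcntl.ioctl(s.fileno(), SIOCGIFHWADDR, ifreq)
    mac = result[18:24]  # sa_data of the returned sockaddr
    print(':'.join('%02x' % b for b in mac))  # loopback: 00:00:00:00:00:00
finally:
    s.close()
```

Under SELinux, whether this call succeeds depends not just on the domain's ioctl permission on the socket class, but (with xperm rules) on the 0x8927 request value itself.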

SELinux allowxperm

For SELinux, having the purpose of the call as part of a parameter means that a regular mapping isn't sufficient. Allowing ioctl() commands for a domain against a resource is expressed as follows:

allow <domain> <resource> : <class> ioctl;

This of course does not allow policy developers to differentiate between harmless or informative calls (like SIOCGIFHWADDR to obtain the hardware address associated with a network device) and impactful calls (like SIOCADDRT to add a routing table entry).

To allow for a fine-grained policy approach, the SELinux developers introduced an extended allow permission, which is capable of differentiating based on an integer value.

For instance, to allow a domain to get a hardware address (SIOCGIFHWADDR, which is 0x8927) from a TCP socket:

allowxperm <domain> <resource> : tcp_socket ioctl 0x8927;

This additional parameter can also be ranged:

allowxperm <domain> <resource> : <class> ioctl 0x8910-0x8927;

And of course, it can also be used to complement (i.e. allow all ioctl parameters except a certain value):

allowxperm <domain> <resource> : <class> ioctl ~0x8927;

Small or negligible performance hit

According to a presentation given by Jeff Vander Stoep at the Linux Security Summit in 2015, the performance impact of this addition to SELinux is well under control, which helped in the introduction of this capability in the Android SELinux implementation.

As a result, interested readers can find examples of allowxperm invocations in the SELinux policy in Android, such as in the app.te file:

# only allow unprivileged socket ioctl commands
allowxperm { appdomain -bluetooth } self:{ rawip_socket tcp_socket udp_socket } ioctl { unpriv_sock_ioctls unpriv_tty_ioctls };

And with that, we again show how fine-grained the SELinux access controls can be.

November 17, 2017

I published the following diary on “Top-100 Malicious IP STIX Feed”.

Yesterday, we were contacted by one of our readers who asked if we provide a STIX feed of our blocked list or top-100 suspicious IP addresses. STIX means “Structured Threat Information eXpression” and enables organizations to share indicators of compromise (IOCs) with peers in a consistent and machine-readable manner… [Read more]

[The post [SANS ISC] Top-100 Malicious IP STIX Feed has been first published on /dev/random]

November 16, 2017

So I integrated a page cache (based on KeyCDN Cache Enabler) in Autoptimize, just to see how easy (or difficult) it would be. Turns out it was pretty easy, mostly because Cache Enabler (based on Cachify, which was very popular in Germany until the developer abandoned it) is well-written, simple and efficient. :-)

No plans to release this though. Or do you think I should?

I published the following diary on “Suspicious Domains Tracking Dashboard”.

Domain names remain a gold mine when investigating security incidents or preventing malicious activity from occurring on your network (for example by using a DNS firewall). The ISC also has a page dedicated to domain names. But how can we detect potentially malicious DNS activity if domains are not (yet) present in a blacklist? The typical case is DGAs, or Domain Generation Algorithms, used by some malware families… [Read more]


[The post [SANS ISC] Suspicious Domains Tracking Dashboard has been first published on /dev/random]

November 15, 2017

I published the following diary on “If you want something done right, do it yourself!”.

Another day, another malicious document! I like to discover how creative the bad guys are in writing new pieces of malicious code. Yesterday, I found another interesting sample. It’s always the same story: a malicious document is delivered by email. The document was called ‘Saudi Declare war Labenon.doc’ (interesting name by the way!). According to VT, it is already flagged as malicious by many antiviruses… [Read more]

[The post [SANS ISC] If you want something done right, do it yourself! has been first published on /dev/random]

Now that Drupal 8.4 is released, and Drupal 8.5 development is underway, it is a good time to give an update on what is happening with Drupal's Layout Initiative.

8.4: Stable versions of layout functionality

Traditionally, site builders have used one of two layout solutions in Drupal: Panelizer and Panels. Both are contributed modules outside of Drupal core, and both achieved stable releases in the middle of 2017. Given the popularity of these modules, having stable releases closed a major functionality gap that prevented people from building sites with Drupal 8.

8.4: A Layout API in core

The Layout Discovery module added in Drupal 8.3 core has now been marked stable. This module adds a Layout API to core. Both the aforementioned Panelizer and Panels modules have already adopted the new Layout API with their 8.4 release. A unified Layout API in core eliminates fragmentation and encourages collaboration.

8.5+: A Layout Builder in core

Today, Drupal's layout management solutions exist as contributed modules. Because creating and building layouts is expected to be out-of-the-box functionality, we're working towards adding layout building capabilities to Drupal core.

Using the Layout Builder, you start by selecting predefined layouts for different sections of the page, and then populate those layouts with one or more blocks. I showed the Layout Builder in my DrupalCon Vienna keynote and it was really well received:

8.5+: Use the new Layout Builder UI for the Field Layout module

One of the nice improvements that went into Drupal 8.3 was the Field Layout module, which provides the ability to apply pre-defined layouts to what we call "entity displays". Instead of applying layouts to individual pages, you can apply layouts to types of content regardless of what page they are displayed on. For example, you can create a content type 'Recipe' and visually lay out the different fields that make up a recipe. Because the layout is associated with the recipe rather than with a specific page, recipes will be laid out consistently across your website regardless of what page they are shown on.

The basic functionality is already included in Drupal core as part of the experimental Field Layout module. The goal for Drupal 8.5 is to stabilize the Field Layout module, and to improve its user experience by using the new Layout Builder. Eventually, designing the layout for a recipe could look like this:

Drupal field layouts prototype

Layout remains a strategic priority for Drupal 8, as it was the second most important site builder priority identified in my 2016 State of Drupal survey, right behind Migrations. I'm excited to see the work already accomplished by the Layout team, and look forward to seeing their progress in Drupal 8.5! If you want to help, check out the Layout Initiative roadmap.

Special thanks to Angie Byron for contributions to this blog post, to Tim Plunkett and Kris Vanderwater for their feedback during the writing process, and to Emilie Nouveau for the screenshot and video contributions.

November 13, 2017

Today, I am excited to announce that Michael Sullivan will be joining Acquia as its CEO.

The search for a new CEO

Last spring, Tom Erickson announced that he was stepping down as Acquia's CEO. For over eight years, Tom and I have been working side-by-side to build and run Acquia. I've been lucky to have Tom as my partner as he is one of the most talented leaders I know. When Tom announced he'd be stepping down as Acquia's CEO, finding a new CEO became my top priority for Acquia. For six months, the search consumed a good deal of my time. I was supported by a search committee drawn from Acquia's board of directors, including Rich D'Amore, Tom Bogan, and Michael Skok. Together, we screened over 140 candidates and interviewed 10 in-depth. Finding the right candidate was hard work and time consuming, but we kept the bar high at all times. As much as I enjoyed meeting so many great candidates and hearing their perspective on our business, I'm glad that the search is finally behind me.

The right fit for Acquia

Finding a business partner is like dating; you have to get to know each other, build trust, and see if there is a match. Identifying and recruiting the best candidate is difficult because unlike dating, you have to consider how the partnership will also impact your team, customers, partners, and community. Once I got to know Mike, it didn't take me long to realize how he could help scale Acquia and help make our customers and partners successful. I also realized how much I would enjoy working with him. The fit felt right.

With 25 years of senior leadership in SaaS, enterprise content management and content governance, Mike is well prepared to lead our business. Mike will join Acquia from Micro Focus, where he participated in the merger of Micro Focus with Hewlett Packard Enterprise's software business. The combined company became the world's seventh largest pure-play software company and the largest UK technology firm listed on the London Stock Exchange. At Micro Focus and Hewlett Packard Enterprise, Mike was the Senior Vice President and General Manager for Software-as-a-Service and was responsible for managing over 30 SaaS products.

This summer, I shared that Acquia expanded its focus from website management to data-driven customer journeys. We extended the capabilities of the Acquia Platform with journey orchestration, commerce integrations and digital asset management tools. The fact that Mike has so much experience running a diverse portfolio of SaaS products is something I really valued. Mike's expertise can guide us in our transformation from a single product company to a multi-product company.

Creating a partnership

For many years, I have woken up every day determined to set a vision for the future, formulate a strategy to achieve that vision, and help my fellow Acquians figure out how to achieve it.

One of the most important things in finding a partner and CEO for Acquia was having a shared vision for the future and an understanding of the importance of cloud, Open Source, data-driven experiences, customer success and more. This was very important to me as I could not imagine working with a partner who isn't passionate about these same things. It is clear that Mike shares this vision and is excited about Acquia's future.

Furthermore, Mike's operational strength and enterprise experience will be a natural complement to my focus on vision and product strategy. His expertise will allow Acquia to accelerate its mission to "build the universal platform for the world's greatest digital experiences."

Formalizing my own role

In addition to Mike joining Acquia as CEO, my role will be elevated to Chairman. I will also continue in my position as Acquia CTO. My role has always extended beyond what is traditionally expected of a CTO; my responsibilities have bridged products and engineering, fundraising, investor relations, sales and marketing, resource allocation, and more. Serving as Chairman will formalize the various responsibilities I've taken on over the past decade. I'm also excited to work with Mike because it is an opportunity for me to learn from him and grow as a leader.

Acquia's next decade

The web has the power to change lives, educate the masses, create new economies, disrupt business models and make the world smaller in the best of ways. Digital will continue to change every industry, every company and every life on the planet. The next decade holds enormous promise for Acquia and Drupal because of what the power of digital holds for business and society at large. We are uniquely positioned to deliver the benefits of open source, cloud and data-driven experiences to help organizations succeed in an increasingly complex digital world.

I'm excited to welcome Mike to Acquia as its CEO because I believe he is the right fit for Acquia, has the experience it takes to be our CEO and will be a great business partner to bring Acquia's vision to life. Welcome to the team, Mike!

November 11, 2017

I published the following diary on “Keep An Eye on your Root Certificates”.

A few times a year, we can read in the news that a rogue root certificate was installed without the user's consent. The latest story that pops into my mind is the Savitech audio drivers that silently install a root certificate. The risks associated with this kind of behaviour are multiple, the most important remaining man-in-the-middle (MitM) attacks. New root certificates are not always the result of an attack or infection by malware. Corporate end-points might also get new root certificates… [Read more]


[The post [SANS ISC] Keep An Eye on your Root Certificates has been first published on /dev/random]

November 10, 2017

This morning I uploaded version 0.1 of SReview, my video review and transcoding system, to Debian experimental. There's still some work to be done before it'll be perfectly easy to use by anyone, but I do think I've reached the point where it should have basic usability.

Quick HOWTO for how to use it:

  • Enable Debian experimental
  • Install the packages sreview-master, sreview-encoder, sreview-detect, and sreview-web. It's possible to install the four packages on different machines, but let's not go into too much detail there, yet.
  • The installation will create an sreview user and database, and will start the sreview-web service on port 8080, listening only to localhost. The sreview-web package also ships with an apache configuration snippet that shows how to proxy it from the interwebs if you want to.
  • Run sreview-config --action=dump. This will show you the current configuration of sreview. If you want to change something, either change it in /etc/sreview/, or just run sreview-config --set=variable=value --action=update.
  • Run sreview-user -d --action=create -u <your email>. This will create an administrator user in the sreview database.
  • Open a web browser, browse to http://localhost:8080/, and test whether you can log on.
  • Write a script to insert the schedule of your event into the SReview database. Look at the debconf and fosdem scripts for inspiration if you need it. Yeah, that's something I still need to genericize, but I'm not quite sure yet how to do that.
  • Either configure gridengine so that it will have the required queues and resources for SReview, or disable the qsub commands in the SReview state_actions configuration parameter (e.g., by way of sreview-config --action=update --set=state_actions=... or by editing /etc/sreview/).
  • If you need notification, modify the state_actions entry for notification so that it sends out a notification (e.g., through an IRC bot or an email address, or something along those lines). Alternatively, enable the "anonreviews" option, so that the overview page has links to your talk.
  • Review the inputglob and parse_re configuration parameters of SReview. The first should contain a filesystem glob that will find your raw assets; the second should parse the filename into room, year, month, day, hour, minute, and second components. Look at the defaults of those options for examples (or just use those, and store your files as /srv/sreview/incoming/<room>/<year>-<month>-<day>/<hour>:<minute>:<second>.*).
  • Provide an SVG file for opening credits, and point to it from the preroll_template configuration option.
  • Provide an SVG or PNG file for closing credits, and point to it from the postroll_template or postroll configuration option, respectively.
  • Start recording, and watch SReview do its magic :-)
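The inputglob/parse_re step in the list above can be sketched in a few lines. The regex below is a hypothetical parse_re matching the default file layout mentioned earlier (not necessarily SReview's actual shipped default), and the room name "janson" is an invented example:

```python
import re

# Hypothetical parse_re for files stored as
# /srv/sreview/incoming/<room>/<year>-<month>-<day>/<hour>:<minute>:<second>.*
parse_re = re.compile(
    r'/srv/sreview/incoming/'
    r'(?P<room>[^/]+)/'
    r'(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})/'
    r'(?P<hour>\d{2}):(?P<minute>\d{2}):(?P<second>\d{2})\.'
)

m = parse_re.match('/srv/sreview/incoming/janson/2017-11-10/16:30:00.ts')
print(m.groupdict())
# {'room': 'janson', 'year': '2017', 'month': '11', 'day': '10',
#  'hour': '16', 'minute': '30', 'second': '00'}
```

Named groups make it easy for the ingest code to map each filename component onto the corresponding schedule fields.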

There are still some bits of the above list that I want to make easier to do, and some things that shouldn't be strictly necessary, but all in all, I think SReview has now reached a level of maturity such that I felt confident doing its first upload to Debian.

Did you try it out? Let me know what you think!

In my blog post, "A plan for media management in Drupal 8", I talked about some of the challenges with media in Drupal, the hopes of end users of Drupal, and the plan that the team working on the Media Initiative was targeting for future versions of Drupal 8. That blog post is one year old today. Since that time we released both Drupal 8.3 and Drupal 8.4, and Drupal 8.5 development is in full swing. In other words, it's time for an update on this initiative's progress and next steps.

8.4: A Media API in core

Drupal 8.4 introduced a new Media API to core. For site builders, this means that Drupal 8.4 ships with the new Media module (albeit still hidden from the UI, pending necessary user experience improvements), which is an adaptation of the contributed Media Entity module. The new Media module provides a "base media entity". Having a "base media entity" means that all media assets — local images, PDF documents, YouTube videos, tweets, and so on — are revisable, extendable (fieldable), translatable and much more. It allows all media to be treated in a common way, regardless of where the media resource itself is stored. For end users, this translates into a more cohesive content authoring experience; you can use consistent tools for managing images, videos, and other media rather than different interfaces for each media type.

8.4+: Porting contributed modules to the new Media API

The contributed Media Entity module was a "foundational module" used by a large number of other contributed modules. It enables Drupal to integrate with Pinterest, Vimeo, Instagram, Twitter and much more. The next step is for all of these modules to adopt the new Media module in core. The required changes are laid out in the API change record, and typically only require a couple of hours to complete. The sooner these modules are updated, the sooner Drupal's rich media ecosystem can start benefitting from the new API in Drupal core. This is a great opportunity for intermediate contributors to pitch in.

8.5+: Add support for remote video in core

As proof of the power of the new Media API, the team is hoping to bring in support for remote video using the oEmbed format. This allows content authors to easily add e.g. YouTube videos to their posts. This has been a long-standing gap in Drupal's out-of-the-box media and asset handling, and would be a nice win.

8.6+: A Media Library in core

The top two requested features for the content creator persona are richer image and media integration and digital asset management.

Chart: the top content author improvements for Drupal. The results of the State of Drupal 2016 survey show the importance of the Media Initiative for content authors.

With a Media Library content authors can select pre-existing media from a library and easily embed it in their posts. Having a Media Library in core would be very impactful for content authors as it helps with both these feature requests.

During the 8.4 development cycle, a lot of great work was done to prototype the Media Library discussed in my previous Media Initiative blog post. I was able to show that progress in my DrupalCon Vienna keynote:

The Media Library work uses the new Media API in core. Now that the new Media API landed in Drupal 8.4 we can start focusing more on the Media Library. Due to bandwidth constraints, we don't think the Media Library will be ready in time for the Drupal 8.5 release. If you want to help contribute time or funding to the development of the Media Library, have a look at the roadmap of the Media Initiative or let me know and I'll get you in touch with the team behind the Media Initiative.

Special thanks to Angie Byron for contributions to this blog post and to Janez Urevc, Sean Blommaert, Marcos Cano Miranda, Adam G-H and Gábor Hojtsy for their feedback during the writing process.

November 07, 2017

I published the following diary on “Interesting VBA Dropper”.

Here is another sample that I found in my spam trap. The technique to infect the victim’s computer is interesting. I captured a mail with a malicious RTF document (SHA256: c247929d3f5c82247db9102d2dec28c27f73dc0824f8b386f92aad1a22fd8edd) that exploits the OLE2Link vulnerability (CVE-2017-0199). Once opened, the document fetches the following URL… [Read more]

[The post [SANS ISC] Interesting VBA Dropper has been first published on /dev/random]

November 06, 2017

I like what the Ubuntu people did when adopting Gnome as the new desktop after the dismissal of Unity. When the change was announced some months ago, I decided to move to Gnome and see if I liked it. I did.

It’s a good idea to benefit from the small changes Ubuntu made to Gnome 3. Forking dash-to-dock was a great idea, so untested (e.g. upstream) updates don’t break the desktop. I won’t discuss settings you can change through the “Settings” application (Ubuntu Dock settings) or through “Tweaks”:

$ sudo apt-get install gnome-tweak-tool

It’s a good idea, though, to remove third party extensions so you are sure you’re using the ones provided and adapted by Ubuntu. You can always add new extensions later (the most important ones are even packaged).
$ rm -rf ~/.local/share/gnome-shell/extensions/*

Working with Gnome 3, and to a lesser extent with macOS, taught me that I prefer bars and docks to autohide. I never did in the past, but I feel that Gnome (and macOS) got this right. I certainly don’t like the full-height dock: make it as small as needed. You can use the graphical “dconf Editor” tool to make the changes, but I prefer the safer command line (you won’t make a change by accident).

To prevent the Ubuntu Dock from taking all the vertical space (i.e., most of it is just an empty bar):

$ dconf write /org/gnome/shell/extensions/dash-to-dock/extend-height false

A neat Dock trick: when hovering over an icon on the dock, cycle through the windows of that application by scrolling (or using two fingers). Way faster than click + select:

$ dconf write /org/gnome/shell/extensions/dash-to-dock/scroll-action "'cycle-windows'"

I set the dock to autohide in the regular “Settings” application. An extension is needed to do the same for the Top Bar (you need to log out, then enable it through the “Tweaks” application):

$ sudo apt-get install gnome-shell-extension-autohidetopbar

Oh, just to be safe (e.g., in case you broke something), you can reset all the gnome settings with:

$ dconf reset -f /

Have a look at the comments for some extra settings (that I personally do not use, but many do).

Some options that I don’t use but people have asked me about (here and elsewhere):

Especially with the scrolling setting above, you may want to switch only between windows of the same application in the active workspace. You can isolate workspaces with:

$ dconf write /org/gnome/shell/extensions/dash-to-dock/isolate-workspaces true

To hide the dock all the time, instead of only when needed, disable “intellihide”:

$ dconf write /org/gnome/shell/extensions/dash-to-dock/intellihide false

Filed under: Uncategorized Tagged: better-defaults-needed-department, dconf, gnome, Gnome3, Ubuntu, Ubuntu 17.10

The Drupal render pipeline and its caching capabilities have been the subject of quite a few talks of mine and of multiple writings. But all of those were very technical, very precise.

Over the past year and a half I’d heard multiple times there was a need for a more pragmatic talk, where only high-level principles are explained, and it is demonstrated how to step through the various layers with a debugger. So I set out to do just that.

I figured it made sense to spend 10–15 minutes explaining (using a hand-drawn diagram that I spent a lot of time tweaking) and spend the rest of the time stepping through things live. Yes, this was frightening. Yes, there were last-minute problems (my IDE suddenly didn’t allow font size scaling …), but it seems overall people were very satisfied :)

Have you seen and heard of Render API (with its render caching, lazy builders and render pipeline), Cache API (and its cache tags & contexts), Dynamic Page Cache, Page Cache and BigPipe? Have you cursed them, wondered about them, been confused by them?

I will show you three typical use cases:

  1. An uncacheable block
  2. A personalized block
  3. A cacheable block that you can see if you have a certain permission and that should update whenever some entity is updated

… and for each, will take you on the journey through the various layers: from rendering to render caching, on to Dynamic Page Cache and eventually Page Cache … or BigPipe.

Coming out of this session, you should have a concrete understanding of how these various layers cooperate, how you as a Drupal developer can use them to your advantage, and how you can test that it’s behaving correctly.

I’m a maintainer of Dynamic Page Cache and BigPipe, and an effective co-maintainer of Render API, Cache API and Page Cache.


November 04, 2017

As part of working in Acquia’s Office of the CTO, I’ve been working on the API-First Initiative for the past year and a half! Where are we at? Find out :)


November 03, 2017

I published the following diary on “Simple Analysis of an Obfuscated JAR File“.

Yesterday, I found in my spam trap a file named ‘0.19238000’ (SHA256: 7bddf3bf47293b4ad8ae64b8b770e0805402b487a4d025e31ef586e9a52add91). The ZIP archive contained a Java archive named ‘0.19238000 1509447305.jar’ (SHA256: b161c7c4b1e6750fce4ed381c0a6a2595a4d20c3b1bdb756a78b78ead0a92ce4). The file had a score of 0/61 in VT and looks to be a nice candidate for a quick analysis. .jar files are ZIP archives that contain compiled Java classes and a Manifest file that points to the initial class to load. Let’s decompile the classes. To achieve this, I’m using a small Docker container… [Read more]

[The post [SANS ISC] Simple Analysis of an Obfuscated JAR File has been first published on /dev/random]

November 01, 2017

October 31, 2017

The post Fall cleaning: shutting down some of my projects appeared first on

Just a quick heads-up, I'm taking several of my projects offline as they prove to be too much of a hassle.

Mailing List Archive

Almost 2 years ago, I started a project to mirror/archive public mailing lists in a readable, clean layout. It's been fun, but quite a burden to maintain. In particular:

  • The mailing list archive now consists of ~3.500.000 files, which meant special precautions with filesystems (running out of inodes) & other practical concerns related to backups.
  • Abuse & take-down reports: lots of folks mailed a public mailing list, apparently not realizing it would be -- you know -- public. So they ask their name & email to be removed from those that mirror it. Like me.

    Or, in one spectacular case, a take down request by Italian police because of some drug-related incident on the Jenkins public mailing list. Fun stuff. It involved threatening lawyer mails that I quickly complied with.

All of that combined, it isn't worth it. It got around 1.000 daily pageviews, mostly from the Swift mailing list community. Sorry folks, it's gone now.

This also means the death of the @foss_security & @oss_announce Twitter accounts, since those were built on top of the mailing list archive.

Manpages Archive

I ran a manpages archive on, but nobody used it (or knew about it), so I deleted it. Also hard to keep up to date, with different versions of packages & different manpages etc.

Also, it's surprisingly hard to download all manpages, extract them, turn them into HTML & throw them on a webserver. Who knew?!


This was a wild idea where I tried to crowdsource the meaning behind cryptocurrency price jumps. A coin can increase in value by 400% or more in just a few hours; the idea was to find the reason behind such a jump and learn from it.

It also fueled the @CoinJumpApp account on Twitter, but that's dead now too.

The source is up on Github, but I'm no longer maintaining it.

More time for other stuff!

Either way, some projects just cause me more concern & effort than I care to put into them, so time to clean that up & get some more mental braincycles for different projects!


This October, Acquia welcomed over 650 people to the fourth annual Acquia Engage conference. In my opening keynote, I talked about the evolution of Acquia's product strategy and the move from building websites to creating customer journeys. You can watch a recording of my keynote (30 minutes) or download a copy of my slides (54 MB).

I shared that a number of new technology trends have emerged, such as conversational interfaces, beacons, augmented reality, artificial intelligence and more. These trends give organizations the opportunity to re-imagine their customer experience. Existing customer experiences can be leapfrogged by taking advantage of more channels and more data (e.g. be more intelligent, be more personalized, and be more contextualized).

Digital market trends aligning

I gave an example of this in a blog post last week, which showed how augmented reality can improve the shopping experience and help customers make better choices. It's just one example of how these new technologies advance existing customer experiences and move our industry from website management to customer journey management.

Some of the most important market trends in digital for 2017

This is actually good news for Drupal as organizations will have to create and manage even more content. This content will need to be optimized for different channels and audience segments. However, it puts more emphasis on content modeling, content workflows, media management and web service integrations.

I believe that the transition from web content management to data-driven customer journeys is full of opportunity, and it has become clear that customers and partners are excited to take advantage of these new technology trends. This year's Acquia Engage showed how our own transformation will empower organizations to take advantage of new technology and transform how they do business.

The more you use a tool every day, the more knowledge you build up about it, and the more ideas you collect for improving it. I'm using Splunk on a daily basis in many customers' environments as well as for personal purposes. When you have a big database of events, it quickly becomes mandatory to deploy techniques that help you extract juicy information from this huge amount of data. The classic way to hunt is to submit IOCs to Splunk (IP addresses, domains, hashes, etc.) and to schedule searches or search them in real time. A classic schema is:

Splunk Data Flow

Inputs are logs, OSINT sources or output from 3rd party tools. Outputs are enriched data. A good example is to use the MISP platform. Useful IOC’s are extracted at regular interval via the API and injected into Splunk for later searching and reporting.

# wget --header 'Authorization: xxxxx' \
       --no-check-certificate \
       -O /tmp/domain.txt \

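The export-and-inject step above can be sketched in Python. This is a minimal sketch under stated assumptions: the JSON layout below (`{"response": {"Attribute": [...]}}`) is a simplified stand-in for a real MISP export (the exact REST endpoint and response shape depend on your MISP version), and the output is a CSV file that Splunk can consume as a lookup.

```python
import csv
import io
import json

def misp_export_to_lookup(misp_json, csv_file):
    """Flatten a (simplified) MISP attribute export into a CSV lookup.

    Assumes a layout like {"response": {"Attribute": [...]}} with 'type'
    and 'value' keys per attribute -- adjust to your MISP version's output.
    """
    data = json.loads(misp_json)
    attributes = data.get("response", {}).get("Attribute", [])
    writer = csv.writer(csv_file)
    writer.writerow(["type", "value"])  # header row for the Splunk lookup
    for attr in attributes:
        writer.writerow([attr.get("type", ""), attr.get("value", "")])
    return len(attributes)

# Inline, made-up export instead of a live API call:
sample = json.dumps({"response": {"Attribute": [
    {"type": "domain", "value": "evil.example"},
    {"type": "ip-dst", "value": "192.0.2.1"},
]}})
buf = io.StringIO()
count = misp_export_to_lookup(sample, buf)
```

The resulting CSV can then be dropped into a Splunk lookup directory and referenced from searches.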
This process has a limit: new IOCs are not immediately available when they are exported on a daily basis (or every x hours). When we see new major threats like Bad Rabbit last week, it is useful to have a way to search for the first IOCs released by security researchers. How to achieve this? You can run the export procedure manually by opening a session on the Splunk server and executing commands (but people must have access to the console) or… use a custom search command! Splunk has a very nice query language but did you know that you can extend it with your own commands? How?

A Splunk custom search command is just a small program written in a language that can be executed in the Splunk environment. My choice was to use Python. There is an SDK available. The principle is simple: input data are processed to generate new output data. The basis of any computer program.

I wrote a custom search command that interacts with MISP to get IOCs. Example:

Custom Search Command Example 1

The command syntax is:

|getmispioc [server=https://host:port] 

The only mandatory parameter is either 'eventid' (to return IOCs from a specific event) or 'last' (to return IOCs from the last x hours, days, weeks or months). You can filter the returned data with the 'ids_only' flag and/or a specific category or type. Example:

|getmispioc last=2d onlyids=y type=ip-dst

Custom Search Command Example 2

Now you can integrate the command into more complex queries to search for IOCs across your logs. Here is an example with Bind logs, searching for interesting domains:

[|getmispioc last=5d type=domain
 |rename value as query
 |fields query

Custom Search Command Example 3

The custom command is based on PyMISP. The script and installation details are available on my github account.

[The post Splunk Custom Search Command: Searching for MISP IOC’s has been first published on /dev/random]

I published the following diary on “Some Powershell Malicious Code“.

Powershell is a great language that can interact at a low-level with Microsoft Windows. While hunting, I found a nice piece of Powershell code. After some deeper checks, it appeared that the code was not brand new but it remains interesting to learn how a malware infects (or not) a computer and tries to collect interesting data from the victim… [Read more]

[The post [SANS ISC] Some Powershell Malicious Code has been first published on /dev/random]

Is global warming caused by humans, or is it a natural phenomenon?

Climate-sceptic rhetoric is so toxic that it has managed to create a debate where, if you think about it even a little, there should not be the shadow of a doubt.

No need to invoke dozens of studies, scientific consensus or any argument from authority. Switch your brain on and give me five minutes to explain.

The total amount of carbon

If we treat the contribution of meteorites and the escape of the atmosphere into space as negligible, which they are, we can consider the number of carbon atoms on Earth to be fixed.

There is therefore a set number of carbon atoms on planet Earth. For billions of years, these atoms existed essentially in mineral form (graphite, diamond), in organic form (all living beings) and as CO2 in the atmosphere.

Carbon in mineral form is stable and its quantity has never really changed since the planet formed. We can therefore safely focus on the carbon atoms that are either in living beings (you are essentially made of carbon atoms) or in the atmosphere as CO2.

The cycle of life

Plants feed on the CO2 in the atmosphere to capture the carbon they need to live. They then release the excess oxygen, which for them is a waste product. Splitting CO2 into carbon and oxygen is an endothermic reaction that requires energy. That energy is supplied by the sun through photosynthesis.

Aerobic living beings, which include us, feed on other living beings (plants, animals) to capture the carbon atoms they need. These carbon atoms are stored and burned with oxygen to produce energy. The waste product is CO2. Burning carbon is an exothermic reaction that releases energy.

In short, you eat carbon from plants or other animals, you store it as sugar and fat and, when your body needs energy, those atoms react with the oxygen brought in by breathing and the bloodstream. The reaction produces CO2, which you exhale, and energy, part of which dissipates as heat. That is why you feel hot and out of breath during exercise: your body is burning more carbon than usual, it overheats and has to get rid of much more CO2.

A delicate equilibrium

As we have seen, the richer the atmosphere is in CO2, the more carbon plants have available and the more they grow. This is in fact a simple experiment: in a high-CO2 environment, plants flourish far more.

But if there is more carbon in the plants, there is necessarily less in the atmosphere. The result is an equilibrium in which the carbon captured by plants matches the carbon released by the respiration of animals (or by decomposing plants).

Since CO2 is a greenhouse gas, this carbon equilibrium has a direct impact on the planet's climate. And, a few billion years ago, that equilibrium produced a climate much warmer than today's.

Carbon fossilisation and the climate

However, one process broke this equilibrium. When they died, some living things (cells, plants or animals) sank into the ground. The carbon they were made of could therefore not return to the atmosphere, either by decomposing or by serving as food for other animals.

Underground, pressure and time eventually turned these remains into oil, coal or natural gas.

With all that carbon no longer available at the surface, a new equilibrium emerged with less and less CO2 in the atmosphere, which led to a general cooling of the planet. That ice age saw the appearance of Homo sapiens.

The obvious effect of "defossilisation"

While burning wood and breathing are activities that produce CO2, they do not disturb the planet's carbon equilibrium. The carbon atom in the CO2 molecule produced is already part of the current cycle: it was in the atmosphere very recently and was captured by a living being before returning there.

On the other hand, it should be obvious that digging up fossil carbon (oil, gas, coal) and releasing it into the atmosphere by burning it necessarily increases the total amount of carbon in the planet's cycle and, from there, the amount of CO2 in the atmosphere and thus the temperature. That is why it is absurd to compare the CO2 emissions of a cyclist and a car: only the car "defossilises" carbon and has an impact on the climate.

Such an upheaval could in theory be offset by an increase in vegetation absorbing the excess CO2. Unfortunately, the change is too fast for vegetation to adapt. Worse: we are shrinking that vegetation, mainly through deforestation in the Amazon.

Burning fossil fuels therefore has a direct effect on global warming. If we burned all of the planet's fossil fuel reserves, Antarctica would melt completely, ice and snow would disappear from the planet and sea level would be 30 to 40 metres above today's.

So why is there a debate?

While scientists are absolutely unanimous that burning fossil fuels intensifies global warming, this truth is particularly inconvenient for the economic world, which literally lives by burning fossil fuels.

For a while, the idea was therefore floated that the planet was in the warming phase of a natural climate cycle. Different models then competed to determine humanity's share of responsibility for the warming.

But that debate is plainly absurd. It is as if two people on the first floor of a burning house were debating the origin of the fire: accidental short circuit or arson? By now it should be clear to you that burning fossil fuels intensifies global warming, making human responsibility indisputable.

Yet these debates have been exploited by the economic world: "Look, scientists disagree on some details of global warming. Therefore global warming does not exist."

This anti-science strategy is a familiar one: the tobacco industry, the Roundup scandal, the creationists. All claim that if scientists disagree on some details, nothing is certain, and if nothing is certain we must carry on as before. If need be, a few scientists can be paid off to sow doubt where the consensus was complete.

Unlike the creationists or the tobacco industry, whose impact on the planet remains relatively limited, the dangerous ignorance of the climate sceptics serves the economic interests of the entire world! This false debate allows anyone who uses a car, any industrialist burning fossil fuels, any employee living indirectly off our economy, to shed their responsibility.

In summary

Burning fossil fuels releases into the atmosphere carbon that was previously inert (hence the word "fossil"). More carbon in the atmosphere means more greenhouse effect and therefore a higher temperature. That is airtight and absolutely indisputable. But climate scepticism speaks to us because it lets us shed our responsibility and avoid questioning our way of life.

We are in a burning house, but since some people think the arsonist merely stoked a fire that was already smouldering, we can declare: there is no fire!


Photos by Lukas Schlagenhauf, US Department of Agriculture, Cameron Strandberg.

Enjoyed this read? Support the author on Tipeee, Patreon, Paypal or Liberapay. Even a symbolic donation makes all the difference! Then let's meet again on Facebook, Medium, Twitter or Mastodon.

This text is published under the CC-By BE licence.

October 29, 2017

The F-35 shows that the West is too foolish to build aircraft that are, by design and by role, actually fit for purpose.

Instead, they want a political aircraft that ends up being good at nothing.

What can I say, as a techie?

Well, fools.

October 27, 2017

Michel Bauwens in Mons, November 16, 2017. On Thursday, November 16, 2017, the 63rd Mons session of Belgium's Jeudis du Libre will take place; it is in fact an event organised by Creative Valley at the Mundaneum, to which the JDL will actively contribute. The format and the venue are exceptional, and so is the schedule:

  • 15:00 – 17:30: ideation workshops
  • 18:00 – 19:30: lecture by Michel Bauwens
  • 19:30 – 21:30: networking evening

The topic of this session: Saving the world? Towards a collaborative economy through the pooling of resources… Or how everyone's contribution becomes the driver of the innovation process.

  • Theme: community
  • Audience: general public
  • Facilitator and speaker: Michel Bauwens
  • Venue: Mundaneum, 76 rue de Nimy, 7000 Mons (see this map on the Openstreetmap site)

Full programme, description of the workshops and the lecture, registration required:

The Jeudis du Libre in Mons are also supported by our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If this monthly series interests you, feel free to check the agenda and subscribe to the mailing list to receive all the announcements.

As a reminder, the Jeudis du Libre are intended as forums for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organised on the premises of, and in collaboration with, Mons higher-education institutions involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which promotes free software.

October 26, 2017

How augmented reality can be used to superimpose product information

Last spring, Acquia Labs built a chatbot prototype that helps customers choose recipes and plan shopping lists with dietary restrictions and preferences in mind. The ability to interact with a chatbot assistant rather than having to research and plan everything on your own can make grocery shopping much easier. We wanted to take this a step further and explore how augmented reality could also improve the shopping experience.

The demo video above features how a shopper named Alex can interact with an augmented reality application to remove friction from her shopping experience at Freshland Market (a fictional grocery store). The Freshland Market mobile application not only guides Alex through her shopping list but also helps her to make more informed shopping decisions through augmented reality overlays. It superimposes useful information such as price, user ratings and recommended recipes, over shopping items detected by a smartphone camera. The application can personalize Alex's shopping experience by highlighting products that fit her dietary restrictions or preferences.

What is exciting about this demo is that the Acquia Labs team built the Freshland Market application with Drupal 8 and augmented reality technology that is commercially available today.

An augmented reality architecture using Drupal and Vuforia

The first step in developing the application was to use an augmented reality library, Vuforia, which identifies pre-configured targets. In our demo, these targets are images of product labels, such as the tomato sauce and cereal labels shown in the video. Each target is given a unique ID. This ID is used to query the Freshland Market Drupal site for content related to that target.

The Freshland Market site stores all of the product information in Drupal, including price, dietary concerns, and reviews. Thanks to Drupal's web services support and the JSON API module, Drupal 8 can serve content to the Freshland Market application. This means that if the Drupal content for Rosemary & Olive Oil chips is edited to mark the item on sale, this will automatically be reflected in the content superimposed through the mobile application.
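To illustrate the kind of response handling this involves, here is a hedged Python sketch that pulls overlay fields out of a JSON:API document. The field names (`field_price`, `field_rating`) and the sample payload are hypothetical; the real names depend on how the Freshland Market content type is modelled in Drupal.

```python
import json

def product_overlay_fields(jsonapi_doc):
    """Extract overlay-ready fields from a JSON:API response body.

    The field names (field_price, field_rating) are hypothetical; the
    real names depend on the Drupal content model.
    """
    attrs = json.loads(jsonapi_doc)["data"]["attributes"]
    return {
        "title": attrs.get("title"),
        "price": attrs.get("field_price"),
        "rating": attrs.get("field_rating"),
    }

# A made-up response for a product node, keyed by the target ID:
sample = json.dumps({
    "data": {
        "type": "node--product",
        "id": "target-1234",
        "attributes": {
            "title": "Rosemary & Olive Oil Chips",
            "field_price": 3.49,
            "field_rating": 4.5,
        },
    }
})
overlay = product_overlay_fields(sample)
```

The mobile application would fetch such a document for each recognized target and render the returned fields as the augmented reality overlay.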

In addition to information on price and nutrition, the Freshland Market site also stores the location of each product. This makes it possible to guide a shopper to the product's location in the store, evolving the shopping list into a shopping route. This makes finding grocery items easy.

Augmented reality is building momentum because it moves beyond the limits of a traditional user interface, or in our case, the traditional website. It superimposes a digital layer onto a user's actual world. This technology is still emerging, and is not as established as virtual assistants and wearables, but it continues to gain traction. In 2016, the augmented reality market was valued at $2.39 billion and it is expected to reach $61.39 billion by 2023.

What is exciting is that these new technology trends require content management solutions. In the featured demo, there is a large volume of product data and content that needs to be managed in order to serve the augmented reality capabilities of the Freshland Market mobile application. The Drupal community's emphasis on making Drupal API-first in addition to supporting distributions like Reservoir means that Drupal 8 is prepared to support emerging channels.

If you are ready to start reimagining how your organization interacts with its users, or how to take advantage of new technology trends, Acquia Labs is here to help.

Special thanks to Chris Hamper and Preston So for building the Freshland Market augmented reality application, and thank you to Ash Heath and Drew Robertson for producing the demo video.

I was in for a nice surprise this week. Andy Georges, Lieven Eeckhout and I received an ACM SIGPLAN award for the most influential OOPSLA 2007 paper. Our paper was called "Statistically rigorous Java performance evaluation" and was published 10 years ago at the OOPSLA conference. It helped set a standard for benchmarking the performance of Java applications. A quick look on the ACM website shows that our paper has been cited 156 times. The award was totally unexpected, but much appreciated. As much as I love my current job, thinking back to some of my PhD research makes me miss my academic work. Congratulations Andy and Lieven!

October 25, 2017

The idea for this Docker container came after reading Micah Hoffman's excellent blog post: Dark Web Report + TorGhost + EyeWitness == Goodness. Like Micah, I'm also receiving a daily file with new websites discovered on the (dark|deep) web (name it as you prefer). This service is provided by the @hunchly Twitter account. Once a day, you get an XLS sheet with newly discovered websites. Micah explained how to process these URLs automatically. He's using TorGhost with EyeWitness (written by Chris Truncer). It worked very well but it requires the installation of many libraries (EyeWitness has a lot of software dependencies) and I hate polluting my system with libraries used by only a few tools. That's why I'm using Docker to run everything in a container. Also, processing a new XLS file every day is really a pain! So let's automate as much as possible. To process the XLS file, I wrote a small Python script (see my previous blog post):

# ./ -w 'New Today' -c A -r 2- -s /tmp/HiddenServices.xlsx

I built a Docker container (‘TorWitness’)  that performs the following tasks:

  • Setup TorGhost and connect to the Tor network
  • Extract .onion URLs from XLS files
  • Take screenshots of the URLs via EyeWitness

But, sometimes, it can be helpful to visit other websites (not only on the .onion) via the Tor network. That’s why you can pass your own list of URLs as an argument.

Here is an example of container execution:

$ cat $HOME/torwitness/urls.txt
$ docker run \
     --rm \
     -it \
     -v $HOME/torwitness:/data \
     --cap-add=NET_ADMIN --cap-add=NET_RAW \
     torwitness \
      _____           ____ _               _
     |_   _|__  _ __ / ___| |__   ___  ___| |_
       | |/ _ \| '__| |  _| '_ \ / _ \/ __| __|
       | | (_) | |  | |_| | | | | (_) \__ \ |_
       |_|\___/|_|   \____|_| |_|\___/|___/\__|
    v2.0 - SusmithHCK |
[12:19:18] Configuring DNS resolv.conf file.. [done]
 * Starting tor daemon...  [OK]
[12:19:18] Starting tor service..  [done]
[12:19:19] setting up iptables rules [done]
[12:19:19] Fetching current IP...
[12:19:19] CURRENT IP :
Using environment variables:
Found Onion URLs to process:
#                                  EyeWitness                                  #
Starting Web Requests (2 Hosts)
Attempting to screenshot
Attempting to screenshot
Finished in 22.2635469437 seconds
$ cd $HOME/torwitness
$ ls
results-20171025113745 urls.txt
$ cd results-20171025113745
$ firefox result.html

If you don't pass a file to the container, it will parse the XLSX files in the /data directory and extract .onion URLs. There are multiple cases where this container can be helpful:

  • Browsing the dark web
  • Performing reconnaissance phase
  • Hunting
  • Browsing attacker’s resources

To build the container, instructions and the Dockerfile are available in my GitHub repository.
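As a rough illustration of the URL-extraction step, here is a small Python sketch (my own illustration, not the container's actual code) that pulls v2 .onion URLs out of arbitrary text with a regular expression:

```python
import re

# v2 onion addresses are 16 base32 characters; this pattern is an
# approximation good enough for triage (it ignores newer v3 addresses).
ONION_RE = re.compile(r"https?://[a-z2-7]{16}\.onion\S*", re.IGNORECASE)

def extract_onion_urls(text):
    """Return the deduplicated .onion URLs found in a blob of text."""
    seen = []
    for url in ONION_RE.findall(text):
        if url not in seen:
            seen.append(url)
    return seen

sample = ("New today: http://expyuzz4wqqyqhjn.onion/ and "
          "http://expyuzz4wqqyqhjn.onion/ again, plus https://example.com/")
urls = extract_onion_urls(sample)
```

The deduplicated list can then be fed straight to EyeWitness for screenshotting.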

[The post “TorWitness” Docker Container: Automated (Tor) Websites Screenshots has been first published on /dev/random]

October 24, 2017

Excel sheets are very common files in corporate environments. Excel is definitely not a security tool, but it's not rare to find useful information stored in such files. When these data must be processed for threat hunting or to collect IOCs, it is mandatory to automate the processing as much as possible. Here is a good example: every day, I receive one email that contains a list of new URLs found on the dark web (Tor network). I process them automatically with a script, but it's a painful task: open the Excel sheet, select the URLs, copy them, create a text file, paste them and feed the file to the next script. Being a lazy guy, why not automate this?

Luckily, Python has a nice module called openpyxl which is very helpful to read/write XLS[X|M] files. Reading the content of a file can be performed with only a few lines of Python but I decided to write a small tool that could extract specific zones of the sheet based on the arguments. I called the script Its syntax is easy to understand:

# ./ -h
Usage: [options] <file> ...

 --version             show program's version number and exit
 -h, --help            show this help message and exit
 -w WORKBOOK, --workbook=WORKBOOK
                       Workbook to extract data from
 -c COLS, --cols=COLS  Read columns (Format: "A", "A-" or "A-B")
 -r ROWS, --rows=ROWS  Read rows (Format: "1", "1-" or "1-10")
 -m MAX, --max=MAX     Process maximum rows
 -p, --prefix          Display cell name
 -s, --stop            Stop processing when empty cell is found

You can specify the cells to dump with the '--cols' and '--rows' parameters: a single value, a range, or an open-ended range, e.g. 'A', 'A-C' or 'A-' for columns and '1', '1-100' or '1-' for rows. Multiple ranges can be separated by commas. A maximum number of cells to report can be specified (the default is 65535). You can also stop processing cells when the first empty one is found. If no ranges are specified, the script dumps all cells starting from A1 (be careful!).
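That range syntax can be sketched with a small parser. This is a hedged Python illustration, not the script's actual code; the cap used for open-ended ranges ('A-') is an assumption.

```python
import string

# Arbitrary cap for open-ended ranges ("A-"); an assumption, the real
# script uses its own maximum.
MAX_COL = 256

def col_to_index(col):
    """Convert a spreadsheet column name ('A', 'AA', ...) to a 1-based index."""
    index = 0
    for ch in col.upper():
        index = index * 26 + (string.ascii_uppercase.index(ch) + 1)
    return index

def parse_col_spec(spec, max_col=MAX_COL):
    """Expand a column spec ('A', 'A-C' or 'A-') into 1-based indexes.

    Multiple comma-separated ranges are accepted, mirroring the syntax
    described above.
    """
    cols = []
    for part in spec.split(","):
        if "-" in part:
            start, _, end = part.partition("-")
            lo = col_to_index(start)
            hi = col_to_index(end) if end else max_col
        else:
            lo = hi = col_to_index(part)
        cols.extend(range(lo, hi + 1))
    return cols

# 'A-C' expands to columns 1..3, 'A,F' to columns 1 and 6.
```

Row specs ('1', '1-100', '1-') can be parsed the same way, minus the letter-to-index conversion.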

Here is a simple sheet with an inventory of servers:

Example 1



To extract the list of IP addresses, we can use with the following syntax:

$ -c A -r 2-600 -s -w 'Sheet1' test.xlsx

Now, if the sheet is more complex and we have IP addresses spread in multiple cells:

Example 2



We use like this:

$ -c A,F -r 3-600 -s -w 'Sheet1' test.xlsx

The script is available on my github repository:



[The post Automatic Extraction of Data from Excel Sheet has been first published on /dev/random]

I published the following diary on “Stop relying on file extensions“.

Yesterday, I found an interesting file in my spam trap. It was called ‘16509878451.XLAM’. To be honest, I was not aware of this extension and I found this on the web: “A file with the XLAM file extension is an Excel Macro-Enabled Add-In file that’s used to add new functions to Excel. Similar to other spreadsheet file formats, XLAM files contain cells that are divided into rows and columns that can contain text, formulas, charts, images and… macros!” Indeed, the file contained some VBA code… [Read more]


[The post [SANS ISC] Stop relying on file extensions has been first published on /dev/random]

October 23, 2017

Asynchronous commands are the typical commands from the Model-View-ViewModel world, except that they return asynchronously.

Whenever that happens we want result reporting and progress reporting. We basically want something like this in QML:

Item {
  id: container
  property ViewModel viewModel: ViewModel {}

  Connections {
    target: viewModel.asyncHelloCommand
    onExecuteProgressed: {
        progressBar.value = value
        progressBar.maximumValue = maximum
    }
  }
  ProgressBar {
     id: progressBar
  }
  Button {
    enabled: viewModel.asyncHelloCommand.canExecute
    onClicked: viewModel.asyncHelloCommand.execute()
  }
}

How do we do this? First we start by defining an AbstractAsyncCommand (impl. of protected APIs here):

class AbstractAsyncCommand : public AbstractCommand {
    Q_OBJECT
public:
    AbstractAsyncCommand(QObject *parent=0);

    Q_INVOKABLE virtual QFuture<void*> executeAsync() = 0;
    virtual void execute() Q_DECL_OVERRIDE;
signals:
    void executeFinished(void* result);
    void executeProgressed(int value, int maximum);
protected:
    QSharedPointer<QFutureInterface<void*>> start();
    void progress(QSharedPointer<QFutureInterface<void*>> fut, int value, int total);
    void finish(QSharedPointer<QFutureInterface<void*>> fut, void* result);
private:
    QVector<QSharedPointer<QFutureInterface<void*>>> m_futures;
};

After that we provide an implementation:

#include <QThreadPool>
#include <QRunnable>

#include <MVVM/Commands/AbstractAsyncCommand.h>

class AsyncHelloCommand: public AbstractAsyncCommand
{
    Q_OBJECT
public:
    AsyncHelloCommand(QObject *parent=0);
    bool canExecute() const Q_DECL_OVERRIDE { return true; }
    QFuture<void*> executeAsync() Q_DECL_OVERRIDE;
private:
    void* executeAsyncTaskFunc();
    QSharedPointer<QFutureInterface<void*>> current;
    QMutex mutex;
};

#include "asynchellocommand.h"

#include <QtConcurrent/QtConcurrent>

AsyncHelloCommand::AsyncHelloCommand(QObject* parent)
    : AbstractAsyncCommand(parent) { }

void* AsyncHelloCommand::executeAsyncTaskFunc()
{
    for (int i=0; i<10; i++) {
        qDebug() << "Hello Async!";
        progress(current, i, 10);
    }
    return nullptr;
}

QFuture<void*> AsyncHelloCommand::executeAsync()
{
    current = start();
    QFutureWatcher<void*>* watcher = new QFutureWatcher<void*>(this);
    connect(watcher, &QFutureWatcher<void*>::progressValueChanged, this, [=]{
        progress(current, watcher->progressValue(), watcher->progressMaximum());
    });
    connect(watcher, &QFutureWatcher<void*>::finished, this, [=]{
        void* result=watcher->result();
        finish(current, result);
    });
    watcher->setFuture(QtConcurrent::run(this, &AsyncHelloCommand::executeAsyncTaskFunc));
    QFuture<void*> future = current->future();

    return future;
}

You can find the complete working example here.

October 21, 2017

Let’s go for more wrap-ups. The second day started smoothly with Haroon Meer’s keynote. There was only one track today, the second room being fully dedicated to hackerspaces. Haroon is a renowned speaker and the title of his keynote was “Time to play ‘D’”. The intro was simple: nothing new, no 0-day; he decided to build his keynote on his previous talks, especially one from 2011: “Penetration testing considered harmful“. Things have changed considerably from a hardware and software point of view but we are still facing the same security issues. Example: a modern computer is built from multiple computers (think about the MacBook Pro and its touch bar, which is based on the same hardware as the Apple Watch). Generic security solutions fail and an AV can still be easily bypassed. He gave many good facts and pieces of advice. Instead of buying expensive appliances, use this money to hire skilled people. But usually, companies have a security issue and they fix it by deploying a solution that… introduces new issues. He insisted and gave examples of “Dirty Cheap Solutions”. With a few lines of PowerShell, we can easily detect new accounts created in an Active Directory. Haroon gave another example with a service he created: you create files, URLs or DNS records that are linked to an email address and, in case of breach or unexpected access, an alert is sent to you. Another one: regular people don’t use command-line tools like ‘uname’, ‘ifconfig’ or ‘whoami’. Create alerts to report when they are used!
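That last tip is cheap to implement. A minimal sketch (in Python rather than PowerShell), assuming an invented audit-log format where each line is "timestamp user command args":

```python
# "Dirty cheap detection": flag recon commands (whoami, uname, ifconfig)
# that regular users almost never type. The log format and the command
# list are assumptions for illustration.
RECON_COMMANDS = {"whoami", "uname", "ifconfig", "id"}

def suspicious_entries(audit_lines):
    """Return (user, command) pairs worth alerting on."""
    hits = []
    for line in audit_lines:
        # assumed format: "<timestamp> <user> <command> [args...]"
        parts = line.split()
        if len(parts) >= 3 and parts[2] in RECON_COMMANDS:
            hits.append((parts[1], parts[2]))
    return hits

log = [
    "2017-10-21T10:00:01 alice ls -la",
    "2017-10-21T10:00:05 bob whoami",
    "2017-10-21T10:00:09 bob uname -a",
]
print(suspicious_entries(log))  # [('bob', 'whoami'), ('bob', 'uname')]
```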

The first regular talk was given by Tobias Schrödel: “Hacking drones and buying passwords in the Darknet“. What’s the relation between them? Nothing, Tobias just accepted to cover these two topics! The talk was very entertaining and Tobias is a very good speaker. The first part (drones) was delivered in a management style (with a tie) and the second one in a classic t-shirt. Why hack drones? In Germany, like in many countries, the market for drones is growing quickly. Small models (classified more as “toys”) use wireless networks to be controlled and to stream the pictures from the camera. Those drones provide an SSID and DHCP and are managed via a web interface, so they can be compared to a flying router! There are different ways to take down a drone. The safest solution is to use eagles because they can drop the drone out of the zone that must be secured. The attack he demonstrated was a simple de-auth attack. The second part of the talk focused on the black market. Not a lot of people have already bought stuff on the Darknet (or they hide it), but there are nice webshops where you can buy passwords for many official shops like eBay, Zalando, Paypal, etc. But why should a company buy passwords on the Darkweb? A few years ago, Dropbox suffered from a mega leak with millions of passwords in the wild. That’s bad, but it’s even worse when corporate email addresses are present in sensitive leaks like Ashley Madison. In Germany, a big company found 10 email addresses in this leak. Even if employees are free in their private life, this could have a huge impact in case of blackmailing: “Give us access to these internal documents or we make your wife/husband aware of your Ashley Madison account”. Buying such data is a way for a company to protect its business.

Then, Adrian Vollmer presented “Attacking RDP with Seth – How to eavesdrop on poorly secured RDP connections“. Adrian explained in detail how the RDP protocol authenticates users. In the past, he used Cain & Abel to attack RDP sessions, but the tool is quite old and unmaintained (I’m still using it from time to time). So, he decided to write his own tool called Seth. It exploits a misconfiguration in many RDP services. RDP security is similar to SSL but not exactly the same. He explained how this can be abused to downgrade from Kerberos to RDP Security. In this case, a warning popup is displayed to the victim, but it is almost always ignored.

After the morning coffee break, I expected a lot from this talk: “A heaven for Hackers: Breaking Log/SIEM Products” by Mehmet Ince. The talk was not based on ways to abuse a SIEM via the logs that it processed but based on the fact that a SIEM is an application integrating multiple components. The methodology he used was:

  • Read the documentation
  • Understand the features
  • Get a trial version
  • Break it to access console
  • Define attack vector(s)
  • Find a vulnerability

He reported three cases. The first one was AlienVault. They downloaded two versions (the latest one and the previous one) and made a big diff of the files. Based on this, three problems were found: object injection, authentication bypass and IP spoofing through XFF. By putting the three together, they were able to get a SQL injection, but RCE is always better. They achieved it by creating a rule in the application that triggered a command when an SSH denied connection was reported. Evil! The second case targeted ManageEngine. The product design was bad: passwords used to connect to remote Windows systems were stored in a database. It was possible to get access to a console to perform SQL queries, but the console obfuscated passwords. By renaming the field ‘password’ to ‘somethingelse’, passwords were displayed in clear text! (“SELECT password AS somethingelse FROM …”). In the third case, LogSign, it was more destructive: it was possible to get rid of the logs… so simple! This was a nice talk.
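The column-aliasing trick is easy to reproduce. A minimal sketch with an in-memory SQLite database; the schema and the masking logic are invented for illustration, not ManageEngine’s actual code:

```python
import sqlite3

# A naive console masks any result column literally named "password";
# aliasing the column in the query sidesteps the masking entirely.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE creds (host TEXT, password TEXT)")
conn.execute("INSERT INTO creds VALUES ('srv01', 's3cr3t')")

def render(cursor):
    """Render rows, masking columns whose name is exactly 'password'."""
    cols = [d[0] for d in cursor.description]
    return [
        tuple("******" if c == "password" else v for c, v in zip(cols, row))
        for row in cursor
    ]

masked = render(conn.execute("SELECT host, password FROM creds"))
bypass = render(conn.execute("SELECT host, password AS somethingelse FROM creds"))
print(masked)   # [('srv01', '******')]
print(bypass)   # [('srv01', 's3cr3t')]
```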

Then, Ben Seri & Gregory Vishnepolsky presented “BlueBorne Explained: Exploiting Android devices over the air“. This vulnerability was in the news recently and is quite important:

  • 5.3B devices vulnerable in the wild
  • 8 vulnerabilities, 4 critical
  • Multiple OS: Android, Linux, Windows, iOS
  • No user interaction or auth
  • Enables RCE, MitM and info leaks

They reviewed the basics of the Bluetooth protocol and the different services (like SDP – “Service Discovery Protocol”). They gave a huge amount of details… They finished with a live demo by compromising an Android phone via the BNEP service (“Bluetooth Network Encapsulation Protocol”). Difficult to follow for me, but a huge piece of research!

After a lunch break and interesting discussions, back to the main theatre for the last set of talks. There were two presentations that I found less interesting (IMHO). Anto Joseph presented “Bug hunting using symbolic virtual machines“. Symbolic execution combined with fuzzing is a winning combination to find vulnerabilities. Symbolic execution is a way to analyse the behaviour of a program to determine what inputs cause each part of the program to execute. The tool used by Anto was klee. He made a lot of demos to explain how the tool works. It looks to be a great tool, but it was difficult to follow for my poor brain.

The next talk started late due to a video issue with the speaker’s laptop. Dmitry Yudin presented “PeopleSoft: HACK THE Planet^W university“. By university, we mean here the PeopleSoft Campus Solutions suite, which is used in more than 1000 universities worldwide. The main components are a browser, a web server, an application server, a batch server and a database. Multiple vulnerabilities have been found in this suite; Dmitry explained CVE-2017-10366. He detailed all the steps to jump from one service to another until a complete compromise of the suite.

After the last break, the day finished with two interesting presentations. Kirils Solovjovs presented “Tools for effortless reverse engineering of MikroTik routers“. MikroTik routers are used worldwide and can be considered a nice target. They run Linux, but RouterOS is based on an old kernel from 2012 and is closed source. So, we need a jailbreak! Kirils explained two techniques to jailbreak the router. He also found a nice backdoor which requires a specific file to be created on the file system. He explained many features of RouterOS and also some security issues, like in the backup process: it was possible to create a file containing ‘../../../../’, and thus to create the file required by the backdoor. He released the tools here.

To close the day and the conference, Gábor Szappanos talked about “Office Exploit Builders“. Why? Because Office documents remain the main vector of infection to drop malware. It’s important to have “good” tools to generate malicious documents, but who’s writing them? Usually, VBA macros are used but, in modern versions of Office, macros are disabled by default. It’s better to use an exploit. Based on a study conducted two years ago, APT groups lack the knowledge to build malicious documents, so they need tools! Gábor reviewed three of them:

  • AKBuilder: Active since 2015, typically used by Nigerians scammers and cost ~$500
  • Ancalog Exploit Builder: Peak of activity in 2016, also used by scammers. Price is ~$300 (retired)
  • Microsoft Word Intruder: used by more “high-profile” actors, it can drop more dangerous pieces of malware. Written in PHP for Windows, its price is ~$20000-$35000!

A nice presentation to close the day! So, this closes the two days of Hacktivity 2017, the first edition for me. Note that the presentations will be available on the website in the coming days!

[The post Hacktivity 2017 Wrap-Up Day 2 has been first published on /dev/random]

October 20, 2017

My wrap-up crazy week continues… I’m now in Budapest to attend Hacktivity for the first time. During the opening ceremony, some figures were given about this event: 14th edition(!), 900 attendees from 23 different countries and 36 speakers. Here is a nice introduction video. The venue is nice with two tracks in parallel, workshops (called “Hello Workshops”), a hacker center, sponsors’ booths and… a wall-of-sheep! After so many years, you realize immediately that it is well organized and everything is under control.

As usual, the day started with a keynote. Costin Raiu from Kaspersky presented “Why some APT research is like palaeontology?“. Daily, Kaspersky collects 500K malware samples, and less than 50 are really interesting for his team. The idea to compare this job with palaeontology came from a picture of Nessie (the Loch Ness monster): we see something in the picture, but are we sure that it’s a monster? Costin gave the example of Regin: they discovered the first sample in 1999, one in 2003 and 43 in 2007. Based on this, how can you be certain that you found something interesting? Finding IOCs and C2s is like finding the bones of a dinosaur. At the end, you have a complete skeleton and are able to publish your findings (the report). In the next part of the keynote, Costin gave examples of interesting cases they found, with nice stories like the 0-day that was discovered thanks to a comment left by the developer in his code. Costin’s advice is to learn Yara and to write good signatures to spot interesting stuff.

The first regular talk was presented by Zoltán Balázs: “How to hide your browser 0-days?“. It was a mix of crypto and exploitation. Zoltán’s research started after a discussion with a vendor that was sure to catch all kinds of 0-day exploits against browsers. “Challenge accepted” for him. The problem with 0-day exploits is that they quickly become ex-0-day exploits once they are distributed by exploit kits. Why? Security researchers quickly find samples and analyze them, and patches become available soon after. From an attacker’s point of view, this is very frustrating: you spend a lot of money and lose it quickly. The idea was to deliver the exploit over an encrypted channel between the browser and the dropper. The shellcode is encrypted and executed, then downloads the malware (also via a secure channel). Zoltán explained how he implemented the encrypted channel using ECDH (that was the crypto part of the talk). This is better than SSL because, if you control the client, it is too easy to play MitM and inspect the traffic; that is not possible with the replay protection Zoltán implemented. The proof of concept has been released.

Then another Zoltán came on stage: Zoltán Wollner with a presentation called “Behind the Rabbit and beyond the USB“. He started with a scene from the show Mr Robot where a Rubber Ducky is used to get access to a computer. Indeed, a classic-looking USB stick might have hidden/evil features. The talk was in fact a presentation of the Bash Bunny tool from Hak5. This USB stick is… whatever you want! A keyboard, a flash drive, an Ethernet/serial adapter and more! He demonstrated some scenarios:

  • QuickCreds: stealing credentials from a locked computer
  • EternalBlue

This device targets low-hanging fruit but… it just works! Note that it requires physical access to the target computer.

After the lunch break, Mateusz Olejarka presented “REST API, pentester’s perspective“. Mateusz is a pentester and, in his experience, he is facing more and more APIs when conducting penetration tests. The first time an API was spotted in an attack was when @sehacure pwned a lot of Facebook accounts via the API and the password reset feature: on the regular website he was rejected after a few attempts, but the anti-bruteforce protection was not enabled on the beta Facebook site! Today REST APIs are everywhere and most applications and web tools have one. An interesting number: by 2018, 50% of B2B exchanges will be performed via web APIs. The principle of an API is simple: a web service that offers methods and processes data in JSON (most of the time). Methods are GET/PUT/PATCH/DELETE/POST/… To test a REST API, we need some information: the endpoint, the documentation, access keys and sample calls. Mateusz explained how to find them. To find endpoints, we just try URIs like “/api”, “/v1”, “/v1.1”, “/api/v1” or “/ping”, “/status”, “/health”, … Sometimes the documentation is available online or returned by the API itself. To find keys, two reliable sources are:

  • Apps / mobile apps
  • Github!

Also, fuzzing can be interesting to stress test the API. This was one of my favourite talks, with plenty of useful information if you work in pentesting.
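The endpoint-discovery step can be sketched in a few lines; the path lists come from the talk, while the function name and URL scheme are illustrative (the actual probing, e.g. checking for non-404 responses, is left out):

```python
# Build the list of candidate API URLs to probe on a target host.
# Path candidates are taken from the talk; everything else is illustrative.
BASES = ["api", "v1", "v1.1", "api/v1"]
HEALTH = ["ping", "status", "health"]

def candidate_paths(host):
    """Return the candidate API URLs for a given host."""
    paths = ["/" + b for b in BASES] + ["/" + h for h in HEALTH]
    return [f"https://{host}{p}" for p in paths]

print(candidate_paths("example.com")[:3])
# ['https://example.com/api', 'https://example.com/v1', 'https://example.com/v1.1']
```

A real probe would then request each URL and keep those that answer with something other than 404.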

The next speaker was Leigh-Anne Galloway: “Money makes money: How to buy an ATM and what you can do with it“. She started with the history of ATMs. The first one was invented in 1967 (for Barclays in the UK). Today, there are 3.8M devices in the wild. The key players are Siemens Nixdorf, NCR and Fujitsu. She explained how difficult it was for her to simply buy an ATM: do you go through the official way or the “underground” way? After many issues, she finally had an ATM delivered to her home. Cool, but impossible to bring it into her apartment without causing damage, so she decided to leave it in the parking lot and perform the tests outside. In the second part, Leigh-Anne explained the different tests/attacks performed against the ATM: bruteforce, and attacks at the OS, hardware and software levels.

The event was split into two tracks, so I had to make choices. The afternoon started with Julien Thomas and “Limitations of Android permission system: packages, processes and user privacy“. He explained in detail how access rights and permissions are defined and enforced by Android. Amongst a deep review of the components, he also demonstrated an app that has no access once installed but, due to weaknesses in the revocation process, ends up with more access than initially granted.

Then Csaba Fitzl talked about malware and the techniques they use to protect themselves against security researchers and analysts: “How to convince a malware to avoid us?“. Malware authors are afraid of:

  • Security researchers
  • Sandboxes
  • Virtual machines
  • Hardened machines

Malware hates to be analysed and sometimes avoids infecting certain targets (e.g. checking the keyboard mapping to detect the country of the victim). Csaba reviewed several examples of known malware and how they detect if they are being monitored. The techniques are numerous and, as Csaba said, it could take weeks to review all of them. He also gave nice tips to harden your virtual machines/sandboxes to make them look like a real computer used by a human. Then he presented small utilities he wrote to protect potential victims. Example: mutex-grabber, which monitors and automatically creates the known mutexes on the local OS. The tools reviewed in the presentation are available here. Also a great talk with plenty of useful tips.

After the last coffee break, Harman Singh presented “Active Directory Threats & Detection: Heartbeat that keeps you alive may also kill you!“. Active Directories remain a juicy target because they are implemented in almost all organizations worldwide! He reviewed all the components of an Active Directory then explained some techniques like enumeration of accounts, how to collect data, how to achieve privilege escalation and access to juicy data.

Finally, Ignat Korchagin closed the day with a presentation “Exploiting USB/IP in Linux“. When he asked who knew or used USB/IP in the room, nobody raised a hand; nobody was aware of this technique, same for me! The principle is nice: USB/IP allows you to use a USB device connected to computer A from computer B. The USB traffic (URBs – USB Request Blocks) is sent over TCP/IP. More information is available here. This looks nice! But… the main problem is that the application-level protocol is implemented at kernel level! A packet is a header plus a payload; the kernel gets the size of the data to process from the header. This size can be controlled by an attacker, and we are facing a nice buffer overflow! This vulnerability is referenced as CVE-2016-3955. Ignat also found a nice name for his vulnerability: “UBOAT” for “(U)SB/IP (B)uffer (O)verflow (AT)tack“. He’s still looking for a nice logo :). Fortunately, to be vulnerable, many requirements must be fulfilled:

  • The kernel must be unpatched
  • The victim must use USB/IP
  • The victim must be a client
  • The victim must import at least one device
  • The victim must be root
  • The attacker must own the server or play MitM.

Ignat completed his talk with a live demo that crashed the computer (DoS), but there is probably a way to turn it into remote code execution.

Enough for today, stay tuned for the second day!

[The post Hacktivity 2017 Wrap-Up Day 1 has been first published on /dev/random]

October 19, 2017

The conference is already over and I’m currently waiting for my connecting flight in Munich; that’s the perfect opportunity to write my wrap-up. This one is shorter because I had to leave early to catch my flight to Hacktivity and I missed some talks scheduled in the afternoon. Thanks Lufthansa for rebooking my flight so early in the afternoon… Anyway, it started again early (8AM) and John Bambenek opened the day with a talk called “How I’ve Broken Every Threat Intel Platform I’ve Ever Had (And Settled on MISP)”. The title was well chosen because John is a big fan of OSINT. He collects a lot of data and provides it for free via feeds (available here). He started to extract useful information from malware samples because the main problem today is the flood of samples that are constantly discovered. But how to find relevant information? He explained some of the datasets he’s generating. The first one is DGA, or “Domain Generation Algorithm“. DNS is a key indicator and is used everywhere. Checking a domain name may also reveal interesting information via the Whois databases. Even if the data is fake, it can help to link different campaigns or malware families together and get more intelligence about the attacker. If you can reverse the algorithm, you can predict the upcoming domains, prepare yourself better and also start takedown operations. The second dataset was malware configurations. Yes, malware is configurable (examples: kill-switch domains, Bitcoin wallets, C2s, campaign IDs, etc). Mutexes can be useful to correlate malware from different campaigns, like DGAs. John is also working on a new dataset based on the tool Yalda. In the second part of his presentation, he explained why most solutions he tested to handle this amount of data failed (CIF, CRITS, ThreatConnect, STIX, TAXII). The problem with XML (and an advantage at the same time): XML can be very verbose when describing events. Finally, he explained how he’s now using MISP.
If you’re interested in OSINT, John is definitely a key person to follow, and he is also a SANS ISC handler.
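A toy illustration of the DGA idea above: if you can reimplement the algorithm, you can precompute the domains a sample will use on a given day and block or sinkhole them in advance. This particular algorithm is invented for illustration and matches no real malware family:

```python
import hashlib
from datetime import date

# Toy date-seeded DGA: deterministic, so defenders who recover the
# algorithm can generate tomorrow's domains today.
def dga_domains(day: date, count: int = 5, tld: str = ".com"):
    """Return the pseudo-random domains for a given day."""
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(seed).hexdigest()
        domains.append(digest[:12] + tld)
    return domains

print(dga_domains(date(2017, 10, 19)))
```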

The next talk was “Automation Attacks at Scale” by Will Glazier & Mayank Dhiman. Databases of stolen credentials are a goldmine for bad guys and are available everywhere on the Internet. Example: just by crawling Pastebin, it is possible to collect ~20K passwords per day (note: most of them are duplicates). It is tempting to test them, but this requires a lot of resources. A valid password has a value on the black market, but to test credentials at scale, attackers must spend money to buy resources when they are not available for free (or can’t be abused). Will and Mayank explained how attackers work to make a profit. They need tools to test credentials and collect information (e.g. Sentry MBA, Hydra, PhantomJS, Curl, Wget, …). They need fresh meat (credentials), IP addresses (for rotation, to avoid blacklists) and of course CPU resources. For IP rotation, they often use big cloud service providers (Amazon, Azure) because those big players will almost never be blacklisted. They can also use compromised servers or IoT botnets. In the second part of the talk, some pieces of advice were given to help detect them (e.g. most of them can be fingerprinted just via the User-Agent they use). Another good piece of advice is to keep an eye on your API logs to see if malicious activity (bruteforce attacks) is ongoing.
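That last piece of advice can be sketched with a few lines: flag source IPs with many failed logins across accounts in your API logs. The event format and threshold are assumptions for illustration:

```python
from collections import Counter

# Credential-stuffing heuristic: one IP producing many failed logins,
# typically spread across many different accounts.
def stuffing_suspects(events, threshold=3):
    """Return the set of source IPs with >= threshold failed logins."""
    failures = Counter(e["ip"] for e in events if e["status"] == "fail")
    return {ip for ip, n in failures.items() if n >= threshold}

events = [
    {"ip": "203.0.113.9", "user": f"user{i}", "status": "fail"} for i in range(4)
] + [{"ip": "198.51.100.2", "user": "alice", "status": "ok"}]

print(stuffing_suspects(events))  # {'203.0.113.9'}
```

In production, you would also fingerprint the User-Agent and rate-limit per IP range, since attackers rotate addresses.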

Then we switched to a pure hardware session with Obiwan666, who presented “Front door Nightmares. When smart is not secure“. The research started from a broken lock he found. The talk did not cover the “connected” locks that you can manage with a smartphone, but real security locks found in many enterprises and restricted environments. Such locks are useful because key management is easier: no need to replace the lock if a key is lost, the access rights must just be adapted on the lock. It is also possible to play with time constraints. They offer multiple ways to interact with the user: with something you have (an RFID token), something you are (biometrics) or something you know (a PIN code). Obiwan666 explained in detail how such locks are built and, thanks to his job and background in electronics, he has access to plenty of nice devices to analyze the target. He showed X-ray pictures of the lock (an X-ray scanner isn’t very common!). Then he explained different attack scenarios. The first one was trivial: sometimes the lock is mounted the wrong way round and the inner part is outside (“in the wild”). The second attack was a signal replay (locks use a serial protocol that can be sniffed and replayed; there is no protection). I liked the “brain implant” attack: you just buy a new lock (same model), program it to grant your access and replace the electronic part of the victim’s lock with yours… Of course, traditional lock-picking can be tested. Also, a thermal camera can reveal the PIN code if the lock has a pinpad. I know some organizations which would be very interested to test their locks against all these attacks! 🙂

After an expected coffee break, another awesome research was presented by Aaron Kaplan and Éireann Leverett: “What is the max Reflected Distributed Denial of Service (rDDoS) potential of IPv4?“. DDoS attacks based on UDP amplification are not new but remain quite effective. The four protocols in the scope of the research were: DNS, NTP, SSDP and SNMP. But in theory, what could be the effect of a massive DDoS over the IPv4 network? They started the talk with one simple number:


The idea was to scan the Internet for vulnerable services and to classify them. Based on the location of the servers, they were able to estimate the bandwidth available (e.g. per country) and to calculate the total amount of bandwidth that could be wasted by a massive attack. They showed nice statistics and findings. One of them was a relation between the increase in bandwidth and the risk of affecting other people on the Internet.
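A back-of-the-envelope version of such an estimate might look like this; the amplification factors are commonly cited approximations (e.g. from US-CERT alert TA14-017A), not necessarily the speakers’ figures:

```python
# Reflected volume ~ reflectors x attacker bandwidth per reflector x
# protocol amplification factor. All numbers are rough approximations.
AMPLIFICATION = {"DNS": 28.7, "NTP": 556.9, "SSDP": 30.8, "SNMP": 6.3}

def reflected_gbps(protocol, reflectors, attacker_mbps_per_reflector=1.0):
    """Rough reflected attack volume in Gbit/s for one protocol."""
    factor = AMPLIFICATION[protocol]
    return reflectors * attacker_mbps_per_reflector * factor / 1000.0

# 1000 open NTP reflectors, fed 1 Mbit/s each:
print(round(reflected_gbps("NTP", reflectors=1000), 1))  # 556.9
```

The point of the talk follows directly: with millions of open reflectors, the theoretical ceiling dwarfs any single network’s capacity.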

Then, the first half-day ended with the third keynote, presented by Vladimir Kropotov and Fyodor Yarochkin: “Information Flows and Leaks in Social Media“. Social media are used everywhere today… for good or bad. They demonstrated how social networks react to a major event in the world (nothing related to computers). Some examples:

  • Trump and his awesome “Covfefe”
  • Macron and the French elections
  • The Manchester bombing
  • The fight of Barcelona for its independence

They mainly focused on the Twitter social network. They have tools to analyze the traffic and the relations between people and the usage of specific hashtags. In the beginning of the keynote, many slides had content in Russian, not easy to read, but the second part was interesting, with the tracking of bots and how to detect them.

After the lunch break, there was again a lightning talk session, then Eleanor Saitta came to present “On Strategy“; I did not follow these. The last talk I attended was a cool one: “Digital Vengeance: Exploiting Notorious C&C Toolkits” by Waylon Grange. The idea of the research was to try to compromise the attackers by applying the principles of offensive security. Big disclaimer here: hacking back is often illegal and does not provide any gain, only risks (liability, reputation…). Waylon focused on RATs (“Remote Access Tools”) like Poison Ivy, Dark Comet or Xtreme RAT. Some of them already have known vulnerabilities. He demonstrated his findings and how he was able to compromise the computers of remote attackers. But what to do once you are “in”? Search for interesting IP addresses (via netstat), browse the filesystem, install persistence or a keylogger, steal credentials, pivot, etc.

Sorry for the last presentation, which I was unable to follow and report here: I had to leave for Hacktivity in Budapest. I’ll also miss the first edition of BSidesLuxembourg; any volunteer to write a wrap-up for me? So, to recap this edition:

  • Plenty of new stickers
  • New t-shirts and a nice MISP sweatshirt
  • Lot of coffee (and other types of drinks)
  • Nice restaurants
  • Excellent schedule
  • Lot of new friends (and old/classic ones)
  • My Twitter timeline exploded 😉

You can still expect more wrap-ups tomorrow but for another conference!

[The post 2017 Wrap-Up Day 3 has been first published on /dev/random]

October 18, 2017

As said yesterday, the second day started very (too?) early… The winner of the first slot was Aaron Zauner, who talked about pseudo-random number generators. The complete title of the talk was “Because ‘use urandom’ isn’t everything: a deep dive into CSPRNGs in Operating Systems & Programming Languages”. He started with an overview of random number generators and why we need them. They are used almost everywhere, even in the Bash shell where you can use ${RANDOM}. CSPRNG stands for “Cryptographically Secure Pseudo-Random Number Generator”. It is implemented at operating system level, via /dev/urandom on Linux or RtlGenRandom() on Windows, but also in programming languages. And sometimes with security issues, like CVE-2017-11671 (GCC generating incorrect code for RDRAND/RDSEED). The /dev/random and /dev/urandom devices use really old code (from the mid-90’s)! According to Aaron, it is pure luck that no major incident arose in the last years. And today? Aaron explained what changed with kernel 4.2. Then he switched to the different languages and how they implement random number generators. He covered Ruby, Node.js and Erlang. None of them implemented proper random number generators at first, but he also explained what changed to improve this. I was a little bit afraid of a crypto talk at 8AM, but it was nice and easy to understand.
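The talk’s core point translates directly to Python, for example: the default `random` module is a fast but predictable Mersenne Twister PRNG, while `secrets` draws from the OS CSPRNG (/dev/urandom and friends) and is the right choice for tokens and keys:

```python
import random
import secrets

# The default PRNG is fully determined by its seed: fine for simulations,
# disastrous for anything an attacker must not predict.
random.seed(42)
predictable = [random.randrange(256) for _ in range(4)]
random.seed(42)
again = [random.randrange(256) for _ in range(4)]
assert predictable == again  # same seed, same stream: never use for secrets

# secrets.token_hex() reads from the OS CSPRNG instead.
token = secrets.token_hex(16)  # 32 hex characters
print(predictable, token)
```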

The next talk was “Keynterceptor: Press any key to continue” by Niels van Dijkhuizen. Attacks via HID USB devices are not new. Niels reviewed a timeline of the well-known attacks, from the KeyGhost USB logger in 2005 until the Bash Bunny in 2017. The main problems with those attacks: they need an unlocked computer, some social engineering skills and an Internet connection (most of the time). There are products to protect against these attacks which basically act as a USB firewall: USBProxy, USBGuest, GoodDog, DuckHunt, etc. Those are Windows tools; for Linux, have a look at GRSecurity. Then Niels explained his own solution, which gets rid of all the previous constraints: his implant sits inline between the keyboard and the host. It must also operate in real time. The rogue device clones itself as a classic HID device (“HP Elite USB Keyboard”) and adds random delays to fake a real human typing on a keyboard. This allows bypassing the DuckHunt tool. Niels made a demonstration of his tool. It comes with another device called the “Companion”, which has a 3G/4G module and connects to the Keynterceptor via a 433MHz link. A nice demo was broadcasted and his devices were available during the coffee break. This is a very nice tool for red teams…

Then, Clement Rouault and Thomas Imbert presented a view into ALPC-RPC. The idea of the talk: how to abuse the UAC feature in Microsoft Windows. They were curious about this feature. How to trigger the UAC manually? Via RPC! A very nice tool to investigate RPC interfaces is RpcView. Then, they switched to ALPC: what it is and how it works. It is a client/server solution: clients connect to a port and exchange messages that have two parts, the PORT_MESSAGE header and ALPC_MESSAGE_ATTRIBUTES. They explained in detail how they reverse-engineered the messages and, of course, they discovered vulnerabilities. They were able to build a full RPC client in Python and, with the help of fuzzing techniques, they found bugs: NULL dereferences, out-of-bounds accesses, logic bugs, etc. Based on their research, one CVE was created: CVE-2017-11783.

After the coffee break, a very special talk was scheduled: “The untold stories of Hackers in Detention”. Two hackers came on stage to tell how they were arrested and put in jail. It was a very interesting talk. They explained their personal stories: how they were arrested, how it happened (interviews, etc). They also gave some advice: how to behave in jail, what to do and not to do, the administrative tasks, etc. This was not recorded and, to respect them, no further details will be provided.

The second keynote was assigned to Ange Albertini: “Infosec and failure”. Ange’s presentations are always a good surprise: you never know how he will design his slides. As he said, his talk is not about “funny” failures. Infosec is typically about winning. The keynote was a series of facts showing that we usually fail to provide good infosec services and advice, also in the way we communicate with other people. Ange likes retro-gaming and made several comparisons between the gaming and infosec industries. According to him, we should have some “retropwning” events to play with and learn from old exploits. An infosec crash is coming, like the one the video game industry had in 1983, and a new cycle will follow. It was a great keynote with plenty of real facts that we should take care of! Learn, improve, share, don’t be shy, be proactive.

After the lunch, I skipped the second session of lightning talks and got back for “Sigma – Generic Signatures for Log Events” by Thomas Patzke. Let’s talk about logs… When the talk started, my first feeling was “What? Another talk about logs?” but, in fact, it was interesting. The idea behind Sigma is that everybody is looking for a nice way to detect threats, but all solutions have different features and syntaxes. Some examples of threats:

  • Authentication and accounts (large amount of failed logins, lateral movement, etc.)
  • Process execution (exec from an unusual location, unknown process relationships, evil hashes, etc.)
  • Windows events

The problem we are facing: there is a lack of standardised formats. Here comes Sigma. The goal of this tool is to write use cases in YAML files that contain all the details to detect a security issue. Thomas gave some examples like detecting Mimikatz or webshells.

Sigma comes with a generator tool that can produce queries for multiple tools: Splunk, Elasticsearch or Logpoint. This is more complex than expected because field names differ across solutions, naming is inconsistent, etc. In my opinion, Sigma could be useful to write use cases in total independence of any SIEM solution. It is still an ongoing project and, good news, recent versions of MISP can integrate Sigma: a field has been added and a tool exists to generate Sigma rules from MISP data.
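To give an idea of the format, here is a minimal sketch of what such a YAML use case looks like. This is an illustrative rule of my own, not one shown during the talk; the field names and modifiers follow the public Sigma documentation:

```yaml
title: Suspicious Encoded PowerShell Command Line
status: experimental
description: Detects PowerShell launched with an encoded command
logsource:
    category: process_creation
    product: windows
detection:
    selection:
        Image|endswith: '\powershell.exe'
        CommandLine|contains: '-EncodedCommand'
    condition: selection
level: medium
```

The generator tool then translates the `detection` section into the query syntax of the chosen backend.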

The next talk was “SMT Solvers in the IT Security – deobfuscating binary code with logic” by Thaís Moreira Hamasaki. She started with an introduction to CLP or “Constraint Logic Programming”, which has useful infosec applications such as malware de-obfuscation. Thaís explained how to perform malware analysis using CLP. I did not follow much more of this talk, which was too theoretical for me.

Then, we came back to more practical stuff with Omar Eissa, who presented “Network Automation is not your Safe Haven: Protocol Analysis and Vulnerabilities of Autonomic Networks”. Omar is working for ERNW and they always provide good content. This time they tested the protocol used by Cisco to provision new routers. The goal is to make a router ready for use in a few minutes without any configuration: the Cisco Autonomic Network, based on a proprietary protocol developed by Cisco. Omar explained how this protocol works and then how to abuse it. They found several vulnerabilities:

  • CVE-2017-6664: There is no way to protect against malicious nodes within the network
  • CVE-2017-6665: Possible reset of the secure channel
  • CVE-2017-3849: registrar crash
  • CVE-2017-3850: DeathKiss – crash with 1 IPv6 packet
The talk included many demos of the vulnerabilities above. A very nice talk.

The next speaker was Frank Denis, who presented “API design for cryptography”. The idea of the talk started with a simple Google query: “How to encrypt stuff in C”. Frank found plenty of awful replies with many examples that you should never use. Crypto is hard to design but also hard to use: developers don’t read the documentation, they just expect some pieces of code that they can copy/paste. He reviewed several issues in the current crypto libraries, then presented libhydrogen, a library developed to solve the issues introduced by the other libraries. You can find the source code here.

Then, Okhin came on stage to give an overview of encryption vs. the law in France. The title of his talk was “WTFRance”. He explained the status of French law regarding encryption and related tools. Basically, many politicians would like to get rid of encryption to better fight crime. It was interesting to learn that France leads the fight against crypto and then pushes its ideas at the EU level. Note that he also named several politicians who are “against” encryption.

The next talk was my preferred one of this second day: “In Soviet Russia, Vulnerability Finds You”, presented by Inbar Raz. Inbar is a regular speaker and always delivers entertaining presentations! This time he came with several examples of interesting stuff he found “by mistake”. Indeed, sometimes you find interesting things by accident. Inbar gave several examples: an issue on a taxi reservation website, the security of an airport in Poland, or fighting against bots via the Tinder application. For each example, a status was given. It’s sad to see that some of them remained unresolved for a while! An excellent talk, I liked it!

The last slot was assigned to Jelena Milosevic. Jelena is a nurse, but she also has a passion for infosec. Thanks to her job, she learns interesting things about healthcare environments. Her talk was a compilation of mistakes, facts and advice for hospitals and health-related services. We all know that those environments are usually very badly protected. It was, once again, proven by Jelena.

The day ended with the social event and the classic PowerPoint karaoke. Tomorrow, it will start again at 08:00 with a very interesting talk…

[The post 2017 Wrap-Up Day 2 has been first published on /dev/random]

Another four political posts before I write something technical. Pff. What a nag I am!

I promise that I will soon write a “How it’s made” about an AbstractFutureCommand. That is a Command in MVVM whose executeAsync() method returns a QFuture.

First, I am still busy with my client and her developers getting a few things done, so that the future of CNC control software will be really good.

We are working towards one day introducing, at EMO, a bunch of software that is awesome.

How do you attract the passionate computer nerds?

  • Make sure they receive training, including in non-technical matters. Allow them to immerse themselves in deeply technical subjects, e.g. low-level software development, electronics, and so on. Combine their (existing) knowledge with new applications. A passionate (computer) nerd wants to keep learning for a lifetime and, above all, to combine all their knowledge with other ideas;
  • Allow them to show publicly who they are and what they can do. Let those who enjoy it appear on e.g. radio, the Internet and TV to explain how their work is socially relevant. Agree clearly on what must and must not remain secret, of course;
  • Make sure they can regularly attend a hacker con or another conference. Certainly FOSDEM (not really a hacker con, but go there with the whole group anyway), but also e.g. the CCC conferences in Germany, SHA2017 in the Netherlands, and so on. In any case, be present there without any shyness;
  • Perhaps organize a hacker con of our own in Belgium. Why not?
  • Don’t make it too easy to join. Needing 200 of them does not mean that just anyone is good enough;
  • Make sure they earn well. Understand that the private sector offers them more than the government does;
  • Regularly publish (their) code as open source on e.g. GitHub, say a Wireshark plugin or log analysis tools that our government uses. Let them help with other open source projects. Look, for example, at how we publish our eID software (Firefox plugins and the like);
  • We have a lot of encryption knowledge at our universities (Rijndael); send them to courses given by our cryptographers;
  • Make sure our services commit no violations of Belgian law. All the really good ones are as idealistic as Edward Snowden and want to do good for society. In other words: the law, the privacy commission and Committee I matter.

Good luck. I am very curious.

October 17, 2017

The conference is ongoing in Luxembourg, already its thirteenth edition! I arrived yesterday to attend a pre-conference event: the MISP summit. Today the regular talks were scheduled. It seems that more attendees joined this edition. The number of talks scheduled is impressive this year: 11 talks today and 12 talks on Wednesday and Thursday… Here is my wrap-up of the first day!

The first talk was not technical but very informative: “Myths and realities of attribution manipulation”, presented by Félix Aimé & Ronan Mouchoux from Kaspersky. Many companies put more and more effort into infowar instead of simple malware research. This affects many topics: cyber espionage, mass opinion manipulation or sabotage. The key is to perform attribution by putting a name on a cyber attack. You can see it as putting a tag on an attack. Note that attribution sometimes suffers from a lack of naming conventions, as in the AV industry: it is not easy to recognise the different actors. To perform this huge task, a lot of time and skills are required. There are many indicators available, but many of them can be manipulated (ex: the country of origin, the C2, …). After a definition of attribution and the associated risks, Félix & Ronan reviewed some interesting examples:

  • The case of Turkish .TR domains that were DDoS’d after the Russian plane crash
  • The case of Belgium, accused of having conducted an airstrike against the locality of Hassadjek. A few days later, some Belgian media websites were DDoS’d.
As a conclusion to the talk, I like the quote: “You said fileless malware? APT actors try now to be less actor”.

The second slot was assigned to Sébastien (blotus) Blot, Thibault (buixor) Koechlin and Julien (jvoisin) Voisin, who presented their solution to improve the security of PHP websites: Snuffleupagus (don’t ask me to pronounce it ;-). The complete title was: “Snuffleupagus – Killing bugclasses in PHP 7, virtual-patching the rest”. The speakers work for a company providing hosting services and many of their customers use PHP websites. Besides the classic security controls (OS-level hardening, custom IDS, WAF, …), they searched for a tool to improve the security of PHP itself. Suhosin is a nice solution but it does not support PHP 7, so they decided to write their own tool: Snuffleupagus. They reviewed how to protect PHP with very nice features like disable_function() rules. Some examples:


You can also restrict parameters passed to functions:

… param(“command”).value_r(“[$|…”).drop();

Then, the speakers demonstrated real vulnerabilities in a well-known tool written in PHP and how their solution could mitigate them. This is a really nice project, still in development but already used by many websites from the Alexa top ranking! The project is available here.
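For illustration, Snuffleupagus rules look roughly like this. This is a sketch based on the project’s documented rule syntax; the exact rules shown on the slides may have differed:

```
# Kill a dangerous call outright:
sp.disable_function.function("system").drop();

# Or only drop it when a parameter matches a regex (virtual patching):
sp.disable_function.function("system").param("command").value_r("[$|;&`\\n]").drop();
```

Rules like these live in a configuration file loaded by the PHP module, so they can be deployed without touching the application code.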

After a coffee break, Bouke van Laethem presented his project: “Randori”. In Japanese, randori is a form of practice in which a designated aikidoka defends against multiple attackers in quick succession. To make it short, it’s the principle of action-reaction: you scan me, I scan you. Randori is “a low interaction honeypot with a vengeance”, as defined by Bouke. The main idea is to reuse the credentials tested by the attackers against themselves. Bouke explained how he developed his honeypot, mainly the pam_randori PAM module. Collected credentials are re-used, no more no less; no code is executed on the remote system. Based on the collected information, Bouke explained in the second part of his talk how he generated useful statistics to build botnet maps. One of the tools he used for this purpose is ssdeep. Note that the tool can be used in different ways, from an incident responder’s or an ethical hacker’s perspective. This project is very interesting and is also available here.

Before the lunch break, we had a keynote. The slot was assigned to Sarah Jamie Lewis and had the title: “Queer Privacy & Building Consensual Systems”. She started with a nice definition of privacy: “Privacy is the right to share information about you… only with people you trust”. Sarah wrote a book (with the same name as her keynote) and used it to illustrate her talk. She read sample stories about Kath, Ada and Morgan. All those people had privacy issues and have to protect themselves. During the keynote, Sarah looked really affected by those stories, but was it the right place to read some samples? I’m not sure. It looks like a book that you have to read at home, relaxed, and not at a security conference (just my $0.02). About privacy, as usual, the facts reviewed during the keynote were the same: our privacy is always threatened and there is a clear lack of solutions.

After the lunch, a first lightning talk session was organized, followed by Raúl B. Netto’s presentation: “ManaTI: Web Assistance for the Threat Analyst, supported by Domain Similarity”. ManaTI is a project that uses machine learning techniques to support the intuition of threat analysts and help in the discovery of security issues. I missed this talk because I was out with friends.

Then Paul Rascagnères, a regular speaker, came to present tools and techniques to help in debugging malware code written for .Net. This framework is a key component of many Microsoft tools like PowerShell. With its nice integration with the operating system, it is also used by bad guys to produce malicious code. Paul started by explaining some .Net techniques used by malware (like Assembly.Load()). The next part of the talk focused on PYKD, a Python extension for the WinDbg debugger. In a demo, Paul demonstrated how easy it is to use PYKD to debug malicious code.

The next talk was my preferred one of this first day: “Device sensors meet the web – a story of sadness and regret” by Lukasz Olejnik. The idea behind this talk was to demonstrate how our privacy can be affected by connected devices or, simply, our browsers. All devices today handle plenty of personal data, but web technologies were not designed with privacy in mind. With the modern web, a browser on your smartphone can take advantage of many sensors and connectivity options (USB, NFC or Bluetooth). Modern devices expose APIs that can be queried by web browsers. The first example that Lukasz gave was batteries: the power level can be queried from a browser. That’s a nice feature indeed, but what about privacy issues? In Firefox, abusing the high-precision readout could yield useful information about the user’s behaviour. There are also evil scenarios: just imagine that somebody is looking for a taxi and his/her battery is almost dead. The idea is to get back home asap. If the taxi reservation page proposes two prices, 10€ for a 10-minute drive and 5€ for a 30-minute drive, guess which one will be chosen by the customer? Another example, even crazier, was the (ab)use of the light sensor in mobile phones. Lukasz demonstrated how it is possible to steal the browser history via the light sensor: the display emits light that reflects on objects and can be read/decoded. Scary! And there are many more examples: tracking, behaviour, fingerprinting, etc… How to mitigate this? Not easy: ask the user for permission to access the data, disable the API, purge it of dangerous calls? Finally, Lukasz gave a last example with web payments (in one click) that also have security issues. This was a very nice talk with plenty of examples that should really open our eyes!

After the afternoon coffee break, Maxime Clementz and Antoine Goichot came on stage to present: “Malicious use of Microsoft Local Administrator Password Solution”. The local admin problem is not new with Microsoft operating systems. This account must be present and, within old environments, the password was often the same across all devices in the domain. This makes lateral movement so easy! To solve this issue, Microsoft implemented LAPS or “Local Administrator Password Solution”. How does it work? Random passwords are generated for the local admin. The goal of the talk was to explain how to perform privilege escalation within an environment that has LAPS deployed. In fact, this tool is not new. It was an open source project that was integrated into Microsoft Windows as a client-side extension (CSE): just a DLL called AdmPwd.dll. First observation: the DLL is not signed and does not implement integrity checks. The idea of the PoC was to create a rogue DLL that ignores the temporary password expiration date and writes the generated passwords to a simple text file. It worked very well. Their recommendation to mitigate this kind of attack: validate the integrity/signature of the DLL.

The next presentation was about cars: “The Bicho: An Advanced Car Backdoor Maker” by Sheila Ayelen Berta. While we see more and more talks about connected cars, this one focused on “regular” cars that just have a CAN bus. Sheila explained the tools and hardware that help to inject commands on a CAN bus. To achieve this, she used a platform called CANspy to sniff messages on a CAN bus. Then, via another tool called “Car Backdoor Maker 1.0”, she was able to generate CAN bus messages. Basically, it’s a replay attack. A website has been created to list all the CAN messages discovered. The payload is injected using a microcontroller connected to the CAN bus. It also has GPS capabilities that allow sending the CAN bus message depending on the car’s localisation! The payload generator is available here.

Then, we came back to the issues regarding sharing information. Becky Kazansky presented: “Countering Security Threats by Sharing Information: Emerging Civil Society Practices”. I skipped this talk.

Finally, the first day finished with Parth Shukla, who presented “Intel AMT: Using & Abusing the Ghost in the Machine”. The presentation started with an overview of the AMT technology. It means “Active Management Technology” and is an out-of-band management platform embedded into Intel chipsets. The goal is to offer remote management capabilities without any OS. You can imagine that this feature looks juicy to attackers! Parth reviewed the core features of AMT and how it works. One important step is the provisioning (it can be performed via a local agent, remotely, via USB or the BIOS). There were already vulnerabilities discovered in AMT, like INTEL-SA-00075, which covered a privilege escalation issue. AMT was also used by the PLATINUM attacker group, who used Serial Over LAN as a back channel. In the second part, Parth explained his research: how to abuse AMT? The requirements of the attack were:

  • Control the AMT
  • Implement persistence
  • Be stealthy
He reviewed all the possible scenarios with a rating (complex, easy, …). For each attack, he also explained how to mitigate and detect it. Some interesting ideas:
  • Detect usual AMT ports in the network traffic
  • Query the ME interface for AMT status (easy on Windows, no tool for Linux)
  • Verify the boot chain
  • Encrypt disk drives with the TPM chipset
  • Protect your BIOS (you already did it right?)
The last part covered the forensics investigations related to AMT. Again, an interesting talk.
That’s all for today! Note that the talks have been recorded and are already available on YouTube! After our classic “Belgian dinner”, it’s time to get some hours of sleep; tomorrow, 12 talks are scheduled! Stay tuned for more wrap-ups!

[The post 2017 Wrap-Up Day 1 has been first published on /dev/random]

October 16, 2017

The post Compile PHP from source: error: utf8_mime2text() has new signature appeared first on

It's been a while, but I had to recompile a PHP from source and ran into this problem during the ./configure stage.

$ ./configure
checking for IMAP Kerberos support... no
checking for IMAP SSL support... yes
checking for utf8_mime2text signature... new
checking for U8T_DECOMPOSE...
configure: error: utf8_mime2text() has new signature, but U8T_CANONICAL is missing.
This should not happen. Check config.log for additional information.

To resolve the utf8_mime2text() has new signature, but U8T_CANONICAL is missing error, you can install the libc-client-devel package on CentOS.

$ yum install libc-client-devel

After that, your ./configure should go through.


In this post I demonstrate an effective way to create iterators and generators in PHP and provide an example of a scenario in which using them makes sense.

Generators have been around since PHP 5.5, and iterators have been around since the Planck epoch. Even so, a lot of PHP developers do not know how to use them well and cannot recognize situations in which they are helpful. In this blog post I share insights I have gained over the years that, when shared, always got an interested response from fellow developers. The post goes beyond the basics, provides a real-world example, and includes a few tips and tricks. To not leave out those unfamiliar with iterators, the post starts with the “What are Iterators” section, which you can safely skip if you can already answer that question.

What are Iterators

PHP has an Iterator interface that you can implement to represent a collection. You can loop over an instance of an Iterator just like you can loop over an array:

function doStuff(Iterator $things) {
    foreach ($things as $thing) { /* ... */ }
}

Why would you bother implementing an Iterator subclass rather than just using an array? Let’s look at an example.

Imagine you have a directory with a bunch of text files. One of the files contains an ASCII NyanCat (~=[,,_,,]:3). It is the task of our code to find which file the NyanCat is hiding in.

We can get all the files by doing a glob( $path . '*.txt' ) and we can get the contents of a file with file_get_contents. We could just have a foreach going over the glob result that does the file_get_contents. Luckily we realize this would violate separation of concerns and make the “does this file contain NyanCat” logic hard to test, since it will be bound to the filesystem access code. Hence we create a function that gets the contents of the files, and one with our logic in it:

function getContentsOfTextFiles(): array {
    // glob and file_get_contents
}

function findTextWithNyanCat(array $texts) {
    foreach ($texts as $text) { if ( /* ... */ ) { /* ... */ } }
}

function findNyanCat() {
    findTextWithNyanCat(getContentsOfTextFiles());
}

While this approach is decoupled, a big drawback is that now we need to fetch the contents of all files and keep all of that in memory before we even start executing any of our logic. If NyanCat is hiding in the first file, we’ll have fetched the contents of all others for nothing. We can avoid this by using an Iterator, as they can fetch their values on demand: they are lazy.

class TextFileIterator implements Iterator {
    /* ... */
    public function current() {
        // return file_get_contents
    }
    /* ... */
}

function findTextWithNyanCat(Iterator $texts) {
    foreach ($texts as $text) { if ( /* ... */ ) { /* ... */ } }
}

function findNyanCat() {
    findTextWithNyanCat(new TextFileIterator());
}

Our TextFileIterator gives us a nice place to put all the filesystem code, while to the outside just looking like a collection of texts. The function housing our logic, findTextWithNyanCat, does not know that the text comes from the filesystem. This means that if you decide to get texts from the database, you could just create a new DatabaseTextBlobIterator and pass it to the logic function without making any changes to the latter. Similarly, when testing the logic function, you can give it an ArrayIterator.

function testFindTextWithNyanCat() {
    /* ... */
    findTextWithNyanCat(new ArrayIterator(['test text', '~=[,,_,,]:3']));
    /* ... */
}

I wrote more about basic Iterator functionality in Lazy iterators in PHP and Python and Some fun with iterators. I also blogged about a library that provides some (Wikidata specific) iterators and a CLI tool built around an Iterator. For more on how generators work, see the off-site post Generators in PHP.

PHP’s collection type hierarchy

Let’s start by looking at PHP’s type hierarchy for collections as of PHP 7.1. These are the core types that I think are most important:

  •  iterable
    • array
    • Traversable
      • Iterator
        • Generator
      • IteratorAggregate

At the very top we have iterable, the supertype of both array and Traversable. If you are not familiar with this type or are using a version of PHP older than 7.1, don’t worry, we don’t need it for the rest of this blog post.

Iterator is the subtype of Traversable, and the same goes for IteratorAggregate. The standard library iterator_ functions such as iterator_to_array all take a Traversable. This is important since it means you can give them an IteratorAggregate, even though it is not an Iterator. Later on in this post we’ll get back to what exactly an IteratorAggregate is and why it is useful.
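To make that concrete, here is a small sketch of my own (not code from the original post) showing iterator_to_array() happily consuming an IteratorAggregate:

```php
<?php

// An IteratorAggregate is not an Iterator, yet iterator_to_array()
// accepts it, because the function's parameter type is Traversable.
class Pair implements IteratorAggregate {
    public function getIterator(): Generator {
        yield 'first';
        yield 'second';
    }
}

var_dump(iterator_to_array(new Pair()));
// array(2) { [0] => "first", [1] => "second" }
```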

Finally we have Generator, which is a subtype of Iterator. That means all functions that accept an Iterator can be given a Generator, and, by extension, that you can use generators in combination with the Iterator classes in the Standard PHP Library such as LimitIterator and CachingIterator.
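For instance (again my own sketch, not from the post), you can cap an infinite generator with LimitIterator from the SPL:

```php
<?php

// An infinite generator of natural numbers.
function naturals(): Generator {
    for ($i = 1; ; $i++) {
        yield $i;
    }
}

// Generator is an Iterator, so SPL decorators accept it directly.
// LimitIterator(iterator, offset, count) stops after five values.
$firstFive = new LimitIterator(naturals(), 0, 5);

print_r(iterator_to_array($firstFive, false));
// 1, 2, 3, 4, 5
```

Because the generator is lazy, only the first five values are ever produced, even though naturals() itself never terminates.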

IteratorAggregate + Generator = <3

Generators are a nice and easy way to create iterators. Often you’ll only loop over them once and not have any problem. However, beware that generators create iterators that are not rewindable, which means that if you loop over them more than once, you’ll get an exception.

Imagine the scenario where you pass in a generator to a service that accepts an instance of Traversable:

$aGenerator = function() { /* ... yield ... */ };

public function doStuff(Traversable $things) {
    foreach ($things as $thing) { /* ... */ }
}

The service class in which doStuff resides does not know it is getting a Generator, it just knows it is getting a Traversable. When working on this class, it is entirely reasonable to iterate through $things a second time.

public function doStuff(Traversable $things) {
    foreach ($things as $thing) { /* ... */ }
    foreach ($things as $thing) { /* ... */ } // Boom if Generator!
}

This blows up if the provided $things is a Generator, because generators are non-rewindable. Note that it does not matter how you iterate through the value. Calling iterator_to_array with $things has the exact same result as using it in a foreach loop. Most, if not all, generators I have written, do not use resources or state that inherently prevents them from being rewindable. So the double-iteration issue can be unexpected and seemingly silly.
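A tiny self-contained sketch of the failure mode (my own example): the second loop triggers the rewind exception.

```php
<?php

function letters(): Generator {
    yield 'a';
    yield 'b';
}

$gen = letters();

foreach ($gen as $letter) {
    // First pass: works fine.
}

try {
    foreach ($gen as $letter) {
        // Never reached: rewinding an exhausted generator throws.
    }
} catch (Exception $e) {
    echo $e->getMessage();
    // "Cannot rewind a generator that was already run"
}
```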

There is a simple and easy way to get around it, though. This is where IteratorAggregate comes in. Classes implementing IteratorAggregate must implement the getIterator() method, which returns a Traversable. Creating one of these is trivial:

class AwesomeWords implements \IteratorAggregate {
    public function getIterator() {
        yield 'So';
        yield 'Much';
        yield 'Such';
    }
}
If you call getIterator, you’ll get a Generator instance, just like you’d expect. However, normally you never call this method. Instead you use the IteratorAggregate just as if it was an Iterator, by passing it to code that expects a Traversable. (This is also why usually you want to accept Traversable and not just Iterator.) We can now call our service that loops over the $things twice without any problem:

$aService->doStuff(new AwesomeWords()); // no boom!

By using IteratorAggregate we did not just solve the non-rewindable problem, we also found a good way to share our code. Sometimes it makes sense to use the code of a Generator in multiple classes, and sometimes it makes sense to have dedicated tests for the Generator. In both cases having a dedicated class and file to put it in is very helpful, and a lot nicer than exposing the generator via some public static function.

For cases where it does not make sense to share a Generator and you want to keep it entirely private, you might need to deal with the non-rewindable problem. For those cases you can use my Rewindable Generator library, which allows making your generators rewindable by wrapping their creation function:

$aGenerator = function() { /* ... yield ... */ };
$aService->doStuff(new RewindableGenerator($aGenerator));

A real-world example

A few months ago I refactored some code part of the Wikimedia Deutschland fundraising codebase. This code gets the filesystem paths of email templates by looking in a set of specified directories.

private function getMailTemplatesOnDisk( array $mailTemplatePaths ): array {
    $mailTemplatesOnDisk = [];

    foreach ( $mailTemplatePaths as $path ) {
        $mailFilesInFolder = glob( $path . '/Mail_*' );
        array_walk( $mailFilesInFolder, function( &$filename ) {
            $filename = basename( $filename ); // this would cause problems w/ mail templates in sub-folders
        } );
        $mailTemplatesOnDisk = array_merge( $mailTemplatesOnDisk, $mailFilesInFolder );
    }

    return $mailTemplatesOnDisk;
}

This code bound the class to the filesystem, which made it hard to test. In fact, this code was not tested. Furthermore, this code irked me, since I like code to be on the functional side: the array_walk mutates its by-reference variable and the assignment at the end of the loop mutates the return variable.

This was refactored using the awesome IteratorAggregate + Generator combo:

class MailTemplateFilenameTraversable implements \IteratorAggregate {

	private $mailTemplatePaths;

	public function __construct( array $mailTemplatePaths ) {
		$this->mailTemplatePaths = $mailTemplatePaths;
	}

	public function getIterator() {
		foreach ( $this->mailTemplatePaths as $path ) {
			foreach ( glob( $path . '/Mail_*' ) as $fileName ) {
				yield basename( $fileName );
			}
		}
	}
}
Much easier to read and understand, no state mutation whatsoever, good separation of concerns, easier testing, and reusability of this collection-building code elsewhere.

See also: Use cases for PHP generators (off-site post).

Tips and Tricks

Generators can yield key value pairs:

yield "Iterators" => "are useful";
yield "Generators" => "are awesome";
// [ "Iterators" => "are useful", "Generators" => "are awesome" ]

You can use yield in PHPUnit data providers.

You can yield from an iterable.

yield from [1, 2, 3];
yield from new ArrayIterator([4, 5]);
// 1, 2, 3, 4, 5

// Flattens iterable[] into Generator
foreach ($collections as $collection) {
    yield from $collection;
}
Thanks to Leszek Manicki and Jan Dittrich for reviewing this blog post.

October 15, 2017

The post Get shell in running Docker container appeared first on

This has saved me more times than I can count, having the ability to debug a running container the way you would in a "normal" VM.

First, see which containers are running;

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  [...] NAMES
925cc10d55df        66cc85c3f275        "gitlab-runner-ser..."   [...] runner-f500bed1-project-3888560-concurrent-0-mysql-0-wait-for-service
0ab431ea0bcf        3e3878acd190        "docker-entrypoint..."   [...] runner-f500bed1-project-3888560-concurrent-0-mysql-0
4d9de6c0fba1        nginx:alpine        "nginx -g 'daemon ..."   [...] nginx-container

To get a shell (Bash) on a container of choice, run this;

$ docker exec -i -t nginx-container /bin/bash

The nginx-container argument determines which container you want to enter; it's the name in the last column of the docker ps output. Note that minimal images (such as nginx:alpine above) may not include Bash; in that case, use /bin/sh instead.

Alternatively, use the container ID;

$ docker exec -i -t 4d9de6c0fba1 /bin/bash

Don't use docker attach for this: it attaches you to the container's main process, so if the initial command started in the container is something like MongoDB or Redis, you'll get funky results and may end up killing the instance.


October 13, 2017

I can’t remember why I started to write conference wrap-ups, but the habit dates back to 2009, when I attended my first one! I had a quick look at my blog archives and, until today, I have written 184 wrap-ups! The initial idea was probably to bring some material to colleagues who did not have the chance to attend the conference in Luxembourg. Quickly, I got very positive feedback from my existing blog readers and the wrap-ups started to attract more and more people.

Wrap-Up Feedback

Why am I still writing these kinds of articles today? For multiple reasons. The first one is personal: it helps me to learn new stuff. The exercise of keeping the focus on a speaker and taking notes on the fly is complex. You need to listen, understand and summarize in real time. Usually, I write very synthetic notes and force myself to beautify the text the same day (otherwise, I quickly lose details). Often, my wrap-ups are published during the night.

The second one is for the community. If I have some content, why not share it? Honestly, based on the number of infosec events I attend, I consider myself a lucky guy. With my wrap-ups, I can share a (very) small piece of the information that I collect. They are published “as is” without any restriction or review (read: errors can always be present!). I know that some people reuse them even if they attended the same conference, because they need to report some content internally in their organization 😉 They are free to use, but be fair and keep a link to the original article.

It won’t change and, next week, I’ll be in Luxembourg for Immediately after, I’ll fly to Budapest for Hacktivity. is one of my preferred events not only for the quality of the schedule but also for the relaxed atmosphere. I meet some friends once a year at! My first participation was in 2008 and this edition promises to be awesome with a bunch of interesting talks. Here is my pre-selection:

  • Randori, a low interaction honeypot with a vengeance (Bouke van Laethem)
  • Device sensors meet the web – a story of sadness and regret (Lukasz Olejnik)
  • The Bicho: An Advanced Car Backdoor Maker (Sheila Ayelen Berta, Claudio Caracciolo)
  • Keynterceptor: Press any key to continue (Niels van Dijkhuizen)
  • Sigma – Generic Signatures for Log Events (Thomas Patzke)
  • Front door Nightmares. When smart is not secure (ObiWan666)

Then, let’s go to Hacktivity. By contrast, it will be my first experience with this event. The conference has a very good reputation. A lot of nice topics, and here is my pre-selection:

  • REST API, pentester’s perspective (Mateusz Olejarka)
  • Exploiting USB/IP in Linux (Ignat Korchagin)
  • Hacking drones and buying passwords in the Darknet (Tobias Schrödel)
  • A heaven for Hackers: Breaking Log/SIEM Products (Mehmet Ince)
  • BlueBorne Explained: Exploiting Android devices over the air (Ben Seri & Gregory Vishnepolsky)

You can expect a massive amount of Tweets and daily wrap-ups during the week! Stay tuned and thanks again for reading my delusions…

[Note: You can follow the upcoming conferences that I will attend on the right side of this page in the “Upcoming Events” section]




[The post Wrap-Ups Storm Ahead! has been first published on /dev/random]

October 12, 2017

Children aren’t worried about the future. Young people aren’t worried about the future; they’re worried about us: us leading them into the future we envision

Jack Ma — Oct 2017, keynote speech at Alibaba Cloud’s Computing Conference in Hangzhou

I published the following diary on “Version control tools aren’t only for Developers“.

When you start to work on a big project or within a team of developers, it is very useful to use a version control system. The most known are probably ’svn’ or ‘git’. For developers, such tools are a great help to perform tasks like… [Read more]

[The post [SANS ISC] Version control tools aren’t only for Developers has been first published on /dev/random]

October 11, 2017

Four months ago, I shared that Acquia was on the verge of a shift equivalent to the decision to launch Acquia Fields and Drupal Gardens in 2008. As we entered Acquia's second decade, we outlined a goal to move from website management to data-driven customer journeys. Today, Acquia announced two new products that support this mission: Acquia Journey and Acquia Digital Asset Manager (DAM).

Last year on my blog, I shared a video that demonstrated what is possible with cross-channel user experiences and Drupal. We showed a sample supermarket chain called Gourmet Market. Gourmet Market wants its customers to not only shop online using its website, but to also use Amazon Echo or push notifications to do business with them. The Gourmet Market prototype showed an omnichannel customer experience that is both online and offline, in store and at home, and across multiple digital touchpoints. The Gourmet Market demo video was real, but required manual development and lacked easy customization. Today, the launch of Acquia Journey and Acquia DAM makes building this kind of customer experience a lot easier. It marks an important milestone in Acquia's history, as it will accelerate our transition from website management to data-driven customer journeys.

A continuous journey across multiple digital touch points and devices

Introducing Acquia Journey

I've written a great deal about the Big Reverse of the Web, which describes the transition from "pull-based" delivery of the web, meaning we visit websites, to a "push-based" delivery, meaning the web comes to us. The Big Reverse forces a major re-architecture of the web to bring the right information, to the right person, at the right time, in the right context.

The Big Reverse also ushers in the shift from B2C to B2One, where organizations develop a one-to-one relationship with their customers, and contextual and personalized interactions are the norm. In the future, every organization will have to rethink how it interacts with customers.

Successfully delivering a B2One experience requires an understanding of your user's journey and matching the right information or service to the user's context. This alone is no easy feat, and many marketers and other digital experience builders often get frustrated with the challenge of rebuilding customer experiences. For example, although organizations can create brilliant campaigns and high-value content, it's difficult to effectively disseminate marketing efforts across multiple channels. When channels, data and marketing software act in different silos, it's nearly impossible to build a seamless customer experience. The inability to connect customer profiles and journey maps with various marketing tools can result in unsatisfied customers, failed conversion rates, and unrealized growth.

An animation showing Acquia's journey building solution

Acquia Journey delivers on this challenge by enabling marketers to build data-driven customer journeys. It allows marketers to easily map, assemble, orchestrate and manage customer experiences like the one we showed in our Gourmet Market prototype.

It's somewhat difficult to explain Acquia Journey in words — probably similar to trying to explain what a content management system does to someone who has never used one before. Acquia Journey provides a single interface to define and evaluate customer journeys across multiple interaction points. It combines a flowchart-style journey mapping tool with unified customer profiles and an automated decision engine. Rules-based triggers and logic select and deliver the best-next action for engaging customers.

One of the strengths of Acquia Journey is that it integrates many different technologies, from marketing and advertising technologies to CRM tools and commerce platforms. This makes it possible to quickly assemble powerful and complex customer journeys.

Implementing getBestNextExperience() creates both customer and business value

Acquia Journey will simplify how organizations deliver the "best next experience" for the customer. Providing users with the experience they not only want, but expect will increase conversion rates, grow brand awareness, and accelerate revenue. The ability for organizations to build more relevant user experiences not only aligns with our customers' needs but will enable them to make the biggest impact possible for their customers.

Acquia's evolving product offering also puts control of user data and experience back in the hands of the organization, instead of walled gardens. This is a step toward uniting the Open Web.

Introducing Acquia Digital Asset Manager (DAM)

Digital asset management systems have been around for a long time, and were originally hosted through on-premise servers. Today, most organizations have abandoned on-premise or do-it-yourself DAM solutions. After listening to our customers, it became clear that large organizations are seeking a digital asset management solution that centralizes control of creative assets for the entire company.

Many organizations lack a single-source of truth when it comes to managing digital assets. This challenge has been amplified as the number of assets has rapidly increased in a world with more devices, more channels, more campaigns, and more personalized and contextualized experiences. Acquia DAM provides a centralized repository for managing all rich media assets, including photos, videos, PDFs, and other corporate documents. Creative and marketing teams can upload and manage files in Acquia DAM, which can then be shared across the organization. Graphic designers, marketers and web managers all have a hand in translating creative concepts into experiences for their customers. With Acquia DAM, every team can rely on one dedicated application to gather requirements, share drafts, consolidate feedback and collect approvals for high-value marketing assets.

On top of Drupal's asset and media management capabilities, Acquia DAM provides various specialized functionality, such as automatic transcoding of assets upon download, image and video mark-up during approval workflows, and automated tagging for images using machine learning and image recognition.

A screenshot of Acquia's Digital Asset Management solution

By using a drag-and-drop interface on Acquia DAM, employees can easily publish approved assets in addition to searching the repository for what they need.

Acquia DAM seamlessly integrates with both Drupal 7 and Drupal 8 (using Drupal's "media entities"). In addition to Drupal, Acquia DAM is built to integrate with the entirety of the Acquia Platform. This includes Acquia Lift and Acquia Journey, which means that any asset managed in the Acquia DAM repository can be utilized to create personalized experiences across multiple Drupal sites. Additionally, through a REST API, Acquia DAM can also be integrated with other marketing technologies. For example, Acquia DAM supports designers with a plug-in for Adobe Creative Cloud, which integrates with Photoshop, InDesign and Illustrator.

Acquia's roadmap to data-driven customer journeys

Some of the most important market trends in digital for 2017

Throughout Acquia's first decade, we've been primarily focused on providing our customers with the tools and services necessary to scale and succeed with content management. We've been very successful with helping our customers scale and manage Drupal and cloud solutions. Drupal will remain a critical component to our customers' success, and we will continue to honor our history as committed supporters of open source, in addition to investing in Drupal's future.

However, many of our customers need more than content management to be digital winners. The ability to orchestrate customer experiences using content, user data, decisioning systems, analytics and more will be essential to an organization's success in the future. Acquia Journey and Acquia DAM will remove the complexity from how organizations build modern digital experiences and customer journeys. We believe that expanding our platform will be good not only for Acquia, but for our partners, the Drupal community, and our customers.

Acquia's product strategy for 2017 and beyond

October 10, 2017

Suppose that you have an RDO/OpenStack cloud already in place, but you'd want to automate some operations: what can you do? On my side, I already mentioned that I used puppet to deploy initial clouds, but I still prefer Ansible myself when having to launch ad-hoc tasks, or even change configuration[s]. It's particularly true for our CI environment where we run "agentless" so all configuration changes happen through Ansible.

The good news is that Ansible already has some modules for OpenStack, but they have some requirements and need a little bit of understanding before you're able to use them.

First of all, all the ansible os_ modules need "shade" on the host included in the play, and that host will be responsible for all os_ module launches. At the time of writing this post, it's not yet available on, (a review is open so that will soon be available directly) but you can find the pkg on our CBS builders.

Once shade was installed, a simple os_image task was still failing directly, despite the fact that auth: was present, and that's due to a simple reason: Ansible os_ modules still want to use the v2 API, while it now defaults to v3 in the Pike release. There is no way to force ansible itself to use v3, but as it uses shade behind the scenes, there is a way to force this through os-client-config.

That means that you just have to use a clouds.yaml file (does that sound familiar for ansible?) that will contain everything you need to know about a specific cloud, and then just declare in ansible which cloud you're configuring.

That clouds.yaml file can be under $current_directory, ~/.config/openstack or /etc/openstack so it's up to you to decide where you want to temporary host it, but I selected /etc/openstack/ :

- name: Ensuring we have required pkgs for ansible/openstack
  yum:
    name: python2-shade
    state: installed

- name: Ensuring local directory to hold the os-client-config file
  file:
    path: /etc/openstack
    state: directory
    owner: root
    group: root

- name: Adding clouds.yaml for os-client-config for further actions
  template:
    src: clouds.yaml.j2
    dest: /etc/openstack/clouds.yaml
    owner: root
    group: root
    mode: 0700

Of course such clouds.yaml file is itself a jinja2 template distributed by ansible on the host in the play before using the os_* modules :

clouds:
  {{ cloud_name }}:
    auth:
      username: admin
      project_name: admin
      password: {{ openstack_admin_pass }}
      auth_url: http://{{ openstack_controller }}:5000/v3/
      user_domain_name: default
      project_domain_name: default
    identity_api_version: 3

You just have to adapt it to your needs (see the doc for this), but the interesting part is identity_api_version, used to force v3.

Then, you can use all that in a simple way through ansible tasks, in this case adding users to a project :

- name: Configuring OpenStack user[s]
  os_user:
    cloud: "{{ cloud_name }}"
    default_project: "{{ }}"
    domain: "{{ item.0.domain_id }}"
    name: "{{ item.1.login }}"
    email: "{{ }}"
    password: "{{ item.1.password }}"
  with_subelements:
    - "{{ cloud_projects }}"
    - users
  no_log: True

From a variables point of view, I decided to just have a simple structure to host project/users/roles/quotas like this :

cloud_projects:
  - name: demo
    description: demo project
    domain_id: default
    quota_cores: 20
    quota_instances: 10
    quota_ram: 40960
    users:
      - login: demo_user
        password: Ch@ngeM3
        role: admin # can be _member_ or admin
      - login: demo_user2
        password: Ch@ngeMe2

Now that it works, you can explore all the other os_* modules; I'm already using those to:

  • Import cloud images in glance
  • Create networks and subnets in neutron
  • Create projects/users/roles in keystone
  • Change quotas for those projects
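
As an illustrative sketch of the first item, an os_image task to import a cloud image into glance could look like the following. Note that the image name and file path here are made-up examples, not taken from my actual playbooks:

```yaml
# Hypothetical example: upload a qcow2 cloud image into glance
- name: Importing a cloud image in glance
  os_image:
    cloud: "{{ cloud_name }}"
    name: CentOS-7-x86_64-GenericCloud
    disk_format: qcow2
    container_format: bare
    filename: /var/tmp/CentOS-7-x86_64-GenericCloud.qcow2
    state: present
```

The same cloud: parameter pattern applies to os_network, os_subnet, os_project and friends, since they all read the same clouds.yaml through os-client-config.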

I'm just discovering how powerful those tools are, so I'll probably discover much more interesting things to do with those later.

October 09, 2017

BruCON 0x09 is over! It’s time to have a look at the data captured during last Thursday and Friday. As in previous years, the setup was almost the same: an Internet pipe with a bunch of access points, everything interconnected through a pfSense firewall. Traffic on the guest network (dedicated to attendees) is captured and processed by a SecurityOnion server + basic full packet capture. We also used our classic wall-of-sheep to track the web browsing activity of our beloved visitors.

Let’s start with a few raw numbers. With the end of 3G/4G roaming costs in Europe since June, most European visitors avoid using the wireless networks and prefer to remain connected via their mobile phones. In a few numbers:

  • 206 Gigabytes of PCAP files
  • 50.450 pictures collected by the wall-of-sheep
  • 19 credentials captured
  • 500+ unique devices connected to the WiFi network
  • 150 PE files downloaded (Windows executables)
  • 3 blocked users
  • 1 rogue DHCP server

We saw almost the same amount of traffic as in previous years (even though we had more people attending the conference!). What about our visitors?
Unique Wi-Fi Clients by OS over Time

Strange that we had so many “unknown” devices, probably due to an outdated MAC address prefix database.

Top 10 Applications by Usage - Summary

Good to see that SSL is the top protocol detected! UDP reached third position due to the massive use of VPN connections, which is also good!

Our visitors communicated with 118K+ IP addresses from all over the world:

Worldwide Connections

Here is the top-20 of DNS requests logged:

As most of the traffic captured was web-based, I had a look at the different tools/applications used to access web resources. Here is the top-20:

I uploaded the 200+ gigabytes of PCAP data into my Moloch instance and searched for interesting traffic. What has been found:

  • One visitor polled his network devices (172.16.x.x) during the two days (5995 SNMP connections detected)
  • Two visitors performed RDP sessions to two external IP addresses
  • Two visitors generated SIP (VoIP) traffic with two remote servers
  • 29 remote IMAP servers were identified (strange, no POP3! 🙂)
  • SSH connections were established with 36 remote servers (no telnet!)

Finally, our wall-of-sheep captured web traffic during the whole conference:

Wall of Sheep

Of course, we had some “p0rn denial of service attacks” but it’s part of the game, right? See you for the 0x0A (10th edition) next year with, fingers crossed, more fun on the network!


[The post BruCON Network 0x09 Wrap-Up has been first published on /dev/random]

I published the following diary on “Base64 All The Things!“.

Here is an interesting maldoc sample captured with my spam trap. The attached file is “PO# 36-14673.DOC” and has a score of 6 on VT. The file contains Open XML data that refers to an invoice… [Read more]

[The post [SANS ISC] Base64 All The Things! has been first published on /dev/random]

Initially I started creating a general post about PHP Generators, a feature introduced in PHP 5.5. However, since I keep failing to come up with good examples for some cool ways to use Generators, I decided to do this mini post focusing on one such cool usage.

PHPUnit data providers

A commonly used PHPUnit feature is data providers. In a data provider you specify a list of argument lists, and the test methods that use the data provider get called once for each argument list.

Often data providers are created with an array variable in which the argument lists get stuffed. Example (including poor naming):

/**
 * @dataProvider provideUserInfo
 */
function testSomeStuff( string $userName, int $userAge ) {}

function provideUserInfo() {
    $return = [];

    $return[] = [ 'Such Name', 42 ];
    $return[] = [ 'Very Name', 23 ];
    $return['Named parameter set'] = [ 'So Name', 1337 ];

    return $return;
}

The not so nice thing here is that you have a variable (explicit state) and you modify it (mutable state). A more functional approach is to just return an array that holds the argument lists directly. However if your argument list creation is more complex than in this example, requiring state, this might not work. And when such state is required, you end up with more complexity and a higher chance that the $return variable will bite you.
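
For the simple case above, that functional variant without the mutable $return variable would look like this:

```php
<?php
// Data provider returning the argument lists directly, with no mutable state.
function provideUserInfo() {
    return [
        [ 'Such Name', 42 ],
        [ 'Very Name', 23 ],
        'Named parameter set' => [ 'So Name', 1337 ],
    ];
}
```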

Using yield

What you might not have realized is that data providers do not need to return an array. They need to return an iterable, so they can also return an Iterator, and by extension, a Generator. This means you can write the above data provider as follows:

function provideUserInfo() {
    yield [ 'Such Name', 42 ];
    yield [ 'Very Name', 23 ];
    yield 'Named parameter set' => [ 'So Name', 1337 ];
}

No explicit state to be seen!
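
And when argument-list creation does require state, a generator keeps that state local to its own body. A hypothetical sketch (the provider name and values are made up for illustration):

```php
<?php
// Hypothetical stateful data provider: the loop variable never leaks
// outside the generator body, so there is no $return variable to bite you.
function provideUserAges() {
    foreach ( range( 1, 3 ) as $i ) {
        yield "user $i" => [ "Name $i", $i * 10 ];
    }
}
```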

Stay tuned for more generator goodness if I can overcome my own laziness (hint hint :))

October 06, 2017


Since its founding in 1837, Hermès has defined luxury. Renowned as an iconic brand within the fashion industry, Hermès is now setting the trend for how customers shop online. This week, Hermès launched its new site in Drupal!

Hermès married the abilities of Drupal as a CMS and Magento as an eCommerce engine to provide their customers with highly engaging shopping experiences. Hermès' new site is a great example of how iconic brands can use Drupal to power ambitious digital experiences. If you are in the mood for some retail therapy, check out!


At the next WordPress Antwerp meetup, there will be a presentation and workshop on how to do public speaking, how to complete a CFP (Call For Presentation) and increase your chances to be accepted as a conference speaker.

I believe one of the biggest enhancers of my career has been public speaking and being a good communicator. In order to give others the same abilities, I want to raise awareness of the efforts the WordPress Antwerp team is putting into this. It's rare that I see a user group organise something like this and I'd love to see more of it.

To get started, sign up at the Meetup page of WordPress Antwerp & get started with public speaking!

(Sorry English readers, it's a local meetup that'll be Dutch/Flemish only)

The post Antwerp WordPress User Group offering public speaking course appeared first on

October 05, 2017

At work, I help maintain a smartcard middleware that is provided to Belgian citizens who want to use their electronic ID card to, e.g., log on to government websites. This middleware is a piece of software that hooks into various browsers and adds a way to access the smartcard in question, through whatever APIs the operating system and the browser in question provide for that purpose. The details of how that is done differ between each browser (and in the case of Google Chrome, for the same browser between different operating systems); but for Firefox (and Google Chrome on free operating systems), this is done by way of a PKCS#11 module.

For Firefox 57, mozilla decided to overhaul much of their browser. The changes are massive, and in some ways revolutionary. It's no surprise, therefore, that some of the changes break compatibility with older things.

One of the areas in which breaking changes were made is in the area of extensions to the browser. Previously, Firefox had various APIs available for extensions; right now, all APIs apart from the WebExtensions API are considered "legacy" and support for them will be removed from Firefox 57 going forward.

Since installing a PKCS#11 module manually is a bit complicated, and since the legacy APIs provided a way to do so automatically provided the user would first install an add-on (or provided the installer of the PKCS#11 module sideloads it), most parties who provide a PKCS#11 module for use with Firefox will provide an add-on to automatically install it. Since the alternative involves entering the right values in a dialog box that's hidden away somewhere deep in the preferences screen, the add-on option is much more user friendly.

I'm sure you can imagine my dismay when I found out that there was no WebExtensions API to provide the same functionality. So, after asking around a bit, I filed bug 1357391 to get a discussion started. While it took some convincing initially to get people to understand the reasons for wanting such an API, eventually the bug was assigned the "P5" priority -- essentially, a "we understand the need and won't block it, but we don't have the time to implement it. Patches welcome, though" statement.

Since having an add-on was something that work really wanted, and since I had the time, I got the go-ahead from management to look into implementing the required code myself. It became obvious rather quickly that my background in Firefox was fairly limited, though, and so I was assigned a mentor to help me through the process.

Having been a Debian Developer for the past fifteen years, I do understand how to develop free software. Yet, the experience was different enough that I still learned some new things about free software development, which was somewhat unexpected.

Unfortunately, the process took much longer than I had hoped, which meant that the patch was not ready by the time Firefox 57 was branched off mozilla's "central" repository. The result of that is that while my patch has been merged into what will eventually become Firefox 58, it looks strongly as though it won't make it into Firefox 57. That's going to cause some severe headaches, which I'm not looking forward to; and while I can certainly understand the reasons for not wanting to grant the exception for the merge into 57, I can't help but feeling like this is a missed opportunity.

Anyway, writing code for the massive Open Source project that mozilla is has been a load of fun, and in the process I've learned a lot -- not only about Open Source development in general, but also about this weird little thing that Javascript is. That might actually be useful for this other project that I've got running here.

In closing, I'd like to thank Tomislav 'zombie' Jovanovic for mentoring me during the whole process, without whom it would have been doubtful if I would even have been ready by now. Apologies for any procedural mistakes I've made, and good luck in your future endeavours! :-)

October 04, 2017


I've been enjoying using Laravel a lot lately, a modern PHP framework that comes with queues, a CLI component, decent standards and an incredibly large package ecosystem, not least thanks to the Spatie guys who publish a ton of their work online.

What has always fascinated me by the Laravel ecosystem is that the creator, Taylor Otwell, saw the bigger picture of application development. It's not just about writing code, it's about running infrastructure to support it (Laravel Forge), deploying code reliably (Laravel Envoyer), writing maintainable tests (Laravel Dusk), ... Everything is neatly packaged and available.

Forge: a managed hosting alternative

With Forge, everyone can create a Laravel-optimized server on providers like Digital Ocean or Linode in mere minutes. A VM gets deployed, a config is written and you can SSH and git clone to get started.

While I see the appeal, the sysadmin in me wonders;

  • Who monitors those servers? If MySQL crashes at 11PM, who fixes it?
  • Who takes care of the updates? The security ones get auto-applied (a sane default), but who takes care of package updates?
  • Who handles the security of the machines? Do you know what's running? Do you know its configs? What are you exposing? What versions are you running?
  • Who takes care of the back-ups of the databases and files? How regularly are they stored? Are they copied offsite?
  • How quickly can you get up and running again if a server crashes? Or if it accidentally gets deleted?

So in general: who manages that Forge server?

I fear that, for many sites deployed via Forge, there isn't anyone actively managing that server. That's a shame, because even though Forge's defaults are OK-ish, one day you'll wish your site/server was actually managed by a team that understands hosting and servers. And that's where we come in.

At Nucleus, we specialize in managed hosting & custom made server setups. And we've got a pre-built template specific for running Laravel applications. We manage all the configs, we take care of back-ups, the 24/7 support and interventions, the monitoring & graphing of your CPU/memory/disk capacity, monthly reporting of uptime, etc.

If you're looking for Managed Laravel Hosting, come have a chat. Our Laravel-optimized servers come pre-configured with all you need like PHP 7.1, Redis, the schedule:run cron, supervisor workers, a pre-generated .env config, a deploy script, pre-installed composer/yarn, SSH access, ... well, all you need to reliably run Laravel.

Deploy with the ease of Envoyer, tailored to our servers

I'll admit, our server setup is slightly different from Forge's. I think it's better in a couple of critical ways, though, which is why we've tailored the deploy mechanism to our setup.

For starters, we run CentOS over the latest Ubuntu, for stability. But we combine it with modern packages, so you get PHP 7.1, MariaDB 10.2, Redis 4 & all other up-to-date packages you'd expect.

We also run multiple PHP-FPM master pools for better OPcache efficiency, multiple Redis instances, tight firewalling, an opinionated (but proven) directory layout, ... all things that slightly influence your deployment. To make that easier, we publish a simple Laravel package to help take care of your deploys.

To install, run these 2 commands in your Laravel application;

$ composer require nucleus/laravel-deploy
$ php artisan vendor:publish --provider=Nucleus\\Deploy\\DeployServiceProvider

After that, deploying to your Nucleus server(s) is as simple as:

$ php artisan deploy

That's it.

The deploy reads a few parameters from your .env configuration (like host, username, your git repository location etc.) and handles the rest.

It uses the nucleus/laravel-deploy package, whose source is up on GitHub. Feedback is more than welcome! It's only a 1.0 version now; we plan to extend the functionality with HipChat/Slack hooks, better notifications, multi-server support & whatever fancy things we can come up with.

Don't like the way it deploys? Change it. It's a Laravel Blade template, easy to read, easy to extend. It's based on Spatie's deploy script, tuned to our stack.

Ease of use + stability + quality = Nucleus

OK, sounds like some marketing BS, I agree. ;-)

If you're developing a Laravel application and you're looking for reliable, quality hosting by a team of experts who -- quite literally -- speak your language, poke me on Twitter as @mattiasgeniar or have a look at the website. I'd love to have a chat with you to see how we can help support your business and how we can improve our Laravel-focussed hosting offering.

The post Laravel Forge + Envoyer + Managed Hosting = Nucleus appeared first on

October 03, 2017

We are pleased to announce the developer rooms that will be organised at FOSDEM 2018. Developer rooms are assigned to self-organising groups to work together on open source projects, to discuss topics relevant to a broader subset of the community, etc. The individual developer room organisers will issue their calls for participation in the next few days. We will update this table with links to the calls for participation.

Both days, 3 February & 4 February 2018:

  • Embedded, Mobile and Automotive (announcement; CfP deadline 2017-12-11)
  • Virtualization and IaaS (announcement; CfP deadline 2017-12-01)

Saturday 3 February 2018:
My girlfriend is an EU citizen. No level of government in Belgium knows her civil status, so it is "onbepaald" (undetermined).

The city of Leuven sends her a letter. It doesn't like having people with civil status "undetermined" in its registers. That causes trouble, for instance when you want to enter into legal cohabitation. How hard can it be, anno 2017, to turn "undetermined" into "ongehuwd" (unmarried)?

Day 1. My girlfriend dutifully complies and springs into action. She requests her data from the population register from the Latvian authorities. Electronically. 11 pages. On page 1, in a few lines, all the data needed to become "unmarried" for the Belgian authorities. In Latvian, mind you.

The Leuven civil registry can only be reached by appointment. For most matters you have to go several times because something isn't entirely by the book. So we email, to be sure this document is good enough.

Day 6. The (friendly!) civil registrar replies to our email. At first he refuses the document. "The document you sent does not resemble the documents in our sources.
Our sources state that the documents need to be issued by the Latvian Ministry of the Interior. In addition, to make the change possible, the document will have to be translated into Dutch by a sworn translator. Attached you will find a list of translators."
Day 7. The electronic population register extract was indeed issued by the Latvian Ministry of the Interior. Their website clearly says so. So I send an email back, quoting and translating into Dutch the relevant parts of the website of the Latvian Ministry of the Interior.
Plus the contact details of a sworn Latvian-Dutch translator, because the list of sworn translators he had sent contained none.
I also call, and get the promise that the matter will be put before the head of the department "this very day".

Day 13. I send the city's civil registry a reminder, since there is still no reply to my message. That same day, a reply from the head of the civil registry: "We can accept this document (without legalisation; Latvia and Belgium are both parties to the 1987 Brussels Convention), but with a sworn translation into Dutch."

I also request a quote for a sworn translation of the entire document. Remember: all the useful information is in the first few lines, but my repeated question whether translating only those would suffice had gone unanswered. Sometimes peace of mind is worth something too...

Day 14. Reply to our quote request for the sworn translation: €80 excl. VAT, including legalisation at the court. Ah, but I remember the reply from the head of the civil registry: "We can accept this document (without legalisation), but with a sworn translation into Dutch." So I ask for, and get, a small discount because legalisation isn't needed...

Day 17. The electronic translation is ready. Now to wait for a scan with the sworn translator's signature on it.

Day 19. The weekend is over. The civil registry is working again. I call to ask whether I can send a scan, or whether we need to make an appointment (one week waiting time) and hand it over in person. We have to come hand it over in person anyway. And while the original population register extract may be exempt from legalisation, the sworn translation is not. It has to be legalised. By the court of first instance where the sworn translator took her oath. Not Leuven, but Antwerp.

We email the sworn translator asking her to arrange legalisation after all.

The waiting time at city hall for this kind of appointment is at least a week. So I go ahead and make two appointments for early next week. If one of them turns out to be unnecessary, I can still cancel it. The alternative is possibly another week of delay.

Day 20. A prompt reply from the sworn translator. For an extra €25 excl. VAT (€15 more than the original quote, which included legalisation) she can arrange the legalisation. A fair proposal, which we gladly accept.

To be continued...

A pinhead of hope for the future is an EU regulation from July 2016. Data exchange between EU member states will become smoother. Maybe. In a few years. On paper. In a very limited number of cases.

Dear Europe, dear EU member states,

A bit more ambition, please! There is no excuse, none whatsoever, for letting this be so incredibly slow and cumbersome. Fix it. Not in 2021, 2024 or later, but this year!

October 02, 2017

Drupal + React

Last week at DrupalCon Vienna, I proposed adding a modern JavaScript framework to Drupal core. After the keynote, I met with core committers, framework managers, JavaScript subsystem maintainers, and JavaScript experts in the Drupal community to discuss next steps. In this blog post, I look back on how things have evolved since the last time we explored adding a new JavaScript framework to Drupal core two years ago, and on what we believe are the next steps after DrupalCon Vienna.

As a group, we agreed that we had learned a lot from watching the JavaScript community grow and change since our initial exploration. We agreed that today, React would be the most promising option given its expansive adoption by developers, its unopinionated and component-based nature, and its suitability for building new Drupal interfaces in an incremental way. Today, I'm formally proposing that the Drupal community adopt React, after discussion and experimentation has taken place.

Two years ago, it was premature to pick a JavaScript framework

Three years ago, I developed several convictions related to "headless Drupal" or "decoupled Drupal". I believed that:

  1. More and more organizations wanted a headless Drupal so they could use a modern JavaScript framework to build application-like experiences.
  2. Drupal's authoring and site building experience could be improved by using a more modern JavaScript framework.
  3. JavaScript and Node.js were going to take the world by storm and that we would be smart to increase the amount of JavaScript expertise in our community.

(For the purposes of this blog post, I use the term "framework" to include both full MV* frameworks such as Angular, and also view-only libraries such as React combined piecemeal with additional libraries for managing routing, states, etc.)

By September 2015, I had built up enough conviction to write several long blog posts about these views (post 1, post 2, post 3). I felt we could accomplish all three things by adding a JavaScript framework to Drupal core. After careful analysis, I recommended that we consider React, Ember and Angular. My first choice was Ember, because I had concerns about a patent clause in Facebook's open-source license (since removed) and because Angular 2 was not yet in a stable release.

At the time, the Drupal community didn't like the idea of picking a JavaScript framework. The overwhelming reactions were these: it's too early to tell which JavaScript framework is going to win, the risk of picking the wrong JavaScript framework is too big, picking a single framework would cause us to lose users that favor other frameworks, etc. In addition, there were a lot of different preferences for a wide variety of JavaScript frameworks. While I'd have preferred to make a bold move, the community's concerns were valid.

Focusing on Drupal's web services instead

By May of 2016, after listening to the community, I changed my approach; instead of adding a specific JavaScript framework to Drupal, I decided we should double down on improving Drupal's web service APIs. Instead of being opinionated about what JavaScript framework to use, we would allow people to use their JavaScript framework of choice.

I did a deep dive on the state of Drupal's web services in early 2016 and helped define various next steps (post 1, post 2, post 3). I asked a few of the OCTO team members to focus on improving Drupal 8's web services APIs; funded improvements to Drupal core's REST API, as well as JSON API, GraphQL and OpenAPI; supported the creation of Waterwheel projects to help bootstrap an ecosystem of JavaScript front-end integrations; and most recently supported the development of Reservoir, a Drupal distribution for headless Drupal. There is also a lot of innovation coming from the community with lots of work on the Contenta distribution, JSON API, GraphQL, and more.

The end result? Drupal's web service APIs have progressed significantly the past year. Ed Faulkner of Ember told us: "I'm impressed by how fast Drupal made lots of progress with its REST API and the JSON API contrib module!". It's a good sign when a core maintainer of one of the leading JavaScript frameworks acknowledges Drupal's progress.

The current state of JavaScript in Drupal

Looking back, I'm glad we decided to focus first on improving Drupal's web services APIs; we discovered that there was a lot of work left to stabilize them. Cleanly integrating a JavaScript framework with Drupal would have been challenging 18 months ago. While there is still more work to be done, Drupal 8's available web service APIs have matured significantly.

Furthermore, by not committing to a specific framework, we are seeing Drupal developers explore a range of JavaScript frameworks and members of multiple JavaScript framework communities consuming Drupal's web services. I've seen Drupal 8 used as a content repository behind Angular, Ember, React, Vue, and other JavaScript frameworks. Very cool!

There is a lot to like about how Drupal's web service APIs matured and how we've seen Drupal integrated with a variety of different frameworks. But there is also no denying that not having a JavaScript framework in core came with certain tradeoffs:

  1. It created a barrier for significantly leveling up the Drupal community's JavaScript skills. In my opinion, we still lack sufficient JavaScript expertise among Drupal core contributors. While we do have JavaScript experts working hard to maintain and improve our existing JavaScript code, I would love to see more experts join that team.
  2. It made it harder to accelerate certain improvements to Drupal's authoring and site building experience.
  3. It made it harder to demonstrate how new best practices and certain JavaScript approaches could be leveraged and extended by core and contributed modules to create new Drupal features.

One trend we are now seeing is that traditional MV* frameworks are giving way to component libraries; most people seem to want a way to compose interfaces and interactions with reusable components (e.g. libraries like React, Vue, Polymer, and Glimmer) rather than use a framework with a heavy focus on MV* workflows (e.g. frameworks like Angular and Ember). This means that my original recommendation of Ember needs to be revisited.

Several years later, we still don't know what JavaScript framework will win, if any, and I'm willing to bet that waiting two more years won't give us any more clarity. JavaScript frameworks will continue to evolve and take new shapes. Picking a single one will always be difficult and to some degree "premature". That said, I see React having the most momentum today.

My recommendations at DrupalCon Vienna

Given that it's been almost two years since I last suggested adding a JavaScript framework to core, I decided to bring the topic back in my DrupalCon Vienna keynote presentation. Prior to my keynote, there had been some renewed excitement and momentum behind the idea. Two years later, here is what I recommended we should do next:

  • Invest more in Drupal's API-first initiative. In 2017, there is no denying that decoupled architectures and headless Drupal will be a big part of our future. We need to keep investing in Drupal's web service APIs. At a minimum, we should expand Drupal's web service APIs and standardize on JSON API. Separately, we need to examine how to give API consumers more access to and control over Drupal's capabilities.
  • Embrace all JavaScript frameworks for building Drupal-powered applications. We should give developers the flexibility to use their JavaScript framework of choice when building front-end applications on top of Drupal — so they can use the right tool for the job. The fact that you can front Drupal with Ember, Angular, Vue, React, and others is a great feature. We should also invest in expanding the Waterwheel ecosystem so we have SDKs and references for all these frameworks.
  • Pick a framework for Drupal's own administrative user interfaces. Drupal should pick a JavaScript framework for its own administrative interface. I'm not suggesting we abandon our stable base of PHP code; I'm just suggesting that we leverage JavaScript for the things that JavaScript is great at by moving relevant parts of our code from PHP to JavaScript. Specifically, Drupal's authoring and site building experience could benefit from user experience improvements. A JavaScript framework could make our content modeling, content listing, and configuration tools faster and more application-like by using instantaneous feedback rather than submitting form after form. Furthermore, using a decoupled administrative interface would allow us to dogfood our own web service APIs.
  • Let's start small by redesigning and rebuilding one or two features. Instead of rewriting the entirety of Drupal's administrative user interfaces, let's pick one or two features, and rewrite their UIs using a preselected JavaScript framework. This allows us to learn more about the pros and cons, allows us to dogfood some of our own APIs, and if we ultimately need to switch to another JavaScript framework or approach, it won't be very painful to rewrite or roll the changes back.
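As a small illustration of the second recommendation, here is a sketch of what consuming Drupal's web services from a decoupled JavaScript application looks like. The site URL is made up and the sample payload is hand-written, but the endpoint path and document shape follow the JSON API module's conventions:

```javascript
// Sketch: consuming Drupal's JSON API from a decoupled front end.
// The endpoint path follows the JSON API module's /jsonapi/{entity_type}/{bundle}
// convention; example.com is a placeholder site.
const endpoint = 'https://example.com/jsonapi/node/article';

// A JSON API document as the module would return it (trimmed to essentials).
const sampleResponse = {
  data: [
    { type: 'node--article', id: 'uuid-1', attributes: { title: 'Hello' } },
    { type: 'node--article', id: 'uuid-2', attributes: { title: 'World' } },
  ],
};

// Pull the titles out of the JSON API document. Against a real site you would
// fetch(endpoint, { headers: { Accept: 'application/vnd.api+json' } })
// and parse the response body instead of using a hand-made sample.
function articleTitles(doc) {
  return doc.data.map((resource) => resource.attributes.title);
}

console.log(articleTitles(sampleResponse)); // [ 'Hello', 'World' ]
```

Any front end that speaks HTTP and JSON — React, Ember, Angular, Vue, or plain JavaScript — can consume the same document, which is the point of being unopinionated about the client.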

Selecting a JavaScript framework for Drupal's administrative UIs

In my keynote, I proposed a new strategic initiative to test and research how Drupal's administrative UX could be improved by using a JavaScript framework. The feedback was very positive.

As a first step, we have to choose which JavaScript framework will be used as part of the research. Following the keynote, we had several meetings at DrupalCon Vienna to discuss the proposed initiative with core committers, all of the JavaScript subsystem maintainers, as well as developers with real-world experience building decoupled applications using Drupal's APIs.

There was unanimous agreement that:

  1. Adding a JavaScript framework to Drupal core is a good idea.
  2. We want to have sufficient real-use experience to make a final decision prior to 8.6.0's development period (Q1 2018). To start, the Watchdog page would be the least intrusive interface to rebuild and would give us important insights before kicking off work on more complex interfaces.
  3. While a few people named alternative options, React was our preferred option, by far, due to its high degree of adoption, component-based and unopinionated nature, and its potential to make Drupal developers' skills more future-proof.
  4. This adoption should be carried out in a limited and incremental way so that the decision is easily reversible if better approaches come later on.

We created an issue on the Drupal core queue to discuss this more.


Drupal supporting different JavaScript front ends: Drupal should support a variety of JavaScript libraries on the user-facing front end while relying on a single shared framework as a standard across Drupal administrative interfaces.

In short, I continue to believe that adopting more JavaScript is important for the future of Drupal. My original recommendation to include a modern JavaScript framework (or JavaScript libraries) for Drupal's administrative user interfaces still stands. I believe we should allow developers to use their JavaScript framework of choice to build front-end applications on top of Drupal and that we can start small with one or two administrative user interfaces.

After meeting with core maintainers, JavaScript subsystem maintainers, and framework managers at DrupalCon Vienna, I believe that React is the right direction for Drupal's administrative interfaces, but we encourage everyone in the community to discuss our recommendation. Doing so would allow us to make Drupal easier to use for site builders and content creators in an incremental and reversible way, keep Drupal developers' skills relevant in an increasingly JavaScript-driven world, and move us ahead with modern tools for building user interfaces.

Special thanks to Preston So for contributions to this blog post and to Matt Grill, Wim Leers, Jason Enter, Gábor Hojtsy, and Alex Bronstein for their feedback during the writing process.

I published the following diary on “Investigating Security Incidents with Passive DNS“.

Sometimes when you need to investigate a security incident or to check for suspicious activity, you become frustrated because the online resource that you’re trying to reach has already been cleaned. We cannot blame system administrators and webmasters who are just doing their job. If some servers or websites remain compromised for weeks, others are very quickly restored/patched/cleaned to get rid of the malicious content. It’s the same for domain names. Domains registered only for malicious purposes can be suspended to prevent further attacks. If the domain is not suspended, the offending DNS record is simply removed… [Read more]

[The post [SANS ISC] Investigating Security Incidents with Passive DNS has been first published on /dev/random]

September 28, 2017

Recently we got our hands on some aarch64 (aka ARMv8 / 64-bit) nodes running in a remote DC. On my (already too long) TODO/TOTEST list I had the idea of testing armhfp VMs on top of aarch64. The reason is that when I need to test our packages, using my own CubieTruck or Raspberry Pi 3 is time-consuming: removing the sdcard, reflashing it with the correct CentOS 7 image, booting, testing the pkg/update, etc ...

So is it possible to just automate this with an available aarch64 node as hypervisor? Sure! And it's pretty straightforward if you have already played with libvirt. Let's start with a CentOS 7 aarch64 minimal setup and then:

yum install qemu-kvm-tools qemu-kvm virt-install libvirt libvirt-python libguestfs-tools-c
systemctl enable libvirtd --now

That's pretty basic, but for armhfp we'll have to do some extra steps: qemu normally tries to simulate a BIOS/UEFI boot, which armhfp doesn't support, and qemu doesn't emulate the u-boot that would normally chainload to the RootFS in the guest VM.

So here is just what we need :

  • Import the RootFS from an existing image
curl <image-url> | unxz > /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-CubieTruck.img
  • Convert image to qcow2 (that will give us more flexibility) and extend it a little bit
qemu-img convert -f raw -O qcow2 /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-CubieTruck.img /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2
qemu-img resize /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2 +15G
  • Extract kernel+initrd as libvirt will boot that directly for the VM
mkdir /var/lib/libvirt/armhfp-boot
virt-copy-out -a /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2 /boot/ /var/lib/libvirt/armhfp-boot/

So now that we have a RootFS, and also kernel/initrd, we can just use virt-install to create the VM (pointing to existing backend qcow2) :

virt-install \
 --name centos7_armhfp \
 --memory 4096 \
 --boot kernel=/var/lib/libvirt/armhfp-boot/boot/vmlinuz-4.9.40-203.el7.armv7hl,initrd=/var/lib/libvirt/armhfp-boot/boot/initramfs-4.9.40-203.el7.armv7hl.img,kernel_args="console=ttyAMA0 rw root=/dev/sda3" \
 --disk /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2 \
 --import \
 --arch armv7l \
 --machine virt

And here we go: we have an armhfp VM that boots really fast (compared to an armhfp board using a microsd card, of course).

At this stage, you can configure the node, etc. The only thing you have to remember is that the kernel is provided from outside the VM, so after a kernel update inside the VM you need to extract the new kernel and boot on that one. Let's show how to do that. In the above example, we configured the VM with 4GB of RAM, but only 3 are really seen inside (remember 32-bit mode and the need for PAE on i386?)

So let's use this example to show how to switch kernels. From the armhfp VM:

# Let's extend the filesystem first, as we have a bigger disk
growpart /dev/sda 3
resize2fs /dev/sda3
yum update -y
yum install kernel-lpae
systemctl poweroff # we'll modify libvirt conf file for new kernel

Back on the hypervisor, we can again extract the needed files:

virt-copy-out -a /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2 /boot/vmlinuz-4.9.50-203.el7.armv7hl+lpae /var/lib/libvirt/armhfp-boot/boot/
virt-copy-out -a /var/lib/libvirt/images/CentOS-Userland-7-armv7hl-Minimal-1708-guest.qcow2 /boot/initramfs-4.9.50-203.el7.armv7hl+lpae.img /var/lib/libvirt/armhfp-boot/boot/

Then just virsh edit centos7_armhfp so that the kernel and initrd point to the correct locations:
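The relevant part of the domain XML after the edit would look something like this (a sketch: the paths match the +lpae kernel extracted above, and the kernel arguments are the ones we passed to virt-install; adjust the version strings to whatever kernel you actually installed):

```xml
<!-- Domain XML fragment (virsh edit centos7_armhfp): direct kernel boot -->
<os>
  <type arch='armv7l' machine='virt'>hvm</type>
  <kernel>/var/lib/libvirt/armhfp-boot/boot/vmlinuz-4.9.50-203.el7.armv7hl+lpae</kernel>
  <initrd>/var/lib/libvirt/armhfp-boot/boot/initramfs-4.9.50-203.el7.armv7hl+lpae.img</initrd>
  <cmdline>console=ttyAMA0 rw root=/dev/sda3</cmdline>
</os>
```

On the next virsh start, libvirt boots the LPAE kernel directly, and the full 4GB of RAM becomes visible inside the guest.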


Now that we have a "gold" image, we can even use existing tools to quickly provision other nodes on that hypervisor:

time virt-clone --original centos7_armhfp --name armhfp_guest1 --file /var/lib/libvirt/images/armhfp_guest1.qcow2
Allocating 'armhfp_guest1.qcow2'                                               |  18 GB  00:00:02     

Clone 'armhfp_guest1' created successfully.

real    0m2.809s
user    0m0.473s
sys 0m0.062s

time virt-sysprep --add /var/lib/libvirt/images/armhfp_guest1.qcow2 --operations defaults,net-hwaddr,machine-id,net-hostname,ssh-hostkeys,udev-persistent-net --hostname guest1

virsh start armhfp_guest1

As simple as that. Of course, in the previous example we were just using the default network from libvirt, not a bridge, but you get the idea: everything else uses well-known libvirt concepts on Linux.

When you are performing penetration tests for your customers, you need to build your personal arsenal. Tools and pieces of hardware and software are collected here and there, engagement after engagement, to grow your toolbox. To perform wireless intrusion tests, I’m a big fan of the WiFi Pineapple. I’ve had one for years (model MK5). It’s not the very latest but it still does a good job. But recently, after a discussion with a friend, I bought a new wireless toy: the WiNX!

The device is very small (3.5 × 3 cm), based on an ESP-WROOM32 module. It comes with a single interface: a micro USB port to get some power and provide the serial console. No need to have a TCP/IP stack or a browser to manage it; you can just connect it to any device that has a USB port and a terminal emulator (minicom, putty, screen, …). It may not seem very user-friendly to some of you, but I really like this! The best solution that I found until now is to use the Arduino IDE and its serial monitor tool. You can type your commands in the dedicated field and get the results in the main window:


The device can be flashed with different versions of the firmware that offer the following core features. You can use the WiNX as:

  • a WiFi scanner
  • a WiFi sniffer
  • a WiFi honeypot

Of course, my preferred mode is the honeypot. While the firmware comes with default examples of captive portals, it’s very easy to design your own. The only restrictions are that the HTML page must be smaller than 150KB and must include all its components inline (CSS, images Base64-encoded). The form may contain your own fields (e.g. a token, a CAPTCHA, a CC number) and must simply POST to “/” on the web server to have all the fields logged to the internal storage.
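A minimal portal satisfying those constraints could look like this (a sketch: the page title, field names, and styling are invented; only the inline-assets requirement and the POST to “/” come from the description above):

```html
<!-- Minimal captive-portal sketch: everything inline, form POSTs to "/". -->
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Guest WiFi</title>
  <style>
    body { font-family: sans-serif; max-width: 20em; margin: 2em auto; }
    input { display: block; width: 100%; margin-bottom: 1em; }
  </style>
</head>
<body>
  <h1>Guest WiFi access</h1>
  <!-- Any extra fields (token, voucher code, ...) get logged as well. -->
  <form method="POST" action="/">
    <input type="text" name="email" placeholder="Email address">
    <input type="password" name="password" placeholder="Password">
    <input type="submit" value="Connect">
  </form>
</body>
</html>
```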

Here is an example of a deceptive page that I made for testing purposes:

Exki Portal Sample

To use the device, you just need to plug it into a computer and it boots in a few seconds. Even better, you can use it with a power bank and leave it in a discreet place! Cheap, small, easy to manage — I’d definitely recommend adding this gadget to your arsenal!


[The post WiNX: The Ultra-Portable Wireless Attacking Platform has been first published on /dev/random]