Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

January 17, 2020

We have just published the second set of interviews with our main track and keynote speakers. The following interviews give you a lot of interesting reading material about various topics, from the forgotten history of early Unix to the next technological shift of computing: Daniel Riek: How Containers and Kubernetes re-defined the GNU/Linux Operating System. A Greybeard's Worst Nightmare; David Stewart: Improving protections against speculative execution side channel; James Bottomley: The Selfish Contributor Explained; Jon 'maddog' Hall: FOSSH - 2000 to 2020 and beyond!. maddog continues to pontificate; Justin W. Flory and Michael Nolan: Freedom and AI: Can Free Software…
Here’s a small trick when using mysqldump: you don’t have to dump an entire database, you can dump individual tables too.
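
For example (a minimal sketch; the database, table and user names below are placeholders):

mysqldump -u myuser -p mydatabase customers orders > two-tables.sql

# and to restore those tables later:
mysql -u myuser -p mydatabase < two-tables.sql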

January 15, 2020

Happy nineteenth birthday

Nineteen years ago today, I released Drupal 1.0.0. Every day, for the past nineteen years, the Drupal community has collaborated on providing the world with an Open Source CMS and making a difference in how the web is built and run.

It's easy to forget that software is written one line of code at a time. And that adoption is driven one website at a time. I look back on nearly two decades of Drupal, and I'm incredibly proud of our community, and how we've contributed to a more open, independent internet.

Today, many of us in the Drupal community are working towards the launch of Drupal 9. Major releases of Drupal only happen every 3-5 years. They are an opportunity to bring our community together, create something meaningful, and celebrate our collective work. But more importantly, major releases are our best opportunity to re-engage past users and attract new users, explaining why Drupal is better than closed CMS platforms.

As we mark Drupal's 19th year, let's all work together on a successful launch of Drupal 9 in 2020, including a widespread marketing strategy. It's the best birthday present we can give to Drupal.

I spent the last week or so building Docker images and a set of YAML files that allows one to run SReview, my 99%-automated video review and transcode system, inside minikube, a program that sets up a mini Kubernetes cluster inside a VM for development purposes.

I wish the above paragraph would say "inside Kubernetes", but alas, unless your Kubernetes implementation has a ReadWriteMany volume that can be used from multiple nodes, this is not quite the case yet. In order to fix that, I am working on adding an abstraction layer that will transparently download files from an S3-compatible object store; but until that is ready, this work is not yet useful for large installations.

But that's fine! If you want to run SReview for a small conference, you can do so with minikube. It won't have the redundancy and reliability guarantees that a proper Kubernetes cluster provides, but then you don't really need those for a conference of a few days.

Here's what you do (a condensed command sketch follows the list):

  • Download minikube (see the link above)
  • Run minikube start, and wait for it to finish
  • Run minikube addons enable ingress
  • Clone the SReview git repository
  • From the toplevel of that repository, run perl -I lib scripts/sreview-config -a dump|sensible-pager to see an overview of the available configuration options.
  • Edit the file dockerfiles/kube/master.yaml to add your configuration variables, following the instructions near the top
  • Once the file is configured to your liking, run kubectl apply -f master.yaml -f storage-minikube.yaml
  • Add sreview.example.com to /etc/hosts, and have it point to the output of minikube ip.
  • Create preroll and postroll templates, and download them to minikube in the location that the example config file suggests. Hint: minikube ssh has wget.
  • Store your raw recorded assets under /mnt/vda1/inputdata, using the format you specified for the $inputglob and $parse_re configuration values.
  • Profit!
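
As a condensed sketch of the steps above (assuming kubectl is run from the dockerfiles/kube directory of the repository; adjust paths to your checkout):

minikube start
minikube addons enable ingress
# from the toplevel of the SReview repository:
perl -I lib scripts/sreview-config -a dump | sensible-pager
# after editing dockerfiles/kube/master.yaml:
cd dockerfiles/kube
kubectl apply -f master.yaml -f storage-minikube.yaml
# point sreview.example.com at the minikube VM:
echo "$(minikube ip) sreview.example.com" | sudo tee -a /etc/hosts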

This doesn't explain how to add a schedule to the database. My next big project (which probably won't happen until after the next FOSDEM) is to add a more advanced administrator's interface, so that you can just log in and add things from there. For now, though, you have to run kubectl port-forward svc/sreview-database 5432, and then use psql against localhost to issue SQL commands. Yes, that sucks.
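
Concretely, that looks something like this (the database and user names are whatever you configured; "sreview" is just an example here):

kubectl port-forward svc/sreview-database 5432 &
psql -h localhost -p 5432 -U sreview sreview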

Having said that, if you're interested in trying this out, give it a go. Feedback welcome!

(many thanks to the people on the #debian-devel IRC channel for helping me understand how Kubernetes is supposed to work -- it wouldn't have worked nearly as nicely without them)

January 14, 2020

I’ve been studying the Rust Programming Language over the holidays, here are some of my first impressions. My main interest in Rust is compiling performance-critical code to WebAssembly for usage in the browser.

Rust is an ambitious language: it tries to eliminate broad categories of programming errors by detecting them during compilation. This requires more help from the programmer: reasoning about what a program does exactly is famously impossible (the halting problem), but that doesn’t mean we can’t reason about some aspects, provided we give the compiler the right inputs. Memory management is the big area in Rust where this applies. By indicating where a value is owned and where it is only temporarily borrowed, the compiler is able to infer the life-cycle of values. Similar things apply to type safety, error handling, multi-threading and preventing null references.

All very cool of course, but nothing in life is free: it requires much more precise input about what exactly you’re trying to achieve. So programming in Rust is less carefree than in other languages, but in return you get much stronger correctness guarantees. I’d say that’s worth it.

This very strict mode of compilation also means that the compiler is very picky about what it accepts. You can expect many error messages and much fighting (initially) to even get your program to compile. The error messages are very good though, so usually (but not always) they give a pretty good indication of what to fix. And once it compiles you’re rather certain that the result is good.

Another consequence is that Rust is by no means a small language. Compared to the rather succinct Go, there’s an enormous amount of concepts and syntax. All needed, but it certainly doesn’t make things easier to read.

Other random thoughts:

  • It’s a mistake to see a reference as a pointer. They’re not the same thing, but it’s very easy to confuse them while learning Rust. Thinking about moving ownership takes some adaptation.
  • Lifetimes are hard and confusing at first. This is one of the points where I feel you spend more attention on getting the language right than on the actual functionality of your code.
  • Rust has the same composable IO abstractions (Read/Write) as in the Go io package. These are excellent and a joy to work with.
  • My main worry is the complexity of the language: each new corner-case of correctness will lead to the addition of more language complexity. Have we reached the end or will things keep getting harder? One example of where the model already feels like it’s reaching the limits is RefCell.

In all, I’d say Rust is a good addition to the toolbox, for places where it makes sense. But I don’t foresee it replacing Go yet as my go-to language on the backend. It all comes down to the situation, finding the right balance between the need for performance/correctness and productivity: the right tool for the job. To be continued.



Wow, what a year 2019 was for Acquia!

Acquia 2018 business metrics

At the beginning of every year, I like to publish a retrospective to look back and take stock of how far Acquia has come over the past 12 months. I take the time to write these retrospectives because I want to keep a record of the changes we've gone through as a company and how my personal thinking is evolving from year to year.

If you'd like to read my previous retrospectives, they can be found here: 2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009. This year marks the publishing of my eleventh retrospective. When read together, these posts provide a comprehensive overview of Acquia's growth and trajectory.

Our product strategy remained steady in 2019. We continued to invest heavily in (1) our Web Content Management solutions, while (2) accelerating our move into the broader Digital Experience Platform market. Let's talk about both.

Acquia's continued focus on Web Content Management

In 2019, for the sixth year in a row, Acquia was recognized as a Leader in the Gartner Magic Quadrant for Web Content Management. Our tenure as a top vendor is a strong endorsement for the "Web Content Management in the Cloud" part of our strategy.

We continued to invest heavily in Acquia Cloud in 2019. As a result, Acquia Cloud remains the most secure, scalable and compliant cloud for Drupal. An example and highlight was the successful delivery of Special Counsel Robert Mueller's long-awaited report. According to Federal Computer Week, by 5pm on the day of the report's release, there had already been 587 million site visits, with 247 million happening within the first hour — a 7,000% increase in traffic. I'm proud of Acquia's ability to deliver at a very critical moment.

Time-to-value and costs are big drivers for our customers; people don't want to spend a lot of time installing, building or upgrading their websites. Throughout 2019, this trend has been the primary driver for our investments in Acquia Cloud and Drupal.

  • We have more than 15 employees who contribute to Drupal full-time; the majority of them focused on making Drupal easier to use and maintain. As a result of that, Acquia remained the largest contributor to Drupal in 2019.
  • In September, we announced that Acquia acquired Cohesion, a Software-as-a-Service visual Drupal website builder. Cohesion empowers marketers, content authors and designers to build Drupal websites faster and cheaper than ever before.
  • We launched a multitude of new features for Acquia Cloud which enabled our customers to make their sites faster and more secure. To make our customers' sites faster, we added a free CDN for all Cloud customers. All our customers also got a New Relic Pro subscription for application performance management (APM). We released Acquia Cloud API v2 with double the number of endpoints to maximize customer productivity, added single sign-on capabilities, obtained FIPS compliance, and much more.
  • We rolled out many "under the hood" improvements; for example, thanks to various infrastructure improvements our customers' sites saw performance improvements anywhere from 30% to 60% at no cost to them.
  • Making Acquia Cloud easier to buy and use by enhancing our self-service capabilities has been a major focus throughout all of 2019. The fruits of these efforts will start to become more publicly visible in 2020. I'm excited to share more with you in future blog posts.

At the end of 2019, Gartner announced it is ending its Magic Quadrant for Web Content Management. We're proud of our six-year leadership streak, right up to this Magic Quadrant's end. Instead, Gartner is going to focus on the broader scope of Digital Experience Platforms, leaving stand-alone Web Content Management platforms behind.

Gartner's decision to drop the Web Content Management Magic Quadrant is consistent with the second part of our product strategy; a transition from Web Content Management to Digital Experience Management.

Acquia's expansion into Digital Experience Management

We started our expansion from Web Content Management to the Digital Experience Platform market five years ago, in 2014. We believed, and still believe, that just having a website is no longer sufficient: customers expect to interact with brands through their websites, email, chat and more. The real challenge for most organizations is to drive personalized customer experiences across all these different channels and to make those customer experiences highly relevant.

For five years now, we've been patient investors and builders, delivering products like Acquia Lift, our web personalization tool. In June, we released a completely new version of Acquia Lift. We redesigned the user interface and workflows from scratch, added various new capabilities to make it easier for marketers to run website personalization campaigns, added multi-lingual support and much more. Hands down, the new Acquia Lift offers the best web personalization for Drupal.

In addition to organic growth, we also made two strategic acquisitions to accelerate our investment in becoming a full-blown Digital Experience Platform:

  1. In May, Acquia acquired Mautic, an open marketing automation platform. Mautic helps open up more channels for Acquia: email, push notifications, and more. Like Drupal, Mautic is Open Source, which helps us deliver the only Open Digital Experience Platform as an alternative to the expensive, closed, and stagnant marketing clouds.
  2. In December, we announced that Acquia acquired AgilOne, a leading Customer Data Platform (CDP). To make customer experiences more relevant, organizations need to better understand their customers: what they are interested in, what they purchased, when they last interacted with the support organization, how they prefer to consume information, etc. Without a doubt, organizations want to better understand their customers and use data-driven decisions to accelerate growth.

We have a clear vision for how to redefine a Digital Experience Platform such that it is free of any silos.

A diagram shows how Acquia solutions unify experience creation, content and user data across different platforms.

In 2020, expect us to integrate the data and experience layers of Lift, Mautic and AgilOne, but still offer each capability on its own aligned with our best-of-breed approach. We believe that this will benefit not only our customers, but also our agency partners.

Momentum

Demand for our Open Digital Experience Platform continued to grow among the world's most well-known brands. New customers include Liverpool Football Club, NEC Corporation, TDK Corporation, L'Oreal Group, Jewelers Mutual Insurance, Chevron Phillips Chemical, Lonely Planet, and GOL Airlines among hundreds of others.

We ended the year with more than 1,050 Acquians working around the globe with offices in 14 locations. The three acquisitions we made during the year added an additional 150 new Acquians to the team. We celebrated the move to our new and bigger India office in Pune, and ended the year with 80 employees in India. We celebrated over 200 promotions or role changes showing great development and progression within our team.

We continued to introduce Acquia to more people in 2019. Our takeover of the Kendall Square subway station near MIT in Cambridge, Massachusetts, in April, for instance, helped introduce more than 272,000 daily commuters to our company. In addition to posters on every wall of the station, the campaign — in which photographs of fellow Acquians were prominently featured — included Acquia branding on entry turnstiles, 75 digital live boards, and geo-targeted mobile ads.

Last but not least, we continued our tradition of "Giving back more", a core part of our DNA and values. We sponsored 250 kids in the Wonderfund Gift Drive (an increase of 50 from 2018), raised money to help 1,000 kids in India get back to school after the floods in Kolhapur, raised more than $10,000 for Girls Who Code, $10,000 for Cancer Research UK, and more.

Some personal reflections

With such a strong focus on product and engineering, 2019 was one of the busiest years for me personally. We grew our R&D organization by about 100 employees in 2019. This meant I spent a lot of time restructuring, improving and scaling the R&D organization to make sure we could handle the increased capacity, and to help make sure all our different initiatives remain on track.

On top of that, Acquia received a substantial majority investment from Vista Equity Partners. Attracting a world-class partner like Vista involved a lot of work, and was a huge milestone for the company.

It feels a bit surreal that we crossed 1,000 employees in 2019.

There were also some lowlights in 2019. On Christmas, Acquia's SVP of Engineering, Mike Aeschliman, unexpectedly passed away. Mike was one of the three people I worked most closely with and his passing is a big loss for Acquia. I miss Mike greatly.

If I have one regret for 2019, it is that I was almost entirely internally focused. I missed hitting the road — either to visit employees, customers or Drupal and Mautic community members around the world. I hope to find a better balance in 2020.

Thank you

2019 was a busy year, but also a very rewarding year. I remain very excited about Acquia's long-term opportunity, and believe we've steered the company to be positioned at the right place at the right time. All of this is not to say 2020 will be easy. Quite the contrary, as we have a lot of work ahead of us in 2020, including the release of Drupal 9. 2020 should be another exciting year for us!

Thank you for your support in 2019!

January 13, 2020

systemd allows you to configure a service so that it automatically restarts if it crashes.
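
A minimal sketch, with a placeholder service name: create a drop-in with systemctl edit myservice containing

[Service]
Restart=on-failure
RestartSec=5s

and then run systemctl daemon-reload followed by systemctl restart myservice to pick it up.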

January 11, 2020

If your non-geek partner and/or kids are joining you at FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. As in previous years, FOSDEM is organising sightseeing tours.

January 09, 2020

We have performed some interviews with main track speakers from various tracks. To get up to speed with the topics discussed in the main track talks, you can start reading the following interviews: Amanda Brock: United Nations Technology and Innovation Labs. Open Source isn't just eating the world, it's changing it; Daniel Stenberg: HTTP/3 for everyone. The next generation HTTP is coming; Free Ekanayaka: dqlite: High-availability SQLite. An embeddable, distributed and fault tolerant SQL engine; Geir Høydalsvik: MySQL Goes to 8!; James Shubin: Over Twenty Years Of Automation; Joe Conway: SECCOMP your PostgreSQL; Ludovic Courtès: Guix: Unifying provisioning, deployment, and…

January 08, 2020

January 07, 2020

On 7 January, 2020, the Drupal module JSON:API 1.x was officially marked unsupported. This date was chosen because it is exactly 1 year after the release of JSON:API 2.0, the version of JSON:API that was eventually committed to core. Since then, the JSON:API maintainers have been urging users to upgrade to the 2.x branch and then to switch to the Drupal core version.

We understand that there are still users remaining on the 1.x branch. We will maintain security coverage of the 8.x-1.x branch for 90 days. That is, on 6 April 2020, all support for the JSON:API module outside of Drupal core will end. Please upgrade your sites accordingly.

Thanks to my fellow maintainers Gabe Sullice and Mateu Aguiló for writing this announcement!

January 06, 2020

I decided to use the holiday break to do a link audit for my personal blog. I found hundreds of links that broke and hundreds of links that now redirect. This wasn't a surprise, as I haven't spent much time maintaining links in the 13 years I've been blogging.

Broken links

Some of the broken links were internal, but the vast majority were external.

"Internal links" are links that go from one page on https://dri.es to a different page on https://dri.es. Fixing broken links feels good so I went ahead and fixed all internal links.

It's a different story for external links. "External links" are links that point to domains not under my control.

For example, in 2007 I thanked Sun Microsystems for donating a Sun Fire X4200 server to the Drupal project. In my post, I linked to http://www.sun.com/servers/entry/x4200, the Sun Fire X4200 product page. Sun has since been acquired by Oracle, the page has been removed, and the link is now dead. I saw the following options: change this particular link to point to (1) a Wikipedia page on the Sun Fire series, (2) an archived copy of the original page on archive.org, or (3) remove the link. In this case, I decided to update the link to point to Wikipedia.

Some sites that I link to have since been hijacked by porn sites. The URL used for Hillary Clinton's 2008 campaign website now points to a porn site, for example. This is unfortunate so I simply removed those links.

Redirects

Some of the external links now have URL redirects. I found what I call "obvious redirects" and "less obvious redirects".

An example of an "obvious redirect" was a link to Apple's pressroom. In my 2015 Acquia retrospective I linked to an Apple press release, https://www.apple.com/pr/library/2015/12/03Apple-Releases-Swift-as-Open-Source.html, to highlight that large organizations like Apple were starting to embrace Open Source. Today, that link automatically redirects to https://www.apple.com/newsroom/2015/12/03Apple-Releases-Swift-as-Open-Source. A slightly different URL, but ultimately the same content. One day, that redirect might cease to exist, so it felt like a good idea to update my blog post to use the new link instead. I went ahead and updated hundreds of "obvious redirects".

The more interesting case is what I call "less obvious redirects". For example, in 2012 I blogged about how the White House contributed to Drupal. It was the first time in history that the White House contributed to Open Source, and Drupal in particular. It's something that many of us in the Drupal community are very proud of. In my blog post, I linked to http://www.whitehouse.gov/blog/2012/08/23/open-sourcing-we-the-people, a page on whitehouse.gov explaining their decision to contribute. That link now redirects to http://obamawhitehouse.archives.gov/blog/2012/08/23/open-sourcing-we-the-people, a permanent archive of the Obama administration's White House website. For me, it is less obvious what to do about this link: updating the link future proofs my site, but at the cost of losing some of its significance and historic value. For now, I left the original whitehouse.gov link in place.

How to best care for old links?

I'm not entirely sure why I picked the Wikipedia link over archive.org when I updated the Sun Fire X4200 blog post, or why I left the original whitehouse.gov link in place. I also left many broken links in place because I'm undecided about what to do with them.

It is important that we care for old links. Before I continue my link clean up, I'd like to come up with a more structured, and possibly automatable, approach for link maintenance. I'm curious to learn how others care for old links, and if you know of any best practices, guidelines, or even automations.
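
For what it's worth, a crude starting point for such automation could be a small script that reports the HTTP status of every external link (links.txt holds one URL per line; a 3xx code means a redirect, 4xx/5xx or 000 means the link is broken):

while read -r url; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "$url")
  echo "$code $url"
done < links.txt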

January 05, 2020

December 30, 2019

With FOSDEM just around the corner, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch. Food will be provided. Would you like to be part of the team that makes FOSDEM tick? Sign up here! You…

December 28, 2019

There’s a tweet that has been going around in the last few days highlighting that Ubuntu’s default MOTD (Message of the Day) fetches information from an Ubuntu server.
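
For reference, on a stock Ubuntu install the fetching is done by the /etc/update-motd.d/50-motd-news script, and it can be switched off via /etc/default/motd-news (assuming the default file layout):

# disable the remote MOTD fetch
sudo sed -i 's/^ENABLED=1/ENABLED=0/' /etc/default/motd-news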

December 26, 2019

I just released Autoptimize 2.6, which comes with a bunch of new features, improvements and bugfixes.

  • New: Autoptimize can be configured at network level or at individual site-level when on multisite.
  • Extra: new option to specify what resources need to be preloaded.
  • Extra: add display=swap to Autoptimized (CSS-based) Google Fonts.
  • Images: support for lazyloading of background-images when set in the inline style attribute of a div.
  • Images: updated to lazysizes 5.2.
  • CSS/JS: no longer add type attributes to Autoptimized resources.
  • Improvement: cache clearing now also integrates with Kinsta, WP-Optimize & Nginx helper.
  • Added “Critical CSS” tab to highlight the criticalcss.com integration, which will be fully included in Autoptimize 2.7.
  • Batch of misc. smaller improvements & fixes, more info in the GitHub commit log.

The release has been tested extensively (automated unit testing, manual testing on several of my own sites and testing by users of the beta version on GitHub), but as with all software it is very unlikely to be bug-free. Feel free to post any issue with the update here or to create a separate topic in this forum.

Happy holidays to all!
frank

December 25, 2019

To make live backups of a guest from an ESXi host’s SSH UNIX shell, you can exploit the fact that when a snapshot of a VMDK file is made, the original VMDK file becomes read-only, releasing the locks that would otherwise prevent vmkfstools from creating a clone.

This means that once you have made a snapshot, you can run vmkfstools on the non-snapshot VMDK files from which the snapshot was made.

Let’s get started scripting this.

GUEST="GUESTNAME"
DISKS="$GUEST EXTRADISK"
SRC=/vmfs/volumes/STORAGE/$GUEST
DST=/vmfs/volumes/STORAGE/backup/$GUEST

First get the VmId:

VMID=`vim-cmd vmsvc/getallvms | grep $GUEST | cut -d " " -f -1`

Create a poor man’s backup snapshot on $GUEST:

vim-cmd vmsvc/snapshot.create $VMID backup poor-mans-backup 0 0

Create the clones of the non-snapshot VMDK files (the ones without numbers after $DISK):

mkdir -p $DST
for DISK in $DISKS; do
   vmkfstools -i $SRC/$DISK.vmdk $DST/$DISK.vmdk -d sesparse
done

Now remove the snapshots from $GUEST:

vim-cmd vmsvc/snapshot.removeall $VMID

Now, copy the VMX file:

cp $SRC/$GUEST.vmx $DST/$GUEST.vmx

Alternatively, you can use ghettoVCB, a little program that does the same thing.

December 23, 2019

For a Jenkins environment I had to automate the creation of a lot of identical build agents. Identical except, of course, for the network configuration. Sure, I could have used Docker or whatnot, but the organization standardized on VMware ESXi. So I had to work with the tools I got.

A neat trick that you can do with VMware is to write so-called guestinfo variables in the VMX file of your guests.

You can get SSH access to the UNIX-like environment of a VMWare ESXi host. In that environment you can do typical UNIX scripting.

First we prepare a template that has the VMware guest tools installed. We punch out the zeros of the VMDK file and all that stuff, so that it’s nicely packaged and quick to make clones from. On the guest you do:

dd if=/dev/zero of=/largefile bs=10M ; rm /largefile

On the ESXi host you do:

vmkfstools --punchzero /vmfs/volumes/STORAGE/template/DISK.vmdk

Now you can for example do this (on the ESXi host’s UNIX environment):

SRC=/vmfs/volumes/STORAGE/template
DST=/vmfs/volumes/STORAGE/auto
mkdir -p $DST/$1

# Don't use cp to make copies of vmdk files. It'll just
# take ages longer as it will copy 0x0 bytes too.
# vmkfstools is what you should use instead
vmkfstools -i $SRC/DISK.vmdk $DST/$1/DISK.vmdk -d thin

# Replace some values in the destination VMX file
cat $SRC/TEMPLATE.vmx | sed s/TEMPLATE/$1/g > $DST/$1/$1.vmx

And now of course you add the guestinfo variables:

echo "guestinfo.HOSTN=$1" >> $DST/$1/$1.vmx
echo "guestinfo.EXTRA=$2" >> $DST/$1/$1.vmx

Now when the guest boots, you can make a script that reads those guestinfo variables and, for example, lets the guest configure itself (on the guest):

#! /bin/sh
HOSTN=`vmtoolsd --cmd "info-get guestinfo.HOSTN"`
EXTRA=`vmtoolsd --cmd "info-get guestinfo.EXTRA"`
if test "$EXTRA" = "provision"; then
   echo $HOSTN > /etc/hostname
   reboot
fi

Some other useful VMWare ESXi commands:

# Register the VMX as a new virtual machine
VIMID=`vim-cmd /solo/register $DST/$1/$1.vmx`

# Turn it on
vim-cmd /vmsvc/power.on $VIMID &

# Answer 'Copied' on the question whether it got
# copied or moved
sleep 2
VMMSG=`vim-cmd /vmsvc/message $VIMID | grep "Virtual machine message" | cut -d : -f -1 | cut -d " " -f 4`
if [ ! -z $VMMSG ]; then
    vim-cmd /vmsvc/message $VIMID $VMMSG 2
fi

That should be all you need. I’m sure the $1.vmx file can be adapted so that the question doesn’t get asked in the first place, but answering it like this also worked for me.

Next thing you know, you put a loop around this and you’ve just ‘programmed’ the creation of a few hundred Jenkins build agents on some powerful piece of ESXi equipment. Imagine that. Bread on the table and your customer’s entire flock of programmers happy.
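
Assuming the snippets above are wrapped in a script that takes the guest name and the extra value as $1 and $2 (call it clone-guest.sh for the sake of argument), that loop is as simple as:

i=1
while [ $i -le 200 ]; do
   ./clone-guest.sh "jenkins-agent-$i" provision
   i=$((i + 1))
done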

But! Please don’t hire me to do your DevOps. I’ve been there before, several times. It sucks. You get to herd brogrammers. They suck the blood out of you with their massive ignorance of almost all really simple, standard things (versioning, building, packaging, branching, and anything else that must not be invented here). Instead of taking the time to be professional about their job and read five lines of documentation, they’ll waste your time with their self-invented nonsense. Which you end up having to automate. Which they always make utterly impossible and (of course) non-standard. While the standard techniques are ten million times better and easier.

Yesterday I fixed my Bestway Lay Z Spa. It gave the infamous E02 error.

So, opening up the thing it was. Because in a video someone explained that the water flow sensor is a magnetic switch, I decided to try taking the sensor itself out of the component. Then I used an external magnet to get the detached switch to close. The error was gone and I could make the motor run without any water flowing. That’s probably not a great idea if you don’t want to damage anything, so of course I didn’t do it for too long.

However, when I reinserted the sensor into the component and closed the valve myself, the E02 error still happened. I figured the magnet that gets pushed to the ceiling of the component had somehow weakened.

Then I noticed a little notch on it. I marked it in a red circle:

I decided to take a flat file and file it off. When I now closed the valve myself, I could, just like with the magnet, make the motor run without any water flowing.

I reassembled it all and reattached the device to the tub. It all works. Warm water this evening! I hope there will be stars outside.

December 22, 2019

Our keyserver is now accepting submissions for the FOSDEM 2020 keysigning event. The annual PGP keysigning event at FOSDEM is one of the largest of its kind. With more than one hundred participants every year, it is an excellent opportunity to strengthen the web of trust. For instructions on how to participate in this event, see the keysigning page. Key submissions close on Wednesday 22 January, to give us some time to generate and distribute the list of participants. Remember to bring a printed copy of this list to FOSDEM.

December 20, 2019

A few weeks ago I toyed with the idea of making a JSON prettifier myself. I often use these to re-format and debug a minified JSON payload.
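
For comparison, the usual command-line suspects both read minified JSON and print an indented version:

# with jq
jq . payload.json

# or with nothing but a Python install
python3 -m json.tool payload.json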

December 19, 2019

As part of our Oh Dear! monitoring service, we run workers. A lot of workers. Every site uptime check is a new worker, every broken links job is a new worker, every certificate check is a- you get the idea.

December 12, 2019

I published the following diary on isc.sans.edu: “Code & Data Reuse in the Malware Ecosystem“:

In the past, I already had the opportunity to give some “security awareness” sessions to developers. One topic that was always debated is the reuse of existing code. Indeed, for a developer, it’s tempting not to reinvent the wheel when somebody already wrote a piece of code that achieves the expected results. From a time-saving perspective, it’s a win for the developers, who can focus on other code. Of course, this can have side effects and introduce bugs, backdoors, etc… but that’s not today’s topic. Malware developers are also developers and have the same behavior. Code reuse has already been discussed several times… [Read more]

[The post [SANS ISC] Code & Data Reuse in the Malware Ecosystem has been first published on /dev/random]

We’ve all been there, right? You debug something, you try telnet to see if a port is open, and now you’re stuck in a telnet session.
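
For the record, the way out is telnet's escape character: press Ctrl+] to drop to the telnet prompt, then type quit:

telnet example.com 80
# once connected, press Ctrl+] to get the prompt below, then type quit
telnet> quit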

December 11, 2019

I'm excited to announce that Acquia has signed a definitive agreement to acquire AgilOne, a leading Customer Data Platform (CDP).

CDPs pull customer data from multiple sources, clean it up and combine it to create a single customer profile. That unified profile is then made available to marketing and business systems to improve the customer experience.

For the past 12 months, I've been watching the CDP space closely and have talked to a dozen CDP vendors. I believe that every organization will need a CDP (although most organizations don't realize it yet).

Why AgilOne?

According to independent research firm The CDP Institute, CDPs are a part of a rapidly growing software category that is expected to exceed $1 billion in revenue in 2019. While the CDP market is relatively new and small, a plethora of CDPs exist in the market today.

One of the reasons we really liked AgilOne is their machine learning capabilities — they will give our customers a competitive advantage. AgilOne supports machine learning models that intelligently segment customers and predict customer behaviors (e.g. when a customer is likely to purchase something). This allows for the creation and optimization of next-best action models to optimize offers and messages to customers on a 1:1 basis.

For example, lululemon, one of the most popular brands in workout apparel, collects data across a variety of online and offline customer experiences, including in-store events and website interactions, commerce transactions, email marketing, and more. AgilOne helped them integrate all those systems and create unified customer data profiles. This unlocked a lot of data that was previously siloed. Once lululemon better understood its customers' behaviors, they leveraged AgilOne's machine learning capabilities to increase attendance to local events by 25%, grow revenue from digital marketing campaigns by 10-15%, and increase site visits by 50%.

Another example is TUMI, a manufacturer of high-end suitcases. TUMI turned to AgilOne and AI to personalize outbound marketing (like emails, push notifications and one-to-one chat), smarten its digital advertising strategy, and improve the customer experience and service. The results? TUMI sent 40 million fewer emails in 2017 and made more money from them. Before AgilOne, TUMI's e-commerce revenue decreased. After they implemented AgilOne, it increased sixfold.

Fundamentally improving the customer experience

Having a great customer experience is more important than ever before — it's what sets competitors apart from one another. Taxis and Ubers both get people from point A to B, but Uber's customer experience is usually superior.

Building a customer experience online used to be pretty straightforward; all you needed was a simple website. Today, it's a lot more involved.

The real challenge for most organizations is not to redesign their website with the latest and greatest JavaScript framework. No, the real challenge is to drive customer experiences across all the different channels, including web, mobile, social, email and voice, and to make those experiences highly relevant.

I've long maintained that the two fundamental building blocks to delivering great digital experiences are (1) content and (2) user data. This is consistent with the diagram I've been using in presentations and on my blog for many years where "user profile" and "content repository" represent two systems of record (though updated for the AgilOne acquisition).

A diagram that shows organizations need both good user data and good content to deliver relevant digital experiences.

To drive results, wrangling data is not optional

To dramatically improve customer experiences, organizations need to understand their customers: what they are interested in, what they purchased, when they last interacted with the support organization, how they prefer to consume information, etc.

But as an organization's technology stack grows, user data becomes siloed within different platforms:

A diagram that illustrates how user data is siloed within different platforms, including web, email marketing, commerce, and CRM

When an organization doesn't have a 360º view of its customers, it can't deliver a great experience to them. We have all interacted with a help desk person who didn't know what we recently purchased, asked us questions we had already answered multiple times, or wasn't aware that we had already gotten some troubleshooting help through social media.

Hence, the need for integrating all your backend systems and creating a unified customer profile. AgilOne addresses this challenge, and has helped many of the world's largest brands understand and engage better with their customers.

A diagram that shows how user data is unified with AgilOne across web, email marketing, commerce, social media, and CRM.

Acquia's strategy and vision

It's easy to see how AgilOne is an important part of Acquia's vision to deliver the industry's only open digital experience platform. Together, with Drupal, Lift and Mautic, AgilOne will allow us to redefine the customer experience stack. Everything is based on Open Source and open APIs, and designed from the ground up to make it easier for marketers to create relevant, personal campaigns across a variety of channels.

A diagram shows how Acquia solutions unify experience creation, content and user data across different platforms.

Welcome to the team, AgilOne! You are a big part of Acquia's future.

The country in crisis: all the country's editors-in-chief are writing opinion pieces!

Hundreds of women heading to court. Christine Mussche is already dreaming herself rich: three hundred invoices of a few thousand euros each! That's a villa. It will surely end up as a usufruct. Because her health resort for disgruntled women can afterwards still be turned into a sauna and massage parlour, or a veritable pilgrimage site for Belgian feminism.

With what the lawyer will probably earn from this, we could also have solved the budget deficit of an average-sized village, or built a school or a typical municipal swimming pool.

Meanwhile: we also haven't had a government for months. Nobody cares.

December 10, 2019

Just sent in my vote. After carefully thinking about what I consider to be important, and reading all the options, I ended up with 84312756.

There are two options that rule out any compromise position; choice 1, "Focus on systemd", essentially says that anything not systemd is unimportant and we should just drop it. At the same time, choice 6, "support for multiple init systems is required", essentially says that you have to keep supporting other systems no matter what the rest of the world is doing lalala I'm not listening mom he's stealing my candy again.

Debian has always been a very diverse community; as a result, historically, we've provided a wide range of valid choices to our users. The result of that is that some of our derivatives have chosen Debian to base their very non-standard distribution on. It is therefore, to me, no surprise that Devuan, the very strongly no-systemd distribution, was based on Debian and not Fedora or openSUSE.

Personally I consider this variety in our users to be a good thing, and something to treasure; so any option that essentially throws that away, like choice 1, is not something I could live with. At the same time, the world has evolved, and most of our users do use systemd these days, which does provide a number of features over and above the older sysvinit system. Ignoring that fact, like option 6 does, is unreasonable in today's world.

So neither of those two options are acceptable to me.

That just leaves the other available options. All of those accept the fact that systemd has become the primary option, but differ in the amount of priority given to alternate options. The one outlier is option 5; it tries to give general guidance on portability issues, rather than commenting on the systemd situation specifically. While I can understand that desire, I don't agree with it, so it definitely doesn't get first position for me. At the same time, I don't think it's a terrible option, in that it still provides an opinion that I agree with. All in all, it barely made the cutoff above further discussion -- but really only barely so.

How did you vote?

December 06, 2019

It’s a classic issue for BotConf attendees: the last day is always a little bit rougher due to the social event organized every Thursday night. This year, we are in a French region where good wines are produced, and the event took place at the “Cité du Vin”. The night was short but I was present at the first talk! Ready as usual!

The first talk was “End-to-end Botnet Monitoring with Automated Config Extraction and Emulated Network Participation” by Kevin O’Reilly and Keith Jarvis. Sometimes you hesitate to attend the first talk of the day due to a lack of motivation and you have to force yourself.

Today, it was really worth waking up on time to attend this wonderful talk. It started with a comparison of two distinct approaches to analyzing malware samples: emulator vs. sandbox. For a sandbox, you provide a sample and the results (C2s, IPs, encryption keys, etc.) can be used as input for an emulator, and vice-versa. The next part of the talk was dedicated to CAPE (“Config And Payload Extraction”), a fork of Cuckoo with many decoding features that help to unpack and analyze samples via the browser. It has an API monitor, a debugger, a dumper, an import reconstructor, etc. Many demos were performed. I’ll definitely install it to replace my old Cuckoo instance! The project is here.

The next talk was presented by Suguru Ishimaru, Manabu Niseki and Hiroaki Ogawa: “Roaming Mantis: A melting pot of Android bots“. This campaign started in Japan in 2018 and compromised residential routers. A classic attack: DNS settings were changed to redirect victims to rogue websites that… delivered malware samples! The talk covered the malware samples that were used in the campaign:

  • MoqHao
  • FakeSpy
  • FakeCop
  • FunkyBot

Each of them was analyzed in the same way: how it was delivered, how it compromised the device, and how it communicated with the C2. The relation between them? They all used the same technique to steal money. The money laundering process was also reviewed.

After a welcome coffee break, ”The Cereals Botnet” was presented by Robert Neumann and Gergely Eberhardt. This time, no Android, no Windows but… NASes (storage devices)! This makes it a pretty unique botnet of approximately 10K bots. The targets were mainly D-Link NAS devices (consumer models).


They started by explaining how the NAS was compromised using a vulnerability in a CGI script. To better understand how it worked, they simply connected a NAS to the open Internet (note: don’t try this at home!) and were able to observe the behavior of the attacker (unusual HTTP requests, outgoing traffic and suspicious processes). The exploit went via the SMS notification system in system_mgr.cgi. Once compromised, a backdoor was installed, along with several software components (such as a VPN client), and persistence was implemented too. I like the way the bot communicated with the C2: via an RSS feed! Easy! Finally, they explained that the vulnerability was patched by the vendor (years later!) but many devices remain unpatched…

The next talk presented the result of a student research project that aimed to develop a tool to improve the generation of YARA rules: “YARA-Signator: Automated Generation of Code-based YARA Rules”, presented by Felix Bilstein and Daniel Plohmann.

No need to introduce YARA. The fact is that most public rules (73%) are based on text strings, but code-based rules are nice: they are robust and harder for attackers to circumvent, yet they are hard to write manually and thus a good candidate for automation. That is exactly what YARA-Signator does. The approach was to create signatures for the families in Malpedia. First they explained how the tool works, then the results. Interesting if you’re a big fan of YARA. The code is here.

After lunch, the scheduled talk was “Using a Cryptographic Weakness for Malware Traffic Clustering and IDS Rule Generation” by Matthijs Bomhoff and Saskia Hoogma. This is the kind of talk I fear after a nice lunch… The idea of the talk was that bad encryption implementations by attackers can be used to track them.

No coffee break was foreseen this afternoon, so we continued with three talks in a row. First, “Zen: A Complex Campaign of Harmful Android Apps” by Lukasz Siewierski. A common question related to malware samples is: are all these apps coming from the same author (or group)? Lukasz described the different techniques, collectively called “Zen”, that perform malicious activities:

  • Repacking with a custom ad SDK
  • Click fraud
  • Rooting the device
  • Fake Google account creation
  • Code injection
  • Persistence

It was a review of another Android malware…

Martijn Grooten continued with “Malspam is Different Spam“. Martijn explained that, while spam is not a new topic, it remains a common infection vector. Why are some emails more likely to pass through our filters? A few words about spam filtering; we rely on:

  • Block lists (IP, domains)
  • Sender verify (SPF, DKIM, DMARC, etc)
  • Content filtering (viagra, casino, …)
  • Link & attachments scanning

Some emails go to spam traps and the results are fed back to the spam filter (which can also update itself). Spam scales badly. Then, Martijn showed some better-crafted examples that have a good chance of not being blocked.

Finally, the conference ended with “Demystifying Banking Trojans from Latin America” by Juraj Hornák, Jakub Soucek and Martin Jirkal. They presented the LATAM banking malware landscape (targeting Spanish- and Portuguese-speaking people) and explained the techniques used by the different malware families, with the final goal of seeing the relations between them.

I liked the “easter egg and human error” part, where they explained some mistakes made by the attackers, like… keeping a command in the code to spawn a calc.exe 😉

This edition could be called the “Android Malware Edition”, given the number of presentations related to Android (like we had the “DGA Edition” a few years ago). This wrap-up closes these three days of BotConf 2019! Here are some stats provided by the organization:

  • CFP: 73 submissions
  • 3 workshops
  • 1 keynote
  • 28 presentations
  • 50 speakers
  • 1730 minutes of shared knowledge.

As usual, they also disclosed the location of the 2020 edition: we will be back in Nantes!

[The post BotConf 2019 Wrap-Up Day #3 has been first published on /dev/random]

December 05, 2019

The second day is over. Here is my daily wrap-up. Today was a national strike day in France and a lot of problems were expected with public transport. However, the organization provided buses to help attendees travel between the city center and the venue. Great service, as always 😉

After some coffee, the day started with another TLP:AMBER talk: “Preinstalled Gems on Cheap Mobile Phones” by Laura Guevara. So, to respect the TLP, nothing will be disclosed. Just keep in mind that if you use cheap Android smartphones, you get what you paid for…

Then, Brian Carter presented “Honor Among Thieves: How Stealer Malware Fuels an Underground Economy of Compromised Accounts”. It started with facts: attackers make mistakes and leak information online, like panels, configuration files or logs. You can find this information just via a browser; no need to hack them back.

Brian’s research goals were:

  • Understand the value of stolen data
  • Develop tools to collect and parse data
  • Collect info to help identify malware families based on their C2s
  • Develop detection rules to catch them

But, very importantly, researchers cannot commit crimes, contribute to malicious activities or harm victims of malware. The golden rule here is: be ethical! The research also had limitations; it’s impossible to collect everything and the data may not be representative. What was the collection process? Terabytes of data, 1M stealer log archives, many panels, builders, chat logs, forum posts, and market listings. Again, the “stolen data” were collected from ethical sources like VT, shares and open directories. Brian also commented on the economics of stealers: the market is relatively small today, only a few criminals are successful in their business, and stealer malware is low-volume. Here is an example of prices:

Then, once you have collected so much data, what do you do with it? Warn users that they are compromised, research adversaries, develop countermeasures, perform takedowns? This was an interesting talk.

The next speakers were Alexander Eremin and Alexey Shulmin, who presented “Bot with Rootkit: Update and Mine!”. The research started with a “nice” sample that installed a Microsoft patch. Cool, isn’t it? Of course, the dropper was obfuscated and, once decoded, it used the classic VirtualAlloc() and GetModuleHandleA() to decrypt the data.

Once they were able to extract strings, they continued to investigate and discovered interesting behaviors. First, the developer was nice enough to leave debugging comments via a write_to_log() function. The malware checked the OS version and the keyboard layout and created a MUTEX. So far nothing fancy, but the malware also installed a Microsoft patch if it was not already present: KB3033929. Why? The patch was required for check_crypt32_version() and SHA-2 support! Then, they explained how C2 communications are performed: over HTTP, encrypted with RC4 and Base64-encoded. Finally, a rootkit is installed to deploy a cryptominer. The rootkit is like Necurs and uses IOCTL-like registry keys. Examples of supported commands:

  • 0x220004 – update rootkit
  • 0x220008 – update payload

After a caffeine refill, it was the turn of Tom Ueltschi who presented “DESKTOP-Group – Tracking a Persistent Threat Group (using Email Headers)”. Tom is a regular speaker at Botconf and always provides good content. This talk was tagged as TLP:GREEN. Good idea: he will try to make a “white” version of his slides as soon as possible. Remember that TLP:GREEN means “Limited disclosure, restricted to the community.”

Before the lunch break, two short presentations were scheduled. The first was “The Bagsu Banker Case” by Benoît Ancel. Sorry, TLP:AMBER again.

The following talk was “Tracking Samples on a Budget” by <redacted>. It covered a personal project that has been running for two years now, showing how to build a collection of malware samples… as a student, with no budget! What (free) sources are available? Open-source repositories, honeypots, pivots, feeds, sandboxes, existing malware zoos like malshare or malc0de, etc. You can also run your own honeypot, but it’s difficult to deploy a lot of them. The talk covered the components put in place to crawl the web and how to work around some of the annoying limitations you’ll face. Then comes the post-processing:

  • Upload new samples to VT
  • Enrichment (metadata, strings, …)
  • YARA rules!
  • Pivots

A good idea is also to search recursively for open directories, which remain a goldmine. Here is the sample tracking lifecycle as implemented:

Acquire URL > Crawl > Download > Postprocessing > Store > Pivot

What about the results? After running for 2 years: 270K unique samples discovered, 25% of them not on VT, 600GB of data collected, and 78% of the samples have a 5+ score on VT. I had the opportunity to talk with the speaker later on; the code has been running fine for 2 years and just does the job. Amazing project!

After lunch, we had another restricted presentation (sorry, interesting content can’t be disclosed) – TLP:RED – by Thomas Dubier and Christophe Rieunier: “Botnet Tracking Story: from Spam Mail to Money Laundering”.

The next talk was “Finding Neutrino Botnet: from Web Scans to Botnet Architecture” by Kirill Shipulin and Alexey Goncharov. This research started with some interesting hits on a honeypot. They adapted the honeypot to respond positively and mimic a webshell. The scan was brute-forcing different webshells with a strange command: “die(md5(Ch3ck1ng))”. The malware they found had classic behavior: check if already installed, exfiltrate system information and download/execute a payload (a cryptominer in this case). It implemented persistence via WMI, was fileless and also killed its competitors.

The next malware to be analyzed was BackSwap: “BackSwap Malware Campaign Evolution” by Carlos Rubio Ricote and David Pastor Sanz. This malware was found by ESET in May 2018 in trojanized apps (OllyDbg, FileZilla, ZoomIt, …). It used position-independent code (PIC) and shellcodes hidden in BMP images (see the DKMC tool). They explained how the malware worked, how the configuration was encrypted and, once decoded, what the parameters were, like the statistics URL, the C2, the User-Agent to use, and the injection delimiter. They decrypted the web inject and explained the technique used: via the navigation bar or the developer console. Then, they reviewed some discovered campaigns targeting banks in Poland and Spain.

After the afternoon coffee break, Mathieu Tartare came on stage to talk about “Winnti Arsenal: Brand-new Supplies“. Winnti is a group that is often cited in the news and has compromised multiple organizations (telcos, software editors, healthcare, gambling, etc.). They are specialized in supply-chain attacks. They appear to have been active since 2013 (first public report) and in 2017… there was the famous CCleaner case! In 2018, the gaming industry was compromised; the first part of the talk covered this. The technique used by the malware was CRT patching: a function is called to unpack and execute the malicious code before returning to the original code. The payload was packed with a custom packer based on RC4. The first stage is a downloader that gets its C2 config and then gathers information about the victim’s computer; among other things, the C: drive name and volume ID are exfiltrated. The second stage decrypts a payload that was encrypted using… the volume ID! Finally, a cryptominer is installed. Mathieu also covered two backdoors: PortReuse, which works like good old port knocking (it sniffs the network traffic and waits for a specific magic packet), and ShadowPad.

The last talk for today was “DFIR & Crisis Management – Post-mortems & Lessons Learned in the Pain from the Field” by Vincent Nguyen. He has been involved in many security incidents and explained some interesting facts about them: how they worked, what they found (or not 😉), and also, very importantly, the lessons learned to improve the IR process.

The schedule ended with a set of lightning talks: 19 talks of 3 minutes each, with lots of interesting information, tools, stories, etc.

[The post BotConf 2019 Wrap-Up Day #2 has been first published on /dev/random]

If you’re setting up an Ubuntu 18.04 LTS server, you might want to give it a static IP address instead of one assigned by your router over DHCP.
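
On 18.04 that means a netplan file; a minimal sketch (the interface name, addresses and DNS servers below are examples, adjust them to your network):

# /etc/netplan/01-static.yaml
network:
  version: 2
  ethernets:
    ens3:
      dhcp4: no
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1, 1.1.1.1]

Apply it with sudo netplan apply.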

December 04, 2019

Hello from Bordeaux, France, where I’m attending the 7th edition (already!) of the BotConf security conference dedicated to fighting botnets. After Nantes, Nancy, Paris, Lyon, Montpellier, Toulouse and now Bordeaux, their “tour de France” is almost complete. What will be the next location? I attended all the previous editions and many wrap-ups are available on this blog. So, let’s start with the 2019 edition!

The conference was kicked off by Eric Freyssinet. This year, they received 73 submissions to the call for papers, and 3 workshops were organized yesterday. The selection process was difficult and, according to Eric, we can expect interesting talks.

After the introduction, the first talk was “DeStroid – Fighting String Encryption in Android Malware”, presented by Daniel Baier. He worked with Martin Lambertz on this topic.

Analyzing Android malware is usually not that hard because most developers use the standard API and applications can easily be decompiled, so malware authors have to use alternative techniques to obfuscate their interesting content. One of these is the use of encryption to hide strings, which are then decrypted at run time. As you can imagine, doing this process manually is a pain. To test the DeStroid tool, they used the data set provided by Malpedia (also presented at BotConf previously). They identified three techniques used to encrypt strings: using a string repository, passing strings to a decryption routine, and using native libraries. Each technique was reviewed by Daniel. Statistically, 52% of the tested Android samples use string encryption and 56% of those use the “direct pass” method. The DeStroid tool was also compared to other solutions like JMD, Deobfuscator or Dex-Oracle. The tool is available here if you’re interested.

The next speaker was Marco Riccardi who presented “Golden Chickens: Uncovering A Malware-as-a-Service (MaaS) Provider and Two New Threat Actors Using It“. The talk was flagged as TLP:AMBER so I won’t disclose details about it. Just the same remark as usual with this kind of “sensitive” slides: can you really trust a room full of 500 people? I liked one slide (without any confidential information) that explained how NOT to do attribution:

IR > Found sample > Use technique “X” > Google for “technique X” > Found references to “China” > We are attacked by China > Case closed

The next talk focused on the gaming industry. I’m definitely not a game addict, so I’m always curious about what happens in this field. Basically, like in any domain, if some profit can be made, bad guys are present! With huge profits to be made and millions of players to target, it is normal to see attacks targeting big names like Counter-Strike. Ivan Korolev and Igor Zdobnov presented “Unrevealing the Architecture Behind the Counter-Strike 1.6 Botnet: Zero-Days and Trojans”. Do you remember that Counter-Strike was born in 1999? What’s the business behind CS 1.6? People want to play with more and more people, so servers need to be “promoted” to attract more gamers. “Boosting” is the technique used to attract new players. If it can be performed with good & clean techniques, it can also be performed via malware. Ivan & Igor reviewed the different vulnerabilities found in the CS client and how they were abused to build the botnet. Vulnerabilities were found in the parsing of SAV files (saved game files) and BMP files. By exploiting vulnerabilities in the game, the client becomes part of a botnet. A rogue server can issue commands to a client like:

  • redirecting players to another server
  • tampering with configs
  • changing the behavior of the game

In the next part of the presentation, the trojan Belonard was described. It has been active since May 2017 and constantly uses new exploits & modules. Finally, they explained the take-down process.

After the lunch break, the keynote was presented by Gilles Schwoerer and Michal Salat. Michal is working for an AV vendor and Gilles is working for the French law enforcement services. Their keynote was called “Putting an end to Retadup“.

What is Retadup? Michal started with the technical part and explained the facts around this malware: it’s a worm that affected 1.3M+ computers via malicious LNK files. It’s not only a botnet but also a malware platform used by others to distribute more malware. The original version was developed in AutoIt and features persistence and the extraction of the victim’s details; it also implements a lot of anti-analysis techniques. Communication with the C2 is performed via a hardcoded list of domains and simple HTTP GET requests, Base64- or hex-encoded. When the malware was discovered and analyzed, the C2 was found to be located in France. That brought us to the second part of the keynote, presented by Gilles, who explained the take-down process. Their plan was to serve an empty update to the bots to make them inactive. This method did not remove the infection itself but was less dangerous for the end-user. US agencies were also involved in the process to redirect the DNS to the rogue C2 server deployed by the French police. This keynote was a great example of collaboration between a vendor and law enforcement services.

The next presentation was “Android Botnet Analysis – Shaoye Botnet” by Min-Chun Tsai, Jen-Ho Hsiao and Ding-You Hsiao. Like the previous talk, it reviewed another malware family targeting Android devices. In Chinese, “Shaoye” means “Young master”. They analyzed two versions of the malware. The first one used DNS hijacking in residential routers to redirect victims to a rogue website, while the second version used compromised websites to spread malware. In the first version, a fake Facebook app was installed and, in the second, a fake Sagawa Express application (Sagawa is a major transportation company in Japan). The malware samples were fully analyzed to explain how they work.

After a welcome coffee break, a very interesting talk was presented by Piotr Bialczak and Adrian Korczak: “Tracking Botnets with Long Term Sandboxing”. The idea behind the research was to improve the analysis of bots by keeping them running in a sandbox for a long period of time. We know that reverse engineering malware costs a lot of time, so we use sandboxes as much as possible. The biggest constraint is the execution time allowed, usually a few minutes. Malware developers know this and make their malware wait longer than the typical sandbox timeout, which increases the chances that the sandbox will give up by itself. The speakers created an “LTS” – Long Term Sandboxing – system optimized to let a bot run for a long time without those technical constraints. Based on QEMU and a system of external snapshots, combined with other tools like an ELK stack and Moloch, they are able to reduce CPU usage, network bandwidth, etc. They analyzed 20 families of well-known botnets and showed interesting information learned at the network level, from SMTP traffic or DGAs. For example, it was possible to detect unusual protocols and traffic to non-standard ports.
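
The speakers' exact setup wasn't detailed in my notes, but the external-snapshot idea can be sketched with plain QEMU tooling: keep a clean base image read-only and run each bot on a thin qcow2 overlay. The file names and memory size below are assumptions, not their configuration:

# Create a copy-on-write overlay on top of a clean, never-modified base image
qemu-img create -f qcow2 -b clean-base.qcow2 bot-run-01.qcow2

# Boot the overlay; the base image stays pristine and can back many parallel runs
qemu-system-x86_64 -m 2048 -drive file=bot-run-01.qcow2,format=qcow2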

The next talk was “Insights and Trends in the Data-Center Security Landscape” by Daniel Goldberg and Ophir Harpaz. There were some changes but I covered this talk last week at DeepSec (see my wrap-up here).

Then, Dimitris Theodorakis and Ryan Castellucci presented “The Hunt for 3ve“. Here again, another family of malware was dissected. This one targeted online ads – yes, ads remain a business. A few numbers: 1.8M+ infected computers, 3B+ ad requests/day, 10K+ spoofed domains, 60K+ accounts. 3ve used three techniques that were reviewed by Dimitris and Ryan:

  • Boaxxe (residential proxy)
  • BGP hi-jacking
  • Kovter

Here again, after the technical details, they explained how the take-down process was performed.

Finally, the first day ended with “Guildma: Timers Sent from Hell”, presented by Adolf Streda, Luigino Camastra, and Jan Vojtešek. This time the analyzed malware was not only a RAT but also spyware, a password stealer and banking malware. It was spread through spam campaigns and initially targeted mainly Brazil, before expanding to more countries.

That's the end of day 1! Many botnets were covered, always with the same approach: hunt, find samples, analyze them, learn how they work and organize the take-down. See you tomorrow for the second wrap-up!

[The post BotConf 2019 Wrap-Up Day #1 has been first published on /dev/random]

December 01, 2019

Once upon a time I wrote a weekly newsletter on Linux, open source & web development.

November 30, 2019

November 29, 2019

Here we go for the second wrap-up! DeepSec is over; I'm flying back to Belgium tomorrow. My first choice today was to attend “How To Create a Botnet of GSM-devices” by Aleksandr Kolchanov. Don’t forget that GSM devices are not only “phones”: Aleksandr covered devices like alarm systems, electric sockets, smart-home controllers, industrial controllers, trackers and… smartwatches for kids!

They all have features like sending notifications via SMS or calling pre-configured numbers, but they can also be configured or polled via SMS. Examples of attacks? Brute-forcing the PIN code, spoofing calls, using “hidden” SMS commands. Ok, but what are the reasons to hack them? There are direct attacks (unlock the door, steal stuff) or spying (abuse the built-in microphone). Attacks on property are also interesting: switching off electric devices (a water pump, a heating system). Also terrorism or political actions? Financial attacks (calling or sending SMS to premium numbers)? Why a botnet? To get some money! Just use it to send huge amounts of SMS, but also to DoS or for political/terrorist actions: can you imagine thousands of alarms going off at the same time? Thanks to powerful marketing, people buy these devices, so we have many of them in the wild:

  • Default settings
  • Stupid vulnerabilities
  • Not properly installed
  • Insecure by default
  • Cheap!
  • Absence of certification

After the introduction, Aleksandr explained how he performed attacks against different devices. It’s easy to hack them but the real challenge is to find targets. How? You can do mass scanning and call all numbers, but it costs money and some operators will detect you (“Why are you calling xxx times per day?”). How to search without making a call? There are web services provided by some operators that help to get info about numbers in use, and there are open APIs, databases, leaked data, etc… Once you have enough valid devices, it’s time to build the botnet:

Scan > Identify > Attack > Change settings > Profit!

It was an interesting talk to kick off the day!

The next talk was about… pacemakers! Wait, everything has been said about those devices, right? A lot of material has already been published. The big story was in 2017 when a big flaw was discovered. The talk presented by Tobias Zillner was called “500.000 Recalled Pacemakers, 2 Billion $ Stock Value Loss – The Story Behind”.

When you need to assess such medical devices, where do you get one? On a second-hand webshop! Just have a look at dotmed.com, their stock of medical devices is awesome! The ecosystem tested was: pacemakers, programmers, home monitors and the “Merlin Net”, alias “the cloud”. The first attack vector covered by Tobias was the new generation of devices that use wireless technologies (SDR): low power, short range (2m), 401-406MHz. How to find the technical specs? Just check the FCC ID and search for it; Google remains your best friend. The vulnerabilities found were an energy depletion attack (draining the battery) and a… crash of the pacemaker! The next target was the “Merlin@Home” device, which is a home monitoring system. They are easy to find on eBay:

Just perform an attack like against any embedded Linux device: connect a console, boot it, press a key to get into the bootloader, add “init=/bin/bash” to the boot command and boot in single-user mode! Once inside the box, it’s easy to find a lot of data left by developers (source code, SSH keys, encryption keys, …). The second part of the talk was dedicated to the full-disclosure process.

After a short coffee break, Fabio Nigi presented “IPFS As a Distributed Alternative to Logs Collection”. The idea behind this talk was to try to solve a classic headache for people involved in log management tasks. This can quickly become a nightmare due to ever-changing topologies, the number of assets, and the amount of logs to collect and process. Storage is a pain to manage.

So, Fabio had the idea to use IPFS. IPFS stands for “InterPlanetary File System” and is a P2P distributed file system that helps store files in multiple locations. He introduced the tool and how it works (it looks interesting, I wasn’t aware of it). Then he demonstrated how to interconnect it with a log collection solution using different tools like IPFS GW, React, Brig or Minerva. It’s an interesting approach; however, the project is still in the development phase (as stated on the website)…

There were many interesting talks today and, with a dual-track conference, it’s not always easy to choose the one that will be the most entertaining or interesting. My next choice was “Extracting a 19-Year-Old Code Execution from WinRAR” by Nadav Grossman.

WinRAR is a well-known tool to handle many archive formats. As the tool is very popular, it’s a great target for attackers because it is installed on many computers! After a very long part about fuzzing (the techniques, tools like WinAFL), Nadav explained how the vulnerability was found. It was located in a DLL used to process ACE files. Many details were disclosed and, if you are interested, there is a blog post available here. Note that since the vulnerability was found and disclosed, support for ACE archives has been removed from the latest versions of WinRAR!

After the lunch break, I attended “Setting up an Opensource Threat Detection Program” by Lance Buttars (Ingo Money). This was an interesting talk about tools that you can deploy to protect your web services but also to counterattack the bad guys. Many tools are part of Lance’s arsenal (ModSecurity, reverse proxies, Fail2ban, etc.).

Lance also explained what honeypots are and the different types of data that you collect: domains, files, ports, SQL tables or DB. For each type, he gave some examples. Note that “active defense” is not allowed in many countries!

And the day continued with “Once Upon a Time in the West – A story on DNS Attacks” by Valentina Palacín and Ruth Esmeralda Barbacil. They reviewed well-known DNS attack techniques (DNS tunneling, hijacking, and poisoning), then presented a timeline of major threats that affected DNS services or abused the protocol, like:

  • DNSChanger / Operation Ghost Click
  • Syrian Electronic Army
  • Craigslist hijacked
  • OilRig (suspected Iranian)
  • Project Sauron (suspected USA)
  • DarkHydrus
  • Bernhard PoS
  • FIN7
  • DNSpionage
  • SeaTurtle

For each of them, they applied the Mitre ATT&CK framework. Nothing really new but a good recap which concludes that DNS is a key protocol and that it must be carefully controlled.

The next two talks focused more on penetration testing. The first one was “What’s Wrong with WebSocket APIs? Unveiling Vulnerabilities in WebSocket APIs” by Mikhail Egorov. He has already published a lot of research around WebSockets and started with a review of the protocol, then described different types of attacks. The second one was “Abusing Google Play Billing for Fun and Unlimited Credits!” by Guillaume Lopes. Guillaume explained how Google provides a payment framework for developers. Like the previous talk, it started with a review of the framework, then how it can be abused. He tested 50 apps; 29 were vulnerable to this attack. All developers were contacted and only 1 replied!

To close the day, Robert Sell presented “Techniques and Tools for Becoming an Intelligence Operator“. Open-source intelligence can be used in many fields: forensics, research, etc. Robert defines it as “Information that is hard to find but freely available”.

He explained how to prepare yourself to perform investigations: which tools to use, network connections, the creation of profiles on social networks and much more. The list of tools and URLs provided by Robert was amazing! Don’t forget that good OpSec is important: you may be excited to dig up information about your target, but (s)he probably won’t be as excited as you! Also, keep in mind that all the techniques used can also be used against you!

That’s all Folks! DeepSec is over! Thanks again to the organizers for a great event!

[The post DeepSec 2019 Wrap-Up Day #2 has been first published on /dev/random]

November 28, 2019

Hello from Vienna, where I’m attending the DeepSec conference. Initially, I was scheduled to give my OSSEC training but it was canceled due to a lack of students. Anyway, the organizers invited me to join (huge thanks to them!). So, here is a wrap-up of the first day!

After the short opening ceremony by René Pfeiffer, the DeepSec organizer, the day started with a keynote. The slot was assigned to Raphaël Vinot and Quinn Norton: “Computer security is simple, the world is not”. 

I was curious based on the title, and the idea turned out to be very interesting. Every day, as security practitioners, we deal with computers, but we also have to deal with the people who use the computers we are protecting! We have to take them into account. Based on different stories, Raphaël and Quinn gave their view of how to communicate with our users. First principle: listen to them! Let them explain in their own words and shut up. Even if it’s technically incorrect, they could have interesting information for us. The second principle: if you don’t listen to your users, you don’t know how to do your job! The next principle is to learn how they work, because you must adapt to them. The closer you are to your users, the better you can understand the risks they are facing. Also, don’t just say “Do this, then this, …” but explain what is behind the action and why they have to do it. Don’t go too technical if people don’t ask for details. Don’t scare your users! The classic example is the motivated user who has to finish his/her presentation for tomorrow morning. She/he must transfer files home, but how? If you block all the classic file transfer services, be sure that the worst one will be used. Instead, promote a tool that you trust and that is reliable! Very interesting keynote!

The first regular talk was presented by Abraham Aranguren: “Chinese Police & Cloudpets”. If you don’t know Cloudpets, have a look at this video. What could go wrong? A lot! The security of this connected toy was so bad that major resellers like Walmart or Amazon decided to stop selling it. It’s a connected toy linked to mobile apps to exchange messages between parents and kids. Nice, isn’t it? But they made a lot of mistakes regarding the security of the products. Abraham reviewed them:

  • Bad Bluetooth implementation: no control over pairing or pushing/fetching data from the toy
  • Unprotected MongoDB indexed by Shodan, attacked multiple times by ransomware
  • Unencrypted firmware
  • Can be used as a spy device
  • The domain used to interact with the toy is now for sale (mycloudpets.com)
  • No HTTPS support
  • All recordings available in an S3 bucket (800K customers!)

The next part of the talk was about mobile apps used by Chinese police to track people, especially the Muslim population in Xinjiang: IJOP & BXAQ. IJOP means “Integrated Joint Operations Platform” and is an application used to collect private information about people and to perform big data analysis. The idea is to collect unusual behaviors and report them to central services for further investigations. The app was analyzed and reverse-engineered, and what they found is scary. The collected data includes:

  • Recording of height & blood type
  • Anomaly detection
  • Political data
  • Religious data
  • Education level
  • Abnormal electricity use
  • Problematic tools -> to make explosives?
  • Whether the user stopped using the phone

The BXAQ app is a trojan that is installed even on tourists’ phones to collect “interesting” data about them:

  • It scans the device on which it is installed
  • Collected info: calendar, contacts, calls, SMS, IMEI, IMSI, hardware details
  • It scans files on the SD card (hash comparison)
  • A zip file is created (without any password) and uploaded to a police server

After a welcome coffee break, I came back to the same track to attend “Mastering AWS pen testing and methodology” by Ankit Giri. The goal of this talk was to get a better idea of how to pentest an AWS environment.

The talk was full of tips & tricks and references to tools. The idea is to start by enumerating the AWS accounts used by the platform as well as the services. To achieve this, you can use aws-inventory. Then check for CloudWatch, CloudTrail or billing alerts, check the configuration of the services being used, and make notes of services interacting with each other. S3 buckets are, of course, juicy targets; another tool presented was S3Scanner. Then keep an eye on IAM: how accounts are managed, what the access rights, keys and roles are. In this case, PMapper can be useful. EC2 instances must be reviewed to find open ports, ingress/egress traffic, and their security groups! If you are interested in testing AWS environments, have a look at this arsenal. To complete the presentation, a demo of prowler was performed by Ankit.
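
As a rough illustration of that kind of demo (not Ankit's exact commands, and the repository location and options may have changed since), the classic bash version of prowler simply runs against whatever AWS CLI credentials are currently configured:

# Hypothetical quick start; assumes the AWS CLI is installed and configured
git clone https://github.com/toniblyx/prowler
cd prowler
./prowler        # runs the default set of checks against the current credentials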

Then Yuri Chemerkin presented “Still Secure. We Empower What We Harden Because We Can Conceal“. To be honest with you, I did not understand the goal of the presentation: the speaker was not very engaging and a lot of the content was in Russian… Apparently, based on discussions with other people who attended the talk, it was related to the leaking of information by many tools and how to use that in security testing…

The next one was much more interesting: “Android Malware Adventures: Analyzing Samples and Breaking into C&C” presented by Kürşat Oğuzhan Akıncı & Mert Can Coşkuner. The talk covered the hunt for malware in the mobile apps ecosystem, mainly Android (>70% of new malware targets Android phones). Even if Google implements checks on all apps submitted to the Play Store, the solution is not bullet-proof and, like on Windows systems, malware developers have techniques to bypass sandbox detection… They explained how they spotted a campaign targeting Turkey. They analyzed the malware and successfully exploited the C2 server, which was vulnerable to:

  • Directory listing
  • Lack of encryption keys
  • Password found in source code
  • A weak upload feature, through which they uploaded a webshell
  • SQLi
  • Stored XSS

In the end, they uncovered the campaign, hacked back (with proper authorization!), restored stolen data and prevented further incidents. Eight threat actors were arrested.

My next choice was again a presentation about the analysis of a known campaign: “The Turtle Gone Ninja – Investigation of an Unusual Crypto-Mining Campaign”, presented by Ophir Harpaz and Daniel Goldberg.


The campaign was “NanshOu” and it’s not a classic one. Ophir & Daniel gave many technical details about the malware and how it infected thousands of MSSQL servers to deploy a crypto-miner. Why servers? Because they require less interaction, have better uptime, have a lot of resources and are maintained by poor IT teams ;-). The infection path was: scan for MSSQL servers, brute-force them, enable code execution (via xp_cmdshell()), drop files and execute them.


Then, Tim Berghoff and Hauke Gierow presented “The Daily Malware Grind – Looking Beyond the Cybers”. They reviewed the threat landscape: ransomware, crypto-miners, RATs, etc… Interesting fact: old malware remains active.


Lior Yaari talked about a hot topic these days: “The Future Is Here – Modern Attack Surface On Automotive“. Did you know that IDS are coming to connected cars? It’s a fact: cars are ultra-connected today and it will only increase in the future. If, in 2005, cars had an AUX connector and USB ports, today they have GPS, 4G, Bluetooth, WiFi and a lot of telemetry data sent to the manufacturer! By 2025, cars will be part of clouds, be connected to PLCs, talk to electric chargers, gas stations, etc. Instead of using OBD2 connections, we will use regular apps to interact with them. Lior gave multiple examples of potential issues that people will face with their connected cars. A great topic!


To close the first day, I attended “Practical Security Awareness – Lessons Learnt and Best Practices” by Stefan Schumacher. He explained in detail why awareness training is not always successful.

It’s over for today! Stay tuned for the next wrap-up tomorrow! I’m expecting a lot from some presentations!

[The post DeepSec 2019 Wrap-Up Day #1 has been first published on /dev/random]

November 27, 2019

There’s an annoying little bug in VirtualBox that can cause your network config in the VM to become invalid after a reboot of your host Mac.
I’ve got a Mac Mini at home that acts as a server & desktop. On it there are several VirtualBox VMs, among which a Pi-hole to block unwanted requests via DNS.

November 25, 2019

I’ve really come to appreciate the elegance in the io abstractions in Go. The seemingly simple patterns of io.Reader and io.Writer open up a world of easily composable data pipelines.

Need to add compression? Just wrap the Writer with a gzip.Writer, etc.
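
A minimal sketch of that composability; the buffer and the example payload are arbitrary:

package main

import (
    "bytes"
    "compress/gzip"
    "fmt"
    "io"
    "strings"
)

func main() {
    var compressed bytes.Buffer
    zw := gzip.NewWriter(&compressed) // wrap any io.Writer to get compression

    // io.Copy only sees an io.Writer; it has no idea gzip is involved.
    if _, err := io.Copy(zw, strings.NewReader("hello, gzip")); err != nil {
        panic(err)
    }
    if err := zw.Close(); err != nil { // Close flushes the remaining data and the gzip footer
        panic(err)
    }
    fmt.Println(compressed.Len(), "compressed bytes")
}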

But there are some subtleties to be aware of that might bite you.

Let’s have a look at the description of io.Reader.Read():

Read(p []byte) (n int, err error)

Read reads up to len(p) bytes into p. It returns the number of bytes read (0 <= n <= len(p)) and any error encountered. Even if Read returns n < len(p), it may use all of p as scratch space during the call. If some data is available but not len(p) bytes, Read conventionally returns what is available instead of waiting for more.

This is fairly straightforward. You call Read() with a byte slice, which it may fill up. The key point here being may. Most IO sources (e.g. a file) will generally read the full buffer, until you reach the end of the file.

But not all of them do. For instance, a gzip.Reader tends to do incomplete reads, requiring multiple Read() calls.

Recommendation: If you need to read a buffer in full, use io.ReadFull() instead of Read().
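
A minimal sketch of that recommendation (the function name and the 512-byte size are just for illustration, and the io package is assumed to be imported):

// readHeader insists on exactly 512 bytes from r, looping internally as needed.
func readHeader(r io.Reader) ([]byte, error) {
    buf := make([]byte, 512)
    if _, err := io.ReadFull(r, buf); err != nil {
        // io.ErrUnexpectedEOF: the stream ended before 512 bytes arrived.
        // io.EOF: nothing was read at all.
        return nil, err
    }
    return buf, nil
}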

When Read encounters an error or end-of-file condition after successfully reading n > 0 bytes, it returns the number of bytes read. It may return the (non-nil) error from the same call or return the error (and n == 0) from a subsequent call. An instance of this general case is that a Reader returning a non-zero number of bytes at the end of the input stream may return either err == EOF or err == nil. The next Read should return 0, EOF.

Callers should always process the n > 0 bytes returned before considering the error err. Doing so correctly handles I/O errors that happen after reading some bytes and also both of the allowed EOF behaviors.

This means it’s perfectly legal to return both n (and thus read a number of bytes) and an error at the same time.

It also means that the standard pattern of immediately checking for an error is wrong:

// Don't do this
n, err := in.Read(buf)
if err != nil {
    // Handle err
}
// Do something with n and buf

Always process n / buf first, then check for the presence of an error.
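
A minimal sketch of that order of operations; the function and the handle callback are made up for illustration:

// drain reads r until EOF, handing every chunk to handle before looking at err.
func drain(r io.Reader, handle func([]byte)) error {
    buf := make([]byte, 4096)
    for {
        n, err := r.Read(buf)
        if n > 0 {
            handle(buf[:n]) // process the data first
        }
        if err == io.EOF {
            return nil // clean end of stream
        }
        if err != nil {
            return err // real error, possibly after a partial read
        }
    }
}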

Implementations of Read are discouraged from returning a zero byte count with a nil error, except when len(p) == 0. Callers should treat a return of 0 and nil as indicating that nothing happened; in particular it does not indicate EOF.

The important take-away here: always check for err == io.EOF, some implementations might give you an empty read even if there is still data to come.


Running into either of these corner cases is generally rare, since most IO sources are quite well-behaved. But being aware of the corner cases will save you a massive amount of debugging once you do run into them.



I published the following diary on isc.sans.edu: “My Little DoH Setup“:

“DoH”, this 3-letter acronym is a buzzword on the Internet in 2019! It has been implemented in Firefox, and Microsoft announced that Windows will support it soon. There are pros & cons to encrypting DNS requests in HTTPS but it’s not the goal of this diary to restart the debate. In a previous diary, it was explained how to prevent DoH from being used by Firefox but, this time, I’ll play on the other side and explain to you how to implement it in a way that keeps control of your DNS traffic (read: how to keep an eye on DNS requests performed by users and systems)… [Read more]

[The post [SANS ISC] My Little DoH Setup has been first published on /dev/random]

I’ve been using the Caddy webserver for all my projects lately. Here’s my current default config for a Laravel project.

November 22, 2019

I published the following diary on isc.sans.edu: “Abusing Web Filters Misconfiguration for Reconnaissance“:

Yesterday, an interesting incident was detected while working at a customer SOC. They use a “next-generation” firewall that implements a web filter based on categories. This is common in many organizations today: users’ web traffic is allowed/denied based on a URL categorization database (like “adult content”, “hacking”, “gambling”, …). How was it detected? [Read more]

[The post [SANS ISC] Abusing Web Filters Misconfiguration for Reconnaissance has been first published on /dev/random]

November 21, 2019

If you have a Laravel project, you might be surprised to find your routes are probably available on a number of different URLs.
I ran into this error when I tried any git operation (like git clone and git pull) on a Mac.

November 20, 2019

It’s beginning to look a lot like Christmas, ladies & gentlemen, and I’m planning on putting Autoptimize 2.6 nicely wrapped up under your Christmas tree.

The new version comes with the following changes:

  • New: Autoptimize can be configured at network level or at individual site-level when on multisite.
  • Extra: new option to specify what resources need to be preloaded.
  • Extra: add `display=swap` to Autoptimized (CSS-based) Google Fonts.
  • Images: support for lazyloading of background-images when set in the inline style attribute of a div.
  • Images: updated to lazysizes 5.1.2 (5.2 is in beta now, might be integrated in AO26 if released in time).
  • CSS/ JS: no longer add type attributes to Autoptimized resources.
  • Improvement: cache clearing now also integrates with Kinsta, WP-Optimize & Nginx helper.
  • Added “Critical CSS” tab to highlight the criticalcss.com integration, which will be fully included in Autoptimize 2.7.
  • Large batch of misc. smaller improvements & fixes, more info in the GitHub commit log: https://github.com/futtta/autoptimize/commits/beta

So in order to make this the smoothest release possible I would like the beta to be downloaded and tested by as many people as possible, including you!

Any issue/question/bug can be reported here as a reply or, if it’s a bug and you’re more technically inclined, over at GitHub as an issue. Looking forward to your feedback!

We had an interesting security report for our Oh Dear monitoring service. An attacker could load a specific URL and trigger a 404 page in which they controlled the output.

November 19, 2019

My announcement the other day has resulted in a small amount of feedback already (through various channels), and a few extra repositories to be added. There was, however, enough feedback (and the manner of it unstructured enough) that I think it's time for a bit of a follow-up:

  • If you have any ideas about future directions this should take, then thank you! Please file them as issues against the extrepo-data repository, using the wishlist tag (if appropriate).
  • If you have a repository that you'd like to see added, awesome! Ideally you would just file a merge request; Vincent Bernat already did so, and I merged it this morning. If you can't figure out how to do so, then please file a bug with the information needed for the repository (deb URI, gpg key signing the repository, description of what type of software the repository contains, supported suites and components), and preferably also pointing to the gap in the documentation that makes it hard for you to understand, so I can improve ;-)
  • One questioner asked why we don't put third-party repository metadata into .deb packages, and ship them that way. The reason for this is that the Debian archive doesn't change after release, which is the point where most external repositories would be updated for the new release. As such, while it might be useful to have something like that in certain cases (e.g., the (now-defunct) pkg-mozilla-archive-keyring package follows this scheme), it can't work in all cases. In contrast, extrepo is designed to be updateable after the release of a particular Debian suite.
I’ve been moving some projects around lately and found myself in need of a weird thing I hadn’t considered before: specifying a specific SSH private key for running things like git clone or git pull.
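
If you're after the same thing, two common options are sketched below; the key path and repository URL are placeholders:

# One-off, via an environment variable (git 2.3 or newer)
GIT_SSH_COMMAND='ssh -i ~/.ssh/id_project -o IdentitiesOnly=yes' git clone git@example.com:org/repo.git

# Persistent, stored in the repository's config (git 2.10 or newer)
git config core.sshCommand 'ssh -i ~/.ssh/id_project -o IdentitiesOnly=yes'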

November 18, 2019

It took a little longer than expected, but we are happy to announce the list of accepted stands for FOSDEM 2020. New this year is that some stands will switch between Saturday and Sunday, so we can give more projects the opportunity to present themselves to the community. There will be stands in the K, H and AW buildings, but who will be where will be announced closer to the event. We hope to see you all in February! Stands present for the entire conference include: CentOS, Debian, Gentoo Linux, FreeBSD, Fedora Project, openSUSE, openMandriva, illumos, Automotive Grade Linux, Coreboot + Flashrom + …

November 17, 2019

All Drupal core initiatives with leads attending DrupalCon Amsterdam took part in an experimental PechaKucha-style keynote format (up to 15 slides each, 20 seconds per slide):

Drupal 8’s continuous innovation cycle resulted in amazing improvements that made today’s Drupal stand out far above Drupal 8.0.0 as originally released. Drupal core initiatives played a huge role in making that transformation happen. In this keynote, various initiative leads will take turns to highlight new capabilities, challenges they faced on the way and other stories from core development.

I represented the API-First Initiative and chose to keep it as short and powerful as possible by only using four of my five allowed minutes. I focused on the human side, with cross-company collaboration across timezones to get JSON:API into Drupal 8.7 core, how we invited community feedback to make it even better in the upcoming Drupal 8.8 release, and how the ecosystem around JSON:API is growing quickly!

It was a very interesting experience to have ten people together on stage, with nearly zero room for improvisation. I think it worked pretty well — it definitely forces a much more focused delivery! Huge thanks to Gábor Hojtsy, who was not on stage but did all the behind-the-scenes coordination.

Extra information: 
DrupalCon Amsterdam
Amsterdam, Netherlands

Debian ships with a lot of packages. This allows our users to easily install software without too much effort -- just run apt-get install foo, and foo gets installed.

However, Debian does not ship with everything, and for that reason there sometimes are things that are not installable with just the Debian repositories. Examples include:

  • Software that is not (yet) packaged for Debian
  • A repository for software that is in Debian, but for the bleeding-edge version of that software (e.g., maintained by upstream, or maintained by the Debian packager for that software)
  • Software that is not DFSG-free, and cannot be included into Debian's non-free repository due to licensing issues, but that can freely be installed by Debian users.

In order to enable and use such repositories on a Debian system, a user currently has to perform some actions that may be insecure:

  • Some repositories provide a link to a repository and a key, and expect the user to perform all actions to enable that repository manually. This works, but mistakes are easy to make (especially for beginners), and therefore it is not ideal.
  • Some repositories provide a script to enable the repository, which must be run as root. Such scripts are not signed when run. In other words, we expect users to run untrusted code downloaded from the Internet, when everywhere else we tell people that doing so is a bad idea.
  • Some repositories provide a single .deb file that can be installed, and which enables the necessary repositories in the apt configuration. Since, in contrast to RPM files, Debian packages are not signed, this means that the configuration is not enabled securely.

While there is a tool to enable package signatures in Debian packages, the dpkg tool does not enforce the existence of such signatures, and therefore it is possible for an attacker to replace the (signed) .deb file with an unsigned variant, bypassing the whole signature.

In an effort to remedy this whole situation, I looked at creating extrepo, a package that would download repository metadata from a special-purpose repository, verify the signatures placed on that metadata, and if everything matches, enable the repository by creating the necessary apt configuration files.

This should allow users to enable external repository "foo" by running extrepo enable foo, rather than downloading a script from foo's website and executing it as root -- or other similarly insecure options.
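
In other words, once the package is available, the expected workflow should look something like this, with "foo" standing in for a real repository name:

sudo apt install extrepo       # once the package has cleared NEW
sudo extrepo enable foo        # verifies the metadata and writes the apt configuration
sudo apt update                # the external repository is now usable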

The extrepo package has been uploaded to Debian and, once NEW processing has finished, will be available in Debian unstable.

I might upload it to backports too if no issues are found, but that won't be an immediate thing.

November 15, 2019

I want to show you how I use a tool called goaccess to do some quick analysis of access logs on webservers.
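
As a teaser, the simplest invocations look like this; the log path and format are assumptions, adjust them to your webserver:

# Interactive report in the terminal
goaccess /var/log/nginx/access.log --log-format=COMBINED

# Or generate a self-contained HTML report
goaccess /var/log/nginx/access.log --log-format=COMBINED -o report.html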

November 12, 2019

I just finished the last bits of migration for this site. It involved moving my podcasts from the podcast.

November 09, 2019

Here’s a quick tip on how to combine multiple separate videos into a single one, using ffmpeg.
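
One common approach is the concat demuxer, which joins files without re-encoding as long as they share codecs and parameters; the file names below are placeholders:

# List the parts in playback order
printf "file '%s'\n" part1.mp4 part2.mp4 part3.mp4 > list.txt

# Concatenate without re-encoding
ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4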

November 08, 2019

I published the following diary on isc.sans.edu: “Microsoft Apps Diverted from Their Main Use“:

This week, the CERT.eu organized its yearly conference in Brussels. Among many interesting presentations, one covered what they called the “cat’n’mouse” game that Blue and Red teams are playing continuously. When the Blue team detects an attack technique, they write a rule or implement a new control to detect or block it. Then, the Red team has to find an alternative attack path, and so on… A classic example is the detection of malicious activity via parent/child process relations. It’s quite common to implement the following simple rule (in Sigma format)… [Read more]

[The post [SANS ISC] Microsoft Apps Diverted from Their Main Use has been first published on /dev/random]

If you run a Laravel application purely as a headless API, you can benefit from disabling the HTTP sessions.